Why Smart Companies Make Dumb Mistakes


“Intelligence is knowing that Frankenstein is not the monster. Wisdom is knowing that Frankenstein is the monster.”

“Eat rocks,” said Google’s AI, apparently confusing people with ostriches. Meanwhile, Apple decided the best way to sell iPads was to literally crush musical instruments. Welcome to the wacky world of AI marketing, where billion-dollar companies with genius-level employees somehow keep making decisions that would embarrass a sleep-deprived intern.

These facepalm-inducing moments aren’t just amusing anecdotes; they’re symptoms of a larger problem in the tech industry: a critical lack of wisdom. In this article, we’ll dissect why having the smartest people in the room doesn’t guarantee smart decisions, explore the crucial difference between intelligence and wisdom, and demonstrate how we can harness AI itself to help us make wiser choices. Spoiler alert: it involves dusting off some advice from good old Ben Franklin.

First off, let’s be clear about the difference between intelligence and wisdom.

Intelligence is easier to define. Wisdom is a bit squishier.

Intelligence is the ability to acquire and apply knowledge and skills. It’s about knowing how to do things.

Wisdom, on the other hand, is about understanding why and when to do things. It’s about making good decisions and sound judgments based on knowledge, experience, and understanding.

An old joke goes, “Intelligence is knowing that a tomato is a fruit; wisdom is not putting it in a fruit salad.”

(Not to detract from such thigh-slapping humor, but knowing that a tomato is a fruit is actually a matter of knowledge, not intelligence.)

Intelligence facilitates the acquisition and manipulation of information; wisdom guides the application of knowledge in ethically and socially appropriate ways, often factoring in long-term consequences and the broader context of decisions.

Let’s play the bloopers reel from the last six months or so of big tech marketing:

First, Google. Let’s give Google a pass on the Bard faux pas and just look at the last few months.

  • December, 2023: Google Fakes The Duck Demo

Google’s best Gemini demo was a video called “Hands-on with Gemini: Interacting with multimodal AI”. TechCrunch published an article showing that the video “exaggerated” Gemini’s capabilities.

  • February, 2024: Google And Nazi Diversity

Google’s AI-generated images of the US Founding Fathers and Nazi-era German soldiers were racially diverse rather than historically accurate, as were (for example) its pictures of US senators from the 1800s. Google apologized and temporarily stopped generating images of people.

  • May, 2024: Google’s AI-Assisted Search Results Say “Eat Rocks”

Google’s experimental “AI Overviews” tool has:

  • Suggested sticking cheese to pizza with non-toxic glue
  • Said geologists recommend eating at least one small rock per day
  • Recommended staring at the sun

There’s a lot more in the Twitter/X thread, if you want to see it.

For a completely different kind of error in judgment, let’s take a look at OpenAI and Apple.

The biggest criticism of generative AI to date is that it is harming artists, writers, photographers, musicians, and other creative professionals by being trained on their work without consent or compensation.

So how did OpenAI and Apple respond to artists’ concerns?

  • May, 2024: OpenAI Releases “Sky” Voice Assistant

OpenAI’s GPT-4o (‘o’ for Omni) demos were great, but they included a new “Sky” voice assistant that reminded many people of Scarlett Johansson’s character in the movie “Her”. This came after Johansson had declined OpenAI’s request to use her voice. OpenAI said the voice belonged to another actress, but the intent seemed clear enough: Sam Altman tweeted “her” just before the release.

  • May, 2024: Apple Releases iPad Pro ‘Crush’ Ad

Apple’s new iPad Pro ad, “Crush”, was a disaster. It showed artists’ tools and musical instruments, such as a piano, a guitar, a typewriter, cameras, and paint cans, being slowly and inexorably crushed in an industrial press.

As with OpenAI, Apple appears to have been oblivious to the already negative perception of AI companies and their treatment of human artists. The ad was pulled after only a few hours.

According to the US Supreme Court, corporations are “people” (of course, the justices seem quite confused by various other words too, such as “ethics”), but despite being “people”, corporations don’t actually think for themselves. People made those foolish decisions, not corporations.

So if smart people are behind these dumb decisions, why?

“The Intelligence Trap: Why Smart People Make Dumb Mistakes” by David Robson

How is it that some of the smartest people in the world can make such dumb mistakes? David Robson’s book “The Intelligence Trap” has some insights.

We’re all familiar with the IQ test. Canadian psychology professor Keith Stanovich decided to study rationality instead. In fact, he designed a test to measure a “rationality quotient”. Interestingly, the correlation between IQ and rationality turns out to be very weak. Being smart isn’t a guarantee of being rational.

More surprisingly, people with higher IQs can be less rational than others. For example, the gambler’s fallacy is more common among more academically successful people (who generally score higher on IQ tests). And rationality scores were about three times more important than IQ in predicting whether people would experience stressful life events, from minor things like getting a sunburn or missing a flight to major things like catching an STD or being put in jail.

Smarter people tend to consume more alcohol and may be more likely to smoke or take illegal drugs. They are also just as likely to face financial problems: around 14% of people with an IQ of 140 had maxed out a credit card, compared to 8.3% of people with an IQ of 100.

In a word: sheesh.

What’s the connection between rationality and wisdom?

The modern scientific study of wisdom really took off after the 2008 financial crash. The Chicago Center for Practical Wisdom, headed by psychologist Howard Nusbaum, is one of the new institutions formed to study it.

Igor Grossman at the University of Waterloo has been studying “evidence-based wisdom,” and came up with a test to measure wisdom. It turns out it’s a much better predictor of success and happiness than IQ.

Fortunately, even if you’re not naturally the wisest person in the world, there are techniques you can use to simulate it, some of which Robson talks about in his book.

Everyone (including, especially, smart people) is susceptible to confirmation bias, which psychologists generalize to “myside bias”. Even lawyers, who are trained to consider opposing viewpoints, fall into the same trap.

There are lots of techniques for avoiding these kinds of biases, including self-distancing, seeking external advice, and journaling.

But for this post, I’ll focus on the “moral algebra” of Benjamin Franklin.

Benjamin Franklin developed a “moral algebra” to help him make wiser decisions. It involved suspending judgment, considering opposing viewpoints, and not just writing down pros and cons but assigning weights to them and waiting a few days before deciding. These days we’d be more likely to call it an algorithm than algebra.

That’s where AI comes in. We’ll use AI to help us apply Franklin’s algorithm.

Wisdom in the Age of AI

But wait just a second… if Wisdom is so much more important than Intelligence, shouldn’t we lock the door to our wisdom clubhouse and tack up a sign saying “No AI Allowed”?

Nope. AI is a general purpose technology, which means if you can figure out how to use it appropriately, you can use it for anything you want to, including making better decisions. The key there is the word “appropriately”. As with any tool, knowing how and when to use it is essential. You use a hammer to pound nails into a board, not an iPhone.

The best person I know of for advice on how to use AI effectively is Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania who studies entrepreneurship, innovation, and the impact of artificial intelligence on work and education. He writes a really great blog on using AI, One Useful Thing, and a fascinating new book, Co-Intelligence: Living and Working with AI.

Mollick’s book is full of surprises. We’ve all heard of the 10x developer, a mythical software engineer who is ten times better than average. According to the book, while most of the studies are fairly old by now, the difference is even larger than that: the “gap between developers in the top 75% and those in the bottom 25% was as much as 27 times in some dimensions of programming quality”.

More relevant for us at the moment is the discussion of differences in decision making. Mollick says:

In my study of the video game industry, I found that the quality of the middle manager overseeing a game explained more than a fifth of the game’s eventual revenues. That was a bigger effect than the entire senior management team, and more than the designers who came up with the creative ideas for the game itself.

Clearly, anything that can help improve the quality of decisions is worth a lot.

Unfortunately, many people don’t realize that using AI effectively requires a bit of learning and experimentation. Mollick says:

most participants didn’t even bother editing the output once it was created for them. It is a problem I see repeatedly when people first use AI: they just paste in the exact question they are asked and let the AI answer it.

Let’s combine Benjamin Franklin’s “moral algebra” with AI to build a framework for making better decisions.

First, here’s a brief description of Franklin’s system:

Benjamin Franklin’s “moral algebra” is a systematic approach to making important decisions by weighing the pros and cons. Here are the steps:

  1. Draw a vertical line down the middle of a sheet of paper to create two columns. Label one column “Pros” and the other “Cons”.
  2. Over the next 3-4 days, list all the reasons or motives favoring the decision under the “Pros” column, and all the reasons against it under the “Cons” column. Write brief notes capturing each point.
  3. Once all the pros and cons are listed, estimate the relative importance or weight of each reason. If two reasons on opposite sides seem equally important, cross them both out.
  4. If one pro reason outweighs two con reasons, cross out all three. Similarly, if two con reasons outweigh three pro reasons, cross out those five reasons.
  5. Continue this process of crossing out reasons of equal weight from both sides until you are left with a set of unequal pros and cons.
  6. The side with the remaining, uncrossed reasons represents the preferable choice or “where the balance lies”.

  7. Take a day or two more. If no new important reasons arise, make your decision based on the remaining pros and cons.

Franklin acknowledged that weighing reasons is not an exact science like algebra, but he found this systematic approach, which he called “moral or prudential algebra”, helped him judge better and avoid rash decisions. The key aspects are listing all reasons, estimating their weights, canceling out reasons of equal importance, and giving it adequate time for reflection before deciding.
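The cancellation procedure is mechanical enough to sketch in code. Here’s a minimal Python version of a simplified moral algebra: reasons of equal weight on opposite sides cross each other out, and whatever weight is left over determines the verdict. (The job-offer pros and cons are hypothetical examples, not from Franklin.)

```python
def moral_algebra(pros, cons):
    """Apply a simplified version of Franklin's cancellation procedure.

    pros and cons are lists of (reason, weight) tuples.
    A pro and a con of equal weight cancel each other out;
    whichever side has weight left over "wins".
    """
    pros = sorted(pros, key=lambda rw: rw[1], reverse=True)
    cons = sorted(cons, key=lambda rw: rw[1], reverse=True)

    remaining_pros, remaining_cons = [], list(cons)
    for reason, weight in pros:
        # Look for a con of equal weight to cancel against.
        match = next((c for c in remaining_cons if c[1] == weight), None)
        if match:
            remaining_cons.remove(match)  # cross out both reasons
        else:
            remaining_pros.append((reason, weight))

    balance = sum(w for _, w in remaining_pros) - sum(w for _, w in remaining_cons)
    verdict = "pro" if balance > 0 else "con" if balance < 0 else "undecided"
    return verdict, remaining_pros, remaining_cons

# Hypothetical example: should I take the new job?
pros = [("higher salary", 4), ("interesting work", 5), ("shorter commute", 2)]
cons = [("less job security", 4), ("longer hours", 3)]
verdict, left_pros, left_cons = moral_algebra(pros, cons)
print(verdict)  # "pro": the uncancelled reasons favor taking the job
```

Franklin handled unequal groupings too (one pro against two cons, and so on), which this sketch skips; the net-weight comparison at the end covers the same ground less elegantly.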

Let’s simplify this a bit and convert it into a prompt, similar in style to one Ethan Mollick mentions in his book, that we can use with a chatbot like ChatGPT or Claude (see the Appendix at the end of this post for the exact prompt I’m using).

Here’s the result of applying this prompt to “Should my next car be an electric vehicle or a hybrid car?” I tried this same prompt in ChatGPT-4o, Claude 3.5, and Gemini 1.5, and the results are very similar. The results below are verbatim from ChatGPT-4o. (Also, interestingly, they all came to the same conclusion.)

(I’m just including the final summing up of weights and the recommendation. I suggest you try the prompt yourself on a problem of your own to get a better idea of the output.)

Total weights:

Electric Vehicle:
Pros total: 30
Cons total: 26

Hybrid Car:
Pros total: 26
Cons total: 23

Based on the weighted analysis, I recommend choosing an electric vehicle (EV) over a hybrid car. Here's my reasoning:

1. Higher overall score: The EV option has a higher total weight for pros (30) compared to its cons (26), while the hybrid car has a smaller margin between its pros (26) and cons (23). This suggests that the benefits of an EV outweigh its drawbacks more significantly than for a hybrid.
2. Environmental impact: The EV scores highest on "zero direct emissions" and "reduced dependence on fossil fuels" (both weighted 5), which are crucial factors for environmental sustainability. While hybrids are more efficient than traditional gas cars, they still produce emissions.
3. Long-term savings: Although EVs have a higher upfront cost (weighted 5 in cons), they offer lower fuel and maintenance costs (both weighted 4 in pros). Over time, these savings could offset the initial investment, especially with rising gas prices.
4. Technological advancement: EVs represent cutting-edge technology (weighted 3 in pros) and are likely to see continued improvements and support. Hybrids, while more established now, may become outdated as EV technology advances (weighted 3 in cons for hybrids).
5. Home charging convenience: The potential for home charging (weighted 4 for EVs) offers a significant advantage in terms of convenience and potentially lower fueling costs.

While hybrid cars do offer some advantages, such as no range anxiety and more established technology, the benefits of EVs appear to outweigh these factors, especially when considering long-term environmental impact and technological trends.

Are we done? Nope. Remember what Ethan Mollick said: “most participants didn’t even bother editing the output once it was created for them”. Let’s not do that. Are those weights what you would choose? Do the reasons make sense? When you look at the different pros and cons the AI came up with (there are a lot of them, so I omitted them from this post, but you can easily generate them with the prompt in the Appendix), can you think of others? Do you agree with them? If you were Franklin, the next thing you’d do is wait a few days and see what else comes to mind.

Let’s Wrap Up

Our tech overlords, bless their binary hearts, are out there flexing their big brains and somehow still managing to step on every digital rake in sight.

We’ve got search engines suggesting we treat our digestive systems like rock tumblers, voice assistants doing their best “legally distinct from Scarlett Johansson” impressions, and ad campaigns that seem designed by someone who just discovered nihilism and a hydraulic press on the same day. It’s not that these companies aren’t smart – they’re like the valedictorians of the tech world. But they keep doing dumb stuff.

The missing ingredient? Wisdom. Fortunately, people have learned a bit about wisdom lately, and we have a few tricks up our sleeve to help us with it. We’ve created AIs smart enough to beat us at Go, but apparently not wise enough to know that “Go eat rocks” isn’t sound nutritional advice. We don’t just need Artificial Intelligence; we need Artificial Wisdom too. The good news? Wisdom is just another problem to solve, and we can use AI as a tool to help solve it.

Let’s see what we can do. What do you think?


Appendix

Here is the prompt I used, in this case with ChatGPT:

You are going to use Benjamin Franklin's "moral algebra" approach to help make a decision about the following:

Follow these steps:
1. List the different options under consideration, such as:
– Option 1: 
– Option 2:
– Option 3:
2. Make a list of the possible pros and cons for each option for this decision. Format the list like this:
Option 1 Pros:
- Pro 1
- Pro 2
Option 1 Cons:
- Con 1
- Con 2
Option 2 Pros:
Option 2 Cons:
3. Critique your list. Are there any important considerations missing? Add at least three more pros and three more cons to the list for each option.
4. Create a table that assigns a weight to each pro and con, from 1 (low importance) to 5 (high importance). Format the table like this:
| Option 1 Pros | Weight |
| Pro 1 | <weight> |
| Pro 2 | <weight> |
| Option 1 Cons | Weight |
| Con 1 | <weight> |
| Con 2 | <weight> |
5. Calculate the total weight of the pros for each option and the total weight of the cons for each option.
Show your work for steps 1-5 in a <scratchpad> section before proceeding.
6. Based on the total weights, make a recommendation about which option to choose. Explain your reasoning.
Provide your final recommendation and explanation in an <answer> section.
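If you’d rather run the prompt programmatically than paste it into a chat window, a small script can wrap it around a decision question. This is a sketch using the OpenAI chat completions REST endpoint via Python’s standard library; the prompt template here is condensed (in practice you’d drop in the full prompt above), the model name is just a plausible choice, and you’d need an OPENAI_API_KEY environment variable set for the call to work:

```python
import json
import os
import urllib.request

# Condensed version of the moral algebra prompt; substitute the full
# prompt from the Appendix for real use.
MORAL_ALGEBRA_PROMPT = """You are going to use Benjamin Franklin's \
"moral algebra" approach to help make a decision about the following:

{decision}

List the options, the pros and cons of each, critique and extend the \
list, weight each item from 1 (low) to 5 (high), total the weights, \
and recommend an option, explaining your reasoning."""

def build_prompt(decision: str) -> str:
    """Fill the decision question into the prompt template."""
    return MORAL_ALGEBRA_PROMPT.format(decision=decision)

def ask(decision: str, model: str = "gpt-4o") -> str:
    """Send the prompt to OpenAI's chat completions endpoint."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": build_prompt(decision)}],
        }).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Usage (requires OPENAI_API_KEY in the environment):
#   print(ask("Should my next car be an electric vehicle or a hybrid car?"))
```

Swapping in Claude or Gemini is just a matter of changing the endpoint and request format; as noted above, all three gave very similar results.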

