
3 lessons from ChatGPT's launch (And what your AI product can learn from it)

🎬 TL;DR: WATCH THE VIDEO (click above) for the full teardown with actionable examples. Only takes ~4 minutes at 2x speed. ⏩

A SPECIAL OFFER FOR THE SAN FRANCISCO AI PRODUCT CONFERENCE: I went last year and loved it. Product Managers and Product Marketers from top tech companies all go! This year I’m speaking about AI adoption. Would love to see you there. You can get 25% off tickets until Sept 5th using this link and the code IRRATIONAL25.


OpenAI’s launch of GPT-5 was interesting, and not just because of the model. It taught us a lot about human behavior. Here are some key insights, plus tactical lessons you can apply to your AI rollout (or any other launch) strategy.

Lesson #1: The Endowment Effect is real (a.k.a. don't rip things from people's hands)

Before the launch, ChatGPT’s interface was wild. It had a long list of models with unclear descriptors. o3 was actually much better than GPT-4o, yet GPT-4o was the default. What?

So for this launch, ChatGPT did the reasonable thing: they defaulted everyone to GPT-5 and took away the other models. The new version automatically switches between models. You always get the best model, but you don't pay for the expensive one when you're doing simple queries. What’s not to love?

Except people didn’t love it. They hated it. Why? We hate losing things. And more than that, we hate losing things we “own” (the Endowment Effect). Many users liked GPT-4o, and now it was taken away from them. People were paying for that model.

Secondly, people hate change. Netflix waits days after launching a new algorithm before judging whether it works. The immediate results skew negative simply because people hate change, not necessarily because they hate the algorithm. They need time to get used to it.

The fix: Don’t surprise people. Change is inevitable. If you’re running any kind of B2B software, warn users before changing something. It can be as simple as a big pop-up that says "we're redesigning this page, get ready." That won’t take the full sting away, but it gives users some cushion and warning before they flip out.

Lesson #2: “Magic” can kill willingness to pay

Dan Ariely often does a thought experiment when he talks about the “pain of paying”: would you be willing to pay for searching on Google? Intuitively, we think: nope. Google's free. It would be wild if we had to pay for every search. But why not pay? If you think of all the amazing engineers who worked to build it for us and answer our questions, it’s hard to deny the results are impressive and valuable. And yet we're not willing to pay for it.

Why? One, we’re anchored on free and the price of zero is hard to compete with. And two, we don’t really see or appreciate the effort from all of those engineers.

Louis CK has a nice bit on this. The first time you experience WiFi on an airplane, you're like "this is crazy!" When it first launched, it was wild. But then in the middle of the flight the WiFi goes out, and you're like "what, wtf. Where’s my WiFi?" We adapt really quickly to new norms.

The same is true for this magical world of AI.

Right now, we are still in the honeymoon phase of AI - we are willing to pay for it (unlike Google). But if I were ChatGPT or building an AI app, the number one thing I’d tell my design team is to build in cues that remind users how amazing AI actually is and how much effort goes into creating it.

Because when we see the effort that goes into something, we value it more. And when we value it more, we usually become more willing to pay for it.

Tracking? When GPT-5 magically selects the model for me, I see less of the effort that goes into the magical AI, and that may lower my willingness to pay for it.

Lesson #3: When to say “I don’t know”

One cool thing GPT-5 did is say "I don't know" instead of hallucinating. This is wonderful because we can trust it more, right?

But do we trust it more? An experiment with about 400 people showed that when the LLM says "I don't know," it actually decreases confidence in the system. On its face, that seems rational: if the LLM doesn’t know something, we probably shouldn’t trust it more, since it literally just told us it doesn’t know the answer!

But for LLMs, this logic is actually backwards. We should trust it more, because the counterfactual is an LLM that hallucinates and lies to you instead of admitting ignorance. Instead of saying “I don’t know,” it makes up an answer.

This is not unlike humans.

Some (small N) research on patient confidence in doctors has found a similar phenomenon. If a doctor says "I'm not exactly sure what your diagnosis is. You may have X, or you may have Y," confidence in them plummets. If they present it more like "This is what you have," we believe them, even if they're not actually sure.

So doctors have little incentive to express uncertainty about a diagnosis, because the patient will trust them less.

It's a conundrum for LLMs, and it's a conundrum for humans. Honestly displaying a lack of confidence hurts trust, yet it's exactly the thing we should value.

Hopefully in this new AI world, we’ll eventually value this kind of uncertainty as a signal of trust.

Until then? Proceed with caution.

Working on an AI product launch? I'd love to hear how you're thinking about these behavioral challenges. Email me: kristen@irrationallabs.com. Our team at Irrational Labs helps companies navigate exactly these kinds of human-centered design problems.

🎬 This is just a sneak peek! WATCH THE VIDEO (click below) for the full teardown with actionable examples. Only takes ~4 minutes at 2x speed.

Have a friend who would enjoy these teardowns? Click the button below to refer them (& earn some great rewards).👇

Refer a friend

📧 Questions about product adoption? Shoot me an email: kristen@irrationallabs.com.

Want to increase conversion, retention, engagement? Reach out to Irrational Labs.
We design products that change behavior, using behavioral science. Check out our case studies to see it in action.
