Bard: Why User Trust is Still the Biggest Problem in AI (& How to Solve It) 🤖

There's only one way out of algorithm aversion: trust.

TLDR: WATCH THE VIDEO (click above)

Imagine if your doctor diagnosed you with a serious illness and was 100% confident that the diagnosis was correct. Then you found out you were healthy. You’d be upset, right? 🩻

This scenario sounds far-fetched. But it’s exactly how we feel when an AI tool gives us an inaccurate response. We lose trust fast. And that’s too bad, because very often these AI models get it right, and that’s huge.

So, how can we increase trust in these applications that have so much potential, but sometimes miss the mark in ways that seem almost human? 

The Trust Economy 🤝

Trust is social glue. Without it, everything from friendships and love to business and politics would be impossible. It also shapes our economic behavior. Behavioral science tells us trust isn’t just a risk-reward calculation, as classic economic models might suggest. It involves a cocktail of emotions, expectations, and prior experiences, as well as innate social preferences such as betrayal aversion.

When Machines ‘Betray’ 🤖

It's one thing when a fellow human breaches trust—it's layered, emotional, and sometimes understandable. But what happens when algorithms go awry? 

Algorithms don't have emotions, but a navigation app sending us on a wrong turn or ChatGPT giving us false ‘facts’ can feel like a small betrayal. This phenomenon is known as algorithm aversion: we’re less forgiving of mistakes made by algorithms than of human errors. Our skepticism will fade over time as we’re more exposed to AI and as algorithms improve. In the meantime, AI products can build trust consistently so that we judge them by more than their occasional errors.

In Bard We Trust? 🤔

Today I dive into Google’s Bard to find out how, exactly, AI tools can do this—and bake trust into their product design. Along the way, I address the 2 most common (and interrelated) questions we get about these tools at Irrational Labs:

1️⃣ How do you get people to trust AI models?

2️⃣ How do you get people to engage with them?

Not everything I find is good: for instance, when I asked Bard to ‘give me mortgage rates’ and checked the results against a Google search, I got different numbers. 👀 So I asked Bard, ‘Are you sure?’ And lo and behold, the numbers were still different! 😱

And this is our conundrum: we want to ask the AI oracle, but we don't know what we can or can't trust. Clearly the engineers of these tools are brilliant, because you CAN trust a lot of their output. But the uncertainty about what can be trusted decreases trust in the whole system.

Can behavioral science help? 💡

Things I cover in this teardown:

💡 How Bard could use ‘just-in-time education’ at the point of decision-making to enhance trust-building
💡 How making prompting easier could get more people to engage with AI (& how to do this)
💡 How generative AI models could help users recognize (& act on) their intent and why that matters

Bard has some mechanisms in place to increase trust. And even though the UX & UI may not fully be there yet, I’m optimistic about these tools. With a little behavioral design, I might even trust them. 😉

See you next week. 👋

Have a friend who would enjoy these teardowns? Click the button below to refer them (& earn some great rewards)👇

Refer a friend

Questions about your product? Email kristen@irrationallabs.com.
