The 7 Deadly Sins of AI in Product Development

  • Writer: Andrew Faulkner
  • 5 days ago
  • 3 min read

Updated: 4 days ago

Event Review by Andrew Faulkner


Our speaker, Pauline Kabitsis, came to us from the Bay Area by way of Irrational Labs (https://irrationallabs.com), a team that cut its teeth inside Google’s behavioural economics group. They now work with an impressive list of companies trying to turn AI from “shiny investment” into “something people actually use more than once.” And that last part matters.

Because here’s the gut-punch:

$307 billion has been poured into AI product development… and 75% of those projects are failing to deliver ROI.

Think about it this way: 100 customers walk into your restaurant. Seventy-five sit down, read the menu, and walk straight out again.

That is exactly what’s happening with AI adoption.

So why?

According to Pauline: because we’ve massively over-invested in the technology, and massively under-invested in the humans using it.

Enter the Seven Deadly Sins of AI Product Development — not just a clever theme, but a painfully accurate mirror.

1. Neglect — “Here’s the product. Good luck!”

The classic blank-slate problem. A big, empty text box.

A prompt that says “Ask me anything.”

And a confused user thinking, “Anything??”

AI onboarding today is often designed for power users — not for everyone else who needs a little support, a starting point, or at least a hint.

The antidote: scaffolding.

A few examples, a nudge, a first prompt, a simple walkthrough… anything that stops users from being dropped into the deep end.

2. Obscurity — “We built an amazing AI feature and hid it.”

We saw examples where great AI features were buried behind obscure icons, tucked into menus, or invisible unless you already knew they were there.

Build it and they will come?

No.

Build it, hide it, and they’ll definitely never come.

The fix: make new AI features impossible to miss.

Labels, novelty cues, context-aware prompts — anything that draws the eye.

3. Gluttony — “Here are 47 options. Enjoy.”

Too much choice creates paralysis.

Most users can hold maybe 3–7 things in working memory. AI tools often present 20.

The fix: remove choice or guide the next best action.

Claude and Granola do this brilliantly.

4. Mediocrity — “We promised magic, delivered… meh.”

This one stings.

AI overpromises are everywhere.

A feature claims it can summarise your meeting — then politely tells you it can’t.

A tool claims it can rewrite your LinkedIn bio — then delivers something worse than what you had.

Expectation violation is deadly.

Once trust drops, users leave and don’t come back.

The antidote:

  • Let users co-create (the IKEA effect),

  • Give justifications, and

  • Be honest when the system is still learning.

Transparency builds trust, and trust buys you a second try.

5. Vanity — “AI-powered!” (…whatever that means)

AI-washing is everywhere — labels slapped on products without any clarity on what the AI actually does.

Irrational Labs tested it:

People will not pay more just because the word ‘AI’ is there.

Value comes from outcomes, not labels.

Superhuman’s “4 hours saved per user per week” example hit the room hard.

Clear, concrete, sellable.

6. Tyranny — “We’ve automated everything. You’re welcome.”

Removing too much control backfires.

Suddenly the AI is doing things you didn’t ask for, or blocking you from things you did ask for.

Users need boundaries, not chains.

The answer:

Give agency. Let them undo. Let them choose. Let them opt-in.

7. Envy — “Everyone else has a sidebar and a blank prompt… so we will too.”

The entire AI ecosystem is starting to look the same.

  • Same layouts.

  • Same UX.

  • Same everything.

This is a huge opportunity for differentiation — and a warning about lazy mimicry.

Pauline shared examples from Sora, modular prompting tools, and AI-powered browsers that actually break the pattern.

More of this, please.

The Big Takeaway

Every sin circles back to the same truth:

AI doesn’t fail because of the model. AI fails because of the human.

If users feel lost, overwhelmed, ignored, tricked, or locked out… they simply walk away.

The irony? Every one of these “AI sins” is really just a rediscovery of product lessons we already knew.

We just forgot them in the rush.

At OPMMA, we talk a lot about our mantra: Share. Learn. Grow.

Our last event was exactly that — a chance to learn from real-world behavioural science and bring the conversation back to where AI truly succeeds or fails with people.

Huge thank-you to Pauline for a fantastic, practical, and very human talk. And a huge debt of gratitude to Versaterm for graciously hosting and sponsoring the event; check out their job postings at https://www.versaterm.com.

See you all at our next event.
