Most AI conversations are backwards, and I’m kind of done pretending they’re not.
We keep starting with features and promised outcomes, like that’s the hard part, and we skip the question of what the system is actually responsible for. Everyone wants to argue about what the model can do before anyone is willing to say what work it owns, end to end, without a human babysitting it.
That’s why adoption always slows down after the demo. It’s why GTM motions get weird. It’s why teams swear the tech works but somehow nothing really changes. The emails get drafted faster, the dashboards look nicer, and the business stays exactly as fragile as it was before.
People love to blame resistance or trust or readiness because those are flattering excuses. They make it sound like the org just needs more time or better change management. What’s actually happening is simpler and more awkward. We keep designing AI to sit next to humans and wait for permission instead of letting it take over execution. That way the org chart stays intact and nobody has to explain why entire layers exist.
AI doesn’t hit a wall because it can’t perform. It hits a wall because companies won’t let it own anything that would force real tradeoffs. And until that changes, this all stays stuck in the same loop everyone swears they’re about to break out of.
If Your AI Needs an Avatar, It’s Hiding Something
Most “AI SDR” demos that lead with a talking avatar are trash. Not “early.” Not “promising.” Trash.
A prospect told me he just survived a demo that led with the face. His reaction:
“Not great.”
“Not needed.”
“It’s a ways off.”
Think: chatbot in a Halloween mask. Creepy.
“Uncanny valley.”
“The lag actually gets worse once you add the face.”
That last one is the killer. Everyone’s obsessing over the avatar like buyers are lonely and need a digital coworker to smile at them.
No. Buyers want answers and they want them fast. If your AI SDR can’t do the basics, slapping an avatar on it is just lipstick on a broken robot. The basics:
Do the research
Respond instantly
Run a real omnichannel cadence
Qualify like an adult
Route correctly
Book the meeting
Log it clean
Avatars don’t build trust right now. They trigger suspicion. Ship something that actually works in email, chat, SMS, LinkedIn, and phone first. Then earn the right to play Hollywood :)
Bad demos are obvious, but the deeper problem shows up after launch, when customers are given a choice.
This Is Why Your AI Rollout Failed
Your AI rollout failed because you never trained your customers to use it. I’ve watched this movie too many times. A team drops an AI agent into the flow. Announces it like a feature. Keeps the old path wide open. Then acts shocked when customers avoid the AI. Of course they do. You gave them a choice. And the default is always what they already know.
You don’t train customers by explaining AI. You train them by cutting off the escape route in low-risk moments… then letting the outcome speak for itself. Here’s what that looks like in practice. A customer calls support and asks for a human. The AI employee says:
“I can totally get you to a rep in a couple of minutes. While you’re waiting, let me take a crack at this. You might be surprised, I’m pretty damn good.”
They agree. The conversation keeps moving. The AI fixes the issue before the handoff ever happens. Next time, they don’t ask for a human. Not because they were educated. Because they experienced how good the AI actually is.
If your rollout needs customers to choose AI, it’s already broken. Real adoption is designed, not requested. But really, the problem isn’t adoption. It’s how we think about what AI actually is.
AI is NOT a Tool
If you’re still calling AI a “tool,” stop. You’re bugging me. McKinsey & Company didn’t add 25,000 AI tools. They added 25,000 digital workers.
People cling to the word tool because it keeps the org chart safe. If it’s a tool, nothing has to change. If it’s a worker, the org suddenly looks old, bloated, and a lot less defensible than anyone wants to admit. So teams downplay it or rebrand it. Or they pretend it's just better software. But the work already moved.
McKinsey now runs with roughly 40,000 humans and 25,000 digital workers. That’s a different operating model whether you like the wording or not. And yeah, I get the resistance. Most white-collar jobs didn’t disappear overnight. They got exposed. A lot of what we called work was just waiting. Waiting on handoffs or reviews or maybe someone’s calendar to open up. The paycheck was tied to delay, not value.
Call it a tool if that keeps things familiar. Just don’t do it around me. I’m done pretending that language doesn’t matter.
Now that we’ve stopped calling AI a tool, a lot of GTM systems look… broken. Signal-based selling is one of the clearest examples.
The Fix for Signal-Based Selling
Signal-based selling is broken and I know how to fix it. Every GTM team has the same junk drawer of tools:
Intent data
Triggers
Clay tables
Slack alerts lighting up all day
And then… nothing.
No meetings booked
No pipeline movement
No revenue
So leadership shrugs and says, “Signal-based selling is overrated,” and chases the next shiny thing. That line is a lie. It lets everyone blame the idea instead of admitting they built it in a way that was guaranteed to fail.
Signal-based selling didn’t fail. You neutered it. The second you routed the signal to a human, it was dead. You said you were building a revenue system, but all you built was a notification and a hope that a rep would magically care at the exact right moment.
They didn’t. They never do.
And instead of owning that, teams respond the only way they know how. Add more crap. More signals. Better scoring. More enrichment. Another dashboard. Another enablement session. More “alignment.” None of that fixes the actual problem, which is that execution is not working.
The only way signal-based selling works is when you pair the signal with an AI agent that can actually act on it. Not flag it. Not notify someone about it. Act on it.
Respond
Answer questions
Follow up
Book the meeting across whatever channel the buyer shows up on
Like an SDR whose only job in life is to work that signal. That’s the system. Not some stitched-together Frankenstein stack of 6sense, Clay, Instantly, Slack, LinkedIn, and whatever other tool you duct-taped in because a GTM engineer said it was “best practice.”
Signals without execution are just trivia. Signals + AI Agents equals pipeline creation.
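The difference between the two architectures is easy to see in a sketch. This is a minimal illustration, not anyone’s real stack, and every name in it (`Signal`, `Agent`, the two pattern functions) is hypothetical: the broken pattern routes a signal into a notification queue and stops; the working pattern hands the same signal to an agent that owns the outcome and returns a finished result.

```python
from dataclasses import dataclass, field

@dataclass
class Signal:
    """A hypothetical buying signal, e.g. a pricing-page visit."""
    account: str
    kind: str

@dataclass
class Agent:
    """Hypothetical AI SDR agent that works a signal end to end."""
    booked: list = field(default_factory=list)

    def handle(self, sig: Signal) -> str:
        # Respond, qualify, follow up, book -- the agent finishes the job.
        self.booked.append(sig.account)
        return f"meeting booked with {sig.account}"

def broken_pattern(sig: Signal, slack: list) -> None:
    # Signal -> notification -> hope a rep acts. Nobody owns the outcome.
    slack.append(f"ALERT: {sig.kind} at {sig.account}")

def working_pattern(sig: Signal, agent: Agent) -> str:
    # Signal -> agent -> executed outcome. The agent owns it.
    return agent.handle(sig)
```

In the first pattern the only output is a growing alert queue; in the second, the system’s output is the booked meeting itself, which is the whole argument of this section in two function signatures.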
It’s easy to talk about systems and say “we need better,” but it gets uncomfortable when the bottleneck is you.
I’m the Best Demoer
I’m the best demoer at my company. Not “one of.” Not “pretty good.” The best.
Everyone knows it. Prospects feel it and deals magically move the second I’m on the call. And yes… I’m half joking and half dead serious, which somehow makes it worse.
Here’s the problem. Every time I demo, I create a new lie. The lie is that this is scalable.
I’ve watched this loop for years. The founder runs the demos at first. Then the CRO. Then that one SE everyone pulls into deals when it really matters. Then the team grows, and everyone acts surprised when win rates slide and calls turn to garbage.
So instead of pretending everyone can be me, I’m replacing myself. We’re building a Sales Engineer Cloud Employee trained on how I actually demo.
The real flow, the tight one. The “don’t show that yet” instinct. The part where buyers wake up and go oh… I get it now.
Same Gabe demo. Without Gabe :)
It runs the game tape. Moves around the product like it owns the place and handles detours without spiraling. I’m fixing mine first. We’re opening beta soon.
If you’re convinced you’re the best demoer in the room, do the uncomfortable thing and try to make yourself unnecessary. Once you start, you notice how many operators are wrestling with the same shift.
They just do not have many places to talk about it honestly.
Real Operators in the Room
This week was our 4th TalkAI event. No pitches. No panels reading slides. Just operators in a room actually talking about what’s changing with AI and what’s breaking inside real companies.
We had 75 people show up. The conversation was honest, opinionated, and way more useful than anything you’ll get from LinkedIn or a vendor webinar.
This stuff is moving too fast to learn alone. If you’re not debating it, arguing about it, and pressure-testing ideas with other leaders, you’re honestly doing it wrong.
Huge thanks to Silicon Slopes and Tiffany Vail, Ryan Kell, and Danielle Karr for jumping in and making the night what it was.
If you’re building, leading, or getting uncomfortable with where AI is headed… pull up a chair next time.
Conversations like that are how we’re turning this thinking into AI agents that actually own execution.
What all of this keeps circling back to is something most teams don’t want to talk about.
AI doesn’t fall short because it’s immature or not smart enough. It falls short because we never let it finish anything. We drop it into the middle of a workflow, surround it with approvals, handoffs, and escape hatches, and then act surprised when the result looks exactly like the old process with a new coat of paint.
You see it everywhere. The AI drafts, but a human decides. The AI suggests, but someone else routes it. The AI flags the issue, but execution still waits on a person who’s busy, distracted, or out of office. Then leadership looks at the numbers and says the AI didn’t move the needle.
Of course it didn’t. It wasn’t allowed to.
The real change isn’t about making the model smarter or adding another capability. It’s about deciding who owns the outcome. As long as AI is stuck assisting people who already control the work, nothing fundamental shifts. Adoption drags, systems get fragile, and scaling stays a slide deck idea instead of something you can feel in the business.
This isn’t about picking better tools. It’s about how the work is designed and who is trusted to carry it through. And most teams are very intentionally not making that call, even if they pretend it’s a technical limitation instead of a decision.
That’s it for today. Connect with me on LinkedIn if you actually want to understand what an Autonomous Organization looks like in the real world.


