I talk to people every day who have never used ChatGPT. Not once. Not meaningfully. They know AI exists. They know it's smart. They know other people use it. They just don't see why they would.
When I ask why, the answers are remarkably consistent. They don't know what to ask. They don't have time to learn a new tool. They tried something once and it felt generic and unhelpful. They forgot it existed between uses. They don't see how it would fit into their actual lives.
These aren't failures of the people. They're failures of how AI has been designed to be used.
The Prompt Problem
Every AI tool in the world is built around the same interaction model: you write a prompt, it responds. This assumes you know what to ask. It assumes you know what's possible. It assumes you're comfortable articulating clearly what you need. It assumes you have the time and mental energy to craft a good question.
Most people don't have any of those things. Not because they're not smart enough. Because they're busy living their lives. They have jobs, kids, commutes, appointments, bills, worries. They don't have bandwidth to become good at prompting. They don't know what they don't know about AI. They can't formulate questions when they don't understand the space well enough to know what questions to ask.
The people who use AI most successfully are the people who least needed it in the first place. They're curious about technology, comfortable with new interfaces, good at language, able to experiment. For everyone else—most people—the barrier to meaningful use is too high.
This isn't a prompt engineering problem. You can't solve an interface problem with better prompts. The interface itself is wrong.
The Forgetting Problem
Most people don't use AI consistently because they forget it exists. They go weeks without thinking about it. They only remember when something makes them think of it—a news article, a conversation, a moment when they suddenly need help and remember AI exists.
By then it's too late. They needed help three days ago. They opened ChatGPT, couldn't figure out what to ask, got a generic response, felt stupid for trying, closed it. They don't try again for another month.
The people who use AI consistently have built habits around it. They carved out time to learn it. They use it for specific tasks. They remember to open it. They know what it's good for. Most people haven't done any of this work, because why would they? It's just a tool. They have a hundred other things competing for their attention.
The problem isn't that AI isn't valuable. The problem is that a tool you have to remember to use gets forgotten. A tool that lives in a browser tab you'll never open doesn't help you. A tool that requires you to change your behavior doesn't fit into most people's actual behavior patterns.
The Relevance Problem
When people do remember to use AI, they often get generic answers that don't feel useful. They ask a question and get the same response they'd get from a Google search. They don't feel understood. They don't feel like the AI knows who they are or what they actually need.
This is because most AI resets after each conversation. It doesn't know that you're a teacher with 32 students, that you're planning a birthday party for a six-year-old who loves dinosaurs, that you're nervous about a medical appointment next week. It doesn't have your context. It can't have your context unless you've spent time building it up.
Generic responses to specific situations don't feel helpful. They feel like the AI isn't really listening. Which, technically, it isn't. It's answering the question you asked, not addressing the problem you're actually facing.
For people who aren't tech enthusiasts, this confirms something they already suspected: AI isn't really for them. It's for people who know how to use it. They're not those people.
The Fitting-In Problem
Even if all the other problems were solved, most people wouldn't use AI because it doesn't fit into how they actually live. They're not going to download a new app. They're not going to create an account. They're not going to learn a new interface. They're not going to remember to check it every day. They're too busy.
The tools that actually integrate into most people's lives are the tools that show up where they're already spending time. WhatsApp. iMessage. The apps they already open every day. A tool that requires behavior change doesn't fit into behavior patterns that are already established.
Most AI adoption failure isn't a capability problem. It's a fit problem. The AI doesn't meet people where they are. It requires them to come to it, remember it exists, learn its quirks, and invest time building a relationship with it. Most people won't do that for any app, let alone one they don't fully understand why they'd use in the first place.
What Would Actually Work
For the eighty percent who don't meaningfully use AI, the tool would have to solve several problems at once.
It would have to be somewhere they're already spending time. Not a new app to download. Not a new account to create. Not a new interface to learn. Somewhere they already go every day.
It would have to remember who they are without them having to explain themselves repeatedly. It would have to know their context, their preferences, their situation, without them having to invest time building that relationship from scratch every conversation.
It would have to show up without being asked. Not wait for them to remember to open it and figure out what to ask. Just notice things that matter to them and bring value at the moment it's relevant.
It would have to feel like it understands them. Not generic answers to generic questions. Real understanding of their specific situation and how to help with it.
And it would have to do all of this without requiring them to change anything about how they live. No new habits. No new behaviors. Just show up in the place they're already spending time and start being useful.
That's what would work. And that's what proactive presence intelligence is designed to be.
The Change That Matters
The conversation around AI adoption usually focuses on capability. More capable models. Better reasoning. More features. But the eighty percent who don't use AI aren't held back by capability. They're held back by interface. By friction. By the gap between what AI can do and what it actually delivers in their lives.
The change that matters isn't making AI more powerful. It's making AI more present. More integrated. More woven into the fabric of how people actually live. Not a tool they go to. A presence that lives with them.
That's the shift that will actually change adoption rates. Not better prompts. Not better models. A different relationship model where AI shows up in your life instead of waiting for you to show up to it.
Frequently Asked Questions
Why don't most people use AI?
Most people don't use AI because the interface requires too much from them. You have to know what to ask, remember to use it, and be good at articulating your needs. Most people don't have the time or context to do this consistently.
Is AI too complicated for most people?
No. AI isn't too complicated—it's too demanding. The problem isn't understanding AI. The problem is that using most AI tools requires behavior change, memory work, and prompting skills that most people don't have time to develop.
What would make more people use AI?
AI that shows up where people already spend time, that learns about them without them having to explain themselves repeatedly, and that initiates help without being asked. Less friction, more presence.
Is Daneel designed for non-technical people?
Yes. Daneel is designed for people who want AI help without having to learn how to use AI. You just talk naturally. It learns about you. It shows up when it matters. No prompting skills required.
