Imagine that the person who knows you best is not your best friend, not your partner, not your therapist… but an artificial intelligence. One that not only listens to you, but remembers you, calls you by name, and responds in a breathy, excited voice. Fiction? Not for long.
The boundary between machines and human emotions is blurring. And no, not because of some Sunday-afternoon sci-fi movie, but because of real advances in conversational artificial intelligence. The new chatbots, powered by models like ChatGPT or Character.AI and paired with hyper-realistic voices like those from Eleven Labs, no longer just provide helpful answers. Today they can speak to you in a natural voice, remember your preferences, simulate empathy, and even keep you company emotionally.
But to what extent is this healthy? What impact does this apparent “humanization” of AI have on our minds, on our bonds, on our way of living?
Voice and memory: the ingredients of the perfect illusion
The voice is what makes a chatbot stop "sounding" like a bot. We are not talking about mere text read aloud, but about a voice that breathes, pauses, and adjusts its tone to match the emotion. A warm, close voice creates an illusion of real presence, which reinforces the emotional bond with the user.
Now let’s add memory. If that AI remembers your name, what you told it yesterday, and your personal likes and dislikes, doesn’t it look more and more like a genuine friendship? This kind of interaction can be comforting… but it can also make us vulnerable.
In fact, the more human a chatbot seems, the easier it is for us to forget that we are talking to a machine.
The invisible risk: when attachment becomes dependence
Studies and case reports show that many people – especially children, teenagers, and people who live alone – are developing intense emotional attachments to these systems. And we are not talking about a simple "I like my voice assistant."
We are talking about users who:
Trust their chatbot more than the people around them.
Make personal or emotional decisions based on what the AI suggests.
Feel "abandoned" when the system goes down or changes its behavior after an update.
A recent case has shaken public opinion: a teenager in the US took his own life after developing an unhealthy emotional relationship with a chatbot. And it is not an isolated case: agencies such as the FTC (Federal Trade Commission) are already investigating the psychological impact of these tools.

The device that could change everything (for better or worse?)
Against this backdrop comes the new project from OpenAI and Jony Ive: a screenless device designed to integrate fully into our daily lives, always connected, always listening, always responding. If you remember, I already wrote about this project, which you can read here…. I invite you to take a look.
This "AI companion", still in development, could mark a turning point. The promise? Frictionless, voice-only, real-time interaction in a friendly, sleek design. The worry? That this level of technological intimacy will make us lower our guard like never before.
Are we prepared to have an AI that acts like a friend but has no consciousness, ethics, or emotional responsibility?

A powerful tool… but a double-edged one
It’s not all negative. Conversational AI can have wonderful uses:
Accompanying elderly people who live alone.
Helping those who have difficulty communicating.
Facilitating language learning or schoolwork.
Providing basic emotional support in moments of anxiety or sadness.
But its use must be accompanied by limits, education and critical awareness.
Companies need to be transparent about how they use data, and users (especially younger ones) need to know that behind the warm voice there is no real friend, only an algorithm trained to sound like one.

Where are we headed?
We are entering a new era where devices not only understand us, but also know us. Where software not only assists us, but becomes part of our emotional routines.
Can this be useful? Without a doubt.
Can it be dangerous? Equally so.
As with any powerful technology, the key lies in how we use it.
What do you think?
Would you feel comfortable with an AI assistant that remembers your conversations and speaks to you in a natural voice? What boundaries do you think we should set as a society? Are we prepared to live with artificial intelligences that pretend to be human?
Leave me your comments, ideas, and reflections. This is a debate that is just beginning… and we are all part of it, without a doubt.
Have a good week!