What are your workplace goals for the year ahead? Whether they are to do with learning, productivity, creativity, health or wellbeing, there’s no doubt that they can be supported by artificial intelligence companions.
These systems aren’t just designed to complete tasks, but to stay with people over time, aiding them in almost every endeavour.
It might feel like ChatGPT exploded out of nowhere – but the AI companions we interact with today have actually been a long time coming. They are the result of decades of progress in computing, language, and human–machine interaction.
At HelloSelf we believe that mental health is one of the most high-stakes – but potentially transformative – fields in which AI companions are going to effect change in the years ahead. But their influence touches everything in today’s world.
So a potted history of where AI companions came from – and where they’re going – is useful for anyone at work today.
Learning to converse
The roots of AI companionship go back to 1966, when Joseph Weizenbaum created ELIZA, a simple program that mimicked a psychotherapist by reflecting users’ words back as questions.
ELIZA had no understanding of psychology, emotion, or context. But something surprising happened: users still felt heard. Some even formed emotional attachments.
Weizenbaum himself found this unsettling. He realised early on that conversational systems could evoke trust and projection — a warning that still resonates today.
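To give a flavour of how simple the trick was, here is a minimal sketch in the spirit of ELIZA’s approach – not Weizenbaum’s original code, and the patterns and replies below are invented for illustration. It matches a phrase, swaps “I” and “my” for “you” and “your”, and hands the words back as a question.

```python
import re

# Illustrative reflection rules in the spirit of ELIZA's DOCTOR script
# (made up for this example, not the original patterns).
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
PATTERNS = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Can you say more about that?"),  # catch-all fallback
]

def reflect(phrase: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word, word) for word in phrase.lower().split())

def respond(user_input: str) -> str:
    """Return the first scripted question whose pattern matches."""
    for pattern, template in PATTERNS:
        match = re.match(pattern, user_input.lower())
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."

print(respond("I feel anxious about my new job"))
# -> "Why do you feel anxious about your new job?"
```

Everything the “therapist” says is pre-written; the only intelligence is string substitution. And yet, as Weizenbaum discovered, that was enough for people to feel heard.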
In 1972, psychiatrist Kenneth Colby introduced PARRY, a program simulating a person with paranoid schizophrenia. Compared with ELIZA, PARRY incorporated internal states and conversational strategy, foreshadowing the idea that software could model perspectives, not just language.
These early systems revealed two enduring truths – that humans are quick to anthropomorphise conversational machines and that conversation alone can feel meaningful, even without true understanding.
And yet AI companions remained largely theoretical curiosities for decades.
Chatbots and ‘How may I help you?’ scripts
Through the 1990s and early 2000s, chatbots existed mainly as novelty software and customer-service tools. They followed rules: if the user says X, reply with Y.
These systems were functional but brittle. They couldn’t adapt, remember, or generalise. They weren’t companions — just interfaces.
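As a rough illustration of that brittleness, here is a hypothetical keyword-to-script bot of the kind these tools amounted to; the keywords and canned replies are invented for the example.

```python
# A hypothetical customer-service script of the 1990s/2000s variety.
# Nothing here adapts, remembers, or generalises.
RULES = {
    "opening hours": "We're open 9am-5pm, Monday to Friday.",
    "refund": "To request a refund, please reply with your order number.",
    "password": "You can reset your password from the login page.",
}

def reply(message: str) -> str:
    # Return the canned answer for the first keyword found in the message.
    for keyword, scripted_answer in RULES.items():
        if keyword in message.lower():
            return scripted_answer
    # Anything the script's author didn't anticipate falls straight through.
    return "Sorry, I didn't understand. Please contact support."

print(reply("What are your opening hours?"))   # scripted hit
print(reply("My parcel arrived damaged"))      # brittle miss
```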
What changed wasn’t just better algorithms, but a shift in how humans interacted with technology.
The assistant era begins
In 2010, Siri launched as a standalone app. In 2011, it was embedded directly into the iPhone. This was a cultural turning point.
Talking to technology became normal. Voice interaction moved from novelty to habit.
Then, in 2014, Amazon introduced Alexa alongside the Echo device, bringing conversational AI into homes. Google Assistant followed soon after.
These systems weren’t emotionally intelligent, but they were present. They responded instantly, learned preferences, and became part of daily routines.
This era taught people to expect always-available support and natural language interaction. As technology developed to fit around human behaviour, the foundations of companionship were being laid.
Social companions
In 2014, Microsoft launched Xiaoice, a social chatbot designed explicitly around emotional engagement rather than task completion. It marked a shift from “assistant” to “companion”. Then in 2017, Replika launched, positioning itself as an AI friend designed to talk, listen, and evolve with the user over time.
This period saw AI companions expand into social interaction, emotional support, habit building and creative collaboration.
At the same time, researchers began taking conversational agents more seriously as tools that could influence behaviour — for better or worse.
The question was no longer, ‘Can people relate to AI companions?’ That ship had sailed. It was, ‘What responsibility comes with that relationship?’
Large language models change everything
The next major leap came with large language models (LLMs).
In 2020, GPT-3 demonstrated that a single model could generate fluent, contextually appropriate language across a vast range of topics.
Then, in November 2022, ChatGPT brought this capability to the public almost overnight.
In terms of AI companions, this changed the game.
Suddenly, systems could hold extended and coherent conversations, remember context, adapt tone and respond flexibly instead of following scripts.
AI companions were no longer constrained to narrow domains. They could support learning, coaching, reflection, creativity, and planning in ways that felt genuinely responsive.
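For readers curious what that looks like in practice, here is a minimal sketch of a conversational loop built on the OpenAI Python SDK. The model name, prompts and coaching framing are placeholders for illustration rather than anything HelloSelf ships, and a real companion would need far more safeguarding. The key point is that the “memory” is simply the running message history passed back in on every turn.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

# Unlike a scripted bot, context comes from resending the whole
# conversation history with each request.
history = [
    {"role": "system", "content": "You are a supportive workplace coach."},
]

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(chat("I keep putting off a difficult conversation with my manager."))
print(chat("Can you remind me what we just agreed I'd try first?"))  # relies on context
```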
But this power also introduced new risks for the workplace, which have been much debated: over-reliance on AI judgement, blurred boundaries between tool and relationship, confident-sounding but incorrect responses, and emotional attachment without safeguards.
Capability had outpaced governance, as a spate of legal cases against AI-technology companies demonstrated.
A turn towards responsibility?
By 2023, it had become clear that the future of AI companions wouldn’t be measured simply by what they could achieve.
It would be defined by how they are designed, deployed, and governed.
Governments, regulators, and healthcare systems began setting expectations around transparency, safety, oversight and evidence of benefit.
Across sectors — education, healthcare, work, and consumer technology — the conversation shifted from asking what AI could potentially do, to what it should do.
This marked the transition from experimental companions to purpose-built systems, embedded within real-world frameworks.
The future of AI companions
Looking ahead, at HelloSelf we believe AI companions are evolving in five key ways.
- From reactive to proactive. Future companions won’t wait for users to ask for help. They’ll support routines, track progress, and offer timely prompts — while remaining transparent and user-controlled.
- From generic to personalised. Companions will increasingly adapt to individuals over time, reinforcing goals, preferences, and strategies — not by replacing human expertise, but by extending it.
- From stand-alone tools to ecosystems. The most impactful companions will sit inside broader systems: workplaces, learning platforms, healthcare pathways, and support networks — not as isolated apps.
- From engagement metrics to outcomes. Success won’t be measured by how often people chat, but by whether companions help people achieve meaningful, lasting change.
- From imitation to augmentation. The goal isn’t to make AI more human. It’s to make it human-supporting: clear about its role, bounded in its function, and designed to strengthen — not replace — human relationships.
Why this matters for care and wellbeing
At HelloSelf, we see AI companions as an integral part of a shift towards continuity of support. Understandably, people are concerned about how AI will change their workplaces – and the world around them – but there are so many positives.
Whether someone is learning a new skill – or learning to manage their mental health better – progress rarely happens in a straight line. Support that can be accessed between formal touchpoints matters.
That’s where well-designed AI companions can play a powerful role: reinforcing what works, surfacing patterns early, and helping people stay on track. But human connection should always remain central.
The future of AI companionship won’t be defined by novelty, but by trust, evidence, and intentional design.
We’re only at the beginning of that journey… and it’s going to be an exciting ride.
About the author
Rebecca Ley is a freelance journalist consulting on press and content at HelloSelf, a UK-based psychology technology company delivering therapy, coaching, and AI-supported care through a clinically led model. Trusted by the NHS, insurers, and doctors nationwide, HelloSelf combines clinician oversight, outcome measurement, and personalised care pathways—including its AI Companion—to build the future of mental healthcare. For more information visit www.helloself.com.