Jill Kato, UC Irvine

Most AI chatbots have the memory of a goldfish. You have a conversation, maybe even a deep one, and the next time you interact, it’s as if you never met. No memory, no continuity, just a blank slate.
Bill Tomlinson, a professor at UC Irvine’s Donald Bren School of Information and Computer Sciences and UC Irvine’s School of Education, thinks we can do better. Much better. His project An AI That’s a Better Friend, with Rebecca W. Black, Donald J. Patterson, Anne Marie Piper, and Andrew W. Torrance, isn’t just another chatbot. It’s designed to form synthetic social relationships—meaning it remembers past conversations, builds an emotional history, and develops a genuine sense of continuity with users.
And while that might sound like the premise of a sci-fi movie where things go wrong (Tau, The Machine, M3gan to name a few), Tomlinson sees something different. He sees an opportunity to not just make AI more human, but to make humans better at being human—more connected, more collaborative, and maybe even a little wiser.
One of the most compelling aspects of Tomlinson’s AI is its ability to preserve and recall life stories. Unlike conventional AI, which has only short-term memory and limited long-term recall, An AI That’s a Better Friend builds a social memory graph, remembering past conversations, relationships, and key details.
This AI would not only listen to elders’ stories but remember them, organizing them into narratives that can be revisited and shared. It doesn’t just store information; it helps make sense of it.
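The project’s actual architecture isn’t public, but the idea of a social memory graph can be sketched in a few lines: people and stories become nodes, and edges record how they relate. Everything below (class and method names, the `appears_in` relation) is a hypothetical illustration, not the team’s real design.

```python
# Illustrative sketch of a "social memory graph": people and stories are
# nodes; edges record relationships and mentions. All names here are
# hypothetical, not the project's actual implementation.
from collections import defaultdict

class SocialMemoryGraph:
    def __init__(self):
        self.nodes = {}                # node id -> attributes (person or story)
        self.edges = defaultdict(set)  # node id -> set of (relation, other id)

    def add_person(self, pid, name):
        self.nodes[pid] = {"kind": "person", "name": name}

    def add_story(self, sid, summary):
        self.nodes[sid] = {"kind": "story", "summary": summary}

    def relate(self, a, relation, b):
        # Store the link in both directions so either node can be queried,
        # e.g. relate("june", "appears_in", "lake_1982").
        self.edges[a].add((relation, b))
        self.edges[b].add((relation + "_of", a))

    def recall(self, pid):
        """Return summaries of the stories a given person appears in."""
        return [self.nodes[b]["summary"]
                for rel, b in self.edges[pid]
                if rel == "appears_in" and self.nodes[b]["kind"] == "story"]

g = SocialMemoryGraph()
g.add_person("june", "Cousin June")
g.add_story("lake_1982", "The 1982 lake trip with Dad and Cousin June")
g.relate("june", "appears_in", "lake_1982")
print(g.recall("june"))  # ['The 1982 lake trip with Dad and Cousin June']
```

Even this toy version shows why a graph helps: asking "which stories involve this cousin?" becomes a lookup rather than a search through scattered transcripts.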
“My dad tells me stories all the time,” Tomlinson says. “I’d love to remember all the connections, how this cousin relates to that story, or where certain events happened. AI could help create a structured archive, something richer than scattered emails or half-remembered conversations.”
This kind of AI-assisted storytelling could have profound implications, not just for personal memory but for history, culture, and intergenerational knowledge transfer. This AI could help us better understand our past by connecting events and relationships that we now often struggle to recognize.
Of course, giving AI the ability to form relationships raises serious ethical questions. And Tomlinson isn’t naive about the risks.
“There are all sorts of problems with AI,” he acknowledges. “Who is it really working for? A corporation? The user? Or, at some point, itself?”
That last part is where one’s dystopian alarm bells usually start ringing. But Tomlinson believes that if we build AI responsibly, we can avoid nightmare scenarios and use AI to address real social issues, like loneliness and social isolation.
According to polls, we’re living in a loneliness epidemic—52% of U.S. adults report feeling lonely. And while AI can’t replace human relationships, early research suggests that AI companions can provide meaningful emotional support, even reducing depression and suicidal thoughts.
Tomlinson’s AI could be different from existing chatbot companions like character.ai and Replika. Instead of offering generic, pre-programmed comfort, An AI That’s a Better Friend builds a real history with users. It knows their struggles, their joys, their relationships—because it remembers.
But Tomlinson’s ambitions go far beyond making AI a better conversationalist. He believes AI could play a role in solving civilization-level problems—things like climate change and governance challenges.
For the past 20 years, Tomlinson’s research has focused on sustainability and computing. That work has led him to a major concern: humans are failing at long-term thinking.
“If we continue on this trajectory, we may reach a point where we’re no longer able to sustain the kind of computational infrastructure we rely on today,” he says. “AI depends on stable governments, food supply, and energy resources. But as a civilization, we’re not addressing these foundational problems effectively.”
So how does a chatbot fit into this?
Tomlinson believes AI could enable humans to work together more effectively.
“If AI can help coordinate efforts, facilitate discussions, and improve how we organize ourselves as a society, maybe we have a shot at solving these problems,” he says.
It’s a bold vision. And while Tomlinson acknowledges that AI alone won’t fix everything (and may even cause as many problems as it solves), he also doesn’t see a better alternative. He likens the use of AI to the use of the microscope. Just as we use a microscope to see bacteria—revealing hidden worlds and leading to life-saving medical advancements—we can use AI to uncover patterns and connections that our own minds might miss. We can collect data, create models, and run simulations to make far more informed decisions. Tomlinson is hopeful that AI can help enhance our understanding of the world and guide us toward better solutions for the challenges we face.
“I don’t think AI is likely to solve all our environmental problems,” he says. “But I also don’t see any other viable path forward. If there’s even a small chance AI can help us, that’s better than the alternative.”
Turning an idea like this into a working product takes more than just inspiration—it takes funding and market validation. That’s where UC Irvine Beall Applied Innovation’s Proof of Product (PoP) grant and I-Corps come in.
The PoP grant provides critical funding to support early-stage development, feasibility studies, and customer discovery efforts—helping bridge the gap between academic research and real-world applications. Tomlinson applied as part of the A.I. track, a component of the larger NarrA.I.tive Story Studio at UC Irvine Beall Applied Innovation. He plans to use the grant to build and refine the AI’s social memory engine, develop a working prototype for real-world testing, and explore ethical safeguards to ensure responsible AI interactions.

Meanwhile, I-Corps helped the team explore the potential market fit for their AI. Through hands-on learning and customer engagement, they worked to understand how elders could benefit most from this technology, as well as how it might be commercialized. The team interviewed both elders and their adult children to understand how the system might support their goals, and learned more about the nuts and bolts of actually starting a business.
Together, these programs are helping bring an ambitious, ethically grounded AI project closer to reality.
At first glance, An AI That’s a Better Friend might seem like a simple upgrade to chatbot technology, but at its core, it’s about something much bigger. It’s about using AI to enhance human connection, preserve stories, and maybe even improve our ability to collaborate at scale. Tomlinson isn’t just building a better chatbot. He’s asking: What if AI could help us become better people?
And in an era where technology often feels like it’s pulling us apart rather than bringing us together, that’s a question worth exploring.
Interested in getting involved with NarrA.I.tive? Contact Stuart Mathews
Support projects like this through Proof of Product: Contact Grace Han