2wai and the Ethics of Digital Immortality: When AI Brings Back the Dead
A new AI app lets you create avatars of deceased loved ones from just minutes of video. The technology is impressive—but should we use it?

The Viral Video That Sparked a Debate
In early November, a promotional video for an AI app called 2wai went viral, accumulating over 22 million views. It showed a pregnant woman revealing her baby bump to an AI-generated version of her deceased mother—a "HoloAvatar" created from just three minutes of old video footage.
The response was immediate and visceral. While some viewers were moved by the possibility of preserving memories, many others recoiled. Terms like "nightmare fuel," "demonic," and "dystopian" flooded the comments. Comparisons to the Black Mirror episode "Be Right Back"—where a woman creates an AI replica of her dead boyfriend—were inevitable.
How 2wai Works
The technology itself is genuinely impressive. 2wai's platform uses its proprietary FedBrain™ technology to build lifelike digital avatars from a few minutes of smartphone footage. These "HoloAvatars" can:
- Engage in real-time, two-way conversations
- Speak in over 40 languages
- Share memories and knowledge from training data
- Learn new information and skills
Co-founded by Disney Channel actor Calum Worthy and entrepreneur Russell Geyser, the company raised $5 million in pre-seed funding and has partnerships with British Telecom and IBM. The app is currently in beta on iOS.
The Legitimate Use Cases
It's worth noting that grief technology is just one application—and not the primary intended use. 2wai promotes itself as a platform for:
- Content creators: Enabling fans to have personalized conversations with their favorite creators
- Brands: Creating AI brand ambassadors and customer service representatives
- Education: Interactive learning with AI tutors and historical figures
- Personal use: Creating digital twins for communication and entertainment
These applications raise fewer ethical concerns. Having a conversation with an AI version of a celebrity or historical figure—created with consent—is qualitatively different from recreating a deceased family member.
The Ethical Minefield
The grief-tech application, however, treads into deeply uncomfortable territory. Ethicists at Cambridge's Leverhulme Centre for the Future of Intelligence have raised several concerns:
Consent of the Deceased
Can you create a talking avatar of someone who never consented to be digitally recreated? The deceased person cannot agree to how they're represented, what they "say," or who interacts with them.
Impact on Grieving
Psychologists worry that interacting with AI replicas could interfere with the natural mourning process. Instead of processing loss and eventually finding closure, users might become trapped in a loop of artificial interaction, unable to move forward.
Children and Death Comprehension
The promotional video specifically showed a grandchild interacting with a deceased grandmother. Experts at Cambridge noted this could confuse young children who are still developing their understanding of death—a fundamental concept that AI facsimiles could dangerously blur.
Memory Distortion
Perhaps most troubling: these avatars will inevitably say things the real person never said. Over time, the AI's responses could overwrite genuine memories, replacing the real person with an algorithmic approximation.
The Industry Context
2wai isn't alone in this space. Several companies are exploring grief technology:
- HereAfter AI: Creates interactive memory apps from recorded stories
- StoryFile: Preserves interactive video testimonies
- Project December: GPT-powered chatbots trained on text messages
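The mechanism behind these text-based systems is broadly similar: a general-purpose language model is conditioned on a "persona" prompt assembled from the person's own words, so its replies mimic their voice. Here is a minimal sketch of that prompt-assembly step; the names and messages are invented, and no specific vendor's API is shown:

```python
# Sketch of a Project December-style persona prompt. The saved messages
# below are invented examples; a real system would send `prompt` to a
# language-model API and return the model's completion as the "reply".

def build_persona_prompt(name, messages, user_question):
    """Assemble a few-shot persona prompt from saved text messages."""
    examples = "\n".join(f"{name}: {m}" for m in messages)
    return (
        f"The following is a conversation with {name}. "
        f"{name} speaks exactly like in these messages:\n"
        f"{examples}\n"
        f"You: {user_question}\n"
        f"{name}:"
    )

saved_messages = [
    "Don't forget your umbrella, it always rains on Tuesdays!",
    "Love you, see you at dinner.",
]
prompt = build_persona_prompt("Mom", saved_messages, "How are you?")
print(prompt)
```

The ethical stakes are visible even in this toy version: whatever the model generates after the final `Mom:` line is pure statistical invention, yet it arrives in the loved one's voice.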
The difference is that 2wai's avatars are significantly more realistic: they cross out of the "uncanny valley" into genuinely lifelike territory. That realism is exactly what makes them more concerning.
What This Means for AI Development
This controversy highlights a growing tension in AI development: just because we can build something doesn't mean we should. The technology behind 2wai is remarkable, but the ethical framework hasn't kept pace.
As AI capabilities accelerate, we'll face more of these moments where technical achievement runs ahead of societal readiness. The companies building these tools have a responsibility to think deeply about second-order effects—not just "Can we build this?" but "What happens when we do?"
The Balanced View
It would be unfair to dismiss 2wai entirely. The same technology that can create ethically questionable grief avatars can also create genuinely valuable educational and entertainment experiences. The platform includes security and moderation features to prevent misuse.
And for some people, the ability to "talk" to a deceased loved one—with full awareness that it's an AI—might provide genuine comfort. Grief is deeply personal, and who are we to say what helps someone heal?
But the technology's power is precisely why it demands careful thought. A capability like this should not be promoted through emotionally manipulative viral marketing; it needs guardrails, informed consent, and perhaps even mental health guidance for users.
Conclusion
2wai represents a significant technical achievement in AI avatar technology. For content creators, brands, and educators, it offers exciting possibilities. But its application to grief and deceased loved ones raises questions that the tech industry—and society—aren't fully prepared to answer.
The Black Mirror comparisons aren't entirely fair; that episode ended in horror, while 2wai's technology could be used responsibly. But the comparisons aren't entirely unfair either. We're entering territory that requires more wisdom than we currently have.
As AI continues to blur the line between presence and absence, between memory and invention, we'll need to develop new ethical frameworks. The viral backlash to 2wai suggests that, instinctively, many people feel something is wrong here—even if we can't yet articulate exactly what.