Rachel Feltman: For Scientific American’s Science Quickly, I’m Rachel Feltman.
The idea of digital life after death is something science fiction has been exploring for ages. Back in 2013 a chilling episode of the hit show Black Mirror called “Be Right Back” followed a grieving woman who came to rely on an imperfect AI copy of her dead partner. More recently the idea of digital copies of the deceased even made it into a comedy with Amazon Prime’s show Upload.
That shift from psychological horror to satire makes sense because in the decade or so between the premieres of those shows, the idea of preserving our dead with digital tools has become way less hypothetical. There’s now a growing industry of what some experts call “griefbots,” which offer AI-powered mimics of users’ departed loved ones. But these services come with a whole host of ethical concerns—for both the living and the deceased.
My guest today is Katarzyna Nowaczyk-Basińska. She’s a research fellow at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge. Her research explores how new technologies like these bots are reshaping our understanding of death, loss and grief.
Thank you so much for coming on to chat today.
Katarzyna Nowaczyk-Basińska: Thanks so much for having me. I’m super excited about this.
Feltman: So how did you first get interested in studying, as you call them, “griefbots” or “deadbots”?
Nowaczyk-Basińska: I always laugh that this topic found me. It wasn’t me who was searching for this particular topic; it was, rather, the other way around. When I was still a student we were asked to prepare an assignment. I was studying media studies, with elements of art and performance, and the topic was very broad, simply “body.” So I did my research, and I was looking for some inspiration, and that was the very first time I came across a website called Eterni.me, and I was absolutely hooked by this idea that someone was offering me digital immortalization.
It was almost a decade ago, and I thought, “It’s so creepy; it’s fascinating at the same time. It’s strange, and I really want to know more.” So I prepared that assignment, then I chose digital immortality as the subject for my master’s, the master’s evolved into a Ph.D., and after 10 years [laughs] I’m still in this field, working professionally on this topic.
Feltman: Yeah, I imagine that the sort of technologies behind the idea of digital immortalization have changed a lot in 10 years. What kinds of advances are powering this field?
Nowaczyk-Basińska: So actually, 10 years ago commercial companies sold a promise …
Feltman: Mm.
Nowaczyk-Basińska: And today we have a real product. So that’s the big change. And we have generative AI that makes the whole thing possible. We have the whole know-how and technological infrastructure to make it happen.
To create this kind of technology, to create your postmortem avatar, what you need is a combination of two things: a huge amount of personal data and AI. So if you want to create this avatar, you need to grant a commercial company access to your personal data. That means you share your video recordings, your messages, your audio recordings, and then AI makes sense of it …
Feltman: Mm.
Nowaczyk-Basińska: And [tries] to find links between different pieces of information and extrapolates the most probable answer you would give in a certain context. So obviously, when your postmortem avatar is speaking, it’s just a prediction of “How would that person react in this particular moment and in this particular context?” It’s based on a very sophisticated calculation, and that’s the whole magic behind this.
Feltman: So what does this landscape look like right now? What kinds of products are people engaging with and how?
Nowaczyk-Basińska: Mostly what’s available on the market are postmortem avatars, or griefbots or deadbots. We use these different names to cover what is actually the same type of technology: a virtual representation of yourself that can be used long after your biological death. I often use this phrase, borrowed from Debra Bassett: we live in a moment when we can be biologically dead but at the same time virtually present and socially active. There are many companies, mostly based in the United States—and the United States seems to be the epicenter for incubating this idea and distributing this whole narrative around digital immortality across the world. So we have different start-ups and companies that offer this type of service, either in the form of bots or holograms.
Feltman: And are we seeing any differences culturally in, in how different people are reacting to and engaging with these products?
Nowaczyk-Basińska: That’s the main question I am trying to pursue right now because I’m leading a project called “Imaginaries of Immortality in the Age of AI: An Intercultural Analysis.” In this project we try to understand how people from different cultural backgrounds perceive the idea of digital immortality—Poland, India and China are the three countries selected for this research—because it’s not enough to know only the perspective of the West and its dominant narrative.
We are still in the data-collection phase, so I can only share some observations, not yet findings. What we know for sure comes from the experts and nonexperts we work with in these three locations—when I say experts I mean people who work at the intersection of death, technology and grief, people representing very different fields and industries: palliative care professionals, academics, people who work in the funeral industry, spiritual leaders—people who can help us understand what digital immortality may mean in this context.
Feltman: Mm.
Nowaczyk-Basińska: So definitely, what we know for sure [is] that digital immortality is perceived as a technology that can profoundly change the way we understand and experience death and immortality. And experts agree that we need much more discussion of this, that we need more ethical guardrails and frameworks to help us make sense of this new phenomenon, and that we need much more [well-thought-out] regulation and responsible design. We also need protective mechanisms for users of these technologies because at the moment there is no such thing, which might be surprising and, at the same time, super alarming. And we need collaboration—because there is no such thing as one expert in digital immortality, [one] person who can thoroughly address all the issues and dilemmas and questions. We need shared expertise, or collective expertise, to better grasp all the challenges we are facing at the moment.
Feltman: Yeah, obviously this sounds like a really complex issue, but what would you say are some of the biggest and most pressing ethical concerns around this that we need to figure out?
Nowaczyk-Basińska: The list is pretty long, but I would say the most pressing issue is the question of consent. When you create a postmortem avatar for yourself—so you are the data donor—the situation seems pretty straightforward because if you do this, we can assume that you explicitly consent to the use of your personal data. But what about the situation where a third party is engaged? What if I would like to create a postmortem avatar of my mother? Do I have the right to share my private correspondence with her with a commercial company and let the company use and reuse this material?
And another variation on the question of consent is something we call the “principle of mutual consent.” We use this term in an article I co-authored with my colleague from CFI, Dr. Tomasz Hollanek. We introduced this idea because I think we quite often lose sight of the fact that when we create a postmortem avatar, it’s not only about us …
Feltman: Hmm.
Nowaczyk-Basińska: Because we are creating it for specific users, the intended users of this technology, which is often our family and friends, and the thing is that they may not be ready to use it, and they may not be so enthusiastic about it. For some people it can definitely bring comfort, but for others it can be an additional emotional burden. That’s why we think we should create a situation where the different engaged parties consent to being exposed to these technologies in the first place, so they can decide whether they want to use these technologies in the long or short term.
The other thing: data-profit exploitation. Digital immortality is part of commercial markets. We have the term “digital afterlife industry,” which I think speaks volumes about where we are. Ten years ago it was a niche—a niche that has evolved into a fully fledged industry: the digital afterlife industry.
Our postmortem relationships are definitely monetized, and we can imagine situations where commercial companies go even further and use these platforms, for example, to sell us products. These griefbots can be a very sneaky product-placement space. So data-profit exploitation—but also, I think we should bear in mind that there are particularly vulnerable groups of potential users who, in my opinion, shouldn’t be exposed to these technologies at all, like children, for example.
Feltman: Hmm.
Nowaczyk-Basińska: We don’t have the empirical research that could help us understand how these technologies influence the grieving process, but I think that in this particular case we should act preemptively and protect the most vulnerable, because I don’t think children are ready to cope with grief or to go through the grieving process accompanied by AI …
Feltman: Hmm.
Nowaczyk-Basińska: and a griefbot of, I don’t know, their parents. It may be devastating and really hard to cope with.
Feltman: Yeah, absolutely. We’ve talked about the obvious ethical concerns. Do we know anything or do you have any personal thoughts about whether there could be benefits to technologies like these?
Nowaczyk-Basińska: I think they could serve as a form of interactive archive. It’s very risky to use them in the grieving process, but when we put them in a different context, as a source of knowledge, I think that’s a potential …
Feltman: Mm.
Nowaczyk-Basińska: Positive use of this technology: so that we can learn from some scientists that were immortalized through this technology.
Feltman: Sure, and maybe even in personal use, less like, “Oh, this is my grandmother who I can now have personal conversations with while grieving,” and more like, “Oh, you can go ask your great-grandmother about her childhood in more of a, like, family history kind of way.” Does that make sense?
Nowaczyk-Basińska: Yes, absolutely, absolutely. So to shift the emphasis and not necessarily focus on the grieving process, which is a very risky thing, but rather try to build archives …
Feltman: Mm.
Nowaczyk-Basińska: And new sources of knowledge, accessible knowledge.
Feltman: Yeah, very cool. What do you think is important for consumers to keep in mind if they’re considering engaging with griefbots or deadbots?
Nowaczyk-Basińska: First of all, that it’s not a universal remedy. It works for some people, but it doesn’t necessarily work the same way for me because I’m a different person; I go through the grieving process entirely differently. So definitely, that’s a very personal thing—grief is a very personal and intimate experience—so we should keep in mind that it’s not for everyone.
Second, that these technologies, [laughs] it’s only technology. It’s not your deceased loved one on the other side. It’s a very sophisticated technology that impersonates this person. And also, this technology can be addictive—I mean that it is designed in a way to keep you engaged, and you can be quite easily manipulated. So I think commercial companies should ensure, through disclaimers, for example, that users are aware they are interacting with technology. But at the same time we see very conflicting interests here because what a commercial company wants is to engage us and, like, keep us in this [relationship].
Feltman: Thank you so much for coming on to talk through this with us. I’m really looking forward to seeing your future research on it.
Nowaczyk-Basińska: Thank you so much for the invitation. It was a pleasure.
Feltman: That’s all for today’s episode. We’ll be back on Friday to talk about why the world needs to start paying more attention to fungi.
Science Quickly is produced by me, Rachel Feltman, along with Fonda Mwangi, Kelso Harper, Naeem Amarsy and Jeff DelViscio. This episode was edited by Alex Sugiura. Shayna Posses and Aaron Shattuck fact-check our show. Our theme music was composed by Dominic Smith. Subscribe to Scientific American for more up-to-date and in-depth science news.
For Scientific American, this is Rachel Feltman. See you next time!