Interacting with the dead is no longer science fiction. Digital technologies have evolved to the point that they can be, and are being, used to emulate the dead. Over the last five years, media reports have recounted the efforts of people who have developed, with varying degrees of success, chatbots based on a deceased person’s digital data: ghostbots.
To name a few notable examples: in Washington State, data scientist Muhammad Aurangzeb Ahmad developed a ghostbot out of his deceased father’s data, so his children “could talk to their grandfather.” Elsewhere in the country, in California, James Vlahos developed a ghostbot based on his deceased father’s voice notes, messages and pictures, a “dadbot,” as he calls it. Vlahos then went a step further and founded his own company, “HereAfter AI,” which would allow users to “have conversations with virtual representations of loved ones,” so that everybody could have a ghostbot. By December 2021, HereAfter AI had launched a YouTube video advertising this new product, with plans starting at USD $39 a year.
That said, many readers might recall the Black Mirror episode “Be Right Back,” in which a grieving woman attempts to bring her deceased partner back from the dead thanks to an online service that data-mines the deceased’s digital information, such as messages and social media profiles. Edwards and Harbinja have conducted a comprehensive legal analysis of what happens to our digital data after we die and, more recently and drawing on the hypothetical scenario that “Be Right Back” sets forth, of the complexities of digital avatars of the deceased in terms of legal regimes, applicable norms and the possible rights that would come into question.
They analyzed different legal issues from American and EU law perspectives, such as access, management and ownership of digital remains (that is, pictures, videos, messages, profiles, locations, and so forth) and who may data-mine that information to create a digital avatar (ghostbot) of someone who has died. They also elaborated on personality rights and their possible implications for the dead, such as the use of the name, voice and likeness of the deceased.
However, the recent move by HereAfter AI (which was preceded by Eternime) may challenge the legal analysis introduced by the existing literature on ghostbots. This is because, as opposed to the “Be Right Back” ghostbot, HereAfter AI’s products are based on consent:
The subject signs an agreement with HereAfter AI prior to their death, as opposed to having their information data-mined by a third-party company without any specific consent for that purpose; and
The subject records and uploads their own photos, chats and voice notes to the HereAfter AI system, and this material is used to develop the ghostbot. That is, HereAfter AI is not necessarily collecting information from other platforms, such as Facebook/Meta or Google.
The consent given by the subject for the explicit purpose of developing their ghostbot may settle certain legal questions of privacy law, contract law (in terms of access and management of digital data) and personality rights. However, two issues still need to be discussed: commodification and responsibility.
It is still unclear whether the ghostbot can be altered to advertise certain products to the people it interacts with. For example, if the ghostbot is based on a deceased person who was a smoker or a coffee drinker, can it be used to disseminate advertisements for a particular brand of cigarettes or coffee? And what would happen if it were used to market products unrelated to the deceased’s traits and tastes? The law is not clear on this point, and internal ethics codes may seem to be the only feasible solution.
HereAfter AI’s CEO says the company is not planning to monetize the ghostbots or the information they store through data mining and marketing. The truth is that this is only a set of good intentions, and it is well documented that the 21st century has brought a new economic logic in which the market will not hesitate to use any means to modify our behavior. As Shoshana Zuboff explains, every piece of information is “datafied, abstracted, aggregated, analyzed, packaged, sold, further analyzed and sold again… [to produce] rewards and punishments aimed at modifying and commoditizing behavior for profit.”
In that regard, we need to reflect on what the role of law will be in framing or preventing abuse, especially when the traditional theory of human rights privileges the living over the dead. Digital technologies seem to have changed so many parts of the equation that the living and the dead are merging in a context never seen before. Information is now produced, recorded, copied, shared, identified and sold as never before. Existing legal categories and institutions may be insufficient to challenge abuses in the commodification of personality traits through ghostbots, but there must be some opportunity for the law to evolve and take into consideration the limits of the market and the implications of post-mortem privacy.
The other element that needs particular analysis is the issue of harm and responsibility. What happens when the ghostbot discloses information to the living that may be harmful, things they possibly did not want to know: for example, that the deceased had been thinking of filing for divorce, or that the survivor was adopted? And what would happen if the ghostbot, rather than alleviating grief, caused dependence and frustration, preventing the living from “moving on”? Who, if anyone, should be held responsible? Again, this is an issue where Artificial Intelligence (AI) law and contract law should come into play, but the answer does not seem straightforward.
The launch of HereAfter AI is disruptive to the extent that it brings to a wider audience the opportunity to buy a ticket for a digital afterlife, that is, to become a ghostbot after physical death. The company was founded by someone who interacts with a ghostbot himself and who now sees an opportunity to bring that experience to market and profit from it. This new type of business may further soften the frontier between the dead and the living, a division that has increasingly blurred over the past two decades with the advent of social media platforms. It is also an opportunity for legal reflection and analysis, since it may indicate that it is time for the law to evolve and cope with this new reality, our reality.
Mauricio Figueroa is a Ph.D. Candidate at Newcastle University, Law School, United Kingdom. He holds an LL.M. from the Buchmann Faculty of Law, Tel Aviv University. He tweets as @mfiguerres_
Suggested citation: Mauricio Figueroa, Ghostbots, the Quest for Digital Immortality & the Law, JURIST – Academic Commentary, January 18, 2022, https://www.jurist.org/commentary/2022/01/Mauricio-Figueroa-ghostbots-digital-immortality-law/.
This article was prepared for publication by Esther Chihaavi, a JURIST staff editor. Please direct any questions or comments to her at commentary@jurist.org