Sinead Butler
Dec 20, 2023
Euronews Business / VideoElephant
A "death bot" has been developed which can allegedly predict with high accuracy when people will die.
Scientists at the Technical University of Denmark (DTU) created an artificial intelligence (AI) model called life2vec, comparable to ChatGPT, which bases its predictions on personal data such as health, education, occupation and income.
The model was trained on personal data from Denmark's population to improve its accuracy.
Analysing health and labour market data collected from 6 million people between 2008 and 2020, the death bot achieved an accuracy rate of 79 per cent.
After recognising patterns within that data, the AI was also able to predict other outcomes, such as personality traits and time of death, with high accuracy.
According to first author Sune Lehmann, the death bot meticulously examines "human life as a long sequence of events similar to how a sentence in a language consists of a series of words."
To put this to the test, the researchers gathered data on a group of people aged 35 to 60, half of whom had died between 2016 and 2020, and asked the death bot to predict who had died and who was still alive.
Results show that the death bot was 11 per cent more accurate than any other AI model, and also more accurate than the existing models currently used by life insurance companies.
"This is usually the type of task for which transformer models in AI are used, but in our experiments, we use them to analyze what we call life sequences, i.e. events that have happened in human life,” Dr.life eventslife event Lehmann explained.
He added that the death bot was used to address the fundamental question: "To what extent can we predict events in your future based on conditions and events in your past?"
“Scientifically, what is exciting for us is not so much the prediction itself, but the aspects of data that enable the model to provide such precise answers."
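The approach Dr. Lehmann describes, reading a life as a "sentence" of event tokens, can be illustrated with a short sketch. The snippet below is a minimal, hypothetical Python/PyTorch example, not the researchers' life2vec code: the event vocabulary, model dimensions and single survival output are all illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical vocabulary of life-event "tokens" (health, education, jobs, income)
EVENT_VOCAB = {"<pad>": 0, "diagnosis:asthma": 1, "education:degree": 2,
               "job:teacher": 3, "income:quartile_3": 4, "job:manager": 5}

class LifeSequenceModel(nn.Module):
    """Toy transformer that reads a life as a sequence of event tokens."""
    def __init__(self, vocab_size, d_model=64, nhead=4, num_layers=2, max_len=128):
        super().__init__()
        self.event_embedding = nn.Embedding(vocab_size, d_model, padding_idx=0)
        self.position_embedding = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.survival_head = nn.Linear(d_model, 1)  # illustrative yes/no output

    def forward(self, event_ids):
        positions = torch.arange(event_ids.size(1), device=event_ids.device)
        x = self.event_embedding(event_ids) + self.position_embedding(positions)
        h = self.encoder(x)          # contextualise each event against the others
        pooled = h.mean(dim=1)       # summarise the whole life sequence
        return torch.sigmoid(self.survival_head(pooled))  # P(survives the window)

# One synthetic life: degree -> teaching job -> income bracket -> diagnosis
life = torch.tensor([[2, 3, 4, 1]])
model = LifeSequenceModel(vocab_size=len(EVENT_VOCAB))
print(model(life))  # untrained, so the value is meaningless; the shape is the point
```

In the real study the model was trained on registry data covering millions of people; this sketch only shows the sequence-of-events framing that Dr. Lehmann compares to words in a sentence.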
Ethical concerns have been raised over life2vec, including how sensitive personal data is protected and how bias in the underlying data could affect its predictions.
"We stress that our work is an exploration of what is possible but should only be used in real-world applications under regulations that protect the rights of individuals," the researchers said on this matter.
Meanwhile, scientists have also warned of the ethical problems that would arise if life insurance firms were to use this model.
“Clearly, our model should not be used by an insurance company, because the whole idea of insurance is that, by sharing the lack of knowledge of who is going to be the unlucky person struck by some incident, or death, or losing your backpack, we can kind of share this burden,” Dr. Lehmann told New Scientist.
This study was published in the journal Nature Computational Science.