Godfather of AI Says Killer Robots Not Sci-Fi Anymore

Nobel laureate Geoffrey Hinton contends governments worldwide do not want AI’s military use to be regulated.

Mon Dec 23 2024

ISLAMABAD: The rise of artificially intelligent machines is shaping much of the discourse on the future of the human race.

However, Nobel laureate and “Godfather of AI” Geoffrey Hinton’s take that killer robots are not science fiction anymore has shocked many.

In a recent interview with Bloomberg’s Emily Chang, the AI pioneer raised the alarm on the perils of the military use of the latest tech.

“It’s not far-fetched, it’s coming very soon,” he warned, explaining the dangers posed by killer robots operating without human control.

The interview also turned on the question: will a synthetic species with human-like intelligence improve the human experience, or end it?

His answers, however, were chilling.

“Look at the EU, they talk about regulating this and that but there is no specific clause that seeks to regulate AI’s military use,” he said, adding that “they don’t want to regulate themselves.”

He said governments worldwide that sell arms don’t want AI-powered killer robots to be regulated.


According to Bloomberg, autonomous weapons systems are already proliferating on the battlefields of Ukraine and Gaza. Using a combination of big data and drones, they help military planners select targets.

Some loitering munitions can be programmed to strike without a human pulling the trigger.

That trend toward outsourcing life-and-death decisions to machines has opened a Pandora’s box of ethical, legal and technological questions.
