The Experts' Concerns About Our Hi-Tech Future


What happens if a hacker takes control of a drone? What if an algorithm fails? Who can guarantee that artificial intelligence is not "corruptible"? Here is what the experts are worried about.

We talk more and more about artificial intelligence. But human rights experts are worried. "AI has many positive applications," reads a Cambridge University report, "but it is a dual-use technology, and researchers should be aware of the potential for abuse."

Whether we are dealing with algorithms, drones or robots, in the future we will increasingly interact with autonomous, "intelligent" systems that influence almost every aspect of our lives, economically, socially and personally. It seems inevitable: new technologies keep advancing and are becoming, in some ways, ever more accessible.


But this also involves risks: as technology becomes more invasive, more and more observers are asking what impact it could have on human rights, from the right to life to privacy, from freedom of expression to social and economic rights. Hence the question: how can we defend human rights in a technological landscape increasingly shaped by robotics and artificial intelligence (AI)?

Killer Drones

Scholars such as Christof Heyns, professor of human rights at the University of Pretoria and UN special rapporteur, have repeatedly expressed concern about legislation, under study in some countries, that would allow military drones to decide on their own when to strike a target, without remote human control. This would make them far faster and more effective weapons of war. "But, for example," asks Heyns, "will a computer be able to make the value judgment that a group of people in simple robes carrying rifles are not enemy fighters but hunters, or soldiers who are surrendering?"

Groups such as the International Committee for Robot Arms Control (ICRAC) have also recently expressed concern, in a public letter, over Google's participation in Project Maven, a military program that uses machine learning to analyze drone surveillance footage: the information obtained could, in theory, be used to kill people. ICRAC appealed to Google to ensure that the data collected from its users is never used for military purposes, joining the protests of Google's own employees over the company's involvement in the project. Google has since announced that it will not renew the contract.

Racist and Sexist Algorithms

While the recent controversy surrounding Cambridge Analytica's harvesting of personal data through social media platforms such as Facebook continues to raise fears of electoral manipulation and interference, data analysts are also warning about discriminatory practices associated with what some call artificial intelligence's "white guy problem": AI systems currently in use are trained on existing data that replicates racial and gender stereotypes, perpetuating discriminatory practices in areas such as security, judicial decisions and hiring.
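To see the mechanism, here is a minimal, hypothetical sketch in Python; the data, feature names and numbers are all invented, not drawn from any real system. A model trained on biased historical decisions reproduces the disparity through a proxy feature, even though the protected attribute is never given to it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, size=n)          # protected attribute: 0 or 1
skill = rng.normal(0, 1, size=n)            # identically distributed in both groups
proxy = group + rng.normal(0, 0.3, size=n)  # e.g. a zip-code-like proxy for group

# Biased historical labels: past decisions favored group 1.
past_hire = (skill + 1.0 * group + rng.normal(0, 0.5, size=n)) > 0.5

# Train a logistic regression on (skill, proxy) only; the protected
# attribute itself is deliberately excluded, as is common in practice.
X = np.column_stack([skill, proxy, np.ones(n)])
y = past_hire.astype(float)
w = np.zeros(3)
for _ in range(2000):                        # plain gradient descent
    p = 1 / (1 + np.exp(-np.clip(X @ w, -30, 30)))
    w -= 0.1 * X.T @ (p - y) / n

pred = X @ w > 0
print("predicted hire rate, group 0:", pred[group == 0].mean())
print("predicted hire rate, group 1:", pred[group == 1].mean())
# The proxy feature lets the model reproduce the historical disparity,
# even though it never saw the protected attribute directly.
```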

Harvard researcher Cathy O'Neil, in her book Weapons of Math Destruction (Mondadori), lines up a long series of cases in which mathematical algorithms turn out not to be so objective, but to show preferences: from the algorithms used in high finance to those that estimate the probability that an individual will fall into criminal behavior, O'Neil argues that, all else being equal, algorithms favor white men. And she is not the only one who thinks so.

According to psychologist Thomas Hills, one of the world's leading scholars of the relationship between psychology and big data, algorithms can even suffer from something resembling real mental disorders. He makes this case in an essay published by Aeon, where he argues that it happens because of the way they are built. For example, they can forget old things as they learn new information, suffering from so-called "catastrophic forgetting", in which the model abruptly loses what it had previously learned.
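As a toy illustration of catastrophic forgetting (a made-up example, not from Hills's essay): a simple numpy model is trained on one task and then fine-tuned on a conflicting one, after which its accuracy on the first task collapses.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_task(center0, center1, n=500):
    """Two Gaussian blobs labeled 0 and 1."""
    X = np.vstack([rng.normal(center0, 0.5, (n, 2)),
                   rng.normal(center1, 0.5, (n, 2))])
    y = np.r_[np.zeros(n), np.ones(n)]
    return X, y

def train(w, X, y, steps=3000, lr=0.1):
    Xb = np.column_stack([X, np.ones(len(X))])   # append a bias column
    for _ in range(steps):
        p = 1 / (1 + np.exp(-np.clip(Xb @ w, -30, 30)))
        w = w - lr * Xb.T @ (p - y) / len(y)     # logistic-loss gradient
    return w

def accuracy(w, X, y):
    Xb = np.column_stack([X, np.ones(len(X))])
    return (((Xb @ w) > 0) == y).mean()

# Task A and task B have opposite decision boundaries.
XA, yA = make_task([-2, 0], [2, 0])
XB, yB = make_task([2, 0], [-2, 0])

w = train(np.zeros(3), XA, yA)
print("task A accuracy after learning A:", accuracy(w, XA, yA))  # ~1.0

w = train(w, XB, yB)   # keep learning, but only on task B
print("task A accuracy after learning B:", accuracy(w, XA, yA))  # collapses
print("task B accuracy after learning B:", accuracy(w, XB, yB))  # ~1.0
```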

Digital Nightmares

The potential threat that new technologies pose to human rights and to physical, political and digital security was highlighted by the University of Cambridge in a study on the malicious use of artificial intelligence: 26 experts in emerging-technology security published a report on the use of AI by "rogue states", criminals and terrorists. Their concern is that the growth of cybercrime over the next decade may be unstoppable, and that an ever-increasing use of "bots" could end up manipulating everything from elections to the news to social media.

All this, according to the researchers, demands the attention and intervention of policymakers: "AI has many positive applications," the report reads, "but it is a dual-use technology, and artificial intelligence researchers and engineers should be aware of the potential for abuse."

The authors come from organizations such as the Future of Humanity Institute at Oxford University, the Centre for the Study of Existential Risk at the University of Cambridge, the non-profit artificial intelligence research company OpenAI, the Electronic Frontier Foundation, and others. The 100-page report identifies three security domains as particularly relevant: digital, physical and political security.

Among the dangers we could face in the future are new kinds of cyber-attack: automated hacking, spam emails targeted with precision using information harvested from our social networks, and attacks that exploit the vulnerabilities of artificial intelligence systems themselves. Commercial drones could be turned into weapons, while in politics public opinion could be manipulated with targeted propaganda and fake news, reaching levels of effectiveness that were until now unimaginable. And that is not all.
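One such AI-specific vulnerability, the "adversarial example", can be sketched in a few lines of Python. The toy spam filter below is entirely invented (it is not from the Cambridge report); the point is only that, for a linear model, a small targeted perturbation of the input, here along the fast-gradient-sign direction, is enough to flip its decision.

```python
import numpy as np

# Pretend this is a deployed linear spam filter: score = w . x + b.
w = np.array([0.9, -1.3, 2.1, 0.4])   # learned weights (assumed, for illustration)
b = -0.5

def classify(x):
    return "spam" if w @ x + b > 0 else "ham"

x = np.array([1.0, 0.2, 1.0, 0.5])    # feature vector of a spam message
print(classify(x))                     # -> "spam"

# Attack: nudge each feature against the score's gradient. For a linear
# model, the fast gradient sign method reduces to a step along -sign(w).
eps = 0.8
x_adv = x - eps * np.sign(w)

print(np.abs(x_adv - x).max())         # perturbation is bounded by eps
print(classify(x_adv))                 # -> "ham": the filter is evaded
```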

A Dystopian Scenario?

Not according to Seán Ó hÉigeartaigh, executive director of the Centre for the Study of Existential Risk at the University of Cambridge, who declared: "Artificial intelligence is a game changer, and this report has imagined what the world could look like in the next five to ten years. We live in a world that could become fraught with day-to-day hazards from the misuse of AI, and we need to take ownership of the problems – because the risks are real. There are choices that we need to make now."
