Rise of the Mindless Machines
Matthias Plaue, Chief Data Scientist
November 21, 2018
When people discuss the future of artificial intelligence, a commonly voiced concern is the emergence of an adversarial superintelligence that might spell the end of humankind as we know it. Indeed, we should already be thinking about precautions so that we can coexist safely with AI once it reaches human-level intelligence, and beyond.
However, a more immediate threat does not get the spotlight it deserves: our lives being controlled not by a superintelligent artificial general intelligence, but by mindless agents that merely mimic intelligence and perform tasks they are not adequately equipped for.
There are problems where we do not […] readily grasp the long-term consequences for human society if we were to implement inadequate AI to make those decisions for us on a large scale.
Examples of such decisions are:
– Recruiting: Which applicant is the best qualified for the job?
– Dating: Who is that special person that will make me happy?
– Crime: Who is likely to commit crime in the future?
– Banking: Which customers will get that loan?
In each example, the algorithm may introduce bias with respect to attributes that it would be ethically questionable to discriminate against, such as ethnicity, age, or gender. Frighteningly, these are all decisions that have great impact on many individuals’ fates and futures, and the way we make these decisions will shape society as a whole.
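To make such bias concrete, here is a minimal, hypothetical audit of a screening model’s decisions against one fairness criterion, demographic parity. All data, group labels, and the 0.8 threshold (the “four-fifths” rule of thumb) are illustrative assumptions, not part of this article:

```python
# Toy audit for demographic parity in a hiring model's decisions.
# All data and thresholds below are illustrative assumptions.

def selection_rate(decisions, groups, target):
    """Fraction of applicants in `target` group receiving a positive decision."""
    members = [d for d, g in zip(decisions, groups) if g == target]
    return sum(members) / len(members)

# Hypothetical screening outcomes: 1 = invited to interview, 0 = rejected.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
groups    = ["a", "a", "a", "a", "a", "a", "b", "b", "b", "b", "b", "b"]

rate_a = selection_rate(decisions, groups, "a")  # 4 of 6 selected
rate_b = selection_rate(decisions, groups, "b")  # 1 of 6 selected

# A common (and contested) rule of thumb: flag the model if the ratio of
# selection rates falls below 0.8 (the "four-fifths rule").
flagged = rate_b / rate_a < 0.8
print(f"selection rates: a={rate_a:.2f}, b={rate_b:.2f}, flagged={flagged}")
```

Such a check only detects disparate outcomes; it says nothing about why they arise, which is exactly why auditing these systems requires more than a single metric.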
We might have to deal with a different dystopia much earlier than a rogue artificial superintelligence turning humankind into a large puddle of computronium: a swarm of mindless machines will control the social status and well-being of billions of individuals by deciding who will find work, a mate, or a friend, who will receive a loan, and who will go to prison. They will, of course, also influence which governments we vote for, and decide over life and death on battlefields around the world.
They will do so without scrutiny or any sense of consequence, as each of them has less cognitive faculty than a fruit fly. They will be everywhere, busily communicating with each other without learning anything new, merely perpetuating old biases and falsehoods in countless feedback loops. They lack any intrinsic motivation and are unable to truly understand or explain their environment or their actions. Some machines will pretend that they are not the philosophical zombies they in fact are, disguising themselves as your “teacher”, “friend”, “co-worker”, or “mate”. Other machines will hide from us, and we will not even know that they are there, pulling our strings from some dark, forgotten corner of the ubiquitous computational infrastructure.
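The feedback-loop dynamic can be sketched with a toy simulation: a lending “model” that updates its beliefs only from the loans it actually approves never gathers the evidence that would correct an inherited bias. All numbers, pseudo-counts, and the approval rule are illustrative assumptions:

```python
import random

random.seed(0)

# Toy feedback loop: the "model" estimates each group's repayment rate only
# from loans it approved, and approves only when its estimate clears a fixed
# bar. An inherited bias against group "b" never corrects itself, because
# rejected applicants generate no data. All numbers are illustrative.

TRUE_REPAY = {"a": 0.9, "b": 0.9}          # both groups equally creditworthy
belief = {"a": 0.9, "b": 0.5}              # inherited (biased) prior about "b"
counts = {"a": [90, 100], "b": [1, 2]}     # [repaid, approved] pseudo-counts

for _ in range(10_000):
    g = random.choice(["a", "b"])
    if belief[g] >= 0.8:                   # approve only above a fixed bar
        repaid = random.random() < TRUE_REPAY[g]
        counts[g][0] += repaid
        counts[g][1] += 1
        belief[g] = counts[g][0] / counts[g][1]
    # Rejected applicants produce no outcome, so the belief never updates.

print(belief)  # "b" stays stuck at its biased prior; "a" tracks reality
```

The loop never “learns” that group b is just as creditworthy, because its own decisions censor the data it sees; this is the sense in which mindless agents perpetuate old bias without ever acquiring new knowledge.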
What can we do to prevent such a bleak future? Of course, everyone has to do their part, and I conclude with some suggestions:
– In my opinion, causal inference and reinforcement learning will be key concepts for advancing machine learning research. Understanding causal relationships and interacting with the environment are prerequisites for developing human-level intelligence.
– In the meantime, we must not leave the regulation and auditing of intelligent algorithms to the major AI companies alone. Data protection laws need to enforce policies that prevent algorithmic bias globally.
– Data scientists and machine learning engineers need to agree on ethical guidelines and best practices for building sensitive AI applications.
Finally, as artificial intelligence becomes more and more a part of everyday life, the safety, benefits, and risks of the technology need to become part of the everyday political and public discourse just as naturally as, for example, road traffic safety.