The growing prominence of AI and machine learning is posing ever more serious moral and logistical conundrums for humanity. This scenario was long predicted by the heavyweights of science-fiction writing, both in literature and cinema, and today it has become a reality.
Turning against humanity
In this environment, the prospect of machines manufactured by humanity turning against it has become a genuine concern. This need not occur through some unanticipated self-awareness, as in The Terminator series; such a scenario is many, many years away, if it is remotely plausible at all.
But regulation is needed to safeguard human life and well-being as AI and robots become a more ingrained part of everyday society.
Perhaps the most worrying aspect of modern culture is the mass deployment of algorithms. Relatively few people give this technology much thought, yet it already exerts considerable influence over our daily lives. Algorithms already shape what we read, what advertising we encounter, what music we listen to, and much else. With AI medical diagnosis, self-driving cars, and machine writing all on the horizon, or here already, it is clear that this aspect of our lives must be as transparent as possible, and carefully regulated.
There are also practical concerns about the economic and labor consequences of advanced AI and robotics. Some advocates and economists have suggested that jobs replaced by technology will be regenerated elsewhere, but this seems rather fanciful. Once robots can do physical human work to the same standard, it is extremely difficult to believe that anything could compensate for the vast number of manufacturing jobs that would be under threat. And that is just one facet of the workforce threatened by AI and robotics.
The implementation of AI and machine learning on the battlefield must also be a major cause for concern. There have already been incidents of malfunctioning AI, not least when a self-driving car killed a pedestrian in Arizona, but AI in a war context obviously poses far greater moral questions. Strict international law is surely needed to legislate against autonomous weapon systems and potentially lethal robots. Yet even now, AI and autonomous processes are heavily utilized in weapons systems, which is undoubtedly a legal and moral gray area.
If we are going to ensure a healthy relationship between human beings and machines in this environment, then certain principles must be upheld legally:
- AI should be subjected to stringent testing, in order to ensure that it is safe and reliable;
- AI systems must ensure digital, physical and political security, especially with regard to privacy;
- Algorithms should be completely transparent;
- Public engagement and the exercise of individual rights should be guaranteed in all aspects of the AI, machine learning and robotics industries;
- Tech companies should be required to provide customers with detailed and accurate information regarding the purpose, function, limitations and impact of AI, machine learning and robotics systems;
- And, finally, humans should always be in charge. AI can facilitate human decision-making, but should never replace it.
Only by putting such critical safeguards in place can we hope to ensure that the AI revolution does not spiral out of control.