Technological advances in most markets seem to be arriving at an exponential rate. The fact of the matter is that as the processing power of computers grows, they can solve more complex tasks in shorter amounts of time. So, in some sense, yes, we are moving along at exponential speed.
However, the latest ‘craze’ in which many tech companies are investing is artificial intelligence (AI) – developing a computer ‘brain’ which can think and learn for itself.
There are already ‘dumb’ AI systems on the market, with Google Assistant and Amazon Echo being commercially available examples, so to speak. However, in terms of anything genuinely clever, we are still quite some time away.
Be that as it may, Tesla’s CEO, Elon Musk, recently voiced his opinion on artificial intelligence, stating that he is worried about the lack of regulation around AI development and fears we might create something genuinely dangerous. He states:
“I think we should be really concerned about AI and I think we should… AI’s a rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late.
“Normally the way regulations are set up is that a whole bunch of bad things happen, there’s a public outcry, and then after many years, a regulatory agency is set up to regulate that industry. There’s a bunch of opposition from companies who don’t like being told what to do by regulators. It takes forever.
“That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilisation. AI is a fundamental risk to the existence of human civilisation, in a way that car accidents, airplane crashes, faulty drugs or bad food were not. They were harmful to a set of individuals in society, but they were not harmful to society as a whole.
“AI is a fundamental existential risk for human civilisation, and I don’t think people fully appreciate that.”
Should we be worried?
What Elon Musk had to say about AI is extremely interesting. The best comparison is with two very well-known films in which AI threatened to end civilisation: the Terminator series and Avengers: Age of Ultron.
It goes without saying that if we are reactive with AI rather than proactive, there is the possibility that humankind could produce an AI so clever it could turn into ‘Skynet’ or some sort of ‘Ultron’ virus. That said, what are the actual chances of this happening?
Musk’s voice is definitely one that many people respect, but it does need to be taken with a pinch of salt. For example, AI testing, at the moment, will of course be done in very controlled environments, even for the ‘stupid’ AI systems currently being developed. I don’t think there would ever be a situation where a human could produce something that does not have a ‘kill switch’.
The problem is that AI will learn. AI will understand there is a kill switch and disable it. It has its own mind – its own consciousness. When a real AI system is created, we may not be able to control what it thinks, and I believe this is exactly what Elon Musk is worried about. It might not be a case of ‘if’ AI will become a concern for the human race but, without regulation, ‘when’.