There’s been much chatter lately about Artificial Intelligence, or AI for short. Many people are for AI, while others warn us to beware of it. For quite a few people, mentioning AI brings to mind movies such as Terminator and I, Robot, where the AI is basically out to destroy humans.
I’m never one to hold back innovation. My philosophy is that if we don’t innovate, we devolve. But, truthfully, AI does scare me a bit.
Elon Musk posted on Twitter the other day that things posing a danger to the public, such as cars, planes, food, and drugs, are all regulated, and that AI should be regulated too. I agree with this.
There is good and bad in anything. Electricity powers everything in our lives, but it can also kill, so do we stop using it? No. Car accidents kill approximately 1.3 million people per year, but do we stop driving? No. Fire burns and kills, but we use it daily, from cooking to the combustion engines in our cars.
We should not fear what we don’t know. But I do urge caution as we develop AI.
Many people think we’re already using AI with Siri, but this isn’t true. Siri isn’t even close. It’s a complex application that tries to respond to your inquiries or requests, but in my opinion it’s not even a baby AI. I’ve gotten some very poor and inaccurate responses from Siri, but I won’t get into that here. My point is that Siri is essentially a verbal search engine right now, nowhere near a true artificial intelligence.
Many of the world’s tech leaders, such as Elon Musk, Bill Gates, and Steve Wozniak, have warned against developing AI. Stephen Hawking himself said that it could mean the end of the human race.
But here’s my take, and it’s a somewhat backwards argument for why we need to do this anyway. Think about nuclear power and bombs. A nuclear bomb can kill millions of people in one blast. Yet if we did not have our own nuclear bombs as a deterrent, terrorists and other organizations with ulterior motives could be running the show across the globe.
What I’m trying to say is that artificial intelligence will be developed no matter what, and more than likely there are those developing it for their own purposes, not necessarily for good. So we, too, need to move forward with developing AI, some of which will be needed to protect our connected infrastructure.
There’s a saying I’ve heard over the years: law enforcement and cyber security are always a step behind the bad guys. The bad guys keep finding new ways to break into things, and then the good guys try to develop something to block them from doing it again. Do we really want the bad guys wreaking havoc with an AI first?
Computer viruses and worms can already create havoc across the globe, since everything is connected now. Imagine an AI behind a new cyber threat. We would need our own AI to stop it.
In the end, there’s good and bad in every new invention. We just have to develop and use them responsibly.