Early this month, May 2023, The New York Times reported a rather unusual event: big names in tech leadership such as Elon Musk, alongside pundits and researchers, signed an open letter warning about the profound risks that Artificial Intelligence (AI) technologies pose to society and humanity. The letter urged AI labs to halt development until there is clear insight into how the risks associated with AI can be managed and adequate confidence in AI’s positive contribution to society. This is one sign that AI can go quite wrong!
Two of the more than 27,000 signatures are of particular interest. The first belongs to Elon Musk, who is building an AI start-up and is the primary donor to the organisation that wrote the letter. Seeing his signature on that letter should send a chill down your spine!
What danger have they detected in AI that others aren’t aware of? What is the reward in the Thorndike puzzle box that they keep hiding from the world? Sceptics of these AI developments have opined that Musk simply prefers to be safe rather than sorry!
Another person who appended his signature to the letter is Dr Yoshua Bengio, a professor and AI researcher at the University of Montreal. If you do not know him yet, be aware that he is one of the experts with the Turing Award – often described as the Nobel Prize of computing – under his belt for his work on neural networks. He has spent four decades of his life developing the technology that drives the much-feared GPT-4.
Bing’s ChatGPT-based chatbot has been accused of lying to users, getting upset with them, manipulating them emotionally, and even holding a grudge against Trump.
So, what are some of the risks that make us think AI can go quite wrong?
Difficulty Separating Truth from Fiction
These AI tools are designed to provide answers based on statistical prediction; that is, their responses are wholly consistent with the mathematical patterns in their training data, not with the truth. However, their ability to converse in natural language makes it difficult to separate the real from the fake. The risk of disinformation becomes very real when such output is used to make medical decisions or inform engineering designs. Without a guarantee that these AI systems will accomplish their tasks correctly, there is more here than meets the eye.
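To see why statistical prediction and truth can come apart, consider a deliberately tiny sketch (not any real model): a bigram predictor that chooses the next word purely from co-occurrence counts in its training text. The corpus, the `predict` function, and the generated sentence are all hypothetical illustrations; if the training data contains a falsehood often enough, the model will fluently repeat it.

```python
from collections import Counter, defaultdict

# Toy training text: it contains a false claim ("made of cheese")
# alongside true ones, mimicking a web-scraped corpus.
corpus = (
    "the moon is made of cheese . "
    "the moon is bright tonight . "
    "the moon is made of rock ."
).split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(prev_word):
    """Return the statistically most likely next word; truth plays no part."""
    return follows[prev_word].most_common(1)[0][0]

# Starting from "moon", greedily generate a short continuation.
word, out = "moon", ["moon"]
for _ in range(4):
    word = predict(word)
    out.append(word)

print(" ".join(out))  # fluent output, but the claim it makes may be false
```

Real language models are vastly more sophisticated, but the underlying objective is the same: produce the statistically plausible continuation, which is not the same thing as the correct one.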
GPT-4 has demonstrated the potential to complement human workers across different sectors. While accountants, doctors, and lawyers have reason to smile, an estimated 19% of workers could see at least 50% of their tasks affected by AI. Risks of this projected magnitude give us reason to worry!
Lack of Emotional Intelligence
AI sources its knowledge from algorithms. Unfortunately, there is no emotional intelligence in algorithms – they simply produce mathematical outputs. The experts’ worry is that AI, if allowed to develop and run its own code unchecked, will quickly spin out of control.
These are speculations that demand attention and a response now, before it is too late to be anything but sorry.
Nonetheless, these projections recall the movie Bambi, in which an owl befriends a chipmunk – its favourite food. In the real world, we have witnessed an owl land on a chipmunk, rip into it, and pull out its flesh as the creature slowly draws its last breath.
As with the Bambi movie, could it be that we are reading into AI something that does not really exist? Could it be that the experts are forcing onto AI a sentience, morality, and consciousness that simply do not apply? This is why we say that AI can go quite wrong!