OpenAI Is Assembling A ‘Superalignment’ Team To Ensure AI Doesn’t Take Over

Though a “rise of the machines” scenario may seem far-fetched, more the stuff of science fiction than a plausible future, some AI industry leaders aren’t so sure.

In a blog post published July 5th, ChatGPT’s creator, OpenAI, announced that it is assembling a ‘Superalignment’ team to ensure AI doesn’t take over.

But what is meant by ‘superalignment’, exactly? Well, add it to your vocabulary, because it’s a word you are probably going to hear a lot more often as artificial intelligence becomes a bigger part of everyday life. Superalignment refers to making sure that superintelligent AI systems align with human interests, rather than making decisions that are detrimental to society or deviating from what they were designed to do.

OpenAI‘s blog post comes on the heels of Geoffrey Hinton, dubbed one of the “godfathers of AI,” quitting his job at Google so he could speak freely about the possibility that AI could mean “the end of people.”

And in May, OpenAI CEO Sam Altman joined hundreds of other tech industry figures in signing an open letter containing a single sentence: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Earlier, in March, Elon Musk had joined other signatories in calling for a six-month pause on advanced AI development in order to assess the situation and take time to put safeguards and regulations in place before things get completely out of hand.

Ilya Sutskever (co-founder and Chief Scientist of OpenAI) and Jan Leike (Head of Alignment), the blog post’s authors, hail the positive potential of AI, saying: “Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems.” But at the same time, they also voice concerns, saying “…the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.”

They went on to say: “Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue.”

To address these concerns, OpenAI plans to dedicate a whopping 20% of the computing power it has secured to date to the new Superalignment team, whose job is to ensure its artificial intelligence remains safe for humans, with the end goal of eventually using AI to supervise AI. The team aims to solve the core technical challenges of controlling superintelligent AI within four years.

And how are they going to accomplish this? In a nutshell, the plan is to first train AI systems using human feedback, then train AI to assist in evaluating other AI systems, and ultimately build AI that can do alignment research itself.
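To make that pipeline a little more concrete, here is a deliberately simplified sketch in Python of the first two steps, using reward modeling from pairwise human preferences, a common alignment technique. To be clear, this is an illustration of the general idea, not OpenAI’s actual code, and every name and number in it is hypothetical: a pretend human ranks pairs of toy “outputs,” a tiny reward model is fitted to those rankings, and the trained model then scores new outputs with no human in the loop.

```python
# Toy sketch of reward modeling from human feedback.
# NOT OpenAI's actual method; all names and numbers are hypothetical.
import math
import random

random.seed(0)

# Stage 1: collect human feedback as pairwise preferences.
# A "response" here is just a number, and the pretend human
# prefers responses closer to an ideal value of 1.0.
def human_prefers_a(a, b):
    return abs(a - 1.0) < abs(b - 1.0)

preferences = []
for _ in range(500):
    a, b = random.uniform(-2, 2), random.uniform(-2, 2)
    preferences.append((a, b, human_prefers_a(a, b)))

# Stage 2: fit a tiny reward model r(x) = w . phi(x) with the
# Bradley-Terry loss, so the model learns to score like the human.
def phi(x):
    return [x, x * x, 1.0]  # simple hand-picked feature map

w = [0.0, 0.0, 0.0]

def reward(x):
    return sum(wi * fi for wi, fi in zip(w, phi(x)))

learning_rate = 0.05
for _ in range(50):  # a few passes over the preference data
    for a, b, a_wins in preferences:
        # Model says: P(a preferred over b) = sigmoid(r(a) - r(b))
        p = 1.0 / (1.0 + math.exp(-(reward(a) - reward(b))))
        error = p - (1.0 if a_wins else 0.0)
        fa, fb = phi(a), phi(b)
        for i in range(len(w)):
            w[i] -= learning_rate * error * (fa[i] - fb[i])

# Stage 3: the trained reward model now evaluates new outputs
# automatically, i.e. one AI system judging another, no human needed.
candidates = [random.uniform(-2, 2) for _ in range(5)]
ranked = sorted(candidates, key=reward, reverse=True)
print("reward-model ranking (best first):", [round(c, 2) for c in ranked])
```

The hand-off at the end is the point: once the reward model agrees with human judgments reliably enough, it can grade far more outputs than human raters ever could, which is the “AI evaluating AI” step described above.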

The hypothesis is that AI will eventually become so much more advanced than humans that regulating a system smarter than us simply won’t be possible, hence the plan to train AI to eventually regulate itself. If artificial intelligence keeps advancing at its current rate, it’s thought that superintelligent AI could be a reality by the year 2030.

You would think that being able to control the AI we have created would be a given, so that the technology doesn’t start working against the people it is meant to serve or, in the worst-case scenario, become a threat to humanity.

But there are those who say this vision of a dystopian future pitting man against machine distracts from the threats AI poses right now. Advancing technology before we know how to properly rein it in is, no doubt, putting the cart before the horse, but so is releasing new technologies without proper regulation. The biggest threat humanity faces from artificial intelligence right now is not the AI itself, but humans using AI for nefarious purposes. It may be a while yet before we stop being our own worst enemy. Until then, regulating the use of AI also has to be a primary concern.
