Since Geoffrey Hinton, often called the "Godfather of AI," quit his job at Google to warn about the risks unregulated AI could pose, many have come forward to paint a dystopian future in which AI takes over if we're not careful.
Still others argue that all this "us vs. them" talk is distracting from the real issues AI is posing right now.
Regardless of which side of the fence you sit on, there is no denying that leaving AI to advance without regulation is not an option.
The Biden Administration has heeded the warnings and put forward a plan of action under which seven leading AI companies have committed to managing the risks of artificial intelligence.
US President Joe Biden was joined by representatives from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI to make the announcement on Friday.
The agreement addresses three key areas: safety, security, and trust.
Where safety is concerned, the agreement calls for AI systems to be tested by internal and external experts before release. It also calls for the companies to publicly report their systems' capabilities and limitations on a regular basis.
On trust, the agreement has the companies committing to prioritize research on avoiding harmful bias and discrimination, and to implement watermarks so that people can tell when content is AI-generated.
This is especially important for Americans as the 2024 presidential election approaches, amid fears of AI-fueled misinformation being spread.
During the announcement on Friday, President Joe Biden said: "We must be clear-eyed and vigilant about the threats emerging technologies can pose – don't have to, but can pose – to our democracy and our values."
We are at a point where we can no longer believe everything we see. One of the main goals of the agreement is to make it easy for people to tell whether or not online content was created by AI.
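The agreement does not spell out how watermarking will be implemented, and each company will likely adopt its own scheme. To illustrate the underlying idea for text, here is a minimal, purely hypothetical Python sketch of the kind of statistical watermark researchers have proposed: a generator quietly favors a pseudorandom "green list" of words, and a detector with the same seeding scheme checks whether that bias is present. The GREEN_FRACTION value and the hashing scheme below are toy assumptions, not any company's actual method.

```python
import hashlib
import math

# Toy parameter -- real schemes are far more sophisticated.
GREEN_FRACTION = 0.5  # share of the vocabulary a watermarking generator would favor

def is_green(prev_word: str, word: str) -> bool:
    """Pseudorandomly assign each (context, word) pair to a 'green list'.

    A watermarking generator would nudge its sampling toward green words;
    a detector only needs the same seeding scheme to test for that bias.
    """
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).hexdigest()
    return int(digest, 16) % 1000 < GREEN_FRACTION * 1000

def watermark_z_score(text: str) -> float:
    """How many standard deviations the green-word rate sits above chance.

    Unwatermarked (human) text should score near 0; heavily watermarked
    text scores high. Returns 0.0 for text too short to test.
    """
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    pairs = list(zip(words, words[1:]))
    hits = sum(is_green(a, b) for a, b in pairs)
    n = len(pairs)
    p = GREEN_FRACTION
    return (hits / n - p) / math.sqrt(p * (1 - p) / n)

if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog"
    print(f"z-score: {watermark_z_score(sample):.2f}")  # near 0 for ordinary text
```

For images and audio, watermarks would instead embed signals in pixel or frequency data, and making any of these marks robust to editing remains an open research problem.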
The agreement also outlines a commitment to using AI for the benefit of humanity. The companies have committed to “develop and deploy advanced AI systems to help address society’s greatest challenges. From cancer prevention to mitigating climate change to so much in between, AI—if properly managed—can contribute enormously to the prosperity, equality, and security of all.”
The security aspect ties in with safety and trust: if a system isn't secure, you're not safe, and if you're not safe, you can't trust those who created the tech or those who are supposed to be regulating it.
Protecting privacy and intellectual property are key issues. Even in its infancy, AI technology has had many crying foul as copyrighted material has been used to train models. This has allowed people to use AI to create art and music in the style of artists who never gave permission for their work to be used in the first place.
The agreement also includes safeguards to restrict the general public from using AI to probe for cybersecurity weaknesses – essentially, measures to prevent people from using AI to hack into public or private systems. No one wants the waterworks, electric company, hospitals, or transit and emergency systems going down and being held for ransom.
We've opened Pandora's box, and there's no closing it now. The best we can do is try to regulate it. The voluntary safeguards signed on Friday are a step toward better regulation of AI technology in the US. Although there are still plenty of questions about how all of this will be achieved and who will provide oversight, you have to admit: getting the seven leading AI companies to agree on anything is a feat unto itself.