How OpenAI seeks to mitigate the hazards associated with AI

OpenAI, the firm behind the well-known chatbot ChatGPT, has formed an internal team to investigate and address any “catastrophic risks” that could arise from AI models.
The team, named “Preparedness,” will “track, evaluate, forecast, and protect” against potential dangers posed by future AI systems.
These include their ability to trick and manipulate people, for instance through phishing attacks or malware.

The group will evaluate nuclear and other AI dangers.
The Preparedness team will look into several risk categories associated with AI models, including “autonomous replication” (an AI making copies of itself) and “chemical, biological, radiological, and nuclear” threats.
The group has launched a competition to solicit public suggestions for risk research, and it is also interested in looking into “less obvious” aspects of AI risk.
The authors of the top ten submissions will receive a job at Preparedness as well as $25,000, or roughly Rs. 20.8 lakh.

Standing ready as a SWAT team for AI safety
The Preparedness group will perform extensive evaluations of OpenAI’s state-of-the-art AI models and act as an AI safety SWAT team.
To proactively find flaws, the team will “red team” OpenAI’s own AI systems.
Their task will be to create a “risk-informed development policy” (RDP) that outlines OpenAI’s approach to building tools for monitoring and evaluating AI models.
Aleksander Madry, the director of MIT’s Center for Deployable Machine Learning, will serve as the team’s leader.

OpenAI previously said it would manage “superintelligent” forms of AI.
The formation of the Preparedness team coincides with a major summit on AI safety being held by the UK government.

This follows OpenAI’s earlier announcement that it would assemble a group to investigate, direct, and oversee the development of “superintelligent” forms of AI.
Ilya Sutskever, OpenAI’s chief scientist and co-founder, and CEO Sam Altman both believe that AI could surpass human intelligence within the next ten years.
Since such AI may not necessarily be benign, research into ways to restrict and control it is necessary.
