
Amidst controversies, OpenAI insists safety is mission critical

Date: 2024-09-22 07:17:30 · Source: web compilation


OpenAI has addressed safety issues following recent ethical and regulatory backlash.

The statement, published on Thursday, was a rebuttal-apology hybrid that simultaneously aimed to assure the public its products are safe and to admit there's room for improvement. OpenAI's safety pledge reads like a whack-a-mole response to the multiple controversies that have popped up. In the span of a week, AI experts and industry leaders including Steve Wozniak and Elon Musk published an open letter calling for a six-month pause on developing models like GPT-4, ChatGPT was flat-out banned in Italy, and a complaint was filed with the Federal Trade Commission alleging that the chatbot poses dangerous misinformation risks, particularly to children. Oh yeah, there was also that bug that exposed users' chat messages and personal information.

SEE ALSO: Nonprofit files FTC complaint against OpenAI's GPT-4

OpenAI asserted that it works "to ensure safety is built into our system at all levels." The company spent over six months on "rigorous testing" before releasing GPT-4 and said it is looking into verification options to enforce its over-18 age requirement (or over-13 with parental approval). It stressed that it doesn't sell personal data and only uses it to improve its AI models. It also asserted its willingness to collaborate with policymakers and its continued collaborations with AI stakeholders "to create a safe AI ecosystem."


Toward the middle of the safety pledge, OpenAI acknowledged that developing a safe LLM relies on real-world input. It argued that learning from public use will make the models safer and allow OpenAI to monitor misuse: "Real-world use has also led us to develop increasingly nuanced policies against behavior that represents a genuine risk to people while still allowing for the many beneficial uses of our technology."

OpenAI promised "details about [its] approach to safety," but beyond its assurance that it will explore age verification, most of the announcement read like boilerplate platitudes. There was little detail about how it plans to mitigate risk, enforce its policies, or work with regulators.


OpenAI prides itself on developing AI products with transparency, but the announcement provides little clarification about what it plans to do now that its AI is out in the wild.