OpenAI hires full-time AI threat response lead... with an annual salary of 790 million KRW


As concerns over the safety of artificial intelligence grow, OpenAI has begun recruiting for a new senior position to proactively detect and address potential risks from highly advanced technology. The role, titled "Preparedness Officer," will play a central part in identifying potential harms caused by AI models, developing internal policies, and shaping how models are released.

The role will lead the implementation of OpenAI's "Preparedness Framework," monitoring and assessing the potential for AI misuse against society and critical systems, cybersecurity threats, and broader social impacts. The framework is closely tied to OpenAI's research, policy, and product teams, and will substantially influence when and how new models are released.

The position sits within OpenAI's Safety Systems team, with the hire overseeing threat modeling, capability measurement, risk-threshold setting, and deployment restrictions. OpenAI explicitly states that the Preparedness Officer role demands an extremely high bar: practical experience with large-scale technical systems, crisis management, security, and governance. The base annual salary is $550,000 (approximately 790 million KRW), plus equity.

This hiring push is also a strategic move to fill a vacancy created by internal leadership changes. In mid-2024, former Preparedness lead Aleksander Madry moved to another role, and his successors Joaquin Quiñonero Candela and Lilian Weng subsequently left the company or transferred to other teams, leaving the preparedness organization without a permanent head for an extended period.

OpenAI CEO Sam Altman recently underscored the importance of the position in an interview, saying, "As model capabilities increase, preparedness is one of the most important roles internally." He added that only when these systems operate well can the potential social side effects of artificial intelligence be effectively managed.

Industry-wide concern about AI misuse also provides context for the hire. Risks such as AI-enabled cyberattacks, software vulnerabilities, and impacts on users' mental health are cited as major concerns. Last October, OpenAI disclosed that millions of users had shared serious psychological distress through ChatGPT, underscoring its awareness of the intersection between AI and mental health. The company noted at the time that ChatGPT may not be the root cause of such distress, but that cases of users confiding sensitive concerns to AI had surged.

Regulators and the industry are watching OpenAI's move closely. As the influence of cutting-edge AI expands rapidly, strengthening the preparedness organization is seen not merely as a staffing addition but as groundwork for restoring trust in the broader AI ecosystem and earning social acceptance.
