A report has claimed that in 2023, Microsoft-backed OpenAI suffered a security breach in which a hacker gained unauthorized access to its internal messaging systems.
The breach specifically targeted online forums used by employees, resulting in the theft of proprietary details concerning OpenAI’s artificial intelligence technologies.
Notably, the incident did not compromise the systems responsible for housing and developing ChatGPT and other AI models.
Although OpenAI executives were aware of the breach, they assessed it as not constituting a national security threat, believing the hacker to be an individual without ties to any foreign government.
Consequently, the company chose not to publicly disclose the breach, citing the absence of compromised customer or partner information.
In other developments, OpenAI said in May that it had disrupted five covert influence operations that sought to misuse its AI models for deceptive online activity.
Meanwhile, the Biden administration announced plans to introduce safeguards around advanced AI technologies, including those developed by OpenAI, to shield U.S. AI capabilities from foreign threats.
Additionally, sixteen leading AI firms have committed to advancing AI technology responsibly amidst ongoing regulatory efforts to manage innovation and mitigate associated risks.