Hackers broke into ChatGPT creator OpenAI, report claims
A hacker broke into ChatGPT creator OpenAI’s systems, according to a new report.
The attacker was able to see internal chats and may have stolen details about the design of its artificial intelligence products, the report claimed.
But the company did not inform law enforcement about the intrusion, the report said.
The New York Times reported that the hacker lifted details from discussions on an internal forum where OpenAI employees talked about the technologies the company is working on.
But the hacker did not get into the systems where OpenAI’s products are built and housed, the report said.
The US firm has found itself at the forefront of the recent AI boom, sparked by the release of its generative AI chatbot, ChatGPT, in late 2022.
Since then, many of the world’s largest technology companies have started moving into the sector, with many experts also identifying generative AI as the key innovation of this generation.
According to the report, OpenAI executives told staff and the company’s board about the breach in April last year, but did not make the details public because no customer or partner data had been stolen.
OpenAI also did not inform US law enforcement agencies of the incident, the report said, because the company believed the hacker was a private individual with no known ties to a foreign government.
OpenAI has been contacted for comment.
Dr Ilia Kolochenko, cybersecurity expert and chief executive at security firm ImmuniWeb, warned that attacks on AI firms are likely to continue, and increase, given the growing importance of the technology.
“While the details of the alleged incident are not yet confirmed by OpenAI, there is a strong possibility that the incident actually took place and is not the only one,” he said.
“The global AI race has become a matter of national security for many countries; therefore, state-backed cybercrime groups and mercenaries are aggressively targeting AI vendors, from talented start-ups to tech giants like Google or OpenAI.
“The hackers mostly focus their efforts on the theft of valuable intellectual property, including technological research and know-how, large language models (LLMs), sources of training data, as well as commercial information such as AI vendors’ clients and novel use of AI across different industries.
“More sophisticated cyber-threat actors may also implant stealthy backdoors to continually control breached AI companies, and to be able to suddenly disrupt or even shut down their operations, similar to the large-scale hacking campaigns targeting critical national infrastructure (CNI) in Western countries recently.
“All corporate users of GenAI vendors should be particularly careful and prudent when they share, or give access to, their proprietary data for LLM training or fine-tuning, as their data – spanning from attorney-client privileged information and trade secrets of the leading industrial or pharmaceutical companies to classified military information – is also in the crosshairs of AI-hungry cybercriminals that are poised to intensify their attacks.”
Additional reporting by agencies