
Your Data at Stake: OpenAI Warns AI Models Now Carry "High" Cybersecurity Risk

Date: December 11, 2025

OpenAI's latest models show dramatic capability jumps in hacking tests, raising concerns about future threats to businesses and individual users.

OpenAI issued a stark warning Wednesday that its upcoming artificial intelligence models are likely to pose a "high" cybersecurity risk as their capabilities rapidly advance, marking a significant acknowledgment from the leading AI lab about potential threats posed by its own technology.

The Microsoft-backed company said in a report shared exclusively with Axios that its frontier AI models' cyber capabilities are accelerating, with upcoming models expected to reach dangerous new thresholds. The company warned that future models might develop working zero-day remote exploits against well-defended systems or assist with complex enterprise intrusion operations aimed at real-world effects.

Dramatic Capability Increases

The warning comes amid dramatic improvements in recent model releases. According to OpenAI's report, GPT-5 scored 27% on a capture-the-flag cybersecurity exercise in August, while GPT-5.1-Codex-Max achieved 76% last month – a nearly threefold increase in just three months.

"We expect to continue updating it as we learn more," OpenAI stated. The company is now planning and evaluating each new model as though it could reach "high" levels of cybersecurity capability under its internal Preparedness Framework.

The improved performance stems largely from models' growing ability to operate autonomously for extended periods. "What I would explicitly call out as the forcing function for this is the model's ability to work for extended periods of time," OpenAI's Fouad Matin told Axios. This extended autonomy enables brute force attacks that, while more easily defended in monitored environments, significantly expand the pool of potential attackers.

Dual-Use Technology Dilemma

The technology presents a double-edged sword: the same capabilities that make these models dangerous in the hands of attackers also make them valuable to defenders. Even so, OpenAI acknowledged the growing security implications. The models might assist with complex enterprise or industrial intrusion operations, raising concerns about a lowered barrier to entry for sophisticated cyberattacks.

Despite the concerning trajectory, OpenAI emphasized that brute force attacks enabled by extended autonomous operation would be "caught pretty easily" in any defended environment. The company has not specified exactly when models rated "high" for cybersecurity risk might emerge or which model types could pose such threats.

Industry-Wide Response

OpenAI is implementing multiple layers of defense. The company is "investing in strengthening models for defensive cybersecurity tasks and creating tools that enable defenders to more easily perform workflows such as auditing code and patching vulnerabilities."

To counter emerging threats, OpenAI announced it will establish the Frontier Risk Council, an advisory group bringing experienced cyber defenders and security practitioners into collaboration with its teams. The council will initially focus on cybersecurity before expanding into other frontier capability domains.

The company will also introduce a tiered access program for qualifying users and customers working on cyberdefense, providing enhanced capabilities to those focused on defensive applications. OpenAI is relying on a mix of access controls, infrastructure hardening, egress controls and monitoring to mitigate risks.

Industry Concerns

OpenAI isn't alone in confronting these challenges. Leading models across the industry are getting better at finding security vulnerabilities, prompting increased collaboration through initiatives like the Frontier Model Forum, which OpenAI started with other leading labs in 2023.

The timing is particularly notable given recent cybersecurity concerns. OpenAI's chief information security officer, Dane Stuckey, acknowledged on social media that the company is "very thoughtfully researching and mitigating" risks around prompt injections, calling it "a frontier, unsolved security problem."

The developments underscore the rapid pace of AI advancement and the growing urgency around managing dual-use technologies that can serve both offensive and defensive purposes. As AI capabilities continue to accelerate, the race between building safeguards and the potential for misuse appears increasingly critical.

OpenAI's transparency about these risks represents a shift in how AI companies communicate about potential dangers, as the industry grapples with balancing innovation against security concerns in an increasingly complex threat landscape.

By Arpit Dubey
