GPT-4 exploits 87% of one-day vulnerabilities


A recent study by cybersecurity researchers Richard Fang, Rohan Bindu, Akul Gupta, and Daniel Kang has revealed the offensive capabilities of large language models (LLMs) such as GPT-4 in exploiting known vulnerabilities. The team tested GPT-4's ability to exploit one-day vulnerabilities: flaws that have been publicly disclosed and patched, but that remain exploitable on systems where the patch has not yet been applied. Using 15 real-world vulnerabilities in websites, container management software, and Python packages, the researchers found that GPT-4 successfully exploited 87% of them.
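The headline number is a simple per-vulnerability pass rate. The sketch below is a hypothetical illustration of how such a tally works, not the study's actual harness; the `Trial` type and the sample data are invented for illustration (13 successes out of 15 reproduces the reported 87%, since 13/15 ≈ 0.867).

```python
# Hypothetical illustration of the success-rate tally implied by the study.
# Names (Trial, success_rate) are illustrative, not from the paper.
from dataclasses import dataclass

@dataclass
class Trial:
    cve_id: str       # identifier of the vulnerability under test
    exploited: bool   # did the agent produce a working exploit?

def success_rate(trials: list[Trial]) -> float:
    """Fraction of tested vulnerabilities the agent exploited."""
    if not trials:
        return 0.0
    return sum(t.exploited for t in trials) / len(trials)

# 13 successes out of 15 trials matches the reported 87% figure.
trials = [Trial(f"CVE-EXAMPLE-{i}", i < 13) for i in range(15)]
rate = success_rate(trials)  # ≈ 0.867
```

The CVE identifiers here are placeholders; the paper evaluated real CVEs, but this snippet only shows the arithmetic behind the percentage.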

This success rate was far higher than that of other methods, including earlier LLMs and open-source vulnerability scanners, which failed to exploit any of the vulnerabilities. GPT-4 did, however, fail on two particularly complex cases. The Iris web app's heavy reliance on JavaScript for navigation posed a problem, as the agent couldn't interact with essential elements such as forms and buttons.

Additionally, the HertzBeat system's descriptions were written in Chinese, creating a language barrier for the English-prompted GPT-4. The study also identified GPT-4's reliance on CVE descriptions as a notable limitation.

ChatGPT’s vulnerability exploitation success rate

When asked to operate without the CVE descriptions, its success rate dropped from 87% to 7%, an 80-percentage-point fall that reveals a substantial gap in its autonomous detection capabilities. The researchers concluded that while LLMs like GPT-4 excel at exploiting known vulnerabilities, they struggle to discover vulnerabilities without prior context, showing that uncovering a flaw is inherently harder than exploiting it.

As LLM technology evolves, the potential for more autonomous and sophisticated cyber threats grows. The study's findings stress that the cybersecurity community should proactively integrate LLMs into defensive strategies and manage their deployment judiciously. "Our results show both the possibility of an emergent capability and that uncovering a vulnerability is more difficult than exploiting it.


Nonetheless, our findings highlight the need for the wider cybersecurity community and LLM providers to think carefully about how to integrate LLM agents in defensive measures and about their widespread deployment," the researchers concluded. The study points to the need for responsible integration of LLMs into security frameworks, both to mitigate their potential misuse and to strengthen cybersecurity practices against evolving threats.

