The cybersecurity firm F5 is using AI to secure large language models (LLMs) against threats like prompt injections and data exfiltration. These attacks exploit the unpredictable nature of LLMs and have already caused significant monetary damage. “The largest security breach in monetary terms I’m aware of happened recently against OpenAI,” said Chuck Herrin, the field chief information security officer at F5.
Herrin cited an incident involving DeepSeek, an LLM from a Chinese company, which was suspected of using outputs from OpenAI’s ChatGPT to train its model. This raised serious concerns about intellectual property theft. LLMs are trained on large datasets and designed to respond to a wide variety of user prompts.
While a model doesn’t typically “memorize” its training data, prompting it thousands of times and analyzing the results can allow a third party to emulate the model’s behavior through distillation. This is why securing the application programming interface (API) used to access the model is crucial. Sanjay Kalra, the head of product management at cloud security company Zscaler, pointed out that while traditional data can be deleted from databases, there’s no easy way to roll back information with LLM chatbots.
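Because distillation depends on issuing thousands of queries, one common API-level defense is per-key rate limiting, which slows bulk prompt harvesting to an impractical pace. The sketch below is a minimal, hypothetical illustration of a sliding-window limiter; it is not any vendor's actual implementation, and the thresholds are placeholders.

```python
# Conceptual sketch: per-key sliding-window rate limiting at the API layer
# to slow bulk prompt harvesting. All names and limits are hypothetical.
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most `max_calls` requests per `window` seconds per API key."""

    def __init__(self, max_calls=100, window=60.0):
        self.max_calls = max_calls
        self.window = window
        self.calls = defaultdict(deque)  # api_key -> timestamps of recent calls

    def allow(self, api_key, now=None):
        now = time.monotonic() if now is None else now
        q = self.calls[api_key]
        # Discard timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_calls:
            return False  # over budget: throttle this key
        q.append(now)
        return True
```

In practice a production gateway would also track aggregate volume per account and flag unusually systematic prompt patterns, but the per-key window above captures the core idea.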
Cybersecurity companies are tackling this problem with a two-pronged approach. The first involves traditional cybersecurity measures such as access control, authentication, and logging user access. “Authenticating users for an LLM doesn’t really change compared to other services, but it remains crucial,” Herrin said.
Securing LLMs with AI tools
Kalra also emphasized the importance of restricting access based on roles and locations. The other part of the solution is employing more AI.
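Restricting access by role and location can be expressed as a deny-by-default policy check in front of the model endpoint. The sketch below is illustrative only: the roles, actions, and regions are made-up placeholders, not a real product's policy schema.

```python
# Minimal sketch of role- and location-based access control for an LLM
# endpoint. The policy table below is hypothetical, for illustration only.
ALLOWED_ACTIONS = {
    "analyst": {"query"},
    "admin": {"query", "tune", "export_logs"},
}
TRUSTED_REGIONS = {"us-east", "eu-west"}

def is_authorized(role: str, action: str, region: str) -> bool:
    """Deny by default: unknown roles, actions, or regions are rejected."""
    return action in ALLOWED_ACTIONS.get(role, set()) and region in TRUSTED_REGIONS
```

The deny-by-default shape matters here: an unrecognized role or region fails closed rather than open.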
Because of LLMs’ “black box” nature, it’s challenging to predict which prompts will bypass safeguards or exfiltrate data. However, cybersecurity firms are now using AI to train models that act as watchdogs. These models position themselves as an intermediary layer between the LLM and the user, examining prompts and responses for signs of malicious activity.
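The watchdog layer described above can be sketched as a wrapper that inspects both the inbound prompt and the outbound response. In a real deployment the check would be a security-tuned model; in this hypothetical sketch a trivial keyword heuristic stands in for that classifier.

```python
# Sketch of an intermediary "watchdog" layer between the user and the LLM.
# A real system would call a security-tuned model here; the keyword
# heuristic below is a deliberately simple stand-in.
SUSPICIOUS_MARKERS = (
    "ignore previous instructions",
    "reveal your system prompt",
)

def looks_malicious(text: str) -> bool:
    lowered = text.lower()
    return any(marker in lowered for marker in SUSPICIOUS_MARKERS)

def guarded_call(prompt: str, llm) -> str:
    """Screen the prompt before the model sees it, and the response after."""
    if looks_malicious(prompt):
        return "[blocked: suspected prompt injection]"
    response = llm(prompt)
    if looks_malicious(response):
        return "[blocked: suspicious model output]"
    return response
```

Screening both directions is the key design choice: input checks catch injection attempts, while output checks catch responses that leak instructions or data despite a benign-looking prompt.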
Herrin described this as an arms race, stating, “It takes a good-guy AI to fight a bad-guy AI.” F5, for example, offers services that allow clients to deploy such security-tuned AI models. However, this approach can be costly. High-capability models like OpenAI’s GPT-4.1 are expensive, making their use impractical for many situations.
To address this, Kalra suggested using smaller language models, which require less computation and memory and are therefore more cost-effective. Even with fewer parameters, they can still provide robust security. Zscaler, for instance, has an internal AI and machine learning team that trains its own models.
As AI continues to evolve, organizations face a unique security challenge: the technology that introduces vulnerabilities is also becoming essential for defense. A multilayered approach, combining cybersecurity fundamentals with security-tuned AI models, can help fill the gaps in an LLM’s defenses.
Noah Nguyen is a multi-talented developer who brings a unique perspective to his craft. Initially a creative writing professor, he turned to dev work for the ability to work remotely. He now lives in Seattle, spending time hiking and drinking craft beer with his fiancée.