Beyond the Model: How Arjun Chakraborty Builds AI Systems That Survive the Real World

As AI is rapidly integrated into business systems and everyday life, engineering discussions tend to focus on the benchmark gains of the latest models. But in sectors like cybersecurity, where false positives and undetected threats cost millions and put critical systems at risk, making AI work for front-line analysts matters more than the latest ChatGPT coding demo or generative AI fad.

Enterprise systems at large corporations now ingest massive amounts of security data, but sorting the real risks from false positives has become an increasingly difficult challenge. With hackers already employing AI to devise novel and sophisticated cyberattacks, making AI detection work in real-world conditions is critical to protecting sensitive systems and data.

Arjun Chakraborty is an engineer who has emerged as a leading developer and advocate for AI models that go beyond theoretical upgrades and enhance front-line cybersecurity professionals’ capabilities.

Here’s a closer look at his life, career, and valuable contributions to the field of cybersecurity.

Building a Foundation in Cybersecurity, Deep Learning, and Scalable Infrastructure

Arjun Chakraborty has spent his career at the intersection of AI and cybersecurity, helping businesses integrate AI into their workflows to detect and respond to cyber vulnerabilities at enterprise scale.

“I started working in AI and security when it was still a very nascent field, and faced significant skepticism from industry veterans who doubted that AI would provide any practical value to security operations,” Chakraborty recalls.

He would soon prove the naysayers wrong. His career began in 2016 as part of the team applying deep learning to security tasks at Symantec, where previously identified malware code and email phishing attempts were used to train nascent AI models that could spot threats nearly as accurately as human analysts, but in a fraction of the time.

This defensive infrastructure would be integrated into products used by millions around the world, including Office 365 and Microsoft Exchange email services.

Chakraborty would go on to build a cloud-based data infrastructure for Home Depot, which was designed to scale as demand for AI training and customer interactions grew. He then took a position as a data science manager at Guidewire, where he built AI models to help predict cybersecurity risks for insurance companies, one of the first production applications of machine learning in the highly regulated insurance industry.

Chakraborty’s experience developing AI infrastructure and predictive models would eventually lead him to Nvidia, where he led the development of the Digital Fingerprinting framework, which detects malicious activity across millions of AWS accounts.

His valuable contributions to scalable infrastructure allowed hundreds of distinct AI models to simultaneously search for threat scenarios, uncovering patterns that traditional rules-based detection often missed and laying the groundwork for what eventually became Nvidia Morpheus.

Legitimizing Machine Learning Lessons for Cybersecurity

As his career progressed, Chakraborty realized that the advanced AI research he was doing was relevant beyond his work at individual companies.

After Nvidia, he went on to optimize machine learning for threat detection at the global data, analytics, and artificial intelligence company Databricks. He shared his work in a BSidesSF 2023 presentation on using natural language processing for security log analysis, which involved treating the logs as a language of their own that can be used to train AI models.
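The "logs as a language" idea can be illustrated with a toy sketch (hypothetical code, not taken from the talk): mask variable fields such as IP addresses so that structurally similar events share tokens, then score new lines by how familiar their token transitions are. Rare transitions flag candidates for analyst review. A production system would use a real NLP model; a simple bigram counter shows the principle.

```python
import re
from collections import Counter

def tokenize(log_line):
    """Split a log line into tokens, masking variable fields (IPs, numbers)
    so that structurally similar events map to the same 'sentence'."""
    line = re.sub(r"\d+\.\d+\.\d+\.\d+", "<IP>", log_line)
    line = re.sub(r"\d+", "<NUM>", line)
    return line.lower().split()

class BigramLogModel:
    """Toy bigram 'language model' over log tokens: frequent transitions
    score high, unseen transitions score low (potential anomalies)."""

    def __init__(self):
        self.bigrams = Counter()
        self.unigrams = Counter()

    def fit(self, lines):
        for line in lines:
            toks = ["<s>"] + tokenize(line) + ["</s>"]
            self.unigrams.update(toks[:-1])
            self.bigrams.update(zip(toks, toks[1:]))

    def score(self, line):
        """Average bigram probability; lower means more anomalous."""
        toks = ["<s>"] + tokenize(line) + ["</s>"]
        pairs = list(zip(toks, toks[1:]))
        probs = [self.bigrams[p] / max(self.unigrams[p[0]], 1) for p in pairs]
        return sum(probs) / len(probs)
```

Fitting the model on a baseline of routine log lines and scoring incoming lines against it gives a cheap first-pass filter: lines whose transitions were never seen during training surface for closer inspection.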

He also spoke at BSidesSF 2024, this time about how AI training could be enhanced in industries like healthcare and cybersecurity by creating synthetic datasets that mimic real-world inputs without revealing sensitive user data. When fed censored versions of existing security logs, LLMs are increasingly capable of producing training data similar enough to the source material to strengthen AI models, yet differentiated enough to introduce new potential threats and challenges before they appear in the real world.
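The redact-then-regenerate workflow can be sketched in miniature. The talk describes LLM-based generation; as a much simpler stand-in (hypothetical code, with placeholder patterns and fake-value generators of my own choosing), the same two steps are: strip sensitive fields to placeholders, then refill the placeholders with plausible synthetic values.

```python
import random
import re

# Patterns for sensitive fields, each mapped to a placeholder token.
SENSITIVE = [
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<IP>"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<EMAIL>"),
]

def redact(line):
    """Replace sensitive fields with placeholders before the line is used
    as seed material for synthetic generation."""
    for pattern, token in SENSITIVE:
        line = pattern.sub(token, line)
    return line

def synthesize(template, n, rng=None):
    """Fill placeholders with fake values, yielding log lines that mirror
    the real structure without exposing real users or hosts."""
    rng = rng or random.Random(0)
    fakes = {
        "<IP>": lambda: "10.%d.%d.%d" % (
            rng.randint(0, 255), rng.randint(0, 255), rng.randint(1, 254)),
        "<EMAIL>": lambda: "user%d@example.com" % rng.randint(1000, 9999),
    }
    out = []
    for _ in range(n):
        line = template
        for token, make in fakes.items():
            while token in line:
                line = line.replace(token, make(), 1)
        out.append(line)
    return out
```

An LLM-based pipeline would replace the fixed `synthesize` step with generation conditioned on the redacted templates, which is what allows the synthetic data to drift into novel but realistic variants rather than mere resampling.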

“I take pride in knowing that my research has had a positive impact on the community, contributing in my own small way to improving defenses against ever-evolving cyber adversaries,” says Chakraborty.

Sharing his knowledge has become a cornerstone of his career, and these two key takeaways from his work form a basis for building even stronger cybersecurity defenses as attacks continue to evolve:

Accurate Detection Is Only Half the Battle:

AI models that gather insights from petabytes of incident data can increase cyber defense capabilities, but transparency is essential for real-world utility. Maintenance processes that identify and remove false positives, and clear documentation that shows human analysts what different models actually screen for, build trust and reduce inefficient manual reviews.

Scalable Infrastructure and Tooling Transform Theory into Practice:

Using AI to monitor security threats increases early detection and spots novel attack vectors before they can be exploited. But these models need deployment infrastructure, monitoring tools, and stress-testing frameworks that can expand rapidly as user bases grow and product features are added, so that the models stay secure as they roll out. Analysts need tools designed to monitor for and rapidly react to zero-day threats in production systems.

Chakraborty’s career has focused on building systems from the ground up to handle security threats under real-world conditions, not theoretical frameworks that might be useful down the line. By designing scalable infrastructure with a proven ability to triage security incidents and give analysts an edge in detecting new attack vectors, Chakraborty is proving that AI models benefit cybersecurity practices for every type of business.

Combining AI Efficiency with Cybersecurity Analyst Expertise

Today, Chakraborty is a principal applied AI engineer at Microsoft, where he is building new models to address alert fatigue, one of cybersecurity’s most painful bottlenecks. His models learn from historical patterns so that high-confidence alerts are escalated to analysts more quickly while low-level threats are deprioritized, allowing analysts to focus where they can have the greatest impact.
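The escalate-versus-deprioritize routing can be sketched with a deliberately simple frequency model (hypothetical code; the production systems described would use far richer features than a per-signature confirmation rate): track how often alerts with a given signature were confirmed as real incidents, and escalate only signatures with enough history and a high hit rate.

```python
from collections import defaultdict

class AlertTriage:
    """Toy prioritizer: tracks, per alert signature, how often past alerts
    were confirmed as real incidents, then routes new alerts accordingly."""

    def __init__(self, escalate_at=0.5, min_history=5):
        # signature -> [confirmed_count, total_count]
        self.stats = defaultdict(lambda: [0, 0])
        self.escalate_at = escalate_at
        self.min_history = min_history

    def record(self, signature, confirmed):
        """Log analyst feedback on a resolved alert."""
        c, t = self.stats[signature]
        self.stats[signature] = [c + (1 if confirmed else 0), t + 1]

    def route(self, signature):
        """Escalate signatures with enough history and a high confirmation
        rate; everything else goes to a low-priority review queue."""
        c, t = self.stats[signature]
        if t >= self.min_history and c / t >= self.escalate_at:
            return "escalate"
        return "deprioritize"
```

The key design choice even in this sketch carries over to real systems: unseen or thinly evidenced signatures are deprioritized rather than escalated, which is exactly how alert fatigue is reduced without silencing feedback on new threat types.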

But building models and infrastructure is only a part of Chakraborty’s focus going forward. He has plans to continue sharing insights and best practices developed at the intersection of AI and cybersecurity, bridging the gap between academic research and practical implementation to translate theoretical advancements into front-line security improvements.

With the AI arms race between cybersecurity professionals and sophisticated hackers in full swing, Chakraborty’s deep career experience could prove essential to keeping enterprise security one step ahead of those who seek to infiltrate it.

Photo by Igor Omilaev; Unsplash

Kyle Lewis is a seasoned technology journalist with over a decade of experience covering the latest innovations and trends in the tech industry. With a deep passion for all things digital, he has built a reputation for delivering insightful analysis and thought-provoking commentary on everything from cutting-edge consumer electronics to groundbreaking enterprise solutions.
