Growing AI risks prompt cybersecurity shift


Artificial intelligence is becoming more prevalent in business operations, but this also brings new risks. Cybersecurity experts warn that AI system poisoning is a growing threat that companies need to prepare for. Consulting firm Protiviti recently dealt with a client that experienced an unusual cyberattack.

Hackers tried to manipulate the data fed into one of the client’s AI systems to skew its output. While such incidents are still rare, experts predict they will become more frequent. Hackers are expected to increasingly target AI systems by corrupting data or manipulating the models.

The National Institute of Standards and Technology has identified four main types of AI poisoning attacks:

1. Availability poisoning compromises the entire AI model, causing a denial of service for all users.
2. Targeted poisoning prompts the model to make incorrect predictions for certain inputs.
3. Backdoor poisoning adds small triggers to training data that cause the model to misclassify inputs containing those triggers during operation.
4. Model poisoning directly modifies the trained model to introduce malicious functions.
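Backdoor poisoning, in particular, can be illustrated with a toy sketch. The snippet below stamps a small "trigger" patch on a fraction of training images and relabels them, so a model trained on the poisoned set would learn to associate the trigger with the attacker's chosen class. The dataset, trigger shape, and labels here are hypothetical, used only to show the mechanics:

```python
import numpy as np

def add_backdoor(images, labels, target_label, fraction=0.1, seed=0):
    """Illustrative backdoor poisoning: stamp a 3x3 white-square
    trigger on a fraction of training images and flip their labels
    to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:] = 1.0   # trigger patch in the bottom-right corner
    labels[idx] = target_label    # relabel poisoned samples
    return images, labels, idx

# Toy dataset: 100 grayscale 8x8 "images" with labels 0-9
X = np.zeros((100, 8, 8))
y = np.arange(100) % 10
Xp, yp, poisoned = add_backdoor(X, y, target_label=7, fraction=0.1)
```

At inference time, any input carrying the same trigger patch would be misclassified as the target class, while clean inputs behave normally, which is what makes backdoor attacks hard to spot with accuracy metrics alone.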

AI systems are also vulnerable to other attacks like privacy breaches and prompt injections. Security experts note that AI poisoning attacks can come from both inside an organization and external hackers. Nation-states are a particular concern given their resources to conduct sophisticated attacks.

The motives mirror those of traditional cyberattacks – causing disruption, stealing data, or monetary extortion through ransomware-like threats. Tech companies developing AI systems are the most likely targets. But other organizations using compromised systems could also be indirectly impacted.

Many organizations currently lack robust ways to detect and respond to AI poisoning attacks. Reports show many CISOs are worried about malicious AI use but feel underprepared to secure against these threats. Preparing involves multi-layered defenses:


– Implementing strong access and identity management
– Using Security Information and Event Management systems
– Deploying anomaly detection tools to monitor for unusual activities
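As a rough sketch of the anomaly-detection idea, one simple approach is to flag incoming records whose features deviate sharply from a historical baseline. The threshold, data, and feature layout below are illustrative assumptions, not a production detector:

```python
import numpy as np

def zscore_anomalies(baseline, batch, threshold=3.0):
    """Flag rows of `batch` whose distance from the baseline mean
    exceeds `threshold` standard deviations on any feature."""
    mu = baseline.mean(axis=0)
    sigma = baseline.std(axis=0) + 1e-9  # avoid division by zero
    z = np.abs((batch - mu) / sigma)
    return np.where((z > threshold).any(axis=1))[0]

rng = np.random.default_rng(1)
clean = rng.normal(0, 1, size=(1000, 4))   # historical "normal" data
incoming = rng.normal(0, 1, size=(50, 4))  # new batch to screen
incoming[7] = [10, 0, 0, 0]                # one poisoned-looking record
flagged = zscore_anomalies(clean, incoming)
```

Real deployments would use richer detectors (and tune thresholds to tolerate false positives), but the principle is the same: data destined for an AI system should be screened before it reaches training or inference.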

“Responding proactively will help protect organizations against the inevitable rise of AI poisoning attacks,” says Mary Carmichael, managing director at Momentum Technology.

While AI system poisoning is still an emerging risk, its growing relevance means defensive measures need to be prioritized. Security experts and CISOs must work together to evolve security practices so that AI technologies can be deployed safely.

The time to act is now, as preparing today will safeguard against the complex threats of tomorrow.

