Four Tips for Closing AI Security Holes


December 2, 2017 12:29 PM

Artificial intelligence is finally moving beyond the pages of science fiction. This is causing a range of emotions among those who follow the technology, from excitement to concern.

The excitement side of the AI revolution is easy to understand: AI makes big data truly useful, simplifies computing for end users, and may finally deliver on the dream of computers that genuinely serve humans, among other benefits.

But the downsides of AI also are clear: What if the machines eventually take over? What if humans unleash something they cannot control? What if AI crushes employment as it picks up speed?

This article is not about those concerns, however. 

Those concerns lie in the future. A more pressing problem today is the pedestrian matter of AI security: as we hand data and control over to machines, even while AI is still rudimentary, how can businesses maintain proper security controls? That is a question for today, not some far-off future.

“Machines will be able to slice and dice giant lakes of data in many different dimensions and angles, and draw conclusions in a very fast time. That makes vast quantities of corporate data even more delicious to hackers,” notes Mark Bentkower of data security firm Commvault.

“Imagine grabbing millions of endpoint records from a healthcare company and matching that data with stolen data derived from another hack at a credit rating agency,” he says. “With AI, there’s a limitless amount of mischief for the hacking community to get into.”

So here are four ways that your business can improve security in the face of AI as it exists right now. 

Protect Data with Encryption

Artificial intelligence works its magic through access to data, and that access is also where security becomes the biggest issue. So the first way to close AI security holes is end-to-end encryption: if an AI system is connected to data, that data should be encrypted. Encryption limits the damage a breach can do.

That means encrypting data at rest, while it sits in data stores waiting to be used, and encrypting it in transit, when the AI system actually accesses it.

“Backed-up or archived data is delicious for bad guys,” notes Bentkower, “so you want to make sure that if they do manage to get their hands on it, that they can’t read it. Make it useless to them. That’s what encryption does.” 
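To make the idea concrete, here is a minimal sketch of encrypting a record before it lands in a data store, using Python's widely available cryptography package. This is an illustrative choice, not a product recommendation; the record contents are made up, and in production the key would come from a key-management service rather than being generated inline.

```python
# Minimal sketch: symmetric encryption of data at rest using the
# Python "cryptography" library. Illustrative only.
from cryptography.fernet import Fernet

# Assumption: in practice this key would live in a key-management
# service, never alongside the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b"patient_id=1842,diagnosis=..."   # hypothetical record
ciphertext = fernet.encrypt(record)          # what lands in the data store
plaintext = fernet.decrypt(ciphertext)       # what the AI system reads

assert plaintext == record
```

The point Bentkower makes holds here: an attacker who exfiltrates the ciphertext without the key walks away with bytes that are useless to them.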

Use AI to Protect Against AI

If hackers are potentially using AI for more advanced security attacks, a second way that businesses can combat these threats is by using AI for more advanced security protection. Let AI battle itself in a match between good and evil. 

“It is very difficult to keep on top of the ever-emerging threats and vulnerabilities that organizations, end users, and mobile devices are subject to,” says Wes Gyure at IBM Security. “In order to keep up, enterprises must have a cognitive solution that will proactively look for emerging threats and vulnerabilities, and provide remediation to secure the environment.”

Looking for security holes and emerging threats is complex, doubly so when hackers are using AI to find vulnerabilities. So hand the job over to AI systems such as IBM’s Watson to keep pace. 

Roughly 81 percent of executives said they have already implemented some form of automated security monitoring, according to the 2017 Executive Application & Network Security Survey conducted by Radware, with 57 percent indicating that they trust AI for security more than human security professionals. 
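For a sense of what such automated monitoring looks like in miniature, the sketch below uses scikit-learn's IsolationForest, a common unsupervised anomaly-detection technique, to flag traffic that deviates from a learned baseline. The features and numbers are hypothetical and not drawn from any Radware or IBM product.

```python
# Sketch: unsupervised anomaly detection on network events with
# scikit-learn's IsolationForest. Feature choices are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [bytes_sent, bytes_received, requests_per_minute]
normal_traffic = np.random.normal(loc=[500, 2000, 30],
                                  scale=[50, 200, 5],
                                  size=(1000, 3))

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_traffic)

# A burst that looks nothing like the baseline traffic.
suspicious = np.array([[50000, 100, 600]])
print(model.predict(suspicious))  # -1 means "anomaly"
```

Production systems like Watson operate at vastly larger scale, but the principle is the same: learn what normal looks like, then let the model surface what a human analyst would never find by hand.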

Monitor the Bots

Bots and AI systems are often given unrestricted access to data stores, and real-time security monitoring is frequently configured to ignore these services because the system trusts them. But ignoring AI access opens a security hole: hackers can masquerade as AI and sneak through systems undetected.

So a third way that businesses can close AI security holes is by monitoring AI access the same way human and device access is monitored.

If a hacker uses the same login credentials as your AI system, you don’t want such activity to go unnoticed. 
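One minimal way to implement that idea is sketched below, under assumed names: treat the AI's service account like any other principal and alert when its logins arrive from hosts outside the ranges where the bot actually runs. The account name and IP ranges here are made up for illustration.

```python
# Sketch: flag logins by an AI service account that come from
# hosts outside its expected set. All names/addresses are made up.
from ipaddress import ip_address, ip_network

AI_ACCOUNT = "svc-ai-pipeline"                 # hypothetical account
EXPECTED_NETS = [ip_network("10.20.0.0/24")]   # where the bot runs

def check_login(account: str, source_ip: str) -> None:
    if account != AI_ACCOUNT:
        return
    if not any(ip_address(source_ip) in net for net in EXPECTED_NETS):
        print(f"ALERT: {account} login from unexpected host {source_ip}")

check_login("svc-ai-pipeline", "10.20.0.15")     # expected, stays silent
check_login("svc-ai-pipeline", "203.0.113.99")   # fires an alert
```

A real deployment would feed events like these into the same SIEM pipeline that watches human logins, rather than printing alerts; the essential change is simply refusing to treat the bot's credentials as above suspicion.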

Watch for Bad Data

A fourth tip for closing AI security holes is watching for malicious data manipulation. 

Machine learning uses data both for learning and for analysis, so one way cyber criminals can mess with corporate systems is by manipulating that data to trick AI systems into learning the wrong lessons.

This can be done by giving AI access to data that guides these systems in the wrong direction. Imagine, for instance, what could happen to financial or healthcare systems when AI is informed by a large set of incorrect data; there’s much room for mischief and exploitation. 

The way to close or at least reduce this security hole is by watching for unusual outlier data both manually and with AI, and by carefully monitoring the provenance of data fed into the system. 
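As a simple illustration of such an outlier screen, the sketch below scores incoming numeric records with a modified z-score based on the median absolute deviation before they ever reach the model. The 3.5 threshold is a common rule of thumb, not a Radware recommendation, and the data is fabricated for the example.

```python
# Sketch: screen incoming training records with a modified z-score
# (median absolute deviation). Threshold and data are illustrative.
import numpy as np

def mad_outliers(values: np.ndarray, threshold: float = 3.5) -> np.ndarray:
    median = np.median(values)
    mad = np.median(np.abs(values - median))
    # 0.6745 scales MAD to be roughly comparable to a standard deviation.
    scores = 0.6745 * (values - median) / mad
    return np.abs(scores) > threshold

historical = np.array([101.0, 99.5, 100.2, 98.9, 100.7])
incoming = np.array([100.4, 99.8, 250.0])   # 250.0 is a poisoned value

combined = np.concatenate([historical, incoming])
print(mad_outliers(combined)[-len(incoming):])  # [False False  True]
```

Statistical screens like this catch only the crudest poisoning, which is why the provenance checks mentioned above matter just as much: knowing where data came from is often a stronger signal than how it looks.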

There may be spectacular, headline-grabbing security issues down the road in the form of AI gone wild. But until then, we still have to deal with AI’s more pedestrian security issues: data access and AI-enhanced attacks.
