Can AI Solve Our Cybersecurity Challenges?
Can artificial intelligence keep enterprise data safe and stay ahead of hackers' latest techniques?
- By Brian J. Dooley
- November 21, 2017
Cybersecurity is becoming more difficult today as vulnerabilities are exploited in ever more sophisticated ways by an increasing array of malicious actors. One problem is the enormous amount of sensitive data stored online, creating new liabilities and challenges for security analysts. The Internet of Things (IoT) both vastly increases the volume of stored data and adds new threats based on real-world interactions from devices such as webcams and autonomous vehicles. This is not to mention the challenges of maintaining security on remote devices (discussed in my Upside article "The Internet of Things and the Security of Us").
The rapidly increasing threat levels make it difficult (if not impossible) to manage all aspects of security without employing a very large team of analysts. Yet the threat extends to companies of all sizes, and data volume and velocity are expanding as digitization proceeds. The result is that many companies struggle to keep up with security needs. Standard automated systems, although improving in sophistication, find it increasingly difficult to react quickly to unanticipated or newly emerging security issues.
In this environment, it is hardly surprising that many enterprises consider turning to AI, machine learning, or cognitive computing for potential salvation. In fact, it has been suggested that machine learning alone could ultimately provide a panacea, but success has been patchy. We are not yet ready to turn over all elements of security to a deep learning algorithm. One problem is that although a machine learning tool may recognize potential security breaches or attacks, the identified "threats" include far too many false positives. Machine learning lacks the general knowledge required to distinguish real threats -- and we are still far from creating an artificial general intelligence.
Machine learning can play an important role in cybersecurity. It must, however, be treated with caution and applied to problems it can actually solve. It is most useful for specific, well-defined issues, can be extended through composite solutions, and works best in conjunction with a team of human analysts.
Man-Machine Analyst Teams
The main problem is still the lack of trained security analysts. A hybrid human-machine approach is likely to be necessary and will appear in different areas, such as model building and categorization based on analyst screening, unsupervised anomaly detection with analyst review, and even analysis of cybersecurity literature to determine current threats.
One example of a hybrid approach is AI2 from MIT's Computer Science and Artificial Intelligence Lab (CSAIL). AI2 is an adaptive cybersecurity platform that uses machine learning combined with expert analysis to adapt and improve security response over time. In this system, human analysts handle discrimination tasks and help to build models while the machine learning algorithms are trained on and act upon the enormous data stores made available to the system. AI2 is already able to detect 85 percent of attacks and reduce the number of false positives by a factor of 5.
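The human-in-the-loop pattern described above can be sketched in a few lines: an unsupervised detector flags statistical outliers, an analyst reviews them, and analyst-cleared sources are suppressed on the next pass, reducing false positives over time. This is a minimal illustration of the general workflow, with synthetic data and invented names; it is not the AI2 system or any vendor's implementation.

```python
import statistics

def flag_anomalies(events, threshold=3.0, cleared=frozenset()):
    """Flag (source, value) pairs whose value lies more than `threshold`
    standard deviations from the mean, skipping analyst-cleared sources."""
    values = [value for _, value in events]
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values) or 1.0  # avoid divide-by-zero
    return [
        (source, value)
        for source, value in events
        if source not in cleared and abs(value - mean) / stdev > threshold
    ]

# Synthetic failed-login counts per source IP: mostly quiet, two outliers.
events = [("10.0.0.%d" % i, 3) for i in range(50)]
events += [("198.51.100.7", 120), ("203.0.113.9", 95)]

# Pass 1: the unsupervised detector surfaces outliers for analyst review.
flagged = flag_anomalies(events)

# Pass 2: the analyst clears one source as a known batch job, so the
# system no longer raises that false positive.
flagged_after_review = flag_anomalies(events, cleared={"203.0.113.9"})
```

Real systems replace the z-score test with richer models and feed analyst labels back into supervised training, but the division of labor is the same: the machine narrows millions of events to a handful, and the human supplies the judgment the model lacks.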
Use of some machine learning components in combination with human analysts is becoming commonplace within larger corporations. Many vendors are operating in this area, including the major AI companies and a galaxy of startups focusing on specific cybersecurity problems. Examples include use of IBM's Watson for Cyber Security (to analyze documents and logs for security issues), Microsoft's recently acquired Hexadite (for agentless, automatic incident investigation and remediation), Google's cybersecurity efforts (including a recent machine-learning-based Android security initiative), and China's search giant Baidu (using a deep learning approach to identify malware). Startups include Interset (which employs a library of more than 300 machine-learning and advanced-analytics models), Massive Intelligence (which provides a range of detection products based on machine learning), and Deep Instinct, Darktrace, and Cylance (providing variations of machine learning cybersecurity).
Machine learning can reduce the complexity of security work by categorizing incidents and filtering out minutiae. Applied in areas where it is a good fit, it is an excellent way to extend analyst resources and locate issues that might easily be overlooked in overwhelmingly large data streams.
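To make the categorization idea concrete, here is a toy multinomial naive Bayes classifier trained on a handful of labeled alert messages, which then buckets new alerts by incident type. The categories, messages, and class names are invented for illustration; production systems train far richer models on far more data, but the principle, turning a stream of raw alerts into a few labeled buckets an analyst can triage, is the same.

```python
import math
from collections import Counter, defaultdict

# Hypothetical labeled training alerts (synthetic, for illustration only).
train = [
    ("failed login for admin", "brute-force"),
    ("invalid password attempt", "brute-force"),
    ("ransomware signature detected", "malware"),
    ("trojan found in download", "malware"),
    ("port scan from external host", "scan"),
    ("probe of closed ports", "scan"),
]

class NaiveBayes:
    """Multinomial naive Bayes over whitespace-tokenized alert text."""

    def __init__(self, examples):
        self.word_counts = defaultdict(Counter)  # label -> token counts
        self.label_counts = Counter()
        self.vocab = set()
        for text, label in examples:
            self.label_counts[label] += 1
            for word in text.lower().split():
                self.word_counts[label][word] += 1
                self.vocab.add(word)
        self.total = sum(self.label_counts.values())

    def predict(self, text):
        words = text.lower().split()

        def score(label):
            # Log prior plus log likelihood with add-one smoothing.
            s = math.log(self.label_counts[label] / self.total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for word in words:
                s += math.log((self.word_counts[label][word] + 1) / denom)
            return s

        return max(self.label_counts, key=score)

model = NaiveBayes(train)
```

With this model, an alert such as "failed login from 10.0.0.5" is routed to the brute-force bucket even though the exact wording never appeared in training, which is precisely the kind of minutiae-reducing categorization the text describes.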
Into the Future
Cybersecurity is a key development area for machine learning and AI tools. In particular, as IoT continues to expand, we can expect increased interest in the use of machine learning to counter security threats. Further evolution is likely to bring greater automation, greater detection accuracy, and fewer false positives. It is likely, however, that human analysts will continue to be required to bridge the gap between machine learning outputs and the more general issues arising from AI's current limitations and from the interactions among security, social engineering, and machine learning.
Even as cybersecurity applications continue to develop, AI and machine learning techniques are being adopted by hackers to exploit weaknesses in corporate and consumer security systems. As machine learning becomes more prevalent in security, hackers will look for ways to predict how machine learning defenses respond. This will further escalate the ongoing arms race between security analysts and hackers.
There will be notable successes and notable failures, but the data being generated will continue to grow in volume and velocity beyond the capabilities of manual or even simple programmatic methods. The role of machine learning will certainly continue to expand.
Brian J. Dooley is an author, analyst, and journalist with more than 30 years' experience in analyzing and writing about trends in IT. He has written six books, numerous user manuals, hundreds of reports, and more than 1,000 magazine features. You can contact the author at firstname.lastname@example.org.