The Secret of Protecting Society Against AI: More AI?
As with many societal breakthroughs, there is a dark side to the upcoming AI revolution. It will fall to analytics professionals to help manage the resulting cybersecurity risks, and their secret weapon will be more AI.
- By Troy Hiltbrand
- September 27, 2023
Lately, we have heard a chorus of industry experts singing the praises of generative AI and its potential beneficial impact on society. With use cases that span all areas of the business from marketing, sales, and human resources to accounting and finance, this technology has versatility that is truly mind-blowing. As more companies race to integrate AI capabilities into their products, we can start to envision a world where employees empowered with augmented technology achieve previously unimaginable levels of productivity.
However, on the periphery of this hype is another set of use cases that paints a darker picture. The same technologies that will transform the productivity and efficiency of employees will also arm cybercriminals with the tools they need to ramp up their game. The field of cybersecurity operations will need new tools, new techniques, and employees with a whole new set of analytical skills to be successful.
Where do analytics professionals within an organization fit into this game of cybersecurity measures and countermeasures?
The common thread running through all of these challenges is data. Data is the medium that powers the productivity gains associated with AI, and it is the medium that will give cybersecurity operations the assets they need to effectively counter the new generation of attacks on the horizon. Analytics professionals will be called on to assist across the organization in the implementation, refinement, and ongoing operation of AI capabilities because of their long-established expertise in areas such as data and feature engineering, data quality management, modeling and simulation, big data management, and model generation and validation.
With the changing landscape of cybersecurity operations, let’s dig into how analytics professionals can assist in addressing the challenges associated with protecting the business from AI-powered nefarious actors.
Combatting Deepfakes
One of the areas of greatest concern with generative AI tools is the ease with which deepfakes -- images or recordings that have been convincingly altered and manipulated to misrepresent someone -- can be generated. Whether it is highly personalized emails or texts, audio generated to match the style, pitch, and cadence of actual employees' voices, or even video crafted to appear indistinguishable from the real thing, phishing is taking on a new face. To combat this, tools, technologies, and processes must evolve to create verifications and validations that ensure the parties on both ends of a conversation are trusted and validated.
One of the methods of creating content with AI is the generative adversarial network (GAN). In this approach, two models -- one called the generator and the other the discriminator -- compete to produce output that is almost indistinguishable from the real thing. During training, the process alternates between the generator creating output and the discriminator trying to guess whether that output is real or synthetic. When the output reaches a predefined level of quality, it is delivered. Strengthening these discriminators will be at the core of combatting the proliferation of malicious generative AI.
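The adversarial loop can be illustrated with a deliberately tiny sketch. This is not how production GANs are built (real ones use deep networks over images or audio); here the "data" is a single number, the generator has one parameter, and the discriminator is a logistic model -- all assumptions chosen purely to make the back-and-forth between the two players visible in a few lines.

```python
import math
import random

random.seed(0)

REAL_MEAN = 4.0   # "real" content comes from N(4, 1) -- an illustrative choice
LR = 0.05         # learning rate for both players (arbitrary for this toy)
STEPS = 2000

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Discriminator D(x) = sigmoid(w*x + b); generator G(z) = z + mu.
# The generator starts far from the real distribution (mu = 0).
w, b, mu = 0.0, 0.0, 0.0

for _ in range(STEPS):
    x_real = random.gauss(REAL_MEAN, 1.0)
    x_fake = random.gauss(0.0, 1.0) + mu

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    w -= LR * ((d_real - 1.0) * x_real + d_fake * x_fake)
    b -= LR * ((d_real - 1.0) + d_fake)

    # Generator step: push D(fake) toward 1 (non-saturating GAN loss)
    d_fake = sigmoid(w * x_fake + b)
    mu -= LR * (d_fake - 1.0) * w

print(f"generator mean drifted from 0.0 toward {mu:.2f} (real mean is {REAL_MEAN})")
```

Over the training loop, the generator's parameter drifts toward the real distribution's mean precisely because the discriminator keeps learning to tell the two apart -- the same dynamic that makes GAN output hard to detect, and that defensive discriminators must keep pace with.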
Discriminator models will need to be continuously evolved and strengthened to distinguish between artificial and real content. They will also need to be optimized to assess streams of data in real time and provide end users with probability scores indicating whether a conversation is genuine. This will require investments in hardware and tools, and it will demand that analytics professionals leverage their experience in developing, training, and testing these discriminator models to support cybersecurity operations.
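One practical wrinkle in real-time scoring is that per-message scores are noisy, so a single reading should not trigger or clear an alert on its own. A minimal sketch of one common smoothing approach is below; the scores, the smoothing factor, and the alert threshold are all hypothetical values standing in for the output of a trained discriminator.

```python
ALERT_THRESHOLD = 0.5  # hypothetical cutoff; a real deployment would tune this

def smooth_scores(raw_scores, alpha=0.3):
    """Exponentially smooth per-message 'synthetic' probabilities so one
    noisy score does not flip the alert state by itself."""
    smoothed, ema = [], None
    for p in raw_scores:
        ema = p if ema is None else alpha * p + (1 - alpha) * ema
        smoothed.append(ema)
    return smoothed

# Example: a conversation whose later messages score increasingly synthetic
stream = [0.10, 0.15, 0.20, 0.70, 0.80, 0.90]
scores = smooth_scores(stream)
alerts = [i for i, s in enumerate(scores) if s >= ALERT_THRESHOLD]
# the smoothed score only crosses the threshold at the final message (index 5)
```

The design trade-off is latency versus stability: a larger alpha reacts faster to a mid-conversation takeover by synthetic content but also raises more false alarms.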
Breach-and-Attack Simulation and Response
As the attack surface increases and new threat vectors arise, cybersecurity operations must become more adept at anticipating the unanticipated. Analytics professionals have had decades of experience supporting business modeling and simulation exercises, whether they be budgetary and financial simulations, operational what-if scenario planning, or business resource optimization problems. These same skills will be useful as the cybersecurity operations teams race to get ahead of malicious generative AI scenarios.
Because much of this is new and uncharted territory, there will be limited data sets to support their breach-and-attack simulations. This is where AI can be leveraged to help an organization create synthetic data sets representing the new and evolving threat landscape. With this synthetic data, cybersecurity operations teams will be able to simulate scenarios under evolving circumstances and dynamically generate plans and responses to the shifting threat landscape. The unique perspective that analytics professionals have in the areas of data engineering and data management will be critical in handling the complexities and nuances associated with these efforts.
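At its simplest, building such a synthetic data set means drawing labeled events from parameterized distributions. The sketch below is illustrative only: the field names, domains, and distribution parameters are invented for this example, whereas a real simulation would be parameterized from threat intelligence and observed traffic.

```python
import random

random.seed(42)

# Hypothetical sender domains for a phishing-simulation scenario
SENDER_DOMAINS = ["partner-corp.com", "payr0ll-update.net", "internal.example.com"]

def synth_event(is_attack: bool) -> dict:
    """Draw one synthetic email event; attack events skew toward
    urgent language and more embedded links (assumed feature schema)."""
    return {
        "label": "attack" if is_attack else "benign",
        # Beta distributions push attack urgency high and benign urgency low
        "urgency_score": random.betavariate(5, 2) if is_attack else random.betavariate(2, 5),
        "sender_domain": random.choice(SENDER_DOMAINS),
        "link_count": random.randint(2, 8) if is_attack else random.randint(0, 3),
    }

def synth_dataset(n: int, attack_rate: float = 0.2) -> list:
    """Generate n labeled events with roughly attack_rate attacks."""
    return [synth_event(random.random() < attack_rate) for _ in range(n)]

events = synth_dataset(1000)
```

A data set like this lets a team rehearse detection and response playbooks against attack patterns that have not yet appeared in their own logs, then swap in better-grounded parameters as real incident data accumulates.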
Digital Risk Protection Services
Another risk on the horizon is data -- whether real or fabricated -- spreading across the Internet with the potential to hurt an organization or tarnish its brand. Proprietary data, both public-facing and private, gets disseminated across the surface web, social media, and deep and dark web sources, and it can pose a risk to a company if it falls into the wrong hands. Information about what is out there needs to be gathered, cleansed, and presented in a manner that supports decisions about how to manage the risks associated with this distributed data.
In addition, bad actors are increasingly leveraging generative AI to create seemingly legitimate data -- including audio, video, and text -- with the singular objective of hurting companies or their employees, including executives and spokespeople, and of harming the company's reputation. Analytics professionals, working alongside specialized services focused on this type of data discovery, will be able to partner with cybersecurity operations to surface potential risks and suspicious activity across these obscure data sources, alert the business, and assist with recovery and mitigation plans. This work requires the ability to identify outliers and to connect data living in multiple sources to create and validate inferences. Given the magnitude of the available data and the apparent authenticity of the counterfeit information in circulation, this is almost impossible to accomplish without AI assistance to discover, synthesize, validate, and respond to the expanded digital threat landscape. Analytics professionals will be critical to this effort.
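Spotting outliers in monitoring feeds is bread-and-butter analytics work. One robust technique -- the modified z-score based on the median absolute deviation, which is not skewed by the very outliers it hunts -- is sketched below on invented brand-mention counts; the data and threshold are illustrative, not from any real feed.

```python
import statistics

def mad_outliers(values, threshold=3.5):
    """Return indices whose modified z-score exceeds the threshold.
    Uses median and median absolute deviation (MAD) rather than mean
    and standard deviation, so extreme points don't mask themselves."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:
        return []  # no spread at all; nothing to flag
    # 0.6745 rescales MAD to be comparable to a standard deviation
    return [i for i, v in enumerate(values)
            if abs(0.6745 * (v - med) / mad) > threshold]

# Example: daily brand-mention counts with one suspicious spike
mentions = [12, 15, 11, 14, 13, 12, 96, 14, 13]
flagged = mad_outliers(mentions)  # flags index 6, the spike
```

In practice a spike like this would prompt a closer look at where the surge of mentions originated -- and whether the underlying content is genuine or generated.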
The future of AI is bright and the landscape is changing quickly. However, at the same pace that positive, value-generating use cases are imagined and implemented, those looking to harm a business are devising fraudulent and malicious ones. Analytics professionals need to be paired with cybersecurity operations teams to help assess the landscape and implement analytics solutions that react and respond to this new level of organizational risk. Analytics skills, techniques, and resources that have gone underused should be poised for a resurgence of interest as companies look at how to leverage AI for good and protect against its misuse.
Troy Hiltbrand is the chief information officer at Amare Global where he is responsible for its enterprise systems, data architecture, and IT operations. You can reach the author via email.