Depending on your point of view, artificial intelligence (AI) could be either the best thing ever to happen to cybersecurity or the worst. In either case, expect the two to be inextricably intertwined for a very long time. That means it’s time for MSPs and other tech companies to understand how the two will work together, how to operationalize and monetize AI, and to be prepared to walk customers through what it all means.
Innovations in AI, including the remarkable ascension of the ChatGPT chatbot and other large language models (LLMs) over the last several months, are on the brink of totally disrupting cybersecurity, business, everything, and they are also accelerating innovation at a pace we’ve not seen before, said cybersecurity and tech leaders at RSA Conference 2023 in San Francisco.
“Whether you succeed, survive or fail will be determined by getting this transition right. You have to develop a sense of urgency. Disrupt or be disrupted,” said John Chambers, founder and CEO of JC2 Ventures and former longtime CEO of Cisco Systems, during an RSA keynote. “We went from the internet era to the era of cloud and now we’re seeing the same with AI—just in the last six months. And it’s going to be bigger than the internet and cloud combined.”
A Permanent, Profound Impact on Everything
AI and cybersecurity are going to impact everything we do from now on, and we need to plan and respond appropriately, according to Vivian Schiller, executive director of Aspen Digital at the Aspen Institute.
“This is not just about the U.S. military; this is about every realm of our society. We need to stop thinking about AI as a service to the way we do things now,” said Schiller, moderator of the keynote presentation. “We need to think about an entirely different paradigm, and the way AI is going to impact every aspect of life.”
The marriage of AI and cybersecurity will even have a profound impact on our critical infrastructure and military, said U.S. Army Gen. (Ret.) Richard D. Clarke, during the keynote with Chambers and Schiller.
It’s imperative, he said, that the United States get on top of both AI and cybersecurity innovations, and stay there, in order to thwart nation-state threats both physical and virtual.
“I expect AI to provide a lot more autonomy, for example, one person controlling 20 planes. We have to envision, with AI and autonomy, how we’re going to fight with swarms and autonomous things small and dispersed across broad sectors,” Clarke said. “Until we start getting to that point and pointing out where threats are coming from, I think we haven’t won.”
Amid concerns that AI can and will be used to create deepfake videos and spread misinformation, the same technology can also help us identify the fakes and avoid escalation, Clarke said.
“We have to think about how we’re going to share information, how we’re going to highlight deepfakes. The companies that can get to and figure out deepfakes and share them, it’s going to make a difference,” he said.
Leveling the Threat Playing Field
At issue is whether AI-powered capabilities will ultimately help or hinder cybersecurity. After all, if security companies can leverage AI to counter today’s threats, can’t bad actors leverage AI to devise new attack strategies as well?
The answer, cyber experts acknowledge, is that AI will dramatically shorten the time that a new piece of malware or other threat remains viable. Ultimately, that levels the playing field and should have a positive impact on business globally, according to George Kurtz, CEO of CrowdStrike.
“The cost of cyberattacks is expected to reach $24 trillion by 2027 (according to Statista). Just imagine what we could do with $24 trillion if we could invest it,” Kurtz said during an RSA keynote.
Maybe we can invest some of it if we can learn to harness the power of LLMs combined with the tremendous amount of data constantly created by every click, transaction, incident, response, everything, Kurtz said.
“We need to understand these actors. When threat intel meets hyperscale meets AI, that’s when security-specific AI comes into play,” Kurtz said. “That can help us defend at machine scale and machine speed. Generative AI can interact and build on the LLMs. It learns and augments and finds context. What used to take days, weeks, months can now take minutes.”
Using security-specific AI will help us respond more quickly and more accurately to new threats, lessening the impact and damage they can cause in their early days.
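To make Kurtz’s “machine speed” point concrete, here is a minimal sketch of what an LLM-assisted alert triage step could look like. It is illustrative only: query_llm is a hypothetical placeholder, not any vendor’s actual API, and a real deployment would keep an analyst in the loop.

def query_llm(prompt: str) -> str:
    """Hypothetical placeholder; swap in a real chat-completion client here."""
    raise NotImplementedError("wire up an LLM provider")

def triage_alert(alert: dict) -> str:
    """Condense raw alert fields into a prompt and ask the model for a verdict."""
    prompt = (
        "You are a SOC analyst. Given this alert, state the severity "
        "(low/medium/high), the likely technique, and one recommended next step.\n"
        f"host={alert['host']} process={alert['process']} "
        f"dest_ip={alert['dest_ip']} signature={alert['signature']}"
    )
    return query_llm(prompt)

alert = {
    "host": "fin-ws-042",
    "process": "powershell.exe -enc ...",
    "dest_ip": "203.0.113.7",
    "signature": "suspicious encoded command",
}
# print(triage_alert(alert))  # an analyst still reviews the verdict before acting

The point of the pattern is speed: the summarization and first-pass classification that used to consume an analyst’s first hour happens in seconds, while the judgment call stays human.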
AI will help us move from a defensive-minded approach to an offensive and, ultimately, a predictive one. It can also help reduce the cybersecurity skills gap, because LLMs can personalize cyber education and training to fit any individual’s learning preferences and abilities.
“We all learn differently. What if you could learn cyber in a language you understand? This could help humans address things they’ve not been able to do before and do it in minutes,” said Michael Sentonas, CrowdStrike’s president, during the RSA keynote.
“Technology can be overwhelming. Now you have an ally. You can pull in a skill set you don’t have to help you or interact with one you have. It gives you time back and puts the power back in your hands,” Sentonas said.
Proceed Thoughtfully, With Caution
Perhaps the most insightful guidance on managing AI innovations and expectations in the near future is this: make sure consumers of AI-generated content know that there are flaws, and there always will be. After all, an AI model is built on code and data that humans provide, and it carries our biases, foibles and … humanity.
“There is no perfect model, no platonic ideal of any system. Period,” said Dr. Rumman Chowdhury, founder of Bias Buccaneers, in an RSA session titled Security as Part of Responsible AI: At Home or At Odds? “We don’t have that expectation of software or toothbrushes. We must have reasonable expectations. We know nothing is perfect.”
Championing responsibility at the governmental, corporate and even individual levels will be extremely important, said Vijay Bolina, CISO at DeepMind, during the Responsible AI session.
“AI will require cross-collaboration with various expertise. Within a quarter’s timeframe, it’s extremely important for all folks exploring these technologies to understand what the threat model is and take appropriate risk management steps to manage near-term vs. long-term risks,” Bolina said. “There should always be a human in the loop for the foreseeable future to validate what comes from these systems, whether it’s a recommendation or a summarization.”
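Bolina’s human-in-the-loop requirement maps to a simple coding pattern: treat every model output as a proposal that a person must approve before anything executes. A minimal sketch, with a hypothetical recommend() standing in for the model call:

def recommend(incident: str) -> str:
    """Hypothetical model call returning a suggested remediation."""
    return f"Isolate the affected host and reset credentials for: {incident}"

def apply_remediation(action: str) -> None:
    """Stand-in for the automation that would actually carry out the action."""
    print(f"Executing: {action}")

incident = "fin-ws-042 beaconing to 203.0.113.7"
proposal = recommend(incident)
print(f"Model proposes: {proposal}")

# Nothing executes until a person explicitly validates the recommendation.
if input("Approve? [y/N] ").strip().lower() == "y":
    apply_remediation(proposal)
else:
    print("Rejected; escalating to a senior analyst instead.")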
Added Daniel Rohrer, vice president of software product security at NVIDIA, “The very large models are interesting because you see emergent properties, new capabilities. But the vast majority of use cases don’t need or want these (LLMs) because they’re expensive to run. Smaller models are more appropriate to run. And what happens if you ask the same question three days apart? Will you get the same answer? We want to be consistent and accurate. This is a big shift.”
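Rohrer’s question about asking the same thing three days apart suggests a basic reliability test: run one prompt several times and measure how often the answers agree. A minimal sketch, assuming a hypothetical ask() wrapper around whatever model is deployed:

from collections import Counter

def ask(prompt: str) -> str:
    """Hypothetical model wrapper; with non-zero sampling temperature,
    repeated calls may return different answers."""
    raise NotImplementedError("wire up an LLM provider")

def consistency_check(prompt: str, runs: int = 5) -> float:
    """Ask the same question several times; return the fraction of runs
    that agree with the most common answer."""
    answers = [ask(prompt) for _ in range(runs)]
    _, top_count = Counter(answers).most_common(1)[0]
    return top_count / runs

# agreement = consistency_check("Is this encoded PowerShell command malicious?")
# A low agreement score flags a prompt/model pair that is not yet
# consistent enough for unattended use.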
Want to join the conversation?
Register for CompTIA’s AI Technology Interest Group and share your opinions and questions with like-minded peers.