AppSec in the Age of AI: Predicting Challenges and Opportunities
Generative AI became the ultimate must-watch technology in 2023, as developers began to generate code with the help of large language models (LLMs). AI-generated code entered code bases for the first time, and security leaders were forced to grapple with the impact of AI on their organization’s application security posture.
If this year made one thing clear to the AppSec world, it’s this: artificial intelligence will have far-reaching implications for every person involved in developing, testing, securing, or maintaining code.
Where will 2024 take AppSec – and how will new and emerging technologies impact open source security in years to come?
In this blog, I’m going to make some predictions for the upcoming year – and then go a bit further, looking at where I think these technologies will lead in five and even ten years, as the entire tech and security market adjusts to this sea change in how software is made, attacked, and defended.
No one can tell exactly what’s to come, but the history of innovation gives us many lessons to draw on and patterns to help inform our predictions.
I see the coming decade as doing to AppSec what the Cambrian Explosion did to life on planet Earth: adding a huge amount of diversity to both the challenges and solutions of keeping up in a rapidly changing environment. While the AppSec world of 2024 may look much like that of 2023, by 2034, I believe we’ll have seen permanent, structural changes to how we solve the problem of securing software.
Let’s start with what’s right around the corner – 2024.
AppSec in 2024: Rapid-Fire Changes
Before we take a look at the massive changes coming to the AppSec landscape, I’d like to start with a caveat from history.
Think about what the internet means to people’s lives today – how frequently we depend on it, and how much labor it saves. But of course, that didn’t happen overnight.
ChatGPT debuted on November 30, 2022, so by the end of 2024 it will be just over two years old. The mainstream web was roughly the same age back in 1995 – when websites still looked like documents, and search engines were in their infancy.
In 1995, we were a long way from online banking, ecommerce, collaborative editing … even Wikipedia and Google were still years away. But even in those first two years of the modern internet, the first players were already on the scene – and you could begin to see what was taking shape.
In San Jose, back in that pivotal year of 1995, the auction site that would become eBay was being built, and a pair of graduate students at Stanford incorporated a new company they called Yahoo!, intended to index the rapidly growing number of documents on the World Wide Web.
Here’s my take on how AppSec and AI will come together in this pivotal year of 2024:
- AI-adjacent attacks in the news – Software companies and security leaders will be spurred to action when AI is used to insert malicious code (made deliberately difficult to detect) into open source projects. AI can take approaches that differ from human-crafted attacks, which could allow AI-generated malicious code to evade detection by some automated tools.
In other attacks, AI will be the target – not the method. Companies’ AI models and training data represent a juicy opportunity for attackers to steal information. In the worst-case scenario, an attacker could even covertly alter an AI model, causing it to generate intentionally incorrect responses. It’s not hard to see how, in some contexts – for example, a missile targeting system or a thermal sensor for a power plant – such an alteration could have serious, even deadly, consequences.
- A million flowers bloom – Money is flowing into AI development, and nearly every company in the world today is trying to leverage LLMs. An explosion of AI tools will be the immediate result. While many of the new AI companies will never get past the “minimum viable product” stage, others will find new and viable solutions to previously unsolved problems.
As these applications become more important to our lives and work, a large number of new companies will need to secure products whose core value lies in their AI models. For the first time, these models will be the “crown jewels” of entire companies. That makes them secrets attackers will want to steal or modify … which in turn will give rise to new solutions designed to secure the models themselves.
- More code, more problems – As more developers use AI-generated code to complete engineering tasks faster, they’ll generate a lot more code. That could be good news for their product managers and customers … but from a security perspective, more code will almost always mean more vulnerabilities.
In many organizations, vulnerability management is already at a breaking point, with backlogs climbing as nearly a hundred new CVEs are reported every single day. As code velocity increases, organizations will have to narrow their aperture, focusing only on the vulnerabilities that represent a genuine source of risk.
With developer time stretched thin, tools that generate large numbers of non-exploitable findings – forcing developers either to prove non-exploitability or to waste time fixing everything, exploitable or not – will increasingly be seen as a liability rather than an asset. (A rough sketch of what this kind of exploitability-based triage might look like follows this list of predictions.)
- Compliance gets complicated – As AI-generated code and organization-specific AI models become an important part of corporate IP, the overall compliance picture becomes vastly more complex. Code generated by AI is shaped by the publicly available code the model was trained on. What happens when the code it produces closely mirrors an open-source library whose license doesn’t permit your intended use? (A simple license-allowlist check is sketched after this list.)
Data can be compromised in other ways that jeopardize intellectual property and sensitive information, too. Reconnaissance tools may emerge that automatically extract corporate information revealed to AI models, and developers may reveal secrets to AI coding assistants. Compliance leaders will need to set differing standards for how developers are allowed to use those assistants, depending on the level and type of risk an application will be exposed to once deployed.
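To make the “narrow the aperture” idea from the vulnerability-management prediction concrete, here is a minimal Python sketch of exploitability-based triage. The `Finding` fields (`reachable`, `exploit_available`) and the filtering rules are illustrative assumptions, not any particular tool’s data model; real reachability analysis is far more involved.

```python
# A minimal sketch of exploitability-based triage: keep only findings that
# carry evidence of real risk, instead of handing developers every CVE.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    package: str
    severity: str            # e.g. "critical", "high", "medium", "low"
    reachable: bool          # is the vulnerable code actually invoked? (assumed upstream analysis)
    exploit_available: bool  # is a public exploit or active campaign known?

def prioritize(findings: list[Finding]) -> list[Finding]:
    """Return only findings worth a developer's time, highest risk first."""
    actionable = [
        f for f in findings
        if f.reachable and (f.exploit_available or f.severity in ("critical", "high"))
    ]
    severity_rank = {"critical": 0, "high": 1, "medium": 2, "low": 3}
    return sorted(actionable, key=lambda f: (severity_rank[f.severity], not f.exploit_available))

if __name__ == "__main__":
    backlog = [
        Finding("CVE-2024-0001", "libfoo", "critical", reachable=False, exploit_available=True),
        Finding("CVE-2024-0002", "libbar", "high", reachable=True, exploit_available=False),
        Finding("CVE-2024-0003", "libbaz", "medium", reachable=True, exploit_available=True),
    ]
    for f in prioritize(backlog):
        print(f.cve_id, f.package, f.severity)
```

Run against the sample backlog, only the two reachable findings survive – the unreachable “critical” CVE drops out of the developer-facing queue entirely.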
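And for the licensing question raised in the compliance prediction, a hypothetical allowlist check is about the simplest possible starting point. The package names, licenses, and approved list below are invented for illustration; a real check would work from an SBOM or dependency manifest rather than a hard-coded dictionary.

```python
# A hypothetical sketch of a license-compliance gate: flag any dependency whose
# declared license is not on an organization-approved allowlist.
ALLOWED_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause"}

dependencies = {
    "left-pad-ng": "MIT",
    "fastcrypto": "GPL-3.0-only",   # copyleft terms may not fit the intended use
    "httpkit": "Apache-2.0",
}

violations = {name: lic for name, lic in dependencies.items() if lic not in ALLOWED_LICENSES}

if violations:
    for name, lic in violations.items():
        print(f"License review needed: {name} is licensed under {lic}")
else:
    print("All dependencies use approved licenses")
```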
AppSec in 2029: Taming the Beast
In five years, AI will no longer be the new kid on the block in the technology community. Established players will have staked out territory, and both organizations and individuals will have started to align themselves broadly with specific brands.
But in some ways, the honeymoon will be over. People will no longer be impressed by the types of results from AI models that were shockingly powerful in 2023. The big players will undoubtedly have weak spots – and a nascent second generation of competitors will be starting to form in response to these perceived weaknesses.
For most people – in and out of the tech industry – the biggest challenge will be what to do with the vast amount of information generated by AI models and tools.
How will this play out in AppSec? Here’s what I think:
- VEX goes automated – VEX (short for Vulnerability Exploitability eXchange) is a companion artifact to an SBOM that records whether the known vulnerabilities in a product’s components are actually exploitable in that product. Today, these artifacts are usually generated manually, by expensive consultants.
That’s just not sustainable. The exploitability picture for any piece of software is constantly in flux as new CVEs are disclosed. Automating VEX creation, allowing an up-to-the-minute assessment of exploitability, will be key to ensuring that customers can get a clear picture of risk from their vendors. (A minimal sketch of what automated generation might produce appears after this list.)
- Curation > generation – Across many industries, there will be a gradual transition away from the idea that value comes from generating content. Just as search engines will have to contend with a web filling up with AI-generated content, developers will have to contend with a rapidly growing number of open source packages produced with AI-assisted coding. They may even face floods of entirely AI-generated packages of dubious value, some with malicious code inserted.
This will lead to an astronomical rise in the number of published open source packages and known vulnerabilities. There will come a time when listing every known vulnerability anywhere in your code base will be like printing off a list of all the websites on the internet – possible only in an earlier era.
As these numbers skyrocket, the primary role of AppSec tools will move from generating a list of vulnerabilities to curation and focus, helping organizations identify their most urgent risks.
- Real-time threat detection – Detection of breaches and attacks has generally been possible only after damage has been done. By 2029, AI will have changed the way attacks are detected.
AI will be combined with emerging tools that enable deeper, more contextual visibility into applications, detecting abnormal behaviors automatically to identify (and even block) attacks in progress at the application layer. Beyond limiting the damage from breaches, real-time anomaly and threat detection may make it easier to catch the cyber criminals responsible for launching attacks. (A simplified sketch of this kind of baseline-versus-runtime comparison follows the VEX example below.)
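As a concrete illustration of the automated-VEX prediction, here is a rough Python sketch of a pipeline that turns scanner findings plus reachability data into per-vulnerability exploitability statements. The document shape is loosely modeled on formats such as OpenVEX, but the field names and values are simplified assumptions rather than a spec-compliant artifact, and the reachability input is assumed to come from some upstream analysis.

```python
# A rough sketch of automated VEX generation: combine scanner findings with
# reachability results and emit per-vulnerability exploitability statements.
# Field names below are illustrative, not a compliant implementation of any spec.
import json
from datetime import datetime, timezone

def build_vex(product_id: str, findings: list[dict]) -> dict:
    statements = []
    for f in findings:
        if f["reachable"]:
            status, justification = "affected", None
        else:
            status, justification = "not_affected", "vulnerable_code_not_in_execute_path"
        statement = {
            "vulnerability": f["cve_id"],
            "products": [product_id],
            "status": status,
        }
        if justification:
            statement["justification"] = justification
        statements.append(statement)
    return {
        "author": "example-automated-pipeline",          # hypothetical pipeline identity
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "statements": statements,
    }

if __name__ == "__main__":
    findings = [
        {"cve_id": "CVE-2024-0002", "reachable": True},
        {"cve_id": "CVE-2024-0001", "reachable": False},
    ]
    print(json.dumps(build_vex("pkg:example/my-app@1.4.2", findings), indent=2))
```

The point is less the format than the workflow: when a new CVE lands, the same pipeline can re-run and refresh the exploitability picture in minutes rather than waiting on a consultant’s next review cycle.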
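And for the real-time detection prediction, the core idea is a baseline-versus-runtime comparison. The sketch below is deliberately simplified – the service names, event types, and baselines are invented, and a production system would observe real syscalls, library loads, or network flows rather than hand-written events – but it shows the shape of the logic: anything outside expected behavior triggers an alert (or a block).

```python
# A deliberately simplified illustration (not a production detector) of runtime
# anomaly detection: compare observed application behavior against a learned
# baseline of expected behavior and flag deviations.
EXPECTED_BEHAVIOR = {
    ("payment-service", "outbound_connection"): {"api.stripe.example", "db.internal.example"},
    ("payment-service", "child_process"): set(),  # this service never spawns processes
}

def check_event(service: str, event_type: str, target: str) -> None:
    allowed = EXPECTED_BEHAVIOR.get((service, event_type))
    if allowed is None or target not in allowed:
        # In a real system this could raise an alert or block the action in-line.
        print(f"ALERT: {service} performed unexpected {event_type} -> {target}")
    else:
        print(f"ok: {service} {event_type} -> {target}")

if __name__ == "__main__":
    check_event("payment-service", "outbound_connection", "db.internal.example")
    check_event("payment-service", "outbound_connection", "203.0.113.9")  # unexpected host
    check_event("payment-service", "child_process", "/bin/sh")            # unexpected shell
```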
AppSec in 2034: AI Hits Maturity
After ten years of living and working with AI, nearly everyone will treat the technology as part of everyday life. Even people who don’t work directly with AI in any capacity will live in a world where AI-generated data is woven into customer interactions, infrastructure development, and entertainment options.
For those tasked with securing applications, AI by 2034 will be a friend – and a foe. But it will also be “the water we’re swimming in,” and imagining life without it will become increasingly difficult.
By the mid-2030s:
- Challenges to Open-Source Development – If copyright law challenges to AI-generated content fail and code generation AI continues to improve, AI-generated code created in-house may become a meaningful competitor to using open-source libraries.
Of course, a lot of this depends on regulatory decisions that can’t be accurately forecast – but fundamentally, organizations use open-source libraries because doing so is a cheaper and faster way to build applications than writing everything in-house. If open-source development platforms fill with low-quality, AI-generated code, the balance of effort may shift toward generating code in-house.
This would have significant implications for application security across the board, as tools designed to check the exploitability of AI-generated code become more important than CVE scanners designed to compare a list of known vulnerabilities against the code present in an application.
- The AI Arms Race: Red vs. Blue – By 2034, both the attackers and defenders will be adept at using AI tools to enhance their positions. For every new type of attack designed by cyber criminals and hostile nation-states, new solutions will be designed to help defenders respond faster and more effectively.
Similarly, for every new defense strategy devised by organizations, attackers will look for new ways in. By 2034, attackers may well have AI tools that let even novice hackers build chained attacks – forcing organizations to re-prioritize vulnerabilities previously dismissed as too difficult to exploit to matter.
- “Just in time” security dawns – Today, the vast majority of the AppSec world is focused on a “shift left” philosophy. But that philosophy rests on cost estimates attributed to IBM back in 1981, and more recent research has found no significant penalty for fixing defects later when modern development techniques are used.
By 2034, the original research that spurred “shift left” as a way of life will be closer in time to the stock market crash of 1929 than to the present day. Tools that secure applications at runtime will be a valued, trusted part of application security. That won’t mean the end of prevention – but instead of shifting left by default, organizations may converge on a “just in time” model of risk detection and prevention that keeps security costs low and development speeds high.
Where Does That Leave Us?
The first ten years of commercial development of AI will bring a world of outcomes that this post (or any other) could never have predicted. But in many ways, it’s likely to look much like the growth and expansion of any disruptive technology: the development of “blue chip” players, a layer of challengers to keep the game honest, and a whole range of new technologies designed to solve the challenges and problems posed by the initial disruption.
It’s easy with all the AI hype to feel like we’re standing right at the beginning of everything – or perhaps, if you’re more pessimistic, that we’re standing right at the end of everything.
I think the truth is less headline-grabbing, but always important to remember: All of us are in the middle of it all, riding waves of human progress and innovation in a sea that stretches beyond the horizons of our lives.
And being in the middle of everything is going to be an incredible ride.
From our Oligo family to yours, Happy New Year. Here’s to an incredible decade to come – and all the changes it will bring.