Overview

The AppSec eyeroll: if you’re a security leader, you’ve probably seen it. If you’re an engineering leader, you’ve probably done it.

It starts with a security finding, usually generated by internal tooling like software composition analysis (SCA). The tooling generates an alert for the finding, and automatically sends that alert to security and engineering.

When engineering leaders and developers see the alert, they don’t always prioritize fixing whatever’s vulnerable. Instead, they often push back against security teams.

We can see this play out in the real world every day. Take a look at this Reddit post, where a security practitioner encounters the AppSec eyeroll: 

“Developers refuse to upgrade their vulnerable package. They want the security team to show a POC to show the real risk. How do you handle such demand? We don't have [a] full-time engineer handling such things. And there are too many CVEs to check.”

Most comments show tremendous sympathy toward the developers:

“Many auto generated CVE and reports from ‘auditing companies’ are trash and have zero real world risk or relevance.”
“Can you prove that it affects them? [A] library can have thousands of classes functions and they would not use [the] vulnerable one.”
“Just because a package has a vulnerability doesn’t mean it’s vulnerable in this instance. Many CVEs require very specific application configurations and for that reason, it sounds like this might not be an issue.”

It’s obvious that there is a real trust issue here, with many engineers skeptical of security findings. Who’s right? How did we get to this point – and how can organizations rebuild trust where it has been lost?

How It Started: CVE, CVSS, SCA, and the Roots of Distrust

To understand why the developer eyeroll happens, we have to go back to the beginning – all the way to the very first CVE list.

The year was 1999, and open-source software was becoming a security headache. A flaw in open-source code could be exploited in every application and organization that reused that code – and publicly documented flaws made it possible for less-skilled attackers to piggyback on the work of others to exploit them.

Making matters worse, at that time no one had a common language for describing these security flaws. Each time someone discussed a given flaw, they had to explain it in detail or refer to one of many security databases, each of which enumerated flaws differently.

In January of 1999, MITRE published a paper about the need for a common, interoperable database of vulnerabilities that could be used by all. Eight months later, they published the first-ever list of CVEs (Common Vulnerabilities and Exposures): 321 security flaws that were given a number, allowing security and engineering professionals to identify and discuss them.

The number of CVEs grew rapidly, with 1244 published in 2000, and 1585 in 2001. Tools designed to automate alerting on CVEs in open source components began to make their way onto the scene with Black Duck, founded in 2004. Black Duck and other, similar technologies (originally described as Source Code Analysis, and later as Software Composition Analysis – but SCA, either way) based their findings on an application’s manifest files, flagging any listed library known to be affected by a CVE.
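
To make that concrete, here’s a minimal sketch of manifest-based matching, written in Python with invented package names, versions, and CVE IDs. The scanner compares what the manifest declares against a list of known-vulnerable versions; it never looks at how the application actually uses the library.

    # Minimal sketch of manifest-based SCA matching. Illustrative only:
    # package names, versions, and CVE IDs below are invented.

    # A toy advisory database: package -> (highest vulnerable version, CVE ID)
    ADVISORIES = {
        "example-xml-parser": ("2.4.1", "CVE-0000-11111"),
        "example-http-client": ("1.9.0", "CVE-0000-22222"),
    }

    def parse_manifest(text):
        """Parse 'name==version' lines from a requirements-style manifest."""
        deps = {}
        for line in text.splitlines():
            line = line.strip()
            if line and not line.startswith("#"):
                name, _, version = line.partition("==")
                deps[name] = version
        return deps

    def scan(manifest_text):
        """Flag every dependency pinned to a version in a vulnerable range."""
        findings = []
        for name, version in parse_manifest(manifest_text).items():
            if name in ADVISORIES:
                highest_vulnerable, cve = ADVISORIES[name]
                # Naive version comparison; real tools use proper semver ranges.
                if version.split(".") <= highest_vulnerable.split("."):
                    findings.append((name, version, cve))
        return findings

    manifest = """
    example-xml-parser==2.3.0
    example-http-client==2.1.0
    """
    print(scan(manifest))  # [('example-xml-parser', '2.3.0', 'CVE-0000-11111')]

The sketch shares the core limitation of the real thing: a match only means a vulnerable version is declared somewhere, not that the vulnerable code is ever loaded or called.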

It was a strategy that worked better than anything that had come before – and when developers wanted to know what to fix first, there was a new way to figure that out, too. In 2005, CVEs started to receive a CVSS (Common Vulnerability Scoring System) score based on their severity, with the highest-severity vulnerabilities (those now labeled “critical”) scoring at the top of the 0.0–10.0 scale.
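
For reference, modern CVSS (v3.x, which postdates the 2005 original) maps those numeric scores to the qualitative bands most teams triage by. A small Python sketch of that mapping:

    # Qualitative severity bands from the CVSS v3.x specification.
    # CVSS v1 (2005) used the same 0.0-10.0 range; the named bands below,
    # including "Critical", were standardized later in CVSS v3.
    def cvss_rating(score):
        if score == 0.0:
            return "None"
        if score <= 3.9:
            return "Low"
        if score <= 6.9:
            return "Medium"
        if score <= 8.9:
            return "High"
        return "Critical"  # 9.0 - 10.0

    print(cvss_rating(9.8))  # Critical
    print(cvss_rating(5.3))  # Medium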

Now, organizations had a way to detect and prioritize fixes for vulnerabilities from open source components – but backlogs stubbornly grew, and would not stop.

As development with open source libraries expanded to dominate engineering workflows, CVEs exploded in number. Over 25,000 CVEs were announced in 2022 – which means that there are more new CVEs every week today than there were in the whole year of 1999.

Manifest-based scanning didn’t cope well with this rapid expansion in the scale of CVEs. Built to surface theoretical vulnerabilities rather than to focus on practical risk, these tools generated a lot of noise, but not enough signal.

When developers actually tracked down the CVEs they were alerted to, they often found these vulnerabilities weren’t actually exploitable in their code – the vulnerable library wasn’t actually loaded and running, or the vulnerable function wasn’t called. Any time spent fixing vulnerabilities like these was, in general, wasted.
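
Here’s a hypothetical illustration of that gap, with every package, function, and CVE name invented: the manifest match that triggers the alert says nothing about whether the application ever touches the vulnerable code.

    # Illustrative sketch: a manifest match is not the same as exploitability.
    # Package, function, and CVE names are invented.

    # What the SCA tool sees: the manifest declares a vulnerable version.
    manifest = {"example-xml-parser": "2.3.0"}  # flagged for CVE-0000-11111

    # What the advisory actually describes: one vulnerable entry point.
    vulnerable_symbols = {"example_xml_parser.resolve_external_entities"}

    # What the application actually uses (ideally observed at runtime).
    symbols_used_by_app = {
        "example_xml_parser.parse_string",
        "example_xml_parser.validate_schema",
    }

    exploitable_here = bool(vulnerable_symbols & symbols_used_by_app)
    print(exploitable_here)  # False: the library is installed, but the
                             # vulnerable function is never loaded or called.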

Even keeping focus on vulnerabilities with “critical” or “high” CVSS scores became difficult: by 2022, more than ten critical-severity vulnerabilities were being published every day, on average.

With this rate of expansion, engineering teams could be brought to their knees, spending all their time on fixes instead of features – and most of that fix time would inevitably be a waste of energy and resources, fixing vulnerabilities that weren’t exploitable.

How It’s Going: An Expensive, Stressful Blame Game

While every organization handles the balance between security and engineering differently, SCA findings are often a source of conflict. 

Can proper alignment or division of responsibilities between security and engineering teams fix the divide? From our perspective, we’ve seen conflicts happen regardless of how organizations establish a balance of power:

  1. An engineering-focused firm considers development first, and commits only minimal resources to its security team (or solo practitioner). When the engineers get alerts, they generally ignore them or request exceptions to policy from management – and their requests are usually granted, to the chagrin of security, which watches helplessly while the list of known vulnerabilities in the firm’s products grows. Security can’t meet its metrics, and security leaders leave in frustration.

  2. Engineering and security have similar levels of organizational power, and security findings often result in a tense back-and-forth of demands and counter-demands, with engineers requesting POCs for vulnerabilities (like in the Reddit post we started this blog with) and security too low on resources to adequately prove out exploitability.

  3. The organization has a highly mature application security program in place, with exceptions to policies granted only rarely. Development is forced to pause frequently while engineers implement fixes for vulnerabilities, and engineers report low morale and feeling like they are spending more time on fixes than on their “real job”: developing new features.

No matter what strategy organizations use, it seems like someone’s left holding the bag. Over time, any one of these scenarios will result in distrust, conflict, and a feeling from security and engineering like they’re playing a zero-sum game.

While manifest-based security tools, CVEs, and CVSS were created to solve an emerging problem, they clearly have not scaled to today’s application security and development landscape – and organizations are paying the price.

Healing the Rift: First Steps to Finding Common Ground

If engineering and security teams at your organization seem mired in distrust, you’re not alone – and the first step you can take toward making it better is to realize that in most organizations, it’s nobody’s fault. Everyone’s doing the best they can.

It’s not the engineers’ fault that they don’t want to be held accountable for non-exploitable vulnerabilities in libraries they’re using responsibly. It’s not security’s fault that their tools alert them to non-exploitable findings.

And it’s not even the tools’ fault – modern SCA is doing the best it can with the manifest-based scanning method it has to work with. Many of these tools have tried to reduce noise with what they call “reachability analysis,” but this method is still based on analyzing code in a static way, seeing how it’s built rather than how it’s used. That means it will always alert based on theoretical vulnerabilities, rather than exploitability in practice.
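
Here’s a small, invented example of why static “reachability” still over-alerts: a vulnerable call can exist in the source, and therefore look reachable to static analysis, while never executing in the running application.

    # Illustrative sketch of the gap between static reachability and runtime
    # behavior. Function and flag names are invented; this is not a real tool.

    def vulnerable_feature():
        # Stand-in for a library function affected by a hypothetical CVE.
        raise RuntimeError("dangerous legacy behavior")

    def handle_request(config):
        if config.get("enable_legacy_mode"):
            # The call exists in the source, so static reachability analysis
            # reports the CVE as reachable.
            return vulnerable_feature()
        return "handled safely"

    # In production this flag is never enabled, so the vulnerable branch never
    # runs; only runtime observation of the application can confirm that.
    production_config = {"enable_legacy_mode": False}
    print(handle_request(production_config))  # handled safely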

Is there a way out – or are we doomed to watch helplessly as the AppSec eyeroll impacts trust and development time?

The answer has to start with moving security from the theoretical to the practical. There is no way out of the current scenario while we’re still focused on finding vulnerabilities in manifest files.

But why is the industry stuck on manifest files? Why not watch the application while it’s running?

For anyone paying attention in AppSec, the answer to that question is two words that have taken on a huge significance: 

“Shift left.”

But why are we shifting left – and has shifting left actually had a positive, practical impact?

Stay tuned for next week, when we’ll explore “shift left” and its limitations in Part II of this three-part blog series.
