Vulnerable By Design: Understanding Shadow Vulnerabilities

In March 2024, CISA launched its Secure By Design pledge, an effort to improve the security of how products are developed and deployed. Speaking to attendees at Black Hat this year about where the industry is lacking, the agency’s director, Jen Easterly, noted that “we don’t have a cybersecurity problem. We have a software quality problem.”

While it may not be as black and white as she makes it seem, she brings up a fair and powerful point. The number of Common Vulnerabilities and Exposures (CVEs) continues to rise – 2023 saw 28,961 in total, up from 25,059 in 2022 – demonstrating the uphill battle security pros are fighting to ensure that the software used in their organizations doesn’t leave them vulnerable. 

CVEs are an incredibly helpful mechanism for security teams, enabling them to check which dependencies are vulnerable. You simply compare the list of your dependencies to a list of vulnerable versions with CVEs attached, and you’ve got security findings. Address them, and you’re safe, right?
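Conceptually, that matching step is simple. Here is a minimal sketch in Python; the installed packages, version lists, and vulnerability database are illustrative stand-ins for a real scanner's data feed:

```python
# Minimal sketch of CVE-based dependency scanning.
# Package names, versions, and database entries are illustrative.
installed = {"requests": "2.25.0", "pyyaml": "5.3"}

# A vulnerability feed maps a package and its affected versions to a CVE ID.
cve_db = [
    {"package": "pyyaml", "affected": ["5.3", "5.3.1"], "cve": "CVE-2020-14343"},
    {"package": "requests", "affected": ["2.19.0"], "cve": "CVE-2018-18074"},
]

def scan(installed, cve_db):
    """Return the CVE IDs matching the installed dependency versions."""
    findings = []
    for entry in cve_db:
        version = installed.get(entry["package"])
        if version in entry["affected"]:
            findings.append(entry["cve"])
    return findings

print(scan(installed, cve_db))  # only CVE-listed issues ever surface
```

The important property of this approach is visible in the last line: a finding exists only if a database entry exists. Anything without a CVE record simply never appears in the output.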

Not so fast. CVEs are only one part of the equation. Not every vulnerability—not even every vulnerability actively exploited by threat actors—receives a CVE. Sometimes this happens because a software library or project deliberately chooses to hide a threat. Sometimes maintainers have simply decided the behavior is part of a library’s intended design, even though it can put organizations at risk if used improperly.

The problem with a vulnerability without a CVE is this: scanners that check your dependencies for vulnerabilities only see CVEs. No CVE, no finding—even if the way your organization uses the dependency leaves you vulnerable to exploitation, including remote code execution (RCE) attacks.

At Oligo, we’re especially interested in these vulnerabilities. When we started investigating them, we saw them as lurking in the shadows, hidden to scanners but still representing a potential threat. We call them “shadow vulnerabilities,” a term the industry has rapidly adopted, and a group of threats indicating that software is often vulnerable by design. But where do shadow vulnerabilities come from—and what can you do about a risk that isn’t cataloged?
So, let's step into the shadows.

What are Shadow Vulnerabilities?

When workers frustrated by bureaucratic red tape do an end-run around standard processes, the result is “shadow IT.” When an intern saves files to an unauthorized cloud storage provider, they’ve created “shadow data” invisible to IT. When an employee tries to boost productivity by using a personal account for a work SaaS tool, they’ve created “shadow SaaS.”

In tech, to be a shadow is to operate outside known, approved, or monitored organizational policies and practices, usually with no oversight. These shadows often pose a greater security risk than known issues, because a threat nobody sees cannot be addressed or mitigated.

The two prerequisites of shadow vulnerabilities are simple:

  1. Be a vulnerability—an issue that allows unauthorized or unintended use of an application.
  2. Don’t have a CVE.

That first part is very important: shadow vulnerabilities are not theoretical threats. They have real-world consequences.

Recent discoveries by Oligo Security researchers have exposed the presence of shadow vulnerabilities in widely used software projects, including Ray, TorchServe, TensorFlow, and SnakeYAML. These vulnerabilities—which stem from the misuse of popular libraries—have exposed organizations to risks including RCE, and in some cases were actively exploited by attackers.
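The SnakeYAML case is an instance of a broader pattern: a deserializer that can reconstruct arbitrary objects is a documented feature, yet it becomes a code-execution primitive when fed untrusted input. Python’s standard-library pickle module shows the same design in miniature; the class and payload below are purely illustrative:

```python
import pickle

class Evil:
    # __reduce__ tells pickle how to reconstruct this object.
    # Returning (callable, args) means "call this on load" --
    # an intended, documented feature that attackers abuse to run code.
    def __reduce__(self):
        # Harmless stand-in for an attacker's payload:
        return (len, ("attacker-chosen call",))

payload = pickle.dumps(Evil())

# Deserializing the payload executes len(...) and returns its result.
result = pickle.loads(payload)
print(result)  # 20
```

Nothing here is a bug in pickle: constructing objects on load is exactly what it is documented to do, and its docs warn against untrusted input. That is precisely why this class of misuse tends not to earn the underlying library a CVE.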

Why Don’t Shadow Vulnerabilities Get Cataloged?

The short answer: it’s complicated. 
CVEs most frequently stem from simple coding mistakes or from configuration and implementation issues. Shadow vulnerabilities are different: they’re the result of complex design decisions. Unlike straightforward coding errors caught in regular reviews, these vulnerabilities are subtle and often overlooked. They arise from the way different components interact: a design choice that enables one use case becomes a risk in another. They are typically not easy to patch away, and they reflect broader systemic challenges.

One of those challenges is determining the answer to an important question: what’s a vulnerability, anyway? Above, we gave our quick definition (“an issue that allows unauthorized or unintended use of an application”). But not everyone sees it this way. In the eyes of many open-source project maintainers, vulnerabilities must result from an unintended or undocumented behavior.

It’s the classic software engineering gag: once you document it, it’s not a bug, it’s a feature. So where does intended behavior end and a vulnerability begin?

Definitions and boundaries here depend on your frame of reference. Say you’re the maintainer of an open-source project you devote a lot of your free time to. You might not be fond of someone characterizing a behavior you see as intentional and well-documented as a vulnerability.

If you’re trying to defend yourself from attackers, the situation looks different. Does it make any difference to your bottom line whether the attackers who stole your data or disabled your systems did it by using a CVE or a behavior a developer intended and documented? No, not really—in fact, you might feel a bit angry that the project could include such a behavior and call it “intentional,” while you’re still reeling from the impact of an attack.
So who decides what gets a CVE and what hides in the shadows? Now we’re entering the realm of responsible disclosure.

Shadow Vulnerabilities and Vendor Disputes

Disclosing vulnerabilities is fraught with potential for harm. If disclosure happens too early, with no patch available, attackers get a serious head start before anyone can secure their applications. If it happens too late (or not at all), scanners can’t detect the vulnerability.
To get the timing right, the industry created a standard procedure for responsible disclosure and CVE assignment.
Here’s how the process works—and where shadow vulnerabilities come into the picture:

  1. Discovery and Initial Analysis
  2. Reporting of the Vulnerability 

---- SHADOW VULNERABILITIES HAPPEN HERE ----

  3. Vendor Acknowledgement
  4. Vendor Assessment and Response
  5. Coordination of Disclosure Timeline
  6. Patch Development and Testing
  7. Releasing the Fix
  8. Public Disclosure

Quite early in the process, the maintainer may choose to dispute the vulnerability rather than acknowledge it. Maintainers can dispute a vulnerability even if it is reproducible and exploitable—or even under active exploitation.

The dispute can take a long time to resolve, creating a crucial window of opportunity for would-be attackers, who can exploit the issue while organizations remain totally unable to detect their presence. 

From the maintainer’s perspective, these vulnerabilities often arise from users ignoring a library’s security best practices. In practice, however, those best practices are frequently buried in the docs in ways that lead developers to misinterpret them (or miss them altogether). Developers tend to assume that default configurations and typical use will not expose them to significant risk.
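As a hypothetical sketch of that mismatch (the API name, parameters, and defaults below are invented for illustration, not drawn from any real project), a library’s quick-start path can accept every risky default silently:

```python
# Hypothetical framework API -- names and defaults are illustrative only.
def start_service(host="0.0.0.0", auth_token=None):
    """Start a job-submission endpoint.

    Docs note (buried several pages in): 'intended for deployment
    inside a trusted network environment.'
    """
    if auth_token is None:
        # The docs assume you read the warning; the defaults do not enforce it.
        print(f"WARNING: serving unauthenticated on {host}")
    # ... bind the socket and serve requests here ...

# Typical quick-start usage -- every default accepted:
start_service()
```

From the maintainer’s side, this is documented, intended behavior; from the defender’s side, copy-pasting the quick-start has just exposed an unauthenticated endpoint on all interfaces.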

Maintainers may take the stance that they are aware of the issue but do not plan to act, because they do not see it as unintended behavior, even when that behavior puts systems at risk. They are individuals with their own interests and opinions (which can sometimes be iconoclastic), and they have the power to dismiss an issue with a “no fix” decision.

Motivations differ: limited resources, a belief that the risk is minimal, or simply other priorities. Maintainers have a lot to do, much of it thankless, and fixing what they see as intended behavior may not rank high on the to-do list.

Shadows Under Attack: ShadowRay Active Exploitation

In March 2024, Oligo Security discovered an attack campaign we dubbed ShadowRay. The campaign targeted a disputed vulnerability (CVE-2023-48022) in Ray, a widely used open-source AI framework.

By the time of disclosure, the issue had been actively exploited for at least seven months by attackers using it to steal computing resources for cryptomining operations and troves of sensitive data, including cloud credentials, AI models, and datasets. Impacted organizations included some of the most prestigious universities in the world, biopharma companies, and more.

Because CVE-2023-48022 was disputed, many development teams (and most static scanning tools) could not detect it. Some developers may never have seen the warnings in the Ray documentation, and millions of dollars’ worth of compute resources were stolen.

Entire software libraries may be architected in a way that they are insecure by design, allowing innocent misuse to introduce a security vulnerability. OSS libraries may simplify or automate common tasks in ways that make sense from a developer experience perspective, but lack consideration for potential new security risks. When this happens, using specific features in unexpected ways can pave the way for malicious activity. 

At their core, shadow vulnerabilities are about the consequences of compromising between offering powerful features and ensuring security controls. These vulnerabilities stay hidden, unnoticed, until a developer or a malicious actor accidentally or intentionally triggers them.
In the words of [xkcd](https://xkcd.com/), "Some vulnerabilities are not just bugs in the system; they are the system itself."

The Responsibility Game: The Buck Stops Where?

Who is ultimately responsible for ensuring the security of software applications? Is it the developers? The users? The maintainers of open-source libraries?
The reality is that responsibility is often a gray area. Open-source library creators expect users to adhere to best practices, while project maintainers (bogged down by the non-trivial task of ongoing open-source project management and maintenance) may not prioritize patching underlying issues. This leaves users vulnerable, often unaware of the risks they face.

Even though responsibility may be hard to assign, the consequences overwhelmingly fall on organizations using open-source libraries, rather than the developers or maintainers of those libraries.
But how can organizations see what hasn’t been enumerated or made visible to scanning tools?

Seeing the Shadows and Fighting Back

Is there a way to see when an application is being exploited, even when the attackers use a shadow vulnerability? At Oligo, we realized that it doesn’t matter to an attacker whether they exploit your organization with a CVE or a shadow vulnerability. So why should it matter to your monitoring tools? We set out to build something different—something that helps organizations uncover the full picture of software vulnerabilities and take action.

Oligo ADR (Application Detection & Response) is the first solution in the world that can detect application attacks in progress. Using deep runtime inspection, Oligo ADR sees the behavior of every dependency in every application you build, buy, or use, and identifies when behavior patterns indicate an attack has begun.
With Oligo ADR, security teams can detect application compromises in moments, not months, and can detect threats whether or not they have been assigned a CVE. Oligo ADR can detect active exploitation of supply chain compromises and zero-days with a solution that has low technical overhead and deploys in hours.

Our mission at Oligo is to bring every vulnerability out of the shadows and into the light to forge the next generation of application security.
Watch out, attackers: you’re running out of places to hide.
