Reachability Analysis: 5 Techniques and 5 Critical Best Practices
What Is Reachability Analysis?
Reachability analysis is a method used to evaluate the risk associated with software vulnerabilities by determining whether specific parts of an application can access or invoke vulnerable code elements. These elements may include functions, classes, modules, or annotations linked to known security issues.
The analysis helps determine whether a vulnerability is actually exploitable within the context of an application. This insight enables developers and security teams to make more informed decisions about addressing vulnerabilities, prioritizing those that pose tangible risks.
Reachability analysis can be used independently or as part of a larger risk-assessment framework, such as risk-based prioritization. This method often incorporates advanced techniques like program analysis and AI-driven tools, with validation performed by security researchers to ensure accuracy and reliability.
This is part of a series of articles about application security.
Reachability Analysis Approaches: Static, Dynamic, and Real Time
Let’s review the three primary approaches to reachability analysis.
Static Reachability Analysis
Static reachability analysis involves examining the application’s codebase without executing it. This method evaluates which parts of the code might call or invoke vulnerable functions based solely on the code’s structure and dependencies.
A major advantage of static reachability is its ability to integrate early in the development lifecycle. Developers can scan code repositories and get insights during pull requests or pipeline checks, enabling quick detection of vulnerabilities before deployment. Function-level static analysis is especially impactful as it identifies specific exploitable functions within a library, significantly reducing false positives. For instance, if a library contains multiple vulnerabilities but only a subset of functions are used in the application, function-level analysis helps narrow down the list of relevant issues.
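To make the core idea concrete, here is a minimal Python sketch of the question static analysis answers: can a flagged function be reached from an application entry point through the call graph? The graph and function names below are hypothetical and hand-built for illustration; real tools derive the graph automatically from source code or bytecode.

```python
from collections import deque

# Hypothetical call graph: caller -> callees. Real tools construct this
# by parsing source or bytecode; here it is hand-built for illustration.
CALL_GRAPH = {
    "app.main": ["app.handle_request"],
    "app.handle_request": ["lib.parse", "app.render"],
    "lib.parse": ["lib._unsafe_eval"],  # function flagged by an advisory
    "app.render": [],
}

def is_reachable(entry: str, target: str) -> bool:
    """Breadth-first search from the entry point to the flagged function."""
    seen, queue = {entry}, deque([entry])
    while queue:
        fn = queue.popleft()
        if fn == target:
            return True
        for callee in CALL_GRAPH.get(fn, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return False

print(is_reachable("app.main", "lib._unsafe_eval"))  # True -> prioritize the CVE
```

If the search never reaches the flagged function from any entry point, the corresponding vulnerability is a strong candidate for deprioritization.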
However, static analysis has limitations. It operates without runtime context, meaning it cannot determine whether the identified vulnerable code will actually execute when the application runs. Additionally, scans of large codebases can be slow, and static analysis cannot account for dynamic behaviors or real-world configurations, which limits its usefulness for complete risk prioritization.
Dynamic Reachability Analysis
Dynamic reachability analysis evaluates the code during execution, often using runtime monitoring tools. This method tracks which parts of the application, including functions and libraries, are loaded and executed during runtime.
Dynamic analysis offers deeper insights than static methods because it considers the actual runtime behavior of the application. By identifying vulnerabilities in actively used components, it minimizes false positives and highlights exploitable risks. For example, if a library is loaded into memory but its vulnerable functions are never called, dynamic analysis can deprioritize its associated vulnerabilities. Additionally, this approach often integrates with application detection and response (ADR) tools, enabling proactive security measures, such as blocking malicious actions.
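As a simplified illustration, the Python sketch below uses the standard library's sys.settrace hook to record which functions from a watched package actually execute. The watched package here is the standard json module, standing in for a vulnerable library; a production agent would use much lower-overhead instrumentation.

```python
import sys

WATCHED_PACKAGE = "json"  # stand-in for a library with a known CVE
executed: set[str] = set()

def tracer(frame, event, arg):
    """Record every call whose defining module belongs to the watched package."""
    if event == "call":
        module = frame.f_globals.get("__name__", "")
        if module == WATCHED_PACKAGE or module.startswith(WATCHED_PACKAGE + "."):
            executed.add(f"{module}.{frame.f_code.co_name}")
    return tracer

sys.settrace(tracer)
try:
    import json
    json.loads('{"demo": true}')  # stand-in for real application traffic
finally:
    sys.settrace(None)

print("functions observed at runtime:", sorted(executed))
```

If a flagged function never appears in the executed set across representative traffic, its vulnerability becomes a candidate for deprioritization.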
However, dynamic analysis requires deploying agents or instrumentation tools in production or staging environments, which may introduce performance overhead. Also, its effectiveness depends on the comprehensiveness of the monitored runtime scenarios, as it may miss vulnerabilities in unused paths or edge cases.
Real-Time Reachability Analysis
Real-time reachability analysis combines static and dynamic methodologies, offering continuous monitoring of code execution with contextual insights from both approaches. This method evaluates runtime execution alongside business and cloud contexts, such as internet exposure, data access, and application criticality, to assess the true exploitability of vulnerabilities.
By correlating real-time function execution data with business and cloud-level parameters, this method provides the most actionable insights. For instance, it can flag vulnerabilities in a critical application with public internet access as high-priority while deprioritizing vulnerabilities in isolated or non-critical environments. Additionally, real-time analysis supports automated remediation policies, such as blocking specific libraries in high-risk scenarios.
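A minimal sketch of how such contextual scoring might work, assuming hypothetical finding fields for runtime execution, internet exposure, and business criticality:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    executed_at_runtime: bool  # from runtime instrumentation
    internet_facing: bool      # from cloud context
    business_critical: bool    # from an asset inventory

def priority(f: Finding) -> str:
    """Rank findings by combining runtime reachability with business context."""
    if f.executed_at_runtime and f.internet_facing and f.business_critical:
        return "critical"
    if f.executed_at_runtime:
        return "high"
    if f.internet_facing:
        return "medium"
    return "low"

findings = [
    Finding("CVE-2021-44228", True, True, True),       # Log4Shell in a public app
    Finding("CVE-EXAMPLE-0001", False, False, False),  # placeholder ID
]
for f in findings:
    print(f.cve, "->", priority(f))
```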
Real-time analysis requires advanced instrumentation, integration with business and cloud systems, and robust monitoring capabilities to manage the data generated. The method is most effective when paired with tools that offer runtime protection and contextual filtering to streamline remediation efforts.
Related content: Read our guide to runtime security (coming soon)
Reachability Analysis Techniques
Reachability analysis solutions typically combine some or all of the following techniques.
1. Function-Level Reachability
Function-level reachability focuses on identifying whether a vulnerability within a specific function of a third-party library is actually exploitable within the context of an application. This approach is highly precise: it scans the code to determine whether the particular functions associated with a vulnerability are actively called by the application.
For example, some vulnerabilities, like a cross-site scripting (XSS) issue in a jQuery library, might only be exploitable under specific conditions, such as passing untrusted input to the affected methods. Function-level reachability tools analyze the codebase to verify whether these conditions exist. By doing so, they help distinguish exploitable vulnerabilities from false positives, saving security teams valuable time and effort.
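As a rough illustration, the Python sketch below uses the standard ast module to detect direct calls to a flagged function. The flagged name (yaml.load, historically unsafe in PyYAML without an explicit loader) is just an example; real tools also resolve aliases, indirect calls, and framework entry points.

```python
import ast
from pathlib import Path

VULNERABLE_CALLS = {"yaml.load"}  # functions flagged by an advisory (example)

def called_names(source: str) -> set[str]:
    """Collect the dotted name of every direct function call in a file."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func, parts = node.func, []
            while isinstance(func, ast.Attribute):
                parts.append(func.attr)
                func = func.value
            if isinstance(func, ast.Name):
                parts.append(func.id)
                names.add(".".join(reversed(parts)))
    return names

def reachable_vulns(repo: Path) -> set[str]:
    """Report which flagged functions are actually called in the codebase."""
    found = set()
    for path in repo.rglob("*.py"):
        try:
            found |= called_names(path.read_text()) & VULNERABLE_CALLS
        except (SyntaxError, UnicodeDecodeError):
            continue  # skip files that do not parse
    return found

print(reachable_vulns(Path(".")))
```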
2. Package Baselining
Package baselining assesses the behavior of third-party libraries to identify unusual or malicious actions. It observes what a package typically does—such as logging messages or accessing files—and flags deviations from this behavior, such as executing unauthorized network calls.
This type of reachability is particularly useful in containerized environments, where runtime behavior can provide critical insights into potential risks. For example, a logging library like Log4j should not execute code or make network calls under normal circumstances. If it does, this would likely indicate an active exploit.
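A toy version of the idea, with hypothetical behavior events standing in for the telemetry a runtime agent would collect:

```python
# Expected behavior baseline per package (hypothetical data).
BASELINE = {
    "log4j-core": {"file.write", "classload"},
}

# Behaviors observed at runtime (hypothetical agent telemetry).
observed = [
    ("log4j-core", "file.write"),    # normal logging activity
    ("log4j-core", "net.connect"),   # a logging library opening a connection
    ("log4j-core", "process.exec"),  # code execution: classic exploit signal
]

for package, behavior in observed:
    if behavior not in BASELINE.get(package, set()):
        print(f"ALERT: {package} performed unexpected behavior: {behavior}")
```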
3. Internet Reachability
Internet reachability prioritizes vulnerabilities based on their exposure to internet-facing systems. The premise is that components directly accessible from the internet pose a higher risk. However, this method often oversimplifies the problem, as vulnerability exploitation depends on factors beyond proximity to the internet, such as the permissions and usage context of affected components.
For example, a vulnerability in a non-internet-facing component could still be critical if exploited through chained attacks. While internet reachability provides a useful filter for prioritizing certain risks, it must be combined with deeper application context and runtime analysis to ensure accurate prioritization.
4. Dependency-Level Reachability
Dependency-level reachability examines whether a vulnerable package is imported or called anywhere in the application. While less granular than function-level reachability, it provides a basic layer of prioritization by identifying unused dependencies that can be safely removed.
This method is often used when deeper reachability analysis is unavailable. It helps reduce the attack surface by eliminating unused libraries, although it may require additional manual research to verify whether the vulnerabilities in used dependencies are exploitable.
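A minimal Python sketch of this check, scanning a repository's import statements for a package flagged by an SCA report (the package name is hypothetical):

```python
import ast
from pathlib import Path

FLAGGED_PACKAGES = {"vulnerable_pkg"}  # from an SCA report (hypothetical)

def imported_packages(repo: Path) -> set[str]:
    """Collect the top-level package name of every import in the repo."""
    pkgs = set()
    for path in repo.rglob("*.py"):
        try:
            tree = ast.parse(path.read_text())
        except (SyntaxError, UnicodeDecodeError):
            continue  # skip files that do not parse
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                pkgs |= {alias.name.split(".")[0] for alias in node.names}
            elif isinstance(node, ast.ImportFrom) and node.module:
                pkgs.add(node.module.split(".")[0])
    return pkgs

used = imported_packages(Path(".")) & FLAGGED_PACKAGES
print("flagged packages actually imported:", used or "none")
```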
5. Package Used in Image
This method determines whether a vulnerable package is present and running in a container image. Its utility lies in detecting unnecessary packages that can be removed to reduce the attack surface. For instance, removing unused Linux packages from a Docker image can mitigate risks associated with vulnerabilities in those packages.
While this approach is limited as a standalone prioritization method, it becomes valuable when combined with runtime data, configuration vulnerabilities, or user interaction requirements. By ensuring that only necessary dependencies are included in production images, organizations can enhance security and streamline their remediation efforts.
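In practice this check reduces to simple set arithmetic over an image's package inventory (for example, taken from its SBOM), as in this sketch with hypothetical data:

```python
image_packages = {"openssl", "curl", "vim", "imagemagick"}  # from the image's SBOM
vulnerable_packages = {"imagemagick", "log4j-core"}         # from a vulnerability feed
needed_by_app = {"openssl", "curl"}                         # from dependency manifests

present_vulns = image_packages & vulnerable_packages
print("vulnerable packages present in image:", present_vulns)
print("safe to remove from image:", present_vulns - needed_by_app)
```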
Challenges in Reachability Analysis
Scalability for Large Codebases
As software systems grow in complexity, analyzing reachability across expansive codebases becomes increasingly difficult. Large projects often consist of millions of lines of code, multiple modules, and extensive dependencies, all of which can overwhelm analysis tools. This results in long processing times and increased potential for errors or oversights.
Handling Complex Dependencies and Third-Party Libraries
Modern applications rely heavily on third-party libraries and frameworks, which introduce additional layers of complexity. These external dependencies may include proprietary or obfuscated code, making it difficult to assess their contribution to system states accurately. Additionally, updates to third-party components can alter reachable states, introducing new risks or invalidating prior analysis results.
Balancing Precision and Performance
One core challenge in reachability analysis is achieving a balance between precision and performance. Highly precise models can be computationally expensive and time-consuming, especially for large or dynamic systems. Conversely, faster, less detailed methods may miss critical insights or produce false positives and negatives.
Best Practices for Effective Reachability Analysis
1. Integrate Reachability Analysis into the Development Lifecycle
Incorporating reachability analysis into the development lifecycle ensures continuous monitoring and improvement of code quality. By integrating these practices early in the development stages, developers can detect issues before they escalate, reducing defects and maintenance costs. Continuously analyzing reachability throughout development also fosters a proactive security culture, where potential risks are anticipated and mitigated ahead of time.
This integration involves using automated tools to run regular checks during the build process. By embedding these checks in each stage of development, from design through implementation, teams can continuously monitor whether vulnerable code is reachable. This leads to early detection of faults, enhancing software reliability and lowering the chance of security incidents.
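One common integration point is a CI gate that fails the build only for findings that are both severe and reachable. The sketch below assumes a hypothetical JSON report format with id, severity, and reachable fields; adapt the field names to whatever your scanner emits.

```python
import json
import sys

def gate(report_path: str) -> int:
    """Return a nonzero exit code if any critical, reachable finding exists."""
    with open(report_path) as fh:
        findings = json.load(fh)
    blocking = [f for f in findings
                if f["severity"] == "critical" and f["reachable"]]
    for f in blocking:
        print(f"BLOCK: {f['id']} is critical and reachable")
    return 1 if blocking else 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))  # e.g., python ci_gate.py scan-report.json
```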
2. Combine Reachability Analysis with SCA
Combining reachability analysis with software composition analysis (SCA) provides a more comprehensive view of application security by integrating insights about vulnerable dependencies with contextual information on exploitability. SCA identifies all third-party components within an application, flagging known vulnerabilities across these dependencies. By pairing this data with reachability analysis, teams can pinpoint which vulnerabilities are truly impactful, as opposed to merely existing within the codebase.
For example, an SCA tool might detect a critical vulnerability in a widely used library, but reachability analysis can reveal that none of the affected functions are invoked within the application. This distinction allows security teams to deprioritize low-risk vulnerabilities and focus remediation efforts on threats with real exploit potential. Such synergy minimizes false positives while enhancing the precision of vulnerability management strategies.
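The pairing boils down to an intersection between the functions an SCA advisory flags and the functions reachability analysis marks as reachable. A sketch with hypothetical data shapes (the Log4Shell entry reflects the real vulnerable lookup path; the second CVE ID is a placeholder):

```python
# CVE -> functions the advisory flags as vulnerable (hypothetical report shape).
sca_findings = {
    "CVE-2021-44228": {"org.apache.logging.log4j.core.lookup.JndiLookup.lookup"},
    "CVE-EXAMPLE-0001": {"somelib.unused_helper"},
}

# Functions reachability analysis marked as reachable in this application.
reachable_functions = {
    "org.apache.logging.log4j.core.lookup.JndiLookup.lookup",
    "app.main",
}

actionable = {cve for cve, funcs in sca_findings.items()
              if funcs & reachable_functions}
print("prioritize:", actionable)
print("deprioritize:", set(sca_findings) - actionable)
```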
3. Handle False Positives and Negatives
To enhance the reliability of reachability analysis, it is critical to properly manage false positives and negatives. Techniques such as machine learning can help filter out false results, refining the accuracy of detection mechanisms. Teams should adapt and fine-tune detection algorithms to distinguish real threats from benign states, minimizing unnecessary interventions.
Regular feedback loops and iterative processes help refine analysis tools and improve detection accuracy. By understanding the root causes of these inaccuracies, developers can better tailor their analysis strategies to reduce noise. Managing these outputs with precision results in more reliable reachability assessments and more efficient use of remediation resources.
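At its simplest, the feedback loop is a triage record that keeps confirmed false positives from resurfacing, as in this sketch with hypothetical verdict data:

```python
# Analyst verdicts from previous triage rounds (hypothetical data).
verdicts = {"CVE-EXAMPLE-0001": "false_positive"}

new_findings = ["CVE-2021-44228", "CVE-EXAMPLE-0001"]

needs_review = [f for f in new_findings
                if verdicts.get(f) != "false_positive"]
print("needs review:", needs_review)
```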
4. Keep Analysis Models Updated
Updating analysis models regularly is necessary to keep up with evolving codebases and security threats. Frequent software updates often introduce new states or transitions that must be evaluated to maintain robust reachability analysis. Keeping models current ensures accurate assessments and reduces the risk of overlooking potential vulnerabilities.
Continuous updates also involve recalibrating tools and refining techniques to suit new requirements and technological advancements. By maintaining up-to-date models, development teams can ensure that reachability analysis remains relevant and effective, providing ongoing clarity into system behaviors and potential issues.
5. Foster Team Training and Knowledge Sharing
Effective reachability analysis requires teams to be well-informed about best practices and tool utilization. Regular training sessions and workshops can empower teams with the necessary skills and knowledge to perform accurate analysis. Fostering a culture of knowledge sharing among team members strengthens team capabilities and encourages innovation.
Establishing repositories of learning resources and conducting collaborative sessions promote an environment of ongoing learning and improvement. Encouraging cross-functional collaboration enhances problem-solving abilities and ensures consistent application of reachability analysis principles across projects. Overall, empowering teams translates to more effective reachability assessments and a heightened focus on application security.
Transition to Real-Time Reachability Analysis with Oligo
Oligo Security brings real-time context to reachability analysis, enabling teams to prioritize vulnerabilities based on actual usage and risk. By focusing on actionable insights, Oligo helps organizations address vulnerabilities that matter most, reducing noise and saving time.
Key Capabilities:
- Real-Time Insights: Tracks actively used components to surface exploitable vulnerabilities as they emerge.
- Contextual Prioritization: Evaluates vulnerabilities based on real-time data and business impact for informed decision-making.
- Seamless Integration: Aligns with CI/CD pipelines and existing workflows, supporting continuous monitoring without disrupting development.
Addressing Reachability Challenges:
Oligo simplifies vulnerability management by:
- Reducing false positives through real-time validation.
- Highlighting exploitable paths within third-party dependencies.
- Supporting automated responses to mitigate high-risk actions.
Oligo equips teams to focus on securing applications against real threats while optimizing remediation efforts.
Discover how Oligo streamlines vulnerability management — schedule a demo to see it in action.