Perspective

The Coana Approach to Reachability Analysis

Learn what's different about Coana's approach to reachability and what we do to ensure highly trustworthy results.


Written by

Martin Torp

CPO, Co-founder


Every security professional today is well aware of 'the noise' produced by traditional SCAs without reachability, so it's no surprise that most major SCA providers have been scrambling to sprinkle some form of reachability analysis over their traditional offerings. However, there are many different ways of doing reachability analysis, and it can be incredibly difficult for users to understand what distinguishes the different solutions and which one meets their specific requirements.

When we talk to prospective customers, some of the most common questions we get are 'What's the difference between Coana and the reachability-based SCA offered by xyz?' and 'How can we trust Coana's reachability analysis to produce correct results?'.

In this post, I'll try to answer these questions by describing Coana's philosophy on reachability analysis.

A Static Approach to Reachability Analysis

There are many different ways to conduct reachability analysis. You may have heard about reachability analyses that claim to be based on static analysis, eBPF, network traffic, runtime observations, and so on. Broadly speaking, reachability analyses fall into two main categories.

  • Static reachability analyses, which determine reachability without running the code of the target application.
  • Dynamic reachability analyses, which determine reachability based on observations extracted from a running version of the target application.

Dynamic reachability analyses suffer from some important limitations:

  • If a vulnerability is only reachable through code that is rarely executed, a dynamic reachability analysis may incorrectly mark it as unreachable, thereby potentially missing a critical security problem (see the sketch after this list).
  • Since dynamic reachability analyses often rely on observations extracted from applications running in production, they generally cannot conduct reachability analysis on code that is still in development.
  • Many dynamic reachability analyses require agents to be installed in your production environment, and such agents are often difficult to install and maintain.
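
To illustrate the first limitation, consider the hypothetical snippet below (purely illustrative, not part of the example later in this post): the call into the third-party library only executes on a rarely taken fallback path, so a dynamic analysis that observes ordinary production traffic is unlikely to ever see it and may report a vulnerability in that library as unreachable.

import lodash from 'lodash';

// Hypothetical example: the lodash call sits on an error-handling fallback
// path that is almost never taken in production.
export function parseConfig(raw) {
  try {
    return JSON.parse(raw);
  } catch {
    // Rarely executed: trim stray whitespace and retry. A dynamic analysis
    // that never observes this branch being taken may conclude that the
    // lodash function is unreachable, even though it clearly is reachable.
    return JSON.parse(lodash.trim(raw));
  }
}

A static analysis, by contrast, sees the call in the fallback branch regardless of whether that branch is ever executed while the application is being observed.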

Static reachability analyses don't share these limitations (though simplistic static analyses have limitations of their own, as we'll cover below). So, if static analyses suffer from fewer limitations, why doesn't everyone build static reachability analyses? The short, simple answer to that question is:

💡 Building scalable, precise static analyses is incredibly challenging!

Coana: A Static Analysis-first Company

At Coana, we know how difficult static analysis is: the founding team collectively has more than 30 years of experience as academic researchers focused on static analysis. And those are not just 30 years of theoretical pen-and-paper work. We have worked on bleeding-edge static analysis tools like Jelly and TAJS for JavaScript and written numerous research papers on the topic (link). These years of experience have taught us valuable lessons about how to build scalable, precise static analysis tools. Most importantly, we strongly believe in, and adhere to, the principle that:

💡 Each programming language requires a dedicated static analysis.

By following this principle at Coana, we have been able to tune our analyses to handle the intricate features of each language. Let’s consider a small example that shows why this principle is important. It’s somewhat contrived to keep it small, but it nevertheless shows a pattern that occurs often in JavaScript programs.

1  export class MyUtils {
2    constructor(utilsLib) {
3      this.utilsLib = utilsLib;
4    }
5    trim(s) {
6      return this.utilsLib.trim(s);
7    }
8  }

my-utils.mjs

import {MyUtils} from './my-utils.mjs';
import lodash from 'lodash';

const lib = new MyUtils(lodash);
console.log(lib.trim('Hello World! '));

index.mjs

The MyUtils library is written as a wrapper of another utility library. In this case, it’s instantiated with lodash as the wrapped library.

Back in early 2021, a vulnerability (CVE-2020-28500) was discovered in lodash affecting its trim function. A proper reachability analysis should be able to tell that it’s lodash’s trim function that is called inside MyUtils.trim() on line 6. However, for a reachability analysis to detect this vulnerable call, it must be able to infer that utilsLib actually holds the lodash module. This shows that a simplistic static analysis that just looks for patterns like lodash.trim() in the code won’t be enough. Real examples are often more complicated, involving data flow between many modules. Coana's state-of-the-art analysis is carefully designed to precisely model such mechanisms.
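
To see why pattern matching falls short, here is a minimal sketch (purely illustrative, not how Coana's analysis works) of a naive check that scans the two files above for direct lodash.trim() calls. It finds nothing, even though the vulnerable function is clearly reachable through MyUtils.trim().

import { readFileSync } from 'node:fs';

// Naive, pattern-based check: look for direct lodash.trim(...) calls in the
// source text of the two files from the example above.
const files = ['./my-utils.mjs', './index.mjs'];
const directCallPattern = /\blodash\.trim\s*\(/;

for (const file of files) {
  const source = readFileSync(file, 'utf8');
  console.log(file, directCallPattern.test(source) ? 'direct call found' : 'no direct call found');
}
// Prints 'no direct call found' for both files, even though lodash's
// vulnerable trim is reached via MyUtils.trim(). Detecting that requires
// tracking that the lodash module flows into the utilsLib field of MyUtils.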

How We Approach Correctness

It is very important to understand that no reachability analysis is perfect. In fact, it has been proven impossible to build a perfect analysis, whether static or dynamic (see Rice’s theorem), so don’t trust anybody who claims to have one. However, the fact that perfect program analysis is mathematically impossible doesn’t mean that we can’t build analyses that work exceptionally well on real-world programs. Apart from the one-analysis-per-language principle described above, here are some other practices we follow at Coana to ensure high-quality results:

  • Soundness testing: We use a technique known as soundness testing to check that our analyses don’t miss any real vulnerabilities. In a soundness test, we match our static reachability analysis results against runtime observations derived from running the target program to ensure that code that’s reached at runtime is also marked as reachable by the static analysis (see the sketch after this list).
  • Conservative reachability: Sometimes Coana’s analyses are uncertain about the reachability of a vulnerability. In those cases, Coana marks the vulnerability as reachable even if it may not be. While this results in some false positives, it happens seldom enough that Coana can still reduce the number of alerts by more than 80%.
  • Consistent benchmarking: We consistently benchmark our analyses against a large set of open source projects. Our benchmarks ensure that we catch regressions in both the scalability and precision of our analyses early.
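
As a rough idea of what a soundness test checks, here is a simplified sketch under our own assumptions (not Coana's actual test harness, and the function identifiers are hypothetical): every function observed executing at runtime must also be in the set the static analysis marked as reachable.

// Simplified soundness test (illustrative sketch only).
// staticallyReachable: function identifiers the static analysis marked as reachable.
// runtimeObserved: function identifiers recorded by instrumentation while
// running the target program, e.g. during its test suite.
export function soundnessTest(staticallyReachable, runtimeObserved) {
  const reachable = new Set(staticallyReachable);
  const missed = runtimeObserved.filter((fn) => !reachable.has(fn));
  if (missed.length > 0) {
    throw new Error(`Soundness violation: executed at runtime but marked unreachable: ${missed.join(', ')}`);
  }
}

// Example with hypothetical identifiers from the wrapper example above:
soundnessTest(
  ['index.mjs#<module>', 'my-utils.mjs#MyUtils.trim', 'lodash#trim'],
  ['index.mjs#<module>', 'my-utils.mjs#MyUtils.trim', 'lodash#trim']
);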

With all that said, there are rare theoretical scenarios where Coana may incorrectly classify a reachable vulnerability as not reachable. However, in a world where fixing every vulnerability is often infeasible and even human security engineers are prone to mistakes in their triaging, we are confident that using a high-quality reachability analysis is the best possible way to prioritize the most important vulnerabilities in third-party dependencies. Let your developers focus on the security efforts that actually matter rather than spending weeks fixing irrelevant security vulnerabilities in unused features.

Interested in learning more about how Coana works and testing it out on your own code? Then book a short demo below.

Ready to talk?

Book a Demo with One of the Coana Founders