How much vulnerability analysis can we automate? How complete and reliable are those results? How well can we even answer those questions? Digital Operatives has been exploring questions like these for quite a while, and the answer, at least to the first question, may be “much less than you think.”
While there are plenty of scientifically sound ways to investigate these questions, a coworker remarked, “we are not scientists; we are artists,” while sharing the image to the right. The image depicts a graph of potential attack vectors in a two-node, closed network. Every path begins at some system state, travels through one or more attacker actions, and leads to a practical goal for an attacker, like “deny access” or “gain unauthorized access.”
Most of the time, when conducting a vulnerability analysis task, we seek to answer a series of specific questions about a specific binary, most of which boil down to a simple yes or no: Does this function sanitize user input before passing it to the kernel module? Is this buffer appropriately sized? Does this feature inappropriately rely on internal implementation details of Library X? These questions, and many like them, could be scripted away. With an appropriately robust tool, you might even be able to abstract them into algorithmic analyses that generalize across most similar implementations.
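As a concrete illustration, here is a minimal sketch of what “scripting away” one such question might look like. Everything in it is hypothetical: the binary, the handle_request function, and the sanitize_input routine are invented names, and the check simply walks an objdump disassembly rather than using a real analysis framework.

```python
#!/usr/bin/env python3
"""Minimal sketch: reduce one yes/no question about a binary to a scriptable check.

Hypothetical throughout: the binary path, handle_request, and sanitize_input
are invented names standing in for whatever the real target uses.
"""
import re
import subprocess


def calls_in_function(binary: str, function: str) -> list[str]:
    """Return the call targets inside one function, in order, from `objdump -d` output."""
    listing = subprocess.run(
        ["objdump", "-d", binary], capture_output=True, text=True, check=True
    ).stdout
    in_func, targets = False, []
    for line in listing.splitlines():
        if re.match(rf"^[0-9a-f]+ <{re.escape(function)}>:$", line):
            in_func = True
            continue
        if in_func and re.match(r"^[0-9a-f]+ <.+>:$", line):
            break  # the next function's header means we are done
        if in_func:
            m = re.search(r"\bcall\w*\s+\S+\s+<([^>+]+)", line)
            if m:
                targets.append(m.group(1))
    return targets


def sanitizes_before_kernel(binary: str) -> bool:
    """Yes/no: does handle_request call sanitize_input before it calls ioctl?"""
    calls = calls_in_function(binary, "handle_request")
    try:
        return calls.index("sanitize_input") < calls.index("ioctl")
    except ValueError:
        return False  # one of the two calls is missing entirely


if __name__ == "__main__":
    print(sanitizes_before_kernel("./vendor_binary"))
```

A check like this answers exactly one narrow question, which is the point: the scriptable part of the job is the narrow part.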
In our exploration of the limits of automation in answering such simple questions, we identified two key factors in vulnerability analysis that technology has yet to replicate: human creativity and intuition.
Sure, input_x is sanitized, truncated and evaluated against a series of expected inputs, but why are we accepting this input at all? What feature does it implement, and how? We ran into exactly this scenario in a recent evaluation of a binary. Without access to the source code, and without the ability to write very implementation-specific tests, it was simply not possible to automate an answer to these kinds of questions; and when it comes to commercial solutions, getting source code may not be an option. Absent a formal, machine-readable specification describing which files are supposed to be created, when, and with what content, we had to rely on personal experience and intuition to discover that the binary created a file in Function X, a file unnecessary to the operation of the program. We never did identify any data being placed in the file… Is that malice? Is it a mistake, a vestige of testing that should have been compiled out? Is it an undocumented feature we never figured out? Is it a failed, benign attempt at malice?
How are we to verify the expected behavior of a particular program when we have to rely on the vendor’s closed implementation? For example, the vendor may state, “Program X launches Program Y at boot if Option Z is passed. Program Y monitors Port 1234 for the ABC protocol.” Each of these claims is (relatively) easy to prove with static analysis and verify with dynamic analysis, but can we be as certain when confirming a statement like “Program Y does not send sensitive data”? What if we don’t consider the data sensitive? Is it necessary to the system’s operation? To whom could it be valuable?
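The verifiable half of that vendor statement really can be scripted. Below is a hedged sketch, assuming hypothetical names lifted from the quoted claim (“./program_x”, “--option-z”, port 1234): it launches the program and checks that something is listening where the vendor says it should be. Note what has no analogue here: there is no equivalent check for “does not send sensitive data.”

```python
#!/usr/bin/env python3
"""Hedged sketch: dynamically verify the vendor's testable claims.

"./program_x", "--option-z", and port 1234 are placeholders mirroring the
(contrived) vendor statement; substitute real names for a real target.
"""
import socket
import subprocess
import time


def launches_and_listens(cmd: list[str], port: int, wait: float = 2.0) -> bool:
    """Start the program and report whether something accepts TCP connections on `port`."""
    proc = subprocess.Popen(cmd)
    try:
        time.sleep(wait)  # crude; a real harness would poll until the port opens
        with socket.create_connection(("127.0.0.1", port), timeout=1.0):
            return True
    except OSError:
        return False
    finally:
        proc.terminate()
        proc.wait(timeout=5)


if __name__ == "__main__":
    # Hypothetical invocation mirroring "Program X launches Program Y if Option Z is passed."
    print(launches_and_listens(["./program_x", "--option-z"], port=1234))
```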
These are the questions we sought to answer in constructing the graph above. Here’s a contrived example to illustrate a similar situation:
Julie builds ATM software. She is paid/blackmailed/duped into inserting a covert feature: if a card produces a “Failed to read card” error and the user then types in any 5-digit number, the last digit of the clock’s minutes display is replaced with the current value, in thousands, of the cash in the ATM.
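To make the mechanism concrete, here is a toy sketch of how such a covert display channel might work. It is one possible reading of the example (the leaked digit is the cash level in thousands, modulo 10); every name and number in it is invented.

```python
from datetime import datetime

# Toy illustration of the contrived covert channel above. One possible reading:
# after a "Failed to read card" error and any 5-digit entry, the last digit of
# the displayed minutes leaks the ATM's cash level in thousands (mod 10).
# Everything here is hypothetical.


def on_card_error(entered_code: str) -> bool:
    """The covert trigger: a 'Failed to read card' error plus any 5-digit entry."""
    return len(entered_code) == 5 and entered_code.isdigit()


def clock_display(now: datetime, leak_active: bool, cash_in_atm: int) -> str:
    hh_mm = now.strftime("%H:%M")
    if not leak_active:
        return hh_mm
    leaked_digit = (cash_in_atm // 1000) % 10  # e.g. $47,250 -> 7
    return hh_mm[:-1] + str(leaked_digit)      # overwrite the last minutes digit


# Example: with $47,250 in the ATM at 14:32, the clock silently reads 14:37.
print(clock_display(datetime(2016, 5, 1, 14, 32), on_card_error("12345"), 47_250))
```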
Is this truly sensitive information? While obviously not necessary to the ATM’s primary functions, is the feature even noticeable to most users? Would a security professional think to programmatically audit and verify the behavior of the clock display?
To close the loop on this example, consider that the attackers (not Julie, but the ones behind her) may not have clearly visible motives. Perhaps Julie is unaware that this particular build is only going to ATMs on a sensitive government campus, where the attackers want to confirm that a particular set of employees is withdrawing a specific amount by having a well-placed “landscape technician” check the balance after each of their uses. Or perhaps this update is destined for ill-lit, poorly monitored ATMs in locations where the attackers have physical access and want to limit their theft to the most profitable hits. Still, how much of a problem is this “feature”?
I’m sure this example seems especially absurd, but what if we weren’t talking about ATMs? What if it were a badge scanner and entry control system and, rather than a clock revealing a cash balance, an LED blinked once for every 100 people who passed through the door? That information is potentially far more valuable, whether to a terrorist organization looking to cause the most damage or to a commercial competitor trying to gauge a rival’s commitment to a certain product.
We decided to play out these kinds of scenarios with our graph. Beyond the hundreds of typical one- and two-step attack vectors, we ultimately identified just shy of 9.5 million unique attack vectors, each leading to a practical goal for a determined attacker. The majority of these paths relied on combinations of seemingly unrelated bugs, features, or innocuous-looking malicious “features” like those above. To drive the point home: such malicious features need not rely on blackmail or the subversion of an individual; the vendor may well have inserted them intentionally.
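For intuition about how a modest graph can yield millions of vectors, here is a toy sketch of the sort of path enumeration involved. The graph below is invented for illustration, not our actual two-node-network model; the point is how quickly independent states and actions combine.

```python
# Toy sketch: count every acyclic path from an initial state to an attacker
# goal. All node names and edges here are invented for illustration only.

TOY_GRAPH = {
    "default_config":       ["weak_creds", "open_port", "covert_clock_feature"],
    "weak_creds":           ["shell_on_node_a"],
    "open_port":            ["shell_on_node_a", "crash_service"],
    "covert_clock_feature": ["learn_cash_level"],
    "shell_on_node_a":      ["pivot_node_b", "deny_access"],
    "pivot_node_b":         ["gain_unauthorized_access", "deny_access"],
    "crash_service":        ["deny_access"],
    "learn_cash_level":     ["gain_unauthorized_access"],
}
GOALS = {"deny_access", "gain_unauthorized_access"}


def count_paths(node: str, visited: frozenset = frozenset()) -> int:
    """Depth-first count of acyclic paths from `node` to any goal."""
    if node in GOALS:
        return 1
    return sum(
        count_paths(nxt, visited | {node})
        for nxt in TOY_GRAPH.get(node, [])
        if nxt not in visited
    )


# Even this tiny invented graph already yields several distinct attack vectors;
# a realistic model with many interchangeable states and actions multiplies fast.
print(count_paths("default_config"))
```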
In the end, vulnerability analysis is as much an art as it is a science and, today, we rely on humans to produce art. Digital Operatives employs a highly skilled team of such artists: those with passion enough to paint a Turing’s Starry Night yet experience enough to appreciate the meaning and value in attacker Pablo’s dog sketch.
Whether you’re looking to commission your own piece of artistry or have a professional assessment of your latest acquisition, reach out to Digital Operatives’ imaginative team of experts and let us do the hard work.