April 18, 2024

Blog

Adapting Security Controls in the Evolving Threat Landscape

There’s a strong argument to be made that endpoint security controls are the technology backbone of a security program. They’re usually the last line of defense against a potential breach, and the peace of mind they provide is why security leaders accept that they’ll likely be one of the largest line items on an annual budget.

Unfortunately, these critical technologies are neither one-size-fits-all nor ‘set it and forget it’ solutions. In addition to monitoring, they also require constant maintenance, configuration, and tuning. In this blog, we’ll focus on one of the reasons they require constant adaptation and how enterprise security typically addresses this challenge.

The Evolving Threat Landscape

According to IBM’s 2022 Cost of a Data Breach report, 83% of organizations surveyed suffered more than one data breach, with 84% of respondents[1] believing that most attacks started at the endpoint. What’s terrifying is that this survey data was collected prior to the mass adoption of AI and OpenAI’s public release of ChatGPT in November 2022 - an accelerant to the evolution and enhancement of the tactics, techniques, and procedures (TTPs) that make up these threats.

In short: if endpoint security controls were in dire straits in 2022, they’re unlikely to be better off in 2024, because the speed at which adversaries evolve has increased dramatically. Unless, of course, organizations started proactively improving their defensive controls to address emerging threats.

Let’s take a look at how an enterprise security organization shores up its security controls so they protect against the latest threats.

The Self-Service Approach to Improving Defensive Controls

We’ll start by addressing a harsh reality: there’s almost always going to be at least one organization that has to suffer a breach in order for the rest of the security world to benefit from the ensuing threat intelligence. This enables other organizations to ready their own controls against the threat that compromised patient zero.

The chain of events that leads to this threat intelligence often looks like this: one organization gets breached; they might contract an incident response team to do an investigation; that IR team’s findings become threat intelligence; and that threat intel may then be made available to the public or to paying customers. Sometimes, the threat actors become so prolific that the threat intelligence makes it all the way to CISA.

This isn’t the only scenario that can trigger a defensive response, but it’s a common one. These threats make their way into the headlines and ultimately in front of board members and C-levels, who will lob a posture validation inquiry over to the CISO.

With that preamble out of the way, let’s dig up a piece of public threat intelligence - CISA’s Volt Typhoon security advisory - and do a high-level overview of the security control adaptation workflow.[2]

Phase 1: Receiving and Interpreting Threat Intelligence

Upon receiving a piece of threat intelligence, you’ll likely have a threat intelligence analyst take an initial pass at reviewing the CTI report and generating a BLUF (Bottom Line Up Front) that educates a senior leader about the nature of the described attack and any recommended next steps. In the case of the Volt Typhoon advisory, which identified 50 unique techniques, this is very likely not suited for initial review by a junior analyst. More than likely, it was immediately routed to a threat intelligence resource with a direct pipeline to security leadership. This senior technical role spends hours - a conservative estimate - reading, interpreting, and processing the report for relevance and attack techniques.
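Even that senior reviewer benefits from a quick programmatic first pass to size the work. Here’s a minimal sketch - illustrative, not a production triage pipeline - that counts the MITRE ATT&CK technique IDs referenced in an advisory’s text:

```python
import re
from collections import Counter

# MITRE ATT&CK technique IDs look like T1059 or T1059.003 (sub-technique).
TECHNIQUE_ID = re.compile(r"\bT\d{4}(?:\.\d{3})?\b")

def summarize_techniques(advisory_text: str) -> Counter:
    """Count each ATT&CK technique ID referenced in a CTI report."""
    return Counter(TECHNIQUE_ID.findall(advisory_text))

sample = (
    "Actors used PowerShell (T1059.001) and WMI (T1047) for execution, "
    "then dumped LSASS memory (T1003.001) for credential access."
)

for technique, count in summarize_techniques(sample).most_common():
    print(f"{technique}: {count} mention(s)")
```

A count like this won’t tell you whether the advisory is relevant, but it does tell you how deep the review is going to go before anyone commits hours to it.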

Phase 2: Making Threat Intelligence Useful

Even after passing over the desk of senior threat intelligence personnel - through no fault of their own - this advisory isn’t actionable. An organization might have determined relevance, although that’s no guarantee, but they still require an assessment and validation that endpoint controls are capable of seeing and mitigating the threat. To achieve this, a Security Operations team will most likely tap their Offensive Security function to turn the threat intelligence into some form of a security test. At the very least, you’re compiling the IOCs: malicious domains, SHA-256 hashes, and JA3 fingerprints. This is not necessarily a trivial effort, but it’s definitely the tip of the iceberg. In the case of CISA’s Volt Typhoon advisory, an OffSec expert is going to closely examine the LOLBins, pluck out the TTPs, and examine their procedural variations.
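As a sketch of that compilation step, here’s one way you might bucket raw indicator strings by type before they’re pushed to blocklists or retro-hunts. The patterns are rough approximations (JA3 fingerprints are MD5-style 32-character hex digests), and the function names are ours:

```python
import re

# Rough patterns for the IOC types named above; illustrative, not exhaustive.
# Order matters: check the most specific pattern first.
IOC_PATTERNS = {
    "sha256": re.compile(r"^[a-fA-F0-9]{64}$"),
    "ja3": re.compile(r"^[a-f0-9]{32}$"),  # JA3 is an MD5-style digest
    "domain": re.compile(r"^(?:[a-z0-9-]+\.)+[a-z]{2,}$", re.IGNORECASE),
}

def classify_iocs(raw_indicators: list[str]) -> dict[str, list[str]]:
    """Bucket raw indicator strings by type for downstream blocklists."""
    buckets: dict[str, list[str]] = {name: [] for name in IOC_PATTERNS}
    buckets["unknown"] = []
    for indicator in raw_indicators:
        value = indicator.strip()
        for name, pattern in IOC_PATTERNS.items():
            if pattern.match(value):
                buckets[name].append(value)
                break
        else:
            buckets["unknown"].append(value)
    return buckets
```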

After all the parsing, they’re finally ready to generate the security tests. So, they fire up their IDE and probably spend cycles jamming away to create tests that reflect the TTPs in the report. Although it’s not always the case, this can be a very stop-and-go process, with responsibility shared across multiple people who all need to be working in harmony on compatible code.
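For a flavor of what one of those tests might look like, here’s a minimal sketch of a benign emulation of T1047 (Windows Management Instrumentation): it spawns a harmless process via WMI and infers from the output whether the endpoint control intervened. A real harness would do much more - cleanup, logging, safeguards - and the status names are ours:

```python
import subprocess

def test_wmi_process_creation() -> str:
    """Benign emulation of T1047: create a harmless process via WMI.

    Note: wmic is Windows-only and deprecated on recent builds; a real
    harness would fall back to PowerShell's Invoke-CimMethod.
    """
    try:
        result = subprocess.run(
            ["wmic", "process", "call", "create", "cmd.exe /c whoami"],
            capture_output=True, text=True, timeout=30,
        )
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return "ERROR"  # couldn't run the procedure; neither pass nor fail
    # wmic reports "ReturnValue = 0" when Win32_Process.Create succeeds;
    # anything else suggests the control (or the OS) intervened.
    return "UNPROTECTED" if "ReturnValue = 0" in result.stdout else "PROTECTED"
```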

Once the tests have been created, ideally the OffSec team has testing infrastructure in place to operationalize them. If not, tack on some more work cycles.

Finally, your OffSec specialists run these tests to determine exposure. It would be an oversimplification to say that the team runs the tests and hands off the results. In many cases, this requires coordination with other teams in the SOC to track down how security controls behaved in response to the tests. The process requires additional human intervention to contextualize the results and make them consumable for the next team in line: detection and response engineers.
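A sketch of that contextualization step might look like the following, where the schema and status names are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class TechniqueResult:
    technique: str   # e.g. "T1047"
    executed: bool   # did the test procedure run to completion?
    alerted: bool    # did a corresponding alert fire in the control?

def contextualize(result: TechniqueResult) -> str:
    """Translate raw test outcomes into a status D&R engineers can act on."""
    if not result.executed and result.alerted:
        return "PREVENTED"         # blocked and visible: the best case
    if not result.executed:
        return "BLOCKED_SILENTLY"  # stopped, but nothing for the SOC to triage
    if result.alerted:
        return "DETECTED_ONLY"     # ran to completion with alert-only coverage
    return "EXPOSED"               # ran and nothing fired: a backlog item
```

The BLOCKED_SILENTLY case is exactly the kind of nuance that gets lost when raw test output is handed off without human context.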

Phase 3: Detection and Response Engineering

While it’s true that an XDR vendor might globally apply a new protection to their product to serve all of their customers, this is not always the case. It might take a while. It might never happen. Heck, it might not even be possible to create a globally applicable protection, because the observed behaviors of the attack aren’t inherently malicious in the eyes of the defensive provider. At this point, the detection and response engineering team - sometimes referred to as the DART - steps up.

This team will examine the gaps in the telemetry, detection, and prevention capabilities of the defensive controls and ultimately craft a solution that prevents the organization from being compromised.

These DART teams will have strong knowledge of system internals and of the platform used to build custom detections into the defensive controls. In CrowdStrike, these are referred to as custom Indicators of Attack (IOAs). In SentinelOne, they’re STAR rules. Each defensive provider has its own query and detection syntax.

To build these rules, you’ll need a source of telemetry. But what if that telemetry isn’t there? Then you’ve got to sort that out or find another method to create a detection. For example, if an adversary was observed changing a value in the registry, you’d need to collect telemetry about the relevant registry operations.
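Staying vendor-neutral, since each platform has its own syntax, here’s a sketch of what that registry detection logic might look like. It assumes events have already been exported as dicts shaped like Sysmon Event ID 13 (“Registry value set”); everything else is illustrative:

```python
# Registry keys worth watching; the first is where netsh portproxy
# configuration lands, the second is a classic persistence location.
WATCHED_KEYS = (
    r"HKLM\SYSTEM\CurrentControlSet\Services\PortProxy",
    r"HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run",
)

def registry_alerts(events: list[dict]) -> list[dict]:
    """Flag value-set operations against registry keys worth watching."""
    return [
        event for event in events
        if event.get("EventID") == 13  # Sysmon: "Registry value set"
        and any(event.get("TargetObject", "").startswith(key)
                for key in WATCHED_KEYS)
    ]
```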

Volt Typhoon is relatively stealthy in their use of LOLBins to dump credentials from lsass.exe. In retrospect, the recommended mitigation sounds simple: block process creations originating from PsExec and WMI commands. But how might you detect other behavior, like attacker traffic being proxied into your internal network?
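One answer: hunt for the netsh interface portproxy commands Volt Typhoon used to relay traffic through compromised hosts. Under the same illustrative assumptions about event shape, a first-pass rule might look like this:

```python
def flag_portproxy_setup(process_events: list[dict]) -> list[dict]:
    """Flag netsh invocations that configure port proxying (ATT&CK T1090).

    The event shape here is illustrative; swap in whatever your EDR or
    SIEM emits for process-creation telemetry.
    """
    suspicious = []
    for event in process_events:
        cmdline = event.get("CommandLine", "").lower()
        if "netsh" in cmdline and "portproxy" in cmdline and "add" in cmdline:
            suspicious.append(event)
    return suspicious
```

Legitimate portproxy use is rare on most fleets, so the false-positive cost of a rule like this is usually low.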

This is all to say that, unless there’s a global fix provided by the vendor, the DART team is spending a lot of cycles sourcing telemetry and building logic that will alert on advanced adversary tradecraft.

Once new protections are complete, they require implementation, additional efficacy validation, and maintenance.

This workflow has been reflected in a few different frameworks. Forrester’s Allie Mellen codified it with The Detection and Response Development Lifecycle (DR-DLC). Snowflake’s security team members Haider Dost, Tammy Truong, and Michele Freschi have done a nice job summarizing it via their Detection Development Lifecycle, shown in the image below.[3]

Snowflake's Detection Development Lifecycle

Granted, the process described by the Snowflake team is in the context of generating SIEM content, so it deviates slightly from what we’ve summarized here, but not by much.

Phase 4: Executive Communications

With your new protection capabilities built, deployed, and validated, it’s time to communicate with the inquiring senior stakeholders. The way you communicate within the security team will differ from how you communicate with the board members and C-level personalities who might have triggered the engagement. It’s extremely unlikely they’ll care about ‘showing your work’, but it’s important that you’re able to visually provide evidence that your defenses are effective. Additionally, if your team has gone through the trouble of finding and fixing an exposure, it’d be nice to relay the progress of your improving defenses.

Beyond that, your CISO is likely going to package this with a BLUF of their own that looks something like this:

What: On February 7th, 2024, CISA, the FBI, and the NSA announced that the threat actor Volt Typhoon had compromised the IT environments of multiple critical infrastructure organizations and is pre-positioning itself on IT networks for potentially disruptive attacks, not typical espionage, amid future geopolitical conflicts.
Who: Western intelligence officials say Volt Typhoon - also known as Vanguard Panda, Bronze Silhouette, Dev-0391, UNC3236, Voltzite, and Insidious Taurus - is a state-supported Chinese cyber operation that has compromised thousands of internet-connected devices.
How: Volt Typhoon works by exploiting vulnerabilities in small and end-of-life routers, firewalls and virtual private networks (VPNs), often using administrator credentials and stolen passwords, or taking advantage of outmoded tech that hasn’t had regular security updates – key weaknesses identified in US digital infrastructure. It uses “living off the land” techniques, whereby malware only uses existing resources in the operating system of what it’s targeting, rather than introducing a new (and more discoverable) file.
Operational Impact: The cyber attack on US critical infrastructure via Volt Typhoon resulted in significant operational disruptions, compromising the reliability and functionality of essential services.

In some instances, you’d also consider adding in the financial and legal impacts.

Conclusion

There you have it: an at-a-glance view of what a minimally viable process for adapting your defensive controls might look like. It’s highly involved and time-consuming, spanning a broad range of Security Operations and Security Engineering functions and, of course, your security executive(s). When you consider the amount of time this process takes, along with the velocity and variety of the modern threat landscape, it’s no wonder organizations are drowning in a backlog of threats that need to be addressed.

[1] IBM | Are you keeping pace with evolving threat landscape?

[2] CISA | PRC State-Sponsored Actors Compromise and Maintain Persistent Access to U.S. Critical Infrastructure

[3] Snowflake | Detection Development Lifecycle

See the only production-scale detection and response platform first-hand

Book time with our team to see how Prelude can help you create actionable threat intelligence, surface better detections, and remediate threats at scale.

Request Your Demo