The Dragos Blog

02.26.18 | 8 min read

Threat Analytics and Activity Groups

Joe Slowik

Computer and network defense has typically focused on ‘indicators of compromise’ (IOCs) to drive investigations and response. Anomaly detection and modeling (e.g., machine learning approaches) are also increasingly used for alerting purposes, but due to the lack of context of adversary activity, they are of limited utility in tracking threats or informing investigations – thus, they will not be discussed in-depth here. Returning to IOCs, while they have value, the name indicates that such value is generally backward-looking: an IOC is indicative of a compromise by a known and observed threat vector. As a result, an IOC-based approach to defense has limited value in forward-leaning, hunting-oriented defense, and is of almost no value whatsoever in catching ‘new’, not previously observed intrusions.

The majority of IOCs captured after an observed intrusion are target specific – they are unique or limited to that specific event. An adversary will either change or utilize different technical items, with corresponding different IOCs, for any future or different intrusions. Furthermore, IOCs are, by definition, unitary or ‘atomic’ in nature; they refer to a single observable piece of information: an IP address, a domain name, or a file hash value. While these have value, on their own, IOCs only capture one very specific aspect of any activity and must be combined with other data points to yield ‘information’, let alone intelligence. Thus, an IOC-dependent approach will find itself in perpetual data refinement to identify follow-on details from the single alerting point, so as to gain knowledge or understanding of an event.
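To make the ‘atomic’ point concrete, a minimal sketch of what IOC matching reduces to; the indicator values here are documentation placeholders (an RFC 5737 test address and the MD5 of an empty file), not real threat data:

```python
# Minimal sketch of IOC-based detection: each check reduces to a
# set-membership test on a single atomic value, carrying no context
# about the behavior behind it.
KNOWN_BAD_IPS = {"203.0.113.7"}  # RFC 5737 documentation address
KNOWN_BAD_HASHES = {"d41d8cd98f00b204e9800998ecf8427e"}  # placeholder MD5

def ioc_match(observable: str) -> bool:
    return observable in KNOWN_BAD_IPS or observable in KNOWN_BAD_HASHES

print(ioc_match("203.0.113.7"))  # True: a hit, but with no indication
                                 # of tactic, stage, or follow-on activity
```

A match tells the analyst only that one data point was seen; everything else must be reconstructed manually.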

Transitioning from defense by IOCs, a threat behavioral analytic approach attempts to leverage commonalities in adversary behaviors to create more complex threat identification methods by incorporating multiple data points into a single, robust analytic. An analytic is designed to target an adversary behavior, especially one that is an operational requirement (e.g., nearly all intrusions will require some form of command and control) and not trivially changed. For example, a specific command and control domain can be changed, but the method of communication (especially if a custom protocol or implementation is employed) is not so easily shifted.
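As an illustrative sketch of such an analytic, the function below flags low-jitter, periodic ‘check-in’ traffic from a host to any single external endpoint – a communication behavior – rather than matching a specific command and control domain. The event-count and jitter thresholds are assumptions for illustration, not tuned detection values:

```python
import statistics

def is_beaconing(timestamps, min_events=6, max_jitter_ratio=0.1):
    """Flag periodic 'phone home' behavior regardless of destination.

    timestamps: sorted connection times (seconds) from one internal host
    to one external endpoint. A low ratio of interval jitter to mean
    interval suggests automated command-and-control check-ins.
    """
    if len(timestamps) < min_events:
        return False
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = statistics.mean(intervals)
    if mean <= 0:
        return False
    jitter = statistics.pstdev(intervals)
    return (jitter / mean) <= max_jitter_ratio

# Regular 60-second check-ins with slight drift trigger the analytic,
# even though no domain or IP from a blocklist is involved.
print(is_beaconing([0, 60, 121, 180, 241, 300, 360]))  # True
```

Swapping the command and control domain does not evade this check; the adversary would have to change the communication behavior itself.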

In developing an analytic, the resulting detection methodology should not focus on a specific implementation of a behavior, but rather seek to cover multiple implementations of the behavior type. Threat analytics focus on adversary tactics, techniques, and procedures (TTPs) and behaviors – also referred to as ‘tradecraft’ – instead of static and atomic data points. Threat analytics can, therefore, be forward-looking and flexible. For the former, an analyst may not identify a specific instantiation of command and control but he or she can identify general command and control behaviors to track and alert on. With respect to the latter, flexibility comes from the ability to capture mutations in specific examples of behaviors over time. A critical advantage to this approach is providing alerting criteria with context. Even in those cases where a general threat analytic fires, the analyst has the advantage of knowing the corresponding alert relates to an instantiation of a type of malicious activity, aligning to a portion of the relevant attack path or kill chain, yielding context and nuance to the investigation’s start. This contrasts with an IOC approach, where a single IOC detection – for example, a hash value – must then be manually oriented by the analyst to determine its relevance, use, and associated activity.

When focusing on adversary behaviors, an alternative means of tracking and identifying adversaries themselves emerges that is complementary with an analytics-based approach for threat identification. Specifically, the Diamond Model of Intrusion Analysis identifies threats not by ‘who’ they are, but rather ‘how’ they operate. This shift may seem trivial at first, but it represents a dramatic difference from typical threat attribution techniques, which seek to group observed data points (IOCs) as part of an identified, labeled object associated with some publicly-recognized and known entity. For example, “advanced persistent threat FOZZYBEAR is associated with the country Ruritania” provides attribution for FOZZYBEAR activity to an entity, but lacks any definitive connection to how FOZZYBEAR operates or what it looks like to a defender in practice outside of the atomic data points collected under the FOZZYBEAR banner.

Contrary to the FOZZYBEAR example, the concept of activity groups, derived via the Diamond Model, focuses on behaviors and actions displayed by an entity. Specific observables include: an adversary’s methods of operation, infrastructure used to execute actions, and what targets they focus on (either specific targets or more general verticals, such as industry type). The goal, as defined by the Diamond Model of Intrusion Analysis, is to delineate an adversary as defined by their observed actions, capabilities, and demonstrated – not implied or assumed – intentions. These attributes then combine to create a construct around which defensive plans can be built. Ultimately, the desired end-state is to empower network defenders by providing a model of adversary activity that documents and forecasts likely adversary actions based upon observed behavioral and targeting traits.

Returning to the topic of analytics, threat behavioral analytics are an obvious extension of defining an activity group; behaviors, targets, and infrastructure of a malicious actor are documented and identified. Based upon this information, an analyst can define identifying or alerting criteria corresponding to the attributes classified within the Diamond Model representation of that activity group. However, at this stage, an important decision-point is reached with respect to the construction of analytics:

  1. Building threat analytics tuned to specific threat activity group behavior
  2. Designing a threat analytic to capture a general malicious TTP

The former is likely to be more accurate and focused, but at the cost of breadth and the ability to capture alterations of the underlying tactic. The latter is, in general, more sustainable and actionable, as analytics need not be redefined or recreated for each permutation of the technique, but at the potential cost of initial fidelity and detail when analyzing a triggered analytic.

To expand upon the latter point, suppose we define a threat analytic around known adversary behavior – for example, capturing a human machine interface (HMI) screenshot and exfiltrating it from the ICS environment for system reconnaissance, an observed DYMALLOY behavior. We can construct this analytic to abstract away the precise method by which the screenshot is created (likely tied to a specific type of malware or other technique) and instead focus simply on the fact that a screenshot was generated and moved out of the ICS environment. In the former case, where the specifics of ‘how’ are taken into consideration, we have likely generated a high-confidence DYMALLOY threat analytic. In the latter case, we have instead produced an all-encompassing threat behavior analytic that will identify any observed migration of a screenshot out of the ICS network. This is more robust in detecting the general technique – but at the cost of losing some context as to who might be responsible for the action.

This point becomes most salient when framing threat analytics within the broader context of response planning and investigation playbooks. If we are to chain our analytics with specific response plans (such as investigation playbooks, which will be covered in a future Dragos blog post), such plans can become more refined and specific in action and detail the closer they are to a specific activity group. In this case, narrower analytics are beneficial. For more general threat analytics, response plans must, by necessity, be more broadly focused and prescribe less detailed, more general response actions. Additionally, while a specific threat-focused analytic can lead an analyst to hypothesize likely next-step actions by the specific adversary for investigation and pursuit, the more general threat behavior analytic leaves a much larger field of possibilities open.

Building a threat-focused defensive mindset around both of these approaches – specific threats and generalized threat behaviors – becomes the ideal end-state for an intelligence-driven network defensive posture. In this manner, complementary defensive approaches are built:

  • General threat behavior analytics designed to catch categories of malicious activity based upon adversary dependencies and ‘required actions’ for intrusion events.
  • Where appropriate, specific instantiations of general behavior analytics tuned to precise adversary actions.

The goal of the above ‘tiered’ alerts is to catch the general type of malicious activity, and where sufficient information exists, create higher confidence analytics tuned to a specific implementation of that behavior. The latter enables a more precise response to ‘known threat actors’, while the former ensures that variations of the technique or behavior are observed.

Returning to the earlier example of screenshot exfiltration from an ICS environment, our general threat behavior analytic will consist of the following data points:

  1. Image file identified in network traffic FROM the ICS environment.
  2. Image file metadata matches characteristics of a system screenshot.

The above will capture any permutation of the screenshot exfiltration technique, depending upon how condition 2 is defined. While this categorizes the general behavior, the analyst receiving a notification that such activity has occurred is still left with the task of identifying how this traffic was generated and what the adversary’s next steps would be.
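An illustrative sketch of these two conditions follows; the record fields (src_zone, dst_zone, mime_type, width, height) are hypothetical names for data a file-extracting network sensor might supply, and the screen-resolution heuristic is just one possible definition of condition 2, not an actual Dragos analytic:

```python
# Dimensions typical of a full-screen capture (illustrative subset).
COMMON_SCREEN_SIZES = {(1024, 768), (1280, 1024), (1366, 768), (1920, 1080)}

def screenshot_exfil_alert(rec):
    # Condition 1: image file observed in traffic leaving the ICS zone.
    leaving_ics = (rec["src_zone"] == "ICS"
                   and rec["dst_zone"] != "ICS"
                   and rec["mime_type"].startswith("image/"))
    if not leaving_ics:
        return False
    # Condition 2: dimensions match a full-screen capture rather than,
    # say, a small icon or embedded diagram.
    return (rec["width"], rec["height"]) in COMMON_SCREEN_SIZES

rec = {"src_zone": "ICS", "dst_zone": "corporate",
       "mime_type": "image/png", "width": 1920, "height": 1080}
print(screenshot_exfil_alert(rec))  # True
```

Note that nothing here ties the alert to any particular adversary: any tool or actor moving a screenshot out of the ICS zone would fire it.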

From the above general analytic, we can derive a specific, activity group-focused variation:

  1. General threat behavior analytic for screenshot exfiltration identified.
  2. Additional observable data points captured indicative of DYMALLOY activity – e.g., command and control techniques identified in network traffic; or malware variants associated with the group identified in host data.

This approach takes the existing, generalized behavior and utilizes additional data to refine it to a notification of activity correlated with a known activity group. The benefit of this approach is that the analyst now has a (potentially) more limited, narrower scope set of questions to answer: assuming the DYMALLOY detections are correct, the screenshot activity identified can be correlated with other observations to initiate a more focused investigation. Based upon other elements of the DYMALLOY activity group definition – tools, targets, and infrastructure – the analyst can focus on most-likely actions leading to the observed activity and produce more specific hypotheses to initiate the investigation.
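This layering can be sketched as follows; the observable names standing in for DYMALLOY-specific detections are placeholders for illustration, not actual signatures:

```python
# Placeholder stand-ins for group-specific detections (hypothetical).
DYMALLOY_OBSERVABLES = {"dymalloy_c2_protocol_match",
                        "dymalloy_malware_family_on_host"}

def classify_alert(general_hit, observables):
    """Tier a general behavior hit using additional host/network data."""
    if not general_hit:
        return "no-alert"
    # Any group-specific observable upgrades the generic behavior alert
    # to an activity group-correlated notification.
    if DYMALLOY_OBSERVABLES & set(observables):
        return "screenshot-exfil: probable DYMALLOY activity"
    return "screenshot-exfil: unattributed behavior"

print(classify_alert(True, ["dymalloy_malware_family_on_host"]))
# screenshot-exfil: probable DYMALLOY activity
```

The general analytic still fires on its own; the enrichment only narrows the analyst’s starting hypotheses when corroborating data is present.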

Consider another activity group tracked by Dragos: ELECTRUM – the group responsible for CRASHOVERRIDE – utilizes ‘living off the land’ techniques to accomplish network pivoting and further intrusion in observed events. Many of the behaviors exhibited by ELECTRUM for follow-on network compromise can be captured by the general threat analytics below:

  1. New use of PSExec between network endpoints.
  2. A single host executing PSExec on multiple network endpoints.
  3. A single host attempting to connect to many hosts via ‘net use’ commands.

The above abstracts some details but provides a general conception of capturing intrusion pivoting via ‘living off the land’ methods. It would capture ELECTRUM activity as well as other malicious (or suspicious) activity exhibiting the same behaviors. When triggered, the analyst is alerted to the suspect activity and can begin an investigation, but as with the screenshot example above, that investigation is hampered by the large number of follow-on questions to scope and pursue. While still useful, the general behavior analytic requires additional investigative work.
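The three analytics above can be sketched as a fan-out check over host events; the (source, destination, command) event format and the threshold are illustrative assumptions about what endpoint or network logging might supply:

```python
from collections import defaultdict

def pivoting_alerts(events, fan_out_threshold=3):
    """Flag 'living off the land' pivoting from (src, dst, command) events."""
    psexec_targets = defaultdict(set)
    net_use_targets = defaultdict(set)
    for src, dst, command in events:
        cmd = command.lower()
        if "psexec" in cmd:
            # Analytics 1 and 2: PSExec use between network endpoints.
            psexec_targets[src].add(dst)
        elif cmd.startswith("net use"):
            # Analytic 3: share mapping via 'net use' commands.
            net_use_targets[src].add(dst)
    alerts = []
    for src, targets in psexec_targets.items():
        # One host driving PSExec against many endpoints suggests pivoting.
        if len(targets) >= fan_out_threshold:
            alerts.append(("psexec-fan-out", src, sorted(targets)))
    for src, targets in net_use_targets.items():
        if len(targets) >= fan_out_threshold:
            alerts.append(("net-use-fan-out", src, sorted(targets)))
    return alerts

events = [
    ("hostA", "hostB", "PsExec.exe -s cmd.exe"),
    ("hostA", "hostC", "psexec -s cmd.exe"),
    ("hostA", "hostD", "PSEXEC -s cmd.exe"),
    ("hostE", "hostF", "net use F: \\\\hostF\\c$"),
]
print(pivoting_alerts(events))
# [('psexec-fan-out', 'hostA', ['hostB', 'hostC', 'hostD'])]
```

In practice, ‘new use of PSExec’ (analytic 1) would also require baselining legitimate administrative activity, which this sketch omits for brevity.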

However, when sufficient detail is captured on preceding and likely follow-on activity through an understanding of the behaviors, infrastructure, and targets or intentions of a specific entity, such as ELECTRUM, the analyst is presented with a much narrower list of high-confidence next steps to investigate. In this case, if the behavioral analytic above is further enriched by observables specific to ELECTRUM (specific malware identified with the group, or examples of how ‘net use’ or PSExec are actually employed), the analyst now has a concrete path to follow for subsequent investigation.

One potential misconception that might emerge from the previous discussion is that threat analytics, when not enriched with specific activity group information, can be cumbersome or difficult for defenders to utilize. While specific behavioral patterns will obviously lend themselves to specific follow-on response actions due to their refinement, the network defender must weigh the more generalized threat behavior analytics against the typical alert, signature, or IOC that forms the starting point for most security incidents at present. In these cases, the analyst is presented with nothing more than a single data point – this packet header, that IP address, this MD5 hash sum – as the start for an investigation. The number of potential hypotheses for further exploration from a single alerting point such as this is vast, disadvantaging the responding analyst.

Meanwhile, a general threat behavior analytic – while by definition ‘general’ – is both more specific than alerting off of an IOC and more general in that it captures adversary actions rather than atomic (and replaceable) portions of those actions. For the former, analytics are more specific in that the combination of data points ensures that, due to a greater corpus of initial information, the analyst will have higher confidence that the detected behavior is ‘bad’ or worth investigating, compared to alerting on one-off uses of software (such as PSExec), which may very well be legitimate. In the latter case, the analytics are also more ‘general’ in that, instead of being founded on a single, immutable piece of information (such as an IP address), these take a totality of actions (with their attendant behavioral permutations) into consideration and track a less-easily altered technique.

Moving away from atomic, fleeting, and backward-looking IOCs as the foundation of security response and visibility is vital in transitioning network defense – not just ICS defense – into a more responsive, flexible, and active position. By identifying threat behaviors and designing behavioral analytics to capture these, analysts can begin shifting detection and response solidly to the defender’s advantage, while further refinement to track specific activity groups through instantiations of more general analytics can increase accuracy, confidence, and efficacy in specific response instances. Above all, developing an understanding of network security events through a behavioral perspective rather than a single observation point (atomic IOC view) ensures analysts are better positioned to understand and respond to malicious events as they are identified.
