
Threat hunting policies

Threat hunting policies are a category of built-in policies used by the Spectra Assure platform to help you improve the overall security of your software packages.


These policies are used during software package analysis to check your code and inform you if any of the built-in validation rules are violated. Specifically, threat hunting policies focus on detecting malicious and suspicious behaviors in software components to prevent tampering incidents and software supply chain attacks.

In the Spectra Assure SAFE report, these policy violations can be found in the Threat Hunting issue category and cause risk in the SAFE Assessment Tampering category.

Developers, release managers, and software publishers will benefit the most from the guidance provided by these policies. When Spectra Assure products detect malicious or suspicious behaviors in a software package, the affected files are highlighted in the analysis report. Development teams can then apply the remediation advice for each particular policy to resolve detected issues.

Differential analysis policies

Differential analysis policies are a subset of threat hunting policies that detect behaviors and changes characteristic of known software supply chain attacks. They focus on detecting issues that closely resemble previously discovered supply chain attacks based on their signatures.

In Spectra Assure analysis reports, these policy violations are listed in the Issues > Differential Analysis category. Differential analysis policies are triggered only during diffs and reproducibility scans, and are visible in reports only in those cases. These policy violations are always considered "blocking" because they should automatically and immediately stop the release process.

Policies in this category

Threat hunting policies cover the following:

  • known malicious behaviors
  • generally suspicious behaviors that require manual review
  • suspicious network references
  • unsafe AI models
  • indicators of regulatory non-compliance

Differential analysis policies cover the following:

  • tampering indicators resembling the SolarWinds Orion software compromise
  • tampering indicators resembling the 3CX DesktopApp software compromise
  • tampering indicators resembling the Codecov Uploader software compromise
  • tampering indicators resembling the UAParser.js software compromise
  • tampering indicators resembling the CTX software compromise
  • tampering indicators resembling the XZ Utils software compromise

Security challenges and practices

Lack of visibility and context into the components and dependencies that make up the software supply chain is a major challenge that security teams face in their day-to-day work. This challenge translates into difficulties in making decisions about internal risks, vulnerabilities, and threats, as the relevant teams don't have enough actionable information. Specifically, SOC teams struggle with an absence of in-depth knowledge about incidents, and threat hunting teams have too few clues to build out useful hunting hypotheses.

Threat hunting comprises a number of proactive measures to assess environments for threats that are not detected by security tools and practices currently in place. Most organizations have threat hunting programs, or at least recognize their importance, and continually invest in improving their approach to identifying active threats in their environments. By definition, every software supply chain has a large attack surface, as compromise can happen at any stage of the software development lifecycle. An organization must protect its own source code repositories, build environments, and software deliverables not only from outside attackers, but also from insider threats. The added responsibilities of maintaining high-quality code, monitoring third-party dependency usage, and employing security best practices at every level require a combination of strategies and activities. However, without the right kind of threat intelligence, all those efforts can yield inconclusive, subpar results, and in the worst case, fail to prevent software supply chain attacks.

The Spectra Assure platform leverages the renowned ReversingLabs threat hunting expertise and condenses it into an extensive set of built-in threat hunting policies. This reduces time-consuming threat hunting activities that require manual work. Supported by explainable threat intelligence provided by Spectra Assure, organizations can confidently automate their processes to reveal security risks, software quality gaps, and indicators of malicious tampering before the software reaches production.

More specifically, diffing and threat hunting policies in Spectra Assure together form a powerful solution for detecting tampering early in the software development process. Differential analysis, or diffing, compares two subsequent versions of the same software to identify modifications and potential indicators of tampering. Those modifications can include anything from file format and hash changes to new or updated dependencies, introduced or resolved known vulnerabilities, and new or altered software functionalities. The most relevant differences in this context are behavior changes. When Spectra Assure compares two software versions, threat hunting policies highlight behavior changes that can be considered anomalies and therefore warrant a deeper review or investigation.
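The core idea of behavior diffing can be illustrated with a short conceptual sketch. The function and the behavior identifiers below are illustrative assumptions for explanation only, not the actual Spectra Assure API or report format:

```python
# Conceptual sketch: compare the behaviors detected in two subsequent
# versions of a package. Behaviors introduced in the new version are the
# prime candidates for tampering review.

def diff_behaviors(old_behaviors: set, new_behaviors: set) -> dict:
    """Return behaviors introduced, removed, and unchanged between versions."""
    return {
        "introduced": new_behaviors - old_behaviors,  # new capabilities: review these
        "removed": old_behaviors - new_behaviors,     # dropped capabilities
        "unchanged": old_behaviors & new_behaviors,   # stable, expected behavior
    }

# Hypothetical behavior sets for two versions of the same package.
v1 = {"Network/HTTP", "File/Read"}
v2 = {"Network/HTTP", "File/Read", "Evasion/AntiDebug"}

delta = diff_behaviors(v1, v2)
print(delta["introduced"])  # {'Evasion/AntiDebug'} -> warrants deeper investigation
```

An anti-debugging capability appearing between two releases of an otherwise unchanged package is exactly the kind of anomaly that would trigger a threat hunting policy.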

Threat hunting policies are also used to predict novel supply chain attacks based on expected attacker patterns and heuristic rules. ReversingLabs regularly collects and analyzes packages from popular package management repositories (npm, PyPI, NuGet, RubyGems). In each of those software communities, ReversingLabs identifies and tracks software behaviors. Malicious behaviors, as well as behaviors that are uncommon or unusual for a community, are distinguished from typical behaviors that are expected from highly used packages in a community. This wealth of data is then used to enrich analysis reports with behavior prevalence information. Prevalence and distribution of detected behaviors in a software package are key points in onboarding third-party components, deciding which components require manual review, and preparing risk mitigation or issue resolution plans.

Software behaviors

In the context of Spectra Assure, software behaviors (or just "behaviors") are human-readable descriptions of file intent detected in the software during analysis. As part of the static analysis and file decomposition process, Spectra Assure converts complex code patterns into descriptions that clarify what the analyzed software is capable of doing, or how the software may "behave" when used. The code patterns, metadata, and file content collected during analysis must fulfill specific conditions for Spectra Assure to identify them as software behaviors. In Spectra Assure reports, those conditions are explained as reasons why a behavior was triggered.

A file can trigger dozens of behaviors, not all of which are equally problematic. It's important to understand the context: the type of software and its main functionalities, and how the detected behaviors match what a developer or end user expects the application to be capable of.

Spectra Assure products can identify and describe hundreds of software behaviors. They are organized into the following general categories:

  • Anomaly - Contains unusual characteristics
  • Autostart - Tampers with autostart settings
  • Behavior - Automatically executes activities as a user
  • Disable - May disable system services
  • Document - Exhibits unusual activities when handling documents
  • Evasion - Tries to evade common debuggers/sandboxes/analysis tools
  • Exploit - Contains known exploits against the system
  • Execution - Creates other processes or starts other applications
  • Family - Associated with known malicious families
  • File - Accesses files in an unusual way
  • Flow - Leaks sensitive information to external hosts
  • Macro - Contains macro functions or scripts
  • Memory - Tampers with memory of foreign processes
  • Monitor - Able to monitor host activities
  • Network - Has network-related indicators
  • Packer - Contains obfuscated or encrypted code or data
  • Payload - Extracts and launches new behavior in an unusual way
  • Permissions - Tampers with or requires permissions
  • Registry - Accesses registry and configuration files in an unusual way
  • Search - Enumerates or collects information from a system
  • Settings - Tampers with system settings
  • Signature - Matches a known signature
  • Steal - Steals and leaks sensitive information
  • Stealth - Tries to hide its presence

Every behavior is assigned a unique ID, which is visible in the analysis reports when the behavior is triggered for the analyzed file. Similar to policy controls, custom filters for behaviors are supported in the policy configuration. Users can target each behavior by copying its unique ID from the analysis reports and adding it to the appropriate section in the policy configuration.
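As a conceptual sketch of how per-behavior filtering might work in an automated pipeline: a team copies behavior IDs from analysis reports into suppress and escalate lists, then sorts newly triggered behaviors accordingly. The report layout, the behavior IDs, and the list names below are hypothetical illustrations, not the real Spectra Assure policy configuration syntax:

```python
# Hypothetical sketch: sort triggered behaviors by their unique IDs, as
# copied from analysis reports. IDs and report structure are illustrative.

SUPPRESSED_IDS = {"B1234"}  # behaviors already reviewed and accepted by the team
ESCALATED_IDS = {"B5678"}   # behaviors that must always block a release

def filter_behaviors(triggered: list) -> dict:
    """Sort a report's triggered behaviors into triage buckets by ID."""
    verdicts = {"suppressed": [], "escalated": [], "review": []}
    for behavior in triggered:
        behavior_id = behavior["id"]
        if behavior_id in SUPPRESSED_IDS:
            verdicts["suppressed"].append(behavior_id)
        elif behavior_id in ESCALATED_IDS:
            verdicts["escalated"].append(behavior_id)
        else:
            verdicts["review"].append(behavior_id)
    return verdicts

report = [{"id": "B1234"}, {"id": "B5678"}, {"id": "B9999"}]
print(filter_behaviors(report))
```

The point of keying on unique IDs rather than behavior names is stability: names and descriptions can change between product versions, while the ID reliably identifies the same behavior across reports.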

Behavior prevalence information collected from software communities complements behavior descriptions in Spectra Assure reports, and makes unusual, suspicious, or downright malicious behaviors easier to surface and assess. Such behaviors may be indicators of software tampering, so it is highly recommended to manually review them and apply remediation advice suggested by relevant threat hunting policies.

When prevalence information is available for a behavior, it can be one of the following:

  • Common - the detected behavior is often found in the community the software component belongs to
  • Uncommon - the detected behavior is rare within the community the software component belongs to
  • Anomalous - the detected behavior has never been seen in the community the component belongs to
  • Important - the detected behavior is not malicious, but should be prioritized for code intent review
  • Malicious - the detected behavior is seen only in malicious packages within the community the component belongs to

When prevalence information is not available for a behavior, it is indicated by the "Unknown" prevalence status.
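One way to act on these statuses in an automated pipeline is to map each prevalence value to a triage decision. The mapping below is a hypothetical policy choice a team might make, not something Spectra Assure prescribes:

```python
# Hypothetical triage mapping from behavior prevalence to a pipeline decision.
PREVALENCE_DECISION = {
    "Common": "accept",      # typical for the community; usually no action needed
    "Uncommon": "review",    # rare in the community; worth a closer look
    "Anomalous": "review",   # never seen in this community; manual review
    "Important": "review",   # not malicious, but prioritize code intent review
    "Malicious": "block",    # seen only in malicious packages; stop the release
}

def triage(prevalence: str) -> str:
    # Fall back to manual review when prevalence is "Unknown" or missing.
    return PREVALENCE_DECISION.get(prevalence, "review")

print(triage("Malicious"))  # block
print(triage("Unknown"))    # review
```

Defaulting the "Unknown" status to manual review is the conservative choice: absence of prevalence data is not evidence that a behavior is safe.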