CRA Research Overview

The goal of the Models for Enabling Continuous Reconfigurability of Secure Missions (MACRO) Cyber-Security Collaborative Research Alliance (CRA) program is to understand and model the risks, human behaviors and motivations, and attacks within Army cyber-maneuvers. Such understanding and models will lead to an asymmetric advantage in cyber domains against known and unknown attackers, both in the ability to detect and thwart attacks and in the ability to sustain mission progress in the face of ongoing and evolving threats. The overarching scientific goal of this effort is to develop a rigorous science of cyber-decision making that enables military environments to a) detect the risks and attacks present in an environment, b) understand and predict the motivations and actions of users, defenders, and attackers, and c) alter the environment to securely achieve maximal maneuver success rates at the lowest resource cost. Ultimately, we wish to dictate and control the evolution of cyber-maneuvers and adversarial actions.

Winning or losing in the cyber-battlefield is dependent on defender action. However, any defender action should be based on a careful analysis of the totality of the relevant environment, risks, and potential future states. We will provide a rigorous apparatus for enabling that analysis and ultimately making optimal cyber-maneuver decisions. Practically speaking, we will enable defenders to answer, “Given a security and environmental state, what cyber-maneuvers best mitigate attacker actions and enable maneuver success?” Note that this is not a discrete and momentary analysis, but one that is continuous and adaptive within evolving state awareness.

Like its physical counterpart in traditional warfare, the waging of cyber-warfare requires constant re-evaluation of threats via reconnaissance, interpretation of adversarial intent and capability, and adjustments to strategy and resource use. Yet, there exists no theory or practice that performs such functions even in simple civilian cyber-domains, nor is there an approach for heterogeneous, hostile, and constantly changing military operational networks. It is this latter gap in the science of military-oriented computer and network security that we will address.

We envision future operational environments in which models of missions, users, and attackers guide the reconfiguration of security and network infrastructure on a continuous basis. Mission survivability is achieved by altering the security configuration and network capabilities in response to detected adversarial missions, the situational needs of users and resources, and the tools available to defenders. Cost and risk metrics are used to select optimal strategies and configurations that maximize mission success probabilities while mitigating adversarial actions. Models of user, defender, and adversarial behaviors, actions, and needs are used to derive the mission state, as well as to identify those configurations that increase the probability of mission success. In a simplified view of this framework, the computation of risk and the detection of environmental state provide inputs to a global optimization over candidate configurations.

Outcomes: The proposed work will develop a formal framework consisting of models and algorithms for optimizing security in network environments to combat adversarial actions. We base these models around user, defender, and adversarial missions representing the goals and methods of cyber-maneuvers.

Risk

The risk research area seeks to develop theories and models of risk assessment in cyber-environments in order to integrate risk calculations into the mission optimization model. Here we combine traditional system and network risk with human-oriented risk. In the latter, individuals (users, defenders, and attackers) and human-resource interfaces are directly integrated as components of risk valuation. Attackers create risk; defenders mitigate risk; users both create and mitigate risk. Beginning with the mission-based framework, each mission will include the users, defenders, the interacting user/defender team, and, with some probability, attackers. Based on the probability of being attacked and of that attack being detected, each combination of mission/user/defender/resources must select an appropriate mitigation path within the mission model. Our model of risk includes both (a) the likelihood of a negative outcome and (b) the consequence of the outcome occurring. Thus, the risk related to a mission subtask is a vector of outcomes with consequences that may impact not only the task itself, but also the infrastructure, users, and other mission activities.
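To make the (likelihood, consequence) formulation above concrete, the following sketch computes the expected consequence of a mission subtask per impacted asset. This is an illustrative simplification, not the CRA's actual model; the outcome names, asset labels, and numeric values are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    likelihood: float               # (a) probability the negative outcome occurs
    consequence: dict               # (b) impact per affected asset: task, infra, users, ...

def expected_risk(outcomes):
    """Aggregate expected consequence per asset across all outcomes,
    yielding the risk vector for one mission subtask."""
    risk = {}
    for o in outcomes:
        for asset, impact in o.consequence.items():
            risk[asset] = risk.get(asset, 0.0) + o.likelihood * impact
    return risk

# Hypothetical subtask with two possible negative outcomes.
subtask_outcomes = [
    Outcome(0.10, {"task": 0.8, "infrastructure": 0.3}),
    Outcome(0.02, {"task": 1.0, "users": 0.5, "other_missions": 0.4}),
]
print(expected_risk(subtask_outcomes))
```

Note that the result is a vector over assets rather than a single scalar, matching the observation that a subtask's risk may spill over onto the infrastructure, users, and other mission activities.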

Detection

The goal of detection is to determine whether there is an ongoing cyber-threat that can negatively affect the mission and to provide assessments of: (i) the most likely threat; (ii) its impact on the mission (leakage of data, system breakdown, etc.) in terms of increased cost or decreased payout; and (iii) the confidence in the assessment (based on the evidence collected). Detection is influenced by (i) the actions of the attacker and (ii) the dynamics of the environment (which can itself influence the attacker to behave in certain ways).

The central element of detection research is the study of diagnosis-enabling intrusion detection (DEID). Departing substantially from traditional signature and anomaly-based detection, DEID infers high-level attacks and effects using correlations, automated reasoning, and forensic techniques. In DEID: (i) a large volume of data that encompasses all levels of operation at each node (human actions, sensors, applications, OS, network behaviors) is collected across a multitude of monitors; (ii) the observed, correlated evidence is examined and mapped onto characterizations of expected correlated behaviors derived from models of both the system and the human actors; we expect these mappings to allow the determination of normal/attack behaviors with high accuracy (diagnosis); and (iii) if the system is unable to map the observed correlated behaviors to known attacks (e.g., a possible zero-day attack), the relevant information is exported to the human defenders.
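The mapping step (ii) and the fallback to human defenders in step (iii) can be sketched as follows. The behavior characterizations, the set-overlap matching rule, and the 0.5 confidence threshold are all illustrative assumptions, not part of the DEID design itself.

```python
# Hypothetical characterizations of expected correlated attack behaviors,
# each a set of evidence features derived from system and human models.
known_behaviors = {
    "credential_theft": {"failed_logins", "new_admin_account"},
    "data_exfiltration": {"large_outbound_transfer", "off_hours_access"},
}

def diagnose(evidence):
    """Map correlated evidence onto the best-matching known behavior.
    Returns (diagnosis, confidence); diagnosis is None when the evidence
    cannot be mapped (e.g., a possible zero-day) and must be exported
    to human defenders."""
    best, best_score = None, 0.0
    for name, signature in known_behaviors.items():
        overlap = len(evidence & signature) / len(signature)
        if overlap > best_score:
            best, best_score = name, overlap
    if best_score < 0.5:          # assumed confidence threshold
        return None, best_score   # export to human defenders
    return best, best_score

print(diagnose({"failed_logins", "new_admin_account", "port_scan"}))
# ('credential_theft', 1.0)
```

The returned confidence corresponds to item (iii) of the detection assessments above: a measure of how well the collected evidence supports the diagnosis.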

Agility

Agility refers to the context- and mission-aware reconfiguration of the system or the mission by the defender with respect to a potential attack or perceived risk. Such reconfigurations of environment or mission strategies are referred to as cyber-maneuvers. This research effort will focus on developing models and algorithms that reason about the current state, the universe of potential security-compliant maneuvers and end-states, and the impacts of those changes on human users, defenders, and attackers. Furthermore, we will explore game-theoretic models that ensure adversarial actions on mission progress are mitigated by selected maneuvers. Note that some maneuvers may be offensive (such as deception techniques) in that they launch counter-measures that impact would-be attackers.

Broadly speaking, in an agile mission environment the system state must be continuously analyzed based on detected threats, assessed risks, and human reports on mission evolution. Subsequently, the system must be reconfigured toward: (i) preventing and mitigating attacks, thereby maximizing the profit in our mission model; (ii) completing the mission in a secure and resource-optimal way given the current state and the dynamics of the end state; (iii) minimizing risk and accounting for deception; and (iv) integrating the human dynamics that impact cyber-security operations.
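The selection among candidate cyber-maneuvers can be sketched as a small optimization: choose the maneuver whose expected mission success probability, net of resource cost, is highest. The maneuver names, probabilities, and costs below are hypothetical placeholders for values that would come from the risk and detection components.

```python
# Hypothetical candidate maneuvers for the current assessed state:
# each carries an estimated mission success probability and a resource cost.
maneuvers = {
    "no_change":        {"success_prob": 0.60, "cost": 0.00},
    "rotate_addresses": {"success_prob": 0.75, "cost": 0.10},
    "isolate_segment":  {"success_prob": 0.85, "cost": 0.30},
    "deploy_decoys":    {"success_prob": 0.82, "cost": 0.15},
}

def best_maneuver(maneuvers):
    """Select the maneuver maximizing payoff = success probability - cost."""
    return max(maneuvers,
               key=lambda m: maneuvers[m]["success_prob"] - maneuvers[m]["cost"])

print(best_maneuver(maneuvers))  # deploy_decoys
```

In the full framework this scalar payoff would be replaced by the mission model's profit function, and the candidate set would be restricted to security-compliant maneuvers; a game-theoretic treatment would further account for how the attacker responds to each choice.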

Human Dynamics

A key advancement to the science of security is the integration of models of human behaviors and capabilities. Embodying the core of the human dynamics Cross-Cutting Research Issue (CCRI), mission models reason about situational factors that may substantially alter user, defender, or attacker performance. Furthermore, we will develop models of attackers to gauge intent and to identify countermeasures that will mitigate their impact on mission outcomes. Here we will develop models of humans interacting with security systems. By understanding how an attacker, user, or defender (or group) is acting, or will act in response to a stimulus, we can predict the actions they will take. This allows us to estimate the type of attack, the goals of an attack, and the type of response taken by a user or defender, and thus to estimate risk and predict future behavior. Ultimately, we will use these predictions to influence and control cyber-mission evolution and adversarial action. We will conduct experiments to understand users’ mental models and behaviors under a variety of conditions. The developed models will be verified experimentally.