SimulationController

Bases: CybORGLogger

The class that controls the Simulation environment.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `action` | `Dict[str, List[Action]]` | dictionary of agent actions for the step |
| `actions_in_progress` | `Dict[str, Dict]` | actions in progress during the step |
| `actions_queues` | `Dict[str, list]` | queue of actions to be taken during the step |
| `agents` | `dict` | unused in CC4, default None |
| `agent_interfaces` | `Dict[str, AgentInterface]` | dictionary of agents and their interfaces |
| `bandwidth_usage` | `Dict[str, int]` | dictionary of hostnames and their bandwidth usage |
| `blocked_actions` | `list` | list of blocked actions |
| `done` | `bool` | flag for when the episode is complete |
| `dropped_actions` | `list` | list of dropped actions |
| `end_turn_actions` | `Dict[str, Action]` | dictionary of default actions each agent completes after all chosen actions are taken |
| `failed_actions` | `list` | list of failed actions |
| `hostname_ip_map` | `Dict[str, IPv4Address]` | map of hostnames to IP addresses |
| `INFO_DICT` | `Dict[str, _]` | mapping of individual agent knowledge of the environment |
| `init_state` | `Dict[str, _]` | initial state observation data |
| `max_bandwidth` | `int` | scenario maximum bandwidth |
| `message_length` | `int` | scenario message length |
| `np_random` | `RandomNumberGenerator` | seeded numpy random number generator |
| `observation` | `Dict[str, ObservationSet]` | observations of all agents |
| `reward` | `Dict[str, Dict[str, int]]` | current reward for each team |
| `routeless_actions` | `list` | list of routeless actions |
| `scenario` | `Scenario` | scenario object that the simulation is based on |
| `scenario_generator` | `ScenarioGenerator` | the scenario generator that created the scenario |
| `state` | `State` | the current state of the environment |
| `step_count` | `int` | the current step count |
| `subnet_cidr_map` | `Dict[SUBNET, IPv4Network]` | map of subnets to their network IP addresses |
| `team_reward_calculators` | `Dict[str, Dict[str, RewardCalculator]]` | mapping of teams to their reward calculators |
| `team` | `Dict[str, List[str]]` | mapping of teams to agent names |
| `team_assignments` | `Dict[str, List[str]]` | mapping of teams to agent names (duplicate of `team`) |
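As context for these attributes, the sketch below shows one way to obtain a SimulationController and inspect a few of them. It assumes the standard CybORG entry point, the CC4 EnterpriseScenarioGenerator, and that the instantiated controller is exposed as `environment_controller`; check these names against your CybORG version.

```python
from CybORG import CybORG
from CybORG.Simulator.Scenarios import EnterpriseScenarioGenerator

# CybORG builds the SimulationController internally from the scenario generator.
sg = EnterpriseScenarioGenerator()
cyborg = CybORG(scenario_generator=sg, seed=1234)

# Assumption: the controller is exposed as `environment_controller`.
controller = cyborg.environment_controller
print(controller.step_count)       # current step count
print(controller.hostname_ip_map)  # hostname -> IPv4Address
print(controller.team)             # team name -> list of agent names
```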

Functions

__init__

__init__(scenario_generator: ScenarioGenerator, agents: dict, np_random: RandomNumberGenerator)

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `scenario_generator` | `ScenarioGenerator` | | required |
| `agents` | `dict` | | required |
| `np_random` | `RandomNumberGenerator` | | required |
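Direct construction is rarely needed, since CybORG instantiates the controller itself, but a minimal sketch might look as follows. The import path for SimulationController and the use of a numpy Generator for `np_random` are assumptions; `agents` is unused in CC4 and left as None.

```python
import numpy as np

from CybORG.Simulator.Scenarios import EnterpriseScenarioGenerator
# Assumption: this import path may differ between CybORG versions.
from CybORG.Simulator.SimulationController import SimulationController

controller = SimulationController(
    scenario_generator=EnterpriseScenarioGenerator(),
    agents=None,                          # unused in CC4, default None
    np_random=np.random.default_rng(42),  # assumed: a seeded numpy generator
)
```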

calculate_reward

calculate_reward(reward_calculator: RewardCalculator) -> float

Calculates the reward using the reward calculator

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `reward_calculator` | `RewardCalculator` | An object to calculate the reward | required |

Returns:

| Type | Description |
| --- | --- |
| `float` | The reward value for the associated reward calculator |

determine_done

determine_done() -> bool

The done signal is always False.

Returns:

| Type | Description |
| --- | --- |
| `bool` | whether the goal was reached or not |

different_subnet_agent_reassignment

different_subnet_agent_reassignment()

If an agent has a session outside of their subnet, change the agent to the corresponding agent for the subnet. If that agent is not active, activate them.

Note: a red agent may have multiple red sessions assigned to it by the PhishingEmail action (which assigns to the closest connected red agent). However, not all of these will need to be reassigned, so the original red agent's sessions may need to be reindexed. This requires adjustments to state.sessions, state.sessions_counts, state.hosts, and the sessions' children.

This is only required for the EnterpriseScenarioGenerator, and will cause failures in tests that utilise older scenarios if the generator instance is not checked.

execute_action

execute_action(action: Action) -> Observation

Executes the given action

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `action` | `Action` | action to execute | required |

Returns:

| Type | Description |
| --- | --- |
| `Observation` | the observation resulting from the performed action |
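Reusing the controller handle from the setup sketch above, a single action could be executed directly as below. This bypasses the sorting, filtering, and validity checks that step() performs, so it is illustrative only; Sleep is used because it takes no parameters, and the import path follows the CC4 layout.

```python
from CybORG.Simulator.Actions import Sleep

# Execute a no-op action and read back the resulting observation.
# Assumption: Observation exposes its contents via the `.data` dict.
obs = controller.execute_action(Sleep())
print(obs.data)
```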

filter_actions

filter_actions(actions: List[Tuple[str, Action]]) -> List[Tuple[str, Action]]

Checks agent and session exist for each action

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `actions` | `List[Tuple[str, Action]]` | list of actions to filter | required |

Returns:

| Type | Description |
| --- | --- |
| `List[Tuple[str, Action]]` | list of filtered actions |

get_action_space

get_action_space(agent: str) -> dict

Gets the action space for a chosen agent

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `agent` | `str` | agent selected | required |

Returns:

| Type | Description |
| --- | --- |
| `dict` | action space of the agent |
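Continuing the earlier sketch, the returned dict can be inspected as below. The agent name 'blue_agent_0' and the assumption that the 'action' entry maps action classes to validity flags follow CC4 conventions and may differ elsewhere.

```python
# Assumption: CC4-style agent naming ('blue_agent_0', 'red_agent_0', ...).
action_space = controller.get_action_space('blue_agent_0')

# Assumption: the 'action' key maps action classes to a boolean validity flag;
# other keys (e.g. 'session', 'ip_address') enumerate parameter choices.
for action_class, valid in action_space['action'].items():
    if valid:
        print(action_class.__name__)
```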

get_active_agents

get_active_agents() -> list

Gets the currently active agents

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `active_agents` | `list` | list of active agents |
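A small sketch combining this with is_active (documented below); the agent name is illustrative.

```python
# List the agents that currently hold active sessions.
for agent_name in controller.get_active_agents():
    print(agent_name)

# is_active tests a single agent by name (see below).
if controller.is_active('red_agent_0'):  # illustrative agent name
    print('red_agent_0 has an active server session')
```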

get_agent_state

get_agent_state(agent_name: str) -> Observation

Gets agent's current state

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `agent_name` | `str` | | required |

Returns:

| Type | Description |
| --- | --- |
| `Observation` | the agent's current state |

get_connected_agents

get_connected_agents(agent: str) -> list

Gets a list of agents that are connected to the agent

get_last_action

get_last_action(agent: str) -> Action

Gets the last action for a chosen agent

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `agent` | `str` | agent selected | required |

Returns:

| Type | Description |
| --- | --- |
| `Action` | agent's last action |

get_last_observation

get_last_observation(agent: str) -> Observation

Get the last observation for an agent

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `agent` | `str` | name of agent to get observation for | required |

Returns:

| Type | Description |
| --- | --- |
| `Observation` | agent's last observation |
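Reusing the controller handle from above, the last observation for an agent can be read as sketched below; the `.data` layout (a 'success' flag plus per-host entries) is an assumption based on common CybORG usage.

```python
obs = controller.get_last_observation('blue_agent_0')  # illustrative agent name

# Assumption: `.data` holds a 'success' flag alongside per-host entries.
for key, value in obs.data.items():
    print(key, value)
```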

get_observation_space

get_observation_space(agent: str) -> dict

Gets the observation space for a chosen agent

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `agent` | `str` | agent selected | required |

Returns:

| Type | Description |
| --- | --- |
| `dict` | agent observation space |

get_render_data

get_render_data()

Build render data for CC3 - not used for CC4

get_reward

get_reward(agent)

Returns the team's reward

get_reward_breakdown

get_reward_breakdown(agent: str)

Returns host scores from reward calculator

get_true_state

get_true_state(info: dict) -> Observation

Gets the true state

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `info` | `dict` | | required |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `output` | `Observation` | the observation from the true state |
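A hedged sketch of requesting a ground-truth view; the info-dict convention (hostname keys mapping to the fields to reveal) mirrors the INFO_DICT attribute above, but the hostnames and field names here are illustrative assumptions.

```python
# Assumption: info maps hostnames to the state fields to reveal.
info = {
    'example_host_0': {'System info': 'All'},  # illustrative hostname/fields
    'example_host_1': {'Sessions': 'All'},
}
true_state = controller.get_true_state(info)
print(true_state.data)
```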

has_active_non_parent_sessions

has_active_non_parent_sessions(agent_name: str) -> bool

Tests if an agent has active sessions that aren't a parent session

is_active

is_active(agent_name: str) -> bool

Tests if agent has an active server session

replace_action_if_invalid

replace_action_if_invalid(action: Action, agent: AgentInterface)

Returns the action if its parameters are present and valid in the agent's action space; otherwise returns an InvalidAction imbued with a bug report.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `action` | `Action` | action to test if valid | required |
| `agent` | `AgentInterface` | agent that is performing the action | required |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `action` | `Action` | Action parameter if valid, otherwise InvalidAction |

reset

reset(np_random = None) -> Results

Resets the environment

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `np_random` | | | None |

Returns:

| Type | Description |
| --- | --- |
| `Results` | results object from the reset environment |
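A short usage sketch, reusing the controller from above; passing a fresh numpy generator for reseeding is an assumption consistent with the np_random attribute.

```python
import numpy as np

# Reset between episodes; omitting np_random keeps the existing generator.
results = controller.reset(np_random=np.random.default_rng(0))
print(type(results))  # a Results object for the reset environment
```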

reset_observation

reset_observation()

Populate initial observations with OSINT

send_messages

send_messages(messages: dict = None)

Sends messages between agents

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `messages` | `dict` | | None |
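The message format is not specified here; the sketch below assumes fixed-length bit vectors of message_length addressed per sending agent, which is an inference from the message_length attribute rather than a documented contract.

```python
import numpy as np

# Assumption: one fixed-length bit vector per sending agent.
messages = {
    'blue_agent_0': np.zeros(controller.message_length, dtype=np.int8),
}
controller.send_messages(messages)
```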

set_np_random

set_np_random(np_random)

Sets the random number generator

sort_action_order

sort_action_order(actions: Dict[str, List[Action]]) -> List[Tuple[str, Action]]

Sorts the actions based on priority and sets the dropped parameter for actions based on bandwidth usage

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `actions` | `Dict[str, List[Action]]` | dictionary of actions to sort | required |

Returns:

| Type | Description |
| --- | --- |
| `List[Tuple[str, Action]]` | sorted list of actions |

start

start(steps: int = None, log_file: File = None, verbose: bool = False)

Start the environment and run for a specified number of steps.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `steps` | `int` | the number of steps to run for | None |
| `log_file` | `File` | a file to write results to | None |
| `verbose` | `bool` | | False |

Returns:

| Type | Description |
| --- | --- |
| `bool` | whether goal was reached or not |

step

step(actions: dict = None, skip_valid_action_check: bool = False)

Updates the simulation environment based on the joint actions of all agents

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `actions` | `Dict[str, Action]` | name of the agent and the action they perform | None |
| `skip_valid_action_check` | `bool` | if False, each action is checked against the agent's action space to determine its validity; invalid actions are replaced with an InvalidAction object | False |
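Putting the pieces together, a hedged end-to-end sketch of driving the controller for a few steps; the Sleep actions and the agent name passed to get_reward are illustrative.

```python
from CybORG.Simulator.Actions import Sleep

for _ in range(3):
    # One no-op action per active agent; a real policy would select from
    # each agent's action space instead.
    actions = {name: Sleep() for name in controller.get_active_agents()}
    controller.step(actions=actions, skip_valid_action_check=False)
    print(controller.step_count, controller.get_reward('blue_agent_0'))
```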