Logs are essentially ledgers: lists of transactions that show what has occurred in a system. Logs can be user-based, component-based, or both. Here are some concepts to be familiar with.

Intrusion Detection System (IDS) – the key is the letter “D” in this solution, in other words it detects intrusions into the system, and sends an automated alert.  

Intrusion Prevention System (IPS) – the key is the letter “P” in this solution, in other words it prevents, or takes automated actions to prevent intrusions into the system rather than simply alerting personnel that an intrusion has occurred.  

IDS/IPS can be placed at the perimeter (gateway) or within a DMZ, on specific hosts, and at different locations throughout the network.  The best choice is a combination of locations. These solutions can detect deviations from a defined baseline of behavior, signature-based attacks, or more advanced deviations (via machine learning).  Considerations for IDS/IPS include maintenance and updates, impact on network performance, and false positives.

SIEM – IDS/IPS typically send their logs and alerts to a centralized solution to ingest and analyze the logs, which can send additional alerts about interesting events to security professionals.  These are called security information and event management systems. Most SIEM solutions have the following components:

Aggregation – collection of logs from various sources, including firewalls, network devices, hosts, and applications.

Correlation – assigning weight to specific types of activities to help with discovery of potential attacks.

Normalization – presentation of the log information in an understandable way to the dashboard users.

Analysis – examining the log and correlation information using scripts and heuristics to discover attacks.

Reporting – reporting mechanisms to alert appropriate personnel about potential security incidents/events.

Storage – archiving capabilities.

If you take the first letters, you can use the term “ACNARS” to help remember the components (think of it as the name of a made-up wizard/sorceress in your mind, because the way some of these work is truly magical!).

Keep in mind the difference between aggregation, correlation, normalization, and analysis as this could be a potential question – not necessarily in the form of “what is the difference” but rather which component might be needed in a SIEM product.
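To make the distinction between aggregation, correlation, and analysis concrete, here is a minimal sketch. All event names, weights, and thresholds are invented for illustration and are not taken from any particular SIEM product.

```python
# Hypothetical sketch of three SIEM components working together.
# Event names, weights, and the threshold are invented examples.

# Aggregation: logs collected from several sources into one place.
aggregated = [
    {"source": "firewall", "event": "port_scan", "host": "10.0.0.5"},
    {"source": "host", "event": "failed_login", "host": "10.0.0.5"},
    {"source": "host", "event": "failed_login", "host": "10.0.0.5"},
]

# Correlation: assign weights to specific types of activity to help
# surface potential attacks.
WEIGHTS = {"port_scan": 5, "failed_login": 2}

def risk_score(events):
    """Sum the weights of all events collected for a host."""
    return sum(WEIGHTS.get(e["event"], 0) for e in events)

# Analysis: apply a heuristic to the correlated score.
ALERT_THRESHOLD = 8
score = risk_score(aggregated)
alert = score >= ALERT_THRESHOLD

print(score, alert)  # 9 True
```

Note how aggregation only collects, correlation only weighs, and analysis applies the heuristic that actually decides whether to raise an alert.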

Log Management

This topic starts by introducing two frameworks, NIST 800-92 and ISO 27001 Annex A, Section A.12.4, both of which cover security log management.

There are four practices that ISC2 considers crucial for logging.

  1. Prioritize log management throughout the organization. Requirements and goals should be defined, including which laws and regulations apply and which policies are already in place.
  2. Establish policies and procedures for log management. Basically this ensures a consistent approach, and that laws and regulatory requirements are being met. As with all policies and procedures, periodic audits, testing, and validation should be conducted to confirm that the standards and guidelines are being followed.
  3. Log management infrastructure needs to be created. So what’s a log management infrastructure? It is a combination of systems and processes that ensures logs are captured, stored with sufficient capacity, protected from accidental or intentional modification, kept from unauthorized access, and processed with adequate capacity at peak times. SIEM solutions are typically used to help with this effort.
  4. Support needs to be provided for all staff with log management responsibilities. This includes training them, providing them with log management tools and tool documentation, providing them with technical guidance, and distributing information to log management staff.

Next we have some examples of events that should be captured in the logs. Remember that an event is simply any action that causes a measurable system change.

• User interaction (such as logging into a device), or badge reader and keypad access attempts
• Connection to a system
• Shutdown or power-on boot sequence
• Transfer of data
• Access to data
• Modification of data
• Disconnection or logout
• CPU fan speed change

Events can be further classified as precursors or indicators.

Precursors are signals (based on events) that suggest a possible change of conditions. For example, if you have a company of 100 employees and 60 of them are filing grievances electronically (which is captured in a log), this could be a precursor to an internal incident, such as employees stealing data. Other examples include an announcement from a threat group that it will attack your company, or a newly discovered vulnerability in a technology that exists in your environment.

Precursors can be used to make adjustments to thresholds and authentication challenges, or simply to increase the levels of security in order to avert an incident.

Indicators are signals that suggest a potential incident is happening or has happened. These are also called indicators of compromise. Rules can be set up in the SIEM to detect potential indicators that might require human interaction.
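As a sketch of what such a SIEM rule might look like, here is a hypothetical detection that flags a burst of failed logins as a potential indicator of compromise. The log shape and threshold are invented for illustration.

```python
from collections import Counter

# Hypothetical SIEM rule: flag hosts with a burst of failed logins as a
# potential indicator of compromise. Threshold and log shape are invented.
FAILED_LOGIN_THRESHOLD = 5

def find_indicators(log_entries):
    """Return hosts whose failed-login count meets the threshold."""
    failures = Counter(
        e["host"] for e in log_entries if e["event"] == "failed_login"
    )
    return [h for h, n in failures.items() if n >= FAILED_LOGIN_THRESHOLD]

logs = [{"host": "srv1", "event": "failed_login"}] * 6 + \
       [{"host": "srv2", "event": "failed_login"}] * 2
print(find_indicators(logs))  # ['srv1']
```

In a real SIEM this logic would be expressed in the product's own rule language, but the idea is the same: cross a configured threshold, raise an alert for human follow-up.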

Keep-alive messages could be an indicator of compromise, depending on your signal rules or alarm configurations.

An additional term to be aware of is Log Normalization, which refers to the process of taking raw logs, ingesting them into a SIEM, and converting them into a consistent, comprehensible format. Think of it as translating the logs into something readable.
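A minimal normalization sketch, assuming a made-up syslog-style line format and invented field names, might parse each raw line into a common schema:

```python
import re

# Hypothetical normalization step: parse a raw log line into a common
# field schema so the SIEM can present it consistently. The log format
# and field names are invented for illustration.
RAW_PATTERN = re.compile(
    r"(?P<ts>\S+ \S+) (?P<host>\S+) (?P<proc>\w+): (?P<msg>.*)"
)

def normalize(raw_line):
    """Turn one raw syslog-style line into a normalized dict."""
    m = RAW_PATTERN.match(raw_line)
    if m is None:
        return None  # unparseable lines would be flagged, not dropped
    return m.groupdict()

line = "2024-01-15 08:30:01 web01 sshd: Failed password for root"
print(normalize(line))
```

Once every source's logs are mapped into the same fields (timestamp, host, process, message), correlation and analysis can treat them uniformly.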

For security information and event management (SIEM) solutions, here are some terms you’ll need to learn.

Managed security services provider (MSSP): this refers to a third party vendor who provides some level of hosted security expertise.
Self-hosted, self-managed: which means that your organization does everything in-house.
Cloud SIEM, self-managed: means that the cloud provider collects and aggregates the logs, but the customer, or your organization, manages the detection systems, the operations, analysis, correlation, rules, alerting, and incident response activities.
Hybrid self-hosted: means that the customer organization hosts all the systems and hardware on site, but the MSSP is a partner in the collection and correlation tasks, and may participate in the overall process.
SIEM as a service: this is where all tasks are provided by the third party, except when incident response is needed.

Real-time monitoring of logs refers to monitoring security logs as events occur, rather than reviewing them after the fact.

Information Security Continuous Monitoring (ISCM) is a principle that denotes a continuous process of defining a strategy, establishing and implementing a program, analyzing and reporting findings, responding to those findings, and improving the strategy and program on an ongoing basis.

NIST 800-137 lists the following continuous steps that possibly overlap:
• Define an ISCM strategy;
• Establish an ISCM program;
• Implement an ISCM program;
• Analyze Data and Report findings;
• Respond to findings; and
• Review and Update the ISCM strategy and program.


Other frameworks reinforce this approach as well, including ISO/IEC 27004:2016, COBIT, PCI-DSS, etc.
In order to have a good ISCM strategy, the following are essential:

• Comprehensive asset inventories
• Performance metrics for controls
• Incident response processes
• Continuous process improvement activities
• Tools and underlying systems infrastructure to gather, analyze and report on the monitored environment.

Some monitoring limitations might include:

• Logging formats that are incompatible with SIEM
• Log manipulations
• Cannot detect gradual attacks or attacks that try to evade detection
• No support for legacy systems
• Transitioning out of cloud logging is costly/difficult

Data loss prevention, data leak prevention (DLP) and egress monitoring all refer to the same thing: looking at what flows out of the organization to make sure that information isn’t being stolen. This can be done in a number of ways.

How it works: DLP typically examines data before it leaves the environment. If the data matches a certain classification, pattern or signature, action must be taken to ensure that the data isn’t exfiltrated. DLP looks for the following:


Signatures – strings that are readily identifiable and can be recognized by the solution. A tool looking for Social Security numbers might look for numbers that match the signature 123-45-6789, or simply nine-digit strings.
Pattern matching – looks for broader conditions of the strings rather than just the strings themselves. An example might be two-word patterns where each word starts with an uppercase character and the rest are lowercase, which could indicate that names are being sent out.
Labeling – the solution looks for specific data labels that indicate the classification, such as “proprietary”, “copyrighted”, or “confidential”.
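The three detection methods can be sketched with simple regular expressions. The patterns and labels below are simplified examples, not a production DLP rule set.

```python
import re

# Hypothetical DLP checks illustrating the three detection methods.
# Patterns and labels are simplified examples, not production rules.

# Signature: a readily identifiable string shape, e.g. an SSN.
SSN_SIGNATURE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

# Pattern matching: broader conditions, e.g. two capitalized words (a name).
NAME_PATTERN = re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b")

# Labeling: look for classification labels embedded in the data.
LABELS = ("proprietary", "confidential", "copyrighted")

def dlp_flags(text):
    """Return which detection methods fire on outbound text."""
    flags = []
    if SSN_SIGNATURE.search(text):
        flags.append("signature")
    if NAME_PATTERN.search(text):
        flags.append("pattern")
    if any(label in text.lower() for label in LABELS):
        flags.append("label")
    return flags

print(dlp_flags("Confidential: Jane Doe, SSN 123-45-6789"))
# ['signature', 'pattern', 'label']
```

Note how the name pattern would also match many innocent phrases; this is exactly why DLP tuning and false-positive handling matter.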

For DLP to be deployed properly, it should examine data in all three states:
• Data at rest would place DLP agents in data storage locations (both physical and logical), such as databases and archives.
• Data in motion would be DLP software that inspects outbound communications traffic.
• Data in use would be DLP agents installed on endpoint devices.

Data discovery, classification, and categorization are critical to having a successful DLP strategy in place.
Examples of egress to be monitored include:
• Email (content and attachments)
• Transfer of data to portable media (flash drives, etc.)
• File Transfer Protocol (FTP)
• Posting to web pages/sites
• Application/application programming interface (API)


DLP enforcement strategies:
• Training – upon detection, the user might be given a reminder about the organization’s policy on sending sensitive information out.
• Attribution – upon detection, the user is prompted to accept or confirm responsibility for their actions in distributing the information.
• Stringency / prevention – upon detection, the DLP stops the transaction, and/or locks the user’s account, and an alert is sent.

User and entity behavior analytics (UEBA) systems operate similarly to SIEMs, and typically have three major components:

• Use cases
• Analytics
• Data

A use case helps establish a behavioral benchmark on the entity’s users. For example, users at an organization might have a business need to stream a lot of video, or download files from the internet. Analytics compare the behavior with the use cases and send alerts to management according to how the solution is configured. Data simply refers to the data gathered from user behavior. AI is used alongside risk appetite to analyze logs and adjust alerting rules.
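A minimal sketch of the baseline comparison, with all metric names, baseline values, and the tolerance factor invented as stand-ins for a configured risk appetite:

```python
# Hypothetical UEBA sketch: compare a user's observed behavior against
# a per-use-case baseline and alert on large deviations. All metric
# names, baseline values, and the tolerance factor are invented.
baselines = {"video_streaming_gb": 5.0, "downloads_per_day": 20}

def ueba_alerts(observed, tolerance=2.0):
    """Alert when an observed metric exceeds its baseline by a factor
    of `tolerance` (a stand-in for configured risk appetite)."""
    return [
        metric for metric, value in observed.items()
        if value > baselines.get(metric, float("inf")) * tolerance
    ]

today = {"video_streaming_gb": 4.2, "downloads_per_day": 55}
print(ueba_alerts(today))  # ['downloads_per_day']
```

Raising the tolerance loosens the rule (fewer false positives, more false negatives); lowering it tightens the rule, which leads directly into the trade-off below.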

The stricter the rules you enforce, the more false positives you get; the less strict the rules, the more false negatives (i.e., more successful attacks).

Additional concepts:

Pattern matching is the process of looking at signature behavior of data movement and activity throughout the organization.  

North-to-south refers to the movement of data out of your organization, or communication over the internet.

East-to-west refers to data patterns that move inside the organization, over internal network connections.
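As a sketch of the distinction, a monitoring tool might classify a flow by whether both endpoints are internal. The internal address range below is an example value; real environments would use their own ranges.

```python
import ipaddress

# Hypothetical traffic-direction classifier. The internal CIDR range
# is an example value, not a recommendation.
INTERNAL = ipaddress.ip_network("10.0.0.0/8")

def traffic_direction(src, dst):
    """east-west if both endpoints are internal, else north-south."""
    both_internal = (
        ipaddress.ip_address(src) in INTERNAL
        and ipaddress.ip_address(dst) in INTERNAL
    )
    return "east-west" if both_internal else "north-south"

print(traffic_direction("10.0.1.4", "10.0.2.9"))  # east-west
print(traffic_direction("10.0.1.4", "8.8.8.8"))   # north-south
```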

Threat intelligence, or threat hunting, is the process of identifying future threats.

External threat intelligence can include a lot of activities and sources of knowledge, such as open-source research, threat modeling, and threat intel from third parties like vendors, governmental entities, and information sharing and analysis centers (ISACs).

Internal threat intelligence refers to internal sources and internal groups to provide the intel using logs, incident reporting, and the results of forensic investigations. A configuration management database or system inventory can also help identify potential threat areas, for example if there are Windows XP or Windows 2008 systems running in your environment, this could be a source of threat intel. Also, access or permission reports can be used to identify people with elevated privileges who could be a target or a risk for unusual activity. 

Patch Notices might come from announcements from the vendor(s), a third party, or general news sources. There should be a daily process in place to look for and analyze patch notices.  

Provided security services refers to third-party provided security services.  The provider performs their own investigative efforts to find out what threats and risks their clients could be facing. This could include threats in a certain region or industry, or threats related to certain products, or threats against specific brands or employees.

Agility refers to the quick learning curve that attackers have.  ISC2 says that attackers are much more flexible and are quick to change their methods in order to adapt to increasing security controls.

A runbook describes how to complete a single task, like resetting a user’s password.

A playbook, on the other hand, contains multiple runbooks and is geared toward a bigger goal or scenario.

Orchestration is the automation of something; in the context of the CISSP, it means automating log management tasks using runbooks and playbooks.  See SOAR below.
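The runbook/playbook relationship can be sketched as follows: each runbook completes one task, a playbook chains runbooks toward a scenario-level goal, and orchestration executes them automatically. All task names here are invented for illustration.

```python
# Hypothetical sketch: runbooks are single tasks, a playbook chains
# them for one scenario, orchestration runs them automatically.

def disable_account(user):   # runbook 1: a single task
    return f"disabled {user}"

def reset_password(user):    # runbook 2: a single task
    return f"reset password for {user}"

def notify_soc(user):        # runbook 3: a single task
    return f"notified SOC about {user}"

# Playbook: an ordered set of runbooks for one scenario.
compromised_account_playbook = [disable_account, reset_password, notify_soc]

def orchestrate(playbook, user):
    """Run each runbook in the playbook in order (the automation piece)."""
    return [runbook(user) for runbook in playbook]

print(orchestrate(compromised_account_playbook, "alice"))
```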

Tuning refers to the fact that even with machine learning and AI, NIDS and HIDS aren’t going to work perfectly out of the box. Some tuning will need to occur which means you’ll have to go into the rules and adjust them according to how your environment is set up. 

Security orchestration, automation, and response (SOAR) involves three main processes:

Orchestration brings all the components of security automation together into one platform.  Disparate systems and incompatible technologies can create issues in the stack.

Automation performs tasks such as log analysis, event analysis, scanning, and follow-on tasks based on playbooks and runbooks.  Automation often uses AI to assist in the process.

Response provides a singular view into the incident detection, management, monitoring, and reporting of potential security incidents in order to automate IR capabilities.

This page has been fully updated with topics from the May 2021 revision to the ISC2 Common Body of Knowledge.