Security Architecture
A security architecture defines how a system should be built to ensure security. It includes guidelines, behaviors, and features necessary to protect data and operations. Security architectures can be general principles for designing secure systems or specific requirements for a particular system or software.
Core Elements of Security Architecture
To enforce security, certain fundamental design features must be present in any secure system:
- Privileged Mode Instructions – Special commands that control hardware and processes, usually restricted to high-privilege levels.
- Processor States – OS-managed states that dictate the privilege level of a running process.
- Memory Management – Controls how RAM is used, including virtual memory and separation of code and data.
- Abstraction Layers – Prevents interference between different system components, making maintenance and security easier.
- Data & Code Isolation – Stops processes from accessing or modifying each other’s data and code, enforced by hardware and OS.
- File System Permissions – Restricts access to files and devices based on security settings.
- Security Kernel – Monitors running programs for security threats and can force them into a safe state or shut them down if necessary.
Additional Security Measures
Once the basic security features are in place, other security mechanisms can be added:
- Access Control – Manages user authentication and permissions.
- Virtual Machines – Encapsulate entire operating systems for isolation and security.
- Sandboxing – Isolated environments for safely testing software and detecting threats.
- Cryptography – Protects data through encryption, digital signatures, and secure storage.
Each of these elements connects to security models that help define and enforce secure system behavior.
Secure defaults (derived from NIST SP 800-53 control SA-8, control enhancement 23, also known as restrictive defaults) – from a manufacturer’s point of view, this means products are securely configured “as-shipped”: customers don’t have to apply a lot of configuration out of the box, the product or software is capable of preventing breaches from the get-go, and the product should initialize/start up in a secure state. It also means that during any unsuccessful initialization, the product should still perform its actions in a secure state, or not perform the actions at all.
Failure – an action, or behavior, that deviates from the documented, or expected behavior of some component of the system.
Fail securely – a failure of any of the following should not result in a violation of security policy: functions, mechanisms, or recovery actions. By “recovery actions” we don’t mean failed recovery actions; rather, the recovery actions triggered by a failure must not themselves cause a violation of security policy (NIST SP 800-160). A system that successfully implements the fail-securely principle can either provide degraded or alternative functionality in a secure fashion, or prevent the system from functioning in a non-secure state altogether.
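A minimal Python sketch of failing closed: any unexpected failure in the authorization path results in denial rather than access. The permission store and user names here are hypothetical.

```python
# Fail-securely sketch: on any unexpected failure, default to the most
# restrictive outcome (deny) rather than allowing access.

PERMISSIONS = {"alice": {"read"}, "bob": {"read", "write"}}

def is_allowed(user: str, action: str) -> bool:
    try:
        # Raises KeyError for unknown users
        return action in PERMISSIONS[user]
    except Exception:
        # Any failure (missing user, corrupted store, etc.) fails closed.
        return False

# An unknown user causes a lookup failure, and the failure itself denies access.
```

The key design choice is that the error path and the deny path are the same path, so a failure can never widen access.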
Continuous protection – a system can reasonably detect actual or impending failures during: initialization, normal operation, shutdown, and maintenance, and can circumvent any failed component by reconfiguring itself, completely shut down, or revert to a previously secured version.
Defense in Depth (DiD)
Defense in Depth (DiD) is a security strategy first introduced by the U.S. National Security Agency (NSA). It takes a layered approach to protect an organization’s assets, similar to how a castle defends its treasures. Just as a castle has multiple security layers—vaults, guards, walls, and moats—DiD uses multiple security measures to make it harder for attackers to succeed.
Layers of Defense:
- Data – Protects data using encryption, access controls, and leak prevention.
- Application – Secures applications with firewalls, monitoring, and leak prevention.
- Host – Uses endpoint protections like antivirus and patch management.
- Internal Network – Controls access within the network with firewalls and intrusion detection.
- Perimeter – Prevents unauthorized access with firewalls, malware analysis, and secure zones (DMZs).
- Physical – Uses locks, access control, and barriers.
- Policies & Awareness – Reduces insider threats through training and security policies.
While DiD was effective when organizations had centralized data centers, modern IT environments—cloud computing, remote work, and mobile devices—make the traditional model less effective. Insider threats remain a significant challenge, and some security experts argue that DiD is outdated because it can be too rigid.
However, the core idea of using multiple layers of security remains valuable. Instead of viewing it as a rigid model, organizations should adapt it to modern environments, combining administrative, technological, and physical controls to protect their systems.
Keep it Simple (AKA: Reduced Complexity) – a simpler system results in fewer vulnerabilities. It’s also easier to verify that security policies are correctly implemented, and it gives you better assurance when analyzing vulnerabilities: whether they truly exist, and whether the analysis of them is correct and complete.
Zero trust – an architecture in which nothing is trusted by default. Just like the name implies, devices and users need to be authenticated and authorized for each and every action. For example, you might require users to authenticate when they first arrive at their desks, log in again to access email, again to access network folders, again for each separate subset of folders, again to access the mainframe, and so on. The level at which this type of policy is implemented needs to be well documented and governed, so think about zero trust in the context of things like change management, baselines, and security policies.
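A toy Python sketch of the per-action idea: every call re-verifies the caller’s credential instead of relying on an earlier session. The token store and resource names are hypothetical.

```python
# Zero-trust sketch: no session is trusted; each action re-checks the token.

VALID_TOKENS = {"tok-alice": "alice"}  # hypothetical token -> user mapping

def authorize(token: str, resource: str) -> str:
    """Verify the token on every single request; deny if it cannot be verified."""
    user = VALID_TOKENS.get(token)
    if user is None:
        raise PermissionError("authentication required for every action")
    return f"{user} granted access to {resource}"
```

In a real deployment the check would also evaluate device posture and policy, but the principle is the same: verification happens per action, not per login.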
Privacy by design – privacy should be implemented throughout the entire SDLC, and it needs to be collaborated on and communicated at all staffing levels throughout the project.
Trust but verify – has two additional names to be aware of: system assurance and security verification. This is basically a process of monitoring for the presence of proper behaviors and the absence of improper behaviors, each measured against some type of measurable criteria:
- Presence of proper behaviors
  - Measurable criteria
- Absence of improper behaviors
  - Measurable criteria
An example would be monitoring the CPU usage of servers to make sure it stays within a certain percentage threshold; another might be monitoring config files to make sure that unauthorized changes aren’t made.
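The CPU example can be sketched in a few lines of Python; the threshold and the sample readings are hypothetical, and a real monitor would pull readings from the servers themselves.

```python
# "Trust but verify" sketch: compare observed behavior against a
# measurable criterion and flag anything that violates it.

CPU_THRESHOLD = 85.0  # percent; the documented, measurable criterion

def check_cpu(readings):
    """Return the readings that violate the documented threshold."""
    return [r for r in readings if r > CPU_THRESHOLD]

# 93.2 exceeds the criterion, so it would be flagged for review.
violations = check_cpu([42.0, 61.5, 93.2, 70.0])
```

The same pattern applies to the config-file example: establish the measurable criterion (a known-good hash), observe, and flag deviations.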
Shared responsibility – as the name implies, but in the context of design components that are shared during the design of a system. ISC2 only mentions the security controls of logging, specifying user groups, and testing in non-production environments.
Microservices – can be thought of as a collection of services that communicate with each other. They use standard communications protocols and well-defined APIs, and they’re independent of any vendor, product, or technology. Microservices are built around capabilities. For example, if you wanted to build a mobile application that connects people who want to make money by driving their cars with people who need a ride, and then collects a fee from the rider in order to pay the driver, you might look into which microservices could be combined to create that application; that’s basically what Uber is. Microservices are deployed inside application containers.
Container – if you’re familiar with virtual machines, you know that they tend to use a lot of resources. Containers are virtualized at the operating-system level instead of the hardware level. The best way to think of it: VMs slice your hardware into separate pieces, reserving memory and other resources by running a separate operating system inside each VM, while containers share the underlying operating system of their host, which frees up resources.
Here are the key points:
- Virtualization is done at the OS level, again meaning that a portion of resources is freed up, allowing more containers than VMs on the same hardware.
- They sit on top of the same kernel.
- They have their own process ID lists, network devices, and file system user lists.
- Examples of containers that you might be familiar with:
- LXC
- OpenVZ
- Parallels
- Virtuozzo
- Solaris Containers
- Docker
- HP-UX containers
The difference between a container and a VM is explained well in a video published by IBM Cloud.
Container and application container are the same thing; just be aware that you could encounter either term on the exam.
OS virtualization – the application is given a virtualized view of the operating system (don’t confuse this with virtual machines). A runtime engine allows the container to talk to the OS, and the result is that applications are abstracted from the OS and isolated from each other. Additional info can be found here: https://en.wikipedia.org/wiki/Operating_system_abstraction_layer
A type 1 hypervisor runs at the hardware level and thus has a smaller attack surface (type 1 security). A type 2 hypervisor runs on top of an operating system (like the container vs. VM comparison above), which means there are more vulnerabilities and it’s more attractive to attackers (type 2 security).
Code signing/validation – system components check the digital signature of code before it is executed to ensure its origin.
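As a rough illustration of the idea, here is a Python sketch of a pre-execution check. Real code signing uses asymmetric digital signatures (the vendor signs with a private key, and the system verifies with the vendor’s public key); to keep the example runnable with only the standard library, an HMAC over the code stands in for the signature, and the key and code snippet are made up.

```python
import hashlib
import hmac

# Simplified stand-in for code signing: an HMAC plays the role of the
# digital signature so the check runs with only the standard library.
SIGNING_KEY = b"vendor-secret"  # hypothetical key

def sign(code: bytes) -> str:
    return hmac.new(SIGNING_KEY, code, hashlib.sha256).hexdigest()

def verify_before_exec(code: bytes, signature: str) -> bool:
    # constant-time comparison avoids timing side channels
    return hmac.compare_digest(sign(code), signature)

code = b"print('hello')"
good_sig = sign(code)
tampered = b"print('pwned')"
# Tampered code fails verification, so the system refuses to execute it.
```

The essential behavior is the same as real code signing: any change to the code invalidates the signature, so the system can refuse to run code of unknown origin.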
Serverless cloud computing – doesn’t actually mean there “isn’t a server”, so don’t be fooled. This is where the customer purchases a cloud service based on peak, seasonal, or on-demand usage. Don’t confuse that “on-demand” with something like pay-per-view; the “on demand” we’re talking about here is whenever your services might be in the highest demand. For example, if you’re running a tax preparation service in the United States, the months leading up to the April filing deadline might be your busiest, so you might consider a serverless architecture for your site.
High-Performance Computing Systems (HPCS) – refers to super high-speed computers. These are used for big data and data analytics, for example analyzing the buying patterns of individuals so they can be sold to retailers for ads, etc. HPCs are also used for cryptography, hacking, and cryptanalysis.
HPCS Vulnerabilities
- Latency constraints:
  - “Normal” security tools are too slow
- Improper workloads:
  - A compromised HPC means constrained resources
HPCS Mitigations
- Good architecture:
  - Enclaves
  - Detection tools around the perimeter, not on the system itself
- Monitoring/Logging: you have to consider the tradeoff between resource use and accountability, and then of course you need to have a log review and monitoring process in place.
  - Computational cost vs. accountability
  - Regular reviews
Edge computing – a layer of computing placed at the input source. For example, the layer can be an embedded device, such as an IoT fridge, thermostat, or cooling system.
Fog computing – don’t conflate edge and fog computing. The key difference between these is where the computations are done; just remember the phrase “edge is embedded, fog is further”. The purpose of both is to reduce the computational load on cloud servers: edge computation is done at the source, while fog computation is typically done further out, but not in the cloud.
Edge computing vulnerabilities:
- Network compromise:
  - Denial of service
  - Connection disruption
- Increased attack surface:
  - Lots of devices
These mitigations should be applied to reduce the vulnerabilities:
- Network monitoring
- Incident response process
- Good inventory practices (prevents rogue devices)
Virtualization/sandbox – puts executable code in a controlled and separate space in the operating system that does not have access to the host system or other virtual machines.
VM sprawl – an administrator has lost track of the VMs on a network, which jeopardizes all of the services offered.
Trusted platform module (TPM) – a dedicated hardware chip that generates and stores cryptographic keys used to encrypt drives; the keys are protected by the hardware and are difficult to obtain even when the power is shut down.
Root of trust refers to an immutable (unchangeable) trusted hardware component, such as the TPM. The root of trust is also referred to as a trust anchor, which subsequent actions can rely on to make sure they’re starting from a secure system state.
File integrity monitoring – hashes files on the file system to verify their integrity.
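A minimal Python sketch of the idea: hash contents once to establish a baseline, then re-hash later and report anything that changed. The paths and file contents are hypothetical, and a real monitor would read the files from disk.

```python
import hashlib

# File integrity monitoring sketch: detect changes by comparing
# SHA-256 digests against a stored baseline.

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Baseline captured when the files were known to be good (contents made up).
baseline = {
    "/etc/passwd": digest(b"root:x:0:0"),
    "/etc/hosts": digest(b"127.0.0.1 localhost"),
}

def changed_files(current: dict) -> list:
    """Return the paths whose current contents no longer match the baseline."""
    return [path for path, data in current.items()
            if digest(data) != baseline.get(path)]

# /etc/hosts was modified, so it is the only file flagged.
result = changed_files({"/etc/passwd": b"root:x:0:0",
                        "/etc/hosts": b"127.0.0.1 evil.example"})
```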
Aggregation – the precursor to inference. Aggregation refers to the collection of publicly or less sensitive information in order to infer data that is more sensitive, restricted, or confidential. The term “aggregation” itself is the act of collecting the non-sensitive data.
Inference – the most probable action taken after aggregation. Inference means to look at public or non-sensitive data in order to make a determination of sensitive or confidential information. Inference refers to the act of determining the information of higher sensitivity.
Example: aggregation might look at a celebrity’s past itinerary from the news or other publications, including counties, cities, and businesses visited over the past year. If a majority of the places visited were in Lake County, and included restaurants at weird hours and a commercial real-estate firm, someone could infer that the celebrity is planning to open a restaurant in Lake County. So, while the emails, texts, and business discussions between the celebrity and his or her confidants remain confidential and hidden, the media might be able to infer what this person is planning to do.
Throttling – this is staggering output to ensure that too much doesn’t get disclosed at any one given time. For example, throttling the display of account information to users who enter account numbers on an unauthenticated web page (not a good idea BTW) could reduce the risk of attackers harvesting or siphoning account numbers.
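As a sketch of the mechanism, here is a simple sliding-window rate limiter in Python; the request limit and window size are arbitrary choices for illustration.

```python
import time

# Throttling sketch: refuse requests beyond a maximum per time window,
# limiting how much can be disclosed or harvested at any one time.

class Throttle:
    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window = window_seconds
        self.stamps = []  # timestamps of recent allowed requests

    def allow(self) -> bool:
        now = time.monotonic()
        # Drop timestamps that have fallen out of the current window.
        self.stamps = [t for t in self.stamps if now - t < self.window]
        if len(self.stamps) >= self.max_requests:
            return False  # over the limit: refuse this request
        self.stamps.append(now)
        return True
```

With a limit of, say, 3 lookups per minute, an attacker trying to siphon account numbers is slowed to a crawl while legitimate users are barely affected.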
Anonymization – removal of things like social security numbers and names from a database so that only the demographic information is used.
Tokenization – replaces things like SSNs and names with other identifiers that can be cross referenced. The tokens are the identifiers.
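The contrast between the two techniques can be sketched in a few lines of Python. The record fields and token format are made up, and a real token vault would live in a separately secured system.

```python
import secrets

# Anonymization vs. tokenization sketch:
# - anonymization drops the identifiers entirely (irreversible)
# - tokenization swaps them for tokens that a protected vault can map back

vault = {}  # token -> real value; kept in a separately secured store

def tokenize(ssn: str) -> str:
    token = "tok-" + secrets.token_hex(8)
    vault[token] = ssn
    return token

record = {"name": "Jane Doe", "ssn": "123-45-6789", "zip": "60601"}

# Anonymized: only demographic data remains; the SSN and name are gone.
anonymized = {"zip": record["zip"]}

# Tokenized: the SSN is replaced, but vault[token] can recover it
# when an authorized system needs the real value.
tokenized = {**record, "ssn": tokenize(record["ssn"])}
```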
Software as a service (SaaS) – applications run through a web browser. SaaS affords the least amount of visibility into the system or application, since everything is managed by the provider. Limited configuration capabilities.
Network as a Service (NaaS) – as in the title, networking/transport capabilities are provided.
Infrastructure as a Service (IaaS) – customer controls operating systems, storage, applications, and possibly some networking capabilities. This type of service offers a greater amount of visibility and control.
Platform as a Service (PaaS) – a deployment platform for custom applications. The customer has control over the configuration of the environment and the custom applications only.
Private cloud – in house cloud or data center, not open to the public. This can also be referred to as “on-prem” cloud.
Public cloud – offered to the general public; anyone can be a customer.
Government cloud supports government agencies and their contractors. Not open to the general public.
Community cloud – offered to communities or types of organizations, such as taxing agencies or film industry organizations. The organizations themselves may take part in hosting them, or the offering is exclusive to organizations in that category.
Industrial control systems (ICS) – used to control industrial processes, like manufacturing, logistics, distribution, etc., and contain sensors and gauges to support automated control. The three types to know for the exam are:
- Supervisory control and data acquisition (SCADA) – commonly used in large-scale utility companies to automate the control of utility generation, distribution, and production.
- Distributed control systems (DCS) – smaller scaled SCADA-like system, usually confined to a geographic area or plant.
- Programmable logic controllers (PLC) – control a single element of the DCS or SCADA, such as the HVAC for a particular office or factory on the compound. PLCs may be operated with physical knobs and buttons, but can also be part of the larger DCS/SCADA.
Vulnerabilities in ICS
ICS share some weaknesses with regular computer systems, but also have unique risks due to their specialized nature:
- Limited Features: Many ICS components lack full operating system protections. Some use open-source OSs, which can introduce new vulnerabilities.
- Minimal Security Tools: Standard cybersecurity protections often can’t be used with ICS.
- Long Lifespan: ICS are typically expected to run for 10–20 years, often without major updates or upgrades.
- Easy to Misconfigure: Proper setup and maintenance require specialized knowledge. Many ICS were not designed with easy reconfiguration or updates in mind, especially as they age.
- Vulnerable to DoS Attacks: ICS often lack strong communication protections and can break down when receiving incorrect or unexpected input.
- Real-World Consequences: Attacks on ICS can affect physical processes—causing equipment failure, damage, or safety risks.
- Remote and Unmonitored: ICS devices are often installed in distant or isolated areas with weak physical security, making them easier targets for physical tampering.
- Limited Transparency: Proprietary systems can’t always be independently assessed for security; users must trust vendor claims. Open systems, while more transparent, are also complex and require advanced skills to secure.
ICS Security Mitigations
To protect ICS from threats, specific strategies are recommended:
- Network Isolation: Keep ICS networks separate from other networks using air gaps or firewalls to limit exposure.
- Strict Access Controls: Limit who can access the system and monitor activity closely. Set alarms to alert teams quickly if something goes wrong.
- Network Segmentation: Break the network into smaller, controlled sections. Using a “zero trust” model ensures every component is verified before access is allowed.
- Secure Communications: Protect data in transit using encryption and other safeguards to prevent unauthorized access or changes.
- Change Control: Carefully manage and monitor all changes to the system—software updates, configuration changes, or security settings—to catch unauthorized modifications.
Internet of Things (IoT) – small form-factor devices (which can include mobile devices) that offer very little security out of the box and limited vendor support, such as wireless drawing pads, smartboards, thermostats, refrigerators (not the fridge itself, but the controlling device), Bluetooth devices, and printers, all of which connect to the internet or a local network.
Embedded system – a dedicated platform designed for a single function, which can include a system on a chip (SoC). Examples include stand-alone MP3 players and digital cameras.
Embedded Systems
Embedded systems are small computers built into devices like cars, airplanes, appliances, medical equipment, and security systems. They replace or enhance traditional controls (like thermostats or regulators) with digital functions, improving reliability, efficiency, and data collection. These systems are typically inexpensive, power-efficient, and easier to maintain than older analog systems.
However, they also pose operational risks. If an embedded system fails, the entire device may stop working. In critical systems—such as those affecting human safety—redundant controls are often required to ensure reliability.
Many embedded systems run on fixed, read-only code that can’t be updated. Others allow firmware updates locally or remotely, but updating them still poses challenges, including risk assessment, validation, and secure deployment.
The line between embedded systems and Industrial Control Systems (ICSs) is often blurred. ICSs frequently include embedded systems, and sometimes the ICS itself is embedded into the equipment it controls.
Vulnerabilities
Common vulnerabilities in embedded systems include:
- Coding flaws: Issues like poor input validation, buffer overflows, and memory mismanagement often arise due to OS limitations or bad coding.
- Web interfaces: Embedded web-based interfaces can be exposed to known attacks if poorly secured.
- Weak authentication: Devices may have default or hardcoded passwords that give easy access to attackers.
- Poor cryptography: Outdated or flawed algorithms are common, especially for authentication functions.
- Reverse engineering: Attackers can extract and analyze firmware to find weaknesses or steal intellectual property.
- Malware: Advanced threats like Stuxnet have shown how embedded systems can be targeted to disrupt industrial processes.
- Eavesdropping: Without proper encryption, attackers can monitor or manipulate communications, sometimes without detection.
Mitigations
To secure embedded systems:
- Risk assessments: Include embedded devices in your organization’s risk management process, starting with a full inventory.
- Patching: Update firmware where possible, following organizational standards—even if vendor or hardware limitations exist.
- Secure coding: Use strong cryptography, perform code analysis, and apply obfuscation to reduce vulnerabilities.
- Vendor management: Evaluate third-party development and security practices to understand the risk of embedded systems integrated into your environment.
Ransomware – the CBK discusses how your organization needs to consider whether it would pay the ransom. That decision and other safeguards need to be covered in your BCDR efforts, which must also account for situations where even your backups might be unavailable.
Internet of Things (IoT)
Smaller embedded systems and widespread wireless networks have made it possible to connect everyday devices—like refrigerators, road sensors, and smartwatches—to the internet. These devices can now monitor their environment and share data with other systems to help make decisions.
IoT devices range from basic sensors with minimal software to complex machines with critical functions. Some run general-purpose operating systems and support advanced features, including standard communication protocols and encryption for secure communication. Because they’re connected to broader networks, they present a larger attack surface than older, standalone embedded systems.
Common Vulnerabilities in IoT Devices
- Denial of Service (DoS): IoT devices often rely on wireless communication, making them vulnerable to attacks that disrupt service. Standard and proprietary communication protocols can each have weaknesses that attackers exploit. Since data may travel across several networks, these disruptions can cause serious delays and degraded performance.
- Device Security: IoT devices in remote or public places are prone to theft or tampering. Stolen devices can reveal sensitive data, be reverse-engineered, or used in further attacks. Even indirect attacks—like manipulating the monitored environment—can affect device behavior.
- Weak Cryptography: Many IoT devices lack the processing power to perform strong encryption, leaving their data and communication at risk.
Distributed Systems
In distributed systems, computing and data storage are spread across multiple independent devices (or nodes) that work together using messaging and synchronization. These systems are common in cloud computing, peer-to-peer networks, and large-scale data processing. For example, a data search might be split across multiple servers, each handling part of the task, with results combined at the end.
Organizations use distributed systems to reduce costs by sharing work across many lower-cost machines. They often have features like data replication and load balancing, which help maintain performance even if some parts fail.
Common Vulnerabilities in Distributed Systems
- Monitoring Gaps: Simpler systems may lack the tools to detect failures or security issues in real time.
- Insufficient Access Control: Without proper security controls, these systems may not support different data classification levels, putting sensitive information at risk.
Mitigations
- Improve access control, health monitoring, and intrusion detection—if supported by current infrastructure.
- If not, consider isolating the system physically or logically to reduce exposure.
Virtualized Systems
Virtualization allows multiple operating systems and applications to run on a single physical machine by dividing it into separate environments (or virtual machines, VMs). It began in the 1970s on mainframes and became widely adopted in the 2000s for server consolidation and cost savings.
Virtual machines are easy to copy, move, and run—making them ideal for cloud computing. Application virtualization also allows users to run powerful software on less capable devices by using remote servers.
Common Vulnerabilities in Virtualized Systems
- VM Sprawl: Without strong controls, VM copies can multiply and remain active unnecessarily, consuming resources and increasing security risks.
- VM Escape: Weak authentication can allow attackers inside a VM to break out and compromise the host system.
- Hardware & Hypervisor Attacks: Bugs in the hypervisor or weak hardware protections can affect all VMs on a system.
- Lack of Expertise: Poor understanding of cloud and virtualization security can lead to misconfigurations and breaches.
Mitigations
- Apply consistent patching and update policies to all virtual environments.
- Use strong identity and access management (IAM) to protect the hypervisor and enforce fine-grained access controls.
Client-Based Systems
Overview
Client-based systems are the devices users interact with to access information services. These include desktop computers, laptops, point-of-sale terminals, and mobile devices. They connect to servers through a network and are categorized as either thick clients (with more local functionality and storage) or thin clients (with minimal local resources, relying on the server). The line between thick and thin clients is becoming increasingly blurred.
Organizations typically have many client devices, which are constantly being added, replaced, or retired.
Vulnerabilities
Client systems are vulnerable to:
- Malware and misuse
- Hardware or software flaws
- Theft or damage, which can compromise stored data
- Being used as entry points for attacks on other systems (e.g., servers)
Mitigations
To reduce risks:
- Use anti-malware and intrusion detection software on clients
- Implement centralized endpoint management for updates, patches, and monitoring
- Use mobile device management (MDM) to reduce theft or loss risks
- Protect the network with segmentation, firewalls, and intrusion detection
- Educate users through security awareness and training
Server-Based Systems
Overview
Server-based systems provide specific services such as file storage, printing, application hosting, or network services like DNS. They may be hosted on-premises or in the cloud, often forming part of a hybrid infrastructure. Servers are typically managed centrally and used only for their designated purposes.
Vulnerabilities
Common risks include:
- Outdated hardware or software, due to long service lifespans and maintenance challenges
- High-volume logs and activity data that require continuous monitoring
- Exposure to network-based attacks
- Physical access threats to server rooms or network infrastructure
- Risks from the supply chain (vendors, service providers, etc.)
Mitigations
To reduce server risks:
- Add redundancy to allow downtime for updates and patches
- Implement strict access controls (physical, logical, and administrative)
- Regularly monitor and maintain infrastructure, including patch management
- Secure logistics and third-party interactions
Cryptographic Systems
Overview
Encryption is essential for protecting:
- Data at rest (stored data)
- Data in motion (transmitted data)
- Data in use (actively processed data)
Always use proven cryptographic solutions—do not create your own. Understand how to apply encryption effectively:
- Use symmetric encryption for bulk data (requires secure key sharing)
- Learn efficient, secure options for each scenario (covered in detail in the cryptography section)
Database Systems
Overview
Database systems rely on a Database Management System (DBMS) and are hosted on various platforms:
- Endpoints or thick clients (for local applications)
- Servers or server clusters (for higher performance and multi-user environments)
- Cloud or hybrid environments (for scalability and availability)
These systems manage large volumes of data and support high transaction throughput, making them prime targets for attacks.
Vulnerabilities
Database-specific threats include:
- Malformed input (e.g., SQL injection)
- Denial of service (DoS) attacks
- Bypassing access controls
Mitigations:
- Input validation
- Strong authentication and access control
- Regular updates and security patches
- Monitoring and logging access and changes
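The first mitigation can be illustrated with Python’s built-in sqlite3 module: parameterized queries bind user input as data, so malformed input never reaches the SQL parser. The table and the injected string below are made up for illustration.

```python
import sqlite3

# SQL injection defense sketch: placeholder (?) binding keeps user
# input as a literal value instead of splicing it into the SQL text.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

malicious = "alice' OR '1'='1"  # classic injection attempt

# The whole string is treated as one literal name, so the injected
# OR clause is never interpreted as SQL.
rows = conn.execute("SELECT role FROM users WHERE name = ?",
                    (malicious,)).fetchall()
# rows is empty: no user is literally named "alice' OR '1'='1"
```

Had the query been built with string concatenation instead, the same input would have matched every row in the table.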
Cloud-Based Systems
Securing cloud services means evaluating all aspects of a cloud provider’s technology and organization. This can be a massive task. To make it manageable, determine what can be verified through third-party audits and what must be checked directly to meet your organization’s specific needs.
Cloud providers often reduce the impact of security checks by pursuing certifications from trusted independent auditors. These auditors evaluate how well the provider handles computing in a shared, globally distributed environment.
Cloud security is covered in depth in ISC2’s CCSP (Certified Cloud Security Professional) course.
Definitions of Cloud Computing
NIST Definition
Cloud computing is “a model that allows easy, on-demand access to a shared pool of configurable computing resources (like servers, storage, applications) that can be quickly set up or shut down with minimal management effort.”
ISO/IEC 17788 Definition
A similar definition describes cloud computing as a way to access a scalable and flexible pool of shared resources (physical or virtual) with on-demand setup and management.
Understanding Cloud Infrastructure Planes
Cloud data centers are built on three main layers or “planes,” each with physical components:
- Data Plane – Manages data in motion, in use, and at rest.
- Control Plane – Handles routing, switching, and system coordination.
- Management Plane – Manages tasks like scheduling, load balancing, access control, and security.
Another common model describes cloud architecture using:
- Compute
- Storage
- Management
These layers are linked by management systems that keep everything working in sync. Applications use these services through APIs, abstracting the complexity of what’s underneath.
Key Cloud Components
Compute
This refers to the processors and memory used to run applications. They’re managed by hypervisors:
- Type I Hypervisors run directly on the hardware for better performance.
- Type II Hypervisors run within an OS, easier for testing and development.
Some systems use containers to reduce overhead compared to full virtual machines (VMs).
Storage
Cloud storage includes various media (e.g., SSDs, hard drives, tape). Applications access storage through APIs that present virtual storage options (e.g., drives, objects). Behind the scenes, storage can be moved or replicated without affecting users.
Network
Cloud networks use virtual and physical devices (e.g., switches, firewalls) to move data efficiently and securely. Customers can have isolated network environments. As with storage, access and control happen via APIs.
Management
This layer ensures that requests between components are coordinated—for example, adding storage to a VM. It must remain isolated from users to protect the system’s security.
Five Essential Features of Cloud Computing (NIST)
- On-Demand Self-Service – Users can add resources (e.g., storage or server time) without needing help from the provider.
- Broad Network Access – Services are accessible over networks from many types of devices (e.g., phones, laptops).
- Resource Pooling – Resources are shared among many users, with dynamic allocation based on demand.
- Rapid Elasticity – Resources can quickly expand or contract based on usage needs.
- Measured Service – Usage is automatically tracked and optimized (e.g., storage, bandwidth, active accounts).
ISO/IEC Adds One More:
- Multi-Tenancy – Resources are isolated so that one customer’s data and processing are kept separate from others.