SECURE DEFAULTS – derived from NIST SP 800-53 control SA-8, control enhancement (23), also known as restrictive defaults. From a manufacturer’s point of view, this means products are securely configured “as-shipped”: customers don’t have to apply a lot of configuration out of the box, the product or software can prevent breaches from the get-go, and the product initializes and starts up in a secure state. It also means that if initialization fails, the product should still perform its actions in a secure state, or not perform them at all.
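A minimal sketch of what “secure as-shipped” might look like in software configuration; the field names and values below are assumptions for illustration only, not taken from any specific product.

```python
# Secure defaults sketch: the configuration ships in its most restrictive
# state, and the customer must explicitly opt in to relax anything.
from dataclasses import dataclass

@dataclass
class ServiceConfig:
    remote_admin_enabled: bool = False      # disabled as-shipped (assumed field)
    require_tls: bool = True                # encryption on by default
    force_password_change_on_first_login: bool = True
    allowed_login_attempts: int = 3         # restrictive lockout threshold

config = ServiceConfig()                    # the secure "as-shipped" state
```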
Failure – an action or behavior that deviates from the documented or expected behavior of some component of the system.
Fail securely – failure of functions, mechanisms, or recovery actions should not result in a violation of security policy. By “recovery actions” we don’t mean failed recovery actions; rather, the recovery actions triggered by a failure shouldn’t themselves cause a violation of security policy (NIST SP 800-160). If the system has successfully implemented the principle of fail securely, it can provide degraded or alternative functionality in a secure fashion, or prevent the system from functioning in a non-secure state altogether.
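A minimal sketch of the fail-securely idea in code, assuming a hypothetical policy-lookup function: when the authorization mechanism itself fails, the system denies access (fails closed) rather than allowing it by accident.

```python
def check_policy(user: str, resource: str) -> bool:
    # Placeholder for a real policy-engine lookup; raising here simulates
    # a failure of the mechanism itself (hypothetical helper).
    raise NotImplementedError("policy engine lookup goes here")

def is_authorized(user: str, resource: str) -> bool:
    try:
        return check_policy(user, resource)   # normal path
    except Exception:
        # Fail securely: a failure must not violate security policy,
        # so deny access instead of silently allowing it.
        return False

print(is_authorized("alice", "payroll_db"))   # prints False: denied on failure
```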
Continuous protection – the system can reasonably detect actual or impending failures during initialization, normal operation, shutdown, and maintenance, and can circumvent any failed component by reconfiguring itself, shutting down completely, or reverting to a previously secured version.
Keep it Simple (AKA: Reduced Complexity) – a simpler system results in fewer vulnerabilities. It’s also easier to verify security policy implementations, giving you better assurance that they truly exist, are correct, and are complete.
Zero trust – an architecture in which nothing is trusted. Just as the name implies, devices and users need to be authenticated and authorized for each and every action. For example, you might require users to authenticate when they first arrive at their desk, log in again to access email, again to access network folders, again for each separate subset of folders, again to access the mainframe, and so on. The level at which this type of policy is implemented needs to be well documented and governed, so think about zero trust in the context of things like change management, baselines, and security policies.
Privacy by design – privacy should be implemented throughout the entire SDLC, and it needs to be coordinated and communicated at all staffing levels throughout the project.
Trust but verify – has two additional names to be aware of: system assurance and security verification. This is basically a process of monitoring for the presence or absence of proper or improper behaviors, against some type of measurable criteria:
- Presence of proper behaviors, against measurable criteria
- Absence of improper behaviors, against measurable criteria
An example would be to monitor the CPU usage of servers to make sure they’re within a certain percentage threshold; another might be to monitor config files to make sure that changes aren’t made.
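A minimal sketch of the CPU-usage example, assuming the third-party psutil package (pip install psutil) and an arbitrary 80% threshold as the measurable criterion.

```python
import psutil   # third-party package: pip install psutil

CPU_THRESHOLD_PERCENT = 80   # the measurable criterion (assumed value)

def verify_cpu_usage() -> bool:
    """Return True if CPU usage stays within the expected criterion."""
    usage = psutil.cpu_percent(interval=1)   # sample CPU usage over one second
    if usage > CPU_THRESHOLD_PERCENT:
        print(f"Improper behavior detected: CPU at {usage}%")
        return False
    return True

if __name__ == "__main__":
    verify_cpu_usage()
```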
Shared responsibility – same as the name implies, but in the context of design components that are shared during the design of a system. ISC2 only mentions the security controls of logging, specifying user groups, and testing in non-production environments.
Microservices – these can be thought of as a collection of services that communicate with each other.
They use a standard communications protocol, well-defined APIs, and they’re independent of any vendor, product, or technology.
Microservices are built around capabilities. For example, if you wanted to build a mobile phone application that connects people who want to make money by driving their cars with people who need a ride, and then collects a fee from the rider in order to pay the driver, you might look into which microservices could be gathered to create that application. That’s basically what Uber is. Microservices are deployed inside application containers.
Container – if you’re familiar with virtual machines, you know that they tend to use a lot of resources. Containers are virtualized at the operating system level instead of the hardware level. If you’re not familiar with this, the best way to think of it is that VMs slice your hardware into separate pieces, reserving memory and such by running separate operating systems within each VM, while containers use the same underlying operating system as their host, which frees up resources.
Here are the key points:
- They’re virtualized at the OS level, again meaning that resources are freed up, allowing more containers than VMs on the same hardware.
- They sit on top of the same kernel.
- They have their own process ID lists, network devices, file systems, and user lists.
- Examples of containers that you might be familiar with:
- LXC
- OpenVZ
- Parallels
- Virtuozzo
- Solaris Containers
- Docker
- HP-UX containers
“Container” and “application container” are the same thing; be aware that you could encounter either term on the exam.
OS virtualization – basically where the application is given a virtualized view of the operating system. Don’t confuse this with virtual machines. A runtime engine allows the container to talk to the OS, and the result is that applications are abstracted from the OS and isolated from each other. Additional info can be found here: https://en.wikipedia.org/wiki/Operating_system_abstraction_layer
A Type 1 hypervisor (with Type 1 security) runs at the hardware level and thus has a smaller attack surface. A Type 2 hypervisor (with Type 2 security) runs on top of an operating system (like the container vs. VM comparison above), which means there are more vulnerabilities and it’s more attractive to attackers.
Code signing/validation – system components check the digital signature of code before it is executed to ensure its origin.
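A minimal sketch of signature validation before execution, assuming the third-party cryptography package, an RSA public key, and hypothetical file paths; a real component would refuse to load or run the code when verification fails.

```python
# Code validation sketch: verify a detached signature over a code file
# before allowing it to execute (paths and key type are assumptions).
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.exceptions import InvalidSignature

def verify_code(code_path: str, sig_path: str, pubkey_path: str) -> bool:
    """Return True only if the code's detached signature checks out."""
    with open(pubkey_path, "rb") as f:
        public_key = serialization.load_pem_public_key(f.read())
    with open(code_path, "rb") as f:
        code = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, code, padding.PKCS1v15(), hashes.SHA256())
        return True          # origin and integrity verified
    except InvalidSignature:
        return False         # refuse to execute unsigned or tampered code
```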
Serverless cloud computing – doesn’t actually mean there “isn’t a server,” so don’t be fooled. This is where the customer purchases a cloud service based on peak, seasonal, or on-demand usage. Don’t confuse that “on-demand” with something like pay-per-view; the “on demand” we’re talking about here is whenever your services are in highest demand. For example, if you’re running a tax preparation service in the United States, the months leading up to the April filing deadline might be your busiest, so you might consider a serverless architecture for your site.
High-Performance Computing Systems (HPCS) – refers to super-high-speed computers. These are used for big data and data analytics, for example analyzing the buying patterns of individuals so the results can be sold to retailers for ads, etc. HPCs are also used for cryptography, cryptanalysis, and hacking.
HPCS Vulnerabilities
- Latency constraints:
- “Normal” tools too slow
- Improper workloads:
- Compromised HPC = constrained resources
HPCS Mitigations
- Good architecture:
- Enclaves
- Detection tools around the perimeter, not on the system itself.
- Monitoring/Logging: you have to consider the tradeoff between resource use and accountability, and then of course you need to have a log review and monitoring process in place.
- Computational cost vs. accountability.
- Regular reviews.
Edge computing – a layer of computing is placed at the input source. For example, the layer can be an embedded device, such as an IoT fridge, thermostat, or cooling system.
Fog computing – don’t conflate edge and fog computing. The key difference between them is where the computations are done; just remember the phrase “Edge is Embedded, Fog is Further.” From what we understand, the purpose of both is to reduce the computational cost on the cloud servers. Edge computing is done at the source; fog computing is typically done further out, but not in the cloud.
Edge computing vulnerabilities:
- Network compromise:
- Denial of service
- Connection disruption
- Increased attack surface:
- Lots of devices
These mitigations should be applied to reduce the vulnerabilities:
- Network monitoring
- Incident response process
- Good inventory practices (prevents rogue devices)
Virtualization/sandbox – puts executable code in a controlled and separate space in the operating system that does not have access to the host system or other virtual machines.
VM sprawl – the number of VMs on a network has grown to the point where administrators have lost control of them, which jeopardizes all the services offered.
Trusted platform module (TPM) – a hardware chip that generates and stores cryptographic keys used to encrypt drives; the keys are difficult to obtain even when the power is shut down.
Root of trust – refers to an immutable, trusted hardware component such as the TPM. A root of trust is also referred to as a trust anchor that subsequent actions can rely on to make sure they’re starting from a secure system state.
File integrity monitoring – hashes files on the file system and compares them against a known-good baseline to ensure integrity and detect unauthorized changes.
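A minimal file integrity monitoring sketch using SHA-256 hashes; the baseline file name and monitored paths are assumptions for illustration.

```python
import hashlib
import json
import os

def hash_file(path: str) -> str:
    """Compute the SHA-256 hash of a file in streaming chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(paths, baseline_file="baseline.json"):
    """Record known-good hashes for the monitored files."""
    baseline = {p: hash_file(p) for p in paths}
    with open(baseline_file, "w") as f:
        json.dump(baseline, f, indent=2)

def check_integrity(baseline_file="baseline.json"):
    """Report any file that is missing or whose hash differs from the baseline."""
    with open(baseline_file) as f:
        baseline = json.load(f)
    for path, expected in baseline.items():
        if not os.path.exists(path) or hash_file(path) != expected:
            print(f"ALERT: {path} is missing or has been modified")
```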
Aggregation – the precursor to inference. Aggregation refers to the collection of public or less-sensitive information in order to infer data that is more sensitive, restricted, or confidential. The term “aggregation” itself is the act of collecting the non-sensitive data.
Inference – the most probable action taken after aggregation. Inference means to look at public or non-sensitive data in order to make a determination of sensitive or confidential information. Inference refers to the act of determining the information of higher sensitivity.
Example: aggregation might look at a celebrity’s past itinerary from the news or other publications, including counties, cities, and businesses visited over the past year. If a majority of the places visited were in Lake County, and included restaurants at weird hours and a commercial real-estate firm, someone could infer that the celebrity is planning to open a restaurant in Lake County. So, while the emails, texts, and business discussions between the celebrity and his or her confidants remain confidential and hidden, the media might be able to infer what this person is planning to do.
Throttling – this is staggering output to ensure that too much doesn’t get disclosed at any one given time. For example, throttling the display of account information to users who enter account numbers on an unauthenticated web page (not a good idea BTW) could reduce the risk of attackers harvesting or siphoning account numbers.
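A minimal throttling sketch: a fixed-window rate limiter that caps how many account lookups a client can make per minute. The limits and the in-memory store are assumptions for illustration; a real service would use a shared store and tie the limit to an authenticated identity.

```python
import time
from collections import defaultdict

WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 5          # assumed limit

_request_log = defaultdict(list)     # client_id -> recent request timestamps

def allow_request(client_id: str) -> bool:
    """Return True if the client is still under the per-minute limit."""
    now = time.time()
    recent = [t for t in _request_log[client_id] if now - t < WINDOW_SECONDS]
    _request_log[client_id] = recent
    if len(recent) >= MAX_REQUESTS_PER_WINDOW:
        return False                 # throttle: too many lookups this window
    _request_log[client_id].append(now)
    return True
```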
Anonymization – removal of things like social security numbers and names from a database so that only the demographic information is used.
Tokenization – replaces things like SSNs and names with other identifiers that can be cross referenced. The tokens are the identifiers.
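A minimal tokenization sketch: sensitive values are swapped for random tokens, and the real values live only in a separate vault mapping. The in-memory vault dictionary and the sample values are assumptions for illustration; real systems keep the vault in a hardened, access-controlled store.

```python
import secrets

_vault = {}   # token -> original sensitive value (illustrative only)

def tokenize(value: str) -> str:
    """Replace a sensitive value with a random, cross-referenceable token."""
    token = "tok_" + secrets.token_hex(8)
    _vault[token] = value
    return token                 # safe to store in the application database

def detokenize(token: str) -> str:
    """Cross-reference a token back to the original value."""
    return _vault[token]

record = {"name": tokenize("Jane Doe"), "ssn": tokenize("123-45-6789")}
```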
Software as a service (SaaS) – applications run through a web browser. SaaS affords the least amount of visibility into the system or application, since everything is managed by the provider. Limited configuration capabilities.
Network as a Service (NaaS) – as in the title, networking/transport capabilities are provided.
Infrastructure as a Service (IaaS) – customer controls operating systems, storage, applications, and possibly some networking capabilities. This type of service offers a greater amount of visibility and control.
Platform as a Service (PaaS) – a deployment platform for custom applications. The customer has control over the configuration of the environment and the custom applications only.
Private cloud – an in-house cloud or data center, not open to the public. This can also be referred to as an “on-prem” cloud.
Public cloud – offered to the general public; anyone can be a customer.
Government cloud – supports government agencies and their contractors. Not open to the general public.
Community cloud – offered to communities or types of organizations, such as taxing agencies or film industry organizations. The organizations themselves may take part in hosting them, or the offering is exclusive to organizations in that category.
Industrial control systems (ICS) – used to control industrial processes, like manufacturing, logistics, distribution, etc., and contain sensors and gauges to support cybernetics. The three types to know for the exam are:
- Supervisory control and data acquisition (SCADA) – commonly used in large-scale utility companies to automate the control of utility generation, distribution, and production.
- Distributed control systems (DCS) – smaller scaled SCADA-like system, usually confined to a geographic area or plant.
- Programmable logic controllers (PLC) – control a single element of the DCS or SCADA, such as the HVAC for a particular office or factory on the compound. Think knobs and buttons, though a PLC can also be part of the larger DCS/SCADA.
Internet of Things (IoT) – small form factor devices (which can include mobile devices) that offer very little security out of the box and limited vendor support, such as wireless drawing pads, smartboards, thermostats, refrigerators (not the fridge itself, but the controlling device), Bluetooth devices, and printers, all of which connect to the internet or a local network.
Embedded system – a dedicated platform designed for a single function, which can include a system on a chip (SoC). Examples include stand-alone MP3 players and digital cameras.
Ransomware – the CBK talks about how your organization needs to consider whether it would pay the ransom. That decision and other safeguards need to be covered in your BC/DR efforts, including situations where even your backups might be unavailable.