You might be thinking exactly what we did when we first arrived at this topic: why is this in Domain 6 and not in Domain 8? The truth is, the domains are largely irrelevant here, but since you asked, it’s our belief that the testing of software occurs not only in the development phase but also through assessment and audit methods.  Here are the terms you’ll need to be familiar with for this domain:

Architectural model – the prerequisite to an architecture security review.

Architecture security review – a manual review of the product architecture to make sure it fulfills security requirements.  The benefit is that it can help security testing detect architectural violations of security standards.

Business case – a description of the need the software addresses (a usage scenario), or a function of the software that accomplishes or carries out a business need (e.g., the need to deposit checks from a mobile phone without going to a bank); this is a prerequisite to threat modeling.

Threat modeling – a structured, manual review of an application business/use case or scenario, accompanied by pre-compiled security threats.  The benefit is that it helps identify threats, their impact, and possible mitigations that can be developed in the planning and design phase.
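
To make this concrete, here is a minimal sketch (in Python) of what a couple of threat-model entries might capture for the mobile check-deposit business case above. The threats, impacts, and mitigations shown are illustrative assumptions, not a prescribed list:

  # Minimal threat-modeling sketch: each entry pairs a pre-compiled threat with
  # its impact and candidate mitigations for a given business/use case.
  from dataclasses import dataclass, field

  @dataclass
  class ThreatEntry:
      threat: str                      # what could go wrong
      impact: str                      # consequence if the threat is realized
      mitigations: list = field(default_factory=list)

  business_case = "Deposit checks from a mobile phone without going to a bank"

  threat_model = [
      ThreatEntry(
          threat="Spoofing: attacker deposits checks using a stolen session",
          impact="Fraudulent deposits credited to attacker-controlled accounts",
          mitigations=["Re-authenticate before each deposit", "Bind sessions to the device"],
      ),
      ThreatEntry(
          threat="Tampering: check image or amount altered in transit",
          impact="Deposited amount differs from the physical check",
          mitigations=["TLS for all uploads", "Server-side image integrity checks"],
      ),
  ]

  print(business_case)
  for entry in threat_model:
      print(f"- {entry.threat}\n  Impact: {entry.impact}\n  Mitigations: {', '.join(entry.mitigations)}")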

Here are the terms you’ll need to be familiar with during the application development phase (pre-testing environment):

Static source code analysis – AKA manual code review – does not execute the application.  This can help detect insecure programming, outdated libraries, and misconfigurations.  It has six objectives (a brief illustration follows the list):

  1. Verify that all required functions exist.
  2. Verify that there is no foreign code. Foreign code refers to code that is not related to the performance of a required function.
  3. Verify that there is no dead or unreachable code; that is, every line of code within the system should be reachable and executable.
  4. Ensure that there are no back doors, trap doors, debug-mode controls, or special processing-mode controls that could be exploited by an attacker.
  5. Verify that coding standards are met. This means that the organizational, contractual, or otherwise required frameworks, templates, coding standards, and naming conventions applicable to the unit of code being evaluated were used.
  6. Verify that all code is of trustworthy provenance. In other words, it was either originally written for the project, imported from other validated project libraries, or drawn from other trustworthy code libraries and repositories.
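
As a brief illustration of objectives 2 through 4, here is a hypothetical snippet containing the kinds of findings a manual review (or a static analysis tool) should flag without ever executing the code:

  # Hypothetical snippet containing issues a static code review should catch.
  DEBUG_BYPASS = True                            # objective 4: debug-mode control left in the code

  def check_credentials(user, password):
      return False                               # stand-in for a real credential check

  def authenticate(user, password):
      if DEBUG_BYPASS and user == "admin":       # objective 4: back door / special processing mode
          return True
      return check_credentials(user, password)
      print("credentials checked")               # objective 3: dead / unreachable code

  def generate_marketing_report():               # objective 2: foreign code, unrelated to any
      pass                                       # required function of this module

  print(authenticate("admin", "wrong-password")) # prints True because of the back door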

Dynamic testing – code is executed and observed.

Static testing – as stated above, this is a manual review and code is not executed.

Interactive application security testing (IAST), as opposed to DAST and SAST, brings security testing into the running web or mobile application itself. It works with agents that are incorporated into the application being tested, which enables it to look through the application’s logic as far down as the library routines it calls to ensure proper use. Full code coverage for serverless applications that use non-HTTP interfaces is a challenge for IAST.
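
A heavily simplified sketch of the agent idea follows; real IAST products instrument far more than this (sources, sinks, frameworks), and the wrapper below is only an illustrative assumption that flags weak hash algorithms while the application runs:

  # Toy "agent": hook a library routine in-process and observe how the running
  # application uses it while functional tests execute.
  import hashlib

  _original_new = hashlib.new

  def _watched_new(name, *args, **kwargs):
      if name.lower() in ("md5", "sha1"):
          print(f"[agent] weak hash algorithm requested: {name}")
      return _original_new(name, *args, **kwargs)

  hashlib.new = _watched_new                    # agent instruments the library call

  def fingerprint(data: bytes) -> str:          # application code under test (illustrative)
      return hashlib.new("md5", data).hexdigest()

  fingerprint(b"example")                       # the agent reports the weak-hash usage as tests run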

Manual testing – human-guided test scenarios.

Automated testing – an application is used to carry out the test.

Manual/automated penetration testing – sends data in simulated attacks to observe the application’s behavior.
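
As a minimal sketch of the simulated-attack idea, assuming a hypothetical in-scope test URL, the third-party requests library, and (as always) written authorization to test:

  # Send a few classic attack payloads and observe how the application responds.
  import requests                                # third-party; assumed installed

  TARGET = "https://test.example.com/search"     # hypothetical, authorized test system
  payloads = ["' OR '1'='1", "<script>alert(1)</script>", "../../etc/passwd"]

  for p in payloads:
      resp = requests.get(TARGET, params={"q": p}, timeout=10)
      suspicious = resp.status_code >= 500 or "sql syntax" in resp.text.lower()
      print(f"payload={p!r} status={resp.status_code} suspicious={suspicious}")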

Vulnerability scan – looks for weaknesses in system components or configurations.
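
As a toy illustration of the version-checking side of a scan (the inventory and the vulnerable-version list below are made up; real scanners also probe ports, services, and configurations):

  # Compare a component inventory against a list of versions with known weaknesses.
  KNOWN_VULNERABLE = {
      "openssl": ["1.0.1"],                      # illustrative entries only
      "apache": ["2.4.49"],
  }

  inventory = {"openssl": "1.0.1", "apache": "2.4.57", "nginx": "1.25.3"}

  for component, version in inventory.items():
      if version in KNOWN_VULNERABLE.get(component, []):
          print(f"FINDING: {component} {version} has known vulnerabilities")
      else:
          print(f"ok: {component} {version}")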

Fuzz testing – sends large amounts of malformed or random data to the application’s inputs to force a crash or other failure.
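
A minimal fuzzing loop might look like the sketch below; parse_record is a stand-in for whatever input-handling routine is actually under test:

  # Throw random byte strings at an input-handling routine and record any crash.
  import os
  import random

  def parse_record(data: bytes) -> dict:
      text = data.decode("utf-8")                # crashes on invalid UTF-8
      name, value = text.split("=", 1)           # crashes when no '=' is present
      return {name: value}

  random.seed(1)
  for i in range(1000):
      blob = os.urandom(random.randint(1, 64))
      try:
          parse_record(blob)
      except Exception as exc:                   # a real fuzzer saves the crashing input
          print(f"case {i}: {type(exc).__name__} on input {blob!r}")
          break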

Considerations for test methods:

  • Attack surface – differing methods will find differing vulnerabilities.
  • Application behavior – different methods will yield varying behavior from applications.
  • Results – variances in usability, recommended fixes, and number of false positives.
  • Supported languages/technologies – differing tools may not support all technologies.
  • Resource consumption – both computing power and manual (employee) efforts.

Additional terms to be aware of:

Use case – a description of a specific interaction between the system and a user, condition, or environment.

Misuse case – a use case that is specifically from the point of view of an attacker.

Positive test – system works as expected with expected data.

Negative test – how the system behaves with unexpected data (should reject the data).
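
For example, a hypothetical deposit-amount validator might get one positive test (expected data) and one negative test (unexpected data that must be rejected):

  # Positive vs. negative testing against a hypothetical input validator.
  import unittest

  def validate_deposit_amount(amount):
      # accepts positive numeric amounts up to an assumed daily limit
      if not isinstance(amount, (int, float)) or amount <= 0 or amount > 10_000:
          raise ValueError("invalid deposit amount")
      return round(float(amount), 2)

  class DepositTests(unittest.TestCase):
      def test_positive_expected_data(self):                   # system works with expected data
          self.assertEqual(validate_deposit_amount(125.50), 125.50)

      def test_negative_unexpected_data(self):                 # unexpected data is rejected
          with self.assertRaises(ValueError):
              validate_deposit_amount("125.50; DROP TABLE deposits")

  if __name__ == "__main__":
      unittest.main()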

Interface test – testing the various hardware and software components of a system to make sure they function (interface with one another) harmoniously.  Examples include testing the application with differing types of browsers, testing file transfer capability, and verifying the communication flow between components.
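
A small sketch of the software-to-software side of this, using two hypothetical components and verifying that the payload one produces is consumed correctly by the other:

  # Verify that the data produced by one component is usable by another.
  import json

  def export_transaction(tx_id: int, amount: float) -> str:
      return json.dumps({"id": tx_id, "amount": amount})       # component A: serializes

  def import_transaction(payload: str) -> dict:
      record = json.loads(payload)                             # component B: parses/validates
      assert {"id", "amount"} <= record.keys(), "missing required fields"
      return record

  payload = export_transaction(42, 19.99)
  record = import_transaction(payload)
  assert record["id"] == 42 and record["amount"] == 19.99
  print("interface test passed")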

Privileged applets – Java applets are either sandbox applets or privileged applets. Sandbox applets are executed in a security sandbox that only allows explicit safe operations. Privileged applets can run outside the security sandbox and have extensive capabilities to access the client and its environment.

Applets loaded from the local file system (in the user’s CLASSPATH) do not have the same restrictions that applets loaded over the network do. When launched with the Java Network Launch Protocol (JNLP), sandbox applets can:

  • Open/read/save files on the client
  • Access the shared system-wide clipboard
  • Access printing functions
  • Store data on the client, decide how applets should be downloaded and cached, and much more

Sandbox applets cannot do the following:

  • Access client resources such as the local file system, executable files, system clipboard, and printers
  • Connect to or retrieve resources from any third-party server (any server other than the server it originated from)
  • Load native libraries
  • Change the Security Manager
  • Create a Class Loader
  • Read certain system properties

The final concept in Domain 6 is the protection of test data.  Considerations:

  • Don’t use PII or confidential data for testing.  If it must be used, ensure that proper protections are in place.
  • Sensitive elements should be randomized or removed (see the sketch after this list).
  • Utilize the access control methods planned for production.
  • If production data is needed, a formal request should be submitted for each instance of use, copying, transmission, or test plan.
  • Destroy or delete test data when the test is finished.
  • Adequate logging should be in place for all use of production data.
  • Labels are important – both for the data used in the test and for the output from the testing – so that the classification will be known by future handlers.
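
A minimal sketch of the randomize-or-remove idea for sensitive elements; the field names and masking rules below are illustrative assumptions:

  # Remove or randomize sensitive elements of a production record before testing.
  import random
  import string

  def mask_for_testing(record: dict) -> dict:
      masked = dict(record)
      masked.pop("ssn", None)                                  # remove PII outright
      if "name" in masked:
          masked["name"] = "TEST-" + "".join(random.choices(string.ascii_uppercase, k=6))
      if "account_number" in masked:
          masked["account_number"] = "".join(random.choices(string.digits, k=10))
      return masked

  production_record = {"name": "Jane Doe", "ssn": "123-45-6789",
                       "account_number": "8675309123", "balance": 1042.17}
  print(mask_for_testing(production_record))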