What is the purpose of Digital Rights Management (DRM)?
To ensure that all attempts to access information are tracked, logged, and auditable
To control the use, modification, and distribution of copyrighted works
To ensure that corporate files and data cannot be accessed by unauthorized personnel
To ensure that intellectual property remains under the full control of the originating enterprise
Digital Rights Management is a set of technical mechanisms used to enforce the permitted uses of digital content after it has been delivered to a user or device. Its primary purpose is to control how copyrighted works are accessed and used, including restricting copying, printing, screen capture, forwarding, offline use, device limits, and redistribution. DRM systems commonly apply encryption to content and then rely on a licensing and policy enforcement component that checks whether a user or device has the right to open the content and under what conditions. These conditions can include time-based access (expiry), geographic limitations, subscription status, concurrent use limits, or restrictions on modification and export.
This aligns precisely with option B because DRM is fundamentally about usage control of copyrighted digital works, such as music, movies, e-books, software, and protected media streams. In cybersecurity documentation, DRM is often discussed alongside content protection, anti-piracy measures, and license compliance. It differs from general access control and audit logging: access control determines who may enter a system or open a resource, while auditing records actions for accountability. DRM extends beyond simple access by enforcing what a legitimate user can do with the content once accessed.
Option A describes audit logging, option C describes general authorization and data access control, and option D is closer to broad information rights management goals but is less precise than the standard definition focused on controlling use and distribution of copyrighted works.
NIST 800-30 defines cyber risk as a function of the likelihood of a given threat-source exercising a potential vulnerability, and:
the pre-disposing conditions of the vulnerability.
the probability of detecting damage to the infrastructure.
the effectiveness of the control assurance framework.
the resulting impact of that adverse event on the organization.
NIST SP 800-30 describes risk using a classic risk model: risk is a function of likelihood and impact. In this model, a threat-source may exploit a vulnerability, producing a threat event that results in adverse consequences. The likelihood component reflects how probable it is that a threat event will occur and successfully cause harm, considering factors such as threat capability and intent (or in non-adversarial cases, the frequency of hazards), the existence and severity of vulnerabilities, exposure, and the strength of current safeguards. However, likelihood alone does not define risk; a highly likely event that causes minimal harm may be less important than a less likely event that causes severe harm.
The second required component is the impact: the magnitude of harm to the organization if the adverse event occurs. Impact is commonly evaluated across mission and business outcomes, including financial loss, operational disruption, legal or regulatory consequences, reputational damage, and loss of confidentiality, integrity, or availability. This is why option D is correct: NIST’s definition explicitly ties the risk expression to the resulting impact on the organization.
The other options may influence likelihood assessment or control selection, but they are not the missing definitional element. Detection probability and control assurance relate to monitoring and governance; predisposing conditions can shape likelihood. None of them replaces the resulting impact of the adverse event on the organization, which is the element that completes NIST’s definition.
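The likelihood-and-impact relationship can be illustrated with a small qualitative risk-rating sketch. The 1–5 scales, the multiplicative scoring, and the level thresholds below are illustrative assumptions, not the exact scales defined in NIST SP 800-30:

```python
# Hypothetical qualitative risk matrix illustrating risk = f(likelihood, impact).
# The 1-5 scales and the level thresholds are assumptions for illustration only.

def risk_level(likelihood: int, impact: int) -> str:
    """Combine a 1-5 likelihood score and a 1-5 impact score into a rating."""
    score = likelihood * impact          # simple multiplicative model
    if score >= 15:
        return "High"
    if score >= 8:
        return "Moderate"
    return "Low"

# A likely event with minimal harm can rate lower than a rare, severe one.
print(risk_level(5, 1))  # likely but low impact  -> Low
print(risk_level(2, 5))  # less likely but severe -> Moderate
```

The point of the sketch is the one made in the text: likelihood alone does not define risk, because the rating only becomes meaningful once impact is factored in.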
Which organizational area would drive a cybersecurity infrastructure Business Case?
Risk
IT
Legal
Finance
A cybersecurity infrastructure business case is typically driven by the Risk function because the justification for security investments is grounded in reducing enterprise risk to an acceptable level and aligning with the organization’s risk appetite and regulatory obligations. Risk-focused teams (often working with the CISO and security governance) translate threats, vulnerabilities, and control gaps into business impact terms such as likelihood of adverse events, potential operational disruption, financial exposure, regulatory penalties, and reputational harm. This framing is what a formal business case requires: a clear problem statement, quantified or prioritized risk scenarios, expected risk reduction from proposed controls, and how residual risk compares to tolerance thresholds.
While IT usually leads implementation and provides architecture, sizing, and operational cost estimates, IT alone does not typically “drive” the business case without the risk rationale that explains why the investment is necessary and what enterprise outcomes it protects. Legal contributes requirements related to compliance, contracts, and breach handling, but it generally supports rather than owns investment prioritization. Finance evaluates budgeting, funding options, and return-on-investment assumptions, yet it relies on risk inputs to understand why the spend is warranted and what loss exposure is being reduced.
Therefore, the organizational area most responsible for driving a cybersecurity infrastructure business case, by defining the risk problem, articulating risk-based benefits, and enabling executive decision-making, is Risk.
Which statement is true about a data warehouse?
Data stored in a data warehouse is used for analytical purposes, not operational tasks
The data warehouse must use the same data structures as production systems
Data warehouses should act as a central repository for the data generated by all operational systems
Data cleaning must be done on operational systems before the data is transferred to a data warehouse
A data warehouse is designed primarily to support analytics, reporting, and decision-making rather than day-to-day transaction processing. Operational systems are optimized for fast inserts/updates and real-time business operations such as order entry, billing, or customer service workflows. In contrast, a warehouse consolidates data, often from multiple sources, into structures optimized for querying, trending, and historical analysis. From a cybersecurity and governance perspective, this distinction matters because warehouses frequently contain large volumes of aggregated, historical, and sometimes sensitive information, which can increase impact if confidentiality is breached. As a result, controls like strong access governance, role-based access, least privilege, segregation of duties, encryption, and audit logging are emphasized for warehouses to reduce insider misuse and limit exposure.
Option B is false because warehouses often use different structures (for example, dimensional models) than production systems, specifically to improve analytical performance and usability. Option C can be true in some architectures, but it is not universally required; organizations may operate multiple warehouses, data marts, or lakehouse patterns, and not all operational data is appropriate to centralize due to privacy, cost, and regulatory constraints. Option D is incorrect because cleansing is commonly performed in dedicated integration pipelines and staging layers rather than changing operational systems to “pre-clean” data. Therefore, A is the best verified statement.
Where SaaS is the delivery of a software service, what service does PaaS provide?
Load Balancers
Storage
Subscriptions
Operating System
Cloud service models are commonly described as stacked layers of responsibility. Software as a Service (SaaS) delivers a complete application to the customer, while the provider manages the underlying platform and infrastructure. Platform as a Service (PaaS) sits one level below SaaS: it provides the managed platform needed to build, deploy, and run applications without the customer having to manage the underlying servers and most core system software.
A defining feature of PaaS is that the provider supplies and manages key platform components such as the operating system, runtime environment, middleware, web/application servers, and often supporting services like managed databases, messaging, scaling, and patching of the platform layer. The customer typically remains responsible for their application code, configuration, identities and access in the application, data classification and protection choices, and secure development practices. This shared responsibility model is central in cybersecurity guidance because it determines which security controls the provider enforces by default and which controls the customer must implement.
Given the answer options, Operating System is the best match because it is a core part of the platform layer that PaaS customers generally do not manage directly. Load balancers and storage can be consumed in multiple models, including IaaS and PaaS, and subscriptions describe a billing approach, not the technical service layer. Therefore, option D correctly reflects what PaaS provides compared to SaaS.
Separation of duties, as a security principle, is intended to:
optimize security application performance.
ensure that all security systems are integrated.
balance user workload.
prevent fraud and error.
Separation of duties is a foundational access-control and governance principle designed to reduce the likelihood of misuse, fraud, and significant mistakes by ensuring that no single individual can complete a critical process end-to-end without independent oversight. Cybersecurity and audit frameworks describe this as splitting high-risk activities into distinct roles so that one person’s actions are checked or complemented by another person’s authority. This limits both intentional abuse, such as unauthorized payments or data manipulation, and unintentional errors, such as misconfigurations or accidental deletion of important records.
In practice, separation of duties is implemented by defining roles and permissions so that incompatible functions are not assigned to the same account. Common examples include separating the ability to create a vendor from the ability to approve payments, separating software development from production deployment, and separating system administration from security monitoring or audit log management. This is reinforced through role-based access control, approval workflows, privileged access management, and periodic access reviews that detect conflicting entitlements and privilege creep.
The value of separation of duties is risk reduction through accountability and control. When actions require multiple parties or independent review, it becomes harder for a single compromised account or malicious insider to cause large harm without detection. It also improves reliability by introducing checkpoints that catch mistakes earlier. Therefore, the correct purpose is to prevent fraud and error.
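One common technical enforcement of this principle is an entitlement review that flags users holding both halves of an incompatible permission pair. The permission names and the "toxic pairs" below are hypothetical examples chosen to match the scenarios above, a minimal sketch rather than a production access-review tool:

```python
# Minimal segregation-of-duties check: flag users who hold both halves of
# an incompatible permission pair. Pair definitions are illustrative.

TOXIC_PAIRS = [
    ("create_vendor", "approve_payment"),       # vendor setup vs. payment approval
    ("develop_code", "deploy_to_production"),   # development vs. deployment
]

def sod_violations(user_perms: dict[str, set[str]]) -> list[tuple[str, str, str]]:
    """Return (user, perm_a, perm_b) for each conflicting pair a user holds."""
    violations = []
    for user, perms in user_perms.items():
        for a, b in TOXIC_PAIRS:
            if a in perms and b in perms:
                violations.append((user, a, b))
    return violations

users = {
    "alice": {"create_vendor", "approve_payment"},  # conflict: both halves
    "bob":   {"develop_code"},                      # no conflict
}
print(sod_violations(users))  # [('alice', 'create_vendor', 'approve_payment')]
```

A periodic access review would run a check like this across all entitlements to detect conflicting assignments and privilege creep.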
Why is directory management important for cybersecurity?
It prevents outside agents from viewing confidential company information
It allows all application security to be managed through a single interface
It prevents outsiders from knowing personal information about employees
It controls access to folders and files on the network
Directory management is important because it provides a centralized way to define identities, groups, roles, and permissions, which directly determines who can access network resources. In most enterprises, directory services store user and service accounts and then integrate with file servers, applications, email platforms, VPN, and cloud services. This integration enables consistent enforcement of authorization rules such as group-based access to shared folders and files, role-based access control, and least privilege. Option D captures this core security purpose: directory management is a foundational control mechanism for governing access to networked resources.
From a cybersecurity controls perspective, directory management supports secure onboarding and offboarding, ensuring that new users receive only appropriate permissions and that departing users are disabled promptly to reduce insider and external risk. It also strengthens authentication by enabling enterprise-wide policies such as password rules, account lockouts, multi-factor authentication integration, and conditional access. In addition, centralized directories improve auditability: administrators can review memberships and entitlements, monitor privileged group changes, and generate logs that support investigations and compliance reporting.
The other options are either too broad or not primarily about directory management. While directories help protect confidential information indirectly, their direct function is not “preventing outside agents” by itself; it is enforcing access rules. They also do not manage all application security through one interface, and preventing outsiders from knowing employee personal information is a privacy objective, not the main purpose of directory management.
What should organizations do with Key Risk Indicator (KRI) and Key Performance Indicator (KPI) data to facilitate decision making and improve performance and accountability?
Achieve, reset, and evaluate
Collect, analyze, and report
Prioritize, falsify, and report
Challenge, compare, and revise
KRIs and KPIs are only useful when they are handled as part of a disciplined measurement lifecycle. Cybersecurity governance guidance emphasizes three essential activities: collect, analyze, and report. Organizations must first collect KRI and KPI data consistently from reliable sources such as vulnerability scanners, SIEM logs, IAM systems, ticketing platforms, and asset inventories. Collection requires defined metric owners, clear definitions, standardized time windows, and data quality checks so results are comparable across periods and business units.
Next, organizations analyze the data to understand what it means for risk and performance. Analysis includes trending over time, comparing results to targets and thresholds, correlating indicators to business outcomes, identifying outliers, and determining root causes. For KRIs, analysis highlights rising exposure or control breakdowns such as increasing critical vulnerabilities beyond SLA. For KPIs, analysis evaluates operational effectiveness such as mean time to detect and mean time to remediate.
Finally, organizations report results to the right audiences with the right level of detail. Reporting supports accountability by assigning actions, tracking remediation progress, and escalating when thresholds are exceeded. It also supports decision making by showing where investment, staffing, or control changes will have the greatest risk-reduction and performance impact. The other options are not standard, auditable metric management activities and do not reflect the established lifecycle used in cybersecurity measurement programs.
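The collect, analyze, report lifecycle can be sketched for a single KRI. The sample values, the threshold, and the metric name below are made-up assumptions used only to show the three stages in order:

```python
# Sketch of collect -> analyze -> report for one hypothetical KRI:
# count of critical vulnerabilities open past SLA, sampled weekly.

kri_samples = [4, 6, 9, 12]   # "collected" weekly values (illustrative data)
THRESHOLD = 10                # escalation threshold (assumed, not a standard)

def analyze(samples: list[int], threshold: int) -> dict:
    """Trend the series and compare the latest value to its threshold."""
    latest = samples[-1]
    trend = "rising" if latest > samples[0] else "flat/falling"
    return {"latest": latest, "trend": trend, "breached": latest > threshold}

def report(name: str, analysis: dict) -> str:
    """Produce a one-line summary suitable for a governance dashboard."""
    status = "ESCALATE" if analysis["breached"] else "OK"
    return f"{name}: latest={analysis['latest']} trend={analysis['trend']} [{status}]"

print(report("Critical vulns past SLA", analyze(kri_samples, THRESHOLD)))
# Critical vulns past SLA: latest=12 trend=rising [ESCALATE]
```

Real programs would add metric ownership, standardized time windows, and data-quality checks around each stage, as described above.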
A significant benefit of role-based access is that it:
simplifies the assignment of correct access levels to a user based on the work they will perform.
makes it easier to audit and verify data access.
ensures that employee accounts will be shut down on departure or role change.
ensures that tasks and associated privileges for a specific business process are disseminated among multiple users.
Role-based access control assigns permissions to defined roles that reflect job functions, and users receive access by being placed into the appropriate role. The major operational and security benefit is that it simplifies and standardizes access provisioning. Instead of granting permissions individually to each user, administrators manage a smaller, controlled set of roles such as Accounts Payable Clerk, HR Specialist, or Application Administrator. When a new employee joins or changes responsibilities, access can be adjusted quickly and consistently by changing role membership. This reduces manual errors, limits over-provisioning, and helps enforce least privilege because each role is designed to include only the permissions required for that function.
RBAC also improves governance by making access decisions more repeatable and policy-driven. Security and compliance teams can review roles, validate that each role’s permissions match business needs, and require approvals for changes to role definitions. This approach supports segregation of duties by separating conflicting capabilities into different roles, which lowers fraud and misuse risk.
Option B is a real advantage of RBAC, but it is typically a secondary outcome of having structured roles rather than the primary “significant benefit” emphasized in access-control design. Option C relates to identity lifecycle processes such as deprovisioning, which can be integrated with RBAC but is not guaranteed by RBAC alone. Option D describes distributing tasks among multiple users, which is more aligned with segregation of duties design, not the core benefit of RBAC.
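The provisioning simplification described above can be shown in a few lines: permissions attach to roles, users hold roles, and a job change means swapping role membership rather than editing individual permissions. The role names and permission strings are hypothetical examples:

```python
# Minimal RBAC sketch: permissions attach to roles, users receive roles.
# Role and permission names are illustrative assumptions.

ROLES = {
    "ap_clerk":      {"invoice.read", "invoice.create"},
    "hr_specialist": {"employee.read", "employee.update"},
}

user_roles = {"dana": {"ap_clerk"}}

def permissions_for(user: str) -> set[str]:
    """Union of permissions granted by every role the user holds."""
    perms: set[str] = set()
    for role in user_roles.get(user, set()):
        perms |= ROLES[role]
    return perms

def can(user: str, perm: str) -> bool:
    return perm in permissions_for(user)

print(can("dana", "invoice.create"))    # True: granted via ap_clerk role
user_roles["dana"] = {"hr_specialist"}  # job change: swap role, not permissions
print(can("dana", "invoice.create"))    # False: old entitlements drop away
```

Note how the role swap automatically revokes the old function's permissions, which is exactly the over-provisioning control the explanation describes.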
Why would a Business Analyst include current technology when documenting the current state business processes surrounding a solution being replaced?
To ensure the future state business processes are included in user training
To identify potential security impacts to integrated systems within the value chain
To identify and meet internal security governance requirements
To classify the data elements so that information confidentiality, integrity, and availability are protected
A Business Analyst documents current technology in the “as-is” state because business processes are rarely isolated; they depend on applications, interfaces, data exchanges, identity services, and shared infrastructure. From a cybersecurity perspective, replacing one solution can unintentionally change trust boundaries, authentication flows, authorization decisions, logging coverage, and data movement across integrated systems. Option B is correct because understanding the current technology landscape helps identify where security impacts may occur across the value chain, including upstream data providers, downstream consumers, third-party services, and internal platforms that rely on the existing system.
Cybersecurity documents emphasize that integration points are common attack surfaces. APIs, file transfers, message queues, single sign-on, batch jobs, and shared databases can introduce risks such as broken access control, insecure data transmission, data leakage, privilege escalation, and gaps in monitoring. If the BA captures current integrations, dependencies, and data flows, the delivery team can properly perform threat modeling, define security requirements, and avoid breaking compensating controls that other systems depend on. This also supports planning for secure decommissioning, migration, and cutover, ensuring credentials, keys, service accounts, and network paths are rotated or removed appropriately.
The other options are less precise for the question. Training is not the core driver for documenting current technology. Governance requirements apply broadly but do not explain why current tech must be included. Data classification is important, but it is a separate activity from capturing technology dependencies needed to assess integration security impacts.
What is whitelisting in the context of network security?
Grouping assets together based on common security requirements, and placing each group into an isolated network zone
Denying access to applications that have been determined to be malicious
Explicitly allowing identified people, groups, or services access to a particular privilege, service, or recognition
Running software to identify any malware present on a computer system
Whitelisting, often called an “allow list,” is a security approach where access is granted only to explicitly approved identities, services, applications, IP addresses, domains, or network flows. In network security, this means the default stance is “deny by default,” and only pre-authorized entities are allowed to communicate or use specific resources. Option C matches this definition because it describes the core idea: explicitly permitting known, approved subjects (people, groups, service accounts, systems) to access a defined privilege or service.
Cybersecurity documents emphasize whitelisting as a strong risk-reduction technique because it constrains the attack surface. Instead of trying to block every bad thing (which is difficult due to evolving threats), whitelisting focuses on allowing only what is required for business operations. Examples include firewall rules that only permit specific source IPs to reach an admin interface, network segmentation policies that allow only required ports between zones, and application whitelisting that permits only approved executables to run. When implemented correctly, it reduces lateral movement opportunities, limits command-and-control traffic, and prevents unauthorized tools from executing.
Whitelisting is different from segmentation (option A), which is about isolating zones based on security needs, and different from blacklisting (option B), which blocks known-bad items. It is also not malware scanning (option D), which detects malicious code after it appears. Whitelisting aligns with least privilege and zero trust principles by tightly controlling what is allowed.
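The firewall-rule example above (only specific source IPs may reach an admin interface) can be sketched as a deny-by-default allow-list check. The network ranges are made-up documentation addresses, not a recommendation:

```python
# Deny-by-default allow-list sketch for an admin interface: only
# pre-approved source addresses may connect. Ranges are hypothetical.
import ipaddress

ALLOWED_SOURCES = [
    ipaddress.ip_network("10.20.30.0/28"),  # e.g. ops jump hosts (assumed)
    ipaddress.ip_network("192.0.2.17/32"),  # e.g. break-glass workstation
]

def is_allowed(src_ip: str) -> bool:
    """Permit only if the source matches an allow-list entry; deny otherwise."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in ALLOWED_SOURCES)

print(is_allowed("10.20.30.5"))   # True  (inside an approved range)
print(is_allowed("203.0.113.9"))  # False (not listed -> default deny)
```

The important property is that nothing is permitted unless it matches an entry, the opposite of a blacklist, which permits everything except known-bad items.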
What is risk mitigation?
Reducing the risk by implementing one or more countermeasures
Purchasing insurance against a cybersecurity breach
Eliminating the risk by stopping the activity which causes risk
Documenting the risk in full and preparing a recovery plan
Risk mitigation is the risk treatment approach focused on reducing risk to an acceptable level by lowering either the likelihood of a risk event, the impact of that event, or both. In cybersecurity risk management, mitigation is accomplished by implementing controls and countermeasures such as technical safeguards, process changes, and administrative measures. Examples include patching vulnerable systems, hardening configurations, enabling multi-factor authentication, applying least privilege, network segmentation, encryption, improved logging and monitoring, secure development practices, and user awareness training. Each of these actions reduces exposure or limits damage if an incident occurs.
The other options describe different risk treatment strategies, not mitigation. Purchasing insurance is generally considered risk transfer, where financial impact is shifted to a third party, but the underlying threat and vulnerability may still exist. Eliminating risk by stopping the risky activity is risk avoidance; it removes the exposure by discontinuing the process, system, or behavior causing the risk. Documenting the risk and preparing a recovery plan aligns more closely with risk acceptance combined with contingency planning or resilience planning; it acknowledges the risk and focuses on recovery rather than reducing the probability of occurrence.
Therefore, the correct definition of risk mitigation is reducing the risk through implementing one or more countermeasures.
What is an external audit?
A review of security-related measures in place intended to identify possible vulnerabilities
A process that the cybersecurity team follows to ensure that they have implemented the proper controls
A review of security expenditures by an independent party
A review of security-related activities by an independent party to ensure compliance
An external audit is an independent evaluation performed by a party outside the organization to determine whether security-related activities, controls, and evidence meet defined requirements. Those requirements are typically drawn from laws and regulations, contractual obligations, and recognized standards or control frameworks. The defining characteristics are independence and attestation: the auditor is not part of the operational team being assessed and provides an objective conclusion about compliance or control effectiveness.
Unlike a vulnerability-focused review (often called a security assessment or technical audit) that primarily seeks weaknesses to remediate, an external audit emphasizes whether controls are designed appropriately, implemented consistently, and operating effectively over time. External auditors usually test governance processes, risk management practices, policies, access control procedures, change management, logging and monitoring, incident response readiness, and evidence of periodic reviews. They also validate documentation and sampling records to confirm that what is written is actually performed.
Option B describes an internal assurance activity, such as self-assessment or internal audit preparation, where the security team checks its own implementation. Option C is closer to a financial or procurement review and is not the typical definition of an external security audit. Therefore, the best answer is the one that clearly captures an independent party reviewing security activities to ensure compliance with established criteria.
Other than the Requirements Analysis document, in what project deliverable should Vendor Security Requirements be included?
Training Plan
Business Continuity Plan
Project Charter
Request For Proposals
Vendor Security Requirements must be included in the Request For Proposals because the RFP is the formal mechanism used to communicate mandatory expectations to suppliers and to evaluate them consistently during selection. Cybersecurity and third-party risk management practices require that security expectations be established before a vendor is chosen, so the organization can assess whether a supplier can meet confidentiality, integrity, availability, privacy, and compliance obligations. Embedding requirements in the RFP makes them contractual in nature once incorporated into the final agreement and ensures vendors price and design their solution with security controls in scope rather than treating them as optional add-ons later.
Security requirements in an RFP typically cover topics such as secure development practices, vulnerability management, patching and support timelines, encryption for data at rest and in transit, identity and access controls, audit logging, incident notification timelines, subcontractor controls, data residency and retention, penetration testing evidence, compliance attestations, and right-to-audit provisions. The RFP also enables objective scoring by requesting documented evidence such as security certifications, control descriptions, and responses to standardized security questionnaires.
A training plan and business continuity plan are operational deliverables and do not drive vendor selection criteria. A project charter sets scope and governance at a high level, but it is not the primary procurement artifact for binding vendor security obligations. Therefore, the correct answer is Request For Proposals.
What is defined as an internal computerized table of access rules regarding the levels of computer access permitted to login IDs and computer terminals?
Access Control List
Access Control Entry
Relational Access Database
Directory Management System
An Access Control List (ACL) is a structured, system-maintained list of authorization rules that specifies who or what is allowed to access a resource and what actions are permitted. In many operating systems, network devices, and applications, an ACL functions as an internal table that maps identities such as user IDs, group IDs, service accounts, or even device/terminal identifiers to permissions like read, write, execute, modify, delete, or administer. When a subject attempts to access an object, the system consults the ACL to determine whether the requested operation should be allowed or denied, enforcing the organization’s security policy at runtime.
The description in the question matches the classic definition of an ACL as a computerized table of access rules tied to login IDs and sometimes the originating endpoint or terminal context. ACLs are central to implementing discretionary access control and are also widely used in networking (for example, permitting or denying traffic flows based on source/destination and ports) and file systems (controlling access to folders and files).
An Access Control Entry (ACE) is only a single line item within an ACL (one rule for one subject). A “Relational Access Database” is not a standard security control term for authorization tables. A “Directory Management System” manages identities and groups, but it is not the same as the enforcement list attached to a specific resource. Therefore, the correct answer is Access Control List.
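The "internal table of access rules" idea can be made concrete with a tiny sketch: each entry (an ACE) maps a login ID to the operations it may perform on one resource, and the system consults the table at access time. The resource path, login IDs, and operations are hypothetical:

```python
# Sketch of an ACL as an internal table: each entry (an "ACE") maps a
# subject (login ID) to allowed operations on one resource. Names are made up.

ACL = {  # resource -> {subject: allowed operations}
    "/finance/payroll.xlsx": {
        "jsmith":        {"read"},
        "payroll_admin": {"read", "write"},
    },
}

def check_access(subject: str, resource: str, operation: str) -> bool:
    """Consult the ACL at access time; anything not listed is denied."""
    entries = ACL.get(resource, {})
    return operation in entries.get(subject, set())

print(check_access("jsmith", "/finance/payroll.xlsx", "read"))   # True
print(check_access("jsmith", "/finance/payroll.xlsx", "write"))  # False
```

Each inner `subject: operations` pair corresponds to one ACE; the table as a whole is the ACL for that resource.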
Which of the following would qualify as a multi-factor authentication pair?
Thumbprint and Encryption
Something You Know and Something You Are
Password and Token
Encryption and Password
Multi-factor authentication requires a user to prove identity using two or more different factor types. Cybersecurity standards describe the main factor categories as something you know (for example, a password or PIN), something you have (for example, a hardware token, smart card, or authenticator app producing a one-time code), and something you are (biometrics such as fingerprint, face, or iris). A valid MFA pair must come from different categories, not just two items from the same category or a mix of authentication with non-authentication concepts.
Option B is correct because it explicitly combines two distinct factor types: a knowledge factor and an inherence factor. This pairing is widely recognized as MFA because compromising one factor does not automatically compromise the other: an attacker who steals a password still needs the biometric, and spoofing a biometric does not provide the secret knowledge factor.
Option A is incorrect because “encryption” is not an authentication factor; it is a protection mechanism for confidentiality and integrity of data. Option D has the same problem: encryption is not a user factor. Option C can represent MFA in many real implementations if “token” is truly a possession factor; however, training materials and exam items often prefer the clearest, unambiguous factor-language pairing, which is why “Something You Know and Something You Are” is the best single answer here.
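The "different categories" rule can be checked mechanically: map each credential type to its factor category and require that the presented set spans at least two categories. The credential names and their category assignments below are illustrative assumptions:

```python
# Sketch: MFA requires factors from at least two *different* categories.
# The credential-to-category mapping is an illustrative assumption.

FACTOR_CATEGORY = {
    "password": "know", "pin": "know",
    "hardware_token": "have", "authenticator_app": "have",
    "fingerprint": "are", "face_scan": "are",
}

def is_mfa(credentials: list[str]) -> bool:
    """True only if the presented credentials span two or more categories."""
    categories = {FACTOR_CATEGORY[c] for c in credentials}
    return len(categories) >= 2

print(is_mfa(["password", "fingerprint"]))  # True:  know + are
print(is_mfa(["password", "pin"]))          # False: two items, one category
```

This also shows why "password and PIN" is not MFA: two credentials from the same category collapse to a single factor type.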
Which of the following factors is most important in determining the classification of personal information?
Integrity
Confidentiality
Availability
Accessibility
Personal information is classified primarily based on the harm that could result from unauthorized disclosure, which maps directly to the confidentiality objective. Cybersecurity and privacy governance frameworks treat personal data as sensitive because exposure can lead to identity theft, fraud, discrimination, personal safety risks, and loss of privacy. Organizations also face regulatory penalties, contractual consequences, and reputational damage when personal data is disclosed without authorization. For this reason, when determining classification, the first and most influential question is typically: “What is the impact if this data becomes known to someone who should not have it?” That impact assessment drives the required protection level and handling rules.
Confidentiality-focused controls then follow from the classification decision, including least privilege and role-based access, strong authentication, encryption at rest and in transit, secure key management, data loss prevention where appropriate, logging and monitoring of access to sensitive records, and strict sharing/transfer procedures.
Integrity and availability matter for personal information, but they are usually secondary in classification decisions. Integrity affects trustworthiness and correctness (for example, incorrect medical or payroll data), and availability affects the ability to access records when needed. However, the defining sensitivity of personal information is that it must not be disclosed improperly. “Accessibility” is not a core security objective used in standard classification models; it is an operational usability concept that is managed through access design after sensitivity is established.
There are three states in which data can exist:
at dead, in action, in use.
at dormant, in mobile, in use.
at sleep, in awake, in use.
at rest, in transit, in use.
Data is commonly categorized into three states because the threats and protections change depending on where the data is and what is happening to it. Data at rest is stored on a device or system, such as databases, file shares, endpoints, backups, and cloud storage. The main risks are unauthorized access, theft of storage media, misconfigured permissions, and improper disposal. Controls typically include strong access control, encryption at rest with sound key management, secure configuration and hardening, segmentation, and resilient backup protections including restricted access and immutability.
Data in transit is data moving between systems, such as client-to-server traffic, service-to-service connections, API calls, and email routing. The primary risks are interception, alteration, and impersonation through man-in-the-middle techniques. Standard controls include transport encryption (such as TLS), strong authentication and certificate validation, secure network architecture, and monitoring for anomalous connections or data flows.
Data in use is actively processed in memory by applications and users, for example when a document is opened, a record is processed by an application, or data is displayed to a user. This state is challenging because data may be decrypted for processing. Controls include least privilege, strong authentication and session management, endpoint protection, application security controls, and secure development practices, with hardware-backed isolation when required.
Which of the following terms represents an accidental exploitation of a vulnerability?
Threat
Agent
Event
Response
In cybersecurity risk terminology, an event is an observable occurrence that can affect systems, services, or data. An event may be benign, harmful, intentional, or accidental. When a vulnerability is exploited accidentally (for example, a user unintentionally triggers a software flaw, a misconfiguration causes unintended exposure, or a system process mishandles input and causes data corruption), the occurrence is best categorized as an event. Cybersecurity documentation often distinguishes between the possibility of harm and the actual occurrence of a harmful condition.
A threat is the potential for an unwanted incident, such as an actor or circumstance that could exploit a vulnerability. A threat does not require that exploitation actually happens; it describes risk potential. An agent is the entity that acts (such as a person, malware, or process) and may be malicious or non-malicious, but “agent” is not the term for the occurrence itself. A response refers to the actions taken after detection, such as containment, eradication, recovery, and lessons learned; it is part of incident handling, not the accidental exploitation.
Therefore, the term that represents the actual accidental exploitation occurrence is event, because it captures the real-world happening that may trigger alerts, investigations, and potentially incident response activities if impact is significant.
TESTED 24 Feb 2026
