DBS-C01 AWS Certified Database - Specialty Questions and Answers

Question 4

A corporation is migrating from an IBM Informix database to an Amazon RDS for SQL Server Multi-AZ deployment with Always On Availability Groups (AGs). SQL Server Agent jobs are scheduled to run at 5-minute intervals on the Always On AG listener to synchronize data between the Informix and SQL Server databases. After a successful failover to the secondary node with minimal delay, users experience hours of stale data.

How can a database professional ensure that users see the most current data after a failover?

Options:

A.

Set TTL to less than 30 seconds for cached DNS values on the Always On AG listener.

B.

Break up large transactions into multiple smaller transactions that complete in less than 5 minutes.

C.

Set the databases on the secondary node to read-only mode.

D.

Create the SQL Server Agent jobs on the secondary node from a script when the secondary node takes over after a failure.

Question 5

A company has a reporting application that runs on an Amazon EC2 instance in an isolated developer account on AWS. The application needs to retrieve data during non-peak company hours from an Amazon Aurora PostgreSQL database that runs in the company's production account. The company's security team requires that access to production resources complies with AWS security best practices.

A database administrator needs to provide the reporting application with access to the production database. The company has already configured VPC peering between the production account and the developer account. The company has also updated the route tables in both accounts with the necessary entries to correctly set up VPC peering.

What must the database administrator do to finish providing connectivity to the reporting application?

Options:

A.

Add an inbound security group rule to the database security group that allows access from the developer account VPC CIDR on port 5432. Add an outbound security group rule to the EC2 security group that allows access to the production account VPC CIDR on port 5432.

B.

Add an outbound security group rule to the database security group that allows access from the developer account VPC CIDR on port 5432. Add an outbound security group rule to the EC2 security group that allows access to the production account VPC CIDR on port 5432.

C.

Add an inbound security group rule to the database security group that allows access from the developer account VPC CIDR on all TCP ports. Add an inbound security group rule to the EC2 security group that allows access to the production account VPC CIDR on port 5432.

D.

Add an inbound security group rule to the database security group that allows access from the developer account VPC CIDR on port 5432. Add an outbound security group rule to the EC2 security group that allows access to the production account VPC CIDR on all TCP ports.
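
For context, cross-VPC database access like this comes down to security group rules scoped to the peered CIDR. Below is a minimal boto3 sketch of adding one inbound PostgreSQL rule; the security group ID, CIDR range, and Region are hypothetical placeholders.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # Region is an assumption

    # Allow PostgreSQL (5432) from the developer account's VPC CIDR into the
    # production database security group (ID and CIDR are placeholders).
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 5432,
            "ToPort": 5432,
            "IpRanges": [{
                "CidrIp": "10.20.0.0/16",
                "Description": "Reporting app in developer account via VPC peering",
            }],
        }],
    )

Security groups are stateful, so return traffic for an allowed inbound connection does not need a separate outbound rule.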

Question 6

A company is running an on-premises application comprised of a web tier, an application tier, and a MySQL database tier. The database is used primarily during business hours with random activity peaks throughout the day. A database specialist needs to improve the availability and reduce the cost of the MySQL database tier as part of the company’s migration to AWS.

Which MySQL database option would meet these requirements?

Options:

A.

Amazon RDS for MySQL with Multi-AZ

B.

Amazon Aurora Serverless MySQL cluster

C.

Amazon Aurora MySQL cluster

D.

Amazon RDS for MySQL with read replica

Question 7

A company is using an Amazon Aurora PostgreSQL DB cluster for the backend of its mobile application. The application is running continuously and a database specialist is satisfied with high availability and fast failover, but is concerned about performance degradation after failover.

How can the database specialist minimize the performance degradation after failover?

Options:

A.

Enable cluster cache management for the Aurora DB cluster and set the promotion priority for the writer DB instance and replica to tier-0

B.

Enable cluster cache management for the Aurora DB cluster and set the promotion priority for the writer DB instance and replica to tier-1

C.

Enable Query Plan Management for the Aurora DB cluster and perform a manual plan capture

D.

Enable Query Plan Management for the Aurora DB cluster and force the query optimizer to use the desired plan

Question 8

A company's application development team wants to share an automated snapshot of its Amazon RDS database with another team. The database is encrypted with a custom AWS Key Management Service (AWS KMS) key under the "WeShare" AWS account. The application development team needs to share the DB snapshot under the "WeReceive" AWS account.

Which combination of actions must the application development team take to meet these requirements? (Choose two.)

Options:

A.

Add access from the "WeReceive" account to the custom AWS KMS key policy of the sharing team.

B.

Make a copy of the DB snapshot, and set the encryption option to disable.

C.

Share the DB snapshot by setting the DB snapshot visibility option to public.

D.

Make a copy of the DB snapshot, and set the encryption option to enable.

E.

Share the DB snapshot by using the default AWS KMS encryption key.
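
For reference, only manual snapshots can be shared across accounts, so an automated snapshot is copied first and the copy is shared. A minimal boto3 sketch with hypothetical snapshot identifiers, key ARN, and account ID:

    import boto3

    rds = boto3.client("rds")

    # Copy the automated snapshot into a manual snapshot that can be shared
    # (identifiers and the KMS key ARN are placeholders).
    rds.copy_db_snapshot(
        SourceDBSnapshotIdentifier="rds:mydb-2024-06-01-00-05",
        TargetDBSnapshotIdentifier="mydb-manual-copy",
        KmsKeyId="arn:aws:kms:us-east-1:111111111111:key/1111aaaa-22bb-33cc-44dd-5555eeee6666",
    )

    # Grant the "WeReceive" account permission to restore the shared copy.
    rds.modify_db_snapshot_attribute(
        DBSnapshotIdentifier="mydb-manual-copy",
        AttributeName="restore",
        ValuesToAdd=["222222222222"],  # hypothetical "WeReceive" account ID
    )

The receiving account also needs access to the custom KMS key, which is granted in the key policy rather than through the RDS API.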

Question 9

A marketing company is using Amazon DocumentDB and requires that database audit logs be enabled. A Database Specialist needs to configure monitoring so that all data definition language (DDL) statements performed are visible to the Administrator. The Database Specialist has set the audit_logs parameter to enabled in the cluster parameter group.

What should the Database Specialist do to automatically collect the database logs for the Administrator?

Options:

A.

Enable DocumentDB to export the logs to Amazon CloudWatch Logs

B.

Enable DocumentDB to export the logs to AWS CloudTrail

C.

Enable DocumentDB Events to export the logs to Amazon CloudWatch Logs

D.

Configure an AWS Lambda function to download the logs using the download-db-log-file-portion operation and store the logs in Amazon S3
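
For context, exporting DocumentDB audit logs to Amazon CloudWatch Logs is a cluster-level setting that complements the audit_logs parameter. A minimal boto3 sketch, with a hypothetical cluster identifier:

    import boto3

    docdb = boto3.client("docdb")

    # Stream the audit log type (enabled by the audit_logs cluster parameter)
    # to CloudWatch Logs; the cluster identifier is a placeholder.
    docdb.modify_db_cluster(
        DBClusterIdentifier="marketing-docdb-cluster",
        CloudwatchLogsExportConfiguration={"EnableLogTypes": ["audit"]},
        ApplyImmediately=True,
    )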

Question 10

A database specialist needs to review and optimize an Amazon DynamoDB table that is experiencing performance issues. A thorough investigation by the database specialist reveals that the partition key is causing hot partitions, so a new partition key is created. The database specialist must effectively apply this new partition key to all existing and new data.

How can this solution be implemented?

Options:

A.

Use Amazon EMR to export the data from the current DynamoDB table to Amazon S3. Then use Amazon EMR again to import the data from Amazon S3 into a new DynamoDB table with the new partition key.

B.

Use AWS DMS to copy the data from the current DynamoDB table to Amazon S3. Then import the DynamoDB table to create a new DynamoDB table with the new partition key.

C.

Use the AWS CLI to update the DynamoDB table and modify the partition key.

D.

Use the AWS CLI to back up the DynamoDB table. Then use the restore-table-from-backup command and modify the partition key.

Question 11

A company is using an Amazon Aurora PostgreSQL DB cluster with an xlarge primary instance and two large Aurora Replicas for high availability and read-only workload scaling. A failover event occurs, and application performance is poor for several minutes. During this time, application servers in all Availability Zones are healthy and responding normally.

What should the company do to eliminate this application performance issue?

Options:

A.

Configure both of the Aurora Replicas to the same instance class as the primary DB instance. Enable cache coherence on the DB cluster, set the primary DB instance failover priority to tier-0, and assign a failover priority of tier-1 to the replicas.

B.

Deploy an AWS Lambda function that calls the DescribeDBInstances action to establish which instance has failed, and then use the PromoteReadReplica operation to promote one Aurora Replica to be the primary DB instance. Configure an Amazon RDS event subscription to send a notification to an Amazon SNS topic to which the Lambda function is subscribed.

C.

Configure one Aurora Replica to have the same instance class as the primary DB instance. Implement Aurora PostgreSQL DB cluster cache management. Set the failover priority to tier-0 for the primary DB instance and one replica with the same instance class. Set the failover priority to tier-1 for the other replicas.

D.

Configure both Aurora Replicas to have the same instance class as the primary DB instance. Implement Aurora PostgreSQL DB cluster cache management. Set the failover priority to tier-0 for the primary DB instance and to tier-1 for the replicas.

Question 12

A company uses Amazon Aurora for secure financial transactions. The data must always be encrypted at rest and in transit to meet compliance requirements.

Which combination of actions should a database specialist take to meet these requirements? (Choose two.)

Options:

A.

Create an Aurora Replica with encryption enabled using AWS Key Management Service (AWS KMS). Then promote the replica to master.

B.

Use SSL/TLS to secure the in-transit connection between the financial application and the Aurora DB cluster.

C.

Modify the existing Aurora DB cluster and enable encryption using an AWS Key Management Service (AWS KMS) encryption key. Apply the changes immediately.

D.

Take a snapshot of the Aurora DB cluster and encrypt the snapshot using an AWS Key Management Service (AWS KMS) encryption key. Restore the snapshot to a new DB cluster and update the financial application database endpoints.

E.

Use AWS Key Management Service (AWS KMS) to secure the in-transit connection between the financial application and the Aurora DB cluster.

Question 13

A company is looking to migrate a 1 TB Oracle database from on-premises to an Amazon Aurora PostgreSQL DB cluster. The company’s Database Specialist discovered that the Oracle database is storing 100 GB of large binary objects (LOBs) across multiple tables. The Oracle database has a maximum LOB size of 500 MB with an average LOB size of 350 MB. The Database Specialist has chosen AWS DMS to migrate the data with the largest replication instances.

How should the Database Specialist optimize the database migration using AWS DMS?

Options:

A.

Create a single task using full LOB mode with a LOB chunk size of 500 MB to migrate the data and LOBs together

B.

Create two tasks: task1 with LOB tables using full LOB mode with a LOB chunk size of 500 MB and task2 without LOBs

C.

Create two tasks: task1 with LOB tables using limited LOB mode with a maximum LOB size of 500 MB and task 2 without LOBs

D.

Create a single task using limited LOB mode with a maximum LOB size of 500 MB to migrate data and LOBs together
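
For reference, LOB handling in AWS DMS is controlled through the task settings JSON. A minimal boto3 sketch of a full-load task using limited LOB mode is below; the ARNs and the table-selection rule are hypothetical, and LobMaxSize is expressed in KB.

    import json
    import boto3

    dms = boto3.client("dms")

    # Limited LOB mode is faster than full LOB mode but truncates any LOB
    # larger than LobMaxSize, so it must cover the largest expected LOB.
    task_settings = {
        "TargetMetadata": {
            "SupportLobs": True,
            "FullLobMode": False,
            "LimitedSizeLobMode": True,
            "LobMaxSize": 512000,  # 500 MB, expressed in KB
        }
    }

    # Hypothetical selection rule matching only the LOB tables for task1.
    table_mappings = {
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "lob-tables",
            "object-locator": {"schema-name": "APP", "table-name": "%LOB%"},
            "rule-action": "include",
        }]
    }

    dms.create_replication_task(
        ReplicationTaskIdentifier="task1-lob-tables",
        SourceEndpointArn="arn:aws:dms:us-east-1:111111111111:endpoint:SRCEXAMPLE",
        TargetEndpointArn="arn:aws:dms:us-east-1:111111111111:endpoint:TGTEXAMPLE",
        ReplicationInstanceArn="arn:aws:dms:us-east-1:111111111111:rep:REPEXAMPLE",
        MigrationType="full-load",
        TableMappings=json.dumps(table_mappings),
        ReplicationTaskSettings=json.dumps(task_settings),
    )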

Question 14

An online gaming company is using an Amazon DynamoDB table in on-demand mode to store game scores. After an intensive advertisement campaign in South America, the average number of concurrent users rapidly increases from 100,000 to 500,000 in less than 10 minutes every day around 5 PM.

The on-call software reliability engineer has observed that the application logs contain a high number of DynamoDB throttling exceptions caused by game score insertions around 5 PM. Customer service has also reported that several users are complaining about their scores not being registered.

How should the database administrator remediate this issue at the lowest cost?

Options:

A.

Enable auto scaling and set the target usage rate to 90%.

B.

Switch the table to provisioned mode and enable auto scaling.

C.

Switch the table to provisioned mode and set the throughput to the peak value.

D.

Create a DynamoDB Accelerator cluster and use it to access the DynamoDB table.

Question 15

A company has two separate AWS accounts: one for the business unit and another for corporate analytics. The company wants to replicate the business unit data stored in Amazon RDS for MySQL in us-east-1 to its corporate analytics Amazon Redshift environment in us-west-1. The company wants to use AWS DMS with Amazon RDS as the source endpoint and Amazon Redshift as the target endpoint.

Which action will allow AWS DMS to perform the replication?

Options:

A.

Configure the AWS DMS replication instance in the same account and Region as Amazon Redshift.

B.

Configure the AWS DMS replication instance in the same account as Amazon Redshift and in the same Region as Amazon RDS.

C.

Configure the AWS DMS replication instance in its own account and in the same Region as Amazon Redshift.

D.

Configure the AWS DMS replication instance in the same account and Region as Amazon RDS.

Question 16

A company’s database specialist disabled TLS on an Amazon DocumentDB cluster to perform benchmarking tests. A few days after this change was implemented, a database specialist trainee accidentally deleted multiple tables. The database specialist restored the database from available snapshots. An hour after restoring the cluster, the database specialist is still unable to connect to the new cluster endpoint.

What should the database specialist do to connect to the new, restored Amazon DocumentDB cluster?

Options:

A.

Change the restored cluster’s parameter group to the original cluster’s custom parameter group.

B.

Change the restored cluster’s parameter group to the Amazon DocumentDB default parameter group.

C.

Configure the interface VPC endpoint and associate the new Amazon DocumentDB cluster.

D.

Run the syncInstances command in AWS DataSync.

Question 17

A database specialist is launching a test graph database using Amazon Neptune for the first time. The database specialist needs to insert millions of rows of test observations from a .csv file that is stored in Amazon S3. The database specialist has been using a series of API calls to upload the data to the Neptune DB instance.

Which combination of steps would allow the database specialist to upload the data faster? (Choose three.)

Options:

A.

Ensure Amazon Cognito returns the proper AWS STS tokens to authenticate the Neptune DB instance to the S3 bucket hosting the CSV file.

B.

Ensure the vertices and edges are specified in different .csv files with proper header column formatting.

C.

Use AWS DMS to move data from Amazon S3 to the Neptune Loader.

D.

Curl the S3 URI while inside the Neptune DB instance and then run the addVertex or addEdge commands.

E.

Ensure an IAM role for the Neptune DB instance is configured with the appropriate permissions to allow access to the file in the S3 bucket.

F.

Create an S3 VPC endpoint and issue an HTTP POST to the database's loader endpoint.
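
For context, the Neptune bulk loader is started with an HTTP POST to the cluster's loader endpoint, and it reads the files from Amazon S3 through an S3 VPC endpoint using an IAM role attached to the cluster. A minimal sketch using the requests library; the endpoint, bucket, and role ARN are hypothetical, and IAM database authentication is assumed to be off.

    import requests

    loader_url = ("https://my-neptune.cluster-abc123.us-east-1"
                  ".neptune.amazonaws.com:8182/loader")

    response = requests.post(loader_url, json={
        "source": "s3://mybucket/graphdata/",   # vertex and edge .csv files
        "format": "csv",
        "iamRoleArn": "arn:aws:iam::111111111111:role/NeptuneLoadFromS3",
        "region": "us-east-1",
        "failOnError": "FALSE",
    })

    # The loader returns a loadId that can be polled for load status.
    print(response.json()["payload"]["loadId"])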

Question 18

A company needs to deploy an Amazon Aurora PostgreSQL DB instance into multiple accounts. The company will initiate each DB instance from an existing Aurora PostgreSQL DB instance that runs in a shared account. The company wants the process to be repeatable in case the company adds additional accounts in the future. The company also wants to be able to verify if manual changes have been made to the DB instance configurations after the company deploys the DB instances.

A database specialist has determined that the company needs to create an AWS CloudFormation template with the necessary configuration to create a DB instance in an account by using a snapshot of the existing DB instance to initialize the DB instance. The company will also use the CloudFormation template's parameters to provide key values for the DB instance creation (account ID, etc.).

Which final step will meet these requirements in the MOST operationally efficient way?

Options:

A.

Create a bash script to compare the configuration to the current DB instance configuration and to report any changes.

B.

Use the CloudFormation drift detection feature to check if the DB instance configurations have changed.

C.

Set up CloudFormation to use drift detection to send notifications if the DB instance configurations have been changed.

D.

Create an AWS Lambda function to compare the configuration to the current DB instance configuration and to report any changes.

Question 19

A ride-hailing application uses an Amazon RDS for MySQL DB instance as persistent storage for bookings. This application is very popular, and the company expects a tenfold increase in the user base in the next few months. The application experiences more traffic during the morning and evening hours.

This application has two parts:

  • An in-house booking component that accepts online bookings that directly correspond to simultaneous requests from users.
  • A third-party customer relationship management (CRM) component used by customer care representatives. The CRM uses queries to access booking data.

A database specialist needs to design a cost-effective database solution to handle this workload.

Which solution meets these requirements?

Options:

A.

Use Amazon ElastiCache for Redis to accept the bookings. Associate an AWS Lambda function to capture changes and push the booking data to the RDS for MySQL DB instance used by the CRM.

B.

Use Amazon DynamoDB to accept the bookings. Enable DynamoDB Streams and associate an AWS Lambda function to capture changes and push the booking data to an Amazon SQS queue. This triggers another Lambda function that pulls data from Amazon SQS and writes it to the RDS for MySQL DB instance used by the CRM.

C.

Use Amazon ElastiCache for Redis to accept the bookings. Associate an AWS Lambda function to capture changes and push the booking data to an Amazon Redshift database used by the CRM.

D.

Use Amazon DynamoDB to accept the bookings. Enable DynamoDB Streams and associate an AWS Lambda function to capture changes and push the booking data to Amazon Athena, which is used by the CRM.

Question 20

A database specialist was alerted that a production Amazon RDS MariaDB instance with 100 GB of storage was out of space. In response, the database specialist modified the DB instance and added 50 GB of storage capacity. Three hours later, a new alert is generated due to a lack of free space on the same DB instance. The database specialist decides to modify the instance immediately to increase its storage capacity by 20 GB.

What will happen when the modification is submitted?

Options:

A.

The request will fail because this storage capacity is too large.

B.

The request will succeed only if the primary instance is in active status.

C.

The request will succeed only if CPU utilization is less than 10%.

D.

The request will fail as the most recent modification was too soon.

Question 21

A company has deployed an application that uses an Amazon RDS for MySQL DB cluster. The DB cluster uses three read replicas. The primary DB instance is an 8XL-sized instance, and the read replicas are each XL-sized instances.

Users report that database queries are returning stale data. The replication lag indicates that the replicas are 5 minutes behind the primary DB instance. Status queries on the replicas show that the SQL_THREAD is 10 binlogs behind the IO_THREAD and that the IO_THREAD is 1 binlog behind the primary.

Which changes will reduce the lag? (Choose two.)

Options:

A.

Deploy two additional read replicas matching the existing replica DB instance size.

B.

Migrate the primary DB instance to an Amazon Aurora MySQL DB cluster and add three Aurora Replicas.

C.

Move the read replicas to the same Availability Zone as the primary DB instance.

D.

Increase the instance size of the primary DB instance within the same instance class.

E.

Increase the instance size of the read replicas to the same size and class as the primary DB instance.

Question 22

A company runs hundreds of Microsoft SQL Server databases on Windows servers in its on-premises data center. A database specialist needs to migrate these databases to Linux on AWS.

Which combination of steps should the database specialist take to meet this requirement? (Choose three.)

Options:

A.

Install AWS Systems Manager Agent on the on-premises servers. Use Systems Manager Run Command to install the Windows to Linux replatforming assistant for Microsoft SQL Server Databases.

B.

Use AWS Systems Manager Run Command to install and configure the AWS Schema Conversion Tool on the on-premises servers.

C.

On the Amazon EC2 console, launch EC2 instances and select a Linux AMI that includes SQL Server. Install and configure AWS Systems Manager Agent on the EC2 instances.

D.

On the AWS Management Console, set up Amazon RDS for SQL Server DB instances with Linux as the operating system. Install AWS Systems Manager Agent on the DB instances by using an options group.

E.

Open the Windows to Linux replatforming assistant tool. Enter configuration details of the source and destination databases. Start migration.

F.

On the AWS Management Console, set up AWS Database Migration Service (AWS DMS) by entering details of the source SQL Server database and the destination SQL Server database on AWS. Start migration.

Question 23

An application reads and writes data to an Amazon RDS for MySQL DB instance. A new reporting dashboard needs read-only access to the database. When the application and reports are both under heavy load, the database experiences performance degradation. A database specialist needs to improve the database performance.

What should the database specialist do to meet these requirements?

Options:

A.

Create a read replica of the DB instance. Configure the reports to connect to the replication instance endpoint.

B.

Create a read replica of the DB instance. Configure the application and reports to connect to the cluster endpoint.

C.

Enable Multi-AZ deployment. Configure the reports to connect to the standby replica.

D.

Enable Multi-AZ deployment. Configure the application and reports to connect to the cluster endpoint.

Question 24

A coffee machine manufacturer is equipping all of its coffee machines with IoT sensors. The IoT core application is writing measurements for each record to Amazon Timestream. The records have multiple dimensions and measures. The measures include multiple measure names and values.

An analysis application is running queries against the Timestream database and is focusing on data from the current week. A database specialist needs to optimize the query costs of the analysis application.

Which solution will meet these requirements?

Options:

A.

Ensure that queries contain whole records over the relevant time range.

B.

Use time range, measure name, and dimensions in the WHERE clause of the query.

C.

Avoid canceling any query after the query starts running.

D.

Implement exponential backoff in the application.
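
For reference, Timestream bills queries by data scanned, so constraining the time range, measure name, and dimensions in the WHERE clause is what reduces cost. A minimal boto3 sketch, assuming single-measure records and hypothetical database, table, dimension, and measure names:

    import boto3

    tsq = boto3.client("timestream-query")

    # Pruning by time, measure_name, and a dimension lets Timestream skip
    # irrelevant partitions instead of scanning whole records.
    sql = """
        SELECT machine_id, time, measure_value::double
        FROM "iot_db"."machine_measurements"
        WHERE time BETWEEN ago(7d) AND now()
          AND measure_name = 'water_temperature'
          AND machine_id = 'machine-0042'
    """
    result = tsq.query(QueryString=sql)
    for row in result["Rows"]:
        print(row)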

Question 25

A company uses Amazon DynamoDB as the data store for its ecommerce website. The website receives little to no traffic at night, and the majority of the traffic occurs during the day. The traffic growth during peak hours is gradual and predictable on a daily basis, but it can be orders of magnitude higher than during off-peak hours.

The company initially provisioned capacity based on its average volume during the day without accounting for the variability in traffic patterns. However, the website is experiencing a significant amount of throttling during peak hours. The company wants to reduce the amount of throttling while minimizing costs.

What should a database specialist do to meet these requirements?

Options:

A.

Use reserved capacity. Set it to the capacity levels required for peak daytime throughput.

B.

Use provisioned capacity. Set it to the capacity levels required for peak daytime throughput.

C.

Use provisioned capacity. Create an AWS Application Auto Scaling policy to update capacity based on consumption.

D.

Use on-demand capacity.
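
For context, provisioned mode with AWS Application Auto Scaling tracks consumed capacity against a target utilization and scales ahead of the gradual, predictable ramp described above. A minimal boto3 sketch for the write dimension of a hypothetical table:

    import boto3

    aas = boto3.client("application-autoscaling")

    # Register the table's write capacity as a scalable target
    # (table name and capacity bounds are placeholders).
    aas.register_scalable_target(
        ServiceNamespace="dynamodb",
        ResourceId="table/orders",
        ScalableDimension="dynamodb:table:WriteCapacityUnits",
        MinCapacity=10,
        MaxCapacity=4000,
    )

    # Target tracking keeps consumed capacity near 70% of provisioned capacity.
    aas.put_scaling_policy(
        PolicyName="orders-write-target-tracking",
        ServiceNamespace="dynamodb",
        ResourceId="table/orders",
        ScalableDimension="dynamodb:table:WriteCapacityUnits",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 70.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
            },
        },
    )

A matching pair of calls for dynamodb:table:ReadCapacityUnits covers the read side.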

Question 26

A database specialist deployed an Amazon RDS DB instance in Dev-VPC1 used by their development team. Dev-VPC1 has a peering connection with Dev-VPC2 that belongs to a different development team in the same department. The networking team confirmed that the routing between VPCs is correct; however, the database engineers in Dev-VPC2 are getting connection timeout errors when trying to connect to the database in Dev-VPC1.

What is likely causing the timeouts?

Options:

A.

The database is deployed in a VPC that is in a different Region.

B.

The database is deployed in a VPC that is in a different Availability Zone.

C.

The database is deployed with misconfigured security groups.

D.

The database is deployed with the wrong client connect timeout configuration.

Question 27

A worldwide gaming company's development team is experimenting with using Amazon DynamoDB to store in-game events for three mobile titles. Maximum concurrent users for the most popular game is 500,000, while the least popular game has 10,000. The typical event is 20 KB in size, and the average user session generates one event each second. Each event is assigned a millisecond time stamp and a globally unique identifier.

The lead developer created a single DynamoDB table with the following structure for the events:

  • Partition key: game name
  • Sort key: event identifier
  • Local secondary index: player identifier
  • Event time

In a small-scale development environment, the tests were successful. When the application was deployed to production, however, new events were not being added to the table, and the logs showed DynamoDB failures with the ItemCollectionSizeLimitExceededException error code.

Which design modification should a database professional suggest to the development team?

Options:

A.

Use the player identifier as the partition key. Use the event time as the sort key. Add a global secondary index with the game name as the partition key and the event time as the sort key.

B.

Create two tables. Use the game name as the partition key in both tables. Use the event time as the sort key for the first table. Use the player identifier as the sort key for the second table.

C.

Replace the sort key with a compound value consisting of the player identifier collated with the event time, separated by a dash. Add a local secondary index with the player identifier as the sort key.

D.

Create one table for each game. Use the player identifier as the partition key. Use the event time as the sort key.

Question 28

A gaming company is evaluating Amazon ElastiCache as a solution to manage player leaderboards. Millions of players around the world will compete in annual tournaments. The company wants to implement an architecture that is highly available. The company also wants to ensure that maintenance activities have minimal impact on the availability of the gaming platform.

Which combination of steps should the company take to meet these requirements? (Choose two.)

Options:

A.

Deploy an ElastiCache for Redis cluster with read replicas and Multi-AZ enabled.

B.

Deploy an ElastiCache for Memcached global datastore.

C.

Deploy a single-node ElastiCache for Redis cluster with automatic backups enabled. In the event of a failure, create a new cluster and restore data from the most recent backup.

D.

Use the default maintenance window to apply any required system changes and mandatory updates as soon as they are available.

E.

Choose a preferred maintenance window at the time of lowest usage to apply any required changes and mandatory updates.

Question 29

A company hosts an on-premises Microsoft SQL Server Enterprise edition database with Transparent Data Encryption (TDE) enabled. The database is 20 TB in size and includes sparse tables. The company needs to migrate the database to Amazon RDS for SQL Server during a maintenance window that is scheduled for an upcoming weekend. Data-at-rest encryption must be enabled for the target DB instance.

Which combination of steps should the company take to migrate the database to AWS in the MOST operationally efficient manner? (Choose two.)

Options:

A.

Use AWS Database Migration Service (AWS DMS) to migrate from the on-premises source database to the RDS for SQL Server target database.

B.

Disable TDE. Create a database backup without encryption. Copy the backup to Amazon S3.

C.

Restore the backup to the RDS for SQL Server DB instance. Enable TDE for the RDS for SQL Server DB instance.

D.

Set up an AWS Snowball Edge device. Copy the database backup to the device. Send the device to AWS. Restore the database from Amazon S3.

E.

Encrypt the data with client-side encryption before transferring the data to Amazon RDS.

Question 30

A team of Database Specialists is currently investigating performance issues on an Amazon RDS for MySQL DB instance and is reviewing related metrics. The team wants to narrow the possibilities down to specific database wait events to better understand the situation.

How can the Database Specialists accomplish this?

Options:

A.

Enable the option to push all database logs to Amazon CloudWatch for advanced analysis

B.

Create appropriate Amazon CloudWatch dashboards to contain specific periods of time

C.

Enable Amazon RDS Performance Insights and review the appropriate dashboard

D.

Enable Enhanced Monitoring with the appropriate settings

Question 31

A company is using Amazon RDS for MySQL to redesign its business application. A Database Specialist has noticed that the Development team is restoring their MySQL database multiple times a day when Developers make mistakes in their schema updates. The Developers sometimes need to wait hours for the restores to complete.

Multiple team members are working on the project, making it difficult to find the correct restore point for each mistake.

Which approach should the Database Specialist take to reduce downtime?

Options:

A.

Deploy multiple read replicas and have the team members make changes to separate replica instances

B.

Migrate to Amazon RDS for SQL Server, take a snapshot, and restore from the snapshot

C.

Migrate to Amazon Aurora MySQL and enable the Aurora Backtrack feature

D.

Enable the Amazon RDS for MySQL Backtrack feature

Question 32

A Database Specialist is creating a new Amazon Neptune DB cluster and is attempting to load data from Amazon S3 into the Neptune DB cluster using the Neptune bulk loader API. The Database Specialist receives the following error:

“Unable to connect to s3 endpoint. Provided source = s3://mybucket/graphdata/ and region = us-east-1. Please verify your S3 configuration.”

Which combination of actions should the Database Specialist take to troubleshoot the problem? (Choose two.)

Options:

A.

Check that Amazon S3 has an IAM role granting read access to Neptune

B.

Check that an Amazon S3 VPC endpoint exists

C.

Check that a Neptune VPC endpoint exists

D.

Check that Amazon EC2 has an IAM role granting read access to Amazon S3

E.

Check that Neptune has an IAM role granting read access to Amazon S3

Question 33

A business's production database is hosted on a single-node Amazon RDS for MySQL DB instance in a United States AWS Region.

A week before a significant sales event, a new database maintenance update is released and designated as required. The business wants to minimize the DB instance's downtime and asks a database expert to make the DB instance highly available until the sales event concludes.

Which solution will satisfy these criteria?

Options:

A.

Defer the maintenance update until the sales event is over.

B.

Create a read replica with the latest update. Initiate a failover before the sales event.

C.

Create a read replica with the latest update. Transfer all read-only traffic to the read replica during the sales event.

D.

Convert the DB instance into a Multi-AZ deployment. Apply the maintenance update.

Question 34

A Database Specialist is constructing a new Amazon Neptune DB cluster and attempts to load data from Amazon S3 using the Neptune bulk loader API. The Database Specialist receives the following error message:

“Unable to connect to s3 endpoint. Provided source = s3://mybucket/graphdata/ and region = us-east-1. Please verify your S3 configuration.”

Which of the following actions should the Database Specialist take to resolve the issue? (Select two.)

Options:

A.

Check that Amazon S3 has an IAM role granting read access to Neptune

B.

Check that an Amazon S3 VPC endpoint exists

C.

Check that a Neptune VPC endpoint exists

D.

Check that Amazon EC2 has an IAM role granting read access to Amazon S3

E.

Check that Neptune has an IAM role granting read access to Amazon S3

Question 35

A company is developing a multi-tier web application hosted on AWS using Amazon Aurora as the database. The application needs to be deployed to production and other non-production environments. A Database Specialist needs to specify different MasterUsername and MasterUserPassword properties in the AWS CloudFormation templates used for automated deployment. The CloudFormation templates are version controlled in the company's code repository. The company also needs to meet a compliance requirement by routinely rotating its database master password for production.

What is most secure solution to store the master password?

Options:

A.

Store the master password in a parameter file in each environment. Reference the environment-specific parameter file in the CloudFormation template.

B.

Encrypt the master password using an AWS KMS key. Store the encrypted master password in the CloudFormation template.

C.

Use the secretsmanager dynamic reference to retrieve the master password stored in AWS Secrets Manager and enable automatic rotation.

D.

Use the ssm dynamic reference to retrieve the master password stored in the AWS Systems Manager Parameter Store and enable automatic rotation.
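
For reference, a CloudFormation dynamic reference resolves the secret at deploy time, so no credential is ever committed to the repository. A minimal sketch of the relevant template fragment, deployed here with boto3; the secret name, stack name, and resource properties are hypothetical:

    import textwrap
    import boto3

    # {{resolve:secretsmanager:...}} is substituted by CloudFormation when the
    # stack is created or updated; the template itself stays credential-free.
    template = textwrap.dedent("""\
        AWSTemplateFormatVersion: '2010-09-09'
        Resources:
          AuroraCluster:
            Type: AWS::RDS::DBCluster
            Properties:
              Engine: aurora-mysql
              MasterUsername: '{{resolve:secretsmanager:prod/aurora:SecretString:username}}'
              MasterUserPassword: '{{resolve:secretsmanager:prod/aurora:SecretString:password}}'
        """)

    boto3.client("cloudformation").create_stack(
        StackName="aurora-prod",
        TemplateBody=template,
    )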

Question 36

A business's production databases are housed on a 3 TB Amazon Aurora MySQL DB cluster. The DB cluster is deployed in the us-east-1 Region. For disaster recovery (DR) requirements, the company's database specialist needs to be able to quickly deploy the DB cluster in another AWS Region to handle the production load, with an RTO of less than two hours.

Which approach is the MOST operationally efficient way to meet these requirements?

Options:

A.

Implement an AWS Lambda function to take a snapshot of the production DB cluster every 2 hours, and copy that snapshot to an Amazon S3 bucket in the DR Region. Restore the snapshot to an appropriately sized DB cluster in the DR Region.

B.

Add a cross-Region read replica in the DR Region with the same instance type as the current primary instance. If the read replica in the DR Region needs to be used for production, promote the read replica to become a standalone DB cluster.

C.

Create a smaller DB cluster in the DR Region. Configure an AWS Database Migration Service (AWS DMS) task with change data capture (CDC) enabled to replicate data from the current production DB cluster to the DB cluster in the DR Region.

D.

Create an Aurora global database that spans two Regions. Use AWS Database Migration Service (AWS DMS) to migrate the existing database to the new global database.

Question 37

A company is running an Amazon RDS for PostgreSQL DB instance and wants to migrate it to an Amazon Aurora PostgreSQL DB cluster. The current database is 1 TB in size. The migration needs to have minimal downtime.

What is the FASTEST way to accomplish this?

Options:

A.

Create an Aurora PostgreSQL DB cluster. Set up replication from the source RDS for PostgreSQL DB instance using AWS DMS to the target DB cluster.

B.

Use the pg_dump and pg_restore utilities to extract and restore the RDS for PostgreSQL DB instance to the Aurora PostgreSQL DB cluster.

C.

Create a database snapshot of the RDS for PostgreSQL DB instance and use this snapshot to create the Aurora PostgreSQL DB cluster.

D.

Migrate data from the RDS for PostgreSQL DB instance to an Aurora PostgreSQL DB cluster using an Aurora Replica. Promote the replica during the cutover.

Question 38

A database professional is developing an application that will respond to single-instance requests. The application will query large amounts of client data and return results to end users. The reports may include a variety of fields, and the database professional wants to enable users to query the database using any of the fields offered.

During peak periods, the database's traffic volume will be significant yet variable. However, the database will see little activity during the rest of the day.

Which approach will be the MOST cost-effective in meeting these requirements?

Options:

A.

Amazon DynamoDB with provisioned capacity mode and auto scaling

B.

Amazon DynamoDB with on-demand capacity mode

C.

Amazon Aurora with auto scaling enabled

D.

Amazon Aurora in a serverless mode

Question 39

A gaming firm recently acquired an iOS game that is especially popular during the Christmas season. The firm has opted to add a leaderboard to the game, powered by Amazon DynamoDB. The application's load is likely to increase significantly throughout the Christmas season.

Which solution satisfies these criteria at the lowest possible cost?

Options:

A.

DynamoDB Streams

B.

DynamoDB with DynamoDB Accelerator

C.

DynamoDB with on-demand capacity mode

D.

DynamoDB with provisioned capacity mode with Auto Scaling

Question 40

A financial company has allocated an Amazon RDS MariaDB DB instance with large storage capacity to accommodate migration efforts. Post-migration, the company purged unwanted data from the instance. The company now wants to downsize storage to save money. The solution must have the least impact on production and near-zero downtime.

Which solution would meet these requirements?

Options:

A.

Create a snapshot of the old databases and restore the snapshot with the required storage

B.

Create a new RDS DB instance with the required storage and move the databases from the old instances to the new instance using AWS DMS

C.

Create a new database using native backup and restore

D.

Create a new read replica and make it the primary by terminating the existing primary

Question 41

A company uses Amazon DynamoDB global tables to power an online game played by gamers around the globe. As the game grew in popularity, the volume of queries to DynamoDB rose substantially. Recently, gamers have complained that the game's state is inconsistent between countries. A database professional notices that the ReplicationLatency metric for many replica tables is abnormally high.

Which strategy will resolve the issue?

Options:

A.

Configure all replica tables to use DynamoDB auto scaling.

B.

Configure a DynamoDB Accelerator (DAX) cluster on each of the replicas.

C.

Configure the primary table to use DynamoDB auto scaling and the replica tables to use manually provisioned capacity.

D.

Configure the table-level write throughput limit service quota to a higher value.

Question 42

A database professional is tasked with migrating 25 GB of data files from an on-premises storage system to an Amazon Neptune database.

Which method of data loading is the FASTEST?

Options:

A.

Upload the data to Amazon S3 and use the Loader command to load the data from Amazon S3 into the Neptune database.

B.

Write a utility to read the data from the on-premises storage and run INSERT statements in a loop to load the data into the Neptune database.

C.

Use the AWS CLI to load the data directly from the on-premises storage into the Neptune database.

D.

Use AWS DataSync to load the data directly from the on-premises storage into the Neptune database.

Question 43

To meet new data compliance requirements, a company needs to keep critical data durably stored and readily accessible for 7 years. Data that is more than 1 year old is considered archival data and must automatically be moved out of the Amazon Aurora MySQL DB cluster every week. On average, around 10 GB of new data is added to the database every month. A database specialist must choose the most operationally efficient solution to migrate the archival data to Amazon S3.

Which solution meets these requirements?

Options:

A.

Create a custom script that exports archival data from the DB cluster to Amazon S3 using a SQL view, then deletes the archival data from the DB cluster. Launch an Amazon EC2 instance with a weekly cron job to execute the custom script.

B.

Configure an AWS Lambda function that exports archival data from the DB cluster to Amazon S3 using a SELECT INTO OUTFILE S3 statement, then deletes the archival data from the DB cluster. Schedule the Lambda function to run weekly using Amazon EventBridge (Amazon CloudWatch Events).

C.

Configure two AWS Lambda functions: one that exports archival data from the DB cluster to Amazon S3 using the mysqldump utility, and another that deletes the archival data from the DB cluster. Schedule both Lambda functions to run weekly using Amazon EventBridge (Amazon CloudWatch Events).

D.

Use AWS Database Migration Service (AWS DMS) to continually export the archival data from the DB cluster to Amazon S3. Configure an AWS Data Pipeline process to run weekly that executes a custom SQL script to delete the archival data from the DB cluster.
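
For context, Aurora MySQL can write query results straight to Amazon S3 with SELECT INTO OUTFILE S3, provided the cluster has an associated IAM role that can write to the bucket. A minimal Lambda handler sketch; it assumes the pymysql package is bundled with the function, and the endpoint, credentials, table, and bucket names are hypothetical:

    import pymysql  # assumed to be packaged with the Lambda function

    def handler(event, context):
        conn = pymysql.connect(
            host="archive-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
            user="archiver",
            password="example-password",  # fetch from Secrets Manager in practice
            database="appdb",
        )
        try:
            with conn.cursor() as cur:
                # Export rows older than one year directly to S3, then purge
                # them from the cluster.
                cur.execute("""
                    SELECT * FROM critical_records
                    WHERE created_at < NOW() - INTERVAL 1 YEAR
                    INTO OUTFILE S3 's3://archive-bucket/critical_records'
                    FORMAT CSV HEADER
                """)
                cur.execute("""
                    DELETE FROM critical_records
                    WHERE created_at < NOW() - INTERVAL 1 YEAR
                """)
            conn.commit()
        finally:
            conn.close()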

Question 44

A gaming company uses Amazon Aurora Serverless for one of its internal applications. The company's developers use Amazon RDS Data API to work with the Aurora Serverless DB cluster. After a recent security review, the company is mandating security enhancements. A database specialist must ensure that access to RDS Data API is private and never passes through the public internet.

What should the database specialist do to meet this requirement?

Options:

A.

Modify the Aurora Serverless cluster by selecting a VPC with private subnets.

B.

Modify the Aurora Serverless cluster by unchecking the publicly accessible option.

C.

Create an interface VPC endpoint that uses AWS PrivateLink for RDS Data API.

D.

Create a gateway VPC endpoint for RDS Data API.
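
For reference, RDS Data API is reachable through an interface VPC endpoint powered by AWS PrivateLink. A minimal boto3 sketch with hypothetical VPC, subnet, and security group IDs:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # An interface endpoint keeps rds-data traffic on the AWS network; private
    # DNS makes the regional API hostname resolve to the endpoint.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.us-east-1.rds-data",
        SubnetIds=["subnet-0123456789abcdef0"],
        SecurityGroupIds=["sg-0123456789abcdef0"],
        PrivateDnsEnabled=True,
    )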

Question 45

An advertising company is developing a backend for a bidding platform. The company needs a cost-effective datastore solution that will accommodate a sudden increase in the volume of write transactions. The database also needs to make data changes available in a near real-time data stream.

Which solution will meet these requirements?

Options:

A.

Amazon Aurora MySQL Multi-AZ DB cluster

B.

Amazon Keyspaces (for Apache Cassandra)

C.

Amazon DynamoDB table with DynamoDB auto scaling

D.

Amazon DocumentDB (with MongoDB compatibility) cluster with a replica instance in a second Availability Zone

Question 46

A large gaming company is creating a centralized solution to store player session state for multiple online games. The workload requires key-value storage with low latency and will be an equal mix of reads and writes. Data should be written into the AWS Region closest to the user across the games' geographically distributed user base. The architecture should minimize the amount of overhead required to manage the replication of data between Regions.

Which solution meets these requirements?

Options:

A.

Amazon RDS for MySQL with multi-Region read replicas

B.

Amazon Aurora global database

C.

Amazon RDS for Oracle with GoldenGate

D.

Amazon DynamoDB global tables

Question 47

A company is using a Single-AZ Amazon RDS for MySQL DB instance for development. The DB instance is experiencing slow performance when queries are executed. Amazon CloudWatch metrics indicate that the instance requires more I/O capacity.

Which actions can a database specialist perform to resolve this issue? (Choose two.)

Options:

A.

Restart the application tool used to execute queries.

B.

Change to a database instance class with higher throughput.

C.

Convert from Single-AZ to Multi-AZ.

D.

Increase the I/O parameter in Amazon RDS Enhanced Monitoring.

E.

Convert from General Purpose to Provisioned IOPS (PIOPS).

Question 48

A company has a production Amazon Aurora DB cluster that serves both online transaction processing (OLTP) transactions and compute-intensive reports. The reports run for 10% of the total cluster uptime, while the OLTP transactions run all the time. The company has benchmarked its workload and determined that a six-node Aurora DB cluster is appropriate for the peak workload.

The company is now looking at cutting costs for this DB cluster, but needs to have a sufficient number of nodes in the cluster to support the workload at different times. The workload has not changed since the previous benchmarking exercise.

How can a Database Specialist address these requirements with minimal user involvement?

Options:

A.

Split up the DB cluster into two different clusters: one for OLTP and the other for reporting. Monitor and set up replication between the two clusters to keep data consistent.

B.

Review and evaluate the peak combined workload. Ensure that utilization of the DB cluster node is at an acceptable level. Adjust the number of instances, if necessary.

C.

Use the stop cluster functionality to stop all the nodes of the DB cluster during times of minimal workload. The cluster can be restarted again depending on the workload at the time.

D.

Set up automatic scaling on the DB cluster. This will allow the number of reader nodes to adjust automatically to the reporting workload, when needed.

Question 49

An AWS CloudFormation stack that included an Amazon RDS DB instance was mistakenly deleted, resulting in the loss of recent data. A Database Specialist must add RDS parameters to the CloudFormation template to minimize the possibility of future inadvertent instance data loss.

Which settings will satisfy this criterion? (Select three.)

Options:

A.

Set DeletionProtection to True

B.

Set MultiAZ to True

C.

Set TerminationProtection to True

D.

Set DeleteAutomatedBackups to False

E.

Set DeletionPolicy to Delete

F.

Set DeletionPolicy to Retain

Question 50

A company is moving its fraud detection application from on premises to the AWS Cloud and is using Amazon Neptune for data storage. The company has set up a 1 Gbps AWS Direct Connect connection to migrate 25 TB of fraud detection data from the on-premises data center to a Neptune DB instance. The company already has an Amazon S3 bucket and an S3 VPC endpoint, and 80% of the company’s network bandwidth is available.

How should the company perform this data load?

Options:

A.

Use an AWS SDK with a multipart upload to transfer the data from on premises to the S3 bucket. Use the Copy command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.

B.

Use AWS Database Migration Service (AWS DMS) to transfer the data from on premises to the S3 bucket. Use the Loader command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.

C.

Use AWS DataSync to transfer the data from on premises to the S3 bucket. Use the Loader command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.

D.

Use the AWS CLI to transfer the data from on premises to the S3 bucket. Use the Copy command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.

Question 51

A manufacturing company has an inventory system that stores information in an Amazon Aurora MySQL DB cluster. The database tables are partitioned. The database size has grown to 3 TB. Users run one-time queries by using a SQL client. Queries that use an equijoin to join large tables are taking a long time to run.

Which action will improve query performance with the LEAST operational effort?

Options:

A.

Migrate the database to a new Amazon Redshift data warehouse.

B.

Enable hash joins on the database by setting the variable optimizer_switch to hash_join=on.

C.

Take a snapshot of the DB cluster. Create a new DB instance by using the snapshot, and enable parallel query mode.

D.

Add an Aurora read replica.

Question 52

A company uses Microsoft SQL Server on Amazon RDS in a Multi-AZ deployment as the database engine for its application. The company was recently acquired by another company. A database specialist must rename the database to follow a new naming standard.

Which combination of steps should the database specialist take to rename the database? (Choose two.)

Options:

A.

Turn off automatic snapshots for the DB instance. Rename the database with the rdsadmin.dbo.rds_modify_db_name stored procedure. Turn on the automatic snapshots.

B.

Turn off Multi-AZ for the DB instance. Rename the database with the rdsadmin.dbo.rds_modify_db_name stored procedure. Turn on Multi-AZ Mirroring.

C.

Delete all existing snapshots for the DB instance. Use the rdsadmin.dbo.rds_modify_db_name stored procedure.

D.

Update the application with the new database connection string.

E.

Update the DNS record for the DB instance.

Question 53

A database specialist is building a system that uses a static vendor dataset of postal codes and related territory information that is less than 1 GB in size. The dataset is loaded into the application's cache at startup. The company needs to store this data in a way that provides the lowest cost with a low application startup time.

Which approach will meet these requirements?

Options:

A.

Use an Amazon RDS DB instance. Shut down the instance once the data has been read.

B.

Use Amazon Aurora Serverless. Allow the service to spin resources up and down, as needed.

C.

Use Amazon DynamoDB in on-demand capacity mode.

D.

Use Amazon S3 and load the data from flat files.

Question 54

A Database Specialist is performing a proof of concept with Amazon Aurora using a small instance to confirm a simple database behavior. When loading a large dataset and creating the index, the Database Specialist encounters the following error message from Aurora:

ERROR: could not write block 7507718 of temporary file: No space left on device

What is the cause of this error and what should the Database Specialist do to resolve this issue?

Options:

A.

The scaling of Aurora storage cannot catch up with the data loading. The Database Specialist needs to modify the workload to load the data slowly.

B.

The scaling of Aurora storage cannot catch up with the data loading. The Database Specialist needs to enable Aurora storage scaling.

C.

The local storage used to store temporary tables is full. The Database Specialist needs to scale up the instance.

D.

The local storage used to store temporary tables is full. The Database Specialist needs to enable local storage scaling.

Question 55

An online retail company is planning a multi-day flash sale that must support processing of up to 5,000 orders per second. The number of orders and exact schedule for the sale will vary each day. During the sale, approximately 10,000 concurrent users will look at the deals before buying items. Outside of the sale, the traffic volume is very low. The acceptable performance for read/write queries should be under 25 ms. Order items are about 2 KB in size and have a unique identifier. The company requires the most cost-effective solution that will automatically scale and is highly available.

Which solution meets these requirements?

Options:

A.

Amazon DynamoDB with on-demand capacity mode

B.

Amazon Aurora with one writer node and an Aurora Replica with the parallel query feature enabled

C.

Amazon DynamoDB with provisioned capacity mode with 5,000 write capacity units (WCUs) and 10,000 read capacity units (RCUs)

D.

Amazon Aurora with one writer node and two cross-Region Aurora Replicas

Question 56

A company has an on-premises Oracle Real Application Clusters (RAC) database. The company wants to migrate the database to AWS and reduce licensing costs. The company's application team wants to store JSON payloads that expire after 28 hours. The company has development capacity if code changes are required.

Which solution meets these requirements?

Options:

A.

Use Amazon DynamoDB and leverage the Time to Live (TTL) feature to automatically expire the data.

B.

Use Amazon RDS for Oracle with Multi-AZ. Create an AWS Lambda function to purge the expired data. Schedule the Lambda function to run daily using Amazon EventBridge.

C.

Use Amazon DocumentDB with a read replica in a different Availability Zone. Use DocumentDB change streams to expire the data.

D.

Use Amazon Aurora PostgreSQL with Multi-AZ and leverage the Time to Live (TTL) feature to automatically expire the data.
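
For context, DynamoDB TTL deletes an item shortly after the epoch-seconds timestamp in a designated attribute passes, which maps naturally onto a 28-hour payload expiry. A minimal boto3 sketch with hypothetical table and attribute names:

    import time
    import boto3

    dynamodb = boto3.client("dynamodb")

    # Designate the attribute that holds each item's expiry timestamp.
    dynamodb.update_time_to_live(
        TableName="json-payloads",
        TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
    )

    # Store a payload that expires 28 hours from now.
    dynamodb.put_item(
        TableName="json-payloads",
        Item={
            "payload_id": {"S": "abc-123"},
            "payload": {"S": '{"orderId": 42, "status": "pending"}'},
            "expires_at": {"N": str(int(time.time()) + 28 * 3600)},
        },
    )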

Question 57

A financial services organization uses Amazon RDS for Oracle with Transparent Data Encryption (TDE). The organization is obligated to encrypt its data at rest at all times. The key required to decrypt the data must be highly available, and access to the key must be restricted. The organization must be able to rotate the encryption key on demand to comply with regulatory requirements and must be able to disable the key if any potential security vulnerabilities are discovered. Additionally, the organization's overhead must be kept to a minimum.

What method should the database administrator use to configure the encryption to fulfill these specifications?

Options:

A.

AWS CloudHSM

B.

AWS Key Management Service (AWS KMS) with an AWS managed key

C.

AWS Key Management Service (AWS KMS) with server-side encryption

D.

AWS Key Management Service (AWS KMS) CMK with customer-provided material

Question 58

A company recently acquired a new business. A database specialist must migrate an unencrypted 12 TB Amazon RDS for MySQL DB instance to a new AWS account. The database specialist needs to minimize the amount of time required to migrate the database.

Which solution meets these requirements?

Options:

A.

Create a snapshot of the source DB instance in the source account. Share the snapshot with the destination account. In the target account, create a DB instance from the snapshot.

B.

Use AWS Resource Access Manager to share the source DB instance with the destination account. Create a DB instance in the destination account using the shared resource.

C.

Create a read replica of the DB instance. Give the destination account access to the read replica. In the destination account, create a snapshot of the shared read replica and provision a new RDS for MySQL DB instance.

D.

Use mysqldump to back up the source database. Create an RDS for MySQL DB instance in the destination account. Use the mysql command to restore the backup in the destination database.

Question 59

An online shopping company has a large inflow of shopping requests daily. As a result, there is a consistent load on the company’s Amazon RDS database. A database specialist needs to ensure the database is up and running at all times. The database specialist wants an automatic notification system for issues that may cause database downtime or for configuration changes made to the database.

What should the database specialist do to achieve this? (Choose two.)

Options:

A.

Create an Amazon CloudWatch Events event to send a notification using Amazon SNS on every API call logged in AWS CloudTrail.

B.

Subscribe to an RDS event subscription and configure it to use an Amazon SNS topic to send notifications.

C.

Use Amazon SES to send notifications based on configured Amazon CloudWatch Events events.

D.

Configure Amazon CloudWatch alarms on various metrics, such as FreeStorageSpace for the RDS instance.

E.

Enable email notifications for AWS Trusted Advisor.
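
For reference, an RDS event subscription pushes categorized events (such as failure, failover, and configuration change) to an SNS topic, while CloudWatch alarms cover metric thresholds. A minimal boto3 sketch of the subscription, with hypothetical names and ARNs:

    import boto3

    rds = boto3.client("rds")

    rds.create_event_subscription(
        SubscriptionName="prod-db-alerts",
        SnsTopicArn="arn:aws:sns:us-east-1:111111111111:db-alerts",
        SourceType="db-instance",
        SourceIds=["prod-mysql-1"],
        EventCategories=["failure", "failover", "configuration change"],
        Enabled=True,
    )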

Question 60

A business is operating an on-premises application that is divided into three tiers: web, application, and MySQL database. The database is predominantly accessed during business hours, with occasional bursts of activity throughout the day. As part of the company's shift to AWS, a database expert wants to increase the availability and minimize the cost of the MySQL database tier.

Which MySQL database choice satisfies these criteria?

Options:

A.

Amazon RDS for MySQL with Multi-AZ

B.

Amazon Aurora Serverless MySQL cluster

C.

Amazon Aurora MySQL cluster

D.

Amazon RDS for MySQL with read replica

Questions 61

A company has a hybrid environment in which a VPC connects to an on-premises network through an AWS Site-to-Site VPN connection. The VPC contains an application that is hosted on Amazon EC2 instances. The EC2 instances run in private subnets behind an Application Load Balancer (ALB) that is associated with multiple public subnets. The EC2 instances need to securely access an Amazon DynamoDB table.

Which solution will meet these requirements?

Options:

A.

Use the internet gateway of the VPC to access the DynamoDB table. Use the ALB to route the traffic to the EC2 instances.

B.

Add a NAT gateway in one of the public subnets of the VPC. Configure the security groups of the EC2 instances to access the DynamoDB table through the NAT gateway.

C.

Use the Site-to-Site VPN connection to route all DynamoDB network traffic through the on-premises network infrastructure to access the EC2 instances.

D.

Create a VPC endpoint for DynamoDB. Assign the endpoint to the route table of the private subnets that contain the EC2 instances.
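
For context, a gateway VPC endpoint for DynamoDB is created against the VPC and attached to the relevant route tables. A minimal boto3 sketch, assuming hypothetical IDs and the us-east-1 Region:

import boto3

ec2 = boto3.client("ec2")

# A gateway endpoint adds a route for the service prefix list to the
# selected route tables; DynamoDB traffic then stays on the AWS network.
ec2.create_vpc_endpoint(
    VpcId="vpc-0abc1234def567890",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0private1234567890"],
)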

Questions 62

A development team asks a database specialist to create a copy of a production Amazon RDS for MySQL DB instance every morning. The development team will use the copied DB instance as a testing environment for development. The original DB instance and the copy will be hosted in different VPCs of the same AWS account. The development team wants the copy to be available by 6 AM each day and wants to use the same endpoint address each day.

Which combination of steps should the database specialist take to meet these requirements MOST cost-effectively? (Choose three.)

Options:

A.

Create a snapshot of the production database each day before the 6 AM deadline.

B.

Create an RDS for MySQL DB instance from the snapshot. Select the desired DB instance size.

C.

Update a defined Amazon Route 53 CNAME record to point to the copied DB instance.

D.

Set up an AWS Database Migration Service (AWS DMS) migration task to copy the snapshot to the copied DB instance.

E.

Use the CopySnapshot action on the production DB instance to create a snapshot before 6 AM.

F.

Update a defined Amazon Route 53 alias record to point to the copied DB instance.
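
For context, keeping a stable endpoint while the underlying copy changes daily is typically done with a DNS record update. A minimal boto3 sketch of a CNAME upsert, assuming hypothetical zone, host, and instance names:

import boto3

route53 = boto3.client("route53")

# UPSERT repoints the stable name at today's copied DB instance endpoint.
route53.change_resource_record_sets(
    HostedZoneId="Z0HYPOTHETICAL",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "test-db.internal.example.com",
                "Type": "CNAME",
                "TTL": 60,
                "ResourceRecords": [{
                    "Value": "copy-20230922.abcdefgh.us-east-1.rds.amazonaws.com"
                }],
            },
        }]
    },
)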

Questions 63

A startup company is building a new application to allow users to visualize their on-premises and cloud networking components. The company expects billions of components to be stored and requires responses in milliseconds. The application should be able to identify:

  • The networks and routes affected if a particular component fails.
  • The networks that have redundant routes between them.
  • The networks that do not have redundant routes between them.
  • The fastest path between two networks.

Which database engine meets these requirements?

Options:

A.

Amazon Aurora MySQL

B.

Amazon Neptune

C.

Amazon ElastiCache for Redis

D.

Amazon DynamoDB

Questions 64

A company is building a software as a service application. As part of the new user sign-on workflow, a Python script invokes the CreateTable operation using the Amazon DynamoDB API. After the call returns, the script attempts to call PutItem.

Occasionally, the PutItem request fails with a ResourceNotFoundException error, which causes the workflow to fail. The development team has confirmed that the same table name is used in the two API calls.

How should a database specialist fix this issue?

Options:

A.

Add an allow statement for the dynamodb:PutItem action in a policy attached to the role used by the application creating the table.

B.

Set the StreamEnabled property of the StreamSpecification parameter to true, then call PutItem.

C.

Change the application to call DescribeTable periodically until the TableStatus is ACTIVE, then call PutItem.

D.

Add a ConditionExpression parameter in the PutItem request.
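
For context, the race in this scenario exists because CreateTable is asynchronous: it returns while the table is still in CREATING status. A minimal boto3 sketch of waiting for ACTIVE before writing (the table_exists waiter polls DescribeTable); the table name and schema are hypothetical:

import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="users",
    AttributeDefinitions=[{"AttributeName": "user_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "user_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)

# Poll DescribeTable until TableStatus is ACTIVE, then write safely.
dynamodb.get_waiter("table_exists").wait(TableName="users")

dynamodb.put_item(
    TableName="users",
    Item={"user_id": {"S": "u-123"}},
)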

Questions 65

A company’s ecommerce website uses Amazon DynamoDB for purchase orders. Each order is made up of a Customer ID and an Order ID. The DynamoDB table uses the Customer ID as the partition key and the Order ID as the sort key.

To meet a new requirement, the company also wants the ability to query the table by using a third attribute named Invoice ID. Queries using the Invoice ID must be strongly consistent. A database specialist must provide this capability with optimal performance and minimal overhead.

What should the database specialist do to meet these requirements?

Options:

A.

Add a global secondary index on Invoice ID to the existing table.

B.

Add a local secondary index on Invoice ID to the existing table.

C.

Recreate the table by using the latest snapshot while adding a local secondary index on Invoice ID.

D.

Use the partition key and a FilterExpression parameter with a filter on Invoice ID for all queries.

Questions 66

A corporation wishes to move a 1 TB Oracle database to an Amazon Aurora PostgreSQL DB cluster. The corporation's database specialist noticed that the Oracle database stores 100 GB of large binary objects (LOBs) across many tables. The LOBs are up to 500 MB in size, with an average size of 350 MB. The database specialist chose AWS DMS with the largest replication instance to transfer the data.

How should the database specialist configure AWS DMS to optimize the migration?

Options:

A.

Create a single task using full LOB mode with a LOB chunk size of 500 MB to migrate the data and LOBs together

B.

Create two tasks: task1 with LOB tables using full LOB mode with a LOB chunk size of 500 MB and task2 without LOBs

C.

Create two tasks: task1 with LOB tables using limited LOB mode with a maximum LOB size of 500 MB and task 2 without LOBs

D.

Create a single task using limited LOB mode with a maximum LOB size of 500 MB to migrate data and LOBs together
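
For context, LOB handling is controlled by DMS task settings; in limited LOB mode, LobMaxSize is expressed in kilobytes, so 500 MB is 512,000 KB. A minimal boto3 sketch with hypothetical ARNs and schema names:

import boto3
import json

dms = boto3.client("dms")

# Limited LOB mode truncates LOBs above LobMaxSize but is much faster
# than full LOB mode, which fetches each LOB in chunks.
task_settings = {
    "TargetMetadata": {
        "SupportLobs": True,
        "FullLobMode": False,
        "LimitedSizeLobMode": True,
        "LobMaxSize": 512000,  # KB; covers LOBs up to 500 MB
    }
}

dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-lob-tables",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC1",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT1",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INST1",
    MigrationType="full-load",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {"schema-name": "APP", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
    ReplicationTaskSettings=json.dumps(task_settings),
)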

Questions 67

Recently, a financial institution created a portfolio management service. The application's backend is powered by Amazon Aurora MySQL.

The firm requires a recovery time objective (RTO) of 5 minutes and a recovery point objective (RPO) of 5 minutes. A database professional must create a disaster recovery solution that is efficient and has low replication latency.

How should the database professional meet these requirements?

Options:

A.

Configure AWS Database Migration Service (AWS DMS) and create a replica in a different AWS Region.

B.

Configure an Amazon Aurora global database and add a different AWS Region.

C.

Configure a binlog and create a replica in a different AWS Region.

D.

Configure a cross-Region read replica.

Questions 68

A company is building a new web platform where user requests trigger an AWS Lambda function that performs an insert into an Amazon Aurora MySQL DB cluster. Initial tests with less than 10 users on the new platform yielded successful execution and fast response times. However, upon more extensive tests with the actual target of 3,000 concurrent users, Lambda functions are unable to connect to the DB cluster and receive "too many connections" errors.

Which of the following will resolve this issue?

Options:

A.

Edit the my.cnf file for the DB cluster to increase max_connections

B.

Increase the instance size of the DB cluster

C.

Change the DB cluster to Multi-AZ

D.

Increase the number of Aurora Replicas

Questions 69

A company is hosting critical business data in an Amazon Redshift cluster. Due to the sensitive nature of the data, the cluster is encrypted at rest using AWS KMS. As a part of disaster recovery requirements, the company needs to copy the Amazon Redshift snapshots to another Region.

Which steps should be taken in the AWS Management Console to meet the disaster recovery requirements?

Options:

A.

Create a new KMS customer master key in the source Region. Switch to the destination Region, enable Amazon Redshift cross-Region snapshots, and use the KMS key of the source Region.

B.

Create a new IAM role with access to the KMS key. Enable Amazon Redshift cross-Region replication using the new IAM role, and use the KMS key of the source Region.

C.

Enable Amazon Redshift cross-Region snapshots in the source Region, and create a snapshot copy grant and use a KMS key in the destination Region.

D.

Create a new KMS customer master key in the destination Region and create a new IAM role with access to the new KMS key. Enable Amazon Redshift cross-Region replication in the source Region and use the KMS key of the destination Region.
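
For context, copying encrypted Redshift snapshots across Regions requires a snapshot copy grant that references a KMS key in the destination Region. A minimal boto3 sketch with hypothetical identifiers:

import boto3

# The grant must be created in the destination Region, where the KMS key lives.
dest = boto3.client("redshift", region_name="us-west-2")
dest.create_snapshot_copy_grant(
    SnapshotCopyGrantName="dr-copy-grant",
    KmsKeyId="arn:aws:kms:us-west-2:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab",
)

# Cross-Region snapshot copy is then enabled on the source cluster.
src = boto3.client("redshift", region_name="us-east-1")
src.enable_snapshot_copy(
    ClusterIdentifier="analytics-cluster",
    DestinationRegion="us-west-2",
    RetentionPeriod=7,
    SnapshotCopyGrantName="dr-copy-grant",
)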

Questions 70

A company is running a website on Amazon EC2 instances deployed in multiple Availability Zones (AZs). The site performs a high number of repetitive reads and writes each second on an Amazon RDS for MySQL Multi-AZ DB instance with General Purpose SSD (gp2) storage. After comprehensive testing and analysis, a database specialist discovers that there is high read latency and high CPU utilization on the DB instance.

Which approach should the database specialist take to resolve this issue without changing the application?

Options:

A.

Implement sharding to distribute the load across multiple RDS for MySQL databases.

B.

Use the same RDS for MySQL instance class with Provisioned IOPS (PIOPS) storage.

C.

Add an RDS for MySQL read replica.

D.

Modify the RDS for MySQL database class to a bigger size and implement Provisioned IOPS (PIOPS).

Questions 71

A company stores critical data for a department in Amazon RDS for MySQL DB instances. The department was closed for 3 weeks and notified a database specialist that access to the RDS DB instances should not be granted to anyone during this time. To meet this requirement, the database specialist stopped all the DB instances used by the department but did not select the option to create a snapshot. Before the 3 weeks expired, the database specialist discovered that users could connect to the database successfully.

What could be the reason for this?

Options:

A.

When stopping the DB instance, the option to create a snapshot should have been selected.

B.

When stopping the DB instance, the duration for stopping the DB instance should have been selected.

C.

Stopped DB instances will automatically restart if the number of attempted connections exceeds the threshold set.

D.

Stopped DB instances will automatically restart if the instance is not manually started after 7 days.

Questions 72

A business uses Amazon EC2 instances in VPC A to serve an internal file-sharing application. The application is supported by an Amazon ElastiCache cluster in VPC B, which is peered with VPC A. The corporation migrates its application instances from VPC A to VPC B. According to the logs, the file-sharing application can no longer connect to the ElastiCache cluster.

What is the best course of action for a database professional to take in order to remedy this issue?

Options:

A.

Create a second security group on the EC2 instances. Add an outbound rule to allow traffic from the ElastiCache cluster security group.

B.

Delete the ElastiCache security group. Add an interface VPC endpoint to enable the EC2 instances to connect to the ElastiCache cluster.

C.

Modify the ElastiCache security group by adding outbound rules that allow traffic to VPC CIDR blocks from the ElastiCache cluster.

D.

Modify the ElastiCache security group by adding an inbound rule that allows traffic from the EC2 instances security group to the ElastiCache cluster.

Questions 73

A manufacturing company stores its inventory details in an Amazon DynamoDB table in the us-east-2 Region. According to new compliance and regulatory policies, the company is required to back up all of its tables nightly and store these backups in the us-west-2 Region for disaster recovery for 1 year.

Which solution MOST cost-effectively meets these requirements?

Options:

A.

Convert the existing DynamoDB table into a global table and create a global table replica in the us-west-2 Region.

B.

Use AWS Backup to create a backup plan. Configure cross-Region replication in the plan and assign the DynamoDB table to this plan

C.

Create an on-demand backup of the DynamoDB table and restore this backup in the us-west-2 Region.

D.

Enable Amazon S3 Cross-Region Replication (CRR) on the S3 bucket where DynamoDB on-demand backups are stored.

Questions 74

A large financial services company requires that all data be encrypted in transit. A Developer is attempting to connect to an Amazon RDS DB instance using the company VPC for the first time with credentials provided by a Database Specialist. Other members of the Development team can connect, but this user is consistently receiving an error indicating a communications link failure. The Developer asked the Database Specialist to reset the password a number of times, but the error persists.

Which step should be taken to troubleshoot this issue?

Options:

A.

Ensure that the database option group for the RDS DB instance allows ingress from the Developer machine’s IP address

B.

Ensure that the RDS DB instance’s subnet group includes a public subnet to allow the Developer to connect

C.

Ensure that the RDS DB instance has not reached its maximum connections limit

D.

Ensure that the connection is using SSL and is addressing the port where the RDS DB instance is listening for encrypted connections

Questions 75

A company wants to build a new invoicing service for its cloud-native application on AWS. The company has a small development team and wants to focus on service feature development and minimize operations and maintenance as much as possible. The company expects the service to handle billions of requests and millions of new records every day. The service feature requirements, including data access patterns, are well-defined. The service has an availability target of 99.99% with a millisecond latency requirement. The database for the service will be the system of record for invoicing data.

Which database solution meets these requirements at the LOWEST cost?

Options:

A.

Amazon Neptune

B.

Amazon Aurora PostgreSQL Serverless

C.

Amazon RDS for PostgreSQL

D.

Amazon DynamoDB

Questions 76

A marketing company is developing an application to track responses to email message campaigns. The company needs a database storage solution that is optimized to work with highly connected data. The database needs to limit connections and programmatic access to the data by using IAM policies.

Which solution will meet these requirements?

Options:

A.

Amazon ElastiCache for Redis cluster

B.

Amazon Aurora MySQL DB cluster

C.

Amazon DynamoDB table

D.

Amazon Neptune DB cluster

Questions 77

A global company is developing an application across multiple AWS Regions. The company needs a database solution with low latency in each Region and automatic disaster recovery. The database must be deployed in an active-active configuration with automatic data synchronization between Regions.

Which solution will meet these requirements with the LOWEST latency?

Options:

A.

Amazon RDS with cross-Region read replicas

B.

Amazon DynamoDB global tables

C.

Amazon Aurora global database

D.

Amazon Athena and Amazon S3 with S3 Cross-Region Replication

Questions 78

A company has an AWS CloudFormation template written in JSON that is used to launch new Amazon RDS for MySQL DB instances. The security team has asked a database specialist to ensure that the master password is automatically rotated every 30 days for all new DB instances that are launched using the template.

What is the MOST operationally efficient solution to meet these requirements?

Options:

A.

Save the password in an Amazon S3 object. Encrypt the S3 object with an AWS KMS key. Set the KMS key to be rotated every 30 days by setting the EnableKeyRotation property to true. Use a CloudFormation custom resource to read the S3 object to extract the password.

B.

Create an AWS Lambda function to rotate the secret. Modify the CloudFormation template to add an AWS::SecretsManager::RotationSchedule resource. Configure the RotationLambdaARN value and, for the RotationRules property, set the AutomaticallyAfterDays parameter to 30.

C.

Modify the CloudFormation template to use the AWS KMS key as the database password. Configure an Amazon EventBridge rule to invoke the KMS API to rotate the key every 30 days by setting the ScheduleExpression parameter to */30.

D.

Integrate the Amazon RDS for MySQL DB instances with AWS IAM and centrally manage the master database user password.
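
For context, the AWS::SecretsManager::RotationSchedule resource wires a rotation Lambda function and a schedule to a secret; the equivalent API call looks like the following boto3 sketch, with hypothetical names and ARNs:

import boto3

secretsmanager = boto3.client("secretsmanager")

# Attach a rotation Lambda function and a 30-day schedule to the secret
# that stores the DB master password.
secretsmanager.rotate_secret(
    SecretId="prod/mysql/master",
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rds-rotator",
    RotationRules={"AutomaticallyAfterDays": 30},
)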

Questions 79

A large ecommerce company uses Amazon DynamoDB to handle the transactions on its web portal. Traffic patterns throughout the year are usually stable; however, a large event is planned. The company knows that traffic will increase by up to 10 times the normal load over the 3-day event. When sale prices are published during the event, traffic will spike rapidly.

How should a Database Specialist ensure DynamoDB can handle the increased traffic?

Options:

A.

Ensure the table is always provisioned to meet peak needs

B.

Allow burst capacity to handle the additional load

C.

Set an AWS Application Auto Scaling policy for the table to handle the increase in traffic

D.

Preprovision additional capacity for the known peaks and then reduce the capacity after the event

Questions 80

A company has an application that uses an Amazon DynamoDB table as its data store. During normal business days, the throughput requirements from the application are uniform and consist of 5 standard write calls per second to the DynamoDB table. Each write call has 2 KB of data.

For 1 hour each day, the company runs an additional automated job on the DynamoDB table that makes 20 write requests per second. No other application writes to the DynamoDB table. The DynamoDB table does not have to meet any additional capacity requirements.

How should a database specialist configure the DynamoDB table's capacity to meet these requirements MOST cost-effectively?

Options:

A.

Use DynamoDB provisioned capacity with 5 WCUs and auto scaling.

B.

Use DynamoDB provisioned capacity with 5 WCUs and a write-through cache that DynamoDB Accelerator (DAX) provides.

C.

Use DynamoDB provisioned capacity with 10 WCUs and auto scaling.

D.

Use DynamoDB provisioned capacity with 10 WCUs and no auto scaling.
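
For context, a quick capacity calculation, assuming standard WCU sizing (one WCU supports one standard write of up to 1 KB per second, with item size rounded up to the next whole KB):

# Each 2 KB standard write consumes 2 WCUs.
ITEM_SIZE_KB = 2

steady_state_wcus = 5 * ITEM_SIZE_KB   # 5 writes/sec -> 10 WCUs
job_hour_wcus = 20 * ITEM_SIZE_KB      # 20 writes/sec -> 40 WCUs during the job

print(f"steady state: {steady_state_wcus} WCUs, job hour: {job_hour_wcus} WCUs")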

Questions 81

A company's applications store data in Amazon Aurora MySQL DB clusters. The company has separate AWS accounts for its production, test, and development environments. To test new functionality in the test environment, the company's development team requires a copy of the production database four times a day.

Which solution meets this requirement with the MOST operational efficiency?

Options:

A.

Take a manual snapshot in the production account. Share the snapshot with the test account. Restore the database from the snapshot.

B.

Take a manual snapshot in the production account. Export the snapshot to Amazon S3. Copy the snapshot to an S3 bucket in the test account. Restore the database from the snapshot.

C.

Share the Aurora DB cluster with the test account. Create a snapshot of the production database in the test account. Restore the database from the snapshot.

D.

Share the Aurora DB cluster with the test account. Create a clone of the production database in the test account.

Questions 82

A business maintains a SQL Server database on-premises. Active Directory authentication is used to provide users access to the database. The organization transferred their database successfully to Amazon RDS for SQL Server. The organization, however, has reservations regarding user authentication in the AWS Cloud environment.

Which authentication solution should a database professional provide?

Options:

A.

Deploy Active Directory Federation Services (AD FS) on premises and configure it with an on-premises Active Directory. Set up delegation between the on-premises AD FS and AWS Security Token Service (AWS STS) to map user identities to a role using the AmazonRDSDirectoryServiceAccess managed IAM policy.

B.

Establish a forest trust between the on-premises Active Directory and AWS Directory Service for Microsoft Active Directory. Use AWS SSO to configure an Active Directory user delegated to access the databases in RDS for SQL Server.

C.

Use Active Directory Connector to redirect directory requests to the company's on-premises Active Directory without caching any information in the cloud. Use the RDS master user credentials to connect to the DB instance and configure SQL Server logins and users from the Active Directory users and groups.

D.

Establish a forest trust between the on-premises Active Directory and AWS Directory Service for Microsoft Active Directory. Ensure RDS for SQL Server is using mixed mode authentication. Use the RDS master user credentials to connect to the DB instance and configure SQL Server logins and users from the Active Directory users and groups.

Questions 83

A company is running a blogging platform. A security audit determines that the Amazon RDS DB instance that is used by the platform is not configured to encrypt the data at rest. The company must encrypt the DB instance within 30 days.

What should a database specialist do to meet this requirement with the LEAST amount of downtime?

Options:

A.

Create a read replica of the DB instance, and enable encryption. When the read replica is available, promote the read replica and update the endpoint that is used by the application. Delete the unencrypted DB instance.

B.

Take a snapshot of the DB instance. Make an encrypted copy of the snapshot. Restore the encrypted snapshot. When the new DB instance is available, update the endpoint that is used by the application. Delete the unencrypted DB instance.

C.

Create a new encrypted DB instance. Perform an initial data load, and set up logical replication between the two DB instances When the new DB instance is in sync with the source DB instance, update the endpoint that is used by the application. Delete the unencrypted DB instance.

D.

Convert the DB instance to an Amazon Aurora DB cluster, and enable encryption. When the DB cluster is available, update the endpoint that is used by the application to the cluster endpoint. Delete the unencrypted DB instance.
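
For context, the snapshot-copy approach in option B can be scripted with boto3. A minimal sketch with hypothetical identifiers; waiters are included because each step depends on the previous one finishing:

import boto3

rds = boto3.client("rds")

rds.create_db_snapshot(
    DBInstanceIdentifier="blog-db",
    DBSnapshotIdentifier="blog-db-plain",
)
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="blog-db-plain")

# Copying with a KMS key produces an encrypted snapshot.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="blog-db-plain",
    TargetDBSnapshotIdentifier="blog-db-encrypted",
    KmsKeyId="alias/aws/rds",
)
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="blog-db-encrypted")

# Restoring the encrypted snapshot yields an encrypted DB instance.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="blog-db-encrypted-v2",
    DBSnapshotIdentifier="blog-db-encrypted",
)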

Questions 84

A Database Specialist needs to define a database migration strategy to migrate an on-premises Oracle database to an Amazon Aurora MySQL DB cluster. The company requires near-zero downtime for the data migration. The solution must also be cost-effective.

Which approach should the Database Specialist take?

Options:

A.

Dump all the tables from the Oracle database into an Amazon S3 bucket using Oracle Data Pump (expdp). Run data transformations in AWS Glue. Load the data from the S3 bucket to the Aurora DB cluster.

B.

Order an AWS Snowball appliance and copy the Oracle backup to the Snowball appliance. Once the Snowball data is delivered to Amazon S3, create a new Aurora DB cluster. Enable the S3 integration to migrate the data directly from Amazon S3 to Amazon RDS.

C.

Use the AWS Schema Conversion Tool (AWS SCT) to help rewrite database objects to MySQL during the schema migration. Use AWS DMS to perform the full load and change data capture (CDC) tasks.

D.

Use AWS Server Migration Service (AWS SMS) to import the Oracle virtual machine image as an Amazon EC2 instance. Use the Oracle Logical Dump utility to migrate the Oracle data from Amazon EC2 to an Aurora DB cluster.

Questions 85

A company has applications running on Amazon EC2 instances in a private subnet with no internet connectivity. The company deployed a new application that uses Amazon DynamoDB, but the application cannot connect to the DynamoDB tables. A developer already checked that all permissions are set correctly.

What should a database specialist do to resolve this issue while minimizing access to external resources?

Options:

A.

Add a route to an internet gateway in the subnet’s route table.

B.

Add a route to a NAT gateway in the subnet’s route table.

C.

Assign a new security group to the EC2 instances with an outbound rule to ports 80 and 443.

D.

Create a VPC endpoint for DynamoDB and add a route to the endpoint in the subnet’s route table.

Questions 86

A company has migrated a single MySQL database to Amazon Aurora. The production data is hosted in a DB cluster in VPC_PROD, and 12 testing environments are hosted in VPC_TEST using the same AWS account. Testing results in minimal changes to the test data. The Development team wants each environment refreshed nightly so each test database contains fresh production data every day.

Which migration approach will be the fastest and most cost-effective to implement?

Options:

A.

Run the master in Amazon Aurora MySQL. Create 12 clones in VPC_TEST, and script the clones to be deleted and re-created nightly.

B.

Run the master in Amazon Aurora MySQL. Take a nightly snapshot, and restore it into 12 databases in VPC_TEST using Aurora Serverless.

C.

Run the master in Amazon Aurora MySQL. Create 12 Aurora Replicas in VPC_TEST, and script the replicas to be deleted and re-created nightly.

D.

Run the master in Amazon Aurora MySQL using Aurora Serverless. Create 12 clones in VPC_TEST, and script the clones to be deleted and re-created nightly.

Questions 87

A gaming company has recently acquired a successful iOS game, which is particularly popular during the holiday season. The company has decided to add a leaderboard to the game that uses Amazon DynamoDB. The application load is expected to ramp up over the holiday season.

Which solution will meet these requirements at the lowest cost?

Options:

A.

DynamoDB Streams

B.

DynamoDB with DynamoDB Accelerator

C.

DynamoDB with on-demand capacity mode

D.

DynamoDB with provisioned capacity mode with Auto Scaling

Questions 88

A database specialist needs to enable IAM authentication on an existing Amazon Aurora PostgreSQL DB cluster. The database specialist already has modified the DB cluster settings, has created IAM and database credentials, and has distributed the credentials to the appropriate users.

What should the database specialist do next to establish the credentials for the users to use to log in to the DB cluster?

Options:

A.

Add the users' IAM credentials to the Aurora cluster parameter group.

B.

Run the generate-db-auth-token command with the user names to generate a temporary password for the users.

C.

Add the users' IAM credentials to the default credential profile. Use the AWS Management Console to access the DB cluster.

D.

Use an AWS Security Token Service (AWS STS) token by sending the IAM access key and secret key as headers to the DB cluster API endpoint.
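
For context, generate-db-auth-token (option B) produces a short-lived token that is used in place of a password. A minimal boto3 sketch with a hypothetical cluster endpoint and user name:

import boto3

rds = boto3.client("rds")

# The token is valid for 15 minutes and is signed with the caller's IAM
# credentials; pass it as the password when opening the connection.
token = rds.generate_db_auth_token(
    DBHostname="mycluster.cluster-abc123xyz.us-east-1.rds.amazonaws.com",
    Port=5432,
    DBUsername="app_user",
)
print(token[:40], "...")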

Questions 89

A stock market analysis firm maintains two offices: one in the us-east-1 Region and another in the eu-west-2 Region. The business wants to build an AWS database solution capable of providing rapid and accurate updates.

The eu-west-2 office presents data in dashboards that run advanced analytical queries. Because the corporation will use these dashboards to make purchasing decisions, the dashboards must retrieve application data in less than one second.

Which solution satisfies these criteria and provides the MOST current dashboard data?

Options:

A.

Deploy an Amazon RDS DB instance in us-east-1 with a read replica instance in eu-west-2. Create an Amazon ElastiCache cluster in eu-west-2 to cache data from the read replica to generate the dashboards.

B.

Use an Amazon DynamoDB global table in us-east-1 with replication into eu-west-2. Use multi-active replication to ensure that updates are quickly propagated to eu-west-2.

C.

Use an Amazon Aurora global database. Deploy the primary DB cluster in us-east-1. Deploy the secondary DB cluster in eu-west-2. Configure the dashboard application to read from the secondary cluster.

D.

Deploy an Amazon RDS for MySQL DB instance in us-east-1 with a read replica instance in eu-west-2. Configure the dashboard application to read from the read replica.

Questions 90

A company has multiple applications serving data from a secure on-premises database. The company is migrating all applications and databases to the AWS Cloud. The IT Risk and Compliance department requires that auditing be enabled on all secure databases to capture all logins, logouts, failed logins, permission changes, and database schema changes. A Database Specialist has recommended Amazon Aurora MySQL as the migration target and plans to leverage the Advanced Auditing feature in Aurora.

Which events need to be specified in the Advanced Auditing configuration to satisfy the minimum auditing requirements? (Choose three.)

Options:

A.

CONNECT

B.

QUERY_DCL

C.

QUERY_DDL

D.

QUERY_DML

E.

TABLE

F.

QUERY
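
For context, Advanced Auditing event types are configured through the server_audit_events cluster parameter. A minimal boto3 sketch showing one plausible combination that covers connections, permission changes, and schema changes; the parameter group name is hypothetical:

import boto3

rds = boto3.client("rds")

rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="aurora-audit-params",
    Parameters=[
        {"ParameterName": "server_audit_logging",
         "ParameterValue": "1",
         "ApplyMethod": "immediate"},
        {"ParameterName": "server_audit_events",
         "ParameterValue": "CONNECT,QUERY_DCL,QUERY_DDL",
         "ApplyMethod": "immediate"},
    ],
)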

Questions 91

A Database Specialist is migrating a 2 TB Amazon RDS for Oracle DB instance to an RDS for PostgreSQL DB instance using AWS DMS. The source RDS Oracle DB instance is in a VPC in the us-east-1 Region. The target RDS for PostgreSQL DB instance is in a VPC in the us-west-2 Region.

Where should the AWS DMS replication instance be placed for the MOST optimal performance?

Options:

A.

In the same Region and VPC of the source DB instance

B.

In the same Region and VPC as the target DB instance

C.

In the same VPC and Availability Zone as the target DB instance

D.

In the same VPC and Availability Zone as the source DB instance

Questions 92

A healthcare company is running an application on Amazon EC2 in a public subnet and using Amazon DocumentDB (with MongoDB compatibility) as the storage layer. An audit reveals that the traffic between the application and Amazon DocumentDB is not encrypted and that the DocumentDB cluster is not encrypted at rest. A database specialist must correct these issues and ensure that the data in transit and the data at rest are encrypted.

Which actions should the database specialist take to meet these requirements? (Select TWO.)

Options:

A.

Download the SSH RSA public key for Amazon DocumentDB. Update the application configuration to use the instance endpoint instead of the cluster endpoint and run queries over SSH.

B.

Download the SSL .pem public key for Amazon DocumentDB. Add the key to the application package and make sure the application is using the key while connecting to the cluster.

C.

Create a snapshot of the unencrypted cluster. Restore the unencrypted snapshot as a new cluster with the --storage-encrypted parameter set to true. Update the application to point to the new cluster.

D.

Create an Amazon DocumentDB VPC endpoint to prevent the traffic from going to the Amazon DocumentDB public endpoint. Set a VPC endpoint policy to allow only the application instance's security group to connect.

E.

Activate encryption at rest using the modify-db-cluster command with the --storage-encrypted parameter set to true. Set the security group of the cluster to allow only the application instance's security group to connect.
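
For context, connecting to Amazon DocumentDB with TLS means supplying the Amazon-provided CA bundle to the driver. A minimal pymongo sketch; the endpoint, credentials, and local bundle file name are hypothetical:

import pymongo

# retryWrites is disabled because DocumentDB does not support retryable writes.
client = pymongo.MongoClient(
    "mongodb://appuser:secret@docdb.cluster-abc123.us-east-1.docdb.amazonaws.com:27017",
    tls=True,
    tlsCAFile="rds-ca-bundle.pem",  # CA bundle downloaded from AWS
    retryWrites=False,
)
print(client.admin.command("ping"))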

Questions 93

A business hosts a MySQL database for its ecommerce application on a single Amazon RDS DB instance. The application automatically saves purchases to the database, which results in high-volume writes. Employees routinely generate purchase reports from the database. The organization wants to boost database performance and minimize the downtime associated with patch upgrades.

Which technique will satisfy these criteria with the LEAST amount of operational overhead?

Options:

A.

Enable a Multi-AZ deployment of the RDS for MySQL DB instance, and enable Memcached in the MySQL option group.

B.

Enable a Multi-AZ deployment of the RDS for MySQL DB instance, and set up replication to a MySQL DB instance running on Amazon EC2.

C.

Enable a Multi-AZ deployment of the RDS for MySQL DB instance, and add a read replica.

D.

Add a read replica and promote it to an Amazon Aurora MySQL DB cluster master. Then enable Amazon Aurora Serverless.

Exam Code: DBS-C01
Exam Name: AWS Certified Database - Specialty
Last Update: Sep 22, 2023
Questions: 321
