
Professional-Cloud-Developer Google Certified Professional - Cloud Developer Questions and Answers

Question 4

You have an application controlled by a managed instance group. When you deploy a new version of the application, costs should be minimized and the number of instances should not increase. You want to ensure that, when each new instance is created, the deployment only continues if the new instance is healthy. What should you do?

Options:

A.

Perform a rolling-action with maxSurge set to 1, maxUnavailable set to 0.

B.

Perform a rolling-action with maxSurge set to 0, maxUnavailable set to 1

C.

Perform a rolling-action with maxHealthy set to 1, maxUnhealthy set to 0.

D.

Perform a rolling-action with maxHealthy set to 0, maxUnhealthy set to 1.
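
A sketch of how the maxSurge / maxUnavailable flags from options A and B map onto a managed instance group rolling update. The group name, template name, and zone are placeholders, and the flag values shown are illustrative, not an answer key; the command is built as a string so the snippet runs without gcloud access.

```shell
# Build the rolling-update command for a managed instance group (MIG).
# With --max-surge=0 the group size never exceeds its target, and
# --max-unavailable=1 lets one instance be replaced at a time.
CMD="gcloud compute instance-groups managed rolling-action start-update my-mig \
  --version=template=my-new-template \
  --max-surge=0 --max-unavailable=1 \
  --zone=us-central1-a"
echo "$CMD"
```

The rolling update proceeds instance by instance, and each replacement must pass the group's health check before the update continues.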

Question 5

Your application is running in multiple Google Kubernetes Engine clusters. It is managed by a Deployment in each cluster. The Deployment has created multiple replicas of your Pod in each cluster. You want to view the logs sent to stdout for all of the replicas in your Deployment in all clusters. Which command should you use?

Options:

A.

kubectl logs [PARAM]

B.

gcloud logging read [PARAM]

C.

kubectl exec -it [PARAM] journalctl

D.

gcloud compute ssh [PARAM] --command="sudo journalctl"
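
A sketch contrasting the two command families the options refer to. The app label and log filter are assumptions; the commands are built as strings rather than run against a live cluster.

```shell
# Per-cluster: stdout from all replicas matching a label selector,
# one cluster (kubectl context) at a time.
K8S_CMD='kubectl logs -l app=my-app --all-containers'
# Centralized: container stdout from every cluster that ships logs
# to Cloud Logging, queried in one place.
GCLOUD_CMD='gcloud logging read "resource.type=k8s_container AND resource.labels.container_name=my-app"'
echo "$K8S_CMD"
echo "$GCLOUD_CMD"
```

kubectl logs is scoped to the cluster of the current context, while Cloud Logging aggregates container logs across clusters in the project.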

Question 6

You are developing an HTTP API hosted on a Compute Engine virtual machine instance that needs to be invoked by multiple clients within the same Virtual Private Cloud (VPC). You want clients to be able to get the IP address of the service.

What should you do?

Options:

A.

Reserve a static external IP address and assign it to an HTTP(S) load balancing service's forwarding rule. Clients should use this IP address to connect to the service.

B.

Reserve a static external IP address and assign it to an HTTP(S) load balancing service's forwarding rule. Then, define an A record in Cloud DNS. Clients should use the name of the A record to connect to the service.

C.

Ensure that clients use Compute Engine internal DNS by connecting to the instance name with the URL https://[INSTANCE_NAME].[ZONE].c.[PROJECT_ID].internal/.

D.

Ensure that clients use Compute Engine internal DNS by connecting to the instance name with the URL https://[API_NAME]/[API_VERSION]/.

Question 7

You are deploying a microservices application to Google Kubernetes Engine (GKE). The application will receive daily updates. You expect to deploy a large number of distinct containers that will run on the Linux operating system (OS). You want to be alerted to any known OS vulnerabilities in the new containers. You want to follow Google-recommended best practices. What should you do?

Options:

A.

Use the gcloud CLI to call Container Analysis to scan new container images. Review the vulnerability results before each deployment.

B.

Enable Container Analysis, and upload new container images to Artifact Registry. Review the vulnerability results before each deployment.

C.

Enable Container Analysis, and upload new container images to Artifact Registry. Review the critical vulnerability results before each deployment.

D.

Use the Container Analysis REST API to call Container Analysis to scan new container images. Review the vulnerability results before each deployment.

Question 8

Your company is planning to migrate their on-premises Hadoop environment to the cloud. Increasing storage cost and maintenance of data stored in HDFS is a major concern for your company. You also want to make minimal changes to existing data analytics jobs and existing architecture. How should you proceed with the migration?

Options:

A.

Migrate your data stored in Hadoop to BigQuery. Change your jobs to source their information from BigQuery instead of the on-premises Hadoop environment.

B.

Create Compute Engine instances with HDD instead of SSD to save costs. Then perform a full migration of your existing environment into the new one in Compute Engine instances.

C.

Create a Cloud Dataproc cluster on Google Cloud Platform, and then migrate your Hadoop environment to the new Cloud Dataproc cluster. Move your HDFS data into larger HDD disks to save on storage costs.

D.

Create a Cloud Dataproc cluster on Google Cloud Platform, and then migrate your Hadoop code objects to the new cluster. Move your data to Cloud Storage and leverage the Cloud Dataproc connector to run jobs on that data.

Question 9

Your team develops services that run on Google Cloud. You want to process messages sent to a Pub/Sub topic, and then store them. Each message must be processed exactly once to avoid duplication of data and any data conflicts. You need to use the cheapest and most simple solution. What should you do?

Options:

A.

Process the messages with a Dataproc job, and write the output to storage.

B.

Process the messages with a Dataflow streaming pipeline using Apache Beam's PubSubIO package, and write the output to storage.

C.

Process the messages with a Cloud Function, and write the results to a BigQuery location where you can run a job to deduplicate the data.

D.

Retrieve the messages with a Dataflow streaming pipeline, store them in Cloud Bigtable, and use another Dataflow streaming pipeline to deduplicate messages.

Question 10

You are developing a marquee stateless web application that will run on Google Cloud. The rate of the incoming user traffic is expected to be unpredictable, with no traffic on some days and large spikes on other days. You need the application to automatically scale up and down, and you need to minimize the cost associated with running the application. What should you do?

Options:

A.

Build the application in Python with Firestore as the database. Deploy the application to Cloud Run.

B.

Build the application in C# with Firestore as the database. Deploy the application to App Engine flexible environment.

C.

Build the application in Python with CloudSQL as the database. Deploy the application to App Engine standard environment.

D.

Build the application in Python with Firestore as the database. Deploy the application to a Compute Engine managed instance group with autoscaling.

Question 11

You have an ecommerce application hosted in Google Kubernetes Engine (GKE) that receives external requests and forwards them to third-party APIs external to Google Cloud. The third-party APIs are responsible for credit card processing, shipping, and inventory management using the process shown in the diagram.

Your customers are reporting that the ecommerce application is running slowly at unpredictable times. The application doesn't report any metrics. You need to determine the cause of the inconsistent performance. What should you do?

[Diagram: third-party API call flow (image not included)]

Options:

A.

Install the Ops Agent inside your container and configure it to gather application metrics.

B.

Install the OpenTelemetry library for your respective language, and instrument your application.

C.

Modify your application to read and forward the X-Cloud-Trace-Context header when it calls the downstream services.

D.

Enable Managed Service for Prometheus on the GKE cluster to gather application metrics.

Question 12

You are developing an application using different microservices that should remain internal to the cluster. You want to be able to configure each microservice with a specific number of replicas. You also want to be able to address a specific microservice from any other microservice in a uniform way, regardless of the number of replicas the microservice scales to. You need to implement this solution on Google Kubernetes Engine. What should you do?

Options:

A.

Deploy each microservice as a Deployment. Expose the Deployment in the cluster using a Service, and use the Service DNS name to address it from other microservices within the cluster.

B.

Deploy each microservice as a Deployment. Expose the Deployment in the cluster using an Ingress, and use the Ingress IP address to address the Deployment from other microservices within the cluster.

C.

Deploy each microservice as a Pod. Expose the Pod in the cluster using a Service, and use the Service DNS name to address the microservice from other microservices within the cluster.

D.

Deploy each microservice as a Pod. Expose the Pod in the cluster using an Ingress, and use the Ingress IP address name to address the Pod from other microservices within the cluster.
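
A minimal sketch of the Deployment-plus-Service pattern from option A: the Service gives the microservice one stable DNS name no matter how many replicas back it. The "orders" name, namespace, and ports are placeholder values.

```shell
# Write a ClusterIP Service manifest; the selector matches the
# Pod template labels of the corresponding Deployment.
cat > service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: orders
  namespace: default
spec:
  selector:
    app: orders
  ports:
  - port: 80
    targetPort: 8080
EOF
# Other microservices in the cluster would address it as:
#   http://orders.default.svc.cluster.local/
echo "wrote service.yaml"
```
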

Question 13

You are using Cloud Build to build and test application source code stored in Cloud Source Repositories. The build process requires a build tool not available in the Cloud Build environment.

What should you do?

Options:

A.

Download the binary from the internet during the build process.

B.

Build a custom cloud builder image and reference the image in your build steps.

C.

Include the binary in your Cloud Source Repositories repository and reference it in your build scripts.

D.

Ask to have the binary added to the Cloud Build environment by filing a feature request against the Cloud Build public Issue Tracker.

Question 14

You are developing a JPEG image-resizing API hosted on Google Kubernetes Engine (GKE). Callers of the service will exist within the same GKE cluster. You want clients to be able to get the IP address of the service.

What should you do?

Options:

A.

Define a GKE Service. Clients should use the name of the A record in Cloud DNS to find the service's cluster IP address.

B.

Define a GKE Service. Clients should use the service name in the URL to connect to the service.

C.

Define a GKE Endpoint. Clients should get the endpoint name from the appropriate environment variable in the client container.

D.

Define a GKE Endpoint. Clients should get the endpoint name from Cloud DNS.

Question 15

Your team develops stateless services that run on Google Kubernetes Engine (GKE). You need to deploy a new service that will only be accessed by other services running in the GKE cluster. The service will need to scale as quickly as possible to respond to changing load. What should you do?

Options:

A.

Use a Vertical Pod Autoscaler to scale the containers, and expose them via a ClusterIP Service.

B.

Use a Vertical Pod Autoscaler to scale the containers, and expose them via a NodePort Service.

C.

Use a Horizontal Pod Autoscaler to scale the containers, and expose them via a ClusterIP Service.

D.

Use a Horizontal Pod Autoscaler to scale the containers, and expose them via a NodePort Service.
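
A sketch of the HorizontalPodAutoscaler side of these options, scaling a Deployment on CPU load. The names, replica bounds, and CPU target are placeholder values, not prescribed by the question.

```shell
# Write an HPA manifest that scales the Deployment between 2 and 20
# replicas, targeting 70% average CPU utilization.
cat > hpa.yaml <<'EOF'
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-service
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
EOF
echo "wrote hpa.yaml"
```

A ClusterIP Service (the default Service type) would then expose the replicas to other workloads inside the cluster only.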

Question 16

You are developing a flower ordering application. Currently you have three microservices:

• Order Service (receives the orders).

• Order Fulfillment Service (processes the orders).

• Notification Service (notifies the customer when the order is filled).

You need to determine how the services will communicate with each other. You want incoming orders to be processed quickly and you need to collect order information for fulfillment. You also want to make sure orders are not lost between your services and are able to communicate asynchronously. How should the requests be processed?

Options:

A.

[Architecture diagram (image not included)]

B.

[Architecture diagram (image not included)]

C.

[Architecture diagram (image not included)]

D.

[Architecture diagram (image not included)]

Question 17

Your development team has been asked to refactor an existing monolithic application into a set of composable microservices. Which design aspects should you implement for the new application? (Choose two.)

Options:

A.

Develop the microservice code in the same programming language used by the microservice caller.

B.

Create an API contract agreement between the microservice implementation and microservice caller.

C.

Require asynchronous communications between all microservice implementations and microservice callers.

D.

Ensure that sufficient instances of the microservice are running to accommodate the performance requirements.

E.

Implement a versioning scheme to permit future changes that could be incompatible with the current interface.

Question 18

You are planning to add unit tests to your application. You need to be able to assert that published Pub/Sub messages are processed by your subscriber in order. You want the unit tests to be cost-effective and reliable. What should you do?

Options:

A.

Implement a mocking framework.

B.

Create a topic and subscription for each tester.

C.

Add a filter by tester to the subscription.

D.

Use the Pub/Sub emulator.
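
A sketch of how the Pub/Sub emulator is started for local tests; the project ID is a placeholder, and the commands are echoed rather than executed so the snippet runs without gcloud installed.

```shell
# Start the local emulator, then point client libraries at it via the
# PUBSUB_EMULATOR_HOST environment variable that env-init exports.
EMULATOR_START='gcloud beta emulators pubsub start --project=test-project'
EMULATOR_ENV='$(gcloud beta emulators pubsub env-init)'  # exports PUBSUB_EMULATOR_HOST
echo "$EMULATOR_START"
echo "$EMULATOR_ENV"
```

With the environment variable set, the same publisher and subscriber code runs against the emulator at no cost and without network flakiness.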

Question 19

You want to view the memory usage of your application deployed on Compute Engine. What should you do?

Options:

A.

Install the Stackdriver Client Library.

B.

Install the Stackdriver Monitoring Agent.

C.

Use the Stackdriver Metrics Explorer.

D.

Use the Google Cloud Platform Console.

Question 20

You are developing a new application. You want the application to be triggered only when a given file is updated in your Cloud Storage bucket. Your trigger might change, so your process must support different types of triggers. You want the configuration to be simple so that multiple team members can update the triggers in the future. What should you do?

Options:

A.

Create an Eventarc trigger that monitors your Cloud Storage bucket for a specific filename, and set the target as Cloud Run.

B.

Configure Cloud Storage events to be sent to Pub/Sub, and use Pub/Sub events to trigger a Cloud Build job that executes your application.

C.

Configure a Firebase function that executes your application and is triggered when an object is updated in Cloud Storage.

D.

Configure a Cloud Function that executes your application and is triggered when an object is updated in Cloud Storage.
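
A sketch of the Eventarc approach from option A: a declarative trigger routing Cloud Storage object events to a Cloud Run service. All names are placeholders; note that the built-in event type shown fires on object finalize (create or overwrite), which covers "a file is updated".

```shell
# Build the trigger-creation command as a string (not executed here).
TRIGGER_CMD='gcloud eventarc triggers create storage-trigger \
  --location=us-central1 \
  --destination-run-service=my-service \
  --destination-run-region=us-central1 \
  --event-filters="type=google.cloud.storage.object.v1.finalized" \
  --event-filters="bucket=my-bucket" \
  --service-account=trigger-sa@my-project.iam.gserviceaccount.com'
echo "$TRIGGER_CMD"
```

Changing the trigger type later means editing the --event-filters values rather than rewriting application code, which is what makes the configuration easy for multiple team members to maintain.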

Question 21

You recently deployed a Go application on Google Kubernetes Engine (GKE). The operations team has noticed that the application's CPU usage is high even when there is low production traffic. The operations team has asked you to optimize your application's CPU resource consumption. You want to determine which Go functions consume the largest amount of CPU. What should you do?

Options:

A.

Deploy a Fluent Bit daemonset on the GKE cluster to log data in Cloud Logging. Analyze the logs to get insights into your application code’s performance.

B.

Create a custom dashboard in Cloud Monitoring to evaluate the CPU performance metrics of your application.

C.

Connect to your GKE nodes using SSH. Run the top command on the shell to extract the CPU utilization of your application.

D.

Modify your Go application to capture profiling data. Analyze the CPU metrics of your application in flame graphs in Profiler.

Question 22

Your application is deployed in a Google Kubernetes Engine (GKE) cluster. You want to expose this application publicly behind a Cloud Load Balancing HTTP(S) load balancer. What should you do?

Options:

A.

Configure a GKE Ingress resource.

B.

Configure a GKE Service resource.

C.

Configure a GKE Ingress resource with type: LoadBalancer.

D.

Configure a GKE Service resource with type: LoadBalancer.

Question 23

You are creating a Google Kubernetes Engine (GKE) cluster and run this command:

[Command (image not included)]

The command fails with the error:

[Error message (image not included)]

You want to resolve the issue. What should you do?

Options:

A.

Request additional GKE quota in the GCP Console.

B.

Request additional Compute Engine quota in the GCP Console.

C.

Open a support case to request additional GKE quota.

D.

Decouple services in the cluster, and rewrite new clusters to function with fewer cores.

Question 24

Your team is developing a Cloud Function triggered by Cloud Storage Events. You want to accelerate testing and development of your Cloud Function while following Google-recommended best practices. What should you do?

Options:

A.

Install the Functions Framework library, and configure the Cloud Function on localhost. Make a copy of the function, and make edits to the new version. Test the new version using curl.

B.

Make a copy of the Cloud Function, and rewrite the code to be HTTP-triggered. Edit and test the new version by triggering the HTTP endpoint. Send mock requests to the new function to evaluate the functionality.

C.

Make a copy of the Cloud Function in the Google Cloud console. Use the Cloud console's in-line editor to make source code changes to the new function. Modify your web application to call the new function, and test the new version in production.

D.

Create a new Cloud Function that is triggered when Cloud Audit Logs detects the cloudfunctions.functions.sourceCodeSet operation in the original Cloud Function. Send mock requests to the new function to evaluate the functionality.

Question 25

You work for a financial services company that has a container-first approach. Your team develops microservices applications. You have a Cloud Build pipeline that creates a container image, runs regression tests, and publishes the image to Artifact Registry. You need to ensure that only containers that have passed the regression tests are deployed to Google Kubernetes Engine (GKE) clusters. You have already enabled Binary Authorization on the GKE clusters. What should you do next?

Options:

A.

Deploy Voucher Server and Voucher Client Components. After a container image has passed the regression tests, run Voucher Client as a step in the Cloud Build pipeline.

B.

Set the Pod Security Standard level to Restricted for the relevant namespaces. Digitally sign the container images that have passed the regression tests as a step in the Cloud Build pipeline.

C.

Create an attestor and a policy. Create an attestation for the container images that have passed the regression tests as a step in the Cloud Build pipeline.

D.

Create an attestor and a policy. Run a vulnerability scan to create an attestation for the container image as a step in the Cloud Build pipeline.

Question 26

You plan to deploy a new application revision with a Deployment resource to Google Kubernetes Engine (GKE) in production. The container might not work correctly. You want to minimize risk in case there are issues after deploying the revision. You want to follow Google-recommended best practices. What should you do?

Options:

A.

Perform a rolling update with a PodDisruptionBudget of 80%.

B.

Perform a rolling update with a HorizontalPodAutoscaler scale-down policy value of 0.

C.

Convert the Deployment to a StatefulSet, and perform a rolling update with a PodDisruptionBudget of 80%.

D.

Convert the Deployment to a StatefulSet, and perform a rolling update with a HorizontalPodAutoscaler scale-down policy value of 0.

Question 27

You are developing an application that reads credit card data from a Pub/Sub subscription. You have written code and completed unit testing. You need to test the Pub/Sub integration before deploying to Google Cloud. What should you do?

Options:

A.

Create a service to publish messages, and deploy the Pub/Sub emulator. Generate random content in the publishing service, and publish to the emulator.

B.

Create a service to publish messages to your application. Collect the messages from Pub/Sub in production, and replay them through the publishing service.

C.

Create a service to publish messages, and deploy the Pub/Sub emulator. Collect the messages from Pub/Sub in production, and publish them to the emulator.

D.

Create a service to publish messages, and deploy the Pub/Sub emulator. Publish a standard set of testing messages from the publishing service to the emulator.

Question 28

You recently developed an application. You need to call the Cloud Storage API from a Compute Engine instance that doesn’t have a public IP address. What should you do?

Options:

A.

Use Carrier Peering

B.

Use VPC Network Peering

C.

Use Shared VPC networks

D.

Use Private Google Access
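
A sketch of enabling Private Google Access on a subnet so that instances without external IP addresses can still reach Google APIs such as Cloud Storage. The subnet name and region are placeholders; the command is built as a string so the snippet runs without gcloud access.

```shell
# Enable Private Google Access on an existing subnet.
PGA_CMD='gcloud compute networks subnets update my-subnet \
  --region=us-central1 \
  --enable-private-ip-google-access'
echo "$PGA_CMD"
```
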

Question 29

You want to create “fully baked” or “golden” Compute Engine images for your application. You need to bootstrap your application to connect to the appropriate database according to the environment the application is running on (test, staging, production). What should you do?

Options:

A.

Embed the appropriate database connection string in the image. Create a different image for each environment.

B.

When creating the Compute Engine instance, add a tag with the name of the database to be connected. In your application, query the Compute Engine API to pull the tags for the current instance, and use the tag to construct the appropriate database connection string.

C.

When creating the Compute Engine instance, create a metadata item with a key of “DATABASE” and a value for the appropriate database connection string. In your application, read the “DATABASE” environment variable, and use the value to connect to the appropriate database.

D.

When creating the Compute Engine instance, create a metadata item with a key of “DATABASE” and a value for the appropriate database connection string. In your application, query the metadata server for the “DATABASE” value, and use the value to connect to the appropriate database.
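
A sketch of reading a custom metadata value from inside an instance, as option D describes. The DATABASE key comes from the question; the endpoint shown is the standard Compute Engine metadata server. The command is echoed rather than executed, since the metadata server only exists on a real instance.

```shell
# Query the instance's custom metadata for the DATABASE attribute.
# The Metadata-Flavor header is required by the metadata server.
METADATA_CMD='curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/attributes/DATABASE"'
echo "$METADATA_CMD"
```

Note the contrast with option C: per-instance metadata is served over this HTTP endpoint, not injected as an environment variable.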

Question 30

You recently developed an application that monitors a large number of stock prices. You need to configure Pub/Sub to receive a high volume of messages and update the current stock price in a single large in-memory database. The downstream service needs only the most up-to-date prices in the in-memory database to perform stock trading transactions. Each message contains three pieces of information:

• Stock symbol

• Stock price

• Timestamp for the update

How should you set up your Pub/Sub subscription?

Options:

A.

Create a pull subscription with both ordering and exactly-once delivery turned off

B.

Create a pull subscription with exactly-once delivery enabled

C.

Create a push subscription with exactly-once delivery enabled

D.

Create a push subscription with both ordering and exactly-once delivery turned off

Question 31

Your application is deployed in a Google Kubernetes Engine (GKE) cluster. When a new version of your application is released, your CI/CD tool updates the spec.template.spec.containers[0].image value to reference the Docker image of your new application version. When the Deployment object applies the change, you want to deploy at least 1 replica of the new version and maintain the previous replicas until the new replica is healthy.

Which change should you make to the GKE Deployment object shown below?

[Deployment manifest (image not included)]

Options:

A.

Set the Deployment strategy to RollingUpdate with maxSurge set to 0, maxUnavailable set to 1.

B.

Set the Deployment strategy to RollingUpdate with maxSurge set to 1, maxUnavailable set to 0.

C.

Set the Deployment strategy to Recreate with maxSurge set to 0, maxUnavailable set to 1.

D.

Set the Deployment strategy to Recreate with maxSurge set to 1, maxUnavailable set to 0.
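
A sketch of the strategy stanza these options refer to. With maxSurge: 1 and maxUnavailable: 0, the rollout creates one extra replica of the new version and removes old replicas only after the new one is healthy.

```shell
# Write the Deployment strategy fragment to a patch file.
cat > strategy-patch.yaml <<'EOF'
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
EOF
echo "wrote strategy-patch.yaml"
```

The Recreate strategy, by contrast, terminates all existing Pods before creating new ones and does not take maxSurge/maxUnavailable parameters.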

Question 32

You are a developer working on an internal application for payroll processing. You are building a component of the application that allows an employee to submit a timesheet, which then initiates several steps:

• An email is sent to the employee and manager, notifying them that the timesheet was submitted.

• A timesheet is sent to the vendor's API for payroll processing.

• A timesheet is sent to the data warehouse for headcount planning.

These steps are not dependent on each other and can be completed in any order. New steps are being considered and will be implemented by different development teams. Each development team will implement the error handling specific to their step. What should you do?

Options:

A.

Deploy a Cloud Function for each step that calls the corresponding downstream system to complete the required action.

B.

Create a Pub/Sub topic for each step. Create a subscription for each downstream development team to subscribe to their step's topic.

C.

Create a Pub/Sub topic for timesheet submissions. Create a subscription for each downstream development team to subscribe to the topic.

D.

Create a timesheet microservice deployed to Google Kubernetes Engine. The microservice calls each downstream step and waits for a successful response before calling the next step.

Question 33

You are load testing your server application. During the first 30 seconds, you observe that a previously inactive Cloud Storage bucket is now servicing 2000 write requests per second and 7500 read requests per second. Your application is now receiving intermittent 5xx and 429 HTTP responses from the Cloud Storage JSON API as the demand escalates. You want to decrease the failed responses from the Cloud Storage API.

What should you do?

Options:

A.

Distribute the uploads across a large number of individual storage buckets.

B.

Use the XML API instead of the JSON API for interfacing with Cloud Storage.

C.

Pass the HTTP response codes back to clients that are invoking the uploads from your application.

D.

Limit the upload rate from your application clients so that the dormant bucket's peak request rate is reached more gradually.

Question 34

You need to load-test a set of REST API endpoints that are deployed to Cloud Run. The API responds to HTTP POST requests. Your load tests must meet the following requirements:

• Load is initiated from multiple parallel threads.

• User traffic to the API originates from multiple source IP addresses.

• Load can be scaled up using additional test instances.

You want to follow Google-recommended best practices. How should you configure the load testing?

Options:

A.

Create an image that has cURL installed, and configure cURL to run a test plan. Deploy the image in a managed instance group, and run one instance of the image for each VM.

B.

Create an image that has cURL installed, and configure cURL to run a test plan. Deploy the image in an unmanaged instance group, and run one instance of the image for each VM.

C.

Deploy a distributed load testing framework on a private Google Kubernetes Engine cluster. Deploy additional Pods as needed to initiate more traffic and support the number of concurrent users.

D.

Download the container image of a distributed load testing framework on Cloud Shell. Sequentially start several instances of the container on Cloud Shell to increase the load on the API.

Question 35

Your team recently deployed an application on Google Kubernetes Engine (GKE). You are monitoring your application and want to be alerted when the average memory consumption of your containers is under 20% or above 80%. How should you configure the alerts?

Options:

A.

Create a Cloud Function that consumes the Monitoring API. Create a schedule to trigger the Cloud Function hourly and alert you if the average memory consumption is outside the defined range.

B.

In Cloud Monitoring, create an alerting policy to notify you if the average memory consumption is outside the defined range.

C.

Create a Cloud Function that runs on a schedule, executes kubectl top on all the workloads on the cluster, and sends an email alert if the average memory consumption is outside the defined range.

D.

Write a script that pulls the memory consumption of the instance at the OS level and sends an email alert if the average memory consumption is outside the defined range.

Question 36

You are deploying your application on a Compute Engine instance that communicates with Cloud SQL. You will use Cloud SQL Proxy to allow your application to communicate to the database using the service account associated with the application’s instance. You want to follow the Google-recommended best practice of providing minimum access for the role assigned to the service account. What should you do?

Options:

A.

Assign the Project Editor role.

B.

Assign the Project Owner role.

C.

Assign the Cloud SQL Client role.

D.

Assign the Cloud SQL Editor role.
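
A sketch of granting a narrowly scoped role to the instance's service account; the project and service account names are placeholders, and the command is built as a string so the snippet runs without gcloud access.

```shell
# Bind roles/cloudsql.client (connect-only access, used by the
# Cloud SQL Proxy) to the application's service account.
IAM_CMD='gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:app-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/cloudsql.client"'
echo "$IAM_CMD"
```
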

Question 37

You are running a web application on Google Kubernetes Engine that you inherited. You want to determine whether the application is using libraries with known vulnerabilities or is vulnerable to XSS attacks. Which service should you use?

Options:

A.

Google Cloud Armor

B.

Debugger

C.

Web Security Scanner

D.

Error Reporting

Question 38

You want to notify on-call engineers about a service degradation in production while minimizing development time.

What should you do?

Options:

A.

Use Cloud Function to monitor resources and raise alerts.

B.

Use Cloud Pub/Sub to monitor resources and raise alerts.

C.

Use Stackdriver Error Reporting to capture errors and raise alerts.

D.

Use Stackdriver Monitoring to monitor resources and raise alerts.

Question 39

For this question, refer to the HipLocal case study.

A recent security audit discovers that HipLocal’s database credentials for their Compute Engine-hosted MySQL databases are stored in plain text on persistent disks. HipLocal needs to reduce the risk of these credentials being stolen. What should they do?

Options:

A.

Create a service account and download its key. Use the key to authenticate to Cloud Key Management Service (KMS) to obtain the database credentials.

B.

Create a service account and download its key. Use the key to authenticate to Cloud Key Management Service (KMS) to obtain a key used to decrypt the database credentials.

C.

Create a service account and grant it the roles/iam.serviceAccountUser role. Impersonate this account and authenticate using the Cloud SQL Proxy.

D.

Grant the roles/secretmanager.secretAccessor role to the Compute Engine service account. Store and access the database credentials with the Secret Manager API.
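
A sketch of the Secret Manager workflow from option D: store the credentials once, then read them at runtime with the service account's secretAccessor role. The secret name and data file are placeholders; the commands are echoed so the snippet runs without gcloud.

```shell
# Create the secret from a local file, then read its latest version.
CREATE_CMD='gcloud secrets create db-credentials --data-file=creds.txt'
ACCESS_CMD='gcloud secrets versions access latest --secret=db-credentials'
echo "$CREATE_CMD"
echo "$ACCESS_CMD"
```

This keeps credentials off persistent disks entirely; access is controlled by IAM and audited, rather than by filesystem permissions.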

Question 40

For this question, refer to the HipLocal case study.

How should HipLocal redesign their architecture to ensure that the application scales to support a large increase in users?

Options:

A.

Use Google Kubernetes Engine (GKE) to run the application as a microservice. Run the MySQL database on a dedicated GKE node.

B.

Use multiple Compute Engine instances to run MySQL to store state information. Use a Google Cloud-managed load balancer to distribute the load between instances. Use managed instance groups for scaling.

C.

Use Memorystore to store session information and CloudSQL to store state information. Use a Google Cloud-managed load balancer to distribute the load between instances. Use managed instance groups for scaling.

D.

Use a Cloud Storage bucket to serve the application as a static website, and use another Cloud Storage bucket to store user state information.

Question 41

Which service should HipLocal use to enable access to internal apps?

Options:

A.

Cloud VPN

B.

Cloud Armor

C.

Virtual Private Cloud

D.

Cloud Identity-Aware Proxy

Question 42

For this question, refer to the HipLocal case study.

How should HipLocal increase their API development speed while continuing to provide the QA team with a stable testing environment that meets feature requirements?

Options:

A.

Include unit tests in their code, and prevent deployments to QA until all tests have a passing status.

B.

Include performance tests in their code, and prevent deployments to QA until all tests have a passing status.

C.

Create health checks for the QA environment, and redeploy the APIs at a later time if the environment is unhealthy.

D.

Redeploy the APIs to App Engine using Traffic Splitting. Do not move QA traffic to the new versions if errors are found.

Question 43

For this question, refer to the HipLocal case study.

HipLocal is expanding into new locations. They must capture additional data each time the application is launched in a new European country. This is causing delays in the development process due to constant schema changes and a lack of environments for conducting testing on the application changes. How should they resolve the issue while meeting the business requirements?

Options:

A.

Create new Cloud SQL instances in Europe and North America for testing and deployment. Provide developers with local MySQL instances to conduct testing on the application changes.

B.

Migrate data to Bigtable. Instruct the development teams to use the Cloud SDK to emulate a local Bigtable development environment.

C.

Move from Cloud SQL to MySQL hosted on Compute Engine. Replicate hosts across regions in the Americas and Europe. Provide developers with local MySQL instances to conduct testing on the application changes.

D.

Migrate data to Firestore in Native mode and set up instan

Questions 44

HipLocal wants to improve the resilience of their MySQL deployment, while also meeting their business and technical requirements.

Which configuration should they choose?

Options:

A.

Use the current single instance MySQL on Compute Engine and several read-only MySQL servers on Compute Engine.

B.

Use the current single instance MySQL on Compute Engine, and replicate the data to Cloud SQL in an external master configuration.

C.

Replace the current single instance MySQL instance with Cloud SQL, and configure high availability.

D.

Replace the current single instance MySQL instance with Cloud SQL, and Google provides redundancy without further configuration.
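
For context on the options above: Cloud SQL high availability is opt-in, configured at instance creation (or later) with a single flag. A sketch, with placeholder instance name, region, and tier:

```shell
# Sketch: create a MySQL Cloud SQL instance with regional (HA) availability.
# "hiplocal-db", the region, and the tier are placeholder values.
gcloud sql instances create hiplocal-db \
  --database-version=MYSQL_8_0 \
  --region=us-central1 \
  --tier=db-n1-standard-2 \
  --availability-type=REGIONAL   # standby in a second zone, automatic failover
```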

Questions 45

In order to meet their business requirements, how should HipLocal store their application state?

Options:

A.

Use local SSDs to store state.

B.

Put a memcache layer in front of MySQL.

C.

Move the state storage to Cloud Spanner.

D.

Replace the MySQL instance with Cloud SQL.

Questions 46

HipLocal’s data science team wants to analyze user reviews.

How should they prepare the data?

Options:

A.

Use the Cloud Data Loss Prevention API for redaction of the review dataset.

B.

Use the Cloud Data Loss Prevention API for de-identification of the review dataset.

C.

Use the Cloud Natural Language Processing API for redaction of the review dataset.

D.

Use the Cloud Natural Language Processing API for de-identification of the review dataset.
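
To make the DLP de-identification option concrete: the DLP API's `content:deidentify` method takes an inspect config (what to find) and a de-identify config (how to transform it). An illustrative REST sketch; `PROJECT_ID`, the sample text, and the infoType are placeholders:

```shell
# Sketch: de-identify review text with the DLP API over REST.
# Requires gcloud credentials and a real project; PROJECT_ID is a placeholder.
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://dlp.googleapis.com/v2/projects/PROJECT_ID/content:deidentify" \
  -d '{
    "item": {"value": "Great food! Call me at 415-555-0100."},
    "inspectConfig": {"infoTypes": [{"name": "PHONE_NUMBER"}]},
    "deidentifyConfig": {
      "infoTypeTransformations": {
        "transformations": [
          {"primitiveTransformation": {"replaceWithInfoTypeConfig": {}}}
        ]
      }
    }
  }'
```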

Questions 47

For this question, refer to the HipLocal case study.

Which Google Cloud product addresses HipLocal’s business requirements for service level indicators and objectives?

Options:

A.

Cloud Profiler

B.

Cloud Monitoring

C.

Cloud Trace

D.

Cloud Logging

Questions 48

HipLocal's APIs are showing occasional failures, but they cannot find a pattern. They want to collect some metrics to help them troubleshoot.

What should they do?

Options:

A.

Take frequent snapshots of all of the VMs.

B.

Install the Stackdriver Logging agent on the VMs.

C.

Install the Stackdriver Monitoring agent on the VMs.

D.

Use Stackdriver Trace to look for performance bottlenecks.
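
For reference on the Monitoring agent option: the legacy Stackdriver Monitoring agent could be installed on a Linux VM with Google's repo script (newer projects would use the unified Ops Agent instead). An illustrative sketch:

```shell
# Sketch: install the legacy Stackdriver Monitoring agent on a Linux VM.
curl -sSO https://dl.google.com/cloudagents/add-monitoring-agent-repo.sh
sudo bash add-monitoring-agent-repo.sh --also-install
```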

Questions 49

For this question, refer to the HipLocal case study.

HipLocal wants to reduce the latency of their services for users in global locations. They have created read replicas of their database in locations where their users reside and configured their service to read traffic using those replicas. How should they further reduce latency for all database interactions with the least amount of effort?

Options:

A.

Migrate the database to Bigtable and use it to serve all global user traffic.

B.

Migrate the database to Cloud Spanner and use it to serve all global user traffic.

C.

Migrate the database to Firestore in Datastore mode and use it to serve all global user traffic.

D.

Migrate the services to Google Kubernetes Engine and use a load balancer service to better scale the application.

Questions 50

Which database should HipLocal use for storing user activity?

Options:

A.

BigQuery

B.

Cloud SQL

C.

Cloud Spanner

D.

Cloud Datastore

Questions 51

Which service should HipLocal use for their public APIs?

Options:

A.

Cloud Armor

B.

Cloud Functions

C.

Cloud Endpoints

D.

Shielded Virtual Machines

Questions 52

In order for HipLocal to store application state and meet their stated business requirements, which database service should they migrate to?

Options:

A.

Cloud Spanner

B.

Cloud Datastore

C.

Cloud Memorystore as a cache

D.

Separate Cloud SQL clusters for each region

Questions 53

HipLocal wants to reduce the number of on-call engineers and eliminate manual scaling.

Which two services should they choose? (Choose two.)

Options:

A.

Use Google App Engine services.

B.

Use serverless Google Cloud Functions.

C.

Use Knative to build and deploy serverless applications.

D.

Use Google Kubernetes Engine for automated deployments.

E.

Use a large Google Compute Engine cluster for deployments.

Questions 54

HipLocal's .NET-based auth service fails under intermittent load.

What should they do?

Options:

A.

Use App Engine for autoscaling.

B.

Use Cloud Functions for autoscaling.

C.

Use a Compute Engine cluster for the service.

D.

Use a dedicated Compute Engine virtual machine instance for the service.

Questions 55

HipLocal is configuring their access controls.

Which firewall configuration should they implement?

Options:

A.

Block all traffic on port 443.

B.

Allow all traffic into the network.

C.

Allow traffic on port 443 for a specific tag.

D.

Allow all traffic on port 443 into the network.
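
To illustrate what a tag-scoped allow rule (one of the options above) looks like in practice, here is a sketch; rule name, network tag, and source range are placeholder values:

```shell
# Sketch: allow HTTPS ingress only to instances carrying a specific network tag.
gcloud compute firewall-rules create allow-https-frontend \
  --network=default \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:443 \
  --target-tags=https-frontend \
  --source-ranges=0.0.0.0/0
```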

Questions 56

HipLocal has connected their Hadoop infrastructure to GCP using Cloud Interconnect in order to query data stored on persistent disks.

Which IP strategy should they use?

Options:

A.

Create manual subnets.

B.

Create an auto mode subnet.

C.

Create multiple peered VPCs.

D.

Provision a single instance for NAT.

Questions 57

For this question, refer to the HipLocal case study.

HipLocal's application uses Cloud Client Libraries to interact with Google Cloud. HipLocal needs to configure authentication and authorization in the Cloud Client Libraries to implement least privileged access for the application. What should they do?

Options:

A.

Create an API key. Use the API key to interact with Google Cloud.

B.

Use the default compute service account to interact with Google Cloud.

C.

Create a service account for the application. Export and deploy the private key for the application. Use the service account to interact with Google Cloud.

D.

Create a service account for the application and for each Google Cloud API used by the application. Export and deploy the private keys used by the application. Use the service account with one Google Cloud API to interact with Google Cloud.
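
As background for the service-account options above, the least-privilege pattern is a dedicated service account per application with narrowly scoped roles. A sketch; the account name, `PROJECT_ID`, and the role are placeholders for whatever the application actually needs:

```shell
# Sketch: create a dedicated service account and grant it one narrow role.
gcloud iam service-accounts create hiplocal-app \
  --display-name="HipLocal application"

gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:hiplocal-app@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"
```

On Compute Engine or GKE, attaching this account to the workload avoids exporting long-lived private keys at all.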

Exam Name: Google Certified Professional - Cloud Developer
Last Update: Mar 29, 2024
Questions: 254