1z0-1127-25 Oracle Cloud Infrastructure 2025 Generative AI Professional Questions and Answers

Question 4

What is the purpose of frequency penalties in language model outputs?

Options:

A.

To ensure that tokens that appear frequently are used more often

B.

To penalize tokens that have already appeared, based on the number of times they have been used

C.

To reward the tokens that have never appeared in the text

D.

To randomly penalize some tokens to increase the diversity of the text
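
For illustration, a minimal sketch (not part of the exam material) of how a frequency penalty might adjust next-token logits; the logits, counts, and penalty value are invented:

import numpy as np

# Hypothetical logits for a 4-token vocabulary.
logits = np.array([2.0, 1.5, 1.0, 0.5])

# How many times each token has already appeared in the output so far.
counts = np.array([3, 1, 0, 0])

frequency_penalty = 0.4

# The penalty scales with the number of prior occurrences, so token 0
# (seen 3 times) is pushed down harder than token 1 (seen once).
penalized = logits - frequency_penalty * counts
print(penalized)  # [0.8 1.1 1.  0.5]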

Question 5

Which statement is true about Fine-tuning and Parameter-Efficient Fine-Tuning (PEFT)?

Options:

A.

Fine-tuning requires training the entire model on new data, often leading to substantial computational costs, whereas PEFT involves updating only a small subset of parameters, minimizing computational requirements and data needs.

B.

PEFT requires replacing the entire model architecture with a new one designed specifically for the new task, making it significantly more data-intensive than Fine-tuning.

C.

Both Fine-tuning and PEFT require the model to be trained from scratch on new data, making them equally data and computationally intensive.

D.

Fine-tuning and PEFT do not involve model modification; they differ only in the type of data used for training, with Fine-tuning requiring labeled data and PEFT using unlabeled data.
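
As a hedged illustration of the contrast, the sketch below applies LoRA, one common PEFT method, via the Hugging Face peft library; the gpt2 checkpoint is only an example and is not tied to the question:

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load a base model; any causal LM checkpoint would do here.
model = AutoModelForCausalLM.from_pretrained("gpt2")

# LoRA freezes the base weights and adds small trainable adapter
# matrices, so only a tiny subset of parameters is updated.
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, config)

# Reports the small fraction of parameters that will actually train.
model.print_trainable_parameters()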

Question 6

Which statement accurately reflects the differences between Fine-tuning, Parameter-Efficient Fine-Tuning (PEFT), Soft Prompting, and continuous pretraining in terms of the number of parameters modified and the type of data used?

Options:

A.

Fine-tuning and continuous pretraining both modify all parameters and use labeled, task-specific data.

B.

Parameter Efficient Fine-Tuning and Soft Prompting modify all parameters of the model using unlabeled data.

C.

Fine-tuning modifies all parameters using labeled, task-specific data, whereas Parameter Efficient Fine-Tuning updates a few, new parameters also with labeled, task-specific data.

D.

Soft Prompting and continuous pretraining are both methods that require no modification to the original parameters of the model.

Question 7

What is the main advantage of using few-shot prompting to customize a Large Language Model (LLM)?

Options:

A.

It allows the LLM to access a larger dataset.

B.

It eliminates the need for any training or computational resources.

C.

It provides examples in the prompt to guide the LLM to better performance with no training cost.

D.

It significantly reduces the latency for each model request.
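
A minimal sketch of a few-shot prompt (the task and reviews are invented); the worked examples travel inside the prompt itself, so no weights are updated and no training cost is incurred:

# Few-shot prompting: examples are embedded directly in the prompt text.
few_shot_prompt = """Classify the sentiment of each review.

Review: The battery lasts all day. -> Positive
Review: The screen cracked in a week. -> Negative
Review: Shipping was fast and the fit is perfect. -> Positive

Review: The zipper broke on the first use. ->"""

# Sent to an LLM, this prompt guides it to continue the pattern
# (here, with "Negative") without any fine-tuning.
print(few_shot_prompt)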

Question 8

Given the following code:

chain = prompt | llm

Which statement is true about LangChain Expression Language (LCEL)?

Options:

A.

LCEL is a programming language used to write documentation for LangChain.

B.

LCEL is a legacy method for creating chains in LangChain.

C.

LCEL is a declarative and preferred way to compose chains together.

D.

LCEL is an older Python library for building Large Language Models.
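
For context, a minimal runnable LCEL sketch, assuming a chat model such as ChatOpenAI is configured (any LangChain chat model works); the pipe operator is the declarative composition the question refers to:

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI  # assumes an OpenAI API key is set

prompt = ChatPromptTemplate.from_template("Tell me one fact about {topic}.")
llm = ChatOpenAI()

# LCEL: the | operator composes runnables into a chain declaratively.
chain = prompt | llm

result = chain.invoke({"topic": "vector databases"})
print(result.content)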

Question 9

Which is a distinctive feature of GPUs in Dedicated AI Clusters used for generative AI tasks?

Options:

A.

GPUs are shared with other customers to maximize resource utilization.

B.

The GPUs allocated for a customer’s generative AI tasks are isolated from other GPUs.

C.

GPUs are used exclusively for storing large datasets, not for computation.

D.

Each customer's GPUs are connected via a public Internet network for ease of access.

Question 10

Given the following prompts used with a Large Language Model, classify each as employing the Chain-of-Thought, Least-to-Most, or Step-Back prompting technique:

1. "Calculate the total number of wheels needed for 3 cars. Cars have 4 wheels each. Then, use the total number of wheels to determine how many sets of wheels we can buy with $200 if one set (4 wheels) costs $50."

2. "Solve a complex math problem by first identifying the formula needed, and then solve a simpler version of the problem before tackling the full question."

3. "To understand the impact of greenhouse gases on climate change, let’s start by defining what greenhouse gases are. Next, we’ll explore how they trap heat in the Earth’s atmosphere."

Options:

A.

1: Step-Back, 2: Chain-of-Thought, 3: Least-to-Most

B.

1: Least-to-Most, 2: Chain-of-Thought, 3: Step-Back

C.

1: Chain-of-Thought, 2: Step-Back, 3: Least-to-Most

D.

1: Chain-of-Thought, 2: Least-to-Most, 3: Step-Back

Question 11

Which technique involves prompting the Large Language Model (LLM) to emit intermediate reasoning steps as part of its response?

Options:

A.

Step-Back Prompting

B.

Chain-of-Thought

C.

Least-to-Most Prompting

D.

In-Context Learning
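
A minimal sketch of a Chain-of-Thought prompt (the math problem is invented); the trailing instruction asks the model to emit its intermediate reasoning before the final answer:

# Chain-of-Thought prompting: explicitly request step-by-step reasoning.
cot_prompt = (
    "Q: A shop sells pens in packs of 12. If I buy 4 packs and give away "
    "15 pens, how many do I have left?\n"
    "A: Let's think step by step."
)

# The model is expected to reply with intermediate steps such as
# "4 packs x 12 pens = 48 pens; 48 - 15 = 33 pens left."
print(cot_prompt)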

Question 12

What distinguishes the Cohere Embed v3 model from its predecessor in the OCI Generative AI service?

Options:

A.

Support for tokenizing longer sentences

B.

Improved retrievals for Retrieval Augmented Generation (RAG) systems

C.

Emphasis on syntactic clustering of word embeddings

D.

Capacity to translate text in over 100 languages

Question 13

What does a cosine distance of 0 indicate about the relationship between two embeddings?

Options:

A.

They are completely dissimilar

B.

They are unrelated

C.

They are similar in direction

D.

They have the same magnitude
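
A small worked example, assuming the usual definition cosine distance = 1 - cosine similarity; vectors pointing in the same direction give a distance of 0, while orthogonal vectors give 1:

import numpy as np

def cosine_distance(a, b):
    # Cosine distance = 1 - cosine similarity.
    return 1 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

a = np.array([1.0, 2.0])
b = np.array([2.0, 4.0])   # same direction as a, larger magnitude
c = np.array([-2.0, 1.0])  # orthogonal to a

print(cosine_distance(a, b))  # ~0.0 -> similar in direction
print(cosine_distance(a, c))  # 1.0  -> orthogonal, unrelated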

Question 14

Which LangChain component is responsible for generating the linguistic output in a chatbot system?

Options:

A.

Document Loaders

B.

Vector Stores

C.

LangChain Application

D.

LLMs

Question 15

When should you use the T-Few fine-tuning method for training a model?

Options:

A.

For complicated semantic understanding improvement

B.

For models that require their own hosting dedicated AI cluster

C.

For datasets with a few thousand samples or less

D.

For datasets with hundreds of thousands to millions of samples

Question 16

You create a fine-tuning dedicated AI cluster to customize a foundational model with your custom training data. How many unit hours are required for fine-tuning if the cluster is active for 10 days?

Options:

A.

480 unit hours

B.

240 unit hours

C.

744 unit hours

D.

20 unit hours
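
A worked version of the arithmetic, under the assumption that an OCI fine-tuning dedicated AI cluster consumes two units while active (the assumption behind the commonly cited answer of 480):

# Unit hours = cluster units x hours the cluster is active.
units = 2               # assumed size of a fine-tuning cluster
hours_active = 10 * 24  # 10 days

print(units * hours_active)  # 480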

Question 17

What is prompt engineering in the context of Large Language Models (LLMs)?

Options:

A.

Iteratively refining the ask to elicit a desired response

B.

Adding more layers to the neural network

C.

Adjusting the hyperparameters of the model

D.

Training the model on a large dataset

Question 18

Which is NOT a category of pretrained foundational models available in the OCI Generative AI service?

Options:

A.

Summarization models

B.

Generation models

C.

Translation models

D.

Embedding models

Question 19

What does the RAG Sequence model do in the context of generating a response?

Options:

A.

It retrieves a single relevant document for the entire input query and generates a response based on that alone.

B.

For each input query, it retrieves a set of relevant documents and considers them together to generate a cohesive response.

C.

It retrieves relevant documents only for the initial part of the query and ignores the rest.

D.

It modifies the input query before retrieving relevant documents to ensure a diverse response.

Question 20

How does a presence penalty function in language model generation?

Options:

A.

It penalizes all tokens equally, regardless of how often they have appeared.

B.

It penalizes only tokens that have never appeared in the text before.

C.

It applies a penalty only if the token has appeared more than twice.

D.

It penalizes a token each time it appears after the first occurrence.
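
A minimal sketch of one common (OpenAI-style) presence penalty, reusing the invented logits from the frequency-penalty sketch above; unlike a frequency penalty, the deduction is flat once a token has appeared at all:

import numpy as np

logits = np.array([2.0, 1.5, 1.0, 0.5])
counts = np.array([3, 1, 0, 0])  # prior occurrences of each token

presence_penalty = 0.4

# Presence penalty is binary: the same fixed penalty applies to any
# token seen at least once, no matter how many times it appeared.
penalized = logits - presence_penalty * (counts > 0)
print(penalized)  # [1.6 1.1 1.  0.5]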

Question 21

Which is a distinguishing feature of "Parameter-Efficient Fine-Tuning (PEFT)" as opposed to classic "Fine-tuning" in Large Language Model training?

Options:

A.

PEFT involves only a few or new parameters and uses labeled, task-specific data.

B.

PEFT modifies all parameters and is typically used when no training data exists.

C.

PEFT does not modify any parameters but uses soft prompting with unlabeled data.

D.

PEFT modifies all parameters and uses unlabeled, task-agnostic data.

Question 22

What is the purpose of Retrieval Augmented Generation (RAG) in text generation?

Options:

A.

To generate text based only on the model's internal knowledge without external data

B.

To generate text using extra information obtained from an external data source

C.

To store text in an external database without using it for generation

D.

To retrieve text from an external source and present it without any modifications
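
A toy end-to-end sketch of the RAG idea; the corpus, embeddings, and query are all invented, and a real system would use an embedding model plus a vector store:

import numpy as np

# Tiny corpus with made-up 2-dimensional embeddings.
docs = {
    "Our returns window is 30 days.": np.array([0.9, 0.1]),
    "We ship worldwide within 5 days.": np.array([0.1, 0.9]),
}

query_embedding = np.array([0.8, 0.2])  # embedding of the user question

def cos_sim(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Retrieval step: find the document closest to the query.
best_doc = max(docs, key=lambda d: cos_sim(docs[d], query_embedding))

# Generation step: the retrieved text is injected into the prompt so the
# LLM answers from external data, not just its internal knowledge.
prompt = f"Context: {best_doc}\n\nQuestion: What is the returns policy?"
print(prompt)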

Question 23

What do embeddings in Large Language Models (LLMs) represent?

Options:

A.

The color and size of the font in textual data

B.

The frequency of each word or pixel in the data

C.

The semantic content of data in high-dimensional vectors

D.

The grammatical structure of sentences in the data

Question 24

Which component of Retrieval-Augmented Generation (RAG) evaluates and prioritizes the information retrieved by the retrieval system?

Options:

A.

Retriever

B.

Encoder-Decoder

C.

Generator

D.

Ranker
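
A toy sketch of the Ranker's role (the candidates and scores are invented): it orders the retriever's output by relevance before the generator sees it:

# Hypothetical candidates from the retriever, with relevance scores a
# ranker model might assign.
candidates = [
    ("Doc on shipping times", 0.41),
    ("Doc on the returns policy", 0.93),
    ("Doc on gift wrapping", 0.12),
]

# Ranker step: prioritize the retrieved information by score.
for doc, score in sorted(candidates, key=lambda pair: pair[1], reverse=True):
    print(f"{score:.2f}  {doc}")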

Question 25

How does the structure of vector databases differ from traditional relational databases?

Options:

A.

It stores data in a linear or tabular format.

B.

It is not optimized for high-dimensional spaces.

C.

It uses simple row-based data storage.

D.

It is based on distances and similarities in a vector space.

Question 26

Which is NOT a built-in memory type in LangChain?

Options:

A.

ConversationImageMemory

B.

ConversationBufferMemory

C.

ConversationSummaryMemory

D.

ConversationTokenBufferMemory
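
For reference, a minimal sketch of one of the genuine LangChain memory types, ConversationBufferMemory (the conversation is invented); ConversationImageMemory is not a LangChain class:

from langchain.memory import ConversationBufferMemory

# ConversationBufferMemory keeps the raw chat history verbatim.
memory = ConversationBufferMemory()
memory.save_context({"input": "Hi, I'm Ana."}, {"output": "Hello Ana!"})

# The stored history can then be injected into the next prompt.
print(memory.load_memory_variables({}))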

Exam Code: 1z0-1127-25
Exam Name: Oracle Cloud Infrastructure 2025 Generative AI Professional
Last Update: Jun 13, 2025
Questions: 88
