Which statement accurately reflects the differences between Fine-tuning and Parameter-Efficient Fine-Tuning (PEFT) in terms of the number of parameters modified and the type of data used?
What is the main advantage of using few-shot model prompting to customize a Large Language Model (LLM)?
Given the following code:
chain = prompt | llm
Which statement is true about LangChain Expression Language (LCEL)?
Which is a distinctive feature of GPUs in Dedicated AI Clusters used for generative AI tasks?
Given the following prompts used with a Large Language Model, classify each as employing the Chain-of-Thought, Least-to-Most, or Step-Back prompting technique:
Which technique involves prompting the Large Language Model (LLM) to emit intermediate reasoning steps as part of its response?
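To make the three techniques in the questions above concrete, here are hypothetical prompt strings, one per technique; the wording is invented purely for illustration.

```python
# Invented example prompts illustrating the three prompting techniques.

# Chain-of-Thought: ask the model to show intermediate reasoning steps.
chain_of_thought = (
    "Q: A store sells pens at $2 each. How much do 7 pens cost?\n"
    "A: Let's think step by step. Each pen costs $2, and 7 x $2 = $14, "
    "so the answer is $14."
)

# Least-to-Most: solve easier subproblems first, then build on them.
least_to_most = (
    "First solve the simpler subproblem: what is 7 x 2? "
    "Then use that result to answer: how much do 7 pens at $2 each cost?"
)

# Step-Back: abstract to a general principle before answering.
step_back = (
    "Before answering how much 7 pens cost, step back: what general "
    "formula gives the total cost of n items at price p?"
)
```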
What distinguishes the Cohere Embed v3 model from its predecessor in the OCI Generative AI service?
What does a cosine distance of 0 indicate about the relationship between two embeddings?
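As a quick worked example for the cosine-distance question: cosine distance is 1 minus cosine similarity, so a distance of 0 means the two embeddings point in the same direction (maximal similarity in orientation, regardless of magnitude).

```python
import math

def cosine_distance(a, b):
    # cosine distance = 1 - cosine similarity
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1 - dot / (norm_a * norm_b)

# Vectors pointing in the same direction (even with different
# magnitudes) have cosine distance ~0: same orientation/semantics.
print(cosine_distance([1, 2, 3], [2, 4, 6]))  # ≈ 0.0 (up to float error)

# Orthogonal vectors have cosine distance 1.
print(cosine_distance([1, 0], [0, 1]))  # → 1.0
```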
Which LangChain component is responsible for generating the linguistic output in a chatbot system?
You create a fine-tuning dedicated AI cluster to customize a foundational model with your custom training data. How many unit hours are required for fine-tuning if the cluster is active for 10 days?
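The unit-hour arithmetic for this question is simply cluster units times hours active. The sketch below assumes the fine-tuning cluster comprises 2 units, which matches the commonly cited OCI figure for fine-tuning dedicated AI clusters; verify the exact unit count against current OCI documentation.

```python
# Unit hours = number of cluster units x hours the cluster is active.
# ASSUMPTION: an OCI fine-tuning dedicated AI cluster uses 2 units;
# check the current OCI Generative AI docs for the exact figure.
units_per_cluster = 2
days_active = 10
hours_active = days_active * 24          # 240 hours
unit_hours = units_per_cluster * hours_active
print(unit_hours)  # → 480 under the 2-unit assumption
```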
Which is NOT a category of pretrained foundational models available in the OCI Generative AI service?
Which is a distinguishing feature of "Parameter-Efficient Fine-Tuning (PEFT)" as opposed to classic "Fine-tuning" in Large Language Model training?
What is the purpose of Retrieval Augmented Generation (RAG) in text generation?
Which component of Retrieval-Augmented Generation (RAG) evaluates and prioritizes the information retrieved by the retrieval system?
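The RAG questions above refer to a retrieve → rank → generate pipeline, where the ranker evaluates and prioritizes what the retriever returns. The following is a deliberately naive conceptual sketch; every function here is an invented stand-in, not a real library API.

```python
# Conceptual RAG pipeline sketch: Retriever fetches candidates, Ranker
# prioritizes them, Generator produces a grounded answer. All names and
# logic here are illustrative stand-ins.

def retrieve(query, corpus):
    # Naive keyword retrieval standing in for a vector search
    return [doc for doc in corpus
            if any(w in doc.lower() for w in query.lower().split())]

def rank(query, candidates):
    # The Ranker: order candidates by crude relevance (shared word count)
    q = set(query.lower().split())
    return sorted(candidates,
                  key=lambda d: len(q & set(d.lower().split())),
                  reverse=True)

def generate(query, context):
    # Stand-in for an LLM call that grounds its answer in the context
    return f"Answer to '{query}' using: {context[0]}"

corpus = [
    "OCI offers dedicated AI clusters for fine-tuning.",
    "RAG augments generation with retrieved documents.",
    "Cosine distance measures embedding similarity.",
]
query = "what is RAG generation"
docs = rank(query, retrieve(query, corpus))
print(generate(query, docs))
```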
How does the structure of vector databases differ from traditional relational databases?
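To make the vector-versus-relational contrast concrete: instead of exact-match queries over rows and columns, a vector database indexes high-dimensional embeddings and answers queries by nearest-neighbor similarity. The brute-force toy below (invented data, cosine similarity) is for illustration only; real vector databases use approximate indexes such as HNSW for scale.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Tiny "vector store": document id -> toy 3-d embedding
store = {
    "doc_cats": [0.9, 0.1, 0.0],
    "doc_dogs": [0.8, 0.2, 0.1],
    "doc_tax":  [0.0, 0.1, 0.95],
}

def nearest(query, k=1):
    # Rank all stored vectors by similarity to the query embedding
    ranked = sorted(store,
                    key=lambda i: cosine_similarity(store[i], query),
                    reverse=True)
    return ranked[:k]

print(nearest([0.85, 0.15, 0.05]))  # most similar document id(s)
```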