An AI system is generating confident but incorrect outputs, commonly known as hallucinations. Which strategy would most likely reduce the occurrence of such hallucinations and improve the trustworthiness of the system?
In a Retrieval-Augmented Generation (RAG) system, which key step is crucial for ensuring that the generated response is contextually accurate and relevant to the user's question?
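The retrieval step this question points at can be sketched minimally. The toy bag-of-words similarity below stands in for a real dense-embedding model; the function and document names are illustrative, not from any particular RAG library.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; production RAG uses dense vector models.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, documents, k=1):
    # The key RAG step: rank stored documents by similarity to the question
    # and return the top-k as grounding context for the generator.
    q = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "The office is closed on public holidays.",
]
context = retrieve("How many days do I have to return a purchase?", docs)
```

Grounding the generator in the retrieved `context` (rather than the model's parametric memory alone) is what keeps the response contextually accurate.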
Fine-tuning an LLM on a single task involves adjusting model parameters to specialize in a particular domain. What is the primary challenge associated with single-task fine-tuning compared to multi-task fine-tuning?

In line with the US Executive Order on AI, a company's AI application has encountered a security vulnerability. What should be prioritized to align with the order's expectations?
When dealing with the risk of data leakage in LLMs, which of the following actions is most effective in mitigating this issue?
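One common mitigation behind this question is scrubbing sensitive data before it reaches the model or its logs. The sketch below is a hypothetical regex-based PII redactor, assuming simple US-style patterns; a production system would use a dedicated PII-detection service.

```python
import re

# Illustrative PII patterns; real deployments need broader, locale-aware coverage.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text):
    # Replace each matched PII span with a labeled placeholder before
    # the text is sent to an LLM or written to training/telemetry logs.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com or 555-867-5309."
safe_prompt = redact(prompt)
```

Redacting at the boundary means leaked model outputs or captured logs expose only placeholders, not the underlying personal data.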
What role does GenAI play in automating vulnerability scanning and remediation processes?
Microsoft's chatbot, Tay, was poisoned by malicious user interactions. What is the primary lesson learned from this case study?
According to NIST, which framework is commonly used to assess risks in generative AI systems?