https://blog.llamaindex.ai/evaluating-multi-modal-retrieval-augmented-generation-db3ca824d428?gi=45acebfc0a3a Evaluating Multi-Modal Retrieval-Augmented Generation, LlamaIndex, Nov 16: “A natural starting point is to consider how evaluation was done in traditional, text-only RAG and then ask ourselves how this ought to be …”
What I Read: Adversarial Attacks on LLMs
https://lilianweng.github.io/posts/2023-10-25-adv-attack-llm/ Adversarial Attacks on LLMs, Lilian Weng, October 25, 2023: “Adversarial attacks are inputs that trigger the model to output something undesired.”