What I Read: Instruction Tuning
https://gaotianyu.xyz/blog/2023/11/30/instruction-tuning/ Teach Llamas to Talk: Recent Progress in Instruction Tuning | Tianyu Gao | 30 November 2023 “…open-ended instruction tuning… fine-tunes an LLM such that it can follow user instructions…. there have been numerous…”
What I Read: Multi-Modal Retrieval-Augmented Generation
https://blog.llamaindex.ai/evaluating-multi-modal-retrieval-augmented-generation-db3ca824d428?gi=45acebfc0a3a Evaluating Multi-Modal Retrieval-Augmented Generation | LlamaIndex | Nov 16 “A natural starting point is to consider how evaluation was done in traditional, text-only RAG and then ask ourselves how this ought to be…”
What I Read: Adversarial Attacks on LLMs
https://lilianweng.github.io/posts/2023-10-25-adv-attack-llm/ Adversarial Attacks on LLMs | Lilian Weng | October 25, 2023 “Adversarial attacks are inputs that trigger the model to output something undesired.”
What I Read: Distributed Training, Finetuning
https://sumanthrh.com/post/distributed-and-efficient-finetuning/ Everything about Distributed Training and Efficient Finetuning | Sumanth R Hegde | Last updated on Oct 13, 2023 “practical guidelines and gotchas with multi-GPU and multi-node training”
What I Read: Retrieval Augmented Generation at scale
https://medium.com/@neum_ai/retrieval-augmented-generation-at-scale-building-a-distributed-system-for-synchronizing-and-eaa29162521 Retrieval Augmented Generation at Scale: Building a Distributed System for Synchronizing and Ingesting Billions of Text Embeddings | Neum AI | Sep 28 “…getting a Retrieval Augmented Generation (RAG) application started is…”