https://arxiv.org/abs/2305.18654 Faith and Fate: Limits of Transformers on Compositionality, by Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jiang, Bill Yuchen Lin, Peter West, Chandra Bhagavatula, Ronan Le Bras, Jena
What I Read: Multi-Modal Retrieval-Augmented Generation
https://blog.llamaindex.ai/evaluating-multi-modal-retrieval-augmented-generation-db3ca824d428 Evaluating Multi-Modal Retrieval-Augmented Generation, LlamaIndex, Nov 16: “A natural starting point is to consider how evaluation was done in traditional, text-only RAG and then ask ourselves how this ought to be …”
What I Read: Adversarial Attacks on LLMs
https://lilianweng.github.io/posts/2023-10-25-adv-attack-llm/ Adversarial Attacks on LLMs, Lilian Weng, October 25, 2023: “Adversarial attacks are inputs that trigger the model to output something undesired.”
What I Read: Nvidia, GPU gold rush
https://blog.johnluttig.com/p/nvidia-envy-understanding-the-gpu Nvidia Envy: understanding the GPU gold rush, John Luttig, Nov 10, 2023: “In 2023, thousands of companies and countries begged Nvidia to purchase more GPUs. Can the exponential demand endure?”
What I Read: Unify Batch and ML Systems
https://www.kdnuggets.com/2023/09/hopsworks-unify-batch-ml-systems-feature-training-inference-pipelines Unify Batch and ML Systems with Feature/Training/Inference Pipelines, by Jim Dowling, Co-Founder & CEO, Hopsworks, September 27, 2023: “This article introduces a unified architectural pattern for building both Batch and Real-Time …”