What I Read: Transformers Inference Optimization
https://astralord.github.io/posts/transformer-inference-optimization-toolset Transformers Inference Optimization Toolset, Aleksandr Samarin, Oct 1, 2024 “Large Language Models are pushing the boundaries of artificial intelligence, but their immense size poses significant computational challenges. As these models grow, …”
What I Read: Viola Jones’ Algorithm
https://medium.com/@aaronward6210/facial-detection-understanding-viola-jones-algorithm-116d1a9db218 Facial Detection — Understanding Viola Jones’ Algorithm, Aaron Ward, Jan 24, 2020 “There are many approaches to implement facial detection…”
What I Read: embedding models
https://unstructured.io/blog/understanding-embedding-models-make-an-informed-choice-for-your-rag Understanding embedding models: make an informed choice for your RAG, Maria Khalusova, Aug 13, 2024 “How do you choose a suitable embedding model for your RAG application?”
What I Watch: How LLMs store facts
How might LLMs store facts | Chapter 7, Deep Learning, 3Blue1Brown, Aug 31, 2024 “Unpacking the multilayer perceptrons in a transformer, and how they may store facts”
What I Read: passively learned, causality
What can be passively learned about causality?, Simons Institute, Andrew Lampinen (Google DeepMind), Jun 25, 2024 “What could language models learn about causality and experimentation from their passive training?”