https://adamkarvonen.github.io/machine_learning/2024/06/11/sae-intuitions.html An Intuitive Explanation of Sparse Autoencoders for LLM Interpretability, by Adam Karvonen, Jun 11, 2024: "Sparse Autoencoders (SAEs) have recently become popular for interpretability of machine learning models…"
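For a concrete picture before clicking through: below is a minimal, illustrative sparse autoencoder in PyTorch, assuming the common ReLU-plus-L1 formulation used in the interpretability literature. The dimensions, sparsity coefficient, and random "activations" are stand-ins for illustration, not taken from the post.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal SAE: project activations into a wider feature space,
    keep features non-negative, and reconstruct the original input."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        features = torch.relu(self.encoder(x))   # mostly-zero (sparse) features
        return self.decoder(features), features

# Toy usage with random stand-ins for LLM residual-stream activations
sae = SparseAutoencoder(d_model=512, d_hidden=2048)   # 4x expansion, illustrative
acts = torch.randn(8, 512)
recon, feats = sae(acts)
l1_coeff = 1e-3                                       # illustrative sparsity weight
loss = ((recon - acts) ** 2).mean() + l1_coeff * feats.abs().mean()
```

The L1 term pushes most feature activations to zero, which is what makes the learned features easier to interpret individually; the post walks through why that matters.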
What I Read: LLMs, School Math
https://towardsdatascience.com/understanding-llms-from-scratch-using-middle-school-math-e602d27ec876?gi=551c5bfd7f21 Understanding LLMs from Scratch Using Middle School Math, by Rohit Patel, Oct 19, 2024: "In this article, we talk about how Large Language Models (LLMs) work, from scratch — assuming only that…"
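In the spirit of the article's premise that LLM internals reduce to multiplying and adding numbers, here is a single neuron spelled out with plain arithmetic; all values are made up for illustration.

```python
# One neuron, using only multiplication and addition (numbers are made up)
inputs = [0.5, -1.2, 3.0]        # activations coming in from the previous layer
weights = [0.8, 0.1, 0.4]        # learned weights
bias = 0.2

weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias  # multiply, then add
output = max(0.0, weighted_sum)  # simple rule (ReLU): negative results become zero
print(output)                    # roughly 1.68
```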
What I Read: Transformers Inference Optimization
https://astralord.github.io/posts/transformer-inference-optimization-toolset Transformers Inference Optimization Toolset, by Aleksandr Samarin, Oct 1, 2024: "Large Language Models are pushing the boundaries of artificial intelligence, but their immense size poses significant computational challenges. As these models grow…"
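One widely used item in any transformer inference-optimization toolset is key-value (KV) caching; whether the post frames it exactly this way, the sketch below shows the idea with random tensors standing in for real projections: keep the keys and values of already-generated tokens around instead of recomputing them at every decoding step.

```python
import torch

def attend(query, keys, values):
    # Scaled dot-product attention of the new token's query over all cached positions
    scores = query @ keys.transpose(-2, -1) / keys.shape[-1] ** 0.5
    return torch.softmax(scores, dim=-1) @ values

d = 64
k_cache, v_cache = [], []              # per-token keys/values kept between steps

for step in range(5):                  # stand-in autoregressive decoding loop
    q = torch.randn(1, d)              # query projection for the newest token only
    k_cache.append(torch.randn(1, d))  # in a real model: projections of the new token
    v_cache.append(torch.randn(1, d))
    out = attend(q, torch.cat(k_cache), torch.cat(v_cache))
    # Each step touches only the new token plus the cache, instead of
    # re-running the key/value projections for the entire prefix.
```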
What I Read: LLMs, 2024
https://simonwillison.net/2024/Dec/31/llms-in-2024 Things we learned about LLMs in 2024, by Simon Willison, Dec 31, 2024: "A lot has happened in the world of Large Language Models over the course of 2024."