https://transformer-circuits.pub/2022/toy_model/index.html Toy Models of Superposition. Nelson Elhage, Tristan Hume, Catherine Olsson, Nicholas Schiefer, Tom Henighan, Shauna Kravec, Zac Hatfield-Dodds, Robert Lasenby, Dawn Drain, Carol Chen, Roger Grosse, Sam McCandlish, Jared Kaplan, et al.
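The paper's core experiment is small: train an autoencoder that squeezes n sparse features through m < n hidden dimensions and reconstructs them as x' = ReLU(WᵀWx + b), then watch more than m features get stored "in superposition." Here is a minimal sketch of that setup, assuming PyTorch; the dimensions, sparsity, and importance schedule below are illustrative choices, not the paper's exact hyperparameters.

```python
import torch

torch.manual_seed(0)
n, m = 20, 5                         # 20 sparse features, 5 hidden dimensions
sparsity = 0.95                      # probability a given feature is inactive
importance = 0.9 ** torch.arange(n)  # earlier features matter more in the loss

W = (0.1 * torch.randn(m, n)).requires_grad_()
b = torch.zeros(n, requires_grad=True)
opt = torch.optim.Adam([W, b], lr=1e-3)

for step in range(10_000):
    # Synthetic data: feature values uniform in [0, 1], mostly zeroed out.
    x = torch.rand(1024, n) * (torch.rand(1024, n) > sparsity)
    x_hat = torch.relu((x @ W.T) @ W + b)  # x' = ReLU(W^T W x + b)
    loss = (importance * (x - x_hat) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Columns of W are the learned feature directions; with high sparsity,
# more than m of them end up with substantial norm (superposition).
print((W.norm(dim=0) > 0.5).sum().item(), "of", n, "features represented")
```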
What I Read: What’s Fair, What’s Hard
https://www.quantamagazine.org/the-question-of-whats-fair-illuminates-the-question-of-whats-hard-20240624 The Question of What’s Fair Illuminates the Question of What’s Hard. Lakshmi Chandrasekaran. June 24, 2024. “Computational complexity theorists have discovered a surprising new way to understand what makes certain problems hard.”
What I Read: AI, Light-Based Chips
https://www.quantamagazine.org/ai-needs-enormous-computing-power-could-light-based-chips-help-20240520 AI Needs Enormous Computing Power. Could Light-Based Chips Help? Amos Zeeberg. May 20, 2024. “Optical neural networks, which use photons instead of electrons, have advantages over traditional systems. They also face…”
What I Read: Linear Algebra, Random
https://youtu.be/6htbyY3rH1w?si=IXTrcoIReps_ftFq Is the Future of Linear Algebra.. Random? Mutual Information. “Randomization is arguably the most exciting and innovative idea to have hit linear algebra in a long time.”
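The "randomization" in question is the family of sketch-and-solve methods from randomized numerical linear algebra. As one concrete example (my sketch, not the video's code), here is the classic randomized SVD of Halko, Martinsson, and Tropp in NumPy; the oversampling and power-iteration counts are illustrative defaults.

```python
import numpy as np

def randomized_svd(A, k, oversample=10, power_iters=2, seed=0):
    """Approximate the top-k SVD of A via a random low-dimensional sketch."""
    rng = np.random.default_rng(seed)
    # Sketch the range of A with a random Gaussian test matrix.
    Omega = rng.standard_normal((A.shape[1], k + oversample))
    Y = A @ Omega
    # Power iterations sharpen the subspace when singular values decay slowly.
    for _ in range(power_iters):
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)    # orthonormal basis for the sketched range
    B = Q.T @ A               # reduce to a small (k + oversample) x n problem
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k]

# Relative error of the rank-k reconstruction on a random test matrix.
A = np.random.default_rng(1).standard_normal((2000, 500))
U, s, Vt = randomized_svd(A, k=20)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))
```

The payoff is that the expensive factorization runs on the small sketched matrix B rather than on A itself, which is why these methods scale to matrices where a full SVD is impractical.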
What I Read: Chain-of-Thought Reasoning
https://www.quantamagazine.org/how-chain-of-thought-reasoning-helps-neural-networks-compute-20240321 How Chain-of-Thought Reasoning Helps Neural Networks Compute. Ben Brubaker. March 21, 2024. “Large language models do better at solving problems when they show their work. Researchers are beginning to understand why.”
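To make "showing their work" concrete, here is a toy illustration of the two prompting styles the research contrasts (my example, not the article's). The intuition behind the expressivity results the article covers: each intermediate token the model generates is fed back in as input, buying extra serial computation that a direct answer never gets.

```python
question = "A train travels 60 miles in 1.5 hours. What is its average speed?"

# Direct prompting: the model must jump straight to the answer.
direct_prompt = question + "\nAnswer with the number only."

# Zero-shot chain-of-thought prompting: the cue elicits intermediate
# reasoning tokens (e.g. "speed = distance / time = 60 / 1.5 = 40"),
# each of which becomes input for the next generation step.
cot_prompt = question + "\nLet's think step by step."

print(direct_prompt)
print(cot_prompt)
```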