What I Read: Hallucinations of AI Science Models
https://www.johndcook.com/blog/2024/03/26/hallucinations-of-ai-science-models/ Hallucinations of AI Science Models, Wayne Joubert, 3/26/24 11:35 AM “…standard DNN methods applied even to a simple 1-dimensional problem can result in ‘glitches’: the DNN as a whole matches the…”
What I Read: Kalman Filter
https://www.youtube.com/watch?v=-DiZGpAh7T4 Kalman Filter – VISUALLY EXPLAINED!, Kapil Sachdeva “This tutorial explains the Kalman Filter from Bayesian Probabilistic View and as a special case of Bayesian Filtering.”
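Not from the video, but a minimal NumPy sketch of the scalar (1-D) Kalman filter may help anchor the idea: each step blends a prediction with a noisy measurement, weighted by the Kalman gain. The function name and noise parameters here are illustrative choices, not anything defined in the source.

```python
import numpy as np

def kalman_1d(zs, x0=0.0, p0=1.0, q=1e-4, r=0.01):
    """Scalar Kalman filter tracking a (near-)constant state.

    zs: noisy measurements; q: process-noise variance; r: measurement-noise
    variance. Returns the sequence of filtered state estimates.
    """
    x, p = x0, p0
    xs = []
    for z in zs:
        # Predict: the state model is "constant", so only uncertainty grows.
        p = p + q
        # Update: the Kalman gain k in [0, 1] blends prediction and measurement.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        xs.append(x)
    return np.array(xs)

# Example: recover a constant signal of 1.0 from noisy readings.
rng = np.random.default_rng(0)
zs = 1.0 + 0.1 * rng.standard_normal(200)
est = kalman_1d(zs)
```

In the Bayesian view the video takes, the predict step is the prior, the measurement supplies the likelihood, and the update computes the posterior mean and variance in closed form.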
What I Read: How Machines ‘Grok’ Data
https://www.quantamagazine.org/how-do-machines-grok-data-20240412 How Do Machines ‘Grok’ Data?, Anil Ananthaswamy, 4/12/24 “By apparently overtraining them, researchers have seen neural networks discover novel solutions to problems.”
What I Read: Attention, transformers
Attention in transformers, visually explained | Chapter 6, Deep Learning, 3Blue1Brown “Demystifying attention, the key mechanism inside transformers and LLMs.”
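As a rough companion to the video, here is a plain NumPy sketch of scaled dot-product attention, the core operation it demystifies: softmax(QKᵀ/√d)V. This is the standard formulation, not code from 3Blue1Brown.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # similarity of each query to each key
    weights = softmax(scores)      # each row is a probability distribution
    return weights @ V             # weighted mixture of value vectors

# Example: 4 queries attending over 6 key/value pairs in dimension 8.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal(s) for s in [(4, 8), (6, 8), (6, 8)])
out = attention(Q, K, V)
```

The √d scaling keeps the dot products from growing with dimension, which would otherwise saturate the softmax.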
What I Read: Linear Algebra, Random
https://youtu.be/6htbyY3rH1w?si=IXTrcoIReps_ftFq Is the Future of Linear Algebra.. Random?, Mutual Information “Randomization is arguably the most exciting and innovative idea to have hit linear algebra in a long time.”
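A flavor of what randomized linear algebra buys you, as a hedged NumPy sketch (not code from the video): a randomized SVD projects the matrix onto a small random subspace, then does an exact SVD of the much smaller projected matrix.

```python
import numpy as np

def randomized_svd(A, rank, n_oversample=10, seed=0):
    """Sketch of a randomized SVD via a random range finder.

    A random Gaussian test matrix captures the dominant range of A with
    high probability; oversampling improves the approximation.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, rank + n_oversample))
    Q, _ = np.linalg.qr(A @ Omega)   # orthonormal basis for range(A @ Omega)
    B = Q.T @ A                      # small (rank + p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub                       # lift back to the original space
    return U[:, :rank], s[:rank], Vt[:rank]

# Example: a 100 x 80 matrix of exact rank 5 is recovered almost exactly.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80))
U, s, Vt = randomized_svd(A, 5)
```

The payoff is that the expensive SVD runs on a (rank + p) × n matrix instead of m × n, at the cost of a controlled random approximation error.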