https://statmodeling.stat.columbia.edu/2024/05/21/what-to-make-of-implicit-biases-in-llm-output What to make of implicit biases in LLM output? Jessica Hullman, 5/21/24 12:27 PM. “…when you prompt a number of large language models to do IAT-inspired tasks and make associated decisions…”
What I Read: Chain-of-Thought Reasoning
https://www.quantamagazine.org/how-chain-of-thought-reasoning-helps-neural-networks-compute-20240321 How Chain-of-Thought Reasoning Helps Neural Networks Compute, Ben Brubaker, 3/21/24 11:15 AM. “Large language models do better at solving problems when they show their work. Researchers are beginning to understand why.”
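
A minimal sketch (not from the article) of what “showing their work” means in practice: the same question posed directly and with a step-by-step instruction. The question, prompt wording, and the `generate` callable are illustrative assumptions standing in for any text-generation backend.

```python
# Sketch of chain-of-thought prompting vs. a direct prompt.
# `generate` is a hypothetical hook for whatever model backend is used.
from typing import Callable

QUESTION = "A train travels 60 miles in 1.5 hours. What is its average speed?"

def direct_prompt(question: str) -> str:
    # Ask for the answer only; the model does not show its work.
    return f"{question}\nAnswer with just the final number."

def chain_of_thought_prompt(question: str) -> str:
    # Ask the model to write out intermediate steps before answering,
    # the pattern the article argues helps transformers compute.
    return f"{question}\nLet's think step by step, then state the final answer."

def compare(generate: Callable[[str], str]) -> None:
    print("Direct:", generate(direct_prompt(QUESTION)))
    print("Chain-of-thought:", generate(chain_of_thought_prompt(QUESTION)))
```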
What I Read: Artificial, Biological Intelligence
https://medium.com/@begus.gasper/artificial-and-biological-intelligence-humans-animals-and-machines-142bc3c4b304 Artificial and Biological Intelligence: Humans, Animals, and Machines, Gasper Begus, Sep 19, 2023. “…a highly promising direction in AI research is to use artificial intelligence to better understand biological intelligence…”
What I Read: Chatbots Understand Text
https://www.quantamagazine.org/new-theory-suggests-chatbots-can-understand-text-20240122/ New Theory Suggests Chatbots Can Understand Text, Anil Ananthaswamy, 1/22/24. “Far from being ‘stochastic parrots,’ the biggest large language models seem to learn enough skills to understand the words they’re processing.”