https://www.cs.princeton.edu/~smalladi/blog/2024/07/09/dpo-infinity The Hidden Infinity in Preference Learning, by Sadhika Malladi (July 9, 2024): “I demonstrate from first principles how offline preference learning algorithms (e.g., SimPO) can benefit from length normalization, especially when training…”
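A rough sketch of the length-normalization idea the post discusses (this is an illustration of the averaging form used in SimPO-style rewards, not the post's own derivation; the example sequences are made up):

```python
# Minimal sketch of length normalization for sequence scoring: average the
# token log-probabilities instead of summing them, so a response is not
# scored worse merely for being longer.
def length_normalized_logprob(token_logprobs):
    """Mean per-token log-probability of a sampled response."""
    return sum(token_logprobs) / len(token_logprobs)

short = [-0.5, -0.5]                 # summed log-prob: -1.0
long = [-0.5, -0.5, -0.5, -0.5]      # summed log-prob: -2.0
# Under summation the longer response looks worse; after normalization
# the two responses score identically.
assert length_normalized_logprob(short) == length_normalized_logprob(long)
```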
What I Read: Illustrated AlphaFold
https://elanapearl.github.io/blog/2024/the-illustrated-alphafold The Illustrated AlphaFold, by Elana Simon and Jake Silberg: “A visual walkthrough of the AlphaFold3 architecture…”
What I Read: decision analysis, significance testing
https://statmodeling.stat.columbia.edu/2024/07/10/a-misunderstanding-about-decision-analysis-and-significance-testing (Trying to) clear up a misunderstanding about decision analysis and significance testing, by Andrew Gelman (July 10, 2024): “…we’re just saying that screening based on statistical significance has lots of problems. P-values…”
What I Read: What’s Fair, What’s Hard
https://www.quantamagazine.org/the-question-of-whats-fair-illuminates-the-question-of-whats-hard-20240624 The Question of What’s Fair Illuminates the Question of What’s Hard, by Lakshmi Chandrasekaran (June 24, 2024): “Computational complexity theorists have discovered a surprising new way to understand what makes certain problems…”
What I Read: Detecting hallucinations, LLMs, semantic entropy
https://oatml.cs.ox.ac.uk/blog/2024/06/19/detecting_hallucinations_2024.html Detecting hallucinations in large language models using semantic entropy, by Sebastian Farquhar, Jannik Kossen, Lorenz Kuhn, and Yarin Gal (June 19, 2024): “We show how one can use uncertainty to detect confabulations.”
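The core computation behind semantic entropy can be sketched as follows (a simplified illustration: the meaning clusters, which the paper derives via bidirectional entailment between sampled answers, are assumed to be given here, and the example answers are invented):

```python
import math
from collections import Counter

# Sketch of semantic entropy: sample several answers to the same question,
# group them into meaning clusters (clustering assumed done upstream), and
# compute the entropy of the empirical distribution over clusters.
def semantic_entropy(cluster_ids):
    """Entropy over meaning clusters; high values suggest confabulation."""
    n = len(cluster_ids)
    counts = Counter(cluster_ids)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

consistent = ["Paris", "Paris", "Paris", "Paris"]  # one meaning cluster
scattered = ["Paris", "Lyon", "Nice", "Toulouse"]  # four distinct meanings
assert semantic_entropy(consistent) == 0.0
assert semantic_entropy(scattered) > semantic_entropy(consistent)
```

When the model's samples all land in one meaning cluster the entropy is zero (the model is consistent); answers scattered across many clusters yield high entropy, the signal the authors use to flag confabulations.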