“What can be passively learned about causality?” Simons Institute talk by Andrew Lampinen (Google DeepMind), Jun 25, 2024. “What could language models learn about causality and experimentation from their passive training?”
What I Read: Contextual Bandit, LinUCB
https://truetheta.io/concepts/reinforcement-learning/lin-ucb “A Reliable Contextual Bandit Algorithm: LinUCB” by DJ Rich, August 6, 2024. “A user visits a news website. Which articles should they be shown?”
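Since the post centers on the LinUCB algorithm, here is a minimal sketch of the disjoint LinUCB variant for intuition. The arm count, feature dimension, alpha value, and the toy linear-reward simulation are illustrative assumptions, not taken from the article.

```python
# Minimal sketch of disjoint LinUCB: one ridge-regression model per arm
# plus an upper-confidence-bound exploration bonus. Illustrative only.
import numpy as np


class LinUCB:
    def __init__(self, n_arms: int, d: int, alpha: float = 1.0):
        self.alpha = alpha                              # exploration strength
        self.A = [np.eye(d) for _ in range(n_arms)]     # per-arm design matrices
        self.b = [np.zeros(d) for _ in range(n_arms)]   # per-arm reward vectors

    def select(self, x: np.ndarray) -> int:
        """Pick the arm with the highest upper confidence bound for context x."""
        scores = []
        for A_a, b_a in zip(self.A, self.b):
            A_inv = np.linalg.inv(A_a)
            theta = A_inv @ b_a                          # ridge estimate of arm weights
            bonus = self.alpha * np.sqrt(x @ A_inv @ x)  # uncertainty bonus
            scores.append(theta @ x + bonus)
        return int(np.argmax(scores))

    def update(self, arm: int, x: np.ndarray, reward: float) -> None:
        """Fold the observed (context, reward) pair into the chosen arm's model."""
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x


# Toy usage: random user-context features, each "article" (arm) has a
# hidden linear reward function (a hypothetical ground truth).
rng = np.random.default_rng(0)
d, n_arms = 5, 3
true_theta = rng.normal(size=(n_arms, d))
agent = LinUCB(n_arms, d, alpha=1.0)

for t in range(1000):
    x = rng.normal(size=d)
    arm = agent.select(x)
    reward = true_theta[arm] @ x + 0.1 * rng.normal()
    agent.update(arm, x, reward)
```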
What I Read: Summarization, LLMs
https://cameronrwolfe.substack.com/p/summarization-and-the-evolution-of “Summarization and the Evolution of LLMs” by Cameron R. Wolfe, Ph.D., Jun 03, 2024. “How research on abstractive summarization changed language models forever…”
What I Read: Will Scaling Solve Robotics?
https://nishanthjkumar.com/Will-Scaling-Solve-Robotics-Perspectives-from-CoRL-2023/ “Will Scaling Solve Robotics?: Perspectives from CoRL 2023” by Nishanth J. Kumar. “…is training a large neural network on a very large dataset a feasible way to solve robotics?”