https://ai.googleblog.com/2021/12/training-machine-learning-models-more.html
Training Machine Learning Models More Efficiently with Dataset Distillation
Wednesday, December 15, 2021
Posted by Timothy Nguyen, Research Engineer, and Jaehoon Lee, Senior Research Scientist, Google Research
“For a machine learning (ML) …”
What I Read: Do Wide and Deep Networks Learn the Same Things?
https://ai.googleblog.com/2021/05/do-wide-and-deep-networks-learn-same.html
Do Wide and Deep Networks Learn the Same Things?
Tuesday, May 4, 2021
Posted by Thao Nguyen, AI Resident, Google Research
What I Read: Branch Specialization
https://distill.pub/2020/circuits/branch-specialization/
Branch Specialization
Chelsea Voss, Gabriel Goh, Nick Cammarata, Michael Petrov, Ludwig Schubert, Chris Olah
April 5, 2021
DOI 10.23915/distill.00024.008
“Branch specialization occurs when neural network layers are split up into branches. The neurons and circuits tend to …”