What I Read: Accelerating PyTorch Model Training
https://magazine.sebastianraschka.com/p/accelerating-pytorch-model-training Accelerating PyTorch Model Training Using Mixed-Precision and Fully Sharded Data Parallelism. Sebastian Raschka, PhD, June 26, 2023. “…how to scale PyTorch model training with minimal code changes. The focus here is on…”
What I Read: Artificial Intelligence Really Hard
https://thegradient.pub/why-transformative-artificial-intelligence-is-really-really-hard-to-achieve/ Why transformative artificial intelligence is really, really hard to achieve. Arjun Ramani and Zhengdong Wang, June 26, 2023. “1. The transformational potential of AI is constrained by its hardest problems…”
What I Read: Sparse Networks
https://www.quantamagazine.org/sparse-neural-networks-point-physicists-to-useful-data-20230608/ Sparse Networks Come to the Aid of Big Physics. Steve Nadis, June 8, 2023. “A novel type of neural network is helping physicists with the daunting challenge of data analysis.”
What I Read: Neural Networks Learn Language
https://www.quantamagazine.org/some-neural-networks-learn-language-like-humans-20230522/ Some Neural Networks Learn Language Like Humans. Steve Nadis, May 22, 2023. “Researchers uncover striking parallels in the ways that humans and machine learning models acquire language skills.”
What I Read: Computation, Artificial Intelligence
https://www.quantamagazine.org/a-new-approach-to-computation-reimagines-artificial-intelligence-20230413/ A New Approach to Computation Reimagines Artificial Intelligence. Anil Ananthaswamy, April 13, 2023. “By imbuing enormous vectors with semantic meaning, we can get machines to reason more abstractly — and efficiently…”