What I Read: Neural Tangent Kernel
https://lilianweng.github.io/posts/2022-09-08-ntk/
Some Math behind Neural Tangent Kernel
Lilian Weng, September 8, 2022
“Neural tangent kernel… leads to great insights into why neural networks with enough width can consistently converge to a global…”
What I Read: AI, Limits, Language
https://www.noemamag.com/ai-and-the-limits-of-language/
AI and the Limits of Language
Jacob Browning and Yann LeCun, August 23, 2022
“An artificial intelligence system trained on words and sentences alone will never approximate human understanding.”
What I Read: Self-Taught AI, Brain
https://www.quantamagazine.org/self-taught-ai-shows-similarities-to-how-the-brain-works-20220811/
Self-Taught AI Shows Similarities to How the Brain Works
Anil Ananthaswamy, Contributing Writer, August 11, 2022
“Self-supervised learning allows a neural network to figure out for itself what matters. The process might…”
What I Read: Challenging AI to Learn Better
https://www.quantamagazine.org/the-computer-scientist-trying-to-teach-ai-to-learn-like-we-do-20220802/
The Computer Scientist Challenging AI to Learn Better
Allison Whitten, Contributing Writer, August 2, 2022
Christopher Kanan is building algorithms that can continuously learn over time — the way we do.
“Instead of…”
What I Read: Neural-Implicit Representations, 3D Shapes
https://towardsdatascience.com/neural-implicit-representations-for-3d-shapes-and-scenes-c6750dff49db?gi=dd4876367dbb
Neural-Implicit Representations for 3D Shapes and Scenes
Omri Kaduri, Jun 26
“Tracing the progress of deep learning-based solutions to computer graphics tasks”