What I Read: Convolutional Kernel Networks
https://logb-research.github.io/blog/2024/ckn
Kernel Trick I – Deep Convolutional Representations in RKHS
Oussama Zekri, Ambroise Odonnat
July 18, 2024
“…we focus on the Convolutional Kernel Network (CKN) architecture proposed in End-to-End Kernel Learning with…”
What I Read: Deep learning, single-cell sequencing
https://thegradient.pub/deep-learning-for-single-cell-sequencing-a-microscope-to-uncover-the-rich-diversity-of-individual-cells/
Deep Learning for Single-Cell Sequencing: A Microscope to See the Diversity of Cells
Fatima Zahra El Hajji
January 13, 2024
“…we will explore the pivotal role that Deep Learning, in particular,…”
What I Read: Sparse Networks
https://www.quantamagazine.org/sparse-neural-networks-point-physicists-to-useful-data-20230608/
Sparse Networks Come to the Aid of Big Physics
Steve Nadis
June 8, 2023
“A novel type of neural network is helping physicists with the daunting challenge of data analysis.”
What I Read: Geometric Deep Learning
https://thegradient.pub/towards-geometric-deep-learning/
Towards Geometric Deep Learning
Michael Bronstein
February 18, 2023
“Geometric Deep Learning is an umbrella term for approaches considering a broad class of ML problems from the perspectives of symmetry and invariance.”
What I Read: Realtime User Actions in Recommendation
https://medium.com/pinterest-engineering/how-pinterest-leverages-realtime-user-actions-in-recommendation-to-boost-homefeed-engagement-volume-165ae2e8cde8
How Pinterest Leverages Realtime User Actions in Recommendation to Boost Homefeed Engagement Volume
Xue Xia, Software Engineer, Homefeed Ranking; Neng Gu, Software Engineer, Content & User Understanding; Dhruvil Deven Badani,
What I Read: Visual Explanation of Classifiers
https://ai.googleblog.com/2022/01/introducing-stylex-new-approach-for.html
Introducing StylEx: A New Approach for Visual Explanation of Classifiers
Posted by Oran Lang and Inbar Mosseri, Software Engineers, Google Research
January 18, 2022
“Previous approaches for visual explanations of classifiers…”
What I Read: Dataset Distillation
https://ai.googleblog.com/2021/12/training-machine-learning-models-more.html
Training Machine Learning Models More Efficiently with Dataset Distillation
Posted by Timothy Nguyen, Research Engineer, and Jaehoon Lee, Senior Research Scientist, Google Research
December 15, 2021
“For a machine learning (ML)…”