What I Read: Convolutional Kernel Networks
https://logb-research.github.io/blog/2024/ckn Kernel Trick I – Deep Convolutional Representations in RKHS, Oussama Zekri and Ambroise Odonnat, July 18, 2024: “…we focus on the Convolutional Kernel Network (CKN) architecture proposed in End-to-End Kernel Learning with…”
What I Read: Smooth Noisy Data
https://towardsdatascience.com/the-perfect-way-to-smooth-your-noisy-data-4f3fe6b44440?gi=a6f62aaf2818 The Perfect Way to Smooth Your Noisy Data, Andrew Bowell, Oct 25, 2023: “Insanely fast and reliable smoothing and interpolation with the Whittaker-Eilers method.”
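The Whittaker-Eilers smoother the article describes has a compact closed form: find the z minimizing ||y − z||² + λ||Dz||², where D is a difference matrix, which reduces to one linear solve. A minimal NumPy sketch, assuming a dense solve for clarity (function name and the λ value are illustrative, not from the article):

```python
import numpy as np

def whittaker_smooth(y, lam=1e3, d=2):
    """Whittaker-Eilers smoother: minimize ||y - z||^2 + lam * ||D z||^2,
    where D is the d-th order difference matrix. Dense solve for clarity;
    production code would use sparse matrices, as the article's method does."""
    n = len(y)
    D = np.diff(np.eye(n), n=d, axis=0)      # (n - d, n) difference operator
    A = np.eye(n) + lam * D.T @ D            # normal-equations matrix
    return np.linalg.solve(A, np.asarray(y, dtype=float))

# Usage: smooth a noisy sine wave
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 200)
noisy = np.sin(x) + 0.3 * rng.standard_normal(200)
smooth = whittaker_smooth(noisy, lam=1e3)
```

Larger λ gives a stiffer curve; λ → 0 returns the data unchanged.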
What I Read: Density Kernel Depth for Outlier Detection
https://www.kdnuggets.com/density-kernel-depth-for-outlier-detection-in-functional-data Density Kernel Depth for Outlier Detection in Functional Data, Kulbir Singh, November 8, 2023: “The Density Kernel Depth (DKD) method provides a nuanced approach to detect outliers in functional data…”
What I Read: Tree-Structured Parzen Estimator
https://towardsdatascience.com/building-a-tree-structured-parzen-estimator-from-scratch-kind-of-20ed31770478 Building a Tree-Structured Parzen Estimator from Scratch (Kind Of): An alternative to traditional hyperparameter tuning methods, Colin Horgan, Apr 4: “Although popular, Grid and Random Search methods… are purely trial and error.”
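The core TPE idea, in contrast to trial-and-error search, is to split past trials into a "good" set (the lowest-loss fraction γ) and a "bad" set, fit a density to each, and propose the candidate maximizing the ratio l(x)/g(x). A one-dimensional sketch, assuming a uniform search space and SciPy's Gaussian KDE for the densities (function and parameter names are illustrative, not the article's):

```python
import numpy as np
from scipy.stats import gaussian_kde

def tpe_suggest(trials_x, trials_y, bounds, gamma=0.25, n_candidates=64, rng=None):
    """One TPE step: fit KDEs l(x) to the best gamma fraction of trials and
    g(x) to the rest, then return the candidate maximizing l(x) / g(x)."""
    rng = rng or np.random.default_rng()
    x, y = np.asarray(trials_x, dtype=float), np.asarray(trials_y, dtype=float)
    n_good = max(2, int(np.ceil(gamma * len(x))))
    order = np.argsort(y)                      # lower loss = better
    good, bad = x[order[:n_good]], x[order[n_good:]]
    l, g = gaussian_kde(good), gaussian_kde(bad)
    cand = rng.uniform(bounds[0], bounds[1], n_candidates)
    return cand[np.argmax(l(cand) / (g(cand) + 1e-12))]

# Usage: suggest the next point for minimizing f(x) = (x - 2)^2 over [0, 5]
rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 5.0, 30)
ys = (xs - 2.0) ** 2
suggestion = tpe_suggest(xs, ys, (0.0, 5.0), rng=rng)
```

Repeating suggest-evaluate-append concentrates trials where losses have been low, which is exactly what grid and random search cannot do.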
What I Read: Dataset Distillation
https://ai.googleblog.com/2021/12/training-machine-learning-models-more.html Training Machine Learning Models More Efficiently with Dataset Distillation, December 15, 2021, posted by Timothy Nguyen, Research Engineer, and Jaehoon Lee, Senior Research Scientist, Google Research: “For a machine learning (ML)…”
What I Read: First-Principles Theory of Neural Network Generalization
https://natluk.net/a-first-principles-theory-of-neuralnetwork-generalization-the-berkeley-artificial-intelligence-research-blog/ A First-Principles Theory of Neural Network Generalization – The Berkeley Artificial Intelligence Research Blog, NatLuk Community, 25 October 2021: “Perhaps the greatest of these mysteries has been the question of generalization…”
What I Read: Attention with Performers
https://ai.googleblog.com/2020/10/rethinking-attention-with-performers.html Rethinking Attention with Performers, October 23, 2020, posted by Krzysztof Choromanski and Lucy Colwell, Research Scientists, Google Research: “To resolve these issues, we introduce the Performer, a Transformer architecture with…”
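The trick behind the Performer is replacing the quadratic softmax attention matrix with positive random features φ such that E[φ(q)·φ(k)] = exp(q·k), so attention can be computed as φ(Q)(φ(K)ᵀV) in linear time. A minimal NumPy sketch of that random-feature idea, assuming plain Gaussian projections (function names and the feature count are illustrative; the post's FAVOR+ mechanism adds further variance-reduction refinements):

```python
import numpy as np

def softmax_kernel_features(x, w):
    """Positive random features phi(x) = exp(w.x - ||x||^2 / 2) / sqrt(m),
    whose pairwise dot products approximate exp(q.k) in expectation."""
    m = w.shape[0]
    norm = np.sum(x ** 2, axis=-1, keepdims=True) / 2.0
    return np.exp(x @ w.T - norm) / np.sqrt(m)

def performer_attention(Q, K, V, n_features=256, rng=None):
    """Linear-time attention: phi(Q) @ (phi(K)^T V), row-normalized.
    Q, K scaled by d^{-1/4} so the product matches exp(q.k / sqrt(d))."""
    rng = rng or np.random.default_rng(0)
    d = Q.shape[-1]
    w = rng.standard_normal((n_features, d))
    q = softmax_kernel_features(Q / d ** 0.25, w)
    k = softmax_kernel_features(K / d ** 0.25, w)
    num = q @ (k.T @ V)                          # never forms the n x n matrix
    den = q @ k.sum(axis=0, keepdims=True).T     # row normalizer
    return num / den

# Usage: output rows are convex combinations of V's rows, as in softmax attention
rng = np.random.default_rng(1)
Q, K = rng.standard_normal((6, 4)), rng.standard_normal((6, 4))
V = rng.standard_normal((6, 3))
out = performer_attention(Q, K, V)
```

The cost is O(n·m·d) in sequence length n instead of the O(n²) of exact attention, which is the scalability claim the excerpt is leading up to.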