What I Read: Dataset Distillation

https://ai.googleblog.com/2021/12/training-machine-learning-models-more.html

Training Machine Learning Models More Efficiently with Dataset Distillation
Wednesday, December 15, 2021
Posted by Timothy Nguyen, Research Engineer, and Jaehoon Lee, Senior Research Scientist, Google Research
“For a machine learning (ML) algorithm to be effective, useful features must be extracted from (often) large amounts of training data…. Training a model with such a distilled dataset can reduce the required memory and compute.”
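To make the quoted idea concrete: dataset distillation can be framed as a bilevel optimization, where a small synthetic training set is learned so that a model trained on it also fits the real data. Below is a minimal sketch of that framing in JAX. The toy data, the linear classifier, the learned labels, and all hyperparameters are illustrative assumptions on my part, not the specific method described in the post.

```python
import jax
import jax.numpy as jnp

# Toy dataset distillation as bilevel optimization (illustrative sketch only).

def train_on_support(x_syn, y_syn, dim, steps=20, lr=0.5):
    """Inner loop: fit a linear softmax classifier to the synthetic set."""
    w = jnp.zeros((dim, y_syn.shape[1]))
    def loss(w):
        logits = x_syn @ w
        return -jnp.mean(jnp.sum(y_syn * jax.nn.log_softmax(logits), axis=1))
    for _ in range(steps):
        w = w - lr * jax.grad(loss)(w)
    return w

def outer_loss(x_syn, y_syn, x_real, y_real):
    """Outer loop: evaluate the inner-trained classifier on the real data."""
    w = train_on_support(x_syn, y_syn, x_syn.shape[1])
    logits = x_real @ w
    return -jnp.mean(jnp.sum(y_real * jax.nn.log_softmax(logits), axis=1))

# Hypothetical "real" data: 512 points, 2 classes, 16 features.
key = jax.random.PRNGKey(0)
dim, n_real, n_syn, n_classes = 16, 512, 10, 2
x_real = jax.random.normal(key, (n_real, dim))
y_real = jax.nn.one_hot((x_real[:, 0] > 0).astype(int), n_classes)

# Learnable synthetic examples and (soft) labels, initialized randomly.
x_syn = jax.random.normal(jax.random.PRNGKey(1), (n_syn, dim))
y_syn = jax.nn.one_hot(jnp.arange(n_syn) % n_classes, n_classes)

# Differentiate through the inner training loop to update the synthetic data.
grad_fn = jax.jit(jax.grad(outer_loss, argnums=(0, 1)))
for step in range(100):
    gx, gy = grad_fn(x_syn, y_syn, x_real, y_real)
    x_syn, y_syn = x_syn - 0.1 * gx, y_syn - 0.1 * gy

print("final outer loss:", outer_loss(x_syn, y_syn, x_real, y_real))
```

The point of the exercise is the one the quote makes: once the distilled set is learned, training only needs those few synthetic examples, which is where the memory and compute savings come from.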