https://bdtechtalks.com/2021/01/11/concept-whitening-interpretable-neural-networks/
"Deep learning doesn't need to be a black box", by Ben Dickson, January 11, 2021
"The inner workings of neural networks are often a mystery… scientists at Duke University propose 'concept whitening,' a…"
What I Read: technical debt, ML pipelines
https://towardsdatascience.com/avoiding-technical-debt-with-ml-pipelines-3e5b6e0c1c93?gi=8f8c9d302be
"Avoiding technical debt with ML pipelines: Strike a balance between rapid results and high-quality coding", by Hamza Tahir
"ML practitioners in many organizations are heavily incentivized to make quick-wins to produce early…"
What I Read: Ensemble, knowledge distillation, and self-distillation
https://www.microsoft.com/en-us/research/blog/three-mysteries-in-deep-learning-ensemble-knowledge-distillation-and-self-distillation/
"Three mysteries in deep learning: Ensemble, knowledge distillation, and self-distillation", published January 19, 2021, by Zeyuan Allen-Zhu (Senior Researcher) and Yuanzhi Li (Assistant Professor, Carnegie Mellon University)
"…besides this small…"
What I Read: Transformer Networks to Answer Questions About Images
https://medium.com/dataseries/microsoft-uses-transformer-networks-to-answer-questions-about-images-with-minimum-training-f978c018bb72
"Microsoft Uses Transformer Networks to Answer Questions About Images With Minimum Training: Unified VLP can understand concepts about scenic images by using pretrained models", by Jesus Rodriguez, Jan 12
"Can we build deep…"