What I Read: Deep Learning Doesn’t Need to Be a Black Box
https://bdtechtalks.com/2021/01/11/concept-whitening-interpretable-neural-networks/ “Deep Learning Doesn’t Need to Be a Black Box,” Ben Dickson, January 11, 2021. “The inner workings of neural networks are often a mystery… scientists at Duke University propose ‘concept whitening,’ a…”
What I Read: Building Robust Machine Learning Systems
https://medium.com/swlh/deepminds-three-pillars-for-building-robust-machine-learning-systems-a9679e56250a “DeepMind’s Three Pillars for Building Robust Machine Learning Systems.” Specification Testing, Robust Training, and Formal Verification are the three elements that the AI powerhouse believes hold the essence of robust machine learning…
What I Read: Neural Networks Help Explain Brains
https://www.quantamagazine.org/deep-neural-networks-help-to-explain-living-brains-20201028/ “Deep Neural Networks Help to Explain Living Brains.” Deep neural networks, often criticized as “black boxes,” are helping neuroscientists understand the organization of living brains. “If their CNN mimicked a…”
What I Read: AI’s limitations
https://www.economist.com/technology-quarterly/2020/06/11/an-understanding-of-ais-limitations-is-starting-to-sink-in Technology Quarterly, Jun 11th 2020 edition. “An Understanding of AI’s Limitations Is Starting to Sink In.” After years of hype, many people feel AI has failed to deliver, says Tim Cross. “Surveying…”