https://thegradient.pub/interpretability-in-ml-a-broad-overview/ Interpretability in Machine Learning: An Overview — The Gradient, 21 Nov 2020, Owen Shen. "Broadly, interpretability focuses on the how. It's focused on getting some notion of an explanation for the decisions made by our models…"
What I Read: HuggingFace Transformers
https://medium.com/georgian-impact-blog/how-to-incorporate-tabular-data-with-huggingface-transformers-b70ac45fcfb4 How to Incorporate Tabular Data with HuggingFace Transformers — Georgian, Oct 23. "At Georgian, we find ourselves working with supporting tabular feature information as well as unstructured text data. We found that…"
What I Read: Revisiting Sutton’s Bitter Lesson for AI
https://blog.exxactcorp.com/compute-goes-brrr-revisiting-suttons-bitter-lesson-artificial-intelligence/ Compute Goes Brrr: Revisiting Sutton's Bitter Lesson for Artificial Intelligence — Exxact Deep Learning blog, October 27, 2020. "The main driver of AI progress, according to Sutton, is the increasing availability of compute…"
What I Read: This AI learns by reading the web
https://www.technologyreview.com/2020/09/04/1008156/knowledge-graph-ai-reads-web-machine-learning-natural-language-processing/ This know-it-all AI learns by reading the entire web nonstop — MIT Technology Review, 4 Sep 2020, Will Douglas Heaven. "Diffbot is building the biggest-ever knowledge graph by applying image recognition and natural-language processing to billions of web pages."
What I Read: Transformer Architecture
https://blog.exxactcorp.com/a-deep-dive-into-the-transformer-architecture-the-development-of-transformer-models/ A Deep Dive Into the Transformer Architecture – The Development of Transformer Models — Exxact Deep Learning blog, July 14, 2020. "Transformers for Natural Language Processing… There's no better time…"
What I Read: Progress of Natural Language Processing
https://blog.exxactcorp.com/the-unreasonable-progress-of-deep-neural-networks-in-natural-language-processing-nlp/ The Unreasonable Progress of Deep Neural Networks in Natural Language Processing (NLP) — Exxact Deep Learning blog, June 2, 2020. "With the advent of pre-trained generalized language models, we…"