https://ai.googleblog.com/2021/12/interpretable-deep-learning-for-time.html Interpretable Deep Learning for Time Series Forecasting (Monday, December 13, 2021), posted by Sercan O. Arik, Research Scientist, and Tomas Pfister, Engineering Manager, Google Cloud: “Multi-horizon forecasting, i.e. predicting variables-of-interest at …”
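The post's setting is multi-horizon forecasting: predicting a variable at several future time steps at once rather than only the next one. As a minimal illustration of that framing (not the model described in the post, which also uses known future inputs and static covariates), the sketch below just builds sliding-window training pairs with an H-step target; all names and sizes here are made up for the example.

```python
import numpy as np

def multi_horizon_windows(series, past_len, horizon):
    """Turn a 1-D series into (past window, next `horizon` values) training pairs.

    Illustrative only: it captures the multi-horizon target structure,
    nothing about the interpretable model the post describes.
    """
    X, Y = [], []
    for t in range(past_len, len(series) - horizon + 1):
        X.append(series[t - past_len:t])   # observed history
        Y.append(series[t:t + horizon])    # targets at horizons 1..H
    return np.array(X), np.array(Y)

# Example: predict the next 24 steps from the last 168 observed ones.
series = np.sin(np.arange(1000) * 0.1)
X, Y = multi_horizon_windows(series, past_len=168, horizon=24)
print(X.shape, Y.shape)  # (809, 168) (809, 24)
```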
What I Read: Attention with Performers
https://ai.googleblog.com/2020/10/rethinking-attention-with-performers.html Rethinking Attention with Performers (Friday, October 23, 2020), posted by Krzysztof Choromanski and Lucy Colwell, Research Scientists, Google Research: “To resolve these issues, we introduce the Performer, a Transformer architecture with …”
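The Performer's core idea is to approximate softmax attention with random feature maps so the cost scales linearly in sequence length instead of quadratically. The NumPy sketch below is my own minimal illustration of that idea, not the paper's FAVOR+ implementation (which adds orthogonal features and other stabilizers): positive random features approximate the softmax kernel, and associativity of matrix products avoids ever forming the L×L attention matrix.

```python
import numpy as np

def positive_random_features(x, W):
    """phi(x) with E[phi(q) @ phi(k)] = exp(q @ k), i.e. the softmax kernel."""
    m = W.shape[0]
    return np.exp(x @ W.T - np.sum(x ** 2, axis=-1, keepdims=True) / 2) / np.sqrt(m)

def performer_style_attention(Q, K, V, num_features=256, seed=0):
    """Softmax attention approximated in time linear in sequence length L."""
    L, d = Q.shape
    W = np.random.default_rng(seed).standard_normal((num_features, d))
    q = positive_random_features(Q / d ** 0.25, W)   # (L, m)
    k = positive_random_features(K / d ** 0.25, W)   # (L, m)
    # Associativity: q @ (k.T @ V) equals (q @ k.T) @ V, but the right-hand
    # grouping never builds the L x L matrix, so cost is O(L*m*d), not O(L^2*d).
    out = q @ (k.T @ V)                              # (L, d_v)
    normalizer = q @ k.sum(axis=0)                   # row sums of the implicit attention
    return out / normalizer[:, None]

# Sanity check against exact softmax attention on a small example.
rng = np.random.default_rng(1)
Q, K, V = 0.5 * rng.standard_normal((3, 64, 16))
scores = np.exp(Q @ K.T / np.sqrt(16))
exact = (scores / scores.sum(axis=-1, keepdims=True)) @ V
approx = performer_style_attention(Q, K, V, num_features=1024)
print(np.abs(exact - approx).mean())  # approximation error; shrinks as num_features grows
```

The linear-time payoff only matters when the sequence length is much larger than the number of random features; on this tiny example the point is just that the approximation tracks exact attention.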
What I Read: Transformers for Image Recognition
https://medium.com/swlh/an-image-is-worth-16x16-words-transformers-for-image-recognition-at-scale-brief-review-of-the-8770a636c6a8 An Image Is Worth 16×16 Words: Transformers for Image Recognition at Scale (Brief Review of the ICLR 2021 Paper), by Stan Kriventsov, Oct 9: “The reason attention models haven’t been doing better …”
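The title summarizes the reviewed method: the image is split into fixed-size (e.g. 16×16) patches, each patch is flattened and linearly projected to a token embedding, and the resulting token sequence goes into a standard Transformer. Here is a minimal NumPy sketch of that patch-embedding step only; the projection matrix and the 512-dimensional embedding size are illustrative choices, not the paper's.

```python
import numpy as np

def patchify(image, patch_size=16):
    """Split an (H, W, C) image into flattened non-overlapping patches."""
    H, W, C = image.shape
    assert H % patch_size == 0 and W % patch_size == 0
    return (image
            .reshape(H // patch_size, patch_size, W // patch_size, patch_size, C)
            .transpose(0, 2, 1, 3, 4)                   # (nH, nW, p, p, C)
            .reshape(-1, patch_size * patch_size * C))  # one row per patch

# A 224 x 224 RGB image becomes 14 * 14 = 196 tokens of dimension 16*16*3 = 768;
# a learned linear projection then maps each token to the Transformer width.
rng = np.random.default_rng(0)
image = rng.random((224, 224, 3))
patches = patchify(image)                          # (196, 768)
W_embed = 0.02 * rng.standard_normal((768, 512))   # illustrative embedding width
tokens = patches @ W_embed                         # (196, 512) -> Transformer input
print(patches.shape, tokens.shape)
```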