https://ai.googleblog.com/2021/10/deciding-which-tasks-should-train.html
Deciding Which Tasks Should Train Together in Multi-Task Neural Networks, Monday, October 25, 2021. Posted by Christopher Fifty, Research Engineer, Google Research, Brain Team. “…there may be instances when learning from
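For context, one common multi-task setup the post builds on is hard parameter sharing: a shared trunk feeding per-task heads. A minimal NumPy sketch (shapes and names are illustrative, not from the post):

```python
import numpy as np

rng = np.random.default_rng(1)

# Shared trunk maps a 16-dim input to an 8-dim shared representation.
W_shared = rng.normal(scale=0.1, size=(16, 8))
# Each task gets its own small head on top of the shared representation.
task_heads = {name: rng.normal(scale=0.1, size=(8, 1)) for name in ["task_a", "task_b"]}

def forward(x, task):
    h = np.maximum(x @ W_shared, 0.0)   # shared features (ReLU)
    return h @ task_heads[task]         # task-specific output, shape (1,)

x = rng.normal(size=(16,))
outputs = {task: forward(x, task) for task in task_heads}
```

The question the post tackles is which tasks should share such a trunk at all, since some task pairings hurt each other.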
What I Read: Introduction to AutoEncoder
https://www.theaidream.com/post/an-introduction-to-autoencoder-and-variational-autoencoder-vae
Introduction to AutoEncoder and Variational AutoEncoder (VAE), Nagesh Singh Chauhan, Jul 28. “In recent years, deep learning-based generative models have gained more and more interest… we will dive deep into these generative networks
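For a concrete picture of the idea the article covers: an autoencoder is an encoder that compresses the input to a low-dimensional latent code plus a decoder that reconstructs it, trained to minimize reconstruction error. A minimal linear sketch in NumPy (toy dimensions and names are my own, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy autoencoder: compress an 8-dim input to a 2-dim latent code, then decode.
W_enc = rng.normal(scale=0.1, size=(8, 2))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(2, 8))   # decoder weights

def encode(x):
    return x @ W_enc                          # latent code, shape (2,)

def decode(z):
    return z @ W_dec                          # reconstruction, shape (8,)

x = rng.normal(size=(8,))
x_hat = decode(encode(x))
# Mean squared reconstruction error -- the quantity training would minimize.
reconstruction_error = float(np.mean((x - x_hat) ** 2))
```

A VAE replaces the single latent code with a learned distribution (mean and variance) and adds a KL term to the loss, which is what makes it generative.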
What I Read: The Uselessness of Useful Knowledge
https://www.quantamagazine.org/science-has-entered-a-new-era-of-alchemy-good-20211020/
The Uselessness of Useful Knowledge, Robbert Dijkgraaf, October 20, 2021. Today’s powerful but little-understood artificial intelligence breakthroughs echo past examples of unexpected scientific progress. “…the current state of AI research is nothing new
What I Read: binary cross-entropy, log loss
https://towardsdatascience.com/understanding-binary-cross-entropy-log-loss-a-visual-explanation-a3ac6025181a
Understanding binary cross-entropy / log loss: a visual explanation, Daniel Godoy, Nov 21, 2018. “If you are training a binary classifier, chances are you are using binary cross-entropy / log loss
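As a quick refresher on the quantity the article visualizes: binary cross-entropy for one example is -[y log p + (1 - y) log(1 - p)], averaged over the dataset. A minimal NumPy sketch (function and variable names are my own, not from the article):

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    # Clip predictions away from 0 and 1 to avoid log(0).
    p = np.clip(y_pred, eps, 1 - eps)
    # Mean of -[y*log(p) + (1-y)*log(1-p)] over all examples.
    return float(np.mean(-(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))))

y_true = np.array([1.0, 0.0, 1.0, 0.0])
y_pred = np.array([0.9, 0.1, 0.8, 0.2])
loss = binary_cross_entropy(y_true, y_pred)
```

Confident correct predictions drive the loss toward zero, while confident wrong ones blow it up, which is exactly the asymmetry the article's plots illustrate.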