https://lilianweng.github.io/posts/2024-07-07-hallucination
Extrinsic Hallucinations in LLMs, Lilian Weng, July 7, 2024
"This post focuses on extrinsic hallucination. To avoid hallucination, LLMs need to be (1) factual and (2) acknowledge not knowing the answer…"
What I Read: AI Engineers, Search
https://softwaredoug.com/blog/2024/06/25/what-ai-engineers-need-to-know-search
What AI Engineers Should Know about Search, Doug Turnbull, June 25, 2024
"Things AI Engineers Should Know about Search"
What I Read: Dataset Distillation
https://ai.googleblog.com/2021/12/training-machine-learning-models-more.html
Training Machine Learning Models More Efficiently with Dataset Distillation, Wednesday, December 15, 2021
Posted by Timothy Nguyen, Research Engineer, and Jaehoon Lee, Senior Research Scientist, Google Research
"For a machine learning (ML)…"
What I Read: Better computer vision models, Transformers, CNNs
https://ai.facebook.com/blog/computer-vision-combining-transformers-and-convolutional-neural-networks/
Better computer vision models by combining Transformers and convolutional neural networks, July 8, 2021
"We've developed a new computer vision model… which combines… convolutional neural networks (CNNs) and Transformer-based models…"