Go smol or go home: Why we should train smaller LLMs on more tokens
Harm de Vries, Apr 13, 2023
https://www.harmdevries.com/post/model-size-vs-compute-overhead/
“However, for most use cases you should not train a compute-optimal LLM
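The trade-off behind the post's argument can be sketched with the standard C ≈ 6·N·D training-FLOPs approximation (N parameters, D tokens): at a fixed compute budget, a smaller model can simply be trained on proportionally more tokens. The numbers below are illustrative examples, not figures from the post.

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute via the common C ~ 6*N*D rule."""
    return 6.0 * n_params * n_tokens

def tokens_for_budget(budget_flops: float, n_params: float) -> float:
    """Tokens trainable under a fixed FLOPs budget."""
    return budget_flops / (6.0 * n_params)

# Fix the budget of a Chinchilla-style 70B-parameter / 1.4T-token run,
# then see how many tokens smaller models get at the same compute.
budget = training_flops(70e9, 1.4e12)
for n in (70e9, 35e9, 7e9):
    d = tokens_for_budget(budget, n)
    print(f"{n/1e9:>4.0f}B params -> {d/1e12:.1f}T tokens at the same compute")
```

The post's point is that spending *more* than this compute-optimal budget on a smaller model is often worth it, because the smaller model is cheaper to serve.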
What I Read: Bloom filter
Building a Bloom filter
luminousmen
https://luminousmen.com/post/building-a-bloom-filter
“In this post, we will explore the Bloom filter — a data structure that is ingenious in its simplicity and elegant in its design.”