https://sumanthrh.com/post/distributed-and-efficient-finetuning/
"Everything about Distributed Training and Efficient Finetuning" by Sumanth R Hegde, last updated Oct 13, 2023.
"practical guidelines and gotchas with multi-GPU and multi-node training"
What I Read: AI’s $200B Question
"AI's $200B Question" by David Cahn, published September 20, 2023.
"GPU capacity is getting overbuilt. Long-term, this is good. Short-term, things could get messy."
What I Read: LLM Chatbots in the Browser
https://www.kdnuggets.com/2023/05/webllm-bring-llm-chatbots-browser.html
"Web LLM: Bring LLM Chatbots to the Browser" by Bala Priya C, May 22, 2023.
"Wouldn't it be cool if you can run LLMs and LLM chatbots natively in your browser?"