The LLM chapter of my book, which is now online, features not one, not two, not three, but six (6) Colab notebooks! Not only have you learned to code a Transformer language model from scratch in the previous Transformer chapter, but in this LLM chapter, you will also learn how to finetune it for various business scenarios. These include:
1. Finetune an LLM for an emotion classification task as a generative model.
2. Finetune an LLM for an emotion classification task as a multi-class classifier.
3. Compare the classifiers' performance to a bag-of-words + logistic regression baseline.
4. Fully finetune a pretrained LLM to answer questions.
5. Finetune with LoRA for cost efficiency (a minimal sketch follows this list).
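For readers who want a feel for what a LoRA finetune involves before opening the notebooks, here is a minimal sketch using the Hugging Face transformers, datasets, and peft libraries. The model name, data file, and hyperparameters are placeholders for illustration, not the ones used in the chapter's notebooks.

```python
# Minimal LoRA finetuning sketch (illustrative values, not the chapter's code).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the base model so that only small low-rank adapter matrices are trained.
lora_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                         task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # a small fraction of the full model

# "train.jsonl" is a placeholder: one JSON object per line with a "text" field
# that already contains the prompt and the expected answer.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True,
                        remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", num_train_epochs=1,
                           per_device_train_batch_size=4, learning_rate=2e-4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Because only the adapter matrices are updated, a finetune like this fits on far cheaper hardware than a full finetune of the same model.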
You can use the code in these notebooks to solve your own real business problems simply by replacing the data file and the prompt format.
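To make that concrete, here is a tiny, hypothetical example of what "replacing the data file and the prompt format" could look like for a support-ticket classification problem. The field names, the file name, and the template are invented for this illustration and are not taken from the notebooks.

```python
import json

# A hypothetical training example; your data file would contain one such
# JSON object per line, with fields matching your own business problem.
example = {"ticket": "My card was charged twice.", "label": "billing"}

# A hypothetical prompt template; adapting the notebooks mostly amounts to
# rewriting this function for your own fields and desired output.
def format_prompt(record):
    return (f"Classify the support ticket.\n"
            f"Ticket: {record['ticket']}\n"
            f"Category: {record['label']}")

with open("train.jsonl", "w") as f:
    f.write(json.dumps(example) + "\n")

print(format_prompt(example))
```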
The chapter also covers strategies for sampling from a language model, as well as prompt engineering techniques.
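As a quick reminder of what such sampling strategies do, here is a self-contained sketch of temperature, top-k, and top-p (nucleus) sampling applied to a made-up vector of next-token logits; it is an illustration, not the chapter's code.

```python
import numpy as np

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.0, 0.5, 0.1, -1.0])  # one score per vocabulary token

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def sample_temperature(logits, temperature=0.8):
    # Lower temperature sharpens the distribution; higher temperature flattens it.
    return rng.choice(len(logits), p=softmax(logits / temperature))

def sample_top_k(logits, k=3):
    # Keep only the k highest-scoring tokens, renormalize, then sample.
    top = np.argsort(logits)[-k:]
    return top[rng.choice(len(top), p=softmax(logits[top]))]

def sample_top_p(logits, p=0.9):
    # Keep the smallest set of tokens whose cumulative probability reaches p.
    probs = softmax(logits)
    order = np.argsort(probs)[::-1]
    cutoff = np.searchsorted(np.cumsum(probs[order]), p) + 1
    kept = order[:cutoff]
    return kept[rng.choice(len(kept), p=probs[kept] / probs[kept].sum())]

print(sample_temperature(logits), sample_top_k(logits), sample_top_p(logits))
```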
Enjoy! The printed book is coming very soon. Stay tuned.