Sifting through large volumes of textual data can be tricky and time-consuming. This workshop will show you how an LLM might be used to speed up the process. Using local LLMs, we will sort text into different categories for further analysis and use some statistical methods to assess the accuracy of the classification.
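As a taste of the accuracy-assessment step, here is a minimal sketch in plain Python of the kind of statistics we might compute once the model has labeled some texts. The category names and the gold/predicted labels below are entirely hypothetical, invented for illustration; in the workshop the predictions would come from an LLM.

```python
def classification_metrics(gold, predicted, labels):
    """Compute overall accuracy plus per-label precision and recall.

    gold and predicted are parallel lists of category labels:
    gold holds human annotations, predicted holds model output.
    """
    assert len(gold) == len(predicted), "label lists must be the same length"
    correct = sum(g == p for g, p in zip(gold, predicted))
    accuracy = correct / len(gold)

    per_label = {}
    for label in labels:
        # True positives: model predicted this label and was right.
        tp = sum(1 for g, p in zip(gold, predicted) if g == label and p == label)
        pred_count = sum(1 for p in predicted if p == label)
        gold_count = sum(1 for g in gold if g == label)
        precision = tp / pred_count if pred_count else 0.0
        recall = tp / gold_count if gold_count else 0.0
        per_label[label] = {"precision": precision, "recall": recall}
    return accuracy, per_label

# Hypothetical example: three made-up categories of newspaper text.
gold = ["news", "opinion", "news", "review", "opinion", "news"]
pred = ["news", "news", "news", "review", "opinion", "review"]
acc, scores = classification_metrics(gold, pred, ["news", "opinion", "review"])
```

Comparing per-label precision and recall, rather than accuracy alone, helps reveal whether the model systematically confuses particular categories.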
You’ll need some comfort with Python to make the most of this class, but all the code will be provided. We’ll run our analysis on Google Colab Pro so that we can access its GPU services, and we’ll download models from Hugging Face.
Please sign up for accounts in advance, and email scholarslab@virginia.edu with any questions.