This week on the show, Jodie Burchell, developer advocate for data science at JetBrains, returns to discuss techniques and tools for evaluating LLMs with Python. How can you measure the quality of a large language model? What tools can measure bias, toxicity, and truthfulness levels in a model using Python?

Jodie provides some background on large language models and how they can absorb vast amounts of information about the relationship between words using a type of neural network called a transformer. We discuss training datasets and the potential quality issues with crawling uncurated sources. We dig into ways to measure levels of bias, toxicity, and hallucinations using Python. Jodie shares three benchmarking datasets and links to resources to get you started. We also discuss ways to augment models using agents or plugins, which can access search engine results or other authoritative sources.

Course Spotlight: Learn Text Classification With Python and Keras

This week’s episode is brought to you by Intel.