CODE Presents: Harnessing Large Language Models Locally

Join us for an introduction to using local LLMs with Philipp Bauer. This talk will cover the basics of deploying and using an LLM on your own machine, the motivations for running a local LLM, and some of the challenges and pitfalls that come with it.

During this session we will make sense of the “zoo” of available LLMs and touch on the pros and cons of some of them, look at recent developments in the field, and set up a C#/.NET project that can leverage a language model to get you started building your own AI applications. We will also explore some of the most important parameters that influence a model’s performance and how to tune them for your specific use case.
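To make that setup concrete, here is a minimal sketch of a C#/.NET console program built on LLamaSharp (linked below). It loads a locally stored, quantized GGUF model and streams a completion while setting a few of the sampling parameters the session discusses. The model path and parameter values are placeholder assumptions, and LLamaSharp’s type and property names have shifted between releases, so treat this as illustrative rather than a definitive implementation.

using System;
using System.Collections.Generic;
using LLama;
using LLama.Common;

// Placeholder path to a quantized GGUF model (assumption; download one separately).
var modelPath = "models/llama-2-7b-chat.Q4_K_M.gguf";

// Model and context settings; the values here are illustrative defaults.
var modelParams = new ModelParams(modelPath)
{
    ContextSize = 2048, // how many tokens of history the model can attend to
    GpuLayerCount = 0   // > 0 offloads layers to the GPU with a CUDA-enabled backend
};

using var weights = LLamaWeights.LoadFromFile(modelParams);
using var context = weights.CreateContext(modelParams);
var executor = new InteractiveExecutor(context);

// Sampling parameters of the kind covered in the talk.
var inferenceParams = new InferenceParams
{
    Temperature = 0.7f,                        // randomness of token selection
    TopP = 0.9f,                               // nucleus-sampling cutoff
    MaxTokens = 256,                           // cap on generated tokens
    AntiPrompts = new List<string> { "User:" } // stop when the model hands control back
};

// Stream tokens to the console as they are generated.
var prompt = "User: Why run a language model locally?\nAssistant:";
await foreach (var token in executor.InferAsync(prompt, inferenceParams))
{
    Console.Write(token);
}

Lowering Temperature makes output more deterministic, while TopP trims the tail of unlikely tokens; these are typical first knobs to turn when tuning a local model for a specific use case.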

Whether you are new to LLMs or have been using them for a while, this talk will give you a better understanding of what is possible when running LLMs directly rather than through a cloud service. Don’t miss this chance to learn from Philipp’s experience and ask questions about your own projects.

GitHub Repositories:
www.github.com/SciSharp/LLamaSharp
www.github.com/ggerganov/llama.cpp
General Information / News:
https://www.reddit.com/r/LocalLLaMA
www.promptingguide.ai
Quantized Models:
www.huggingface.co/TheBloke
LLM Applications:
www.github.com/oobabooga (Text Generation WebUI)
lmstudio.ai (LM Studio, Multi-Platform App)
Papers:
https://arxiv.org/abs/2307.09009 (How is ChatGPT’s behavior changing over time?)
https://arxiv.org/abs/2306.02707 (Orca: Progressive Learning from Complex Explanation Traces of GPT-4)
https://arxiv.org/abs/2304.12244 (WizardLM: Empowering Large Language Models to Follow Complex Instructions)
https://arxiv.org/abs/2307.08621 (Retentive Network: A Successor to Transformer for Large Language Models)
https://arxiv.org/abs/2305.17493 (The Curse of Recursion: Training on Generated Data Makes Models Forget)

by CODE Magazine