Ollama on Windows! Now everyone can use it. #ollama
In this video, we talk about running Ollama on Windows, the much-awaited release!
Key Concepts:
1. Using Ollama on Windows
2. OpenAI Compatibility
3. Python and JavaScript Integration
4. Ollama meets LLaVA
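The OpenAI-compatibility point above means a locally running Ollama server can be called with standard OpenAI-style chat payloads. Here is a minimal stdlib-only Python sketch; the endpoint URL and port are Ollama's documented defaults, and the model name "llama2" is only an example of a model you might have pulled:

```python
import json
import urllib.request

# Default local endpoint for Ollama's OpenAI-compatible API
# (an assumption about your setup; adjust host/port if needed).
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> bytes:
    """Build an OpenAI-style chat-completion request body."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")

def chat(model: str, prompt: str) -> str:
    """POST the request to a locally running Ollama server.

    Requires the server to be up, so it is defined but not called here.
    """
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_chat_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    return reply["choices"][0]["message"]["content"]

# Inspect the payload that would be sent:
body = build_chat_request("llama2", "Hello!")
print(body.decode())
```

Because the request body follows the OpenAI chat schema, the same call shape works from existing OpenAI client libraries by pointing their base URL at the local server.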
Check out other Ollama videos:
https://www.youtube.com/@PromptEngineer48/search?query=ollama
Ollama: https://ollama.ai/
https://ollama.com/blog/windows-preview
Let’s do this!
Join the AI Revolution!
#ollama #llava #autogen #windows #ai #llm_selector #auto_llm_selector #localllms #github #streamlit #langchain #qstar #openai #webui #python #llm #largelanguagemodels
CHANNEL LINKS:
🕵️♀️ Join my Patreon: https://www.patreon.com/PromptEngineer975
☕ Buy me a coffee: https://ko-fi.com/promptengineer
📞 Get on a Call with me – Calendly: https://calendly.com/prompt-engineer48/call
❤️ Subscribe: https://www.youtube.com/@PromptEngineer48
💀 GitHub Profile: https://github.com/PromptEngineer48
🔖 Twitter Profile: https://twitter.com/prompt48
🎁 Subscribe to my channel: https://www.youtube.com/@PromptEngineer48
If you have any questions, comments, or suggestions, feel free to comment below.
🔔 Don’t forget to hit the bell icon to stay updated on our latest innovations and exciting developments in the world of AI!
by Prompt Engineer
This could have probably saved me two days, not on the Linux part, which only took a few minutes, but with this sh*tty Windows. I had to update it first to 22H2 because otherwise my GPU wasn't available under WSL2. But it couldn't update because of some generic error with some number that can have like 100 reasons, and after wasting a lot of time with fix-your-windows guides, I found an assistant tool that finally updated my Windows. Then I had another day of fun with Ollama WebUI in terms of port mapping from its Docker container to the Windows host and then the LAN. I got it working first, but another Windows update broke it, as it installed some new IP Helper service at port 8080, which is the exact port under which WebUI is exposed. With ChatGPT's guidance I finally disabled that and installed a reverse proxy. What a pain! I'll almost certainly build a dedicated Linux AI machine this year. By the way, you could have mentioned WebUI for Ollama, perhaps, unless I just missed it when you did. 😉
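The port-8080 clash described in the comment above (the WebUI container's default host mapping colliding with a Windows service) can be sidestepped by probing for a free host port before starting the container. A minimal Python sketch; the fallback port range and the container image name in the printed command are illustrative assumptions:

```python
import socket

def find_free_port(preferred: int, fallback=range(8081, 8100)) -> int:
    """Return `preferred` if nothing is bound to it on localhost,
    otherwise the first free port from `fallback`."""
    for port in (preferred, *fallback):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                # bind() raises OSError if the port is already taken
                s.bind(("127.0.0.1", port))
                return port
            except OSError:
                continue
    raise RuntimeError("no free port found")

host_port = find_free_port(8080)
# Map the chosen host port onto the container's internal port 8080
# (image name is an example, not a verified tag):
print(f"docker run -d -p {host_port}:8080 ghcr.io/open-webui/open-webui:main")
```

This avoids fighting whichever Windows service grabbed 8080; the browser URL simply changes to whatever port was printed.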
Thank you so much for all your videos. What is your thinking on using Ollama vs. LocalAI?
Can I chat with my docs with the Windows version?
So what's new about this? We've already been able to run Llama in LM Studio and run Open Interpreter and AutoGen.
Very cool!
Thank you for the video! This is so exciting!
Hey thanks for the vid!
Hey, maybe I'm missing something (and maybe it's due to this being just the preview), but do you know how I can watch the real-time logs? When using this via Docker I was able to see the backend, and it really helped with troubleshooting. I'm sure there's a way; I'm just blind haha
It is very nice that they finally added support for Windows. But this preview is really slow, slower than running it under WSL in Windows, which I have used successfully. A lot of users have been seeing the same thing. Hopefully they will address these performance issues, but I am glad it is finally supported on Windows.