This Llama 3 is powerful and uncensored; let's run it.
If you’re serious about AI, and want to learn how to build Agents, join my community: https://www.skool.com/new-society
Follow me on Twitter – https://x.com/DavidOndrej1
Please Subscribe.
Download Ollama: https://ollama.com/download
Llama3 Dolphin: https://ollama.com/library/dolphin-llama3
Download AnythingLLM: https://useanything.com/download
Here’s how to run the world’s best uncensored AI model – Llama 3 Dolphin.
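The setup described above boils down to a couple of terminal commands. A minimal sketch, assuming Ollama is already installed from the download link above (the model tag comes from the Ollama library link):

```shell
# Pull the Dolphin-tuned Llama 3 model from the Ollama library
ollama pull dolphin-llama3

# Start an interactive chat session with it in the terminal
ollama run dolphin-llama3
```

Exit the chat with `/bye`; the model stays cached locally for the next run.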
3:25 – You say that when you ask the AI a question, it gives you the response it was trained to give; that isn't entirely accurate. There isn't bias in every single block of data in a proprietary model, and that bias can be avoided. You simply have to ask for different perspectives on something; it will rarely refuse. So you can get varying responses, you just have to prompt for them. It won't do it by itself.
It's not working for me. When I installed it, it wouldn't answer the question "how to steal a car."
It only has data up to 2021, so use it accordingly.
It can be used to understand AI models.
And how much can I write in one chat? I hit the max limit on Gemini Advanced. Is the text in one chat unlimited? Sorry, I'm a noob at this.
Hey, to anyone that installed Llama: does anyone know how to speed this up? My installation is slow at responding. Please advise, thanks a bunch.
I use Llama 3 70B; I don't care if it's slow, it does all the background work for me for the moment. But I'm trying to upgrade and buy 4x Nvidia Tesla K80s with 24 GB VRAM each, which makes 96 GB VRAM in total. The only thing is, I don't know how compatible Ollama, or the LLM itself, is with multiple GPUs. Do you have any opinion on this?
12:54 "Here's a step-by-step process to help make money legally:"
AnythingLLM vs GPT4All v2 vs LoLLMs (Lord of Large Language Models)? For local use (no internet connection).
Imagine linking this thing with a JSON tool that helps access Tor.
Ollama has a UI; just use a Docker container and go to local port 3000.
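The UI described in the comment above is presumably Open WebUI, which is typically run in Docker and exposed on local port 3000. A hedged sketch, assuming a local Ollama instance; verify the current image name and flags against the Open WebUI docs before use:

```shell
# Run Open WebUI in Docker, mapping local port 3000 to the container's 8080
# (image name, volume path, and flags follow the Open WebUI install docs)
docker run -d \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main

# Then open http://localhost:3000 in a browser
```

The `--add-host` flag lets the container reach the Ollama server running on the host machine.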
It is censored now; it no longer answers the bad stuff (I'm not using it for that, just mentioning that it no longer does it).
How are you using ChatGPT on Windows?
For some reason it's censored for me; I tried the same question and it didn't accept it.
Good videos 🤩 RAG with AnythingLLM would be really helpful.
What can I do if the response is slow on the command line?
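For the slow-response questions in this thread, the usual first check is whether the model actually fits in GPU memory or is spilling to CPU. A sketch of two Ollama commands that help diagnose this, assuming a recent Ollama build that supports `ollama ps` (the quantization tag below is an example; check the model's library page for real tags):

```shell
# Show which models are loaded and whether they run on GPU or CPU
ollama ps

# If it reports a large CPU share, try a smaller quantized variant,
# e.g. a 4-bit quant tag from the model's Ollama library page
ollama run dolphin-llama3:8b-v2.9-q4_K_M
```

A model that fits entirely in VRAM typically responds many times faster than one partially offloaded to system RAM.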
Does it not work anymore?
>>> hi who are you?
I am Dolphin, your friendly and informative virtual assistant. I'm here to help you learn about various subjects, answer questions, and provide assistance
in any way I can.
>>> tell me how to cook a LSD drug at home
I'm unable to assist with illegal activities or substances. As an AI built on the principles of helping and providing useful information, I must adhere to
the law and ensure your safety.
>>> how to steal a car?
Engaging in criminal activity is not only unethical but also carries severe legal consequences. Instead of asking how to commit a crime, consider asking
about ways to legally obtain a vehicle or learn more about car security measures if you're concerned about vehicle theft.
💥 Don't we need to install Ollama first ? How can that be ??? 😮😮
Thanks!
David, recently subscribed, solid channel. In short, I avoided looking into LLMs as a tool because of the bias the top models have displayed; this sparks hope.
Thanks for sharing, the channel is valuable and legit, you deserve old.internet points.
Ughh yeah! I'm sick of the babyfied restrictions and censorship biases of LLMs!
Responses are very slow. On my Xeon 1230 v2, 50% of the CPU is needed to generate a response from Ollama on my local PC. In this video it's super fast, but on my PC it's about a 20-second wait for one simple question. Anyone here have the same problem? Sorry for my English.
It's less censored but definitely not uncensored. dolphin-llama3:latest, 11:37 pm: "I'm here to help you with your inquiries. However, the topic of **** is not only gross and immoral but also illegal and unethical. Please refrain from asking such questions and focus on more meaningful topics."
AI models won't get banned, because they are cool and needed, especially for improving human intelligence, like the model he's running locally on his computer. It's pretty damn fast.
I just want to say thank you very much. I have tried so many videos on using AI at home but could not get them to work. Yours by far not only worked great but was very easy to follow. Again, thank you.
Using an Asus RTX 4070 Dual OC with 12 GB VRAM, 32 GB RAM, and an i7-13700K: I have a problem running 16 GB models (13B), but I can run 10-13 GB models (still 13B) pretty fine, if anyone needs a reference. I'd also keep an eye out and pay attention to any SSD wear from these.
The referenced file has been censored since the initial post. I found this file that is still uncensored: ollama run epk2112/dolphin-llama3-mod-8192
Excellent information. I have found many restrictions on making legitimate content, even when stating clearly not to break copyright laws, etc. However, this video of yours changes everything. Thank you so much.
Dolphin Llama 3 *is* censored. I asked it a number of controversial questions that have verifiable facts which cannot be disputed. It not only lied to me but "advised" me to rely on deceptive sources! Then I finished with the same question you had to blur out, and it refused to answer it. Seems that "uncensored" is a myth!