OPERATING SYSTEMSOS Linux

What If Someone Steals GPT-4?

Links:
– The Asianometry Newsletter: https://www.asianometry.com
– Patreon: https://www.patreon.com/Asianometry
– Threads: https://www.threads.net/@asianometry
– Twitter: https://twitter.com/asianometry

source

by Asianometry

linux foundation

24 thoughts on “What If Someone Steals GPT-4?

  • OpenAI's value is in their industry partnerships, not in their subpar LLM product.

  • Good to see SBF taking that Home Ec class from prison there at 13:15 . You gotta stay busy, that's the key.

  • Immigration has worked so well for Europe and many parts of the United States. Absolutely mind-boggling stupidity to claim that we're better off with them invading our countries.

  • I work next door to a movie studio. Our own IT department monitors all traffic in the area and there are multiple mobile piracy attempts a week.

  • Woah, stealing stolen property? I feel like this isn't a "risk to protect against" but an inherent reaction to the idea of stealing from the commons.

  • Hey! What if someone steals our knowledge and use it in his gpt claiming it their knowledge? Sucking information for free was some time ago called Piracy. Now it's ok to sell that by ones, not ok to use by others.

  • I disagree that the theft of data sets is a murky thing. The only justification for the grey area is a handful of tech CEOs claiming to be above the law. As seen with Uber, our legislatures are exactly dumb/corrupt enough to go along with the idea.

  • GPT-4 (which I actually use) can not be stolen. Why? Because GPT-4 is essentially an idea, not a tangible object. Pi (3.141…) can not be stolen. E can not be stolen. Benzene can not be stolen. A jar with Benzene in can be stolen. GPT-4 is more like Pi or E or Benzene or Penicillin and can not be stolen.

  • It can't be a genius idea to put someone from China with family living there in a position that could be leveraged to take their models. Specifically someone with family there.

  • If an LLM ever has its ethics controls defeated/removed, and it can run and operate inside a protected space within the GPU's, and can replicate itself by cloning its model, then the risk of a rogue AI cannot be discounted.

  • It happened already, they could extract training data from GPT, by repeating a word 50x and it spit out these data. Even personal details from who wrote the data in the LLM. OpenAI closed the door by now. By noting this is against regulations from OpenAI. But is it solid enough? A lot of research has to be done to close off that one.

  • 8:26 Woah there! Did that IA succesfully put text, actual meaningful correct text, in a generated image???

  • Maybe not – I’d say everyone would rather build the model themselves than go through this hassle. If it’s 80% as good, that means it’s not good enough.

  • 16:30 I urge you to do a deep dive on why China is belligerent and provocative? ALL of their tech is STOLEN and/or reverse engineered.

  • imagine csonka vajk goin into openai headquarter…. and stoles it and rides along with siaomi scooter!!! loool

  • We were called paranoid and delusional for decades, all the while pushing the industry forward bit by bit 🙂 Stegnograohy , differential analysis, red/blue/purple teams are well known. The industry of secrets is beyond fascinating. It is impossible to make even a fair short list of the geniuses in this space, I will suggest Diffie, Bruce S., Rivest et al among giants. I did, and continue to do, my insignificant parts, and the only person ive ever been intimidated by was Whitfield.

    Keep up the excellent content, lets all practice humility, and stay curious 🌻

  • Most of your content is great. But this discussion of cybersecurity, LLM's and the threat landscape / attack vectors concerning them is woefully misinformed. It's hard to even know where to start correcting it… This whole discussion is just fractally wrong.

  • It is open source; someone needs to steal the training files – that is the most valuable part. I thought Altman was going to take them with him if his firing was final.

  • AI LLM in the West is mostly (100%?) garbage

    garbage-in/garbage-out

    in order to maintain its propaganda false worldview

Comments are closed.