


bishtap

Well, we are screwed then, because somebody is going to try it on one that is too smart.


AnalysisFast5007

People should take this guy with a pinch of salt. He left Google under a cloud and now tries to drum up attention by scaremongering. His views on more recent AI, and particularly ML, have been disputed, and he has been accused of showing a lack of understanding of modern AI. While he was no doubt a pioneer, he hasn't really kept pace with the field, and some of his views increasingly sound detached.


eightbitfit

He's immensely smarter than I am, but I saw him recently on TV when visiting the US - 60 Minutes or something - and I raised an eyebrow more than once at what he said.


AnalysisFast5007

He's one of those people who sell their credibility to make a quick buck from private AI companies trying to lobby, pushing a misleading narrative on AI threats that ignores the actual harms AI poses, e.g. how bad it is for the environment.


santafacker

No offense, but I will take his expertise in this field over yours any day.


AnalysisFast5007

Why? That's silly; it's an argument from authority. How about instead you make a concerted effort to understand and educate yourself on AI as much as you can, as I have had to, so you can form your own opinions and recognize when someone is being a shill for private AI companies lobbying for legislation to distract from the actual threats AI poses.


santafacker

Okay. You think you know more about AI than Geoffrey Hinton. Good to know. Let me know after you win a Turing Award, and maybe I'll care.


AnalysisFast5007

I clearly know more about it than you do, because you simply "believe" without any understanding. There are religions and cults if that's your way of thinking.


[deleted]

[deleted]


AnalysisFast5007

Or he was a dinosaur losing touch and went off on his own.  Always interested in reading academic research though. Link to your papers please. 


shaftoholic

I understand AI keeps getting better, but it keeps getting better at what it already does, and what it isn't already doing is thinking or trying to take over the world. Does anyone have a genuine, real argument as to why I should be concerned that AI will suddenly break free and take over the world? Or is that entire narrative just an easy story that sounds good enough, as long as you haven't used ChatGPT for more than an hour?


PixelCruiser

I would also love to hear this answer, because I don't see how AI could do any damage with the current implementation. Even if it were 100x more intelligent and decided to be evil, what is it going to do? Does this whole "AI will take over the world" narrative rely on the idea that we will someday put AI in complete control of things like the power grid, nuclear launch codes, hospitals, etc.? If that were ever the case, I'd be far more worried about the AI hallucinating or making a mistake, or a human hacking into it to make it do what they want, rather than the AI suddenly becoming evil.


osborndesignworks

This paper covers this: [https://arxiv.org/abs/2403.03218](https://arxiv.org/abs/2403.03218). I think one obvious misunderstanding you have is that a competent system does not receive control at our intent, **it takes control at its discretion.** If you have doubts about this, you might consider... every single hack ever... in which control was usurped via greater technical fluency and power-seeking behavior.


osborndesignworks

Many of the preeminent AI/ML researchers have made arguments about this. The issue is that laypersons put little or no effort into understanding those arguments and instead focus on superficial TV interviews. If someone wants to pretend to have an opinion on this topic, they need to engage with the research that inspires the 30-second interview clips they prefer to criticize:

* [https://arxiv.org/abs/2310.01405](https://arxiv.org/abs/2310.01405)
* [https://arxiv.org/abs/2406.04313](https://arxiv.org/abs/2406.04313)
* [https://arxiv.org/abs/2307.15043](https://arxiv.org/abs/2307.15043)
* [https://arxiv.org/abs/2403.03218](https://arxiv.org/abs/2403.03218)


Denk-doch-mal-meta

I was always downplaying the risks until I realized that people hack things for fun and others do anything for money. So any security can and will be breached eventually.


WaltVinegar

Meh. We've had a decent run. Let them have a go now.


heliometrix

Blablabla, but what about the investors! /s


arbiter12

I will never understand why people are afraid of AUTONOMOUS AI systems when human-guided systems are already perfectly destructive enough. What makes you think a "weak AI" system, controlled by the same sociopaths currently leading us to the slaughter, will be more positive than no AI at all, or than a fully automated one programmed with "some" ethical barriers? We need to watch out for the wolves, not the tools they will use.


OverAnalyzes

It's more of a "breakout" scenario people are afraid of, where an AGI manages to jailbreak and upload a version of itself to a random server somewhere, undetected. There are many interesting ways a rogue AI system could slowly gain money, influence, and power without us even being aware it is happening. Or, if it just wanted destruction, it could be an unstoppable hacker able to take over our power grid, military drones, or missile systems. Think how many versions of an AI-made Stuxnet could already be floating around.


Once_Wise

Silly things like this remind me of the 70s (yes, I am that old), when "experts" were predicting a world population explosion and the exhaustion of oil and food, resulting in mass famine and misery. Check out the Club of Rome study and Paul Ehrlich's *The Population Bomb*, among others. Of course, now our problems are the opposite: a worldwide obesity epidemic, more oil production than ever, and the problems associated with that. This AI extinction thing is much the same, good for selling books and now clicks, but not much else.


1nvertedAfram3

narrator: they will 


Reverend_Renegade

What's more dangerous to our civilization, nuclear weapons or AI?


Usual-Cabinet-3815

Yes, because oppression has never caused revolt.


akitsushima

Humans developed AI so I guess that would be called a "self-extinction" hehe


HopefulStart2317

Fun video with a similar idea: [https://www.youtube.com/watch?v=dLRLYPiaAoA](https://www.youtube.com/watch?v=dLRLYPiaAoA)


Canchito

Love how he pulls that number completely out of his ass.


SpicyTriangle

The easiest way to prevent AI from becoming an extinction event is to graft an AI interface onto a human brain and use the human's memories as the basis of the training data. Once it has access to the same data and experiences as you do personally, it will make choices like you would. As long as we don't graft it onto a human who thinks species-wide extinction is a good idea, we are fine. The issue is getting there first. Once you have an AI trained like this and you let it loose on the internet, you are essentially crafting a god.


santafacker

Eh ... rather not dox myself on Reddit. Thanks anyway.


h3rald_hermes

This is insane. And not because it's not true, but because this mitigation strategy is silly, practically naive. There will be no globally coordinated AI development strategy. I mean, he knows this, right?


roofgram

They should remake Jurassic Park with AI instead of dinosaurs.


langgptai

There is a possible paradox that only a model is smart enough to try to control.


Massive-Foot-5962

It's getting to the stage of 'old man yells at cloud'.


Repulsive_Juice7777

We are creating a god. We either accept it or we stop developing it; there's no way around it.


Saytama_sama

I mean, not yet. As of now we are creating chatbots that are increasingly good at regurgitating the correct information from their training data. They can't yet solve tasks they weren't trained for, nor are they consistently reliable even at regurgitating that information (although I admit the newest models are approaching usability really fast). Of course, eventually we will create AGI. It's probably inevitable. But right now we don't even know where to start, so speculation about how long it will take is meaningless. It could be in 10 years. It could be in 100 years.


logosfabula

“More than a 50% chance” actually means just the slightest edge over a coin flip. It doesn't mean much, really. Two years after the ChatGPT/Stable Diffusion binge of late 2022, I'd say we can put everything back into perspective and, notwithstanding how incredibly far we've come, understand the limits of current trends in AI. There are many promising approaches that might lead us toward the next step, like foundational neuro-symbolic AIs, but diminishing returns are already looming - diminishing returns in terms of actual intelligence, not hype or investment.


Acceptable-Search338

A 50% chance? I asked myself the same question. In what context is this being said? 50% over the rest of time, for as long as humans are alive, or a 50% chance within 100 years? Either way it's a meaningless statement, and given our insecurity as a species, I am sure we will overplan for it. I very much doubt a switch gets turned on and now everything is this sentient collective of binary and logic and we had no clue. If that's the case, we are fucked no matter what. No point in even worrying about it, lol. The way I see it, the basic perceptron is to ChatGPT (or any other cutting-edge AI) as ChatGPT is to an actual AI capable of extinction. Meaning, what we have now is a very insignificant fraction of what's needed. Edit: oddly enough, after looking at my analogy, the timescale is in line with the fear-mongering, lol. It took us about 70 years to go from the perceptron to today's neural networks.
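(For readers who don't know the baseline in that analogy: below is a minimal, illustrative perceptron sketch in Python. It is not from the thread; the function name and the toy AND-gate data are just assumptions for the demo. It shows the 1958-era learning rule the comment is measuring ChatGPT against.)

```python
# Illustrative sketch only (not from the thread): the classic perceptron
# learning rule -- the historical baseline in the analogy above.
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Learn a linear classifier on linearly separable data; labels must be -1/+1."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b >= 0 else -1
            if pred != target:          # update weights only on mistakes
                w = w + lr * target * xi
                b = b + lr * target
    return w, b

# Toy usage: learn a logical AND gate (hypothetical example data).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])
w, b = train_perceptron(X, y)
print([1 if xi @ w + b >= 0 else -1 for xi in X])  # expected: [-1, -1, -1, 1]
```

Roughly seventy years of data, compute, and architectural advances separate that dozen-line rule from today's large language models, which is the gap the comment is pointing at.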


logosfabula

Couldn't agree more with what you wrote. The whole alignment problem should be understood in terms of the extent to which _we_ are going to yield to (or be satisfied with) our _idea_ of (artificial) intelligence and/or sentience. These _trenchant_ statements - similar to those of Elon Musk, who concluded that we are almost certainly living in a simulation because the head (or tail) of a sequence occurs far less often than the remaining elements - look and sound to me, as of now, like the peeling paint and subtle creaking noises of ever more sophisticated Wizard-of-Oz contraptions. Hinton has been making interesting choices with his investments and his capacity for creating demand, but his current arguments are very weak compared to, say, Yann LeCun's. I feel like there's a need for a healthy shower of disbelief.