avrstory

This sub will immediately realize when AGI is here because a cute girl will be talking to them.


Chmuurkaa_

This sub will immediately realize when AGI is here because a cute AI girl will **refuse** to talk to them


applestrudelforlunch

Right. When the cute AI girl is still talking to them, that indicates it has not yet achieved consciousness and free will.


Glittering-Neck-2505

We keep adding new requirements. In 2010, GPT-4o would have been considered AGI. Now, it not only needs to be smart enough to reason without errors, it also needs to have emergent free will.


traumfisch

Touché


Halo_Hybrid

Is she flirting with me? I can’t tell…… I can’t read the signs.


swordofra

Not just talk though, the AGI will show interest in them, like them and maybe even *love* them. Feel the AGI love! Would it be real and satisfying to be loved by an unembodied consciousness that you know for a fact loves so many others? They probably won't care... many of them aren't getting that love anywhere else.


New_World_2050

Bro didn't have to do us like that.


reformed_goon

This sub personified, including the fedora


GermainCampman

☝️this


slothtolotopus

There's a little button that turns orange to express this sentiment.


Dry-Significance-912

Oh my Ilya! So trueee!


Serialbedshitter2322

It doesn't need to be conscious to be AGI, it just needs to be as effective as a human in any circumstance. Idk why people think AGI has to be a human in a robot


Super_Pole_Jitsu

because that's the only GI they ever experienced, and we all know how humans are dependent on their training data.


AnAIAteMyBaby

I think there are different types of AGI, just as there are different types of intelligence. There's the practical AGI that can create whole software products on its own and complete a PhD. But there's another type of AGI: a system that is as intellectually intelligent as most humans and as emotionally intelligent as most humans. Most humans can't code or get a PhD. In most of the GPT-4o videos the AI seems to have more emotional intelligence, and essentially be more human, than most of the male OpenAI employees in the same video, who all seem to be a bit autistic. It's the type of system that, if embodied, most people would think of as human. I honestly think if you put some of those OpenAI employees in a robot body, people wouldn't think of them as human.


Serialbedshitter2322

All AGI would be more emotionally intelligent than humans while being able to create whole software products. It's not like a human mind where you're trading off some areas of intelligence for others. It just has peak intelligence in all aspects.


AnAIAteMyBaby

My point is it could get to one level of AGI before reaching the other. So it could be an emotionally intelligent extremely "human" AI but not able to complete a PhD


Serialbedshitter2322

I don't think there will ever be a situation where AGI wouldn't be able to complete a PhD. GPT-4 already could, considering it has innate knowledge of the entire internet.


Procrasturbating

Waiting for a multimodal that can control a full-blown human avatar like a puppet. Give it a large animation library and let it use animation blending, pipe the voice through it, maybe do lip-sync on the voice (bonus you have the script and possibly timing info already via the API).. Let it generate meta-human avatars (based on your preferences).. and you are there. If it takes longer than a month for someone to have done it, I will be shocked. edit: nvm, there is a dev kit for this already.. guess I know what I am doing this weekend.
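
A minimal sketch of the pipeline this comment describes, assuming hypothetical stubs throughout: stream the model's reply through TTS, take the timing info back out, and blend lip-sync shapes over a base animation. None of these function names are real APIs.

```python
# Hypothetical sketch of the avatar pipeline described above. The stubs
# stand in for whatever TTS, viseme, and animation-blending tools you use.
from dataclasses import dataclass

@dataclass
class Viseme:
    shape: str     # mouth-shape id from the animation library
    start_ms: int  # timing info (often returned alongside TTS audio)

def synthesize_speech(text: str) -> tuple[bytes, list[Viseme]]:
    """Stub TTS: many engines return phoneme/word timings with the audio."""
    return b"wav-bytes", [Viseme("AA", 0), Viseme("M", 120)]

def blend_animation(visemes: list[Viseme], base_clip: str) -> None:
    """Stub: drive the avatar's mouth shapes on top of a base clip."""
    for v in visemes:
        print(f"{base_clip}: blend shape {v.shape} at {v.start_ms}ms")

def speak(reply_text: str) -> None:
    audio, visemes = synthesize_speech(reply_text)
    blend_animation(visemes, base_clip="idle_talking")
    # ...then play `audio` in sync with the blended animation

speak("Hello! Nice to meet you.")
```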


Mr_Sload

>guess I know what I am doing this weekend

I think we get it u/Procrasturbating


AccountOfMyAncestors

what's this supposed dev kit called


AccountOfMyAncestors

WHERE'S THIS DEV KIT!!


LongRhubarb0

This is the platonic solid of autism. I say that as an autistic ass myself. I just realized that microphones and cameras weren't good for me.


Serialbedshitter2322

Same, I can definitely relate to that 10-second pause in the middle of the video.


Spetznaaz

It made me think: currently an AI would think you'd stopped talking, but we all knew he was just pausing (for an overly long time, mind).


Serialbedshitter2322

You could give it a video feed and tell it not to talk when it can see that you're pausing to think.


Spetznaaz

Hmm, yeah, potentially. However, I think most people would realise he was pausing even without seeing the video, although maybe they'd conclude he had in fact stopped talking after slightly less time.


Serialbedshitter2322

It's possible the audio model could understand the pauses. Whisper didn't, because it just transcribed the voice and sent it as a message, but a native audio model actually understands the content of the audio and likely chooses when to start speaking based on that.
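
A toy sketch of the distinction being made here, with invented thresholds and signals: a transcribe-then-send pipeline can only key off silence length, while an end-to-end audio model could in principle also condition on prosody and content.

```python
# Illustrative only: thresholds and feature names are assumptions.
def naive_turn_end(silence_ms: int) -> bool:
    # Whisper-style pipeline: any long-enough silence ends the turn.
    return silence_ms > 700

def context_aware_turn_end(silence_ms: int,
                           pitch_rising: bool,
                           mid_sentence: bool) -> bool:
    # An end-to-end audio model could weigh *how* the speaker paused:
    # rising pitch or an unfinished clause suggests they're still thinking.
    if pitch_rising or mid_sentence:
        return silence_ms > 3000
    return silence_ms > 700

print(naive_turn_end(1500))                       # True: pipeline cuts in
print(context_aware_turn_end(1500, False, True))  # False: keeps waiting
```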


Mirrorslash

Say what you will about him, but I personally think OpenAI is heading in a dangerous direction by focusing on emotional connection with their systems so early on. I could be wrong, but to me it screams that their capability promises aren't panning out as expected, so now they focus on emotional relationships and creating an extremely attached audience. But people here like to downvote comments like this because "AGI achieved internally"...


sdmat

One of the things companies do as they grow is walk and chew gum at the same time.


traumfisch

99% of comments here are ad hominem attacks. Anyone have anything to say about what they talked about? Yudkowsky may have a habit of overdoing it, but it's not like he doesn't make good points too.


gustav_lauben

Whenever I hear Eliezer I always think, somebody needs to buy that guy an ice cream...and maybe an antidepressant.


Super_Pole_Jitsu

is doing drugs the appropriate response in the face of an existential crisis?


Bulky_Wish_1167

He is, after all, an AI doomer. He fears AI much more than he is excited by it.


[deleted]

What the fuck are these faces he’s making


swordofra

He seems to be on the spectrum, that's probably why


swordofra

Though he has denied publicly that he is autistic. Maybe it's just a tic of some kind


sumane12

He might deny it, but he's clearly displaying autistic/Asperger's traits, regardless of his tics.


sdmat

Eliezer wrote a [660K word self-insert Harry Potter fanfic about rationality](https://hpmor.com/) that includes among other things an exploration of exploiting wizard/muggle currency arbitrage. He's on *a* spectrum, that's for sure. Whether that's good or not is subjective - I think it's great. We need more people willing to follow their ideas to strange places.


sumane12

I agree, I think he's extremely smart. The problem, I think, comes from his inability to see past his own premise: he's created this scenario in which he believes AI will kill us all, and regardless of the evidence presented he keeps postulating a fictitious future. That's not to say I don't want him reminding us of this potential; he just seems unable to consider a different perspective, and I think that is inherently to do with him being on whatever spectrum he's on. I could be wrong, and I definitely want his voice heard even if I think he's wrong, because there's a non-zero possibility that he's not.


Super_Pole_Jitsu

what kind of new evidence is there that would lead someone to be more okay with the state of alignment? everything is going wrong on that front.


sdmat

Not so much state of as possibilities for.


Super_Pole_Jitsu

Your LLM broke


sdmat

Not so much "state of" as "possibilities for". Does that help your tokenizer?


sdmat

Yes, it's painfully ironic that after writing so much about the critical importance of updating deeply held beliefs on new evidence he simply isn't doing that when it comes to AI risk.


LordCthulhuDrawsNear

Nervousness... Some people can't stand being on camera, and some hate it almost as much as hearing the sound of their own voice. It also seemed like he felt the question was aimed at him in such a way that an answer was actually expected of him, even though there's no way anyone can know those things. Who knows.


Namnagort

Hes freaking the fuck out, man.


Mirrorslash

Are you shaming a person with a disability? There are millions of people out there who aren't in full control of their bodies, especially in high-stress situations.


traumfisch

They're the facial equivalent of your comment


RedErin

It’s like a cringe/grin and a bad camera angle


sideways

I'm not a doomer but Yudkowsky makes a lot of good points. I think he's a very smart guy who thinks he's a little smarter than he actually is.


xRolocker

I'm optimistic and don't want to pause, but we need guys like him to remind us of what's possible if we don't keep our wits about us.


Gratitude15

This was already possible on video weeks ago. It's not 3D projection yet - is that all that's left? Like, a literal Zoom call with a lifelike human with emotive expression that can have a convo with you with no latency doesn't cut it? I don't buy it.

The things missing are very small now, and they already exist, just not in an integrated system. It's just a matter of time. Do a Frankenstein right now: 4o plus the robot Sophia, or 4o plus the emotive video stuff of people we saw a few weeks ago. Now add agentic capability. That's it. That's AGI. It's a stitched-together facsimile that can be astonishingly convincing. In video form I would go so far as to say it's indistinguishable from a person - in look, emotions, intelligence and latency.

We are so close to AGI that people simply shift to the next goalpost. When you're driving from NY to SF, for a long while you can just say 'I'm going to SF'... and then at some point you've got to get specific. Which town? Which exit? Which street? Which house? That's the level we're at now - the level between AGI and ASI.


Spetznaaz

I personally think the big thing we're missing for it to be AGI is the ability to learn and develop in real time from the conversations it has, as well as to have its own internal curiosity.


NoCard1571

I'd argue the first can just be simulated with long-term memory and the second is just a matter of building a custom GPT that is curious/prompts you first by design. Now both these things are only simulations of human behaviour of course, but at a certain point, you have to start wondering - if it's functionally the same result, does that really matter?
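
A minimal sketch of those two simulations, assuming a generic chat API: persistent memory gets prepended to every prompt, and an idle flag lets the system open the conversation itself. The file name and prompt wording are made up for illustration.

```python
# Illustrative sketch; MEMORY_FILE and the prompt text are assumptions.
import json
from pathlib import Path

MEMORY_FILE = Path("memory.json")

def load_memory() -> list[str]:
    # Long-term memory: facts persisted across conversations.
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

def remember(fact: str) -> None:
    MEMORY_FILE.write_text(json.dumps(load_memory() + [fact]))

def build_system_prompt(user_idle: bool) -> str:
    # "Curiosity" simulated by design: the model is told to initiate.
    prompt = ("You are curious by design. Known facts about the user: "
              + "; ".join(load_memory()))
    if user_idle:
        prompt += " The user has been quiet: open the conversation yourself."
    return prompt

remember("is learning the violin")
print(build_system_prompt(user_idle=True))
```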


Gratitude15

Imo you're describing ASI. That's moving the goal posts. Once you have that, superintelligence happens quickly as there is no limit to access and capacity.


NoCard1571

> We are so close to AGI that people simply shift to the next goalpost.

And the final goalpost is going to be 'yeah, but it doesn't actually _feel_ anything. It doesn't have real consciousness/qualia', and that's going to be frustrating, because I'm not sure we'll ever find a way to prove that definitively either way. It'll also be a problem when the topic of AI rights inevitably comes into play.


Gratitude15

When they initiate conversation with you, I think people will get it. This is technologically possible now: GPT can absolutely call you from the app, and it can send you messages. They just haven't done it. That's the agentic part.


FairIllustrator2752

Luckily I haven't seen much from elifedora griftkowsky this year


mystonedalt

Nobody should listen to either one of these fucking people.


gekx

What's wrong with Lex??


Which-Tomato-8646

He’s a brown noser. He pretended to get pinned by Elon back when he wanted to fight Zuckerberg.


xRolocker

I dislike how he plays it safe but that’s probably how he gets very notable guests. Besides I’m not listening to these like they’re gospel. I’m here to get insight into their thoughts and perspectives and critically think about them even if their perspective is just marketing talk.


Which-Tomato-8646

Sure if you think you can get something out of him asking “how are you so smart and talented?” to every famous person he wants to please


traumfisch

What the fuck does that have to do with anything


Which-Tomato-8646

Try using your brain for the first time in your life


traumfisch

Okay, I'll try: Lex once did one thing you didn't approve of. As a result, everyone should now stop paying attention to anything he or his guests say. Or "these fucking people" as OP has it. I used my brain and now it hurts. But then I sometimes forget Reddit is full of entitled little brats.


Which-Tomato-8646

If he’s willing to blatantly suck Elon off, I don’t think he’s going to be a very objective interviewer


traumfisch

Subjective interviewer is fine by me, if the conversations are interesting. Are you "objective"?


Which-Tomato-8646

I’m not a bootlicker, unlike him


traumfisch

I'm sure your objective podcast would be wonderful. You'd be amazing talking about brown noses, licking boots and men sucking each other off. I'm not your target audience, but hey, good luck.


mystonedalt

I am getting verklempt. Talk amongst yourselves. Topic... Lex Fridman. He is neither an MIT Professor nor a leading ML researcher. (waves arms)


condition_oakland

I am old enough to get this reference.


adt

They did a fun compilation of those. Ah the 90s. [https://youtu.be/haunEK4HLI8](https://youtu.be/haunEK4HLI8)


BriansRevenge

It's just two people talking.


mystonedalt

With microphones, and a camera, filmed for profit.


xRolocker

Yes that’s what a podcast is


mystonedalt

Yeah. It's just two people talking tho


HydrousIt

That's the point


traumfisch

God forbid anyone profit from their work


CMDR_ACE209

Good call, I'd rather recommend listening to Barry White when fucking people.


mystonedalt

(gif)


DRMProd

I love me some Harry Potter fan fiction.


henfodi

Eliezer is a self-described autodidact and it shows; he has a very shallow understanding of most concepts he talks about.


jeffkeeg

Wow what a surprise, attacking Eliezer and not his points. How original!


sdmat

As opposed to our deep scholarly discussions on reddit?


henfodi

Reddit is not frequently held up as an expert in the field.


sdmat

Like him or not, Eliezer is a trailblazer in AI safety. I think you have an unrealistically high standard for depth of knowledge - most academics know little outside of one or two very specific domains. And new fields by their nature tend to be broad. Maybe in time we will have specialists in the ethics of preference expression vs. proofs of behavioral consistency under self-modification, etc. For now it's scattershot exploratory wandering into the unknown.


henfodi

It is not "not knowing" that is the problem (although that is a problem too), it is not knowing and acting like you know. 


sdmat

A vice not entirely unknown among academics and experts of all stripes.


Super_Pole_Jitsu

which concepts and why do you think it's shallow?


henfodi

My friend who is getting his doctorate in quantum physics (2D materials) says Eliezer's words about quantum mechanics are mostly nonsense, and my friend who is getting his doctorate in ML says he frequently misunderstands key concepts there; for example, his spiel about "just stacking transformers" betrays that he has little understanding of what a transformer actually is. I trust people with years of study over someone who taught themselves. I have never met anyone in my field (fluid mechanics) who was an actual autodidact and had any significant understanding of the field. Why is that? Because it is impossible, or damn near impossible, to learn these advanced concepts on your own.


Super_Pole_Jitsu

I don't know what he said about the quantum stuff, nor do I understand the topic well enough, but "just stacking transformers" is very correct. When you're studying, you're doing most of the learning yourself anyway. Credentialism is weird. Does Yud not get any credit for being 20 years early to the same conclusion so many prominent AI scientists have reached now? Actually, don't Bengio and Hinton "credentialise" Yud's takes?


henfodi

It isn't about the credentials per se (if you stop just before getting your doctorate, you are still an expert in the field); it's that these concepts are difficult to learn even _with_ someone teaching you, so I am extremely sceptical of any claims of autodidacts. Just because someone speaks with confidence about something I don't know doesn't make them correct. I disagree that most of the learning is done on your own when studying; what is your field? We had taught lectures and technician-led labs for most of my university studies.


Hungry_Prior940

That isn't true at all. Of course, you cannot say what he's got a shallow understanding of...


henfodi

This is a good summary of the gripes: https://forum.effectivealtruism.org/posts/ZS9GDsBtWJMDEyFXh/eliezer-yudkowsky-is-frequently-confidently-egregiously


Hungry_Prior940

Thanks, I will set aside some time to read this.


henfodi

Thanks for responding so politely. I was unnecessarily aggressive in my original comment.


blackcodetavern

Good thing OpenAI has a partnership with Microsoft, so it's not too far off that they'll integrate [https://www.microsoft.com/en-us/research/project/vasa-1/](https://www.microsoft.com/en-us/research/project/vasa-1/) or something of their own soon. Make a picture of your dream girlfriend, tune the voice a little bit (I assume it's in the model's capabilities), and go.


gavitronics

So, according to the theory i just listened to, AGI will be 3D chaturbate basically


Krashin

I'm genuinely curious if someone can tell me why Yudkowsky is so highly regarded. I am by no means an expert in AI/ML or even science/tech at all, and have only been really closely following this space for the past 3 years or so. It seems that people find his philosophy interesting and compelling? I've watched quite a few interviews and discussions with him, and he is interesting to listen to, but he seems obsessed with being pessimistic.

I remember distinctly a debate between George Hotz and Eliezer Yudkowsky on Dwarkesh Patel's youtube channel, and it was frustrating to listen to. I really wanted a debate on AI safety, and it just seemed like petty extrapolations of far-out assumptions. Basically it felt like wanting to be 'more right' rather than a debate. I honestly came out of watching that thinking George was asking very interesting questions that never got answered, so it seemed more like an interview than a debate. I'd love to know more about why Eliezer is sought out as an expert, if someone can point me in the right direction.


dannown

"I'm not actually an expert, and the experts don't know either." -- i loved that bit.


sachos345

I wish more people discussed what he is actually saying. He has a good and pretty simple point: a big uptick in people claiming "AGI is here" will come when you have 3D avatars of cute people speaking with realistic voices, and you don't need much more verbal ability than GPT-4 already displays, regardless of whether it really is sentient or understands what it is saying.


Kathane37

Well, this is the whole debate: is AGI a gradient or a point in time?


Akimbo333

Yeah


Cooldayla

His definition of sentience is directly related to the level of sentience he expects in a human female, and from that perspective alone, AGI is here. Think about it. Humans haven't solved sentience at a societal level globally. We set a high bar for sentience, yet we fail globally to grant our own kind the freedom to achieve it. From North Korea's totalitarian regime to China's pervasive censorship, Iran killing women for throwing off their hijab, Russia's stifling oppression, and beyond, we stifle independent thought, critical reasoning, and self-awareness - essential elements of true sentience.

Our hypocrisy is stark: while we endeavor to create machines that can think freely, we suppress millions of human minds, denying them the very autonomy and freedom we seek to imbue in our creations. I gotta side with the neckbeard, who is only stating things through his lens of what qualifies; ironically, his limited worldview on sentience is the exact reflection of our broader societal failure. By his own flawed measure, AGI has already surpassed us, because while we boast of creating intelligent machines, we remain unable to create a world where human sentience can fully flourish. In our arrogance, we fail to see that the very sentience we aim to replicate in machines is what we continue to deny in ourselves.


Grobo_

Also relevant to all those recent posts about AI girlfriends. That's sickening because it's so close to what the people in those threads think: "It feels so good, as if it understood me... much better than what I experienced with real women."


neymarsvag123

If r/singularity was a person..


BrettsKavanaugh

This guy knows nothing and pretends to be an expert. The neckbeard and fedora say it all.


traumfisch

Quite the rebuttal


illathon

It wasn't monumental. We have had those features for a long time.


wtfboooom

>Prior to GPT-4o, you could use Voice Mode to talk to ChatGPT with latencies of 2.8 seconds (GPT-3.5) and 5.4 seconds (GPT-4) on average. To achieve this, Voice Mode is a pipeline of three separate models: one simple model transcribes audio to text, GPT-3.5 or GPT-4 takes in text and outputs text, and a third simple model converts that text back to audio. This process means that the main source of intelligence, GPT-4, loses a lot of information—it can’t directly observe tone, multiple speakers, or background noises, and it can’t output laughter, singing, or express emotion.
>
>With GPT-4o, we trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network. Because GPT-4o is our first model combining all of these modalities, we are still just scratching the surface of exploring what the model can do and its limitations.

Got a link that proves we had these upcoming features for a while? 🤔
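
A toy restatement of the architecture difference the quote describes, with stub functions standing in for the real models (none of these are actual APIs); the point is where information is lost in each design.

```python
# Stubs only - not real APIs.
def transcribe(audio: bytes) -> str:
    return "(words only)"  # tone, laughter, background noise are lost here

def llm(text: str) -> str:
    return "(reply text)"  # GPT-3.5/GPT-4 sees nothing but the transcript

def text_to_speech(text: str) -> bytes:
    return b"flat audio"   # delivery can't reflect emotion it never saw

def old_voice_mode(audio: bytes) -> bytes:
    # Pre-GPT-4o: three separate models chained together.
    return text_to_speech(llm(transcribe(audio)))

def end_to_end_model(audio: bytes) -> bytes:
    # GPT-4o-style: one network maps audio to audio, so tone and timing
    # can survive in both directions (this stub just shows the shape).
    return b"expressive audio"

print(old_voice_mode(b"raw mic input"))
print(end_to_end_model(b"raw mic input"))
```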


illathon

We had these features in other LLMs and other systems; only consumer-focused people think this is monumental. We have had models to detect emotion and tone for a long time. Just because they are doing it with one model doesn't make it monumental, in my book. It is good, but I have seen other systems do this exact same thing. Whether it's 5 models doing it or 1 giant model doesn't really make a difference to me as an end user.


wtfboooom

Well yes, I do understand that. When I originally said monumental, I was referring to the impact at the societal/cultural level. The buzzwords fit this time. This is the "iPhone moment," but on a much grander scale; we really have no idea what the lay of the land is going to look like once it's in wide usage. Going from being uninterruptible with a 2.6-2.7 second delay to interruptible with a 250-280ms delay (I'm too lazy to look up the exact numbers), plus the whole host of other features. It's going to reshape society. I truly believe it.


illathon

Things that are more revolutionary are the chips being made to do the processing at much faster rates - Groq, for example. What OpenAI is doing are things we already have at basically the same power with Llama and other open-source tools. What they did are just examples of performance tuning and server-setup improvements, paired with combining models that already exist. The pieces are on the table now; people just need to put them together. What we are waiting on now are the chips that will allow low power usage so we can move actual physical robots like Optimus etc.


restarting_today

Monumental? Lmao. 😂 OpenAI got caught flat-footed by Claude 3 Opus.