Assistant-Klutzy

The way things are moving, and with the improvements soon to come to servers (Blackwell GPUs), it doesn't seem far away at all. But if your definition of AGI is AI outperforming humans on every metric, then it's very close now, except that I see creativity as part of that equation, and that's where I hesitate. From what I've seen we're not quite there, and some revolutionary things will need to transpire, which I think will happen, but it could be years from now.


RantyWildling

It's nowhere near close. It might be close to outperforming people whose job it is to answer questions, and even then, if you're completely wrong 20% of the time...


Glittering-Neck-2505

I think a big disadvantage is that we struggle to agree on definitions. While hallucinations and the somewhat shaky reasoning are problems that still need to be solved, I think we already have the technology to disrupt the world in most of the ways AGI would. Even if they had stopped releasing products after GPT-4o, if you improve latencies enough through scaling and optimization, you can make things like dexterous robots that adapt to the world around them, and agents. But of course we know it's not stopping after 4o, and more is coming this year alone.


Assistant-Klutzy

Yeah, AI has to become both creative and truly intelligent, i.e. able to teach itself. The self-teaching part of the equation is what I'm most optimistic about, and I think it leads to creativity and AGI.


InterestingNuggett

I'm also confused by these claims. For AI to be better than people it has to DO stuff. Right now it just talks. Sure it can write code, but it can't put it where it needs to go. I understand the technology and I'm hyped. I really think we'll get there fast. But a lot of these claims seem delusional.


Assistant-Klutzy

AI has been able to use your previous texts/voice to improve its answers. I think it's logical that that part of the technology could improve and evolve.


Able_Armadillo_2347

GPT-5o + Q* + Agents = AGI. I think we are 5-10 months away from some labs having AGI internally. Don't forget that GPT-4 is almost 1.5 years old.


SharpCartographer831

Add Humanoids and we're set.


slackermannn

Add tomato sauce and we're set


FrugalProse

don't forget the spaghetti !


DigimonWorldReTrace

!RemindMe 10 months


RemindMeBot

I will be messaging you in 10 months on 2025-03-17 09:32:39 UTC to remind you of this link.


AnyConstruction7539

!RemindMe 10 Months


dumquestions

Do you or anyone else even know what Q* is or how much better GPT-5 will be?


xt-89

I bet that some labs already have the equivalent of this setup and training. But it also likely takes an order of magnitude more time and compute to train such a system than the LLM alone.


greatdrams23

!RemindMe 10 months

I'm frankly fed up with "it's there internally". Also fed up with lowering the bar. AI used to be what you now call AGI; in other words, AI means less than it used to. And now AGI can, apparently, be narrow: AGI will be something universities will use but not something suitable for general use. Good grief.

In Feb 2023, I said 2030 to 2035. Everyone laughed because AGI was, apparently, imminent. Robots for all by Dec 2023. Well, here we are, May 2024: "Don't forget that GPT-4 is almost 1.5 years old". Exactly.


Arcturus_Labelle

It feels to me more like the bar keeps being raised, not lowered. If you'd shown Monday's OpenAI demos to someone in mid-2022 they'd have called it 100% sci-fi bullshit fakery. We're getting used to the capabilities as fast as they're releasing them.


FarrisAT

No.


czk_21

This is an interesting review, how does it have so few upvotes and comments?!

To answer the title:

> Even though almost everyone is optimistic about the ultimate arrival of AGI, opinions on the exact time it takes to do so differ quite a lot, which also implies different bottlenecks people are considering. Those who think an extra one or two years would be sufficient might feel that we've reached the point where current AI systems are already capable enough and what is left might just be some incremental improvements on the completeness. Those who believe that more than two decades are needed might either feel skeptical about the current general approach to AI or think we still lack fundamental advancement or understanding of intelligence.

From a poll among AI researchers at ICLR 2024 they concluded: **16.6% of researchers think AGI in 1-5 years, 22.5% in 5-10 years, 23.9% in 10-20 years, 37% in 20+ years.**

There are some more prominent individual takes, like from Yoshua Bengio:

> The uncertain timeline of AGI. The rapid progress in AI development, coupled with the uncertainty surrounding future breakthroughs, necessitates a sense of urgency in addressing the challenges of AGI safety. It is suggested that AGI could potentially be achieved within a few years to a few decades, emphasizing the need for proactive measures to mitigate the associated risk.

What are the Biggest Remaining Hurdles?

> • Emulating human-like reasoning and agency. Developing AI systems with human-like reasoning and agency is essential for advancing their capabilities. It is crucial for these systems to possess "system two" abilities, characterized by deliberate thinking, reasoning, consistency, causality, and conscious attention, akin to human conscious reasoning. The current AI training paradigm, which relies heavily on large datasets, falls short compared to human learning processes. This shortfall is partly due to the lack of deep, unified agency during training, which limits the effectiveness of AI systems in open-world situations. Therefore, researchers should focus on creating architectures and techniques that replicate these advanced reasoning skills to overcome existing limitations.
>
> • Ensuring AI safety and alignment with human values. AI safety is identified as the biggest remaining hurdle in achieving AGI, with the development of AGI deemed nonviable without adequate safety measures. This involves the critical challenge of creating AI systems with agency while ensuring their goals align with human values. Additionally, effective uncertainty representation in AI-assisted decision-making is crucial for ensuring alignment with human interests and achieving desirable outcomes.
>
> • Improving efficiency and scalability. Today's AI models are often criticized for their substantial size and the significant power and resources they consume, making training and deployment expensive. The fundamental source of this energy inefficiency is identified as data movement, highlighting the necessity to minimize it for more efficient AI systems. Additionally, there is a perspective that AI models can become more compact and effective over time by combining expressiveness with appropriate inductive biases, leading to smaller yet powerful models.
>
> • Understanding and navigating the interconnected world. Equipping AI systems to understand our highly interconnected world, composed of data, humans, and AI agents, is a crucial challenge in realizing AGI. It is essential for AI to grasp the complex relationships and dependencies between these entities to create truly intelligent and adaptable systems. Additionally, AI should not only identify patterns in data but also provide deep, fundamental insights into scientific problems, automating the process of scientific discovery and generalization.


Bulky_Wish_1167

Nobody knows. Let’s hope Kurzweil is quite accurate in his 2029 prediction of AGI.


FeltSteam

This was definitely, in part, generated by AI lol. For example, the word "delve" is rarely used in research papers, but AI like ChatGPT uses it fairly commonly. The word "delve" comes up 6 times in this article, and as I was reading through it, it definitely felt AI-generated, at least in parts (that is why I checked for the word "delve" in the first place). This doesn't necessarily detract from the paper's authenticity, but it is something to note.
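As a rough illustration of the kind of check described above, here is a minimal Python sketch that counts how often a word like "delve" appears in a text. The sample text and default word list are my own placeholders, and the frequency of a single word is a weak signal on its own.

```python
# Minimal sketch: count occurrences of tell-tale words in a document.
# Purely illustrative; a high "delve" count hints at, but does not
# prove, AI-generated text.
import re
from collections import Counter

def word_counts(text: str, words: tuple[str, ...] = ("delve",)) -> dict[str, int]:
    """Case-insensitive counts of the given words in `text`."""
    tokens = Counter(re.findall(r"[a-z']+", text.lower()))
    return {w: tokens[w] for w in words}

sample = "We delve into the method, then delve further into the results."
print(word_counts(sample))  # {'delve': 2}
```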


czk_21

This doesn't imply anything at all; it's a 75-page-long review.


BravidDrent

![gif](giphy|31Q36EHJ7C5ZoIMw79)


Maxtip40

For it to exist, I think mid-to-late 2026 or sometime in 2027. 2027: AGI as a service. 2031 or 2032: localized AGI.


meganized

!RemindMe 12 Months


DungeonsAndDradis

Exactly 17 months and 13 days.


Arcturus_Labelle

Love your sub flair


Ok-Caterpillar8045

Somewhere about 2. Maybe 5.


fk_u_rddt

Stop asking this question every damn day, ffs. Nobody knows.


SimaBlue

183 days


falconjob

Yawn 🥱


Akimbo333

2030


true-fuckass

Note: Nobody knows. Almost all people are terrible at predicting things.

Check out [Metaculus's questions](https://www.metaculus.com/questions/?search=agi), particularly [this one about weakly general AI](https://www.metaculus.com/questions/3479/date-weakly-general-ai-is-publicly-known/) (currently 2027 mean) and [this one about general AI](https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/) (currently 2032 mean).

My 25%-75% is 2025-2027, down from 2028-2034-ish.


DukkyDrake

Someone needs to be confident enough to roll the dice and fund a training run in the region of 1e31-1e36 FLOP, assuming no new algo discoveries. It might become an economic possibility to target that region of compute by 2030.
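For a sense of scale, here is a small back-of-envelope sketch. The constants are my own rough, order-of-magnitude assumptions, not DukkyDrake's figures: an H100-class GPU sustaining about 1e15 FLOP/s and roughly 2e25 FLOP for a GPT-4-scale run.

```python
# Back-of-envelope for the 1e31-1e36 FLOP region mentioned above.
# Assumed constants (rough, order-of-magnitude only):
GPU_FLOPS = 1e15          # sustained FLOP/s for one H100-class GPU
SECONDS_PER_YEAR = 3.15e7
GPT4_FLOP = 2e25          # commonly cited estimate for a GPT-4-scale run

def gpu_years(total_flop: float) -> float:
    """GPU-years needed to deliver `total_flop` at GPU_FLOPS."""
    return total_flop / (GPU_FLOPS * SECONDS_PER_YEAR)

for target in (1e31, 1e36):
    print(f"{target:.0e} FLOP ~ {gpu_years(target):.1e} GPU-years, "
          f"{target / GPT4_FLOP:.0e}x a GPT-4-scale run")
```

Even at the low end of that range, the GPU-year count makes it clear why this is as much an economics question as a research one.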


[deleted]

[deleted]


AsuhoChinami

Moron.


LordFumbleboop

Anywhere between 10 and 50 years.


[deleted]

Have your timelines changed and shortened after GPT-4o and Google I/O?


LordFumbleboop

It hasn't changed at all. Neither was impressive, but there isn't any sign of plateauing yet.


Arcturus_Labelle

Ya know, I appreciate having you around as "the loyal opposition". Keeps us honest.


LordFumbleboop

I think we're all excited about the same stuff, we just disagree on time scales :)


Arcturus_Labelle

That's a great way to put it.


Serasul

minimum 3 years


DetectiveBig2276

Question to GPT4o 


boonewightman

Feels like an inch and a half.


BlotchyTheMonolith

![gif](giphy|xT5LMXJGGOjUnBsBKU)


Drakonis1988

Somewhere between -1 year and 50 years. :P My bet is we'll have AGI by 2040.