AI singularity may come in 2027 with artificial 'super intelligence' sooner than we think, says top scientist

Mar 6, 2024
Citing Kurzweil is about as credible as citing the National Enquirer. While LLMs have some impressive results, their limited generalized pretraining (with a knowledge cutoff around 2021), their frequent "hallucinations", and their need for huge numbers of GPUs/TPUs in giant, power-sucking data centers just to produce basic generative results all suggest that they are far more than a few years away.
It appears that hundreds of trillions of tokens may be needed to really achieve generalized intelligence, and no, we are not close to that, nor are we in a period of exponential growth for these resources. Moore's law has not operated reliably in this decade or even before, so that deus ex machina will not rescue general AI any time soon. We are still talking about trillions of dollars of investment, and a radical redesign of TPUs or other neural computing hardware and of computer storage, to make much of a dent in this problem. Therefore, a decade is likely the earliest, and a very optimistic, timeframe for something approximating human intelligence.
Also, even after massively scaling down the computational cost through knowledge distillation and other techniques, achieving the singularity would still require scaling beyond the ability to simulate a single human intelligence. The singularity was predicated on the idea that cheap resources could simulate generalized intelligence; instead, it appears to be anything but cheap.
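For anyone unfamiliar with the knowledge distillation mentioned above, here is a minimal, self-contained sketch of its core idea (the function names and toy logits are made up for illustration): a small "student" model is trained to match the temperature-softened output distribution of a large "teacher", which is what lets the student get away with far fewer parameters.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; a higher temperature softens the
    distribution, exposing the teacher's 'dark knowledge' about
    near-miss classes."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student's softened predictions against the
    teacher's softened targets -- the quantity minimized during
    distillation training."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return -sum(p * math.log(q) for p, q in zip(p_teacher, p_student))

# Toy example: a large "teacher" and a small "student" scoring 3 classes.
teacher = [4.0, 1.0, 0.5]
student = [3.5, 1.2, 0.6]
print(round(distillation_loss(teacher, student), 4))
```

A student whose logits closely track the teacher's yields a lower loss than one that disagrees, which is the signal driving the compression.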
 

God

Mar 7, 2024
I'm surprised Ben apparently got two things wrong:

1. He didn't cite LMMs (Large Multimodal Models), only LLMs. He apparently ignored GPT-4V, a multimodal model able to process both images and text.

2. ChatGPT, i.e. the natural-language/text version, is reasonably seen as "emergent AGI", not narrow AI. It is able to do several tasks quite well, comparably to many humans, despite its flaws. Claiming that it's narrow seems wrong.

It's rather odd that Ben calls ChatGPT "narrow" while it passes bar exams, physics exams, math exams, and coding exams, writes poems, music, and stories, and helps robots follow instructions. Maybe Ben is underestimating the impact of language for humans?

Source:
i. arXiv: Levels of AGI: Operationalizing Progress on the Path to AGI (Morris et al.)
ii. GPT-4V(ision) System Card (OpenAI)
 

God

Mar 7, 2024

ChatGPT is already reasonably seen as "emergent AGI", not narrow AI. It also supposedly has around 1 trillion parameters. Notably, I've cited elsewhere that biological brains have around 500T parameters, though in a case of hydrocephalus that destroyed around 90% of brain volume, the remaining roughly 10%, or about 50 trillion biological parameters, was claimed to leave the patient functioning "normally" (The Lancet, 2007). So maybe around 50T parameters is the sweet spot.
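The arithmetic behind that claim can be laid out as a quick back-of-envelope check; the figures below are the ones quoted in this post (500T biological parameters, ~1T for ChatGPT, ~10% of brain volume surviving), not independently verified numbers.

```python
# Back-of-envelope comparison using the figures quoted above.
BIOLOGICAL_PARAMS = 500e12   # ~500T synapses, per the cited Wikipedia estimate
CHATGPT_PARAMS = 1e12        # ~1T parameters, per the rumored figure
surviving_fraction = 0.10    # ~10% of brain volume left in the Lancet 2007 case

surviving_params = BIOLOGICAL_PARAMS * surviving_fraction
print(f"Surviving biological params: {surviving_params:.0e}")        # 5e+13 (50T)
print(f"Gap vs. ChatGPT: {surviving_params / CHATGPT_PARAMS:.0f}x")  # 50x
```

On these numbers, even the heavily damaged brain would still carry roughly fifty times ChatGPT's rumored parameter count.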

Separately, it's rather odd that Ben calls ChatGPT "narrow" while, despite its flaws, it passes bar exams, physics exams, math exams, and coding exams, writes poems, music, and stories, and helps robots follow instructions. Maybe Ben is underestimating the impact of language for humans?

Sources:
i. Wikipedia/Neuron (500T biological params)
ii. arXiv: Levels of AGI: Operationalizing Progress on the Path to AGI (Morris et al.)