Interesting that in an article entitled "Why I'm betting against AGI hype", the author doesn't actually say what bet he is making - i.e. what specific decisions is he making, based on his prediction that AGI is much less likely to arise from LLMs than the probability the market is implicitly pricing in suggests. What assets is he investing in or shorting? What life decisions is he making differently than he otherwise would?
I say this not because I think his prediction as stated here is necessarily wrong or unreasonable, but because I myself might want to make investment decisions based upon this prediction, and translating a prediction about the future into the correct executions today is not trivial.
Without addressing his argument about AGI-from-LLMs - because I don't have any better information myself than listening to Sutskever on Dwarkesh's podcast - I am somewhat skeptical that the current market price of AI-related assets is actually pricing in a "60-80%" chance of AGI from LLMs specifically, rather than all the useful applications of LLMs that are not AGI. But this isn't a prediction I'm very confident in myself.
Armchair commentary.
> I’ve listened to the optimists—the researchers and executives claiming [...]
Actually, researchers close to the problem are the first ones to give farther out target dates. And Yann LeCun is very vocal about LLMs being a dead end.
> farther out target dates
And that's why there's so much investment. It's more of a "when" question than an "if" question (although I have seen people claim that only meat can think).
He is starting a business that depends on them being a dead end
Sounds like he’s putting his money where his mouth is.
Same guy that predicted LLMs couldn't do something in 5000 years, and they did it the next year? (Google this, seriously)
Couldn't do what? You haven't told us what to search for.
Open a browser and search for "lecun 5000".
An issue with betting against the AGI hype is that he's basing it on
> The AGI-from-LLMs thesis fails...
but what if you get a better algorithm? Hinton's neural network work was in the 1980s and transformers are from 2017, but none of it worked that well early on because the hardware wasn't good enough. Now we have loads of fast hardware and thousands of bright people working in AI, and things seem ripe for algorithm improvements. I'm pretty sure it's possible, because the brain works so much more efficiently than LLMs do.
Summary of the current situation...
LLMs have shown us just how easily we are fooled.
AGI has shown us just how little we understand about "intelligence".
Standby for more of the same.
The hype is that there is a meaningful AGI discussion that affects today's decision making. Valuations mirror the sentiment that current LLM-based AI will decrease costs by limiting white-collar jobs and perhaps bring in a few new revenue streams taking advantage of stale unstructured information. Other academic and self-aggrandising discussions on the advent of AGI do exist, but even cold fusion might arrive earlier.
I don't think there's a lot of "AGI hype".
I think all the hype is more about AI replacing human effort in more ambiguous tasks than computers have helped with before.
A more interesting idea would be - what would the world do with AGI anyway?
Can't you imagine what a world with a species smarter than humans could be like? Yeah, it's difficult.
Hire digital employees rather than human ones. When all your interaction is digital, replacing the human on the other end with a theoretically just-as-capable AI is one possibility. Then have the AI write docs for your AI employee, and spin up additional employees like EC2 instances on AWS. Spin up 30 to clear out your Trello/Monday.com/Jira board, then spin them back down as soon as they've finished, with no remorse, because they're just AI robots. That's what you could do with such a technology, anyway.
That's for regular human-level AGI. The issue becomes more stark for ASI, artificial superintelligence. If the AI employee is smarter than most, if not all, humans, why hire humans at all?
Of course, this is all theoretical. We don't have the technology yet, and have no idea what it would even cost if/when we get there.
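To make that concrete, here's a toy sketch of the spin-up/spin-down pattern described above, using nothing but an ordinary worker pool; solve_ticket is a hypothetical stand-in for whatever an AI "employee" would actually do:

    # Toy illustration of "spin up 30 digital workers, clear the board, spin them down".
    # Everything here is hypothetical; solve_ticket stands in for an AI agent working a ticket.
    from concurrent.futures import ThreadPoolExecutor

    tickets = [f"TICKET-{i}" for i in range(1, 31)]   # the Trello/Jira backlog

    def solve_ticket(ticket: str) -> str:
        # Placeholder for an AI "employee" handling a ticket end to end.
        return f"{ticket}: resolved"

    # "Hire" 30 workers and let them clear the backlog in parallel...
    with ThreadPoolExecutor(max_workers=30) as pool:
        for result in pool.map(solve_ticket, tickets):
            print(result)
    # ...and leaving the with-block "lets them go" -- no notice period, no remorse.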
"...a philosophical confusion about the nature of intelligence itself...."
That is how it is done today. One asks one's philosophical priors what one's experiments must find.
Contrarianism as a mental property of humans
This isn't Star Trek and we will never have Star Trek; whatever your exact fantasy is, this ends badly. If we had an iota of foresight and agency left, we would be setting up guillotines, not signing up for another round of work/pay/die. All we would need to do is teach nothing but history, philosophy, and math: we would know what we tried before, why and what we should strive for, and how to achieve it. But let's just buy some Vanguard or NVDA or whatever makes the number go up.