Come on everybody, please don't post shallow dismissals. This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.
That includes personal attacks, regardless of who it's about. When a comment has a high bile/information ratio, it's off topic here.
You may not owe $CelebrityBillionaire better, but you owe this community better if you're participating in it.
• Trapit Bansal: pioneered RL on chain of thought and co-creator of o-series models at OpenAI.
• Shuchao Bi: co-creator of GPT-4o voice mode and o4-mini. Previously led multimodal post-training at OpenAI.
• Huiwen Chang: co-creator of GPT-4o's image generation, and previously invented the MaskGIT and Muse text-to-image architectures at Google Research.
• Ji Lin: helped build o3/o4-mini, GPT-4o, GPT-4.1, GPT-4.5, 4o-imagegen, and the Operator reasoning stack.
• Joel Pobar: inference at Anthropic. Previously at Meta for 11 years on HHVM, Hack, Flow, Redex, performance tooling, and machine learning.
• Jack Rae: pre-training tech lead for Gemini and reasoning for Gemini 2.5. Led the Gopher and Chinchilla early LLM efforts at DeepMind.
• Hongyu Ren: co-creator of GPT-4o, 4o-mini, o1-mini, o3-mini, o3, and o4-mini. Previously led a post-training group at OpenAI.
• Johan Schalkwyk: former Google Fellow, early contributor to Sesame, and technical lead for Maya.
• Pei Sun: post-training, coding, and reasoning for Gemini at Google DeepMind. Previously created the last two generations of Waymo's perception models.
• Jiahui Yu: co-creator of o3, o4-mini, GPT-4.1, and GPT-4o. Previously led the perception team at OpenAI and co-led multimodal at Gemini.
• Shengjia Zhao: co-creator of ChatGPT, GPT-4, all mini models, 4.1, and o3. Previously led synthetic data at OpenAI.
Anybody have an idea of how much these people are making?
Minimum wage in California is $16.50 per hour, so they are making that at least.
Altman said on a recent podcast that Zuckerberg is poaching OpenAI researchers, giving offers of up to $100,000,000 (one hundred million).
There are many worrisome aspects to this; it smells like a super bubble.
Even if you're bullish on LLMs, it's not obvious this is the right paradigm even for AGI, let alone something beyond AGI.
Seems like it could be 10 years from now: "Remember during the peak of the bubble when Zuckerberg was paying researchers 100 million dollars to try to make a super humanoid robot out of just a mouth?"
Also Alexandr Wang, Nat Friedman.
This list is specifically about the poaching from OpenAI / DM, I believe.
I couldn't be more disgusted by sama@'s response to Zuck's strategy:
[23:05] "…the strategy of a ton of upfront guaranteed comp, and that being the reason you tell someone to join; really, the degree to which they're focusing on that and not the work and not the mission. I don't think that's going to set up a great culture. I hope that we can be the best place in the world to do this kind of research. I think we built a really special culture for it, and we're set up such that if we succeed at that, and a lot of people on our research team believe we will or have a good chance at it, then everybody will do great financially. I think it's incentive-aligned: mission first, and then economic rewards and everything else flowing from that. So I think that's good. There's many things I respect about Meta as a company, but I…"
Uh huh.
Sam Altman's critique of Meta's recruitment strategy is a textbook example of startup rhetoric. By framing high, guaranteed compensation as a cultural failing that detracts from the "mission," he attempts to moralize a clear economic disadvantage.
This is the core of the startup playbook: persuade employees to forsake their financial best interests in favor of high-risk, high-reward "adventures." There's nothing inherently wrong with that pitch, but the subsequent sanctimony is galling. When talented individuals make a rational choice for their own benefit, Altman's insinuation that they aren't the "people that mattered" is both revealing and repulsive. He's not angry about a breach of principle; he's angry that Zuckerberg is outbidding him.
Well said. Though 20-year-old me would have judged you for this comment as "greedy" ;) Such is the cycle of life: you think you are beyond earthly affairs, only to discover that earthly affairs matter until you are above the earth.
[stub for offtopicness]
Please see https://news.ycombinator.com/item?id=44432526
Is Mark Zuckerberg systematically behind the curve on every hype cycle?
> Is Mark Zuckerberg systematically behind the curve on every hype cycle?
Trend following with chutzpah, particularly through acquisitions, has been a winning strategy for Zuckerberg and his shareholders.
He's just trying to figure out how to monetise your WhatsApp messages
In the "metaverse"
This includes fashion and hairstyles, it seems...
The idea of Mark Zuckerberg being at the helm of digital super-intelligence sickens me.
He spent $50B to be at the helm of the "Metaverse". Don't worry, we are safe.
Better than Samuel Altman being at the helm.
the cringiest possible future
Mark Zuckerberg hiring top AI researchers worries me more than Iran hiring nuclear scientists.
better than sam altman having them
Sam Altman is leading OpenAI out of the goodness of his heart. He told Congress he has zero equity or financial stake in OpenAI.
With luck, they'll vaporize billions of dollars on nothing of consequence.
If they come up with anything of consequence, we'll have a far higher level of Facebook monitoring of our lives in every sphere. We'll also see such a level of AI crap (info/disinfo in politics, crime, arts, etc.) that, ironically, in-person exchanges will be valued more highly than today. When everything you see on pixels is suspect, only the tangible can be trusted.
do you remember the chorus of people on HN two years ago who said that the next AI winter was already upon us?
Were they wrong? It's pretty undeniable that every model release since GPT-4 has been less impactful than the last.
I'm pretty sure that when the top companies are poaching AI leaders for some of the largest payouts in history… we are not in an AI winter. So they were completely wrong. I'm guessing you are one of them.
Zuck has spent more on the Metaverse than on AI. A multi-trillion-dollar company throwing a few billion at a problem means nothing. Show me the results, not the hype.
Bubbles are always the craziest right before they pop
Dot-com bubble: popped, and if you don't have a .com presence you have a .net, .org, or similar.
Blockchain bubble: popped? Looking good; always profitable to bet on crime and stupidity.
Real estate bubble? Popped, but what do you think about the asset price of housing now?
Moore's Law wasn't a law; it was a reflection of investment, which reached dizzying heights in its heyday. I think there's a ton of over-hype now, but some stuff will come out of it.
Poor Sam Altman, 300B worth of trade secrets bought out from under him for a paltry few hundred million.
> Poor Sam Altman, 300B worth of trade secrets bought out from under him for a paltry few hundred million
Sorry, you don't lose people when you treat them well. Add to that Altman's penchant for organisational dysfunction and the (in part resulting) illiquidity of OpenAI's employees' equity-not-equity and this makes a lot of sense. Broadly, it's good for the American AI ecosystem for this competition for talent to exist.
In retrospect, I wonder if the original ethos of the non-profit structure of OpenAI was a scam from the get go, or just woefully naive. And to emphasize, I'm not talking just about Altman.
That is, when you create this cutting edge, powerful tech, it turns out that people are willing to pay gobs of money for it. So if somehow OpenAI had managed to stay as a non-profit (let's pretend training didn't cost a bajillion dollars), they still would have lost all of their top engineers to deeper pockets if they didn't pursue an aggressive monetization strategy.
That's why I want to gag a little when I hear all this flowery language about how AI will cure all these diseases and be a huge boon to humanity. Let's get real - people are so hyped about this because they believe it will make them rich. And it most likely will, and to be clear, I don't blame them. The only thing I blame folks for is trying to wrap "I'd like to get rich" goals in moralistic BS.
> wonder if the original ethos of the non-profit structure of OpenAI was a scam from the get go, or just woefully naive
Based on behaviour, it appears they didn't think they'd do anything impactful. When OpenAI accidentally created something important, Altman immediately (a) actually got involved, to (b) reverse course.
> if somehow OpenAI had managed to stay as a non-profit (let's pretend training didn't cost a bajillion dollars), they still would have lost all of their top engineers to deeper pockets if they didn't pursue an aggressive monetization strategy
I'm not so sure. OpenAI would have held a unique position as both first mover and moral arbiter. That's a powerful place to be, albeit not a position Silicon Valley is comfortable or competent in.
I'm also not sure pursuing monetisation requires a for-profit structure. That's more a function of the cost of training, though again, a licensing partnership with, I don't know, Microsoft, would alleviate that pressure without requiring them to give up control.
It wasn't exactly a scam, it's just nobody thought it'd be worth real money that fast, so the transition from noble venture to cash grab happened faster than expected.
Getting rich going good is better than just getting rich. People like both.
Which part are you skeptical about? that people also like to do good, or that AI can do good?
I'm skeptical that OpenAI was ever feasible as a nonprofit under its original mission, which was:
> Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.
As soon as the power of AI became apparent, everyone wanted (and in some ways, needed) to make bank. This would have been true even if the original training costs weren't so high.
Will the Superintelligence finally make the Metaverse profitable and popular?
So much wasted money it makes me sick…
There is so much money needed to solve other problems, especially in health.
I don't blame the newcomers, but Zuckerberg.
This stuff is ridiculously important for healthcare: It’s a demographic fact that both the US and the world at large are simply not training enough doctors and nurses to provide today’s standard of care at current staffing levels as the population ages.
We need massive productivity boosts in medicine just as fast as we can get them.
I sincerely doubt this understaffing of medical professionals is a technology problem, and I believe it much more likely to be an economic structural problem. And overall, I think that powerful generative AI will make these economic structural problems much worse.
It's a gatekeeping problem. Doctors don't want more doctors because it dilutes their own value, so medical school and residency spots are kept artificially limited.
This is an oft-repeated truism, but what evidence do you have for it?
Here are some facts:
- Ultimately, the main chokepoint for the number of trained physicians is the number of residency spots. You can cut the price of med school to $0 and you'll still end up with only minimally more fully trained doctors, because they all need a residency spot.
- Residency spots are paid for by the federal government. Congress controls the number of available spots. Medical professional bodies do not determine this.
- The AMA has consistently asked members to support legislation increasing funding for GME positions (https://www.ama-assn.org/about/leadership/more-medicare-supp...). At one point (late 90s) they opposed expanding the slots, but this has not been true for some time. And, even if it were true, it's ultimately still not their call.
What huge productivity boosts will this provide that don't amount to "the time you spend with the doctor is shorter than it once was"?
My sister is in a healthcare field. Automatic charting is useful, but not a game changer. Healthcare companies seem to be largely interested in placing AI in between their nurses/doctors and their patients. I'm not terribly excited about that.
It doesn't take super intelligence to give my elderly father a bath or wipe his ass.
I think the main problem is that we would almost need an economic depression so that, at the margin, there were far fewer alternative jobs available than giving my father a bath.
Then also consider: say we do get super-intelligence that adds a few years to his life through better diagnostics and treatment. That actually makes the day-to-day care problem worse in the aggregate.
We are headed towards this boomer long-term-care disaster and nothing is going to avert it. The boomers I talk to are completely in denial about this problem, too. They expect the long-term-care situation to look like what their parents had. I just try to convince every boomer I know that they have to do everything they physically can now to better themselves and stay out of long-term care as long as possible.
Better on ML than the next VR vaporware.
zuck funds health research (a lot, and very ML focused) already
I really do wish there was a way to downvote "because I don't like what the person is saying, even if it's true"
Do you realize how much health-related research Zuckerberg's foundation does? There was literally a post on here last week about it, geez.
Superintelligence, or even just AGI, short-circuits all our problems.
Just another tool that can be used for good or bad.