rashidujang 2 hours ago

> There will be a daily call for those tasked with improving the chatbot, the memo said, and Altman encouraged temporary team transfers to speed up development.

It's incredible how 50-year-old advice from The Mythical Man-Month is still not being heeded. Throw in a knee-jerk solution of a "daily call" (sound familiar?) for people who are already wading knee-deep through work and you have a perfect storm of terrible working conditions. My money is on Google, which in my opinion has not only caught up with but surpassed OpenAI with the latest iteration of its AI offerings.

  • wlesieutre 2 hours ago

    Besides, can't they just allocate more ChatGPT instances to accelerating their development?

  • palmotea an hour ago

    > It's incredible how 50-year-old advice from The Mythical Man-Month is still not being heeded.

    A lot of advice is that way, which is why it is advice. If following it were easy everyone would just do it all the time, but if it's hard or there are temptations in the other direction, it has to be endlessly repeated.

    Plus, there are always those special-snowflake guys who say "that's good advice for you, but for me it's different!"

    Also, it wouldn't surprise me if Sam Altman's talents aren't in management or successfully running a large organization, but in Machiavellian manipulation and maneuvering.

  • dathinab 2 hours ago

    the thought that this might be done on the recommendation of ChatGPT has me rolling

    think about it: with how much bad advice is out there on certain topics, it's guaranteed that ChatGPT will promote common bad advice in many cases

  • amelius 2 hours ago

    Imho it just shows how relatively simple this technology really is, and nobody will have a moat. The bubble will pop.

    • deelowe 2 hours ago

      Not exactly. Infra will win the race. In this respect, Google is miles ahead of the competition. Their DC solutions scale very well. Their only risk is that their hardware and low-level software stack are EXTREMELY custom; they don't even fully leverage OCP. Having said that, this has never been a major problem for Google over their 20+ years of moving away from OTS parts.

      • amelius an hour ago

        But anyone with enough money can make infra. Maybe not at the scale of Google, but maybe that's not necessary (unless you have a continuous stream of fresh high-quality training data).

        • piva00 an hour ago

          If making infra means designing their own silicon to target only inference instead of more general GPUs, I can agree with you; otherwise, long-term success comes down to how cheaply they can run the infra compared to competitors.

          Depending on Nvidia for your inference means you'll be price-gouged for it; Nvidia has a golden goose for now and will milk it as much as possible.

          I don't see how a company without optimised hardware can win in the long run.

          • amelius 15 minutes ago

            The silicon can be very generic. I don't see why the price of "tensor" compute units can't come down once the world sees the value in them, just as happened with CPUs.

    • simianwords 2 hours ago

      amazing how the bubble pops either from the technology being too simple or from it being too complex to make a profit

      • amelius an hour ago

        The technology is simple, but you need a ton of hardware. So you lose either because there's lots of competition, or because your hardware costs can't be recouped.

  • tiahura 2 hours ago

    Also, Google has plenty of (unmatched?) proprietary data and its own money tree to fuel the money furnace.

    • FinnKuhn 2 hours ago

      As well as their own hardware and a steady cash flow to finance their AI endeavours for longer.

  • ryandvm an hour ago

    Don't forget the bleak subtext of all this.

    All these engineers working 70-hour weeks for world-class sociopaths in some sort of fucked-up space race to create a technology that is supposed to make all of them unemployed.

    • tim333 29 minutes ago

      You can have a more upbeat take on it all.

  • woeirua 2 hours ago

    Wait, shouldn't their internal agents be able to do all this work by now?

    • JacobAsmuth an hour ago

      They have a stated goal of an AI researcher by 2028. That's several years away.

rappatic 2 hours ago

> the company will be delaying initiatives like ads, shopping and health agents, and a personal assistant, Pulse, to focus on improving ChatGPT

There are maybe a few hundred people in the industry who can truly do original work on fundamentally improving a bleeding-edge LLM like ChatGPT, and a whole bunch of people who can do work on ads and shopping. One doesn't seem to get in the way of the other.

  • whiplash451 2 hours ago

    The bottleneck isn’t the people doing the work but the leadership’s bandwidth for strategic thinking

    • kokanee 2 hours ago

      I think it's a matter of public perception and user sentiment. You don't want to shove ads into a product that people are already complaining about. And you don't want the media asking questions like why you rolled out a "health assistant" at the same time you were scrambling to address major safety, reliability, and legal challenges.

    • tiahura 2 hours ago

      How is strategic thinking going to produce novel ideas about neural networks?

      • ceejayoz 2 hours ago

        The strategic thinking revolves around "how do we put ads in without everyone getting massively pissed?" sort of questions.

  • logsr 2 hours ago

    There are two layers here: 1) low-level LLM architecture, and 2) applying that architecture in novel ways. It's true that there are maybe a couple hundred people who can make significant advances on layer 1, but layer 2 constantly drives progress at whatever level of capability layer 1 has reached, and it depends mostly on broad and diverse subject-matter expertise. It doesn't require any low-level ability to implement or improve LLM architectures, only an understanding of how to apply them more effectively in new fields.

    The real key is finding ways to create automated validation systems, similar to what is possible for coding, that can be used to create synthetic datasets for reinforcement learning. Layer 2 capabilities do feed back into improved core models, even with the same core architecture, because you are generating more and better data for retraining.
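
    To make that concrete, here is a rough sketch of what validation-gated synthetic data generation could look like (toy arithmetic domain, hypothetical function names, not any lab's actual pipeline): sample candidate answers, keep only the ones an automated checker verifies, and store those as training examples. The hard part in practice is writing the equivalent validator for a new field.

        # Hypothetical sketch: validation-gated synthetic data generation.
        import json
        import random
        import re

        def generate_candidates(prompt: str, n: int = 8) -> list[str]:
            # Stand-in for sampling n completions from an LLM: answer the
            # arithmetic prompt, sometimes correctly, sometimes off by one.
            a, b = map(int, re.findall(r"\d+", prompt))
            return [str(a + b + random.choice([0, 0, 1, -1])) for _ in range(n)]

        def make_task() -> tuple[str, int]:
            # A toy domain where the validator is trivial: check the stated sum.
            a, b = random.randint(1, 99), random.randint(1, 99)
            return f"What is {a} + {b}?", a + b

        def build_dataset(num_tasks: int = 100) -> list[dict]:
            dataset = []
            for _ in range(num_tasks):
                prompt, expected = make_task()
                for candidate in generate_candidates(prompt):
                    # The automated validator: keep only verified-correct answers.
                    if candidate.strip() == str(expected):
                        dataset.append({"prompt": prompt, "completion": candidate})
            return dataset

        print(json.dumps(build_dataset(5)[:3], indent=2))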

  • ma2rten 2 hours ago

    Delaying doesn't necessarily mean they stop working on it. It might also be a question of compute resource allocation.

  • techblueberry 2 hours ago

    Far be it from me to backseat-drive for Sam Altman, but is the problem really that the core product needs improvement, or that it needs a better ecosystem? I can't imagine people are choosing their chatbots based on which one gives the perfect answers; it's about what you can do with it. I would assume Google has the advantage because it's built into a tool people already use every day, not because it's nominally "better" at generating text. Didn't people prefer ChatGPT 4 to 5 anyway?

    • tim333 20 minutes ago

      ChatGPT's thing always seems to have been to be the best LLM, hence the most users without much advertising and the most investment money to support their dominance. If they drop to second or third best it may cause them problems because they rely on investor money to pay the rather large bills.

      Currently they are not #1 in any of the categories on LMArena, and even on user numbers, where they have dominated, Google is catching up: 650M monthly for Gemini vs 800M for ChatGPT.

      Also Google/Hassabis don't show much sign of slacking off (https://youtu.be/rq-2i1blAlU?t=860)

    • jinushaun an hour ago

      If that were the case, MS would be on top, given how entrenched Windows, Office and Outlook are.

      • techblueberry an hour ago

        I'm not suggesting that OpenAI write shit integrations with existing ecosystems.

  • jasonthorsness 2 hours ago

    ha what an incredibly consumer-friendly outcome! Hopefully competition keeps the focus on improving models and prevents irritating kinds of monetization

    • another_twist 2 hours ago

      If there's no monetization, the industry will just collapse. Not a good thing to aspire to. I hope they make money whilst doing these improvements.

      • Ericson2314 2 hours ago

        If people pay for inference, that's revenue. Ads and stuff is plan B for inference being too cheap, or the value being too low.

      • thrance 2 hours ago

        If there's no monetization, the industry will just collapse, except for Google, which is probably what they want.

  • rob74 2 hours ago

    I for one would say, the later they add the "ads" feature, the better...

sometimes_all 2 hours ago

For regular consumers, Gemini's AI Pro plan is a tough one to beat. The chat quality has gotten much better; I can share my plan with a couple more people in my family, each with their own proper chat history; I get 2 TB of extra storage (which is also shareable); plus some really nice extras like NotebookLM, which has been amazing for research. Veo/Nanobanana are nice bonuses.

It's easily worth the monthly cost, and I'm happy to pay - something which I didn't even consider doing a year ago. OpenAI just doesn't have the same bundle effect.

Obviously power users and companies will likely consider Anthropic. I don't know what OpenAI's actual product moat is any more outside of a well-known name.

  • piva00 an hour ago

    Through my work I have access to Google's, Anthropic's, and OpenAI's products, and I agree with you: for some reason I barely touch OpenAI's models/products, even though I have total freedom to choose.

Phelinofist 2 hours ago

IMHO Gemini has surpassed ChatGPT by quite a bit - I switched. Gemini is faster, its thinking mode gives me reliably better answers, and it has a more "business-like" conversation attitude, which is refreshing compared to the over-the-top informal ChatGPT default.

  • cj an hour ago

    Is there a replacement for ChatGPT projects in Gemini yet?

    That's the only ChatGPT feature keeping me from moving to Gemini. Specifically, the ability to upload files and automatically make them available as context for a prompt.

    • hek2sch 26 minutes ago

      Isn't that already Nouswise? Nouswise grounds answers in low-level quotes, plus you get an API from your projects.

    • tiahura an hour ago

      Isn't that what Gems are?

      • cj an hour ago

        Ah, yes indeed. Not sure how I missed that.

  • mvdtnz 31 minutes ago

    > [Gemini] has a more "business-like" conversation attitude, which is refreshing compared to the over-the-top informal ChatGPT default.

    Maybe "business like" for Americans. In most of the world we don't spend quite so much effort glazing one another in the workplace. "That's an incredibly insightful question and really gets to the heart of the matter". No it isn't. I was shocked they didn't fix this behavior in v3.

theoldgreybeard 2 hours ago

You can't make a baby in 1 month with 9 women, Sam.

rf15 3 hours ago

This sounds like their medicine might be worse than what they're currently doing...

dwa3592 2 hours ago

Why couldn't GPT-5.1 improve itself? Last I heard, it can produce original math and has PhD-level intelligence.

skywhopper 3 hours ago

    There will be a daily call for those tasked
    with improving the chatbot, the memo said,
    and Altman encouraged temporary team transfers
    to speed up development.

Truly brilliant software development management going on here. Daily update meetings and temporary staff transfers. Well-known strategies for increasing velocity!

  • trymas 2 hours ago

    …someone even wrote a book about this. Something about “mythical men”… :D

    • zingababba 30 minutes ago

      Needs an update re: mythical AI.

  • another_twist 2 hours ago

    "The results of this quarter were already baked in a couple of quarters ago"

    - Jeff Bezos

    Quite right tbh.

    • tiahura an hour ago

      Like when OpenAI started experiencing a massive brain drain.

  • lubujackson 2 hours ago

    Don't forget scuttling all the projects the staff has been working overtime to complete so that they can focus on "make it better!" *waves hands frantically*

  • giancarlostoro 2 hours ago

    I've had ideas for how to improve all the different chatbots for like 3 years, and nobody has implemented any of them (usually my ideas do get implemented in software, as if the devs read my mind, but AI seems to be stuck with the same UI for LLMs). It feels like none of these AI shops are run by people with vision. Everyone's just remaking a slightly better version of SmarterChild.

    • whiplash451 2 hours ago

      Did you open-source / publish these ideas?

      • giancarlostoro an hour ago

        I'm not giving any of these people my ideas for free. Though I did think of making my own UI for some of these services at some point.

    • simianwords 2 hours ago

      I really want a UI that visualises branching. I would like to branch off from specific parts of a response and continue the conversation there, while also keeping the original conversation. This seems like it should be a standard feature, but no one has built it.

      • giancarlostoro an hour ago

        It would require something like snapshotting context windows, but I agree, something like this would be nice; rough sketch below.
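
        Something along these lines is what I have in mind (purely hypothetical, not any product's actual API): every message is a node that remembers its parent, so each branch can rebuild its own context window on demand.

            # Hypothetical sketch of a branching conversation tree.
            from dataclasses import dataclass, field

            @dataclass
            class Node:
                role: str                       # "user" or "assistant"
                text: str
                parent: "Node | None" = None
                children: list["Node"] = field(default_factory=list)

                def reply(self, role: str, text: str) -> "Node":
                    # Add a new message branching off this one.
                    child = Node(role, text, parent=self)
                    self.children.append(child)
                    return child

                def context(self) -> list[dict]:
                    # Walk back to the root to rebuild this branch's context window.
                    node, messages = self, []
                    while node is not None:
                        messages.append({"role": node.role, "content": node.text})
                        node = node.parent
                    return list(reversed(messages))

            # Branch twice from the same assistant reply; each keeps its own history.
            root = Node("user", "Explain transformers.")
            answer = root.reply("assistant", "A transformer is ...")
            deeper = answer.reply("user", "Go deeper on attention.")
            compare = answer.reply("user", "Now compare with RNNs.")
            print(len(deeper.context()), len(compare.context()))  # 3 3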

    • theplatman 2 hours ago

      I agree - it shows a remarkable lack of creativity that we're still stuck with a fairly subpar UX for interacting with these tools

  • simianwords 2 hours ago

    It's easy to dismiss it, but what would you do instead?

  • mlmonkey 2 hours ago

    The beatings will continue until morale^H^H^H^H^H^H ChatGPT improves...

alecco 2 hours ago

OpenAI was founded to hedge against Google dominating AI and with it the future. It makes me sad how that was lost for pipe dreams (AGI) and terrible leadership.

I fear a Google dystopia. I hope DeepSeek or somebody else will counter-balance their power.

  • bryanlarsen 2 hours ago

    That goal has wildly succeeded: there are now several well-financed companies competing against Google.

    The goal was supposed to be an ethical competitor as implied by the word "Open" in their name. When Meta and the Chinese are the most ethical of the competitors, you know we're in a bad spot...

    • alecco an hour ago

      I said DeepSeek because they are very open (not just weights). A young company and very much unlike Chinese Big Tech and American Big Tech.

  • tiahura an hour ago

    Doesn't it seem likely that it all depends on who produces the next AIAYN ("Attention Is All You Need")? Things go one way if it's an academic, and another way if it's somebody's trade secret.

zingababba 33 minutes ago

Does anyone have a link to the contents of the memo?

spwa4 2 hours ago

We are in a pretty amazing situation. If you're willing to go down 10% in benchmark scores, you can easily cut your costs to 25% of what they were. Now DeepSeek 3.2 is another shot across the bow.

But if ML, if SOTA intelligence, becomes basically a price war, won't that mean that Google (and OpenAI and Microsoft and any other big-model player) loses big? Especially Google, since the margin that even Google Cloud (famously much lower than Google's other businesses) requires to survive has got to be sizeable.

pengaru 2 hours ago

Surely they can just use AI to go faster and attend their daily calls for them...

Fricken 2 hours ago

I take this code red as a red flag. OpenAI should continue to concern itself with where it will be 5 years from now, not lose sight of that out of concern about where it will be 5 months from now.

  • theplatman 2 hours ago

    OpenAI is at risk of complete collapse if it cannot fulfill its financial obligations. If the people willing to give them money no longer have faith in their ability to win the AI race, then they're going out of business.

    • Fricken an hour ago

      Exactly. They aren't going to win the AI race chasing rabbits at the expense of long-term goals. We're 3 years into a 10-year build-out. OpenAI and its financiers are clearly too impatient, and they're fucking themselves. OpenAI doesn't need to double its revenue to meet expectations; they need to 50x their revenue to meet expectations. That's not the kind of problem you solve by working through the weekend.

  • dylan604 2 hours ago

    Back in the day before Adobe bought Macromedia, there was a constant back and forth between Illustrator and Freehand where each release would better the competitor at least until the competitor's next release.

    Does anyone in AI think about 5 years from now?

    • Fricken an hour ago

      Google is well positioned because they were thinking about AI from the earliest days. The race is not a sprint, it just seems that way.

poszlem 2 hours ago

To be honest, this is the first month in almost a year when I didn't pay for ChatGPT Pro and instead went for Gemini Ultra. It's still not there for programming, where I use Claude Max, but for my 'daily driver' (count this, advice on that, 'is this cancer or just a headache' kind of thing), Gemini has finally surpassed ChatGPT for me. And I used to consider it to be the worst of the bunch.

I used to consider Gemini the worst of the bunch; it constantly refused to help me in the past. But not only has it improved, ChatGPT seems to have gone down the "nerfing" road, where it now quite often flat-out refuses to do what I ask it to do.

mrkramer 2 hours ago

Google is shivering! /s

vivzkestrel 2 hours ago

In one of the Indian movies, there is a rather funny line that goes like this: "tu jiss school se padh kar aaya hai mein uss school ka headmaster hoon". It translates roughly as: "The school from which you studied and came? I am the principal of that school." Looks like Google is about to show who the true principal is.