My SRE brain, reading between the lines, says they've been a feature factory and the tech debt finally caught up with them.
My guess for why they've been down so long: they don't have a good rollback path, so they're attempting to fix forward with limited success.
More likely that their core database hit some scaling limit and fell over. Their status page talks constantly about them working with their "upstream database provider" (presumably AWS) to find a fix.
My guess: they use AWS-hosted PostgreSQL, autovacuuming fell permanently behind without them noticing and can't keep up with organic growth, and they can't scale vertically because they already maxed that out. So now they have to do crash migrations of data off their core DB, which is why it's taking so long.
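If that guess is right, it's the kind of thing a periodic check would catch. A minimal sketch (hypothetical connection string and thresholds, obviously not Webflow's actual setup) that flags tables where dead tuples pile up and autovacuum has gone stale:

    # Hypothetical example: spot tables where autovacuum has fallen behind.
    import psycopg2  # assumes the psycopg2 driver; any Postgres client works

    conn = psycopg2.connect("host=example-db dbname=app user=readonly")  # made-up DSN
    cur = conn.cursor()
    cur.execute("""
        SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
        FROM pg_stat_user_tables
        ORDER BY n_dead_tup DESC
        LIMIT 10;
    """)
    for relname, live, dead, last_av in cur.fetchall():
        ratio = dead / max(live, 1)
        # a dead:live ratio creeping well past ~0.2 with a stale last_autovacuum is a red flag
        print(f"{relname}: {dead} dead / {live} live (ratio {ratio:.2f}), last autovacuum {last_av}")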
If so, it's probably a good time to apply for an SRE position there, unless they really don't get it!
An outage of this magnitude is almost ALWAYS the direct and immediate fault of senior leadership's priorities and focus: pushing too hard in some areas, not listening to engineers about needed maintenance work, and so on.
And engineers are never the cause of mistakes? There can't possibly be any data to back up the claim that major outages are more often caused by leadership. I've been in severe incidents simply because someone pushed a bad change to a network switch. Statements like these only go to show how much we have to learn, humble ourselves, and stop blaming others all the time.
Leadership can include engineers responsible for technical priorities. If you're down for that long though, it's usually an organizational fuck-up because the priorities didn't include identifying and mitigating systemic failure modes. The proximate cause isn't all that important and the people who set organizational priorities are by-and-large not engineers.
PROLONGED outages are a failure mode that, more often than not, requires organizational dysfunction to happen.
Think of airplane safety; I think it's similar. A good culture makes it more likely that $root-cause is detected, tested for, isolated, monitored, easy to roll back, and so on.
My sympathy for those in the mud dealing with this. Never a fun place to be. Hope y'all figure it out and manage to de-stress :)
We're sorry https://www.youtube.com/watch?v=9u0EL_u4nvw
Edit: an outage of this length smells of bad systems architecture...
Prediction: Someone confidently broke something, then confidently 'fixed' it, with the consequence of breaking more things instead. And now they have either been pulled off of the cleanup work or they wish they had been.
Wow, >31h. I'm surprised they couldn't rebuild their entire system in parallel on new infra in that time. Can be hard if data loss is involved though (a guess). Would love to see the post-mortem so we can all learn.
I doubt it's an infra failure; more likely a software failure. Their bad design has caught up with them and they can't just throw more hardware at it for some reason. Most companies have this https://xkcd.com/2347/ in their stack, and it's fallen over.
CEO's statement: https://www.reddit.com/r/webflow/comments/1mcmxco/from_webfl...
> 99.99%+ uptime is the standard we need to meet, and lately, we haven’t.
Four nines is not what I would be citing at this point. (That's less than an hour of downtime per year, so they've just burned that budget for the next three decades.)
Maybe aim for 99% first.
Otherwise a pretty honest and solid response, kudos for that!
One could have nearly 3 such incidents per year and still have hit 99%.
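Back-of-the-envelope sketch of both claims, assuming the ~31-hour figure from the thread (my numbers, not from the post):

    # Rough downtime-budget math
    HOURS_PER_YEAR = 365 * 24                    # 8760

    four_nines = HOURS_PER_YEAR * (1 - 0.9999)   # ~0.88 h/year (~53 min)
    print(f"four nines budget: {four_nines:.2f} h/year, "
          f"31 h burns ~{31 / four_nines:.0f} years of it")

    two_nines = HOURS_PER_YEAR * (1 - 0.99)      # ~87.6 h/year
    print(f"99% budget: {two_nines:.1f} h/year, "
          f"i.e. ~{two_nines / 31:.1f} outages of this length per year")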
I always strive for 7 9s myself, just not necessarily consecutive digits.
It could be consecutive too, and even start with a 9 and be all nines. Here you go: 9.9999999%
I strive for one 9, thank you. No need to overcomplicate. We use Lambda on top of Glacier.
why go for 9s when you can go for 8s? you can aim for 88.8888888%!
There's an old rant I cannot find at the moment that argued that most systems that believe they are 5 9's are really more like 5 8's.
Hit that and you also master time travel.
I know you were going for a BTTF reference, but a Primer (2004) reference would be a better fit for a VC forum.
https://en.wikipedia.org/wiki/Primer_(film)
Lots get starry-eyed and aim for five nines right out of the gate, when they should have been targeting nine fives and learning from that. Walk before you run.
Interesting the phrase "I'm sorry" was in there. Almost feels like someone in the Big Chair taking a bit of responsibility. Cheers to that.
> Change controls are tighter, and we’re investing in long-term performance improvements, especially in the CMS.
This reads as if overall performance was an afterthought, which doesn't seem sustainable; it should be a business metric, since it's important to the users after all.
Then again, it’s easy to comment like this in hindsight. We’ll see what happens long term.
As a former webflow customer I can assure you performance was always an afterthought.
I mean, if customers don’t leave them over this, higher ups likely won’t care after dust settles.
Decent update. Guess people are really waiting for a fix tho!
Hugops to the people working on this for the last 31+ hours. Running incidents of this significance is hard, draining, and requires a lot of effort; having it go on for so long must be very difficult for all involved.
Hopefully they are rotating teams rather than having individuals stay awake for a dangerous amount of time.
Hugs for their SREs sweating bullets rn
Hugs to the ones dealing with this and the users of Webflow who invested in them for their clientele. Hoping they'll release a full postmortem once the sky clears up.
I'm more surprised that WordPress-like platforms are profitable businesses in 2025.
Because imagine your local biz: it can either pay a designer $1k a year, or DIY and pay GoDaddy 200 bucks. Or 30 bucks for WordPress plus 20 hours of fiddling and asking their cousin for help.
It's not great by our standards, but I bet many of us drink the house wine rather than something more sophisticated, right :)
Why? Genuinely asking. Did you mean because there are free alternatives to self-host? I don't think that it would be so easy for someone in the market for a WYSIWYG blog builder to set everything up themselves.
Exactly. Because of the abundance of the one-click deploy WordPress offerings from value providers like OVH / Hetzner I would think margins are very low for WYSIWYG site builders.
Decent demand, just awful margins.
And most non-tech (and many in tech) have never heard about OVH/Hetzner.
We moved away from webflow because it was slow (got the nickname web-slow internally).
Plus, despite marketing begging for the WYSIWYG interface, they actually weren't creative enough to generate new content at a pace that required it.
We massively increased conversion rates by going fully native and having one engineer churn out parts kits and kitbash landing pages from said kits.
Scale for reference: ~$10M/month
Companies get very good at handling disasters - after the disaster has happened.
The problem is they get good at that specific disaster. They can only plug a hole in the dike after the hole exists; then they look at the hole and make a plug the exact shape of that hole. The next hole starts the process over for it specifically, each time. There's no generic plug that can be reused. So sure, they get very good at making specific plugs. They never get to the point of building a better dike that doesn't spring so many leaks.
It is the job of the CTO to ensure the company has anticipated as many such situations as possible.
It's not a very interesting thing to do, however.
okay, and? the CTO isn't the last word in anything. if they're overruled in favor of releasing new features, acquiring new users/clients, and sales-forward dev cycles, then the whole thing can collapse under its own weight.
It's actually the job of the CEO to keep all of the C-suite people doing their jobs. Doesn't seem to stop the CEO salary explosions.
I think we are agreed.
Companies, after a disaster, focus lots of effort on that particular disaster, leaving all the other potential disasters unplanned for.
If you work at Webflow, you can anticipate LOTS of work in disaster recovery in the next 12 months. This has magically become a high priority for the CEO, who previously wanted features more than disaster recovery planning.
They will wait to focus massive resources on their security until after they get hacked.
You just described every company.
(And also why security is always a losing battle)
Will the company survive long enough to produce a postmortem?
Bring back Failwhale
Incident link: https://status.webflow.com/incidents/0xg8xq3l0h0q
Wow, that whole page does not inspire confidence. It’s 99% LLM slop.
What We’re Doing:
- We are making ongoing adjustments to our infrastructure to improve stability and ensure reliable scaling under elevated load
- Analyzing system patterns and optimizing backend processes where resource contention is highest
- Implementing protective measures to safeguard platform integrity
Expect everything you read from here on out to be "AI Slop".
It's not going to get better in any way.
y’all relax they are vibe coding the fix right now
So now they’re Webno?
Claude, here is the bug, fix it. This is the new log output, fix the error. Fix the bug. Try a different approach. Reimplement the tests you modified. The bug is still happening, fix it. Fix the error.
We're out of credits, create a new account. We've been API rate limited? When did that start happening? When are we going to get access again?
Good luck engineers of the future!
Comment of the year 2025! Thanks for that :D
You forgot to add “think hard!” :)
And a subtle threat: "... or else".
Whatever you do, don't mention cats.
How do you know?
More like "Good luck users of the future" that have to wade through failing infrastructure and tools that were vibe coded to begin with, rate limits notwithstanding.
I have no clue what "webflow" is for based on its marketing/buzzword-filled landing page, but it seems to be just a "no code" abstraction on top of HTML/CSS?
Yet another SaaS that really does not need to be online 24/7. It could have been a simple app where you "no code" on your local machine and asynchronously sync state with Webflow's servers.
It's painful to use, but it lets non-technical clients edit copy and create content in a safe environment. There's a runtime CMS type creator and a WYSIWYG HTML editor with support for code blocks at everything from global to inline scope. It also comes with batteries-included deploys. It's basically a Squarespace/Wix one or two levels up.
if you have a web-based SaaS, everyone gets the updates. if you have a "simple app", then you depend on all of the users being up to date, which you just cannot guarantee. also, what is a "simple app" that does not care about the differences among the various OSes found in the wild? how large a team do you need for each of those OSes to support as wide a user base as a web-only app?
Cost of having a reliable product with some self-determination for the customer.
the customer can self-determine just fine using a web-based SaaS no-code website builder. it's not like this is a different type of app. the thing it makes is a website, which on top of that is hosted by the maker of the app. if you want to make a website to host on your own servers, then you are not the target audience for this web app.
you're like the person complaining that the hammer isn't very useful for driving in a screw. you need a different tool/app if you want to make a site you host yourself.