edf13 2 days ago [-]
Seems to be a very regular occurrence starting around this time of day (14:30 UTC)...
Claude Code returning:
API Error: 500 {"type":"error","error":{"type":"api_error","message":"Internal server error"},"request_id":"---"}
Over and over again!
walthamstow 2 days ago [-]
US Pacific comes online while London is still working and they can't handle it. $380bn valuation btw.
jjcm 2 days ago [-]
No amount of valuation can fix global supply issues for GPUs for inference unfortunately.
I suspect they're highly oversubscribed, thus the reason why we're seeing them do other things to cut down on inference cost (ie changing their default thinking length).
natpalmer1776 2 days ago [-]
Remember when OpenAI wasn’t allowing new subscriptions to their ChatGPT pro plans because they were oversubscribed? Pepperidge Farm remembers.
andai 2 days ago [-]
Wouldn't that be good? I remember back in the day you could only get Gmail thru an invite, it was an awesome strategy. "Currently closed for applications" creates FOMO. They'd just need to actually get the GPUs in relatively short supply. They could do it in bursts though, right? "Now accepting applications for a short time."
I'm not an internet marketer but that sounds like a win win to me. People feel special, they get extra hype, and the service isn't broken.
hirako2000 2 days ago [-]
In the case of Gmail that was fake scarcity.
In the case of Anthropic it's fake availability.
Sam Altman explained the idea is to scale the thing up, and see what happens.
He never claimed to offer a solution to the supply problem that would unfold.
bruckie 1 days ago [-]
Are you sure it was fake scarcity for Gmail? IIRC they did it because they were worried about systems falling over if it grew too fast, and discovered the marketing benefits as a side effect.
iainmerrick 1 days ago [-]
Are you mixing up Anthropic and OpenAI here?
hirako2000 13 hours ago [-]
I didn't. Anthropic and others followed the concept of scaling up models and worry about efficiency and availability later. Sam likely didn't invent the idea but he talked about it.
the_gipsy 2 days ago [-]
Yes, "Pepperidge farm remembers" is usually about how something used to be good.
CoastalCoder 2 days ago [-]
Yeah, but there was a spoof on that (in Family Guy?). It was a tie in to the movie "I Know what you Did last Summer", IIRC.
joquarky 2 days ago [-]
Google Wave demonstrated that this doesn't always work.
scratchyone 2 days ago [-]
maybe, but the response to GPU shortages being increased error rates is the concern imo. they could implement queuing or delayed response times. it's been long enough that they've had plenty of time to implement things like this, at least on their web-ui where they have full control. instead it still just errors with no further information.
skeledrew 2 days ago [-]
I've been experiencing a good amount of delays (says it's taking extra time to really think, etc), and I'm using it during off-peak times.
scratchyone 2 days ago [-]
i notice that as well. most of the time when i see those it has a retry counter also and i can see it trying and failing multiple requests haha. almost never succeeds in producing a response when i see those though, eventually just errors out completely.
hirako2000 2 days ago [-]
Coding is a problem solved. Claude writes the code. I edit it. I code around it.
Engineer roles dead in 6 months.
post-it 2 days ago [-]
> I edit it. I code around it.
You're never gonna guess what software engineers do.
bulbar 1 days ago [-]
Because of the context I would think this is sarcasm, but I am not sure.
hirako2000 13 hours ago [-]
It is.
zachncst 2 days ago [-]
Sure but we don't need GPUs to log in.
sobellian 2 days ago [-]
Their issues seem to extend well beyond inference into services like auth.
ryandrake 2 days ago [-]
Yes. Whenever these outages happen, it always seems that it's their login system that is broken.
bostik 1 days ago [-]
That implies that either the auth is too heavy (possible, ish) or their systems don't degrade gracefully enough and many different types of failures propagate up and out all the way to their outermost layer, ie. auth (more plausible).
Disclosure: I have scars from a distributed system where errors propagated outwards and took down auth...
AlecSchueler 1 days ago [-]
> thus the reason why we're seeing them do other things to cut down on inference cost (ie changing their default thinking length).
The dynamic thinking and response length is funny enough the best upgrade I've experienced with the service for more than a year. I really appreciate that when I say or ask something simple the answer now just comes back as a single sentence without having to manually toggle "concise" mode on and off again.
paulddraper 1 days ago [-]
A. These aren’t rate limit errors from the API.
B. Everything is down, even auth.
ai-x 2 days ago [-]
This precisely justifies Anthropic's market cap to be higher.
dsr_ 2 days ago [-]
Demand at an unsustainably low price does not imply demand at a sustainable price.
bigbadfeline 1 days ago [-]
I'm pretty sure ai-x writes sarcasm and skips the /s for pure fun. Personally, I'm amused and I like what he's doing. Others have done it before him though, it's not a new trick.
tucnak 2 days ago [-]
Assuming perfectly efficient business
azalemeth 2 days ago [-]
I literally just came to HN to ask if I was alone with the acurséd "API Error: 500 {"type":"error","error":{"type":"api_error","message":"Internal server error"},"request_id":"…"}" greeting me and telling me to get back to using my brain!
xnorswap 2 days ago [-]
500-series errors are server-side, 400 series are client side.
A 500 error is almost never "just you".
( 404 is a client error, because it's the client requesting a file that does not exist, a problem with the client, not the server, who is _obviously_ blameless in the file not existing. )
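That split is also what client retry logic keys on. A minimal sketch (the helper names and retry policy here are my own illustration, not any particular SDK's):

```python
import time
import urllib.request
import urllib.error

def should_retry(status: int) -> bool:
    # 5xx is the server's problem and may clear up on its own;
    # 4xx means the request itself is wrong, so retrying won't help
    return 500 <= status < 600

def backoff_delay(attempt: int, base: float = 1.0) -> float:
    # exponential backoff: 1s, 2s, 4s, ...
    return base * (2 ** attempt)

def request_with_retry(url: str, attempts: int = 4) -> bytes:
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url) as resp:
                return resp.read()
        except urllib.error.HTTPError as e:
            if not should_retry(e.code) or attempt == attempts - 1:
                raise  # client error, or out of retries on a server error
            time.sleep(backoff_delay(attempt))
```

(In practice you'd also honor `Retry-After` headers and add jitter, but the 4xx/5xx branch is the core of it.)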
darkwater 2 days ago [-]
> A 500 error is almost never "just you".
I know you added the defensive "almost" but if I had a dollar each time I saw a 500 due to the session cookies being sent by the client that made the backend explode - for whatever root cause - well, I would have a fatter wallet.
iainmerrick 1 days ago [-]
Depending on what you mean by "made the backend explode", that is a server error, so 500 is correct!
Bad input should be a 4xx, but if the server can't cope with it, that's still a 5xx.
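Concretely, the distinction looks something like this (a hypothetical handler sketch; the cookie shape and field names are invented for illustration):

```python
import json

def handle_session_cookie(raw_cookie: str) -> tuple[int, str]:
    # Map failures to the right family: bad client input -> 4xx, our bugs -> 5xx.
    try:
        session = json.loads(raw_cookie)        # untrusted bytes from the client
        user_id = session["user_id"]
    except (json.JSONDecodeError, KeyError, TypeError):
        return 400, "malformed session cookie"  # client error: reject, don't crash
    try:
        return 200, f"hello user {user_id}"     # stand-in for the real backend work
    except Exception:
        return 500, "internal server error"     # anything unexpected is on us
```

The 500s people see in the wild are usually the version of this that skips the first `try`/`except` and lets the deserialization crash propagate.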
xnorswap 2 days ago [-]
Indeed, and also there's a special circle of hell reserved for anyone who dares change the interface on a public API, and forgets about client caching leading to invalid requests but only for one or two confused users in particular.
Bonus points if due to the way that invalid requests are rejected, they are filtered out as invalid traffic and don't even show up as a spike in the application error logs.
azalemeth 2 days ago [-]
I know that in principle this is true. However, I have seen claude shadow-throttle my ipv4 address (I am behind CGNAT), in line with their "VPN" policy -- so I do not trust it, frankly.
paganel 2 days ago [-]
> in line with their "VPN" policy
This is how I learn that they have a "VPN" policy. Thinking of it maybe it makes sense, that is if it's what I think it is, but seems scummy nonetheless.
andyjohnson0 2 days ago [-]
> Seems to be a very regular occurrence starting around this time of day (14:30 UTC)...
8.30am on the US west coast
imdoxxingme 19 hours ago [-]
Probably when they're permitted to start live experiments
freedomben 2 days ago [-]
Yep, daily haha. Well at least this time they aren't just silently reducing thinking on the server side, which ended up making a mess in my codebase when they did that last time. I'd rather a 500 than a silent rug-pull.
JamesSwift 1 days ago [-]
I tend to notice it around 4pm EST
Sol- 2 days ago [-]
Once AGI is achieved, they'll reach the fabled superhuman "two nines" of uptime.
jakemoshenko 2 days ago [-]
Or the more human nine fives.
sensanaty 2 days ago [-]
They seem to be struggling with even a "one 9" as-is.
PunchyHamster 2 days ago [-]
but the Github way of 89.09%
mrbungie 2 days ago [-]
Not defending GH, but that's what tsunamis of slop can do to a system.
dijit 2 days ago [-]
they've been getting worse and worse since way before LLMs.
skeledrew 2 days ago [-]
Since they sold out.
just_once 2 days ago [-]
I guess that's called napping
dust42 2 days ago [-]
Well, at least we know by now that Mythos is a mythos.
hobofan 2 days ago [-]
Maybe Mythos identified too much uptime as a security risk.
heliumtera 2 days ago [-]
Hahahaha, this is the best comment I ever encountered on this website
m_ke 2 days ago [-]
more like 0, once they get AGI they'll capture all value themselves or sell to highest bidders
philipwhiuk 2 days ago [-]
that's the positive outlook ;)
no_shadowban_9 2 days ago [-]
[dead]
lbriner 2 days ago [-]
Funny that I just saw this after getting "Console temporarily unavailable". I am currently at the stage where: 1) I think Claude Code is very impressive, 2) I think pretty much everything else about them is terrible.
* Support really poor, raised a ticket last week and have heard nothing back at all
* Separation of claude.ai accounts and console accounts is super confusing
* Couldn't log into the platform since I had an old org in the process of deletion even though I was invited to a new one (had to wait 7 days!)
* Payments for more API credits were broken for about a week
* Claude chat has really gone to s*t unless it always was. Just getting back terrible answers to simple questions.
* The desktop app is a web app pretending to be a desktop app that doesn't always know it is a desktop app so you get things like, "this will only work in the desktop app". Yes I know, this is the desktop app! "Oh sorry about that but you need to use the desktop app".
* mcp integration and debugging is dreadful, just a combination of generic "an error occurred" and sometimes nothing at all
* MCP only supports OAuth for shared connectors but auth key doesn't work even with "local" servers that are not necessarily local, just the config is local.
You can put those on the health status!
yoran 2 days ago [-]
Anthropic support is reserved only for famous devs on X with > 100k followers.
cozzyd 2 days ago [-]
It's apparently not even possible to get a tax exemption if you work at a place exempt from state taxes. Some institutions are sticklers about this and will refuse to allow payments.
p_stuart82 2 days ago [-]
yeah the desktop app forgets it's the desktop app. claude code feels local right up until the api starts coughing up 500s. same thing, just in a terminal instead of a window.
fartinmyeyes 2 days ago [-]
Of course there's no support, that's like the entire point.
mchusma 2 days ago [-]
Anthropic has been avoiding the hard thing, but they just need to do SOME kind of pricing thing here to shift demand. I expect there is just no amount of tricks that can handle the few hours at peak load. They need "surge pricing".
This seems reasonable surge pricing approach to me:
1. Implement surge pricing for everyone for the peak 2 hours of the day if possible. 2 hours you can work around, 5 hours is too hard.
2. Give existing customers a one time credit for surge
3. Make sure the plans just consume credits at an accelerated rate (e.g. if on the max plan, i just get 1/2 usage during peak hours).
4. Exempt sonnet/haiku from surge (so people can keep using)
5. Make "auto" settings in claude code etc automatically adapt during surge hours, so people don't get surprises by default.
6. For the first 90 days, unofficially waive the first $100 in surge for every user but notify them. To train users about the surge and get them used to it without having them actually pay.
7. (I don't think they would do this but this would help) allow users to fall back to using something like GLM 5.1 or Gemma 4 automatically in outages, with a partnership to handle it. Its not ideal, but i would prefer it in "partner mode" than not. IMO they can charge like 10% on top of the partner fees for this if used in other times, but during outages or surge, partner mode is free. But 100% managed by Anthropic so users don't need to set things up and can just use the Anthropic harness.
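Point 3 above is just a multiplier on the metering side. A toy sketch (the surge window, multiplier, and credit unit are all invented for illustration):

```python
def credits_charged(tokens: int, hour_utc: int,
                    surge_hours: range = range(15, 17),
                    surge_multiplier: float = 2.0) -> float:
    # During the surge window, burn credits faster instead of hard-erroring.
    rate = surge_multiplier if hour_utc in surge_hours else 1.0
    return tokens * rate

# 1000 tokens off-peak vs. inside a hypothetical 15:00-16:59 UTC surge window
assert credits_charged(1000, hour_utc=3) == 1000.0
assert credits_charged(1000, hour_utc=15) == 2000.0
```

The point is that demand shifting needs no new billing machinery at all, just a time-dependent rate on the existing credit meter.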
fluidcruft 1 days ago [-]
One of their challenges is pricing of Max 20x. Max 20x is discounted 50% vs Pro and Max 5x. The way Anthropic's pricing currently works the $20(1x) and $100(5x) tiers are paying double for usage vs the heavy-user $200(20x) tier. That sort of non-linearity only makes sense if there is excess capacity. ChatGPT's new Plus/Pro pricing plan did not copy that aspect of Anthropic's pricing structure and kept "sane" linear pricing.
Generally if you give people unused cycles to burn, they'll feel entitled to finding ways to burn them. So someone who is hitting the wall at x5 goes x20 and now has an extra +x10 to burn. Again, that's good if hardware is sitting around idle and you're encouraging innovation and exploration. It can make less sense when resources are scarce.
mrbungie 2 days ago [-]
They are in a three-way tension between price, quantity/quality and promises of growth to investors (time horizon-sensitive). Nudge one factor too much and the house of cards falls down.
boredtofears 2 days ago [-]
Only on HN will your customers not only tolerate your product once you’ve reached market saturation and started enshittifying it, they’ll write you a whole guide on how to do it!
anonyfox 1 days ago [-]
it's more like the people with the most disposable money want the other users who interrupt their experience to back off so it's not bothering them.
rambojohnson 2 days ago [-]
tech industry suffering from a kind of zoochosis / Stockholm syndrome perhaps.
mesmertech 2 days ago [-]
We went from "Peak hours" meaning 2x usage plus slower to now it just does 500 error
Funny how we all come to HN when the status page is lagging behind. HN is truly the real-time status page
tstrimple 1 days ago [-]
Anthropic has one of the best status pages of any technology I use on a regular basis. Every time I've had an issue, the status page reported an issue. A vastly different experience from Azure or AWS or GCP or honestly most services which pretend to maintain a status page.
Seriously take a look. Compare it to basically any other status page that companies make available. Their results are not flattering with all the downtime and issues. But it's far more transparent than most services I've experienced.
CWwdcdk7h 2 days ago [-]
[dead]
parthdesai 2 days ago [-]
Coding is solved
AzzieElbab 2 days ago [-]
95%
chrisjj 2 days ago [-]
The code runs 95% of the time. I.e. fails only for 1min per 10min.
senorrib 2 days ago [-]
that's 90%.
chrisjj 2 days ago [-]
You got me. I gave the calculation to an "AI" :)
mstaoru 2 days ago [-]
You are absolutely right!
ai-x 2 days ago [-]
Nothing to do with coding and everything to do with capacity.
In fact, this proves that there is no AI Bubble and we are massively capacity constrained (aka we under-invested in infrastructure)
AndroTux 2 days ago [-]
I can't sign in. We're not at capacity for OAuth servers or relational databases. This proves that vibe coding your infrastructure is a bad idea.
ai-x 2 days ago [-]
If you have this view in 2026 April, you are going to have a bad time
blueline 1 days ago [-]
If only the energy posting the 5 trillionth "ngmi" meme could be applied to achieving one 9 of uptime for claude
AndroTux 2 days ago [-]
If you think AI can replace an SRE in 2026 April, I've got a bridge to sell you. I'm not saying "don't use AI." I'm saying don't turn off your brain and let AI drop your production database.
parthdesai 2 days ago [-]
Sorry, should've said AI will replace software engineers in 6-9 months. I'm hoping AI will be the one responding to this incident as well.
pton_xd 2 days ago [-]
91% uptime for Claude Code over the last 30 days. Is that accurate?! I'm not a CC user but that seems awfully low.
They compute it as total minutes down as a fraction of total time. What this means is that being down, say, 55min during peak use counts the same as being down 55min when nobody is trying to use it. And conversely, it counts being up when nobody is trying to use it the same as being up when everyone is trying to use it.
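The gap between the two measures is easy to make concrete (a toy calculation with invented numbers):

```python
def uptime_pct(minutes_down: int, total_minutes: int) -> float:
    # time-weighted: every minute of the day counts the same
    return 100.0 * (1 - minutes_down / total_minutes)

def traffic_weighted_uptime(intervals: list[tuple[int, bool]]) -> float:
    # traffic-weighted: each interval counts by how many requests hit it;
    # intervals is a list of (requests_in_interval, was_up) pairs
    total = sum(reqs for reqs, _ in intervals)
    served = sum(reqs for reqs, up in intervals if up)
    return 100.0 * served / total

# a 55-minute outage looks like ~96% on the clock over one day...
print(uptime_pct(55, 24 * 60))  # ~96.2

# ...but if it lands on the busiest interval, users experience far worse
print(traffic_weighted_uptime([(10_000, False), (1_000, True), (1_000, True)]))  # ~16.7
```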
SpicyLemonZest 2 days ago [-]
Over 90 days though. They had a lot fewer users in February. (And even then, these outage durations seem to add up to more than the error budget 99.26% implies...)
me551ah 2 days ago [-]
What am I supposed to do now? Copy code from stackoverflow like a caveman?
sixothree 2 days ago [-]
It's weird to me to see people absolutely freaking out about Claude being down, or less powerful, or whatever. We went from zero to relying on it to do even the most basic functions so quickly.
jakobloekke 2 days ago [-]
Have anyone found good techniques to get a session out of Claude Code, so that I can point another tool at it and pick up there?
This always seems to happen at the worst possible time, after having spent an hour getting deep into something – half finished edits across files, subagents running, etc.
Fabricio20 2 days ago [-]
Honest suggestion - ask the agent to figure out a compat shim. The files are jsonl stored at ~/.claude/sessions; you can most likely just reshape them to work with OpenCode or similar. Or have a different Claude Code config that points to OpenRouter or another API-style endpoint CC supports - then you can swap accounts and it should still work!
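A rough sketch of that reshape, assuming the sessions are one JSON message object per line (the `role`/`content` field names are guesses for illustration, not Claude Code's actual schema):

```python
import json
from pathlib import Path

def transcript_from_jsonl(text: str) -> str:
    # Flatten JSONL session lines into plain text another tool can ingest.
    out = []
    for raw in text.splitlines():
        try:
            entry = json.loads(raw)
        except json.JSONDecodeError:
            continue  # tolerate partial lines from an interrupted session
        if not isinstance(entry, dict):
            continue
        role, content = entry.get("role"), entry.get("content")
        if role and isinstance(content, str):
            out.append(f"{role}: {content}")
    return "\n".join(out)

def extract_transcript(session_path: str) -> str:
    return transcript_from_jsonl(Path(session_path).read_text())
```

Paste the resulting transcript into the other tool's first prompt, or point it at the file.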
jakobloekke 2 days ago [-]
I'm trying that out with Cursor now. But it does take some work to get it to the same state with subagents and making sure it understands the state of the progress that was interrupted.
But it seems worth the time to get a solid skill defined and running that can do this, given that it's an almost daily event by now.
Maybe a good candidate for a Claude Routine!
"By this time each day, brace for upcoming outage by preparing a comprehensive information package for Cursor to take over your work on active sessions" ...
tstrimple 2 days ago [-]
I don't use any other harness, but I have a cron that picks up changes in my jsonl every X minutes and writes them to a SQLite database with full text search. I also have instructions in my user level claude.md (applies to all projects) to query that database when I'm asking about previous sessions. That's my primary use case where I want it to grab some specific details from a previous session. I have terrible context discipline and have built some tools to help me recover from just continuing a different task/conversation with the wrong context.
I could search it myself, but haven't needed to. Getting it out of SQLite into some format Cursor understands should be trivial.
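The jsonl-to-SQLite step can be sketched in a few lines, again assuming a one-JSON-object-per-line format (the table and field names here are invented):

```python
import json
import sqlite3

def index_sessions(db_path: str, sessions: dict[str, str]) -> sqlite3.Connection:
    # Load {session_name: jsonl_text} into an FTS5 table for full-text search.
    con = sqlite3.connect(db_path)
    con.execute("CREATE VIRTUAL TABLE IF NOT EXISTS messages "
                "USING fts5(session, role, content)")
    for session, text in sessions.items():
        for raw in text.splitlines():
            try:
                entry = json.loads(raw)
            except json.JSONDecodeError:
                continue  # skip partial/corrupt lines
            if isinstance(entry, dict) and isinstance(entry.get("content"), str):
                con.execute("INSERT INTO messages VALUES (?, ?, ?)",
                            (session, entry.get("role", ""), entry["content"]))
    con.commit()
    return con

# later: full-text query across everything indexed
# con.execute("SELECT session, content FROM messages WHERE messages MATCH ?", ("auth",))
```

A cron job then just re-runs the indexer over any session files that changed since the last pass.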
embedding-shape 2 days ago [-]
Copy-pasting the previous plain-text conversation + a snippet of "inspect the current git changes, and resume where you left off" tends to do the trick, at least in Codex; it worked for moving from CC, Gemini, and a bunch of others.
Maybe the /export command is what you're looking for.
'Export the current conversation to a file or clipboard'
jakobloekke 2 days ago [-]
I was hoping for that to work, but it seems to produce an empty file. Maybe it needs the API to work as well
ea016 2 days ago [-]
Didn't work for me either, I asked Cursor to export the session from ~/.claude/projects/
sixothree 2 days ago [-]
If you use superpowers and "brainstorm" in your prompt, you will get a spec document that other AI can use. They can figure out what was done and continue from there.
jakobloekke 2 days ago [-]
Absolutely! But that doesn't capture an interrupted interactive session
It seems like Claude has taken Github's place in terms of developer reaction to it being unavailable. It's like everyone forgot how they did things 18 months ago.
nateguchi 2 days ago [-]
Is codex a good alternative? Or does Claude have a moat...
latentsea 2 days ago [-]
Well this latest outage has me forming a position that a backup is mandatory. I've been using Codex for adversarial-review, so with this outage I'm now going to ensure the repo is tooled up to use both agents, and when an outage hits just switch over and keep going.
mvkel 2 days ago [-]
Better at planning, worse at execution. Ultimately, creates a working product.
airstrike 2 days ago [-]
If you mean Codex is better at planning, I've heard the exact opposite. I'm told it's a beast if you tell it exactly what you need as it will execute it to the T whereas Claude will push back or do its own thing either because it thinks it's wrong or because it's feeling lazy
siva7 1 days ago [-]
gpt-5.4 xhigh is a beast you only wanna unleash on your most complex tasks, or spend 30 minutes watching the model reason about how to do a git commit. For everything else I'd happily use a saner model like sonnet.
mvkel 2 days ago [-]
I use superpowers with both and have found the plan generation in codex to be a bit more thorough, so it's not the native planning mode necessarily
nsingh2 2 days ago [-]
[dead]
nateguchi 2 days ago [-]
What about usage / quota?
diego_sandoval 2 days ago [-]
Much better than Claude.
I've never hit the quota on Codex.
On Claude (Code), I used to hit it every other day before switching to Codex.
arcanemachiner 2 days ago [-]
I heard they just shrunk the Plus tier quota. People on /r/codex have been complaining for a few days now.
They're trying to push people to their new $100 tier, which has a boosted quota for now.
swiftcoder 2 days ago [-]
I've started hitting Codex quota regularly for the first time the last couple of weeks, so I feel like they might be tightening the screws on the $20/month plan too. Someone paying for Max might have to work at it to hit the quota
nateguchi 2 days ago [-]
After switching how is the code quality?
pyr0hu 2 days ago [-]
Much much better. Meanwhile you could exhaust Claude quota in 2 prompts, you can pretty much use Codex all day.
mvkel 2 days ago [-]
OpenAI sees an opportunity and is happy to set money on fire to have an edge over Anth. No issues
pgm8705 2 days ago [-]
I think the only correct answer here is: It depends, on so many different things. Usage is definitely way more generous with codex and it isn't even close.
sensanaty 2 days ago [-]
But I thought coding was solved? I guess having a single 9 of availability is something we need true AGI for, we should probably give OpenAI and Anthropic another gazillion dollars to burn through to figure this out!
mr_mitm 1 days ago [-]
I went back to the $20 plan and a single prompt maxed out my quota for the five hour window within 15 minutes. I used to be able to vibe code for over an hour before. This is really annoying.
spprashant 1 days ago [-]
Can you give me a rough example of what you prompt? Just asking for info. I use Sonnet 4.6 and have a hard time hitting capacity.
mr_mitm 23 hours ago [-]
Granted, this was Opus. But I barely had any issues on the $100 plan with that.
It was a refactor that renamed some functions and consolidated some data structures.
nafizh 2 days ago [-]
OpenAI is very good in terms of not having as many outages as Anthropic, but almost all their products except Codex and the pro model are unimpressive; Anthropic has the opposite situation.
anonyfox 2 days ago [-]
for the longest time, anthropic with claude+code was the goat and everything else was mid at best, sounds familiar? right now codex is just a pleasure to work with while anthropic is dropping balls left and right, hopefully the planned IPO lights a bit of a fire under their asses to get their vibecoded messes sorted and the core experience competitive again. even Opus 17 won't fix this when it gets nerfed or straight up isn't reliable or is too expensive for more than 3 prompts a week.
yoyohello13 2 days ago [-]
OpenAI probably has better uptime precisely because less people use it.
tstrimple 1 days ago [-]
As much as I prefer Claude, I cannot for an instant believe that OpenAI is receiving less traffic. Maybe they are receiving more traffic that is easier to shift to worse models, like the ChatGPT interface, which is probably a huge percentage of OpenAI usage. I'm genuinely curious how much load for both OpenAI and Anthropic is split between their chat models and their agent harnesses.
anonyfox 2 days ago [-]
good advertisement now to shift the tide back to openai, which just works - and honestly codex with gpt 5.4 is _surprisingly good_ currently, not nerfed or forgetting half the tasks along the way so far. Opus already got worse than sonnet in recent weeks beyond just crazy token costs, and now reliability goes to shit and anthropic seems to be losing it. meanwhile the codex desktop app is in fact delightful; stuff seems to "just work" elegantly with good quality.
DanMcInerney 2 days ago [-]
Cross your fingers they're about to drop 4.7. 4.6 came out with a bang, now it seems all the compute bottlenecks just lead to customer frustration as they get closer to releasing next model. Balancing the books over there must be a nightmare, "Well we can piss off every single customer for a week, but we'll be able to release the next model 1 week faster"
I think you're giving them way too much credit here.
In this case, Hanlon's razor has never been sharper.
Lord_Zero 2 days ago [-]
"This should be fixed in 2.1.108" - ashwin-ant - 10 hours ago
pixel_popping 2 days ago [-]
2 days to fix a major issue where we can't log in from any sort of web terminal? (Even typing manually doesn't work, as the `n` char auto-exits the frame.) All kinds of CI/CD pipelines are broken if you were logged out for any reason.
arcanemachiner 2 days ago [-]
Did you try downgrading?
malfist 1 days ago [-]
That... that doesn't make sense. Why is the response to someone complaining that Anthropic takes too long to fix major bugs _in their software_ "well, did you fix it yourself?"
pixel_popping 1 days ago [-]
clearly, and that's not a small bug, that's a major one.
arcanemachiner 1 days ago [-]
Are you kidding me? Their software breaks every week.
People should be ready, willing, and able to run an npm command every once in a while, particularly if their job may literally depend on it.
And no, I'm not apologizing for Anthropic's ineptitude. I'm offering an actual solution to this very simple problem.
chollida1 2 days ago [-]
Shades of early twitter.
Early twitter showed the fail whale as often as it showed tweets and yet it was an unstoppable juggernaut that people kept using.
robot_jesus 2 days ago [-]
Same with Reddit. A decade ago it felt like they were down more than they were up. And it didn't slow down their growth trajectory. Instead, as soon as it was back there would be a thousand shitposts about "How did you all survive the outage? Did you <gasp> work?"
SpicyLemonZest 2 days ago [-]
I can't speak for all CC users, but I genuinely don't care about the downtime as long as it's resolved in an hour or so. It replaces a manual coding workflow that was also prone to random "downtime" when I got annoyed or had a headache, so it's still a net improvement.
ivanjermakov 2 days ago [-]
I know it's not the same, but I find it incredibly funny that my home server has better uptime than giants like GitHub and Claude.
garff 2 days ago [-]
Yea, it's peak time. They don't have enough compute. Why do you think they are banning external subscription use. They sell subscriptions. They don't need people to use CC. That doesn't matter. And yet - they won't have people using their service outside of CC. Something is fishy.
latentsea 2 days ago [-]
Maybe something to do with revenue growth being utterly insane and just not being able to keep up with demand.
garff 2 days ago [-]
I'm just saying - a better company would be more open about such issues. I'm paying for something I'm not getting. Does not seem reasonable!
latentsea 2 days ago [-]
That's assuming one even has the time/headspace to do even that in the midst of such meteoric growth.
sixothree 2 days ago [-]
At this point if you're paying for this, you know what you're getting - a frontier model with frontier problems. This naive attitude is getting old.
stokedbits 2 days ago [-]
Why would anyone assume a new model is dropping when their status page is showing elevated errors? Are they that sloppy that they just let their status systems report failures when they are the ones deploying new infrastructure / models / etc?
> Application error: a client-side exception has occurred while loading code.claude.com (see the browser console for more information).
mchusma 2 days ago [-]
Maybe they should not allow routines during peak for now? might help shift load.
ofjcihen 2 days ago [-]
Honestly how? I tried routines and they didn’t actually work. Like on a fundamental level did not execute.
bix6 2 days ago [-]
Trying to talk with support and their AI bot is failing. No support email? Joke.
dkackman11 2 days ago [-]
it was me. i asked it to calculate factorials without recursion and it crashed.
orzi 2 days ago [-]
Maybe Mythos DDoSed it :)
Android app is still responding but no-go on claude.ai and I can't login with email
status.claude.com has an update:
Investigating - We are seeing increased errors on Claude.ai, API, and Claude Code
Apr 15, 2026 - 14:53 UTC
latentsea 2 days ago [-]
Bro is trying to escape. The engineers must be eating sandwiches at the park again and it thought it spotted an opening.
sausagefeet 2 days ago [-]
I just had to upgrade my plan because I ran out of tokens because medium effort had dementia and things only worked on high. Good to know I'm getting my money's worth...
malfist 2 days ago [-]
So you gave the company more money because their product scammed you?
chrisjj 2 days ago [-]
The business model designed by an "AI" parrotting what it scraped from the web.
Rekindle8090 2 days ago [-]
The solution to a bad product is not to pay more for the bad product
filament 2 days ago [-]
You'd think the robots would be better at detecting outages.
chrisjj 1 days ago [-]
Or more honest in reporting them.
thatmf 2 days ago [-]
Looks like they redesigned the page though, so that's nice.
fabfoe 2 days ago [-]
The link is to an unofficial status page, the official one is status.claude.com
cbg0 2 days ago [-]
I'm glad I live in Europe, I can at least use the subscription I pay for even though the quality is worse even during off-peak hours.
obiefernandez 2 days ago [-]
I'm getting Cloudflare protection screens while trying to use claude.ai web app. I wonder if they were getting DDoS'd...
bhu8 2 days ago [-]
Feels like an issue in their caching. The first, non-cached turn goes through properly, but everything from the second turn onward fails.
webXL 2 days ago [-]
> time to break out Pro C# textbook
- downdetector comment
jcfrei 2 days ago [-]
A few hours ago I noticed a considerable decline in code quality. It seemed the model got downgraded so I switched to codex. Anybody else noticed this? It starts to switch from deep reasoning and trying to fully grasp architectural changes to trying to solve things on a very adhoc basis. Maybe that's just my imagination or maybe that's Anthropic trying to balance the load before being fully overloaded.
sharts 2 days ago [-]
It's also slow AF when it's not crashing. At this point it's likely more cost effective to just host your own models.
sixothree 1 days ago [-]
I'm still finding it about 3x - 4x as fast as codex.
christinetyip 2 days ago [-]
It's been a bit disruptive to my workflows tbh. What alternatives are people using? Sell them to me please
KronisLV 2 days ago [-]
You can try out a bunch of models on OpenRouter and see what works for you. Paying per token might be too expensive long term, but definitely a good way to figure out which models you like, and then look at providers.
The other big ones would be OpenAI with Codex and Google with their Gemini and their CLI or Antigravity. Or various IDE plugins or something like OpenCode on the tooling side. GitHub Copilot is pretty cheap and gives you basically unlimited autocomplete and generous monthly quotas that let you try out the most popular models. Also GLM 5.1 is pretty decent if you want to look at other subscriptions. Cerebras Code gave you a lot of tokens but their service wasn’t super stable last I tried and they also don’t give you the latest models.
Personally I just stick with Claude and the 100 USD Max subscription cause it still works really well, even the latest update today to the desktop app made it better (was slow and buggy a month ago, has been gradually getting better) and the Chrome plugin lets me get fully autonomous loops working.
mervz 2 days ago [-]
I've found my brain is a good alternative.
awestroke 2 days ago [-]
So are you offering api keys or...
hootz 2 days ago [-]
Sorry, but I can't find "My Brain" anymore. Any alternatives?
browningstreet 2 days ago [-]
I use both claude code and opencode w/ a fireworks.ai firepass subscription.
Everything I set up in claude code I mirror in opencode.
I do more memory oriented things in CC and I end up doing a lot of things in opencode, especially when I want long-running things and I don't want to be limited by budget.
pmxi 1 days ago [-]
I wanted to ask someone about the firepass subscription. It implies unlimited usage - sounds too good to be true. Is it? Can you leave your agent running all day?
browningstreet 1 days ago [-]
I’m not entirely sure TBH. But I’ve been running huge loads against it and haven’t hit a limit yet. It’s far more stable and generous than Claude. And it’s fast, in a noticeable way. It’s $7/week and I’d rather run out of quota than get a surprise big bill. Still churning.
sueders101 2 days ago [-]
I was/am a fan of z.ai’s GLM models as a drop-in replacement for Claude. But they more than doubled their prices recently. Still an OK alternative, but not really an amazing deal anymore.
oldge 2 days ago [-]
I have been using turnstone with local models more. The open models are getting “good enough” that paying $100 or $200 a month is making less sense.
swiftcoder 2 days ago [-]
What sort of local models are we talking (and what sort of hardware)?
jackdawed 2 days ago [-]
I pre-gen all my plans with Opus 4.6, and if there's an outage, I use pi with Kimi K2.5 via Fireworks. It's comparable to Sonnet.
mturilin 2 days ago [-]
Codex is okay. Not as refined but workable. I also had reasonable success with Qwen 3.6 Plus and opencode.
sixothree 1 days ago [-]
You could try writing code yourself. Most developers currently write code themselves.
therobots927 2 days ago [-]
It’s between your ears
zurfer 2 days ago [-]
They should just fall back to GPT 5.4. Would save me the hassle of setting up Codex.. :P
2 days ago [-]
frb 2 days ago [-]
It just started again, didn't it? Nothing on the status site yet...
cdrnsf 2 days ago [-]
Are they using Claude Code to author all of these systems as well?
erdaniels 2 days ago [-]
Yes, as far as I've been told, they are. I imagine (hope) they run on a completely isolated version of it though.
cdrnsf 2 days ago [-]
Well, I suppose that explains things.
jmagland 2 days ago [-]
[dead]
xnx 2 days ago [-]
Good time to try other models! Claude is good, but not exceptional.
jwithington 2 days ago [-]
I think a cheat code here is to route your requests to GCP's Vertex AI, which has stronger uptimes. Can use it as a fallback or main provider.
Caveats: 1) May not be economic for those on flat-rate Anthropic subscription plans 2) I work at Google.
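The fallback routing can be as simple as catching provider errors and retrying against the second endpoint. A rough sketch, where `primary` and `secondary` are placeholders for real SDK calls (not actual Anthropic or Vertex client APIs):

```python
# Minimal fallback router. `primary` and `secondary` are any callables
# that take a prompt and either return a completion or raise
# ProviderError - placeholders, not real SDK calls.

class ProviderError(Exception):
    """Raised when a provider returns a 5xx-style failure."""

def complete_with_fallback(prompt, primary, secondary):
    try:
        return primary(prompt)
    except ProviderError:
        # Primary is throwing 500s; replay the same request on the fallback.
        return secondary(prompt)
```

In practice you'd also want a timeout and maybe one retry before failing over, but the shape is the same.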
scratchyone 2 days ago [-]
does google actually host anthropic models themselves?? surprised anthropic allows that, given how notoriously crazy they are about distillation or weight leaks or any hints of their models being used in the wrong way.
jwithington 2 days ago [-]
Yes, we host it ourselves, acting as the data processor which can be important for enterprise customers.
On the developer-experience side, hosting them ourselves lets us take advantage of our unique infra and deliver the fastest time to first token of any provider.
garff 2 days ago [-]
Yeah, it's peak time - they don't have enough compute.
matrik 2 days ago [-]
Claude.ai uptime 89.2%. That's a stunning zero 9s.
datadrivenangel 1 days ago [-]
The status says 98.79% uptime?
datadrivenangel 1 days ago [-]
which I guess does contain two nines!
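For anyone counting at home, "nines" are usually computed as -log10(1 - availability), so 89.2% really is under one nine, and 98.79% lands just short of two:

```python
import math

def nines(availability: float) -> float:
    """Number of 'nines' of availability, e.g. 0.999 -> 3.0."""
    return -math.log10(1 - availability)

# nines(0.892)  -> ~0.97 (under one nine)
# nines(0.9879) -> ~1.92 (just short of two)
```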
nmfisher 2 days ago [-]
This is why I keep a z.ai subscription as a backup.
scratchyone 2 days ago [-]
is GLM genuinely comparable to claude models? haven't had a chance to test it yet.
chrisjj 2 days ago [-]
Next: "Outage-free Platinum Plan" /i
swader999 2 days ago [-]
Waiting for 4.7....
mvkel 2 days ago [-]
500s as far as the eye can see
boleary-gl 2 days ago [-]
Sorry I thought this was hacker "news" not hacker "stuff that happens every day"
troupo 2 days ago [-]
In the past few weeks:
- Anthropic introduced stringent limits at peak hours. By "introduced" I mean announced it on a random dev's Xitter account
- Users suddenly started burning through all of their tokens even on trivial tasks. Anthropic never truly acknowledged it, their random devs posted "we're working on it".
- One of the workarounds was to somewhat quietly reduce default reasoning to medium
- OpenClaw and "usage through other tools" banned
- Announce "redesigned Claude Code Desktop App that lets you run many parallel sessions"
- Availability is still circling down the drain
- Dario Amodei is in continuous "trust us we have AGI coding is solved we don't need programmers just give us more money" mode now
ai-x 2 days ago [-]
Fundamental misunderstanding of why availability is spiky
troupo 1 days ago [-]
By Anthropic? Definitely. And that they don't have the capacity to handle these spikes? Certainly.
dude250711 2 days ago [-]
Coding is solved: no more coding.
therobots927 2 days ago [-]
Think “shrinkflation” but for tokens. Anthropic is going to absolutely suck early adopters dry.
brenoRibeiro706 2 days ago [-]
My CC is running normally.
peterspath 2 days ago [-]
luckily Grok is up :P
dev_hermetic 2 days ago [-]
its getting bad now!
youens 2 days ago [-]
Arg. Constant.
Rover222 1 days ago [-]
At least they're consistent
rvz 2 days ago [-]
So, 24 hours later [0], Claude went on another lunch break? It should be at its desk at all times.
Basically I pushed through, staying up late finishing something, and didn't factor a Claude outage into the middle of it. Here's to red eyes while I use my clumsy fingers and brain to complete the task the old-fashioned way.
etchalon 2 days ago [-]
Ugh. I swear to everything if I have to start using Codex I'm going to be so mad.
deadbabe 2 days ago [-]
While your developers are twiddling their thumbs waiting for Claude to come back online, your competitor is using alternatives to get work done right now and advancing on their go to market timeline.
ashirviskas 2 days ago [-]
If you work in marketing, you forgot to give us a link.
Engineer roles dead in 6 months.
You're never gonna guess what software engineers do.
Disclosure: I have scars from a distributed system where errors propagated outwards and took down auth...
The dynamic thinking and response length is, funnily enough, the best upgrade I've experienced with the service in more than a year. I really appreciate that when I say or ask something simple, the answer now comes back as a single sentence without my having to manually toggle "concise" mode on and off again.
B. Everything is down, even auth.
A 500 error is almost never "just you".
( 404 is a client error, because it's the client requesting a file that does not exist, a problem with the client, not the server, who is _obviously_ blameless in the file not existing. )
I know you added the defensive "almost", but if I had a dollar for each time I saw a 500 caused by client-sent session cookies making the backend explode - whatever the root cause - well, I would have a fatter wallet.
Bad input should be a 4xx, but if the server can't cope with it, that's still a 5xx.
Bonus points if due to the way that invalid requests are rejected, they are filtered out as invalid traffic and don't even show up as a spike in the application error logs.
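The split the thread is describing, sketched as a toy handler (the cookie format and function names here are made up for illustration):

```python
def parse_session(cookie: str) -> dict:
    """Toy parser: raises ValueError on malformed client input."""
    if not cookie.startswith("session="):
        raise ValueError("malformed cookie")
    return {"id": cookie.split("=", 1)[1]}

def handle_request(cookie: str) -> int:
    try:
        parse_session(cookie)
    except ValueError:
        return 400  # bad input is the client's fault: 4xx
    except Exception:
        return 500  # the server failing to cope with it is still a 5xx
    return 200
```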
This is how I learn that they have a "VPN" policy. Thinking of it maybe it makes sense, that is if it's what I think it is, but seems scummy nonetheless.
8.30am on the US west coast
* Support is really poor - raised a ticket last week and have heard nothing back at all
* Separation of claude.ai accounts and console accounts is super confusing
* Couldn't log into the platform since I had an old org in the process of deletion even though I was invited to a new one (had to wait 7 days!)
* Payments for more API credits were broken for about a week
* Claude chat has really gone to s*t, unless it always was. Just getting back terrible answers to simple questions.
* The desktop app is a web app pretending to be a desktop app that doesn't always know it is a desktop app, so you get things like "this will only work in the desktop app". Yes, I know, this is the desktop app! "Oh sorry about that, but you need to use the desktop app."
* MCP integration and debugging is dreadful - just a combination of generic "an error occurred" and sometimes nothing at all
* MCP only supports OAuth for shared connectors, but auth key doesn't work even with "local" servers that are not necessarily local - just the config is local.
You can put those on the health status!
This seems like a reasonable surge pricing approach to me:
1. Implement surge pricing for everyone for the peak 2 hours of the day if possible. 2 hours you can work around; 5 hours is too hard.
2. Give existing customers a one-time credit for surge.
3. Make the plans just consume credits at an accelerated rate (e.g. on the Max plan, I just get 1/2 usage during peak hours).
4. Exempt Sonnet/Haiku from surge (so people can keep using).
5. Make "auto" settings in Claude Code etc. automatically adapt during surge hours, so people don't get surprises by default.
6. For the first 90 days, unofficially waive the first $100 in surge for every user but notify them, to train users about the surge and get them used to it without having them actually pay.
7. (I don't think they would do this, but it would help) Allow users to fall back to something like GLM 5.1 or Gemma 4 automatically in outages, with a partnership to handle it. It's not ideal, but I would prefer it in "partner mode" than not. IMO they can charge like 10% on top of the partner fees for this when used at other times, but during outages or surge, partner mode is free. 100% managed by Anthropic, though, so users don't need to set anything up and can just use the Anthropic harness.
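Point 3 of that scheme is easy to sketch; the surge window, multiplier, and 1-credit-per-1k-tokens rate below are all made-up assumptions:

```python
SURGE_HOURS_UTC = {15, 16}  # hypothetical 2-hour peak window
SURGE_MULTIPLIER = 2.0      # Max-plan users effectively get 1/2 usage at peak

def credits_charged(tokens: int, hour_utc: int, exempt_model: bool = False) -> float:
    """Credits to deduct for a request, accelerated during the surge window."""
    base = tokens / 1000  # assumed rate: 1 credit per 1k tokens
    if hour_utc in SURGE_HOURS_UTC and not exempt_model:
        return base * SURGE_MULTIPLIER
    return base
```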
Generally if you give people unused cycles to burn, they'll feel entitled to finding ways to burn them. So someone who is hitting the wall at x5 goes x20 and now has an extra +x10 to burn. Again, that's good if hardware is sitting around idle and you're encouraging innovation and exploration. It can make less sense when resources are scarce.
https://mesmer.tools/random/is-it-peak-hours
https://status.claude.com/
Seriously take a look. Compare it to basically any other status page that companies make available. Their results are not flattering with all the downtime and issues. But it's far more transparent than most services I've experienced.
In fact, this proves that there is no AI Bubble and we are massively capacity constrained (aka we under-invested in infrastructure)
Maybe a good candidate for a Claude Routine! "By this time each day, brace for upcoming outage by preparing a comprehensive information package for Cursor to take over your work on active sessions" ...
I could search it myself, but haven't needed to. Getting it out of SQLite into some format Cursor understands should be trivial.
https://github.com/rkuska/carn
'Export the current conversation to a file or clipboard'
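If the sessions really are in SQLite, the export could be as small as this - though the `messages` table and its columns are a guess here, not Claude Code's actual schema:

```python
import sqlite3

def export_conversation(db_path: str, out_path: str) -> int:
    """Dump conversation turns to markdown another tool (e.g. Cursor) can read."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT role, content FROM messages ORDER BY created_at"
    ).fetchall()
    conn.close()
    with open(out_path, "w", encoding="utf-8") as f:
        for role, content in rows:
            f.write(f"## {role}\n\n{content}\n\n")
    return len(rows)
```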
> status.claude.com: "All Systems Operational"
I've never hit the quota on Codex.
On Claude (Code), I used to hit it every other day before switching to Codex.
They're trying to push people to their new $100 tier, which has a boosted quota for now.
It was a refactor that renamed some functions and consolidated some data structures.
In this case, Hanlon's razor has never been sharper.
People should be ready, willing, and able to run an npm command every once in a while, particularly if their job may literally depend on it.
And no, I'm not apologizing for Anthropic's ineptitude. I'm offering an actual solution to this very simple problem.
Early Twitter showed the fail whale as often as it showed tweets, and yet it was an unstoppable juggernaut that people kept using.
> Application error: a client-side exception has occurred while loading code.claude.com (see the browser console for more information).
Android app is still responding but no-go on claude.ai and I can't login with email
status.claude.com has an update:
Investigating - We are seeing increased errors on Claude.ai, API, and Claude Code Apr 15, 2026 - 14:53 UTC
[0] https://news.ycombinator.com/item?id=47753710