jqpabc123 2 days ago [-]
Is Anthropic trying to limit usage or drive people away?
They are most likely attempting to become profitable.
AI companies have consumed eye-watering amounts of venture capital, and they are under increasing pressure to justify it. To do so, they will have to raise rates, degrade performance, or both.
A lot of people don't seem to grasp the epic proportions of what is taking place here. Consultants at Bain & Co. estimated that justifying current AI spending will require $2 trillion in annual AI revenue by 2030.
By comparison, this is more than the combined revenue of Amazon, Apple, Alphabet, Microsoft, Meta and Nvidia, and more than five times the size of the entire global subscription software market.
For most companies, this means that AI will have to become their primary technology expense, far exceeding their current budgets.
LatencyKills 2 days ago [-]
I've been an engineer for almost 30 years (@ MS & Apple). I've been using Claude to perform code reviews for several of my macOS apps (that I wrote without AI).
A few weeks ago, its performance was impressive and helpful.
In the last week, it has been unusable. It is now getting confused, suggests architecture changes that make absolutely no sense, and has even started ignoring my stop hooks (and then arguing with me that they "aren't necessary").
Areena_28 7 hours ago [-]
Thank you, someone finally pointed this out. Using Claude has become a pain in my ass now. I used to use it for writing, but it has lost its creative thinking. On top of that, the free plan's daily limit gets exhausted after about 2 prompts... it makes me wonder what happens to these platforms once the hype fades.
rl3 2 days ago [-]
Using Claude 4.6 [Extended thinking] exclusively via the web interface: No, not really.
My use case is having Claude tear through an extremely complex Kubernetes setup, reviewing code and drafting plans.
Despite the near-instant answers, it still manages to do this effectively at a speed I can't even hope to keep up with as a human. It's reconciling concerns that easily span dozens of dimensions with each problem I give it.
The trade-off here is that you sometimes see the model make subtle errors in thinking, but they're easily recognized and corrected for when called out. I've also noticed the model will make a statement and then correct itself mid-stream, which sometimes muddles my job of reviewing its output.
Compared to competing top-tier models taking anywhere from 5-40 minutes for a sometimes impeccably-reasoned answer, there's no comparison velocity-wise. The real win is the speed at which Claude troubleshoots, though. Near-instant turns really win here.
It's tempting to directly assume speed is proportional to quality, but we really don't know what's going on at any given provider's back-end serving configuration, nor the internal model routing configuration.
palata 2 days ago [-]
It was working really well, so I paid for a yearly subscription. Two weeks later, it got to a point where I wouldn't pay for it if I had a choice.
I wonder if the business model is: "make it great, get people to pay yearly subscriptions, and then make it bad again". At least I won't make that mistake ever again, it proved that such services cannot be trusted.
I'll only pay for monthly subscriptions in the future, so that when they screw me I can stop paying.
jqpabc123 2 days ago [-]
"make it great, get people to pay yearly subscriptions, and then make it bad again".
This is called "bait and switch".
cyanydeez 6 hours ago [-]
if you do it slowly enough, it's enshittification
shahbaby 1 day ago [-]
same experience here, fool me once I guess
jazz9k 2 days ago [-]
It's most likely an intentional downgrade, so they can sell a better model to corporate and enterprise clients. This was bound to happen, especially since all of these companies are bleeding cash.
throwawayffffas 2 days ago [-]
My guess is they are trying lower quantization. I haven't noticed it myself; I recently began trying Qwen 3.5 locally.
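For context on why quantization is such a tempting cost lever, here's the back-of-the-envelope arithmetic for weight storage. A sketch only: the 32B parameter count is illustrative, not any particular model's actual size, and this ignores KV cache and activation memory.

```python
# Approximate memory footprint of model weights at different
# quantization levels. Parameter count below is illustrative.
def weight_memory_gb(n_params: float, bits_per_weight: int) -> float:
    """Weight storage in GB: params * bits / 8 bits-per-byte / 1e9."""
    return n_params * bits_per_weight / 8 / 1e9

n = 32e9  # hypothetical 32B-parameter model
for bits in (16, 8, 4):
    print(f"{bits:>2}-bit: ~{weight_memory_gb(n, bits):.0f} GB")
# → 16-bit: ~64 GB,  8-bit: ~32 GB,  4-bit: ~16 GB
```

Halving the bits halves the serving memory (and roughly the hardware cost), which is exactly why a provider under margin pressure might quietly ship a more aggressively quantized variant.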
niobe 2 days ago [-]
100%. Claude is, I'm sorry to say, basically nerfed.
I downgraded from Max to Pro this month and will cancel my subscription next month. I would suggest others who feel similarly do the same. The only way to signal to these companies that this model enshittification cycle is unacceptable is to vote with your feet.
jqpabc123 2 days ago [-]
vote with your feet.
All vendors are under the same sort of pressure. What you've experienced is likely to be duplicated elsewhere.
niobe 12 hours ago [-]
That's already true. Every time a new "benchmark-leading" model is released, the vendor gives it maximum resources for a week or two, then drops the performance.
The point is that no one can reliably use these already unreliable and inconsistent tools when the vendors are making them worse by fiddling in the backend. Claude went from useful to near useless for me in a week, with no otherwise visible change. The product being sold is literally changing under the hood on a whim. This is a deceptive trade practice, but on a more practical level, it's simply not possible to get consistent results from these products when their output quality is so subject to change.
jqpabc123 9 hours ago [-]
Bottom line: There is nothing deterministic about AI itself or the vendors promoting it. Everything is subject to the vendor's whim.
Do you feel lucky?
cyanydeez 6 hours ago [-]
they're still powerful, so the reality is: grab an AMD 395 or a Mac Studio and start building reliable workflows.
the nondeterminism is still in the model, but the world of FOMO-driven uncertainty fades away.
you still get to fill in as ace coder whenever there are gaps in its capabilities, and upgrade when you want.
live on your cycle, not the VCs'
the_inspector 2 days ago [-]
For my part, I could not get Claude Projects working, even though it worked perfectly about 2 months ago.
And because Anthropic introduced a chatbot as the only line of support, help is not on its way.
loolhahalmao 2 days ago [-]
vibing prod has its consequences. i don't believe they're purposefully trying to make their products worse; it's the result of not reviewing or testing their code and then trying to stem costs, which leaves users paying more for worse quality.
8b16380d 2 days ago [-]
Yeah it’s hot garbage, I’ve moved to gpt5.4, which actually performs well enough for now