Hacker News | aliljet's comments

I'm really curious: what competes with Claude Code for driving a local LLM like Qwen 3.6?

OpenCode and Pi are popular agent harnesses, and lots of IDEs integrate LLMs now. I believe there’s also a Qwen Code, but I have yet to try it.

OpenCode?

Have they effectively communicated what a 20x or 10x Claude subscription actually means? And with Claude 4.7 increasing usage by 1.35x, does that mean a 20x plan is now really a 13x plan (no token increase on the subscription) or a 27x plan (more tokens given to compensate for more compute cost) relative to Claude Opus 4.6?

They have communicated it as 5x is 5 x Pro, and 20x is 20 x Pro (I haven’t looked lately so not sure if that’s changed).

They have also repeatedly communicated that the base unit (Pro allotment) is subject to change and does change often.

As far as I can tell, that implies there is no guarantee that those subscriptions get some specific number of tokens per unit of time. It’s not a claim they make.


As far as the arguably more important weekly allotment goes, I think Max 5 is 10x Pro and Max 20 is 20x Pro. For the 5-hour window it is as the names would suggest, though.

Definitely 13x, at least for now

Feels like buying toilet paper.

Anthropic isn't going to give us that information. It's not actually static, it depends on subscription demand and idle compute available.

Given they have all of the information and all of the control, do you trust them to be fair?

so it's all "it depends" as a business offering, lmao. all marketing

The more efficient tokenizer reduces usage by representing the same text with fewer tokens. But the lack of transparency does indeed mean Anthropic could still scale down limits to account for that.

A few months ago, the weekly allotments were:

pro = 5m tokens, 5x = 41m tokens, 20x = 83m tokens

making 5x the best value for the money (8.33x the Pro tokens for 5x the price). This information may be outdated though, and doesn't apply to the new on-peak 5-hour multipliers. Anything that increases usage just burns through that flat token quota faster.
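If those numbers are right, the value math is easy to check. A quick sketch, assuming the usual $20/$100/$200 monthly price points (an assumption; the prices and quotas are both subject to change):

    # Tokens per dollar for the weekly quotas quoted above.
    # Assumed prices: Pro $20/mo, Max 5x $100/mo, Max 20x $200/mo.
    plans = {"pro": (5e6, 20), "max 5x": (41e6, 100), "max 20x": (83e6, 200)}
    for name, (tokens, dollars) in plans.items():
        print(f"{name}: {tokens / dollars / 1e6:.3f}M tokens per dollar")
    # pro: 0.250M, max 5x: 0.410M, max 20x: 0.415M
    # -> 5x and 20x are nearly a wash, both ~1.65x Pro's tokens per dollar.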


I am 90% sure it's looking at month-long usage trends now and punishing people who utilize 80%+ week over week. It's the only way to explain how some people burn through their limit in an hour while others who still use it a lot get through their hourly limits fine.

It's hard to say. Admittedly I'm a heavy user as I intentionally cap out my 5x plan every week - I've personally found that I get more usage being on older versions of CC and being very vigilant on context management. But nobody can say for sure, we know they have A/B test capabilities from the CC leaks so it's just a matter of turning on a flag for a heavy user.

Wait, that's insanity. Where did you get those numbers from? The 5x plan is obviously the right place to be...

Someone did the math and posted it somewhere; I forget where, and searching for it again just turns up the numbers I remember seeing. At the time I recalled what it was like on Pro vs 5x and it felt correct. Again, it may not be representative of today.

You’re probably thinking of this article: https://she-llac.com/claude-limits

I'm broadly curious how people are using these local models. Literally, how are they attaching harnesses to them and finding more value than just renting tokens from Anthropic or OpenAI?

Idk about everyone else, but I don’t want to rent tokens forever. I want a self hosted model that is completely private and can’t be monitored or adulterated without me knowing. I use both currently, but I am excited at the prospect of maybe not having to in the near to mid future.

I’ve increasingly started self-hosting everything in my home lately because I got tired of SaaS rug pulls, and I don’t see why LLMs should eventually be any different.


Exactly. Relying on external compute for professional work is a non-starter IMO.

Qwen3.5-9B has been extremely useful for local fuzzy table-extraction OCR on data that cannot be sent to the cloud.

The documents have subtly different formatting and layout due to source variance. Previously we used a large set of hierarchical heuristics to catch as many edge cases as we could anticipate.

Now, with the multi-modal capabilities of these models, we can leverage the language capabilities alongside vision to extract structured data from a table that has 'roughly this shape' and 'this location'.
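For anyone curious what that looks like in practice, here's a minimal sketch against a local OpenAI-compatible endpoint (vLLM and llama.cpp both expose one). The URL, model name, file name, and field names are all placeholders, not the actual setup:

    import base64, json
    from openai import OpenAI  # pip install openai; works with local servers too

    # Point the client at a local OpenAI-compatible server (URL is an example).
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

    with open("scanned_page.png", "rb") as f:  # placeholder document image
        image_b64 = base64.b64encode(f.read()).decode()

    resp = client.chat.completions.create(
        model="qwen-vl",  # whatever name the local server registers
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text":
                 "The page contains a roughly rectangular table in the upper "
                 "half. Extract it as a JSON list of rows with keys "
                 "'name', 'qty', 'amount'. Return JSON only."},
                {"type": "image_url", "image_url":
                 {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    rows = json.loads(resp.choices[0].message.content)  # then validate per field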


I use local models for asking about personal financial or health data that I want to keep local and private. Or even just whipping up quick and dirty prototypes for whatever I can think of, but not seriously enough to spend tokens that I'd rather use on real projects.

I used vLLM and qwen3-coder-next to batch-process a couple million documents recently. No token quota, no rate limits, just 100% GPU utilization until the job was done.
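The offline batch path in vLLM is only a few lines. A rough sketch; the model name and the I/O helpers are placeholders for whatever you actually use:

    from vllm import LLM, SamplingParams

    # Offline batch inference: hand vLLM every prompt at once and let its
    # scheduler keep the GPU saturated. No server, no rate limits.
    llm = LLM(model="Qwen/Qwen2.5-Coder-7B-Instruct")  # model name is an example
    params = SamplingParams(temperature=0.0, max_tokens=512)

    docs = load_documents()  # placeholder: yields the raw documents
    prompts = [f"Extract the key facts from:\n\n{d}" for d in docs]
    for out in llm.generate(prompts, params):
        save_result(out.outputs[0].text)  # placeholder: persist each result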

Some tasks don’t require SOTA models. For translating small texts I use Gemma 4 on my iPhone because it’s faster and better than Apple Translate or Google Translate and works offline. Also, if you can break down certain tasks, like JSON healing, into small focused coding tasks, then local models are useful.
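JSON healing is a good fit because the result is trivially verifiable. A sketch of the retry loop, with the actual model call left as a caller-supplied function (not any particular library's API):

    import json

    def heal_json(broken: str, ask_model, max_tries: int = 3) -> dict:
        """Parse if possible; otherwise ask a small local model to repair it.
        `ask_model` is any callable that sends a prompt and returns text."""
        text = broken
        for _ in range(max_tries):
            try:
                return json.loads(text)
            except json.JSONDecodeError as err:
                text = ask_model(
                    "Fix this malformed JSON and return only valid JSON "
                    f"(parse error: {err}):\n{text}")
        raise ValueError("model never produced valid JSON")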

Is it really better? In which languages?

Yes it is, and it has been for years now. Gemini 1.5 Pro is when LLM translations started significantly outperforming non-LLM machine translation, and that came out over 2 years ago.

Ever since then, Google models have been the strongest at translation across the board, so it's no surprise Gemma 4 does well. Gemini 3 Flash is better at translation than any Claude or GPT model. OpenAI models have always been weakest at it, continuing to this day. It's quite interesting how these characteristics have stayed stable over time and across many model versions.

I'm primarily talking about non-trivial language pairs, something like English<>Spanish is so "easy" now it's hard to distinguish the strong models.


I've been using Gemma 4 for translating Mongolian to English. It runs circles around Google Translate for that language pair; it's not even close.

I'm using the smaller vision models (Qwen3.5-4B currently) with Frigate, a FOSS self-hosted "AI" NVR. It's good enough at analyzing images to figure out mostly what's happening, and doesn't require the big knowledge base that bigger models have.

Also use a bigger model for summarizing or translating text, which I don't consume in realtime, so doesn't need to be fast. Would be a thing I could use OpenAI's batch APIs for if I did need something higher quality.


The people I know who use local models just end up with both.

The local models don’t really compete with the flagship labs for most tasks

But there are things you may not want to send to them for privacy reasons or tasks where you don’t want to use tokens from your plan with whichever lab. Things like openclaw use a ton of tokens and most of the time the local models are totally fine for it (assuming you find it useful which is a whole different discussion)


The open weights models absolutely compete with flagship labs for most tasks. OpenAI and Anthropic's "cheap tier" models are completely uncompetitive with them for "quality / $" and it's not close. Google is the only one who has remained competitive in the <$5/1M output tier with Flash, and now has an incredibly strong release with Gemma 4.

Unless you have a corporate lock-in/compliance need, there has been no reason to use Haiku or GPT mini/nano/etc over open weights models for a long time now.


I use LM Studio to host and run GLM 4.7 Flash as a coding agent. I use it with the Pi coding agent, but also with the Zed editor agent integrations. I've used the Qwen models in the past, but have consistently come back to GLM 4.7 because of its capabilities. I often use Qwen or Gemma models for their vision capabilities. For example, I'll often finish ML training runs, take a photo of the graphs and visualizations of the run metrics, and ask the model to tell me things I might look at tweaking to improve subsequent training runs. Qwen 3.5 0.8b is pretty awesome for really small and quick vision tasks like "Give me a JSON representation of the cards on this page".

The privacy/data security angle really is important in some regions and industries. Think European privacy laws or customers demanding NDAs. The value of Anthropic and OpenAI is zero for both cases, so easy to beat, despite local models being dumber and slower.

It’s easy to find a combination of llama.cpp and a coding tool like OpenCode for these. Asking an LLM for help setting it up can work well if you don’t want to find a guide yourself.

> and finding more value than just renting tokens from Anthropic or OpenAI?

Buying hardware to run these models is not cost effective. I do it for fun for small tasks but I have no illusions that I’m getting anything superior to hosted models. They can be useful for small tasks like codebase exploration or writing simple single use tools when you don’t want to consume more of your 5-hour token budget though.


Oh lord, are the LLMs already replacing LLMs?

I've been largely using Qwen3.5-122b at 6-bit quant locally for some C++/Go/Python dev lately because it is quite capable: as long as I can give it pretty specific asks within the codebase, it will produce code that needs minimal massaging to fit into the project.

I do have a $20 Claude sub I can fall back to for anything Qwen struggles with, but with 3.5 I have been very pleased with the results.


How much VRAM do you need for that?

I squeeze Qwen3.5-122B-A10B at Q6 into 128GB. It's a great model.

Wow what kind of hardware do you have? Mac Studio, dgx spark, strix halo? How fast is it?

While they can be run locally, and most of the discussion on HN is about that, I bet that if you look at total tok/day, local usage is a tiny amount compared to total cloud inference, even for these models. Most people who do use them locally just run a prompt every now and then.

This is why I'd like to see a lot more focus on batched inference with lower-end hardware. If you just do a tiny amount of tok/day and can wait for the answer to be computed overnight or so, you don't really need top-of-the-line hardware even for SOTA results.

That’s a good point. I think I saw Together.ai with that offering, but for some reason just never think to throw random non urgent coding tasks at it overnight

> If you just do a tiny amount of tok/day and can wait for the answer to be computed overnight or so

But they can't? The usage pattern is the polar opposite. Most people running these models locally just ask them a few questions throughout the day. They want the answers now, or at least within a minute.


If you want the answer right now, that alone ups your compute needs to the point where you're probably better off just using a free hosted-AI service. Unless the prompt is trivial enough that it can be answered quickly by a tiny local model.

They are okay for vibe coding throw-away projects without spending your Anthropic/OAI tokens.

Always inside Claude Code, just using Ollama; takes 2 seconds.

I was thinking the same thing. My only guess is that they are excited about local models because they can run them cheaper through OpenRouter?

I am working on a research project to link churches from their IRS exempt-org BMF entry to their Google search result, out of the 10 fetched. Qwen2.5-14b on a 16GB Mac Mini. It works well enough!

It's entertaining to see HN increasingly consider coding harnesses the only value a model can provide.


There are really nice GUIs for LLMs; CherryStudio, for example, can be used with local or cloud models.

There are also web UIs, just like the labs' own.

And you can connect coding agents like Codex, Copilot, or Pi to local models; they support OpenAI-compatible APIs.

It's literally a terminal command to start serving the model locally, and then you can connect various things to it, like Codex.


I'm honestly curious: who on these plans, short of an unlimited enterprise budget, would even choose to burn real dollars like this beyond the subscription? What is the personal use case? It seems exorbitantly expensive after you've exhausted your subscription.

There's a weird 'token anxiety' you get on these platforms. And you basically don't know how much of this 'limit' you may consume at any time. And you actually don't even know what the 'limit' is or how it's calculated. So far, people have just assumed Anthropic will do the kind thing and give you more than you could ever use...


This reminds me of the early days of cell phones. Limits everywhere, and you paid for data by the kilobyte. I think at one point I was paying 45c per text message. I hope this gets better and we do not need gigawatt datacenters to do this stuff.


We're in the process of building new gigawatt datacenters for the sole purpose of doing this stuff. If we end up not needing them, there's gonna be a whole lot of capacity sitting around soaking up ongoing maintenance costs.

For example, of the five new data centers being planned in Wisconsin, the two I know of that have public energy-consumption estimates will, at 3.9 gigawatts, need more electricity than all of the residential electric usage in Wisconsin combined.

https://www.wpr.org/news/data-centers-could-cost-wisconsins-...


All I know is I never want to hear another person talk about how my personal electrical usage is excessive after all the power needed for these data centers. My house should be able to feel comfortable in the summer if we're building this many data centers.


Yeah, I've been juggling some patches to OpenCode to help me see where my Codex usage limits are at. As of a month ago, that information was not visible in the ChatGPT web UI.

You just work until suddenly the AI dumps you out, and sit there wondering how many hours or days you have to wait. It's incredible that this experience is considered at all OK, that it's accepted.


I wonder how crazy the scale here can get. How far can I go? The bps.space guy is heading into space. Can the community hit the moon? Literally.


Amateurs have reached the Kármán line; orbit is still pretty much out of reach. The people who get close to the Kármán line use two-stage, passively stabilized airframes and solid-fuel motors. The airframes are basically works of art, and it takes a lot of luck because of the passive stabilization and Mach 3+ speeds. Many pictures of these rockets show their paint and leading fin edges burned off when they're recovered. Propellant is expensive, and an attempt at >100k feet costs about $5-6k in propellant alone.

This guy is widely respected in the hobby and this flight made it to 293k feet https://www.youtube.com/watch?v=gmv7G6Rf5WE

Check out the liquid bi-prop engines the Halfcat guys have; apparently they were just certified by Tripoli, the HPR hobby's governing organization, which means they can be insured at sponsored launches. With a liquid-fueled engine you can do thrust vectoring (nozzle gimbaling) more easily than with solid-fuel motors, so active stabilization is more feasible. If you have active stabilization, then all you need is thrust-to-weight > 1 and enough fuel, and you'll eventually get to whatever altitude you want. Orbit means orbital velocity, and that's just a whole other ball game.

https://www.halfcatrocketry.com/


> 293k feet

89 kilometers


Space Concordia, a Canadian university space-oriented student group, which is sort of amateur-level given that it’s driven by students and donations, attempted to reach space not that long ago with a liquid-fueled single-stage rocket. Here is a video of the launch https://www.youtube.com/live/610YciEs8qg?t=4594&is=aAWo8Y7vi...


Thank you so much for sharing this video, it's just amazing to see a bunch of young amateurs getting so excited about things that would have been virtually inaccessible 20 years ago.


It’s beautiful to see. They have put in such extreme amounts of hard work to get that thing into the air. Designing a robust, affordable liquid-propelled rocket from scratch is hard. There are so many design decisions, complex simulations, manufacturing difficulties, and tests for every little part of that 11+ m rocket. Accounting for extreme forces, heat variations, vibrations, wind, atmosphere, liquid sloshing, rotation, etc. during ascent and descent. It’s not only mechanical/aviation engineering but also software, electrical, sourcing donations, documenting everything in the form of design and risk-assessment reports, etc.

You also have to try to account for every little possible failure mode before launching which is why rockets seldom succeed on the first attempt.

And then dealing with authorities to create new launch sites and permits which probably hasn’t been done in decades in Canada.


Indeed, there are so many different ways a rocket can fail. Launch rail buttons detach, motor chuffs, motor explodes, fin falls off, structural failure (banana), parachute doesn't fire, parachute doesn't deploy, parachute detaches - to name just a few.


Might be worth checking out the "Copenhagen Suborbitals" group (they have a YouTube channel) and see if they're still active! It's been years but I think I recall they were trying to build something capable of getting a person into space (not sure if orbit was a goal).


"Space" is 100km. The moon at its closest is about 350,000km.

So the jump from the former to the latter is... significant.


Distance is usually the wrong measure in space. Something like delta-v gives you much better scaling: once you manage to get something to orbit, the rest is actually a lot closer than it would seem from the ground.

Not to say the effort somehow becomes peanuts, cheap, or easy... but the jump in delta-v needed to go from "100 km vertical ascent" to "hit the moon 350,000 km away" is more like a ~6-7x increase than a 3,500x one. If the moon were instead 700,000 km away the factor would still be ~6-7x.
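The reason the factor still stings is the Tsiolkovsky rocket equation. Written out, with v_e the exhaust velocity, m_0 the wet mass, and m_f the dry mass:

    \Delta v = v_e \ln\!\left(\frac{m_0}{m_f}\right)
    \qquad\Longleftrightarrow\qquad
    \frac{m_0}{m_f} = e^{\Delta v / v_e}

The mass ratio grows exponentially with required delta-v, so even a ~6-7x jump in delta-v is an enormous jump in vehicle size.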

Cool site for delta-v estimates https://deltavmap.github.io/


Everything you've said is correct, but delta-v scales logarithmically with fuel load - you need to carry the new fuel. So for purposes of discussing altitude (a valid way to look at getting to the moon), the size of the rocket, and the fuel expended, does in fact grow much closer to linearly.

I think I'll go land on Mun and Minmus now...


What I actually started with was comparing Electron to the current bps.space rocket and seeing that the relationship was nowhere near linear. The above is the largest component of the why that I could think of, but there is always more than one thing going on.


Wow, even as a bit of a rocket nerd I'd never thought about it that way, that's pretty cool!


And you need a serious amount of money, effort, and expertise to reach 100 km with a rocket.


Amateur rocketry achieving orbit would be significant. Reaching the moon would be substantially more difficult.


As crazy as this seems, it's unlocking another variation of software engineering I didn't think was accessible. Previously, super entrenched and wickedly expensive systems that might have taken years of engineering effort suddenly appear ripe for disruption. The era of software systems with deeply engineered connectivity seems to be on the outs...


Are there evals showing how this improves outputs?


Improves outputs relative to what? Compared to previous contexts, going to 1M improves outputs by allowing them to exist at all (because previously you couldn't exceed 200K). Compared to contexts of <200K, it degrades outputs rather than improves them, but that's what you'd expect from longer contexts. It's still better than compaction, which was previously the alternative.


There's a vibe in at least the PNW that feels like the tech sector is sloughing jobs and avoiding creating new ones courtesy of AI. I genuinely wonder if that feeling is backed by reality and whether it's large enough to be translating into national statistics across all industries.


In Washington it is much broader than the tech sector.

Washington is being buried in indefensibly bad legislation that is extremely hostile to large companies and tech companies of every size for openly ideological reasons. It has rapidly become one of the worst business environments in the country when it used to be one of the best. Many companies have stopped or reduced hiring in Seattle and are moving operations to other States; there is a new announcement in the news every other day.

I know several longtime residents that have recently moved out of State or are no longer domiciled there as a consequence. There was an article in the news just this week that housing prices are starting to decline rapidly in Seattle.

It is looking like they couldn't help themselves and killed the golden goose.


Which policies specifically? Certainly not the income tax on million+ income, which seems pretty modest. We moved from TX. Property tax rate is low, no income tax sub-million in income, schools are great (and almost all new), roads are fine, and transit is seeing massive investment. They definitely need to fix the budget, but there's _ample_ wealth here to deal with it. I think they'll figure it out.

_Oregon_ has bad policies (10% income tax on all, upwards of 14% on high income earners at 400k); schools are in a rough place, their legacy pension system is a disaster. But Washington seems fine imo. TX and such states will always be a draw while their cost of living is low, if you don't mind the heat and general lack of outdoors (relative to PNW). IMO the weather and housing prices are the main tradeoffs between WA and TX.


You can add in the increasing B&O (revenue) taxes, payroll taxes, data center taxes, and the expansion of the extremely high sales taxes to things that effectively make Washington uncompetitive. The cost of doing business has become unreasonably high and is so badly structured that it creates perverse incentives for how you organize business.

And then you have a litany of new business regulation across every sector of the local economy. My recent favorite, which fortunately did not make it out of this session due to heavy lobbying by tech, was requiring data centers to turn off power during periods of high electricity demand. It's insane that this is even being seriously considered.

Oregon is also a mess but it has always been a mess.

Texas isn't the only alternative. Turning Washington into California with worse weather even makes California relatively attractive.


>You can add in the increasing B&O (revenue) taxes, payroll taxes, data center taxes, and the expansion of the extremely high sales taxes to things that effectively make Washington uncompetitive.

None of this matters. We have been hearing how California is doing the same shit for years and people are moving out in droves, but it turns out California house prices are still high because people are staying there, and it's still a very good place to live and work on average, despite a way higher cost of living.

So Washington is going to do just fine.


Oregon has some decent things going for it. Multnomah county is rolling out Preschool for All and it's wildly popular. I know lots of people who were going to move, but stayed in Oregon just because they got into the early lottery for it.


There’s no way preschool for all is broadly popular.

It soaks the “rich” with an income threshold that isn’t indexed to inflation and kicks in at an income level where preschool is still a major affordability challenge.

And then you pay PFA and don’t get preschool for your kid because we’re still years away from having enough seats for everyone.

So it is preschool for some (multco paying for seats in existing preschool, aka kicking your kid out of their preschool spot) paid for by the broad middle class.

Even Kotek was ragging on it.

2020’s 125k/200k thresholds should be today’s 150/250 thresholds. They are not.

https://www.opb.org/article/2025/06/26/kotek-multnomah-count...


This is all a temporary problem. PFA will roll out to everyone, income thresholds can be (and are) renegotiated, and as someone who has a large PFA tax burden, I'm happy to pay for it even if my kids will age out before I get the benefit. I have never met anyone outside of ranting internet commenters who is actually mad about this situation.

Establishing free universal child care as the norm that everyone agrees we have to find a way to provide is the real virtue here. Detractors like you are missing the forest for the trees.


“Long term care tax”


When did blatantly unconstitutional laws become modest?


Why are income brackets unconstitutional?


I don't think they killed the goose at all.

The tech companies killed the golden goose that was handed to them. They got too greedy. Amazon basically got carte blanche to build in Seattle, and plenty of tax credits to do so.

Amazon and their founder then told WA gov that they were going to relocate to Florida. WA gov said "well, we paid billions for your infrastructure, so if you're going to leave, please partially refund us" and Bezos whined and whined and whined. Imagine, a guy worth (at the time) nearly half a trillion dollars being told that he should have to pay a few hundred million dollars for his broken promises.

Imagine being given incredibly generous tax incentives for decades that allowed you to build a multi trillion dollar company, and then whining when the giver of those incentives asks for a tiny portion of that to be paid back when you tell them you're leaving.


For readers not in Washington, there is currently legislation being worked on that is essentially a millionaire's tax, (simplified as) 10% income tax on income over 1 million dollars, inflation adjusted.

There are a few very angry, emotional, and vocal opponents of this in most corners of the internet, although very few of them actually make a million dollars and there are many million+ income people supporting this.

Demographically, there are over 3 million households in WA, and only 20k of them would be affected.


The bigger news is that it would be WA's first-ever income tax, along with the tax on capital gains income they just introduced. You can look at any historical example of introducing income tax in the US to see that the rates always expand to lower brackets over time.


Ahh, another favorite talking point. Yes, because the tax burden is already carried by the people you claim to worry about


Those people you wanna tax will just wfh from another state. Then you'll wonder why tax revenue is down and why no one is hiring.


People aren't leaving Seattle to save a small amount in taxes every year


I lived in the Seattle area and would be affected by some of these taxes. I moved to California recently. WA lost its tax advantage, so if I’m now going to be paying the same taxes, I might as well enjoy better weather and schools for my kids.


So, you _already_ moved, _for non tax reasons_


Maybe the opponents consider it a foot in the door; a wedge that can be expanded gradually to include lower tiers at lower percentages, AKA the beginning of a WA state income tax. There are more than a few 400k households in Seattle.

The majority of states have one so it's not that big a deal, but it'll be less often said "I'm going to turn down this higher SF offer for Seattle b/c of lower COL...".

I'm not sure where the next refuge will be. Austin? Memphis?


And there is such a small thing as the state constitution, which explicitly forbids any income tax.

The current government is using it as toilet paper, first by introducing a capital gains tax, and now an income tax.

I see in other comments, though, that you argue in bad faith by dismissing opponents' arguments as a "small amount" or "talking points". If you don't have anything real to say, don't bother to answer.


The state constitution does not forbid an income tax. We both know it is more nuanced than that. Don't accuse me of bad faith in the same comment that you present an inaccuracy in the form of simplification that suits your argument


There is nothing nuanced about that. Look in two places and read it for yourself. Stop spreading lies.

—-

https://app.leg.wa.gov/RCW/default.aspx?cite=1.90.100

RCWs > Title 1 > Chapter 1.90 > Section 1.90.100

RCW 1.90.100

Personal income tax prohibition.

Neither the state nor any county, city, or other local jurisdiction in the state of Washington may tax any individual person on any form of personal income. For the purposes of this chapter, "income" has the same meaning as "gross income" in 26 U.S.C. Sec. 61.

——

https://uscode.house.gov/view.xhtml?req=granuleid:USC-prelim...

Gross income defined (a) General definition Except as otherwise provided in this subtitle, gross income means all income from whatever source derived, including (but not limited to) the following items: (1) Compensation for services, including fees, commissions, fringe benefits, and similar items; (2) Gross income derived from business; (3) Gains derived from dealings in property; (4) Interest; (5) Rents; (6) Royalties; (7) Dividends; (8) Annuities; (9) Income from life insurance and endowment contracts; (10) Pensions; (11) Income from discharge of indebtedness; (12) Distributive share of partnership gross income; (13) Income in respect of a decedent; and (14) Income from an interest in an estate or trust.


Similar thing in California. And good ol' front-runner for president Gavin Newsom is actively trying to kill it.

Just to remind you that he's still indeed an Establishment Democrat. He won't drown us in fascism, but he sure isn't fighting for the working class.


Golden Goose? WA has a massive budget shortfall.


There has been zero accountability for that massive budget shortfall. Revenue has increased 2x over the last decade with nothing to show for it. People are rightly skeptical of giving them even more money. And they have gone about trying to increase revenue even more in just about the most toxic ways possible, which will almost certainly erode the tax base.

That state desperately needs to restructure its finances, but the legislature is almost completely captured by clueless ideologues. Washington isn't California. Most of the attraction of living there historically was its extremely business-friendly environment.

I've lived a large fraction of my life in Washington and I'm watching the State commit suicide in real-time.


> Most of the attraction of living there historically was its extremely business-friendly environment.

How old are you? What propaganda told you this? In my generation (young millennial/Gen Z), the attraction of living in Seattle, which has pulled in me and almost a dozen professional friends at this point, has been:

- high quality urban living in a temperate environment. Including access to great parks, waterfront, bikeability in the city

- access to great outdoors and regional amenities like skiing, ocean fishing, hiking, wine country

- liberal policies and general friendly society (it’s friendlier here than the east coast)

- no state income tax (we’re all very high tax bracket)

- a high enough income population that you can find a plethora of high-end products and services that cluster around high income earners (only a few us cities have this stronger than Seattle I feel)


Oregon ticks most of those boxes; the difference is that Oregon has very few jobs. People flock to WA because of jobs created by long-standing business-friendly policies.

That doesn't explain everything, obviously, but I think you need to take it into consideration. For decades I've heard this in some form from people: "Oregon is amazing, but I had to leave when I couldn't get a job." Meanwhile the Sea-Tac region has had amazing growth, packed wall-to-wall with a range of companies.


I agree, difference between explosive growth and “consistent draw” is large employers setting up in the region.

Another interesting anecdote: I know many people who work remotely for companies all over the world who moved to the Seattle area once they had a remote job. I am one of those people. I'm not sure what kind of impact this has in the long run. I think the flywheel drawing high-skill people to Seattle is still very strong.


Oregon is on the other end of the continuum when it comes to income taxes ;-)

If you're not too high an income earner, the Oregon income tax is worse than California's.

And no, Washington's sales tax doesn't come close to the Oregon income tax.


Weather is worse in the Portland area; it can be a good few degrees warmer than Seattle in the summer.


For my demographic (early Gen Z), there are only 3 reasons to be here:

A. Their job is only available here

B. No state income tax

(C?). They REALLY love skiing/hiking

People have always regularly left for NYC/Bay Area, but I predict it will start to happen in droves over the next few years as A rapidly fades and legislation begins to threaten B.


Have you read about _where_ the budget is going? You are complaining about accountability without offering a diagnosis or showing any understanding of what is actually happening.

The budget expansion is almost entirely driven by Medicaid.

Looking at 2019-2023

* Human Services: +~50% nominal → ~+22% real — biggest absolute dollar growth, driven almost entirely by Medicaid expansion and COVID enrollment

* K-12: +23% nominal → ~0% real — flat in purchasing power

* Higher Education: +~20% nominal → ~-2% real — slight real decline

* Government Operations: +~30% nominal → ~+6% real — modest real growth, headcount/compensation driven

* Natural Resources: +~25% nominal → ~+2% real — roughly flat

* Total Budget: +43.5% nominal → ~+17% real


Seattle and Portland OR are ground zero for the burgeoning anti-AI movement.


Will you elaborate on the “indefensibly bad legislation”?


Like poor people have always understood, assume taxes always go up. Time for the rich to learn this lesson as well.


I don't know either. I do wonder if AI is just an excuse, since saying "we have to let people go because the economy is bad and our costs are up" spooks investors, while "we adopted magic AI and don't need people anymore" sounds like these companies are being proactive, so investors don't dump their stocks.


They also want to get as close to a skeleton crew as possible. They believe developers can do everything while simultaneously driving down the cost of developers.

They've been boiling the frog with increasing job requirements for at least a decade or two, and AI is conveniently aligned with this goal.


Considering the first companies to claim AI has made many redundant are the same companies that overhired during Covid, I think it's pretty clear how the wind is blowing.

Companies move in a group, if you're the only company doing layoffs you look weak and predators will pounce and the board will ask uncomfortable questions, but if everyone is doing it, they'll ask why you are NOT.


The idea doesn't really make sense to me. We know LLMs increase productivity, especially for coding, but increasing productivity shouldn't make you fire people unless your business has already exhausted any potential for growth. Instead we would expect the increased productivity to grow businesses further and increase hiring for all other tasks that LLMs are still not good at.


> Instead we would expect the increased productivity to grow businesses further

This assumes infinite demand which is not a good assumption imo. Especially if people are losing their jobs.


You're right. My point is that AI isn't at the heart of the job shedding, it's just a scapegoat for other structural problems in the economy.


> This assumes infinite demand which is not a good assumption imo.

Yes, but "AI replaces people by improving productivity by 20-50%" is clearly a case of https://en.wikipedia.org/wiki/Lump_of_labour_fallacy. So maybe the "people are losing their jobs" is just totally unrelated to AI . . . but people keep repeating that "companies can do same work with fewer people thanks to AI" nonsense, so there will always be a need to remind them how actual economics work.


In a shrinking economy there isn't much growth. They can take the productivity gains to shrink their payrolls and get the same output with fewer people

That said I don't think there is a ton of productivity growth yet with LLMs that would show up in the numbers that are getting thrown around. Companies are just finally seeing that they have a bunch of people not doing much at all and cleaning house


Yep, no disagreements from me there. Ultimately, economic stagnation is what's driving job losses, not increased productivity (such that it is) from AI.


I personally feel that people are coming to realize that whatever they build can be copied in a short amount of time so its value is much lower than it would have been in the past. So what's worth building?


AI is killing the notion that SW companies are infinite money printing machines. The idea is that someday soon (in the next 5-10 years, as markets are forward-looking), someone will vibe-code a replacement for Photoshop/TurboTax/Office, and if nothing else that will kill the profit margins. This changes the entire economics of SW and affects current hiring and spending.


Quite the opposite. I just spent the past month "vibe coding" a pretty serious program in C. The tl;dr is: yes, I can build faster, but I'm still sitting there testing, debugging, and focusing on specific features as I go, and that's still a human limitation. The AI productivity is pivoted directly into higher complexity of features. It's not a magic wand that immediately builds a program that works perfectly out of the box. The zeitgeist just hasn't caught up to the reality of that.


And that's cool but your experience is not what the market is trading. The vibe is that vibe-coding will come together in the next few years and SW margins will be hit. That doesn't mean it's the reality, just that it is what the market is thinking.


Yes. I'm saying the market is wrong.


I'm not sure how much of it is actually AI vs. just, like, the bags of VC money have dried up and most tech companies can't come anywhere near justifying their personnel, or often even their existence, without it.

Like companies have been doing the RTO "stealth" layoffs for years now, it's not even news anymore, this was already well underway.

There is also the obvious priapism of owners and investors to finally do to the remaining white collar workers what they have already done to everyone else. Whether or not AI actually can replace all these workers is nearly moot, they have fantasized about business without labor for so long they can't tell the difference from reality anymore.


Where is the money that was going to VC investments going now? With increasing inequality, I figure rich people have more money than ever that they need to figure out where to invest.


Interest on debt; shoring up the financial vehicles and insurance through which they diffused the catastrophic losses of their bad bets from the past few decades; stockpiled for the inevitable economic collapse and the feeding frenzy that will follow; land.


I just want everyone to understand that part of why everything is so expensive today is because our elite funneled the surpluses from the electronic revolution into boondoggles that not only didn't make back what they cost, but that demand even more labor, to this day, to cover maintenance and interest.

>Yeah, screw DEI!

lmao I'm talking about wars; sprawl; advertising and consumerism; wasteful or gatekept luxuries; foot-dragging on any number of technologies and policies that could have mitigated the damage, just to please incumbents.

We temporarily made life spectacularly better for like 5-10% of the population, and doomed everyone to either generations of toil, or a hard reset in the form of a "burn it all down" revolution.


For a long time interest rates were incredibly low which led a ton of investors to put money into VC funds, despite their very high risk.

When interest rates go up, money floods out of higher risk higher return areas like company formation, and floods back into buying bonds, so investors can collect the low-risk interest that didn't exist before.


The big money is going to the OpenAI/Anthropic types producing foundation models that have to raise billions on a regular basis. This is money that would normally be spread across the startup ecosystem, instead of being concentrated in a handful of massive companies. When it finally hits IPO, I'd bet you see it start to get freed up for new investments.

Just to drive the point home, in 2019 the total VC market was ~$300 billion. To date, roughly $235 billion is tied up in just OpenAI ($168b) and Anthropic ($67b)


More money is flowing into commodities. Gold price going up feeds into more mining.


Real estate?


If so, not commercial. Commercial has been in a slow collapse + shell game of shifting the debt burden.


Real estate is not doing well, it's "stalled" but not collapsing, but prices are staying steady and mortgage rates are not down to Covid levels.


There were definitely some companies that clearly overhired during Covid that are now "resetting" and blaming/crediting AI is certainly an excuse they can use.


I can only speak from anecdotal experience in that I just witnessed this week dev team leads and architects “replaced” by Claude Code; they kept the offshore junior-mid coders and are giving them $20/mo Pro accounts… (doesn’t that seem a little backwards?)


Always good to see an A/B test done.


This is absolutely backwards.


AI isn't causing the job losses in health and hospitality.


I mean Dorsey literally just said publicly that he’s laying off people in order to utilize AI

like, what clearer point do you want?

Whether or not you believe this is a good or bad move, an honest or dishonest one, or whether AI is capable or not,

“AI” is the reason that CEOs are utilizing to cut roles

The timing of this is based on the fact that capital is on strike, declining to deploy money to anything outside of the largest deals that include AI as a promise of higher profits.

But ultimately it comes down to the fact that the people in control, with all the money, believe the future is going to need fewer human workers, and they are prioritizing giving money to organizations that will shed their workforces in order to run an experiment in AI capturing value on behalf of investors without the additional overhead of personnel.


Dorsey is in a huge bind with runway and lack of revenue. Blaming AI for a massive cut needed just to get by lets investors trick themselves into believing that he has a plan to grow the company to the level of profitability that the stock price suggests will happen.

And perhaps Dorsey has a long enough of a runway for something to come along to save the company from eventual collapse. Maybe not, since firing 40% of a company tends to put a damper on innovative efforts that would massively grow revenues.


I think the point is that these tech leaders can be saying "AI" to appeal to their board/shareholders, but the truth is more mundane typical reasons for layoffs (bad economy, overhiring, offshoring, bad debt, etc).


Or it’s possible he was lying!

If Block is really so much more efficient, while doing well, they should invest that talent into expanded products and services. But that’s not what we’re seeing.

Some things:

- They acquired AfterPay for $29bn. Their market cap today, after the big AI bump, is $40bn. BNPL did not pay off the way payments companies thought it would.

- They have a weird internal combination of Cash and Square and AfterPay internally. They’re not as unified as they ought to be.

This feels more like Jack coming to terms with a company that’s hugely inefficient organizationally. It’s easier to clear out thousands of people and rebuild.


Sure, a CEO has never lied before about the reasons for layoffs.


I think COVID ruined people's ability to think critically. So many people, in journalism and across the economy, are just taking the words of others (often those with malicious intent) with zero critical thought applied.

For Block's case they have had multiple layoffs over the last 5 years, hardly the sign of an AI apocalypse and more of a sign of a business leader that only survived because of free money.


I agree 100%. I think that many business "leaders" will use AI as a cudgel to control their budgets.

