Yes, and also:
Is it good for a factory for when the average worker has three jobs and only gets six hours of sleep? Sounds like a recipe for recalls...
In fairness, homes in Detroit can be quite affordable. There are quite a few 2,000 sq. ft. homes listed for <$150,000 that may be 80 years old but are updated with a modern interior look. These would be affordable to a 22-year-old factory worker making $80,000 ($40/hr) with no college debt, even if they didn't perfectly optimize their budget.
Homes in good school districts are likely to be less affordable, though, and the cost of raising children is a whole other can of worms.
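To put rough numbers on that affordability claim, here's a back-of-the-envelope sketch. The $40/hr wage and ~$150,000 price come from the comment above; the 7% mortgage rate, 10% down payment, 30-year term, and 2,080-hour work year are my own assumptions for illustration:

```python
# Rough affordability sketch. Only the $40/hr wage and ~$150,000 price
# are from the thread; the rate, term, and down payment are assumed.
hourly_wage = 40.0
hours_per_year = 2080                        # standard full-time, no overtime
gross_income = hourly_wage * hours_per_year  # $83,200/yr

price = 150_000
down_payment = 0.10 * price                  # assumed 10% down
principal = price - down_payment
rate = 0.07 / 12                             # assumed 7% APR, 30-year fixed
n = 30 * 12

# Standard amortization formula for the monthly payment
monthly_payment = principal * rate / (1 - (1 + rate) ** -n)

housing_ratio = monthly_payment * 12 / gross_income
print(f"${gross_income:,.0f}/yr gross, ${monthly_payment:,.0f}/mo mortgage "
      f"({housing_ratio:.0%} of gross income)")
```

Under these assumptions the mortgage comes out to roughly $900/month, about 13% of gross income, comfortably under the common 28% housing-cost guideline, which is consistent with the comment's point.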
I think the parent's point is more that there are so few of those workers that you need seniority for any job security in the first place. The 22-year-olds are the first people getting fired in a scheme like this, and there's not exactly a burgeoning demand for their replacement.
The UAW-Ford contract doesn't allow that; the 1-2 year employees would be fired before the 22-year-old employees with 4 years of seniority. We're specifically talking about union jobs (UAW), so the company isn't generally targeting specific workers to fire.
The union has a lot of control over which employees leave during layoffs, and often it's actually the workers with less seniority in the union who are let go. Generally unions negotiate for a "last hired, first fired" situation (youngest employees go first) and often more senior employees have "bumping rights" - if their entire specialization is eliminated, they can move laterally by displacing junior workers. It's also fairly difficult for Ford to individually target the most expensive workers because the UAW does a good job of making sure there's real cause for termination.
UAW's agreement with Ford contains the usual "last hired, first fired" provision on page 80 here[0] in Article VIII §16(c). "Bumping" is part of the agreement on page 79, VIII §13(b).
Perhaps I misunderstood you and you meant that it's hard for workers to get to the "4 year" mark in the first place because Ford can just churn the 1-3 year workers over and over again. The UAW contract also contains a "Preferential Placement Arrangements" clause which gives laid-off workers priority for re-hire whenever Ford is hiring again. Workers can lose their seniority - but there's a bit of a ratchet effect, they have to stay unemployed by Ford for a length of time equal to how long they were employed. So if they've worked there for 2 years, get fired, and are next in line to be rehired 18 months later, they'll enter back in with 3.5 years of seniority from their original date of hire.
Ford had no WARN layoffs[1] between 2012-2018 and only 3 in the past 6 years (affecting 4200 workers in total), so generally it seems that workers are able to achieve top-end wages and full seniority before being laid off.
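The ratchet described above can be sketched as a small function. This is just my reading of the comment's example, not contract language; the function name and shape are my own:

```python
def seniority_on_rehire(years_worked, months_laid_off):
    """Sketch of the ratchet described above: a laid-off worker keeps
    accruing seniority from their original date of hire, unless the
    layoff lasts at least as long as their prior length of service,
    in which case seniority is lost entirely."""
    months_worked = years_worked * 12
    if months_laid_off >= months_worked:
        return 0.0  # unemployed as long as they were employed: seniority lost
    # Otherwise seniority still counts from the original date of hire
    return (months_worked + months_laid_off) / 12

# The comment's example: 2 years worked, rehired 18 months later
print(seniority_on_rehire(2, 18))  # -> 3.5 years
```

Note the cliff: at 18 months laid off the worker re-enters with 3.5 years of seniority, but at 24 months (equal to their 2 years of service) they would start over from zero.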
Yep. It's a difficult needle to thread. Seniority IS important, and there really needs to be a way to keep that knowledge. On the other hand, people retire at some point... And sometimes a company lays off the newest batch of people (no matter how good they were at their jobs) and replaces them with even newer people. That's what happened in my layoff: my "Team B" at the second-longest job I've held (8 years) was replaced by an incoming team that at least got more training than we did. We got 2 weeks; they got 3 months. We were meant to be expendable, but we proved ourselves during the pandemic with a massive influx of work-from-home installs, while other people quit or burned out from it all.
I still support unions, even if in the US they're compromised, primarily (IMO) due to Taft-Hartley and the political restrictions placed upon them (mostly restrictions on cross-shop/cross-trade organizing, general strikes, sympathy strikes, etc.).
The best part was union leadership informing us of the "good news" that there wouldn't be 2 different pay grades. And at least in the interim, while negotiations were ongoing, we did get a bump to "tier 1" before we were let go.
My point was that we shouldn't pretend "over $40 an hour" for senior staff is generous. A young person would still be better served financially learning a trade than working on the line at Ford.
We need a different metric of wealth generation than home ownership. It used to be very safe, but now it's an anchor as much as it is a lifeline. Home ownership can't pay off forever; eventually either mortgage rates, supply/demand, or construction costs will force homes to either lose value or become too expensive. Those with a house or mortgage are subject to the whims of the market.
Home ownership is tax-advantaged because the government wanted to create a real estate market. Well, it's here, but it's not that great anymore, and most people don't get to take advantage of it. So Congress needs to make paying rent tax-deductible.
> No it isn't. There's literally nothing about the process that forces you to skip understanding. Any such skips are purely due to the lack of will on the developer's side
This is the whole point. The marginal dev will go to the path of least resistance, which is to skip the understanding and churn out a bunch of code. That is why it's a problem.
You are effectively saying "just be a good dev, there's literally nothing about AI which is stopping you from being a good dev" which is completely correct and also missing the point.
The marginal developer is not going to put in the effort to wield AI in a skillful way. They're going to slop their way through. It is a concern for widespread AI coding, even if it's not a concern for you or your skill peers in particular.
To add to the above - I see a parallel to the "if you are a good and diligent developer, there is nothing to stop you from writing secure C code" argument. Which is to say: sure, if you put in the extra effort to avoid all the unsafe bits that lead to use-after-free bugs or race conditions, it's possible to write perfect C (or even perfect assembly), but in practice we have found that using memory-safe languages leads to a huge reduction in safety bugs in production. I suspect we will similarly find that not using AI leads to a huge reduction in bugs in production, once we have enough data to compare against human-generated systems. If that's a pre-existing bias, then so be it.
> The marginal developer is not going to put in the effort to wield AI in a skillful way. They're going to slop their way through. It is a concern for widespread AI coding, even if it's not a concern for you or your skill peers in particular.
My mental model is that coding with LLMs amplifies both what you know and what you don't.
When you know something, you can direct it productively much faster to a desirable outcome than you could on your own.
When you don't know something, the time you would normally have spent researching to build a sufficient understanding can instead be spent evaluating the random stuff the LLM comes up with, which often works, but not in the way it ought to. And since you can get to some result quickly, the trade-off of doing the research feels somehow less worth it.
If you don't have any idea how to accomplish the task, you probably need to cultivate the habit of still doing the research first. Wielding these tools skillfully is now the task of our industry, so we ought to be developing that skill and cultivating it in our team members.
I don't think that is a problem with AI, it is a problem with the idea that pure vibe-coding will replace knowledgeable engineers. While there is a loud contingent that hypes up this idea, it will not survive contact with reality.
Purely vibe-coded projects will soon break in unexplainable ways as they grow beyond trivial levels. Once that happens their devs will either need to adapt and learn coding for real or be PIP'd. I can't imagine any such devs lasting long in the current layoff-happy environment. So it seems like a self-correcting problem no?
(Maybe AGI, whatever that is, will change things, but I'm not holding my breath.)
The real problem we should be discussing is, how do we convince students and apprentices to abstain from AI until they learn the ropes for real.
> The real problem we should be discussing is, how do we convince students and apprentices to abstain from AI until they learn the ropes for real.
That's just it. You can only use AI usefully for coding* once you've spent years beating your head against code "the hard way". I'm not sure what that looks like for the next cohort, since they have AI on day 1.
> The real problem we should be discussing is, how do we convince students and apprentices to abstain from AI until they learn the ropes for real.
Learning the ropes looks different now. You used to learn by doing, now you need to learn by directing. In order to know how to direct well, you have to first be knowledgeable. So, if you're starting work in an unfamiliar technology, then a good starting point is read whatever O'Reilly book gives a good overview, so that you understand the landscape of what's possible with the tool and can spot when the LLM is doing (now) obvious bullshit.
You can't just Yolo it for shit you don't know and get good results, but if you build a foundation first through reading, you will do a lot better.
Totally agreed, learning the ropes is very different now, and a strong foundation is definitely needed. But I also think where that foundation lies has changed.
My current project is in a technical domain I had very little prior background in, but I've been getting actual, visible results since day one because of AI. The amazing thing is that for any task I give it, the AI provides me a very useful overview of the thing it produces, and I have conversations with it if I have further questions. So I'm building domain knowledge incrementally even as I'm making progress on the project!
But I also know that this is only possible because of the pre-existing foundation of my experience as a software engineer. This lets me understand the language the AI uses to explain things, and I can dive deeper if I have questions. It also lets me understand what the code is doing, which lets me catch subtle issues before they compound.
I suppose it's the same with reading books, but books being static tend to give a much broader overview upfront, whereas interacting with LLMs results in a much more focused learning path.
So a foundation is essential, but it can now be much more general -- such as generic coding ability -- though that only comes with extensive hands-on experience. There is at least one preliminary study showing that students who rely on AI do not develop the critical problem-solving, coding, and debugging skills necessary to be good programmers:
On vibe coding being self-correcting, I would point to the growing number of companies mandating usage of AI and the quote "the market can stay irrational longer than you can stay solvent". Companies routinely burn millions of dollars on irrational endeavours for years. AI has been promised as an insane productivity booster.
I wouldn't expect things to calm down for a while, even if real-life results are worse. You can make excuses for underperformance of these things for a very long time, especially if the CEO or other executives are invested.
> The real problem we should be discussing is, how do we convince students and apprentices to abstain from AI until they learn the ropes for real
I hate to say it but that's never going to happen :/
I'm a bit cynical at this point, but I'm starting to think these AI mandates are simply another aspect of the war of the capital class on the labor class, just like RTO. I don't think the execs truly believe that AI will replace their employees, but it sure is a useful negotiation lever. As in, not just an excuse to do layoffs but also a mechanism to pressure remaining employees: "Before you ask for perks or raises or promotions, why are you not doing more with less since you have AI? You know that soon we could replace you with AI for much cheaper?"
At the same time, I'll also admit that AI resistance is real; we see it in the comments here for various reasons -- job displacement fears, valid complaints about AI reliability, ethical opposition, etc. So there could be a valid need for strong incentives to adopt it.
Unfortunately, AI is also deceptively hard to use effectively (a common refrain of mine). Ideally AI mandates would come with structured training tailored to each role, but the fact that this is not happening makes me wonder about either the execs' competency or their motives.
Anthropic's valuation is 10% of Google's. The 35x figure to get an equivalent multiple is correct (well, actually closer to 7x, as another comment thread rightly pointed out: Anthropic is apparently on track to multiply its revenue by 5 in 2025).
I tend to agree, which makes it all the more amusing that companies brag about being so selective. It seems like largely artificial and random selectivity.
I'm confused by this post because I think Sorbet satisfies basically all the things the author wants, and my experience with Sorbet has been really good!
It pales in comparison to what the author is talking about: editor support, for instance, is not good; it took them an awful lot of time to add support for linux-aarch64; it's in general rough around the edges (you have to maintain various custom type files for gems it cannot auto-generate type info for); and overall it feels like a chore to use.
He is not a "courseboi". SaaStr is a legit brand that's been around for a long time focusing on the sales side of SaaS.
You have to remember this is someone who is almost certainly completely non-technical and purely vibe coding. He won't know what things like code freeze, rollbacks, production database, etc actually mean in real engineering terms and he is putting his full trust in the LLM.
Very few people want to host/organize other people.
The end goal of throwing parties shouldn't be friendship or getting invited to other people's parties; it's building a large, loose network of acquaintances/shallow friends and becoming a super-connector.
If you ONLY want to make friends or get invited to parties I think focusing on finding specific people and spending time with them 1:1 is a much better way to do that.