
It's unfortunate Inverted Triangle CSS (ITCSS) isn't more popular. Instead of resisting the cascade, it embraces it and makes it work for the developer.

The summary: write your CSS in specificity order [1]:

    /scss/
    ├── 1-settings          <- global settings
    ├── 2-design-tokens     <- fonts, colors, spacing, etc.
    ├── 3-tools             <- Sass mixins, CSS functions, etc.
    ├── 4-generic           <- reset, box sizing, normalize, etc.
    ├── 5-elements          <- basic styles: headlines, buttons, links
    ├── 6-skeleton          <- layout grids, etc.
    ├── 7-components        <- cards, carousels, etc.
    ├── 8-utilities         <- utility and helper classes
    ├── _shame.scss         <- hacks to be fixed later    
    └── main.scss
ITCSS basically does away with specificity wars in a CSS codebase. Usually the only place !important appears is in the utility layer.
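
For the curious, the main.scss entry point just pulls the layers in, in that order. A rough sketch using Sass modules (the file names follow the illustrative tree above; classic ITCSS write-ups use @import instead of @use):

    // main.scss -- layers imported from lowest to highest specificity
    @use '1-settings' as *;
    @use '2-design-tokens' as *;
    @use '3-tools' as *;
    @use '4-generic' as *;
    @use '5-elements' as *;
    @use '6-skeleton' as *;
    @use '7-components' as *;
    @use '8-utilities' as *;
    @use 'shame' as *;  // hacks to be fixed later; loaded last so they win

Because each layer is at least as specific as the one before it, source order and the cascade point the same way.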

[1]: https://matthiasott.com/notes/how-i-structure-my-css


This is brilliant; I was not aware of ITCSS. Thank you for sharing! The link you shared fits my brain a lot better than pure BEM/CUBE, which works but always felt weird and uncertain to me. Sprinkling a bit of BEM on top of ITCSS feels just right. shame.scss is the snarky cherry on top. Thanks again, you have enlightened at least one person today! :)

> tailwind frees you from having to spend excessive time building abstractions of styles/classes that will invariably change.

Abstractions like a hero image, a menu, a headline? Sure, it's easy to overthink things but most of the time, it's not that complex.

> placing the styles directly into the markup that is affected by it reduces cognitive load, prevents excessively loose selectors

In my opinion, it's the opposite. Besides the obvious violation of DRY and the separation of concerns, inline CSS can't be cached and it creates a lot of noise when using dev tools for debugging. It actually increases cognitive load because you have to account for CSS in two different locations.

Lots of people use Tailwind because they don't want to deal with the cascade, usually because they never learned it properly. Sure, back in the day, the web platform didn't provide much built-in support for addressing side effects of the cascade, but now we have @layer and @scope to control the cascade.
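
For anyone who hasn't tried them yet, a quick sketch (layer and class names made up for illustration):

    /* Declare layer order up front; later layers beat earlier ones,
       regardless of selector specificity. */
    @layer reset, elements, components, utilities;

    @layer components {
      .card h2 { color: navy; }
    }

    @layer utilities {
      .accent { color: rebeccapurple; }  /* wins over .card h2 despite lower specificity */
    }

    /* @scope limits where selectors apply: images inside .card,
       but not inside its .card__footer */
    @scope (.card) to (.card__footer) {
      img { border-radius: 0.5rem; }
    }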

Tailwind took a remnant of '90s web development (inline CSS) and built an entire framework around it. I get why it appeals to some people: it lets you use CSS without requiring an understanding of how CSS works, beyond its most basic concepts.


Your framing assumes incompetence across the board, which is unlikely to be true for a framework this popular. Consider instead that competent people are working on projects with different needs; they’ve recognized there are trade-offs to both approaches and still decided Tailwind makes sense in their situation.

To be honest, CSS had the cascade, but for a long time it also had horrible tools for actually managing it.

If CSS had had nesting, variables, media queries, other nice selectors like :has, and modules out of the gate, we likely would not have needed much of the tooling, like Tailwind, that eventually got built to manage it all with less boilerplate. We built the tools because even when these features rolled out, they came in fits and starts, so you couldn’t adopt them without polyfills and whatnot.
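
To be fair, most of that list is native now. A quick sketch of what ships in current browsers (selectors and property names made up for illustration):

    :root {
      --accent: rebeccapurple;  /* custom properties, aka variables */
    }

    .card {
      color: var(--accent);

      /* native nesting, no preprocessor required */
      &:hover {
        text-decoration: underline;
      }
    }

    /* :has() -- style a container based on its contents */
    form:has(input:invalid) {
      outline: 2px solid crimson;
    }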


I disagree with that conclusion. I see Tailwind as a cleaner, more succinct version of CSS that is much easier to manage and add features to.

Sure, it’s not as DRY, but I’ve been bitten in this regard because CSS frameworks and templates are so opaque, preventing me from simply changing a padding or margin.

CSS is too detailed and too verbose. Frameworks like bootstrap are too high level and don’t give enough control. Tailwind hits the sweet spot whilst allowing me to be detailed if I want to. It allows me to just get it done.


Premature DRY and premature attempts at separation of concerns have resulted in absolutely horrible spaghetti code in too many codebases.

Many times it's fine to repeat yourself. Many times it's fine for a component to cross multiple concerns.


They can’t disclose the technical details yet. They did say a detailed write-up is coming.

> Nicholas Carlini, for example, whose name is on many of the recent high-profile Mythos findings is not just some random dude with a Claude sub on his credit card .... he's an experienced security researcher.

I don’t think Mythos is hype, for all kinds of reasons.

Anthropic is a young company but their track record is solid; they don’t seem to hype things just for the sake of hyping things. Sam Altman at OpenAI? We already know his track record…

I’m going Occam’s razor here: the simplest explanation is usually the correct one.

Anthropic had an “oh shit” moment when they realized what Mythos can do. They decided to do the responsible thing: give the industry a heads-up and an opportunity to use the preview to identify and fix the most dangerous zero-day vulnerabilities.

Since the FAANG companies have billions of users, it makes sense to start with them.

There are still going to be major issues for users of systems too old to get patches or updates, or for IT organizations who think Mythos is a replay of Y2K, where, compared to the warnings, not a lot happened.

The bottom line is someone with Mythos won’t need to be an experienced security expert to cause real problems. That’s kind of the point.


> replay of Y2K, where, compared to the warnings, not a lot happened

My dad was on one of the many Y2K teams that major tech companies had to make sure nothing went wrong. I feel like history may have undersold what could've happened if not for the considerable effort leading up to Jan 1, 2000.


> I'm hearing anecdotes from all over about devs pushing LLM-generated code changes into production without retaining any knowledge of what it is they're pushing. The changes compound, their understanding of the codebase diminishes, and so the actions become riskier.

I don’t think so.

An LLM can produce higher-quality documentation than most humans. If it's not already happening, when a new developer joins a team, they're going to have an LLM produce whatever documentation that developer needs, including why certain decisions were made.

It could also summarize years of email threads and code reviews that, let's face it, a new person wouldn’t be able to ingest anyway; it's not like a new developer gets to take a week off to get caught up on everything that happened before they got there. English not their first language? Well, the LLM can present the information in virtually any language required.

As the models continue to improve, they'll spot patterns in the code that a human wouldn’t be able to see.


> An LLM can produce higher-quality documentation than most humans.

"Can" is bearing some heavy weight there.

LLM-generated documentation has such a low level of information density that it’s useless. Yes, it writes nice sentences… but it contains so much noise that, currently, reading the code is better documentation than any LLM-generated documentation I’ve seen.

The same goes for LLM-generated articles. I close them after the second sentence because at least 90% of the content is useless filler.

Now compare that to this: https://slate.com/technology/2004/11/the-death-of-the-last-m...

I almost closed it when I read the first few sentences, because these kinds of articles are usually useless, time-wasting nonsense. But this was different. This was old. Most sentences contained something new. Something worthwhile. (Of course, people also write unnecessarily long articles… looking at you, Atlantic.)

You can throw out almost everything by volume from LLM-generated documentation without losing any information.

Currently, if I smell (and it’s very easy to smell) LLM-generated documentation or an LLM-generated article, I close it immediately, because it’s good for only one thing: wasting my time.


> LLM-generated documentation has such a low level of information density that it’s useless. Yes, it writes nice sentences… but it contains so much noise that, currently, reading the code is better documentation than any LLM-generated documentation I’ve seen.

I should clarify: the documentation I’m talking about is not generated using a generic LLM prompt, which would mostly suck.

With the proper context and additions (skills, plugins, MCPs), LLMs can produce high-quality documentation. You'd also have subagents doing QA on the documentation.

But it does require effort; it’s not magic.


It's not just about documentation.

If stuff really goes wrong, you need people who deeply understand the codebase so that they know where to look and how to diagnose the issue. It might be the case in the future that LLMs become so powerful they'll diagnose any issue (I doubt it), but until then, we need people in the loop.


> Also, because third party app developers largely align with Apple's philosophy, less and less 3rd party software even works on my computers anymore.

I think it's more about 3rd party app developers attempting to improve their products and stay relevant.

If Apple releases a new framework or API that would make a developer’s app better, but it requires macOS 14 or later, are they not supposed to incorporate it?

I've noticed lots of 3rd party developers keep older versions of their apps available for older macOS versions.


On both macOS and iOS it is straightforward to target older devices while using the newer SDKs, and to use those new frameworks conditionally based on the user's OS. Of course, Apple's tooling makes this harder and harder to do, the older the targeted OS is.

> My personal conclusion can however not end up with anything else than that the big hype around this model so far was primarily marketing.

I think the results say more about the great job the curl team has done maintaining their codebase.

This doesn’t mean Anthropic's Project Glasswing is a marketing stunt. Logically, it doesn’t make sense: when they announced Mythos Preview, Anthropic couldn’t meet customer demand; they didn’t have enough compute to go around. So they decide to hype an unreleased product to drive even more demand? All that would do is piss off their existing customers, who are already experiencing rationing and frequent outages.

Many forums were already flooded with "I cancelled Claude Code" as it was.

What's more, such a stunt would be incredibly irresponsible and unethical for such a young company with billions of dollars of other people’s money invested in it.

Because the Mozilla team used Mythos and found 271 vulnerabilities [1], does that mean they're in on the so-called "marketing stunt"?

Of course, if Anthropic had released Mythos to the public and bad actors had used it to hack a large number of banks, hospitals, government agencies, etc. in a matter of days, the HN crowd would be all over them for acting irresponsibly, criticizing them for not knowing better.

[1]: "Behind the Scenes Hardening Firefox with Claude Mythos Preview" — https://hacks.mozilla.org/2026/05/behind-the-scenes-hardenin...


Other AI tools have found 300 bugs and this new sentient T1000 only found one. Stenberg himself found 30 this year.

Mozilla is the current poster child, but 271 in such a large codebase with thousands of user options, most of them being TOCTOU, isn't that much. Sorry. TOCTOU can happen in any language when people are simply exhausted by the sheer volume of case explosions.

There is a third option: Anthropic could simply have reported the issue without mentioning the new model at all. But they didn't, since they want to sell to governments and the military, and the artificial scarcity just provides a veneer of exclusivity that their clients will appreciate.


Six figure salaries and stock options?

The new book from the lady who worked at Meta summarises it.

RSUs. Way simpler taxes.

That’s mostly because it’s an Electron app. It would be a fraction of that if it were a native app on macOS or Windows.

This is more likely referring to the VM disk image the feature allocates, which would have little to do with Electron.

This: the VM bundle that reappears after you delete it. They say it's for Cowork and Claude Code, but if you don't use Cowork or CC sandboxing, it has no value. And I'm always finding things to delete on Apple's anaemic 512 GB because I run out of space.

Well, Electron includes Chromium. Maybe that pulls in the 4 GB model as well… not sure if it's Chrome-only.

>> It was about x64 being unable to keep up - independent of Intel’s fab capabilities, which have improved lately.

> But the big reason x64 couldn't keep up was that Intel's fab capabilities were horrible. Intel got stuck and couldn't get smaller nodes out, and competing fabs caught up and left Intel in the dust.

It was also that Intel couldn’t execute reliably on their own roadmap, forcing Apple at the time to do extra engineering to incorporate Intel's chips. Apple sells a lot of laptops, and Intel never got their act together regarding mobile processors for MacBooks and MacBook Pros.

The 8-core Mac Pro used Intel's Xeon 5500 series; it drew 309 W at idle and needed 9 fans for cooling [1]. It sounded like a jet engine when it was running. And while it was an elegant design for the time, Apple shouldn’t have needed to jump through those hoops.

[1]: https://support.apple.com/en-us/102839


> It was also that Intel couldn’t execute reliably on their own roadmap,

Intel kept putting out delusional roadmaps that would assume their 10nm fab process was going to be ready for mass production in just another quarter or two. They spent years refusing to plan for 10nm to not be ready, so all their new architectures were unshippable and they had to resort to just using copy and paste on their 2015 CPU cores. Their fab fuck-up was hardly the only mistake they made in that era, but it was the biggest underlying cause of their problems.

