cjk's comments | Hacker News

I really wish Flynn[1] had taken hold before Kubernetes became so popular. Deploying and managing apps was a hell of a lot less complex, and very Heroku-like.

Sadly, development stopped about 3-4 years ago.

[1]: https://github.com/flynn/flynn


I buy that maintaining a CI setup is not a trivial thing, but if speed is the only other concern, why not just...spin up faster CI runners?

Decentralizing test running also makes it difficult or impossible to e.g. collect test analytics (like Buildkite can do natively) to help gain insight into flaky tests and how code changes impact test duration, and to visualize those things over time, especially if developers are using non-homogeneous hardware.

That, and if your tests are not being run in CI, then what enforces that tests run and pass before a deploy occurs? Losing that guarantee is not great IMO.


Call me a curmudgeon, but I am still not a fan of Tailwind.

I know a lot of newer engineers who believe it’s a perfectly acceptable alternative to learning CSS, to which I always respond: using Tailwind does not absolve you from having to learn CSS.

There are also a lot of comments in this thread claiming that the “right” way to use Tailwind is to create composite classes using its utility classes, but I have rarely seen anyone do this in practice.

Instead, the inline classes pollute the markup to the point of becoming extraordinarily difficult to read, and unless you make a concerted effort to e.g. bury as much of that garbage markup in reusable components as possible, it remains a persistent eyesore.


People don't do that because it's not the right way and is explicitly discouraged by the docs: "While it’s highly recommended that you create proper template partials for more complex components, you can use Tailwind’s @apply directive to extract repeated utility patterns to custom CSS classes when a template partial feels heavy-handed."

https://tailwindcss.com/docs/reusing-styles#extracting-class...


I had to run the linked articles through a translator, but I didn't see any mention of Audrey Tang or claims that this was in any way an assassination attempt.


The second article states that Tang Feng (Audrey) is the Digital Minister, and the guy was trying to get to the Digital Department, seemingly in relation to a law against online fraud that they recently passed. He probably would have been happy to shoot other people in the process, but he definitely would have wanted to shoot the leader and public face of that department especially.


D'oh. I was searching for "Audrey". I guess it was silly of me to assume they'd use her English name in the article. Thanks.


Ehhhhhh. I get the sentiment, but I think this really depends on what the "main" language is and how amenable it is to scripting.

Personally, I prefer writing shell scripts regardless of what the main language is in a given project. They're portable (more or less), and can e.g. detect missing dependencies and install them if necessary, which isn't possible with the main language if you're missing its compiler or interpreter.
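
As a rough sketch of the kind of bootstrap check I mean (the dependency "jq" and the package managers here are just placeholders):

    #!/bin/sh
    # Install a required tool if it's missing; "jq" is a stand-in dependency.
    if ! command -v jq >/dev/null 2>&1; then
      echo "jq not found, attempting to install..." >&2
      if command -v brew >/dev/null 2>&1; then
        brew install jq
      elif command -v apt-get >/dev/null 2>&1; then
        sudo apt-get install -y jq
      else
        echo "please install jq manually" >&2
        exit 1
      fi
    fi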


This is a comically bad take.


Yeah, IMO, `go run` is a really under-appreciated part of what makes Go productive and low-friction.

That, and the ability to cross-compile without installing a cross-toolchain for the target platform. Having spent an inordinate amount of time writing build systems and compiling/distributing cross-toolchains, this is a _huge_ deal.
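
For example (binary names below are arbitrary; GOOS/GOARCH are Go's standard environment variables):

    # Run straight from source, no explicit build step:
    go run .

    # Cross-compile from any host just by setting env vars,
    # no separate cross-toolchain needed (for pure-Go code):
    GOOS=linux GOARCH=arm64 go build -o myapp-linux-arm64 .
    GOOS=windows GOARCH=amd64 go build -o myapp-windows-amd64.exe .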


I only wish they’d realized it was time to start over before unleashing the cancer that is Electron upon the world.


I stopped reading when I saw the suggestion to avoid hiring people that are insecure. Some of the best software engineers I’ve ever hired lacked self-confidence and struggled with insecurity. In my experience, those folks tended to have extremely high standards for themselves, and did great work.

Conversely, some of the worst software engineers I’ve ever hired were extraordinarily adept at projecting a high level of confidence without appearing to be overconfident.


macOS has been doing memory compression for years. Basically, apps and background processes that are idle have their memory compressed, and then upon exiting idle (due to user interaction, a timer firing, etc.), the memory is decompressed. The compressed memory doesn’t touch swap.

It’s possible that Apple Silicon has some hardware acceleration for doing this in-place memory compression, but otherwise it’s pretty old news, which makes this marketing shtick even more frustrating. 8GB was borderline not enough even back when memory compression was added to macOS (maybe ~10 years ago?).


> macOS has been doing memory compression for years

So has Windows.

With multi-core CPUs, it’s easy to do so without impacting performance in most cases.


And Linux (although not enabled by default on some distros).


So any working set that fits into 16 GB but exceeds 8 GB eats into CPU performance and/or SSD longevity.

That's fine for a low-end laptop in 2023, if you're upfront about that tradeoff.

For a ~$1500 laptop, that's just marketing bs.


Sounds like zswap for Linux. The "swap" part of it is poorly named as it also doesn't involve a physical disk, at least not directly.


zswap was implemented after Apple showcased their own memory compression. It's a copy of what macOS did first.


OS X Mavericks was released in 2013, while Compcache was initially developed in 2008.


Does that make it "not count" or somehow less effective?

