
DOA


I'm surprised more people here don't pay for iCloud for at least the bottom tier storage (50GB). The free 5GB is almost worthless in 2025 for doing nightly backups. I don't back up with Apple Photos but even with "just" app data my nightly auto backups are like 10-15GB.


Even on iPhone there are much better and much cheaper solutions out there (not to mention cross-platform ones), and they do everything Photos.app does, only several times better, and then a bit more. Except, maybe, for Apple's trove of privacy claims.


Can I use these more and better alternatives to back up everything on all of my iOS devices?


Like what? I use Backblaze B2 to backup all my non-Apple stuff and that's $6/TB/mo. iCloud's 2TB plan is $10/mo, so actually cheaper per TB, but with Backblaze you only pay for what you use so it may be cheaper. But pricing is pretty comparable, and I can't even imagine what a PITA it would be to use B2 for Apple stuff, so certainly seems like a good value. Are you saying there are even cheaper solutions that also have good Apple integration?


I think it's great this is an "Apple only" thing. People willing to pay extra $$$ for a status symbol should stick together.


It's literally $0.99/month in the US for the cheapest iCloud+ plan. That's not much of a status symbol.

Source: https://support.apple.com/en-us/108047


Status symbol? Here's my take on it: iPhones are a dime a dozen here in my country now (3rd world), but iCloud and iMessage are not. iCloud+ is definitely not. People are used to WhatsApp here (just to take an example of messaging apps), and even if they ever stumble upon iMessage they immediately see what a decidedly inferior and opaque oddity that thing is.


Everyone I know uses Partiful for events these days


Maybe you should get out of your bubble…


Maybe you should get more invites


How many people do you think really use it?


I don't know but anecdotally living in a major city every social event I attend has a Partiful attached.


I don't see how this competes with Partiful. Feels like it'll be another half-baked, never-updated app from Apple. I wish they'd open up their APIs and integrations more. It feels silly that these apps get first-class access to Apple APIs, while better-made apps are forced to do weird workarounds or simply have no integrations.


I see this app as more like the Notes, iMessage or Freeform apps. There are tons of apps out there that do XYZ better, but Apple wants to ship a polished version that does 90% of everything the average user needs. It accomplishes three things (in my eyes):

1. It helps grow Apple's ecosystem by covering just enough ground to make third-party alternatives less necessary for most users.

2. It reduces one of the major "sticky" points that keep people in Facebook's own moat. Events and Marketplace are the two reasons I still use Facebook.

3. It encourages competition from the people who want to do that last 10% better than Apple's apps, raising the baseline and hopefully forcing innovation as well. Those apps lead to more App Store revenue, so, cynically, it's a win-win for Apple.


It’s based on the new GroupKit API, which sounds like something that would be available to other apps in the future. Otherwise it would just use some private API.


You're being disingenuous, or you're incompetent, if you think Apple isn't going to keep this API in their closed garden.


It’s not the first time that a new API is used in a first party app and then opened to all developers.


I thought because of the EU they have to make these private APIs public?


Partiful is destined for the trash, like Meetup, Evite, etc. Once they need to actually make money, they'll ruin the platform. 100% guaranteed.

It's a great platform for the moment; enjoy it while it lasts.


They're shipping the org chart: this app is someone's ticket to a promotion.


I agree. I tried to get it to work recently with Datadog, but there were so many hiccups. I ended up having to use Datadog's solution mostly. The documentation across everything is also kind of confusing.


imo Datadog is pretty hostile to OTel too. Ever since https://github.com/open-telemetry/opentelemetry-collector-co... was nearly killed by them I never felt like they fully supported the standard (perhaps for good reasons)

OTel is a bear though. I think the biggest advantage it gives you is the ability to move across tracing providers
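
For what it's worth, that portability lives in the instrumentation: code written against the OTel API names no vendor; only the exporter/provider wiring does. A minimal Go sketch (my own toy example; "shop" and "checkout" are placeholder names):

    package main

    import (
        "context"

        "go.opentelemetry.io/otel"
    )

    // handleCheckout is instrumented against the vendor-neutral OTel API only.
    // The tracer comes from whatever global TracerProvider is installed at
    // startup (no-op by default), so moving providers is exporter/config
    // wiring, not a rewrite of code like this.
    func handleCheckout(ctx context.Context) {
        ctx, span := otel.Tracer("shop").Start(ctx, "checkout")
        defer span.End()
        _ = ctx // ... business logic would go here ...
    }

    func main() {
        handleCheckout(context.Background())
    }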


> the ability to move across tracing providers

It's a nice dream. At Google Cloud Next last year, the vendors kind of came in two buckets: Datadog, and everyone trying to replace Datadog's outrageous bills.


Pretty sure Datadog is literally one of the top contributors to OTel.


I worry that vision is not going to become reality if the large observability vendors don't want to support the standard.


FWIW the "datadog doesn't like otel" thing is kind of old hat, and the story was a little more complicated at the time too.

Nowadays they're contributing more to the project directly and have built some support to embed the collector into their DD agent. Other vendors (splunk, dynatrace, new relic, grafana, honeycomb, sumo logic, etc.) contribute to the project a bunch and typically recommend using OTel to start instead of some custom stuff from before.


They support ingesting via otel (ie competing with other vendors for their customers) but won't support ingesting via their SDKs (they still try very hard to lock you in to their tooling).


Yeah, their agent will accept traces from the standard OTel SDK, but there was no way to change their SDK to send traces to anyone other than Datadog when I last checked, a couple(?) of years ago.

I mean, I understand why they did that, but it really removes one of the most compelling parts of OTel. We ended up doing the hard work of using the standard OTel libraries. I had to contribute a PR or two to get it all to work with our services, but I'm glad that's the route we went, because now we can switch vendors if needed (which is likely in the not too distant future in our case).


Part of the reason for that experience is also that Datadog is not OpenTelemetry-native, and all their docs and instructions encourage use of their own agents. Using Datadog with OTel is like reaching around your head to touch your nose.

You should try OTel-native observability platforms like SigNoz, Honeycomb, etc.; your life will be much simpler.

Disclaimer: I am one of the maintainers at SigNoz


The biggest barrier to setting up OTel for me is the development experience. Having a single open specification is fantastic, especially for portability, but the SDKs are almost overwhelmingly abstract and therefore difficult to intuit.

I used to really like Datadog for being a one-stop observability shop and even though the experience of integrating with it is still quite simple, I think product and pricing wise they've jumped the shark.

I'm much happier these days using a collection of small-time services and self-hosting other things, and the only part of that which isn't joyful is the boilerplate and not really understanding when and why you should, say, use gRPC over HTTP, and stuff like that.


You are generally correct, but I've used https://github.com/openobserve/openobserve on several projects as a dev-only complete OTel stack (dashboards included) and I liked it. There are better dashboards out there for sure, but for what I needed locally it did the job fantastically well. Zero complaints.

It's extremely easy to self-host, either on a dev machine, a VPS, or in any Docker-based PaaS.


And having to rebuild a Go binary based on this mess just to get a bugfixed collector is some horseshit: https://github.com/open-telemetry/opentelemetry-collector/tr... which is required (as best I can tell) because they use text/template in the deps: https://github.com/open-telemetry/opentelemetry-collector/bl...

Heaven help you if it's a contrib collector bugfix


I’ve thought of something like this for a while, I’m very interested in where this goes.

A highly async actor model is something I've wanted to explore, and combined with a highly multi-core architecture clocked very, very low, it seems like it could be power efficient too.

I was considering using Go + channels for this
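
For what it's worth, the goroutine-per-actor version of this is tiny. A toy sketch of mine (one actor owning its state behind a mailbox channel), not a framework:

    package main

    import "fmt"

    // An actor is just a goroutine owning a mailbox channel; all state is
    // confined to the goroutine, so no locks are needed.
    type msg struct {
        value int
        reply chan int
    }

    func counter(mailbox <-chan msg) {
        total := 0
        for m := range mailbox {
            total += m.value
            m.reply <- total
        }
    }

    func main() {
        mailbox := make(chan msg, 64) // buffered: senders rarely block
        go counter(mailbox)

        reply := make(chan int)
        for i := 1; i <= 3; i++ {
            mailbox <- msg{value: i, reply: reply}
            fmt.Println(<-reply) // prints 1, 3, 6
        }
    }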


The idea has kicked around in hardware for a number of years, such as: https://www.greenarraychips.com/home/about/index.php

I think the problem isn't that it's a "bad idea" in some intrinsic sense, but that you really have to have a problem that it fits like a glove. By the nature of the math, if you can only use 4 of your 128 cores 50% of the time, your performance just tanks no matter how fast you're going the other 50% of the time.

Contra the occasional "Everyone Else Is Stupid And We Just Need To Get Off Of von Neumann Architectures To Reach Nirvana" post, CPUs are shaped the way they are for a reason; being able to bring very highly concentrated power to bear on a specific problem is very flexible, especially when you can move the focus around very quickly as a CPU can. (Not instantaneously, but quickly, and this switching penalty is something that can be engineered around.) A lot of the rest of the problem space has been eaten by GPUs. This sort of "lots of low powered computers networked together" still fits in between them somewhat, but there's not a lot of space left anymore. They can communicate better in some ways than GPU cores can communicate with each other, but that is also a problem that can be engineered around.

If you squint really hard, it's possible that computers are sort of wandering in this direction, though. Being low power means it's also low-heat. Putting "efficiency cores" on to CPU dies is sort of, kind of starting down a road that could end up at the greenarray idea. Still, it's hard to imagine what even all of the Windows OS would do with 128 efficiency cores. Maybe if someone comes up with a brilliant innovation on current AI architectures that requires some sort of additional cross-talk between the neural layers that simply requires this sort of architecture to work you could see this pop up... which I suppose brings us back around to the original idea. But it's hard to imagine what that architecture could be, where the communication is vital on a nanosecond-by-nanosecond level and can't just be a separate phase of processing a neural net.


> By the nature of the math, if you can only use 4 of your 128 cores 50% of the time, your performance just tanks no matter how fast you're going the other 50% of the time.

I'm not sure I understand this point. If you're using a work-stealing threadpool servicing tasks in your actor model there's no reason you shouldn't get ~100% CPU utilisation provided you are driving the input hard enough (i.e. sampling often from your inputs).
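
For concreteness, the shape I have in mind (a sketch; Go's runtime multiplexes the goroutines rather than doing literal per-core work stealing, and the squaring is a stand-in for real work):

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func main() {
        tasks := make(chan int, 1024)
        var wg sync.WaitGroup

        // One worker per core; all workers pull from the same queue, so
        // cores stay busy as long as the queue is kept non-empty.
        for w := 0; w < runtime.NumCPU(); w++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for t := range tasks {
                    _ = t * t // stand-in for real work
                }
            }()
        }

        for i := 0; i < 1_000_000; i++ {
            tasks <- i
        }
        close(tasks)
        wg.Wait()
        fmt.Println("all tasks drained")
    }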


To work steal, you must have work to steal. If you always have work to steal, you have a CPU problem, not a CPU fabric problem. CPU fabrics are good for when you have some sort of task that is sort of parallel, but also somehow requires a lot of cross-talk between the tasks, preferably of a very regular and predictable nature, e.g., not randomly blasting messages of very irregular sizes like one might see in a web-based system, but a very regular "I'm going to need exactly 16KB per frame from each of my surrounding 4 CPUs every 25ms". You would think of using a GPU on a modern computer because you can use all the little CPUs in a GPU, but the GPU won't do well because those GPU CPUs can't communicate like that. GPUs obtain their power by forbidding communication within cells except through very stereotyped patterns.

If you have all that, and you have it all the time, you can win on these fabrics.

The problem is, this doesn't describe very many problems. There's a lot of problems that may sort of look like this, but have steps where the problem has to be unpacked and dispatched, or the information has to be rejoined, or just in general there's other parts of the process that are limited to a single CPU somehow, and then Amdahl's Law murders your performance advantage over conventional CPUs. If you can't keep these things firing on all cylinders basically all the time, you very quickly end up back in a regime where conventional CPUs are more appropriate. It's really hard to feed a hundred threads of anything in a rigidly consistent way, whereas "tasks more or less randomly pile up and we dispatch our CPUs to those tasks with a scheduler" is fairly easy, and very useful.
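
To put a number on "murders your performance" (my arithmetic, reading the earlier "50% of the time" example as a parallel fraction p = 0.5):

    S(n) = \frac{1}{(1 - p) + p/n}            % Amdahl's law, n cores

    S(128) = \frac{1}{0.5 + 0.5/128} \approx 1.98   % p = 0.5

So even with 128 cores, a workload that is serial half the wall-clock time tops out just under a 2x speedup.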


Give it a shot. It isn't much code.

If you want to look at more serious work the Spiking Neural Net community has made models which actually work and are power efficient.


I've implemented something similar to this using Go channels during the COVID lockdowns. You don't get an emergence of intelligence from throwing random spaghetti at a wall.

Ask me how I know.


I think at some point Go is going to have to either remain a very flawed language or make some very big breaking changes. Between union types being difficult to do properly and sum types being the subject of infinite arguments on GitHub, I get the feeling that it's just going to stay a flawed language that I grow annoyed with.

Literally the only two features I've ever wanted in Go are a way to express optional return values without pointers, and a way to write a set of enumerable values in a sane way. The inability to express either in Go is quite frankly ridiculous.

I use Go extensively. I've written numerous tools and deployed lots of things to production with it. Both of these problems are such a sore point for me. So many Go libraries have ridiculous workarounds with footguns due to these two missing features that it hurts to use most of them.
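
For readers who haven't hit this: the usual stand-ins today are the comma-ok pair for optionals and iota constants for enums, and neither is checked by the compiler. A small sketch (my toy names):

    package main

    import "fmt"

    // "Optional" today: either *string (nil = absent) or the comma-ok pair.
    // Nothing stops callers from ignoring the bool.
    func findUser(id int) (string, bool) {
        if id == 1 {
            return "alice", true
        }
        return "", false
    }

    // "Enum" today: a named int plus iota. Nothing stops Color(42), and
    // switches over Color are never checked for exhaustiveness.
    type Color int

    const (
        Red Color = iota
        Green
        Blue
    )

    func main() {
        if name, ok := findUser(1); ok {
            fmt.Println(name)
        }
        fmt.Println(Color(42)) // compiles fine; not a valid Color
    }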


I don't understand, why did Go ignore the past ~30 years of PL insights?


Because it wasn't designed by PL researchers. It was designed by systems programmers who were used to C and just wanted a "better C". It was made popular because that happened within Google, and they publicly gave it their backing so they wouldn't have to train new hires on their new language.


Also, its creator pulled a Molyneux and basically promised journalists everything they asked about it. Not only would it be the perfect C++ replacement for all projects at Google, it would do systems and embedded programming and dozens of other things as well.


For my first Go project (I think this was ~2014), I created a supervisord clone as a school project (the goroutine/channel part of the language was pretty much perfect for that).

After one week, I started calling Go "C+-". It felt like a superset of C with a lot of helpful tools that kneecapped you each time you wanted to do something it wasn't meant for, like using memcpy. Why feel so much like C and not give you its most powerful tools? (I was becoming pretty good with C memory management, pointer arithmetic, and gcc at the time, and not having those tools available to code/debug probably gave me a bad first impression.)

But it did its job pretty well in the end.


The public backing by Google absolutely propelled Go into the spotlight, but Dart, also released by Google, hasn’t achieved anywhere near the same success. Considering how long ago Go was released, if the language didn't have its own merit, it would have fizzled out by now and failed to sustain its momentum or foster such a strong community.


Dart was never marketed (to my knowledge) as a general-purpose programming language. Go was marketed as the best thing since sliced bread, and especially as a "systems language", which it definitely isn't. It was also gaining popularity on HN at the same time Rust was gaining its initial wave of popularity (~2016-2017, around when I started reading HN), so the two were compared and written about a lot in a way that Dart never had the chance to since it never had a narrative foil.


In reality it turned out to be a worse C in many ways, because it has a GC and fat runtime (ruling it out for a huge chunk of what you might use C for) and lacks any kind of metaprogramming capability (yes, C macros are bad, but they're useful/necessary a lot of the time).

Regardless of their intention, it turned out to be a competitor to Java, not C.


I don't understand, why did you think Go "ignored" them?

I think a really important insight from ~50 years of PLs is that recent language features (say: lifetimes, or dependent types) do not always correlate with practical adoption, security guarantees, low cognitive overhead, teachability, readability, fast compilation, mechanical sympathy, and other such goals.

You can totally argue that Go should have been designed differently, but it's much harsher and untrue to say the designers ignored the ideas you have in mind.


Go the language is simple at the expense of code written in Go being complex. That’s why it sucks. Every problem it doesn’t solve or solves poorly to eschew complexity is a problem every Go codebase now has its own bespoke pattern or hack or library to solve. Its low cognitive overhead is false economy.


You can't have "low cognitive overhead" and Go's "let's pollute every single line of code with error handling". Same for "let's have multiple versions of the same type everywhere because we don't have generics" (thankfully, fixed). And so on.
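
For anyone who hasn't written much Go, the "every single line" complaint looks like this in practice (a representative sketch, not code from any real project):

    package main

    import (
        "fmt"
        "os"
    )

    // Three calls, three near-identical checks: the pattern being described.
    func load(path string) ([]byte, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, fmt.Errorf("open: %w", err)
        }
        defer f.Close()

        info, err := f.Stat()
        if err != nil {
            return nil, fmt.Errorf("stat: %w", err)
        }

        buf := make([]byte, info.Size())
        if _, err := f.Read(buf); err != nil {
            return nil, fmt.Errorf("read: %w", err)
        }
        return buf, nil
    }

    func main() {
        if _, err := load("example.txt"); err != nil {
            fmt.Println(err)
        }
    }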


"Ignoring" is the charitable framing of this.

Sum types and product types are fundamental to a type system, the same way that addition and multiplication are fundamental to arithmetic.

You wouldn't design a language with only multiplication and not addition. You wouldn't design boolean operations with only `&&` and not `||`. You wouldn't design bitwise operations with only `&` and not `^`. You wouldn't design set operations with only `∩` and not `∪`.

The alternative to "ignore" here is "ignorant", so it seems a nicer intention that they were aware of the fundamentals and chose not to use them.


Thanks for your input! All the design proposal RFCs have shown they cause an issue with the Go Compatibility Promise whereby adding any enum value can cause a semver-MAJOR breaking change. This causes a greater rate of ecosystem churn which, in every design proposal so far, has been deemed a net negative outweighing the modelling benefits.


Yep, I'm aware. The question you asked was about the Go language designers ignoring the fundamentals, which is the root cause of this situation today, where fundamentals can't be bolted onto the language later.


Generics


Yes, generics. Have all the people complaining about the lack of generics in Go adopted it now that it supports generics? No, they found other things to complain about. So this should probably be a lesson to stop looking for the "next big thing" to implement (non-nullability? Union types?).


Yes, people will complain about other things. Why is it surprising to you?

Generics were a big thing because they impacted how a lot of code was written. It doesn't mean people don't want other things in a language that is hell-bent on ignoring any practical advances in languages of the past 50 years.

What you propose is to basically stop improving the language because people complain.


Because the designers of Go believed that most programmers at google are too stupid to understand and use them.


I see these kinds of hateful comments repeated on HN all the time, and these comments disgust me. Why phrase the criticism in such an incendiary way? What’s the goal here?


The exact quote

> The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt


Exactly my point! I don’t think anyone can reasonably look at that quote and think “He thinks the programmers are too stupid.”


> They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt

How would you phrase this? Thinking programmers are too stupid to understand languages with modern features is just the plain literal meaning of the quote, no?


It's not about intelligence, it's about not having the experience to appreciate and make good use of "brilliant" language features.


I don’t see how anyone could miss the plain literal meaning so badly. He clearly doesn’t think very highly of his colleagues.


He spent decades at Bell Labs. He was also, IIRC, in his seventies.

So a more charitable interpretation would be that most of his colleagues are in their 20s and just out of school with a CS degree. He doesn't think they can (productively) use the languages the cutting-edge PL researchers generate.


Like I said, he clearly doesn’t think very highly of them. A fresh college graduate could learn how to use sum types in a day or two.


Sure they could. Sum types aren't in Go, not because fresh grads can't learn how to use sum types, but because the designers couldn't see a clean way to add them to Go and have them fit with the other stuff they were putting in, which they thought was more important.

But trying to get new grads to use Haskell on the scale of a 10 million line code base, and have them not make a mess of it? That's the kind of thing he didn't trust the new grads with.


Just a clarification (that doesn't really change the point), he was in his early fifties when Go was first released


> They’re typically, fairly young, fresh out of school

These same lowly colleagues were also incapable of wiping their asses at one point in their lives, but over the years and skidmarks, they all mastered that beautiful language and now make heated-bidet money.


I don't know how you can read that quote and come away with anything other than "He thinks the programmers are too stupid"


I guess you’re saying “not capable of understanding a brilliant language” = “stupid”? This equivalence seems obviously wrong to me.

Even if you believe that anyone not capable of writing Rust or Haskell is stupid, it doesn’t mean that Rob Pike thinks that these people are stupid.


I think Rob Pike didn't mean "brilliant" as a complete compliment here. It was more an indictment of astronaut engineering in languages than of engineers. He is literally the designer of Go and likes to program in it.


Super true. One of the best tests of this is setting up a new laptop: some of the best experiences are when you just clone the codebase and everything works as it did before, no special magic. Go with vendored dependencies seems to be wonderful for this, but I've also had relatively decent experiences with newer Java projects.

My worst experiences universally have always been python projects. I don't think I've had a single time where I cloned a python project and had it just work.

Beyond just the code, I've had lots of mixed experiences with CI/CD being smooth. I unfortunately don't think I've been in a single shop where deployments or CI have been a good experience. They often feel very fragile, undocumented, and hard for newcomers.


I have a couple of decades of experience, have ridden a small startup through going public, and have worked intimately with 6 companies. I know about taking a product and ultra-scaling it, both technically and organizationally.

I will never recommend Python outside of a small team. It is organizational molasses. My current company has multiple teams striving to keep our Python tech stack serving our growing technical and organizational scale.

I have fixed this in two companies in no small part with migrating to Go. I am on my third.


The hoops that people jump through to solve this sometimes create something even more complex and not great, like forcing all development into a Docker container.

Ever try conda though? I’ve had moderate success with pipenv, but tbh I don’t love it as it hides too many things when installing a package fails.


Doing a Python development environment inside of Docker can get particularly obnoxious in the long run because there are approximately a zillion ways that a base image upgrade can break things by changing something about the system Python packages.

(And by the time it happens you might have a real mess on your hands because by that point the dev container's dockerfile has quite possibly grown into an undocumented spaghetti tangle of band-aids as a result of every dev on the team tweaking things in whatever way seemed to make the most sense at the time without a whole lot of regard for the end-to-end cohesiveness of the situation.)

The standard advice in the Python community is "never trust the system Python", but tools like pyenv that we have for protecting ourselves from the operating system aren't always straightforward to get working sensibly inside of a container. It seems like it should be easy, but I've seen people get it wrong far more often than I've seen them get it right.

A big part of the problem is that the Python community has developed an extremely severe case of TMTOWTDI when it comes to dependency management, packaging and deployment. It's led to a situation where, if you're just googling around for problem solutions in an ad-hoc manner, you're likely to end up with a horrible chimera of different philosophies of how to do Python devops, and they won't necessarily mesh well together.


TMTOWTDI is antithetical to Python as a language; it's so odd to me that its packaging and env management are the opposite.

Do you have a suggested solution? I'm a solo dev for now but will be adding more folks in the foreseeable future. Stuck with Python for some things (notebooks, model development).


Honestly I don't think there's any quick answer, and providing specific advice on Hacker News would be playing into exactly the problem I was pointing out in the last paragraph of my previous post.

My usual advice is to just bite the bullet and invest the time it takes to understand how Python package management and resolution really works under the hood, and how all the various devops approaches that are built on top of it work with it, so that you can make informed decisions and truly own your own stack.


I quite like docker for local development.

Docker and docker compose do make it incredibly easy to start everything that's required for local development and testing. Your service A needs B and C? Grab those images of B and C and run them all on your machine. The only limitation is the amount of RAM you have available locally.

And if you think a bit about your Dockerfiles (i.e. have the layers set up to take advantage of caching, have icecc+ccache mounts for C++ projects to distribute compilation and cache results, have mounts for apt or other package managers to cache the downloaded packages that you use), the local image rebuilds can be quite fast. Those are the little tricks that make your life with Docker less miserable.


Yea all of that is stuff I do not want to bother with. At all. “It’s nice if you take on maintenance burden of a bunch of additional moving parts” is the opposite of what I want. If you have to support a diverse set of languages / runtimes / environments and you deploy using containers, maybe it makes sense, but that seems like a use of the complexity budget I’d rather spend on… something else


In big organizations, it solves a lot of problems. Docker isn't perfect (hence the interest in and growth of Nix), but in day-to-day use it's fairly replicable.


> My worst experiences universally have always been python projects. I don't think I've had a single time where I cloned a python project and had it just work.

I'm curious whether you can spot a pattern in the platform (win/osx/linux) or the type of project, or if it's all over the place?

My own experience with Python boils down to creating a virtualenv, installing the deps, setting up configuration (or just copying it from somewhere), and creating a database, and then I'm off to the races. The only exception in recent memory was when a project had two dozen microservices, half of the codebase was on a private package repository, and we used Poetry. That combo required a somewhat more involved setup. That said, IIRC all the projects had fully pinned package versions (package==x.y.z).

In contrast, every time I touch something in JS land I get the same experience you described for Python. On one project we literally copied node_modules across machines (including servers), because a full reinstall took an unbounded amount of time. Anecdotally, the amount of churn in JS is much higher, and the maintenance load increases proportionally.

Usually it's something like:

- have a project in JS with some dependency X that's no longer on the bleeding edge, but works nice

- want to depend on a new package Y for some new feature

- the new package Y depends on a library Z that's higher than what the other dependency (X) can work with

- try to update the original dependency (X)

- wailing, gnashing of teeth, and considering the switch to agriculture instead

In my experience, if you're not closely tracking the bleeding edge, upgrading packages and updating your code accordingly, your JS developer experience will be abysmal.

Agree on the CD part, especially the fragility and the extra manual work when the deploy is some manually driven, (semi-)automated process.


You can get the same JS/node_modules experience with Python, just use pdm. ;)


I love Python but it always amazes me how hard it is for it to just... work.

So there is virtualenv, built in, but... if there is a venv directory, Python doesn't just use it.

Say you have app.py, and you run "python app.py": that doesn't run it with the venv Python. This leads to all sorts of problems with scripts that assume they're running under a venv. Which means you probably want to write a script that sources the venv just so you don't forget, but if you place it in the same directory you may forget you need to call the script, so you probably want to add an extra directory to hide all the Python code so you only see the shell script you need to run to properly set up the environment. Or just use an IDE.

Just "pip install". But pip isn't installed and ensurepip doesn't work? What do I even do then?

I recall downloading a project that required a library that wasn't available for the newest version of Python, so when you tried to install the requirements, pip wouldn't find it. I discovered this, naturally, because I updated my operating system, so the Python version changed, which means the project that used to work stopped working! What is the solution for installing multiple Python versions side by side? Hint: it's not an official project by the Python organization but something you can find on GitHub.


My recent workflow is to use a great program called mise. You have a config file in your directory and, hey presto, Python venvs work; they install themselves if they don't already exist, and it will install the exact version of Python you specify in your config. On top of that, it will set environment variables for you and unload them when you change directory. If you combine this with uv (just tell mise you want uv installed in the config), you can run uv pip sync and instantly reflect any changes in your requirements file directly into your venv very quickly.


For the past 4-5 years this is what has worked exceptionally well for me:

- pyenv for installing multiple versions of python on my machine

- direnv for managing environments (env variables, python version, and virtual environment)

- pip for installing dependencies (pinning versions and only referencing primary packages in requirements.txt - none of their dependencies)

This makes everything extremely easy to work with. When I cd into a project directory direnv loads everything necessary for that environment.

Each project directory has a .env and a .envrc file. The .envrc looks something like this:

    layout python ~/.pyenv/versions/3.11.0/bin/python3
    dotenv .env
Absolutely no headaches working on dozens of local python projects.


> Absolutely no headaches working on dozens of local python projects.

The other day, I moved over to a new container base image that's supposed to run Ansible inside of it. Almost immediately, when trying to manage a RHEL8 compatible host, I got this error: https://github.com/ansible/ansible/issues/82068

I've had issues not only with Python projects that I write, but also with software that relies on it. Then again, while there are both problems and ways around them, my experience has been similar with pretty much every tech stack out there: from Java apps that refuse to run on anything newer than JDK 8 (good luck updating dozens of Spring dependencies across approx. half a million lines of code in a codebase that's like a decade old), to hopelessly outdated PHP versions, to software that works on ancient Yarn versions but not on newer ones and doesn't even build correctly when you move over to Node with npm, or software that's stuck on old Gulp versions. Same for which Ruby and Rails versions will run, or .NET and ASP.NET codebases where the framework code ends up tightly coupled to the business logic; don't even get me started on front ends that rely on Angular (or AngularJS), Vue (2 to 3 migrations) or React.

I've had Debian updates break GRUB, AMD video drivers for the iGPU on the 200GE prevent it from booting, differences between Oracle JDK and OpenJDK have a 10x impact on performance, Nextcloud updates corrupt the install, the same happen with GitLab installs, and just a day ago a PostgreSQL instance refused to start up with "PANIC: could not locate a valid checkpoint record". PostgreSQL, of all software.

Sometimes churn feels unavoidable if you don't want code to rot and basically everything is brittle. All software kind of sucks, sometimes certain qualities just suck a bit more than the average. Containers and various version management tools make it suck a bit less, though!


Yeah, I wrote python-wool and set it as my local alias for python, so it does just that: look for a venv in the called program's path and use it.

https://github.com/fragmede/python-wool


> My worst experiences universally have always been python projects.

Do you mind sharing why you think this happens? Although I've never worked professionally with Python, this sentiment matches my experience as a user, so I don't have a lot of context on why this is the case.

Some siblings in this thread provided explanations that mostly boil down to 'bad tooling' in one form or another. But this doesn't feel right.

In my opinion, if it were just bad tooling, this problem would have been solved by now.


Every time I set up a JS project that's older than a few years, it's:

1. Extremely difficult to set up the codebase, because of dependency spaghetti.

2. Lots of breaking changes across different libraries, making maintenance not so easy.

The easiest projects to maintain were written in Go, Java, and Ruby.


You may want to consider using Nix, with nix flakes.


How much of that just hides complexity? I remember back in the day hiding a large amount of complexity behind Vagrant.

A new dev could get up and running quickly with "install vagrant; vagrant up", but that was hiding a lot of complexity behind a very leaky abstraction.


> My worst experiences universally have always been python projects. I don't think I've had a single time where I cloned a python project and had it just work.

I got a new Chromebook from work, and had VSCode+Docker running an existing Postgres+Django+etc dev environment in literally 15 minutes. I was shocked. Devcontainers are magic, and poor Python DX is a skill issue.


> Poor Python DX is a skill issue

Oh yes, the language whose ecosystem only hears about backwards compatibility in their own death marches? Not their problem. It's the developers, it's _their_ problem.

Not the standard library which _removes_ packages, breaking code which I recently cloned. See "imp".

And not the next python version, which throws a syntax error on bare excepts, breaking old code for absolutely zero benefit beyond pretending to be a linter.



I've switched over to devbox, which is a thin, easy-to-use wrapper around Nix on macOS. I haven't had to install brew yet. Brew was simple in theory, but I always ran into so many issues like this, or just various packages polluting everything. Fingers crossed I don't have to go back.

As someone who hates tinkering with this kinda stuff, I’m surprised how well it works so far


My issues with homebrew are:

1. I hate the concept of dependency management. I want every package to ship with all its dependencies inside. Just download a tarball, extract it, and that's about it.

2. homebrew often wants to install things I already have, like python.

3. No easy way to install old packages.

I don't understand why things are made harder than they should be.


The downside of the "bundle everything" approach (which is also used by Docker and its ilk) is that whenever one of those dependencies needs to be fixed or upgraded (for reliability or security reasons), you have to find every instance of it on the entire system, which soon becomes an extremely difficult task.

Shared libraries don't have this problem. Yes, they're separate packages, but having dependencies that can be upgraded separately simplifies upgrading that dependency.


This is assuming that just upgrading the shared library will work for everything. Too often, some things are broken by the upgrade, and since you weren't explicitly trying to update the thing that broke, you might not notice until a later date, at which point you may struggle to remember what was updated that it relies on.


I don't disagree that Hyrum's Law[1] is definitely a thing, but in practice with libraries that attempt SemVer or similar compatibility guarantees and understand that they'll be used in a shared library environment, breakage is not that common.

It also doesn't work for some ecosystems (like Go) where the practice is to prefer static linking.

1: https://www.hyrumslaw.com


> The downside of the "bundle everything" approach (which is also used by Docker and it's ilk), is that whenever one of those dependencies needs to be fixed or upgraded (for reliability or security reasons), you have to find every instance of it on the entire system, which soon becomes an extremely difficult task.

How does it become a difficult task? Just download things and replace them when I ask for an update. I have fast internet and a big SSD, so that's fine for me. 90% of the software I'm using on my Mac is installed via alternative ways and already bundles all its dependencies, so I'm already living with it.


There are also ways to abstract the files on disk such that it appears every module has its own copy of “foo.so” but they’re all the same bytes on disk. Using content hashes for example. I believe this is how pnpm works.

I don't buy the shared-libraries-solve-problems argument either. Lots of software is pinned to a specific version anyway, so just because some security update has come out for a shared lib doesn't mean it will work with all your other software.


Dependencies are totally out of control, in much of OSS, not just Homebrew. I probably have 5-6 leaf node packages that I actually installed with brew install, but brew list shows close to 100 little dependencies. Same with apt on my Ubuntu/Debian systems. I physically installed a handful of applications but my system has hundreds of packages installed. At least it’s not DLL Hell: the dependency management tools are great and uninstallation is clean, but wow what an explosion of quantity of dependencies!

I don’t use node but I understand it’s a mess there too.


There's a command to see your leaf dependencies:

https://docs.brew.sh/Manpage#leaves---installed-on-request--...


Yep I use that a lot. I like to make sure I only have installed what I need to survive, and uninstall everything else.


But it's nice to reuse code


Isn't it nice to reuse code so you don't have to write it twice?

It doesn't mean we can't run it twice.


You might be able to get a nice graph with this command: brew deps --installed --graph


>2. homebrew often wants to install things I already have, like python.

I think it's important to understand why this is the case. The Python you think you already have, out of the box in macOS, is the system Python. It's not the Python you should be using; it's the one that the Python-based tools your system depends on are using.

Brew installs other versions of Python, and gives you access to tools that allow you to maintain completely independent, different versions of Python, for a very good reason.

You simply should not be using the system Python for tools that are outside the purview of the system tools; doing so can lead to broken essential system tools.

So don't be so quick to resist this aspect of package management. It's also true of Linux, by the way: developers should be using their own Python installations, and not just glomming libraries into the system-provided Python tree. To do so is to live very dangerously as a systems operator as well as a developer.


> I want every package to ship with all dependencies inside.

Stopping where? python? c libraries? glibc? the kernel?

"all the dependencies" isn't what you think it is

> 2. homebrew often wants to install things I already have, like python.

oh yeah "python" like it's just A Thing You have. nothing has versions and of course every version can execute every code that's ever been written, past and future.

> I don't understand why things are made harder than they should be.

You're just willfully ignoring or not understanding the complexities.


> Stopping where?

Where the OS provides guarantees. If the OS guarantees that libc will be there, don't ship libc. If the OS guarantees that Python will be there, don't ship Python. If you do ship Python, hide it very well, so I'd never even know about it unless I go out of my way, and it'll never be shared by anything.

Those questions are easy, and every piece of commercial software solves them. They need to make those choices, and they do make them.


I'm sure you're familiar with versions. What happens when your software depends on a libc that has a function that was removed on the newer version, or added since the previous one? Now older or newer versions of libc don't work with your software, even though they're "there".


Right now I'm downloading a "sonoma_bottle" build through Homebrew, so they already build versions for every supported macOS version. I'm pretty sure the macOS libc will not remove functions in a minor upgrade, and having 2-3 builds for the currently supported OSes is not a big deal.


> 2. homebrew often wants to install things I already have, like python.

Well, macOS does ship with Python, but it's a true pain in the rear to deal with version conflicts of global packages.


My biggest gripe: you can only have one Homebrew user per machine, so on a multi-user, multi-admin Mac, only one of those users is allowed to brew install.


Yeah, their chown-the-system-folders process is a bit shit.

I have had a small amount of success installing it in my home directory when I had a locked-down machine for work, but it really wants to fight you.


MacPorts is a great alternative; it's also way older, but it has never been as successful as Homebrew.


Second recommendation for MacPorts

It predates Homebrew by a bit and is under Apple's http://www.macosforge.org umbrella of OSS projects, so it's as close to first-party support as you can get.


MacPorts rocks. No ports breaking when upgrading macOS, as can (or could; not sure if that's still an issue) happen with Brew.


Third one for MacPorts. It's more sane, at least for multi-user support (I create a user for each contract job).


It seems like Apple also works closely with Homebrew - https://news.ycombinator.com/item?id=41708046#41711168


MacPorts is great!

Another excellent alternative is pkgsrc: https://pkgsrc.smartos.org/install-on-macos/

I've used pkgsrc on SmartOS/illumos and NetBSD for many years, but this is my first time using it on macOS -- about 18 months now, and all experiences positive!


Homebrew is really the worst of all the options on macOS; I wonder how it became so popular in the first place.


In case it doesn't work out: I've never had any problems with MacPorts.

I will have a look at devbox. I always found Nix on macOS too much of a hassle; maybe the situation has gotten better?


I just switched over to devbox after reading this thread, I like it a lot so far, thanks.


What I have learned from Nix is that polluting the global (user or system) environment is going to spell disaster at some point. It's fixable in an obvious way, but why subject yourself to this nonsense in the first place? The only stuff I have in my global env is stuff I could actually use from almost anywhere, even non-code directories.

Nix isn't the only tool that solves this; there are acme and even per-compiler tools like nvm or cargo.


cars have made us much less fit though...


Weirdly, it scratches the same itch as painting or photography for me. But that goes away the more "corporate" and tedious the work is. Luckily my job gives me a lot of leeway. It can be, and largely is, a creative field, I'd say. There's room for traditional and formal engineering in it, but I don't think that's the majority.

