
The learned helplessness in the cloud is stupefying: so many outages and so much downtime that could have been avoided by a competent admin.


> Why hasn't the industry come up with an alternative?

We used to have that. Some companies still have the capability and know-how to build and run infrastructure that is reliable and distributed across many hosting providers, as they did before "cloud" became the norm, but it's a case of "use it or lose it".


GNOME 41.1: it just works, has everything I need, and is quite simple.


Am I the only one who is not having issues with Python and distributions in general?

I get all my dependencies from Debian and they all work; when I need something that is not yet packaged, I use pip.

What are people doing to get all these issues? I don't understand...


Once upon a time, I did "apt upgrade python-pip3" or something like that (was it just "python-pip"? Or maybe it was plain "apt upgrade"? It was a couple of years ago). Anyway, what I do remember is that it quite literally killed apt: invoking it with any command would dump a stack trace with an ImportError coming from pip. Apparently, apt uses the system-wide pip internally, so if you touch it, everything breaks? Don't know, don't much care: it was just a VM, so I simply rolled back to the previous snapshot and forgot about the details.

Edit: Ah, apparently the steps to reproduce are: do "apt install python-pip3"; do "apt install python3.8"; when pip3 complains that it's outdated, update it with the command it itself suggests.
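
For reference, that sequence maps to roughly the following commands (a sketch, not verified; on Debian/Ubuntu the pip package is normally called python3-pip, and the exact Python version shouldn't matter):

    # install pip and a newer Python from the distro
    sudo apt install python3-pip
    sudo apt install python3.8
    # pip then warns that it is outdated and suggests upgrading itself;
    # following that suggestion replaces the apt-managed copy system-wide
    sudo pip3 install --upgrade pip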


Looks like the "Works on my machine" syndrome.

What about working on a project with a team?

What about deploying the code on another machine?


I'm impressed by how a simple message led to all these assumptions, but I still don't understand where the connection is.

Yes, that is in the context of a giant and very successful team using this approach, believe it or not. :)


When deploying the developed application on some server, all the exact dependencies get installed there. The main reason for the existence of the server and its configuration is to run the application, so the server adapts to the needs of the application and gets the dependency versions preferred by the app, instead of the application trying to adapt to the server and trying to make do with the libraries already existing there.
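
A minimal sketch of that workflow, assuming a pip-based application (the package names and versions below are made up for illustration):

    # requirements.txt shipped with the application -- every version pinned
    flask==2.0.2
    requests==2.26.0

    # on the target server, install exactly those versions
    pip install -r requirements.txt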


Then I go with something like docker-compose
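
Something along these lines, for example (a hypothetical compose file; the service name, port, and build context are made up):

    # docker-compose.yml
    version: "3.8"
    services:
      app:
        build: .               # Dockerfile pins the interpreter and deps
        ports:
          - "8000:8000"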


Docker is a very heavy-handed approach to mitigate more fundamental issues with Python's package management, don't you think?


I'd say it's a heavy-handed approach to mitigating more fundamental issues with how Python packages are maintained. If everybody wants to pin different versions, then we're going to have to install different versions of everything, which is what npm does, and I consider that heavier.

Again, it's all a question of point of view: what we see as a package-manager problem, and what causes us to keep reinventing package managers, might actually be a problem with how we maintain our packages; my point of view being the latter. But I'm digressing.

When it comes to installing on "another machine", you don't know what Python they have, you don't know what libc they have, and so on. That is exactly what containers attempt to mitigate, so they seem like exactly the right tool for this problem.
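
Hence the appeal of a container image: it pins the interpreter and, through the base image, the libc underneath it. A minimal hypothetical Dockerfile:

    # the base image fixes both the Python version and the libc it links against
    FROM python:3.9-slim-bullseye
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    COPY . .
    CMD ["python", "main.py"]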


I think it's a fundamental problem with managing dependencies. On one hand, any given application usually knows which versions of its dependencies it actually supports, so it makes sense for the application to simply bundle those in: in the most extreme cases that's static linking/binary embedding, or (usually) putting the dependencies in subdirectories of the application's directory ― in cases where the application has a "directory it lives in" instead of being thinly spread all over the system (e.g. over /bin, /etc, /usr/bin, /usr/lib, etc.).

On the other hand, users/sysadmins sometimes want to force the application to use a different version of a dependency, so the application may provide for that by somehow referencing the dependency from the ambient environment: usually that's done by looking for the dependency in a well-known/hard-coded path, or getting that path from a well-known/hard-coded env var, or from a config file (which you also have to get from somewhere), or from some other injection/locator mechanism; there are thousands of those.
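
As a toy illustration of the env-var flavour of that (the names here are hypothetical, not from any particular project):

    import os
    import sys

    # The app bundles its dependencies in a "vendor" directory next to its own
    # code, but a sysadmin can point MYAPP_VENDOR_DIR at a different copy.
    default_vendor = os.path.join(os.path.dirname(os.path.abspath(__file__)), "vendor")
    vendor_dir = os.environ.get("MYAPP_VENDOR_DIR", default_vendor)

    # Put that directory first on the import path, so imports resolve to the
    # injected copy before anything installed system-wide.
    sys.path.insert(0, vendor_dir)
    print("dependencies will be loaded from:", vendor_dir)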

And all this stuff is bloody fractal: we have system-level packaging, then Python's own packaging on top of that, and then some particular Python application may decide to have a plugin-distribution system of sorts (I've seen that), and that too goes on top of all of it, not to mention all the stuff happening in parallel (Qt sublibraries, GNOME modules, the npm ecosystem)... well, you get the picture. It kind of reminds me of a "Turing tarpit", and I doubt that collapsing all of this into one single layer of system-level packaging, with nothing on top, is really practical or even possible.


> What are people doing to get all these issues?

Trying to get the script to run on OSes other than just Debian (or Linux).


Infrastructure is like roads: if you have to change the infrastructure every time a new car model comes out, then you are doing it wrong.

Global state is global state.

The enemy is your own mind.


If you are using your roads to do GPS pathfinding, then you are indeed doing it wrong. Do not do in infrastructure what can be done in code. You can't version-control your infrastructure, but you can version-control your code.

And with cloud instances and VMs providing abstractions that don't map 1:1 to the hardware they're running on, all your infrastructure becomes code to create reproducible deployments, or serverless execution.

You don't build your roads immediately before you start driving, destroy them after you're done, and build only as many miles of road as you need each time you go for a ride. Roads are not a great analogy for what we do with computers.


> Do not do in infrastructure what can be done in code.

While I agree, I just wanted to point out that Terraform allows defining infrastructure as code.
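
For example, a hypothetical single-VM definition (the provider, names, and AMI id are made up) that lives in version control like any other code:

    # main.tf
    provider "aws" {
      region = "eu-west-1"
    }

    resource "aws_instance" "web" {
      ami           = "ami-0123456789abcdef0"   # hypothetical AMI id
      instance_type = "t3.micro"

      tags = {
        Name = "web-1"
      }
    }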


> ... a regular dev will do.

Oh, the classic "There is one administrator" fallacy.

> kubernatis if you feel fancy

I hear this mindset about Kubernetes very often from people who can do some stuff with it, but don't get infrastructure at all.

EDIT: thinking that a regular dev can do what an admin does is simply naive.


Any disclaimers? Is this opinion coming from a dev or from an administrator?


I bet he was using macOS with Docker.app running at the same time...


All software has bugs, and it's easier to criticize than to help. Don't focus on the negativity of HN, and keep it up!

Thanks for your hard work, Greg!


If people don't report bugs, we don't know they are there as it "works for me!".

This isn't "negativity", this is people not understanding how the process works :)

And you're welcome!


These are some attitude goals for me. It's so easy to take things personally. Being able to take things constructively even when they might be personal is a great skill.


As every patch nowadays means that yet another symbol gets its GPL-only tag, no, I don't report bugs...


Seems that dracut upstream is hosted at kernel.org.

This is the regex used in the watch file for Debian to fetch new upstream tarballs: http://www.kernel.org/pub/linux/utils/boot/dracut/dracut-([\...
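
For context, a debian/watch file is just a format-version line plus a URL-and-regex pattern that uscan uses to spot new upstream tarballs. A hypothetical file in that style (not the actual dracut one) would look roughly like:

    version=4
    http://www.kernel.org/pub/linux/utils/boot/dracut/dracut-([\d.]+)\.tar\.(?:gz|xz|bz2)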


Simply having this surveillance tool make it out the door shows that there was little or no consideration for people's privacy during the planning and development process.

