Show HN: #!/usr/bin/env docker run (gist.github.com)
496 points by adtac on Jan 14, 2024 | 173 comments



For an actually intentional, non-cursed version of this, see the nix-shell shebang [0]:

> #! /usr/bin/env nix-shell > #! nix-shell -i python3 -p python3Packages.pillow python3Packages.ansicolor > > # scale image by 50% > import sys, PIL.Image, ansicolor > path = sys.argv[1] > image = PIL.Image.open(path) > factor = 0.5 > image = image.resize((round(image.width * factor), round(image.height * factor))) > path = path + ".s50.jpg" > image.save(path) > print(ansicolor.green(f"done {path}"))

Just `chmod +x` and you have an executable with all dependencies you specify!

[0] https://nixos.wiki/wiki/Nix-shell_shebang


There's a 256-byte limit for #! lines, so this shouldn't work at all.

EDIT: Now I see it's badly formatted. Either way, be careful with #! size limits.


Ah. I think two leading spaces fix this? I'll try:

  #! /usr/bin/env nix-shell
  #! nix-shell -i python3 -p python3Packages.pillow python3Packages.ansicolor
  
  # scale image by 50%
  import sys, PIL.Image, ansicolor
  path = sys.argv[1]
  image = PIL.Image.open(path)
  factor = 0.5
  image = image.resize((round(image.width * factor), round(image.height * factor)))
  path = path + ".s50.jpg"
  image.save(path)
  print(ansicolor.green(f"done {path}"))


I use nix-shell, and mostly I love it. But it’s important to be aware that the above means “Go get the latest(*) versions of python, pillow and ansicolor and run this code in an environment where they’re available.” It doesn’t do any version-pinning of your dependencies. That might be what you want, but maybe not: it’s frustrating when a script that worked yesterday won’t work today, or will only work after some big download.

My own rule of thumb is that nix-shell is great for quick one-offs and for sharing environments. For local tools and anything else I’m sharing with my future self, it’s usually better to write a nix expression and install it, which gives me access to Nix’s (excellent) rollback system, and lets me upgrade on my schedule, not upstream’s.

* - ‘Latest’ according to whatever Nix channel checkout currently applies. Which you can change, of course, but the point is it’s external to the script.


You can "pin" Nixpkgs with this style of invocation as well, see https://nixos.wiki/wiki/Nix-shell_shebang#Pinning_nixpkgs. But I agree that if you're writing a shell script (or small Python/Ruby scripts) that you'll be running often, it's better to package it (e.g. with writeShellScriptBin) and install to profile.


There are pip-run, pipx run, etc. for Python-specific use cases.
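
A hedged sketch of the pip-run flavour (assuming a coreutils env with -S, and that pip-run picks up the script's __requires__ list as its documentation describes):

    #!/usr/bin/env -S pip-run
    __requires__ = ["pillow"]
    import PIL
    print("Pillow", PIL.__version__)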


I'm pretty sure TFA is “intentional” too. Isn't this the whole point of shebang?


Totally, some practical use of that here as well:

https://dpc.pw/posts/nix-users-you-can-start-using-rust-scri...


I documented how to do it with mise-en-place: https://mise.jdx.dev/tips-and-tricks.html#shebang


I'm a huge fan of the Nix shebang, and we now have a variant of it for the new CLI. Feedback on it would be appreciated.


The readme compares it to the cross-architecture Cosmopolitan libc, but Docker is anything but cross-platform. On any other platform besides Linux it requires a Linux VM.

Linux containers are great (and I run Linux as my desktop OS), just pointing out the not-so-efficient nature of considering this cross-platform.


OCI image manifests can specify platforms and architectures. From the end user’s point of view it can be all the same invocation.

Docker natively supports Windows, and it is low lift to make native Windows images for many common programming environments.

Does anyone use it? No not really. It makes a lot of sense if you need Windows stack stuff that is superior to Linux, like DirectX, but maybe not so much for regular applications.

There is also macOS Containers, a project with a decent proof of concept: a containerd fork that runs macOS container images. In principle there is a shorter path of work for so-called host-process containers, but fully isolated containers do exist for macOS; it could work with e.g. Kubernetes, people want it, it makes sense, and it sort of does exist.

The difference between cross-platform and “cross-platform” as you’re talking about it is really having some absolutely gigantic company, like Amazon or Google, literally top 10 in the world, putting this stuff into a social media zeitgeist.


I really like what this script is doing - it's specifying system-level dependencies, a database schema, an interpreter, the code that runs on that interpreter, the data (on disk!) required by that code, and an invocation to execute the code, all in one script. That's amazing, and this is an excellent model for sharing a standalone application with non-trivial dependencies!

However, Docker is an OS-level virtualization. Docker natively supports Windows in the sense that there is a native app. That native app spins up Linux virtual machines, so the container is "native" to my Intel CPU with their virtualization extensions, but it is not native to Windows. I use it, which I say with no animus toward your original message.

Edit: I was ignorant of native Windows containers. I'm old and my brain still maps Docker to LXC, I guess. Apologies to OP - the DirectX line should have caught my attention.


No, Docker supports native Windows containers.

Docker Desktop aims to provide the same experience across Mac and Windows and as such those use Linux VM's, yes. However Docker most definitely supports Windows containers.


Sorry, that's right. You can probably guess that all of my Windows Docker use is with Linux images. This particular script wouldn't work as there is no node image for a native windows host (unless there is? again, I'm ignorant of native windows containers).


Windows Subsystem for Linux (WSL) can install an Ubuntu image for ready usage.


Also ignorant - I have WSL/DockerDesktop etc...

I run ubuntu desktop in a vbox VM.

If I run ubuntu desktop on docker, I have to RDP into it.

What type of container will WSL build? A desktop - or headless with CLI?

Finally - which is lighter-weight, Vbox VM, or a Docker container, or whatever WSL makes?

EDIT: NM - I understand the answer now.


> What type of container will WSL build?

Pretty sure WSL is installing the full OS as a guest, a la VirtualBox


chroot requires disabling SIP on MacOS, so any kind of "container" that shares the kernel but has a mostly isolated userspace is never going to happen on MacOS. If you want an isolated host environment on MacOS the bespoke approach is to use VZVirtualMachine. The whole point of containerization is to not require virtualization, so it kind of defeats the purpose.

I really think people who "want" containers on MacOS don't understand containers or what problem they solve, and if they think they need them should consider why they aren't already running their dev environment in Linux.


The main problem, I think, with Windows containers is that they are only really supported on Windows Server - which most developers don't have access to.

You can run them through Docker Desktop, but then why not just run the same containers you will be deploying on your server (which is most likely going to be Linux-based)?

I would love for MS to make containers the way to deploy programs to Windows, but that requires them to make the runtime part of the default install and to make it available on all the OSs.


Windows containers can be built on Windows 10 Pro and Windows 11 Pro. All you need is the hypervisor from Microsoft, installed under Settings -> Apps and Features -> Additional Windows features.


Windows Server 2022 containers work on Windows 11. Docker Desktop uses a shim for Windows containers. "dockerd", a single statically compiled binary for Windows, is all you need to run Windows containers with the familiar Docker commands; you could also use PowerShell.

They are supported all the same. IMO the main issue is that this feature is poorly marketed.


It's extremely poorly marketed: I looked up the MS documentation when I wrote that comment and it still only said Windows Server.

Still, unless it works on Win10 Home, it won't be the default way to install software for Windows - which sucks, since it's a better way than the current one.


Software delivered via the Windows Store, especially if packaged with MSIX, already uses containers.

Windows containers are supported in Windows Professional as well.

Maybe it is because I spend most of my time as a Windows developer, but this wasn't hard to find:

> One physical computer system running Windows 10 or 11 Professional or Enterprise with Anniversary Update (version 1607) or later.

https://learn.microsoft.com/en-us/virtualization/windowscont...


It does further down say that you need Windows Server even for development purposes.

What I missed was that it only applied to Windows Server images.

Also, the exception only seems to apply for development and testing purposes and, for some reason, only on a physical computer.

Regardless, I was clearly wrong: it is possible, just not well documented.


Isn't this still for Windows Server images only? Can I expect everything that would run on Win 10, 11 and/or Server to run?


Plenty of Windows shops use Windows containers; from my side alone you can count five projects delivered into production using Windows containers.

Many App deployments in Azure also use Windows containers.


Yes, "windows shops" are stupid, you got that right. Windows is a toy in the server space, always as been, always will be.


https://news.ycombinator.com/newsguidelines.html

> Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.


Plenty of big-boy money goes through such "toy" servers.


Is that because Microsoft is good at selling it or because it is actually a good piece of tech? We recently had to set up some Microsoft platinum partner test automation software, and the money we spent on SQL Server and Windows instances (on Azure of course) alone could've funded a fleet of Linux servers or a junior dev writing Playwright scripts all day.

(Not to mention it produces unactionable output by default, and if I love one thing, it's "this page didn't work one out of 100 times, must be infra problem" incidents)


You would have spent a similar amount of money on Red Hat licenses, or on anyone else worth using with big-boy support contracts.

Linux is only free when our time isn't worth money.

Playwright is developed by Microsoft, by the way.


Systems engineer here, I haven't worked at a company that pays for Linux support in 12 years and this was at scale (10K+ servers). You don't need IBM or Canonical to get patches or a heads up about major vulns. Several ways to go with this but I get up to date patches for free with Debian. And I can count on my hand the number of times that any org I've been part of needed a kernel engineer or access to one. Support contracts for OS AFAIK aren't worth the money any more unless you really don't have anyone who can do system support.


That is why so many freetards are angry about the CentOS acquisition.


Freetards? Do you make money from an OS, compiler or similar infrastructure? If not, then your employer would in my opinion be making a mistake to send $$$ to a vendor of same unless they're in a very special niche.

One of the only enduring lessons from IT history is that there's always going to come a time to move on from some technology or vendor. And IBM isn't doing wrong by trying to capture some cash, but it's very late in this game and it's a losing battle.

I'm guessing that it's going to be the 'legacy' cloud vendors' time soon. The markup is way out of whack.


It somehow never actually happens this way, but I would happily spend twice as much for any open source product simply because I get more control, predictability, and utility out of it.

If you want to pay rather a lot more than merely twice as much you can get source from MS too, and still not get all the utility, because it comes with NDAs and no ocean of other user hackers who want the same obvious things you do.

Spending time on an open tool is an investment that you do because it pays off. Spending your own time, or paying a developer (hired in house or consultant), or paying license fees for a closed product are all just things you spend to get the result.

It has nothing to do with your time being worthless. If your own time is too super valuable to spend directly building, then the choice is not "pay MS to do it or do it myself", it's pay an employee to do it one way or pay an employee to do it another way.

You pay an employee 100k and MS 100k, or you pay 2 employees. You get 10x more value out of two humans producing work that you 100% own and get to have every important detail exactly how you want it, and then it works for as long as you want it. Even with the churn from security updates and popular fads, anything you invested in building, you still get to use forever if you want. No serial number ever expires, no activation ever blocks your ability to make backups and hot spares and parallel extra capacity. And those humans actively solve new weird problems in a way no piece of software or software licence ever can.

The reason not to pay MS is not because it costs money, it's because you get shit for it.


A nice utopian world that I never experienced.


I do. Shrug.


> Linux is only free when our time isn't worth money.

That's funny, because having rotated through all 3 major cloud providers in the past 5 years (at different places), Azure support is the most time-wasting and not worth it even if it were free. I'd much prefer to waste my time reading documentation that makes sense, but Azure doesn't have that either.

Azure doesn't happen to be an outlier in Microsoft products, right?

> Playwright is developed by Microsoft, by the way.

And I'm happy the people there get to make things that work outside the eldritch horror that is Windows Server.


I have found the first person that doesn't have issues with GCP, former employee with internal contacts?


I had issues with GCP too, including bricked projects, but I appreciate more about it than Azure, the permission model in particular is great if you want to limit the damage fully independent teams (with... an inventive spirit) can do while making resource sharing much easier across boundaries than AWS accounts or Azure subscriptions. And even though the first answer to a ticket is always useless, at least you get something useful within 24 hours, whereas Azure support feels like I'm talking to ChatGPT sometimes, inventing issues in services I didn't request support for that sound similar (VPN Gateway turned into API Gateway) and then going silent for a week.


> Linux is only free when our time isn't worth money.

Oh, man, not this shit.

Linux saves time. Windows servers are an endless time sink that cost more in hardware and have added license costs. And license costs are also mostly the time you spend managing your licenses; the actual money you send to Microsoft is peanuts.

Windows only costs its price if your time is worthless.


This. When it comes to serious use I wouldn't trust anybody who doesn't bother to invest a lot of time to actually learn the system anyway. IMO Windows servers are not easier to learn than Linux servers, quite the contrary actually, but I'm a bit biased since my personal computer runs Linux too.

In my experience the choice is usually made by the familiarity rather than ideology. Windows server can definitely be a better choice if they already successfully use it. And that's the case even if a Linux based server would be actually better for their use case based on some arbitrary metrics. Some are so ideologically challenged that they use both and see no problem.


Linux saves time is an oxymoron.


Why is windows server a toy?


I guess because it gets all the games, to the point that Valve needs to create a Linux distribution that pretends to be Windows to be able to sell any meaningful games to GNU/Linux folks; not even Android/NDK games get ported over.


Yeah it's insane how downplayed windows is in some tech circles. I personally don't even like it, but that's mainly because of a skill issue on my part (I don't know how to use it well). Yet it is very clearly not a toy, and in some ways it's way more powerful than Linux due to things like active directory and everything. I wouldn't run a random server on it but that's completely irrelevant to it being a toy or not.

In reality as you said tons of effort are spent exactly trying to make Linux good at the things windows has been good at for 30 years. But there's that weird dissonance that makes people think that windows is inferior to Linux on a technical level just because it's inferior on a software license/freedom level. The two are completely unrelated. The funniest thing is people who argue that the Linux kernel is just the future compared to the "antiquated" NT kernel (lol, lmao)


Is directx superior to vulkan? Serious question from a graphics noob (who dislikes windows development)


DirectX is more than just Vulkan. It does sound, input, etc...

Vulkan is like Direct3D 12, a low level 3D API. Between the two, most seem to consider Vulkan the better option. However, Vulkan has the reputation of being verbose and very much not noob friendly. It is mostly geared towards advanced engine developers who want full control to make the most of the hardware.

Besides 3D, the rest of the multimedia APIs are a bit of a mess, it seems, on Windows and elsewhere. I haven't looked at it for many years though.


DirectX the API compared to Vulkan: whatever.

DirectX as a whole product: yes.

For the two middlewares Unity and Unreal, on real applications, DirectX 11 will have better latency (lower CPU time mostly), DirectX 12 performance will be higher throughput (greater FPS), but neither will be by very much. Like a single application on ordinary hardware, it won’t matter. But for the thing I measure, occupancy, you can get something like 3x as much efficiency with DirectX on Windows compared to the same application on Vulkan on Linux.


I explored the idea of using the scratch image with a cosmopolitan binary to get something more cross-architecture, but you need a shell to run those binaries. I'd love to see cross architecture Docker images, if someone else can figure out a trick to make it work.


Just use redbean and provide an init Lua file. Or use a http://cosmo.zip provided interpreter (like Python, maybe even bash).

Each ape file is also a valid zip file. Add your dependencies as if the ape was an archive:

    zip -ur myape.com mydependency.anything
Also add a `.args` file:

    zip -ur myape.com .args
For this .args file, put one argument per line. These arguments are applied on start. You can use `/zip/mydependency.anything` to read from files, but if you have an executable dependency you'll need to extract it first (I use the host shell or host PowerShell for this).

You can do this with any software you can compile with cosmocc, by adding a call to LoadZipArgs[1] in the main function.

It's easy to get started, your ideas will branch out as soon as you start playing with it.

[1]: https://github.com/jart/cosmopolitan/blob/master/tool/args/a...
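
For example, bundling a default script into a cosmo-built interpreter might look roughly like this (hypothetical file names; it assumes the interpreter calls LoadZipArgs as described above):

    zip -ur python.com main.py          # embed the script into the ape's zip section
    printf '%s\n' /zip/main.py > .args  # one argument per line
    zip -ur python.com .args            # now ./python.com runs /zip/main.py by default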


I think parent was pointing out that you need Linux to run Docker (since it doesn't run natively on any other OS) which is different from what Cosmopolitan provides.

Edit: Ok, apparently it natively supports Windows for Windows containers and for everything else there's a Hyper-V integration. Not sure if you can write a portable Dockerfile script like that though.


You surely can, I have Dockerfiles that do it.

It is a matter of having build parameters for base images and using programming languages that are mostly OS agnostic.


Makes me wonder if containerization is even possible without a VM for non-Linux machines.



I do believe so but only for the host OS. Eg Mac containers work for Mac etc


Doesn’t Cosmopolitan rely on QEMU to emulate an x86_64 CPU when running on any other platform?


No, it doesn't. You're probably thinking of binfmt https://docs.kernel.org/admin-guide/binfmt-misc.html.


Was QEMU replaced with another emulator or some kind of translation layer to run on a non-x86_64 CPU? I’m going by https://justine.lol/ape.html:

> It'll be nice to know that any normal PC program we write will "just work" on Raspberry Pi and Apple ARM. All we have to do embed an ARM build of the emulator above within our x86 executables, and have them morph and re-exec appropriately, similar to how Cosmopolitan is already doing doing with qemu-x86_64, except that this wouldn't need to be installed beforehand.


No


Not to mention the non-standard -S flag to env which makes the shebang work.


Not on Windows when using Windows containers.


Doesn’t windows use WSL?


Docker Desktop runs either with Hyper-V or with WSL. https://docs.docker.com/desktop/install/windows-install/


Not for windows containers. But no one really uses those anyways.


We use them.

Many Windows products, e.g. Sitecore, only support Windows containers.

Microsoft Store software relies on Windows containers infrastructure.

Windows containers make use of Windows jobs APIs.


WSL is a Linux VM


WSL1 is an API shim to get Linux binaries running in windows natively. It is more akin to what wine does on Linux.


But WSL2 abandoned that and is a Linux VM.


that's not necessarily true


The -S / --split-string option[1] of /usr/bin/env is a relatively recent addition to GNU Coreutils. It's available starting from GNU Coreutils 8.30[2], released on 2018-07-01.

Beware of portability: it relies on a non-standard behavior from some operating systems. It only works on OSs that treat all the text after the first space as argument(s) to the shebanged executable; rather than just treating the whole string as an executable path (that can happen to contain spaces).

Fortunately this non-standard behavior is more the norm than the exception: it works at least on modern GNU/Linux, BSDs, and macOS.

[1] https://www.gnu.org/software/coreutils/manual/html_node/env-...

[2] https://github.com/coreutils/coreutils/blob/b09dc6306e7affaf...


There are some ways of doing this in a more portable way on Unix-like systems [0]

[0] https://unix.stackexchange.com/questions/399690/multiple-arg...
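
One classic workaround from that thread is an sh/Python polyglot that re-execs itself, so no multi-argument shebang support is needed (a sketch; the -B flag is just an example of an option you might want to pass):

    #!/bin/sh
    "exec" "python3" "-B" "$0" "$@"
    # sh runs the exec line above; to Python it is a harmless string expression
    print("hello from Python")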


Not to be negative, but is this warning of non-standardness for like, AT&T unix or something? Beyond Linux, macos, and BSDs, I'm assuming you're running an ancient mainframe or something and are not worried about trying a cool docker shebang hack (probably because docker doesn't exist on your platform anyway)


This is genius and I love how this is a whole app meta-seed in a single file! I think I have docker trauma, why did we reach a point where we need computers inside our computers just for normal stuff to work?

Container packing is cool, but is it just a security thing preventing us from using our normal hardware? Or versioning (NixOS)? Is wasm capable of doing this and is wasm still alive? I just feel like needing to run tests inception style inside and outside docker gets complicated and annoying and always try to just use Linux directly these days.


There are many reasons, but the simple idea of "containing" is a big part of it. You could run several versions of Python, database systems, etc. on a single machine, but it rapidly becomes confusing in most cases with dependency clashes, losing track of where everything is, etc. Anyone who worked on multiple projects ~20 years ago and didn't use VMs might remember how it felt.

It's like if you have a workshop and you diligently organize all of the different parts into different trays in different units so it's easier to do all the types of work you need to do. You could just have a giant box in the corner where you chuck absolutely everything.. far less complex, but it'd make your day to day work a nightmare.


I'd argue that running all of those things inside docker containers also rapidly becomes confusing. The confusion is inherent to the complexity of the things you are running.

I don't hate docker, but I find that it's just not that useful until you reach a certain scale. I stopped using it for personal projects and am much happier for it.


What do you use for your personal projects now out of curiosity?


Nothing fancy. systemd for process management, Python for automation, and a whole bunch of shell scripts. IOW the way that I did things before Docker came along.

I still believe in containers in a multi-developer environment, but in my experience the disadvantages outweigh the advantages when your only coworker is future-you.


The computer I would run the container on - I just run my software on that. If it's complex to configure, it'll still be complex to configure in Docker. But then I also need to configure Docker.


Executable files (and OS processes) used to be that. Then came shared libraries, configuration files, multi-executable applications, and whatnot. It would have been nicer to extend the executable formats and OS process sandboxing, IMO.

Next thing we’ll define a new format and runtime to package and run a collection of docker images with associated configuration.


I chuckled reading this, as we're trying to do exactly what you described at my current startup, https://github.com/kurtosis-tech/kurtosis ! More seriously, I've been mulling over the idea that humanity is going through a continual process of modularization and unification:

First came machines, to perform simple "computation" tasks.

Then came the computer with instructions to represent the generalized notion of computational work.

Then we wanted a way to DRY instructions so we got functions.

Then we wanted a way to package collections of functions so we got libraries.

Then we wanted a way to manage collections of libraries so we got package managers.

Then we wanted a way to distribute collections of packages so we got containers.

Now we're in a world where instantiating and configuring a collection of containers is error-prone, burdensome, and rarely portable.

Each level adds something (containers have the benefit of being language-agnostic), but the price is complexity.


> we're trying to do exactly what you described

I was secretly assuming that something like that is already being done. ;)

Recursion is fine and useful. What’s detrimental is if each layer defines conceptually same things in slightly different ways and with different terminology. Make a recursive format (like a file system) and be done with it (and/or extend it so that all levels can profit from the extension).


Yep, I agree. I've been chatting with a friend about Nix (I'm a novice) and it sounds like it has the capacity to treat many things as Just Files connected in a dependency web, which is cool.


Docker containers (in practice) can be considered to be an extreme form of distributing static binaries (snaps, flat-packs, nix, fat go binaries, pyinstaller, etc).

It is less about security and more about having several applications on the same hardware without full blown VMs.


I know people who use Nix for this... May or may not be another level of confusing though. Also, I heard it's a bad choice for JS ecosystem.


It’s a great choice for the JS ecosystem for the same reason it’s a terrible choice for the JS ecosystem: JS dependencies are a lot, and they sometimes want to do strange things at install-time that Nix frowns upon. There’s definitely an upfront cost, and a maintenance burden as well. But the flexibility and the control over what code you’re actually running could still be worth it.


The world should frown on the strange things JS and other languages do at install time and not accept it.


The single file aspect is cool for distribution but of course not for editing.. a similar thing that is still maniacal/clever but somewhat easier to scale could use i.e. makeself


Many people share your concern. Hence users' dislike of snap.


Well snap has other problems too. For me a big one is that it is pushed heavily by a single company which may or may not still exist in 10 years. Or which might decide to capitalize on its investment once enough people are locked into its ecosystem.


Cute trick, but it's not actually what the title claims.

Since this is actually env calling bash first, not docker, this should just be a Bash script. You can still feed the Dockerfile to docker build via STDIN. But you'd gain the ability to shellcheck the Bash, the code would be easier to read, write, maintain, add comments to, etc. You could keep the filename the same, run it the same way, etc. The way they've done it here is just unnecessarily difficult.


> You can still feed the Dockerfile to docker build via STDIN.

but you'd then have to work out how to "filter out" the bash commands inside this bash script to make it a valid docker file.

Unless of course, you entirely store the docker file contents inside heredocs. That works fine, but it's not as "cool" as "executing" dockerfiles as a script.
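
A minimal sketch of that heredoc approach (the image contents are just an example):

    #!/usr/bin/env bash
    set -euo pipefail
    # build the embedded Dockerfile from stdin, then run the resulting image
    image=$(docker build -q - <<'EOF'
    FROM alpine:3.19
    CMD ["echo", "hello from the embedded Dockerfile"]
    EOF
    )
    exec docker run --rm "$image" "$@"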


You can say it is wrong without being insufferably condescending


Something like this should definitely exist, just not with Docker!

Podman is better but it's also a bit coupled to a distro - https://news.ycombinator.com/item?id=38981844

The problem is the Linux kernel container primitives are a bit of a mess

bubblewrap is a lot closer, although last I heard it's not in some distros for security reasons - https://news.ycombinator.com/item?id=30823164


I think the kernel primitives are fine; unshare and namespaces make perfect sense to me. Docker, podman, buildah, buildx, whatever... all these things with cutesy names and fatal flaws seem like the mess to me.


The feature IS the fatal flaw. After unsharing a namespace you still want your network to "just work". The "quality" of the solution is directly proportional to how bad the security is.

The scale goes from non-virtualized QEMU all the way to Docker, which will even screw with your iptables rules for your convenience, with the HN crowd falling in the middle as the Goldilocks we all are.


another docker post filled with podman propaganda. despite it all, still no one uses it.


I haven't used docker since ~2017. My clusters run on cri-o, builds are with kaniko, and some of my systems just call runc with OCI container definitions. Docker (especially its API) is a giant mess, and the sooner it's replaced by smaller tools and clear standards the better.
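
For the curious, calling runc directly looks roughly like this (a sketch; you still have to populate rootfs yourself, e.g. from an exported image):

    # create an OCI bundle and run it with runc, no Docker daemon involved
    mkdir -p bundle/rootfs                  # rootfs must be filled with a root filesystem
    runc spec --bundle bundle               # writes a default config.json into the bundle
    sudo runc run --bundle bundle mycontainer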


Can attest from $Job that there are podman users. Podman is awesome for some of our RHEL-based systems and we will continue to use it. You are just gonna hear about it a lot, because it's just a runtime.


https://news.ycombinator.com/newsguidelines.html

> Please don't fulminate. Please don't sneer, including at the rest of the community.


Still a useful alternative to docker and can be packaged in distros


I mean I know several people who run their infra with podman. But it's for personal things, I don't know if there is any level of usage at the enterprise level.


DISCLAIMER: I work for Red Hat. I'm formerly an OpenShift Consultant and SA.

podman has underpinned our Kubernetes distribution, OpenShift, since 4.0 was released in 2019. OpenShift is a $1B+ USD business for us (https://www.newsobserver.com/news/business/article271678707....). You can search and see a sample of who uses it for Enterprise level business.


OpenShift Container Platform uses CRI-O as the container engine and runC or crun as the container runtime. Podman is only directly used for the openshift-installer, but as a container management tool uses the same underlying runtimes. This means they share the same long tenure in production when it comes to using runc. Is that what you meant?

https://docs.openshift.com/container-platform/4.14/nodes/con....

https://docs.podman.io/en/latest/#:~:text=Podman%20relies%20....

The answer is a little bit more nuanced, as the defaults differ. Podman uses crun by default: https://podman.io/docs/installation#:~:text=crun%20%2F%20run.... For OpenShift the use of crun is available as a Technology Preview: https://www.redhat.com/en/blog/whats-new-in-red-hat-openshif... since 4.12. The default for 4.14 is still runC.

Notwithstanding, Podman is gaining a lot of momentum, especially now with Podman Desktop. Disclaimer: I work for Red Hat on the Podman Machine and OpenShift Local/CRC teams to provide integration with Podman Desktop aiming at developer usecases


Not enterprise level, but I made a choice of deploying podman at $job rather than docker for a few reasons.


I'll chime in to say that I have started deploying podman over Docker where it's frictionless at $job as well. I'd say half (or more) of my new container deploys are podman.

At home I use only podman because my tinkering doesn't affect anyone but me.


Podman Desktop sees a lot of increased use in the last few months and the PM has spoken with many of our 'customers' about the future and how they are using podman.

Disclaimer: working at Red Hat as a (tech) manager of the OpenShift Local team, involved on the virtualization targets for Podman Machine and the integration of some of our extensions.


I use it and love it. YMMV.


No love for Apptainer/Singularity?


What's that? What's good about it? :)


This is cool hacking, but I really don't get this obsession with "single file". Directories exist and can contain self-contained applications without the need to pack everything into some ugly script. They aren't the slightest bit more difficult to ship around to different machines.


I think maybe it helps to think from the point of view of a developer for whom these single-file things are tools in their workshop.

- Easier to grep a collection of single files

- Easier to see what you've got in your collection in a directory listing (whether via a shell or in a web UI such as GitHib)

- Easier to view the contents quickly (`cat`)

- General philosophy that flat is better than nested


You can create this type of thing (a self-contained single-file project) for any language or infrastructure, with or without a clever shebang. All you need are heredocs.

For example, here's the same app but packaged as a regular bash script:

https://gist.github.com/lwneal/a24ba363d9cc9f7a02282c3621afa...


Of course! Bash script is Turing complete so it should be possible to implement everything in it :)

The only upside to having an executable Dockerfile is that it's still a valid Dockerfile that you can use with docker build, docker-compose, etc. in addition to being able to execute it.


Yes I love this approach, I use this exact format as a way to get ChatGPT to work with an entire multi file programming project in a single idempotent bootstrapping script. Then ask for changes to be given as the entire file again


Agree, nesting files with

    cat >Dockerfile <<'EOF'
and having a basic bash script seems way nicer than putting all the shell logic on the #! line.


Reminds me of the "self consuming script pattern". Seen in this super user answer.

https://superuser.com/a/440059

It embeds an awk (or any interpreter) script, and uses sed to cut out the script between tags in $0.

I agree with other comments that this kind of thing can get messy, but sometimes it makes a lot of sense and lets you share a single file.
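
A minimal sketch of the pattern (the marker name and the embedded awk program are arbitrary):

    #!/bin/sh
    # extract the awk program stored after the marker in this very file and run it;
    # exec means the shell never reads past that point, so the payload is never run as sh
    prog=$(sed -n '/^#AWK#$/,$p' "$0" | sed '1d')
    exec awk "$prog" "$@"
    #AWK#
    { print FILENAME ":" FNR ": " $0 }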


The upgrade file for a product I used to work on was (and perhaps still is) a .tar.gz file with a shell script prepended, to make a self-extracting/self-executing archive. The archive wasn't even base64 encoded or anything; just binary data with some text in front that can find the beginning of the binary.
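
The classic stub for that kind of archive looks roughly like this; you build the final file with something like `cat stub.sh payload.tar.gz > installer.run && chmod +x installer.run` (names are made up):

    #!/bin/sh
    # stub.sh: everything after the __ARCHIVE__ marker is the raw .tar.gz payload
    start=$(awk '/^__ARCHIVE__$/ { print NR + 1; exit }' "$0")
    tail -n +"$start" "$0" | tar xz
    exit 0
    __ARCHIVE__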


For those wanting to go down the self-extracting executable route, I recommend arx (it generates that sort of tarball-prepended-with-shell-script you describe) https://github.com/solidsnack/arx

The `nix bundle` command can generate an arx file, which includes all of an application's dependencies. As an example, we started getting issues with an EC2 server whose image was an accumulation of changes over several years; whilst we worked on migrating to a saner setup (containers defined using Nix), as a stop-gap we got the server working again by using `nix bundle` to create an arx executable containing working versions of all the application's dependencies, which we could copy to the existing server as a drop-in replacement of the existing (broken) command.


Oh yeah, true, I've seen this pattern very often. Can be annoying sometimes when you just want to extract the files rather than run an installer script and they don't give an option.


Can someone explain what this is and what it does? I have no idea. I use Windows and have never needed to use Docker for anything.


It’s a Docker shebang. Normally shebangs are used to define what shell to use when running a script, or in the case of a python script, to run it with “./myscript” instead of “python myscript.py”.

Here OP created a little hack for building and running a docker container by adding a shebang to a Dockerfile.

Usually it’s a two step process. You first use “docker build” to build the image and then “docker run” to create a container from it. With this little hack you just run “./Dockerfile” and it does both.

It’s cool, but not really useful for most people.


It turns a Dockerfile into an executable script, so that by executing the Dockerfile, the shebang invokes docker to build and run the file.

Pretty neat if you're using Dockerfiles, but also highly non-standard so you wouldn't use it in your company repo (unless you want to increase the "what-the-fuck" level of your repo).

It's more of a "look, this is cool" kind of a thing if you're a Linux and container user.


I can see this being used to install dependencies (PHP Composer?) to inspect code with references that resolve, instead of having to spin up a whole toolchain just for that.


There's also `guix shell` which can be used in shebang position. Example from the Guix manual:

    #!/usr/bin/env -S guix shell python python-numpy -- python3
    import numpy
    print("This is numpy", numpy.version.version)

It also works with manifest files specifying more complex environments.


This isn't POSIX compliant, is it? I feel like I tried something like this (putting arguments in a shebang) and ran into trouble a year or two ago.


It depends on the version of /usr/bin/env.


I believe it's compliant but only in the sense that the end result is unspecified by POSIX.

I.e. you can't rely on this working on a POSIX compliant system


Should be fine, you can even compile and run a C file using a shebang
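
For example, with the Tiny C Compiler installed (a sketch; tcc's -run mode skips a leading #! line):

    #!/usr/bin/env -S tcc -run
    #include <stdio.h>
    int main(void) { puts("hello from a C \"script\""); return 0; }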


As a rule, if you can write your code in a file, and run it as a script, it is better than writing scrolls between

    <<EOF 
    …
    EOF
The why is obvious.


It's not obvious to me that those benefits outweigh the benefits of a single, easily readable file.


Some would disagree that heredoc'ing your scripts makes them easily readable.


Some people like their personal full-programs inside a single-file, I think the appeal is that after opening you only have to keep scrolling to continue reading the other "files", or that if you need to attach it to an email or something similar you are sure it has no dependency on other files, but yeah the trade-off is not worth it.


And keeping it one file means you're reducing risk of a breaking change in the external script.


I did this in Nov 2021 - https://www.grepular.com/Self_Building_and_Executing_Dockerf...

    #!/usr/bin/env -S bash -c "podman run --rm -w /x -v "\$PWD:/x" \$(podman build -q - < \$0) \${@:1}"


Can we figure out a way to throw an exec into the shebang so the Docker process replaces the bash one?
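
Untested, but prefixing the run command with exec in the shebang above should do it; the build still happens in the command substitution before bash replaces itself:

    #!/usr/bin/env -S bash -c "exec podman run --rm -w /x -v "\$PWD:/x" \$(podman build -q - < \$0) \${@:1}"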


Simple file based solution. That's not abuse that's unix.

Well, that's Unix before Poettering and Microsoft.


I feel comfortable with fenced code blocks. Using heredocs all the time, not so much.

    ```js title="/root/server.js"
    console.log('test')
    ```
or

    `/root/server.js`

    ```js
    console.log('test')
    ```
vs

    RUN <<EOF cat >/root/server.js
    console.log('test')
    EOF
However the Markdown one is better if the syntax highlighting theme makes the code fence a color that doesn't stick out - either monochrome or closer to the background color.


The file is a Dockerfile with a shebang line that ignores the comments with a regex. Code fences would not be valid.

The point of this isn't to share this code, it's a demo of the clever shebang line.


Brilliant idea. Single markdown file for a whole app stack?


Multiple markdown files, presenting code more like gists where you can read them top to bottom, plus with docs in between...


That’s cute - though typically the docker images I build need supporting infra around them anyhow - I’d have to forward to the build script.


I'm probably getting banned for committing this war crime but have you considered a

    #!/usr/bin/env -S bash -c "docker run --privileged ..."

    FROM docker
    
    RUN <<EOF cat >/other.Dockerfile
      #!/usr/bin/env -S bash -c "docker run ..."
      FROM debian:buster
    EOF
    RUN chmod +x /other.Dockerfile
    
    CMD bash -c "/other.Dockerfile & ./main"


No, but that’s a fun idea. Most of my docker crimes involve working around the lack of REBASE or similar to transplant a layer from another stage. Instead I’m forced to abuse rsync.


I use `COPY --from=image` to move data between images. Do you need some more advanced features of rsync?
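
For reference, COPY --from accepts either a build stage or any image reference, so pulling a directory out of another image can look like this (names are placeholders):

    # grab a directory out of another image without inheriting its layers
    FROM alpine:3.19
    COPY --from=some-registry/builder:latest /opt/app /opt/app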


I've got that for rebasing: https://github.com/efrecon/docker-rebase


heck, why not toss in --net=host


... and privileged, then make the entrypoint 'nsenter' for PID 1


    -v /:/sysroot


could you make it curl a script as well


Using spaces in a shebang is not a standard thing and doesn’t work in most shells.


The spaces are being handled by `env`:

    $ env "-S echo hello world"
    hello world
https://www.gnu.org/software/coreutils/manual/html_node/env-...


Curious for what systems this does not work? I start a lot of my shebangs with `#!/usr/bin/env <app>` such that I can rely on PATH for resolving application locations.


Just plain #!app also works. Probably less portable, but it does work on linux and macOS. Not sure if POSIX has anything to say about shebangs.


Huh, I guess I've never given it too much thought. I was under the impression for some reason that the first argument was supposed to be an absolute path.


It might have had to be absolute on ancient Unixen ... Unices? Seems POSIX has all of this to say about shebangs:

    If the first line of a file of shell commands starts with the characters "#!", the results are unspecified.
So it's basically all down to convention, but one that's been followed long enough that you can rely on it. I still don't count on shebang taking more than one argument to the command though.


Is this webscale?


It has web scales on it for sure.


Is hyperscale.


hyperscale^3-8


Can someone explain what this is about?


Looks like everything Docker-related is done in just one file?


what is the practical use of it?


I mean, apart from the hacker mindset of the thing, if you’re talking about the _outcome_: there is real value in being able to distribute something to a customer without having to worry about whether they have the dependencies or not, what version of an OS they have, whether they are running on ephemeral VMs or long running machines they don’t want to pollute etc etc.

At Google Cloud we did this on a team I was on. It was really the only way we could be sure of the environment we were handing off to the customer.


It's funny you mentioned Google's internal infra because my motivation for this was to hack together something to emulate the kind of static fat binaries deployed on Borg.


Why is that important?


I had no idea a shebang could be used like this! After all of these years…

Nice hack. Love it.


For added excitement, you could go the whole hog, and generate, build and run via docker compose. Apart from anything else, you wouldn't need the 2-step build&run.

I mean, you could. Whether you should, well ...
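
A hedged sketch of what that could look like, feeding the compose config over stdin so everything still lives in one file (the service definition is just an assumption):

    # docker compose can read its config from stdin with -f -
    docker compose -f - up --build <<'EOF'
    services:
      app:
        build: .
    EOF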


why not use docker build -q instead of that silly sha parsing?


As you can imagine, it wasn't a fun developer experience building this incrementally without build logs. This was the only way I could find to have the cake (logs) and eat it too (sha).


I like how the last code line reads

   ctx.stroke()


Haha very clever. I like it.



