I love that the download and install section doesn't assume someone looking to install this specific package already has gcc/git/make available. "Hi, I'm a brand new *nix user on a fresh install of my distro, and I want to install a command to recreate the Sneakers decryption effect," says no one, ever. =)
So many packages make these kinds of assumptions. I tend to find that in those situations, the likelihood is that some other salient bit of information got left out as well. Also, bonus for not being an npm-install type of something.
I hate this in documentation. Snippets of code or config that don't work on their own, where it may not even be clear where they're supposed to go or what they're nested in. They may use imports whose names can't be found anywhere on the page. That kind of crap.
Please, at the very least, put a full-fat example or a link to it at the bottom of the page. Something.
The amount of “put this in your config file” for things that are part of JS build chains, without specifying where that file lives. Oh, the word “config” is a link to the configuration docs page but…that just lists the possible config options, not where they live.
C/C++ programs are atrocious for this. In any other language you just run "___ install" and it grabs everything you need to make it run. C devs, meanwhile, expect you to first spend an hour searching for the packages they were too lazy to list. Which all have different names on each distro, and probably at least one of them isn't available in the version needed.
Needlessly wasted, because each person doing that could publish it as a spec and save everyone else the effort.
They are relatively easily converted between formats (ie: deb/rpm)... once any is made. Personally, I appreciate that this is generally available in distributions.
This is the entire purpose behind Linux distributions. Package up and distribute the software.
I don't begrudge anyone plugging away at development, but using the source as the distribution model drives me mad.
Seems like the author simply assumes those who want to build his code already have ncurses installed. Thus, only a Makefile is provided. No configure script, SCons script, etc.
It would be nice if the author stated explicitly what the dependencies are so you don't have to spend hours installing the libraries. Yep, I get it.
I get that it's possible, as a developer, to have so many packages installed on your local that are used all of the time so that they are overlooked as an external package. However, it seems like a dependency tree should be able to be created by all of the import statements. From there, just list everything even if it is part of the stdlib. I'm not a C/C++ dev, so maybe that is possible and people just don't do it?
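For what it's worth, a rough version of that "build the dependency list from the imports" idea is easy to sketch in Python: scan a C source tree for `#include` directives and print every header referenced. This is a naive illustration, not a real tool; it can't tell stdlib headers from external ones, and mapping a header like `ncurses.h` to a distro package name (the hard part) is left out.

```python
import re
from pathlib import Path

# Matches both #include <...> (system) and #include "..." (local) directives.
INCLUDE_RE = re.compile(r'^\s*#\s*include\s*[<"]([^>"]+)[>"]', re.MULTILINE)

def list_includes(source_dir):
    """Collect every header name referenced by .c/.h files under source_dir."""
    headers = set()
    for path in Path(source_dir).rglob("*"):
        if path.suffix in (".c", ".h"):
            headers |= set(INCLUDE_RE.findall(path.read_text(errors="ignore")))
    return sorted(headers)
```

Even a dump like this in the README ("these are all the headers we touch") would save the guessing game, though it still wouldn't tell a Debian user that `ncurses.h` means `libncurses-dev`.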
I'm not a C/C++ dev, and I'm not familiar with how much C/C++ tooling has improved.
But I frequently dealt with C/C++ codebases as an undergraduate student many years ago. Some of them only provide a Makefile, which assumes you already have the necessary libraries installed (or know how to install them and adjust the include paths). How to build the code? The usual cycle: make -> compile error -> install lib A -> make -> another compile error -> install lib B -> ... repeat this until you get the thing built.
Using a build tool like SCons makes your life easier. If it cannot detect the installation of required libraries, it may throw errors before even compiling your code.
Of course modern programming languages like Go, Rust, Nim etc typically have default package managers...
> This project provides a ncurses implementation for such cases. You will need the ncurses library installed. Install this library from your package manager.
THIS! I was a little disheartened when I saw I had to build it, but the repo is tiny, and the build is a second. This should be the norm, not a pleasant surprise!
I 100% agree with the frustration. Anything that involves make as described by a repo such as this is not something I want in any of our processes. It's going to fail.
But this is really a toy and not meant to be taken seriously.
But also, on the other hand, I'd like software to be easy to install, so it being included in package distributions is convenient. That said, being able to run from source without dependencies would be cool as well; that's probably why some tools are still built in pure shell code.
While I too wish for complete build instructions as a packager, I feel this is overstated.
The very first part of the 'Download and Install' section says to check the distribution package manager so that building isn't necessary
These users are better served not even knowing about builds. In this spirit, the bit saying 'if you aren't absolute latest, adopt the build from source pain' should be removed.
Toys like this don't demand the latest and greatest. Save the [complete] build instructions, including the libraries (ideally their package names), for developers/contributors
True fact: there's a subset of real cryptanalysis attacks --- the CBC padding oracle, Bleichenbacher's RSA padding oracle, the ECB byte-at-a-time attack --- that (I think?) Juliano Rizzo coined the term "Hollywood Attack" for, because when you run the exploits for them, they look like decryption in Sneakers.
It's a good term, everybody should use it, it's even a little useful for getting the intuition behind the attack (a lot of cryptanalytic attacks work by finding creative ways to isolate a single byte of ciphertext instead of having to work against the entire 128 or 2048 bit message).
I absolutely love this movie. I grew up watching it on VHS and coincidentally bike by the setting of one of its major scenes daily. There is that brick courtyard that currently sits between Google and Wharton. In the movie it's an outdoor dining area, and Sidney Poitier says, holding the car phone in Martin's Karmann Ghia, "Martin, it's your mother." And then they realize they're not with the NSA. Anyway, they've added a floating firehouse there now, and the road looks a bit different, but I love how similar the view is there now compared to when the movie was filmed.
I really do love this movie, it's really well done all around.
> Wait, you're telling me I can just download these 3-5 source files, type `make`, and it will _just build_?
No. You need make and a C toolchain installed.
If you have those installed, you almost certainly also have Python, Perl, and possibly even a reasonably recent Java (not on macOS, yes on most Linux distros, I think) installed, and all of those are also capable of doing this with zero dependencies.
Embedded projects can be more complicated to build than the Linux kernel in two respects: Embedded projects are usually cross-compiled; whereas, one is ordinarily building the Linux kernel for that machine or one like it. And, hardware vendor code is usually bad.
I had one embedded project where I hand-rolled the makefile, and it was all in pure C, no OS. That was fine, but keeping the versions of the hardware abstraction layer and the other drivers I wanted to use from the same vendor in sync was a source of annoyance. Keeping track of which source files were to be enabled and disabled in the makefile was also annoying.
Another project I am working on uses platformio on an RTOS (and a much bigger micro). Platformio is written in Python to handle all of that 'seamlessly' and it does work pretty well from my perspective. And I don't have to manually install separate toolchains to /opt (or wherever) and keep track of them; it 'just works.'
I like the makefile approach. With only a little work, it's possible to make clean build directories without .o files littered around like mouse droppings. I know exactly what code is being built, and, because I usually have to write the linker script, where it is being put. But once it gets more complicated with different toolchains and compatibility problems (and it usually does if you have an RTOS), it sure is nice to have something like Platformio.
There's also the aspect that a lot of the really good packages for dealing with e.g. text or the OS don't translate well to the constraints of embedded systems.
So I could go to all the trouble to have a cascading compilation all in pure c or c++, so the git SHA is written into a given source file, which is in turn built and flashed onto my target device. Or I could use an interpreted language for all the off-the-device operations, completely skip over configuring a separate development environment, and do it all in a fraction of the time.
The only reason I want to be compiling c or c++ for the host is because I'm also writing a driver at the same time, and even then...
I'll see your "Python and Ruby" and raise you "a Windows batch file that calls perl, python, nmake, and gnumake." It's like each person who contributed to the build system just used whatever tool was closest.
I've got a (bash) build script that uses perl (with lots of regular expressions), python (apparently for no other reason than that the version of perl it started with didn't have `say`), dc (to calculate absolute memory addresses, because python and perl don't have a convenient equivalent of `P`), and runs a Windows batch file under wine to build the .exe version with MinGW (although that will print a warning and continue if wine or MinGW is missing or out of PATH).
The cherry on top: visiting the https://cryptography.io/en/latest/ landing page, a Python outsider wouldn't be able to tell that this is for Python.
A full text search for "python" reveals only a single match hidden at the bottom within the FAQs section; ironically within the context "Why are there no wheels for my Python3.x version?".
Most of the other FAQs add to it:
- I cannot suppress the deprecation warning that cryptography emits on import
- cryptography failed to install!
- Why does cryptography require Rust?
- Installing cryptography produces a fatal error: 'openssl/opensslv.h' file not found error
- cryptography raised an InternalError and I’m not sure what to do?
- Installing cryptography fails with error: Can not find Rust compiler
- I’m getting errors installing or importing cryptography on AWS Lambda
- Why can’t I import my PEM file?
- What happened to the backend argument?
- Will you upload wheels for my non-x86 non-ARM64 CPU architecture?
Python is by far the most platform-dependent, brittle programming environment, and the prime example that the "interpreted languages run everywhere!" trope is increasingly dead.
Why is this python's problem? You are choosing to use a c/rust accelerated library. If your project were pure c or rust, you would still have to build everything.
I meant: "Python *has by far the most platform-dependent, brittle programming environment".
> You are choosing to use a c/rust accelerated library.
Especially with something as fundamental as `cryptography`, I don't choose it - it is a given through transitive dependencies.
> If your project were *pure* c or rust, you would still have to build everything.
That's the whole point: when developing in programming language XYZ, then stay within XYZ's environment: As much as possible, implement stuff by writing in XYZ, and import other stuff following this principle.
Whereas, the moment you or one of your dependencies delegate to other environments, your program inherits and requires additional environments.
So many projects and libraries in Python World have taken the latter approach, resulting in so many Python applications being a cobbled-together Frankenstein diva. It will refuse to work on your users' system, until they have the correct rubyenv, and autoconf version, and npm.
"Huh, why does the user report a problem with npm when installing my Python application?". Well, that could be because unbeknownst to you, whenever you yourself developed, updated, and ran your program, there was always this tiny dependency of a dependency that needs it - you just didn't know about it until this issue report, because your system already had Node and npm set up correctly. That reporting user's not!
Linux, being a kernel, has no external dependencies (except for its menuconfig, which requires ncurses). That sort of isolated build tree is very rare in almost every other software project, even something like gcc has a dozen external dependencies IIRC.
It does, including at least a shell, a C++ compiler, make, and a bunch of other UNIX/GNU tools to run the whole thing. And where do you get those programs from, and can you trust them? If you look really seriously, a scary problem emerges. (See: Trusting Trust Attack)
The Guix project recently hit a massive milestone in untangling this, by creating a package set which starts by building a 357-byte program and works its way up to the utilities you'd expect: https://guix.gnu.org/blog/2023/the-full-source-bootstrap-bui...
Like, a ton of our ops-scripting, things have 2-3 direct dependencies and maybe 10 transitive dependencies. And the latter is an actual flask application with pydantic, so a bit outside of "scripting". We've settled on a set of these versions, put them into a venv on a system and that's it. Update every few weeks, get accustomed to the libraries. Nice and stable base to work with.
And then we have some of our python-based ML teams. I'm not joking, but I have entire classes of servers which - including their dataset - are smaller in storage footprint than some of their python containers including tensorflow and models and panda and all manner of things. Feel free to accuse me of comparing apples and oranges, but that feels very, very strange.
Yeah the ML envs are insanely bloated, but honestly it works well enough for us doing that work. It's nice having lots of battery-included stuff. It really sucks having to manage the primary dependency chains (mostly centered around CUDA versions and GPU drivers), but it's slowly getting slightly better. The overall bloat of packages generally doesn't feel like it gets in my way.
I come from an embedded systems / microcontroller background though, so the bloat does feel completely insane.
For me things like Django feel "worse" overall in terms of developer QoL than the ML universe does, but maybe just because I don't know it as well and my last real webdev experience was before PHP4/HTML5.
I mean it's entirely fine, at least that's what the company decided.
The snark is mostly arising because I recently wrote a little tool to do an inventory of our private docker registry, because storage was getting difficult to manage. After a whole bunch of messy data collection and setup, it sets up a graph of the layers in the registry, assigns images and tags to owners. From there it backpropagates "ownership" of layers - if a layer is only referenced by images attributed to one team, that layer belongs to the team, else it's marked as mixed.
Before doing that, people were enthusiastic about making image builds more efficient, tagging smarter, collecting faster. A great atmosphere I'm finding myself enjoying more and more in the company, which is great.
However, doing the analysis pretty much showed us that 90% - 95% of the storage used in the private registry was used by these python/ml docker images. We found several projects whose CD images were not being garbage collected by the setup for 2 years or so. This wasn't great, but the entirety of their images took up less storage than like 2 base images of the ML stack.
It's the thing we have to do, but I reserve my right to make fun of it.
My entire system is built on a staggering amount of C libraries.
Until recently, literally the only way to have your code reused from different applications in different languages was to use C, or use C++ with the interface butchered to be C compatible.
Other languages code reuse is "well, first you have to go all in into our platform", which is great from the POV of someone already on that platform, but not so great if you just want to reuse a small bit of code from that other language.
When I set up a Linux system, it only installs what I ask it. Perl and python are popular enough that they'll probably get pulled in by stuff I want to use, including things like the distro's package manager. But Java? No way.
Aah, this brought back memories! Waaaaay back in the day (the 90s) I would implement my own methods instead of, say, including stdlib.h or other libraries, because I had the (wrong) idea that including them would bloat my C programs (didn't know how a linker worked, huh?).
12 year old me would choke seeing NPM, pip, or bundle these days... or Electron!
I helped interview a front end developer who bragged about and insisted on hand-writing every bit of JavaScript so that we’d have the leanest possible website.
He did not make it past me.
Not that we don’t care about those things, but we have better uses for our time and money than implementing an in-house version of React. Also, our other engineers would probably have strangled him.
GP was talking about stdlib, you are talking about react. Not the same thing at all.
Using stdlib is almost free on most systems. It is usually preloaded so it doesn't even use up ram. Unlike functions you write yourself. And with static linking, linkers will only take the functions you actually use, not the whole library.
A js library needs to be downloaded and compiled in its entirety by the user browser. There are techniques to make it less bad, none of them ideal.
The way I interpret GP post is that 12 year old didn't want to bloat his code but had a misguided idea of what causes bloat. You are saying that bloat is fine if it saves on dev costs. Two very different ideas.
Please give me a little credit! It wasn't that I'm against writing code in-house. That's my favorite part of my job! But someone insisting that they want to write everything themselves is a huge red flag. I mentioned React to give it a contemporary context. The interview was a few years ago, and they were ranting against the bloat in jQuery.
That also wasn't the only reason I said no, and I won't go into detail about the rest.
Oh god, fun related memory. When I was 10 or 11 or so, I programmed in Turbo C on my PC. At some point, I messed up some pointers that I passed to strcpy(), so young, smart me decided that "string.h must be broken" (I also thought it contained the actual definitions), and moved the file away so that I wouldn't accidentally use it.
Yeah sure, that's what must be broken. Not my code.
A while later I got smarter, and progressed enough to hook into DOS interrupt handlers with my C code and some assembly glue code. I decided to make a program that would click the PC speaker every time the hard drive is accessed (not a new idea, but a good exercise), by hooking into INT 13h (in DOS/x86 in general, sw interrupts were basically used as system calls).
To my delight, it worked, and it suddenly seemed like I had a very loud hard drive. But DOS would also spit out strange I/O errors when trying to list the current directory.
Huh, weird, I wonder if the same happens in the root directory? So, "cd \", followed by "dir", and, yep, I/O error.
Okay, let's reboot and... doesn't boot?
It dawned on me pretty much immediately what must have happened: I didn't restore all registers correctly before calling the old hard drive sw interrupt handler (again, system call in modern parlance). It must have been unlucky enough to convert some reads into writes. And by trying the same thing on the root directory, I probably converted some reads to the sectors making up that root directory into destructive writes... oops.
Great stuff! Back in the MS-DOS/Win 3.11 days I experimented with TSRs and viruses (following tutorials and the like). At one point I borked my (Dad's) computer running some stupid "virus" I created that infected COM files... had to basically wipe the whole disk.
Yes, and it boggles my mind why C/C++ hasn't come up with a better built-in build system already.
I shouldn't have to specify a ton of flags to g++ something, it should find them within the project directory and pull in dependencies automatically.
Header files should be done with. Poof. Gone. There is zero reason for them to exist. I know exactly why they exist, but it's an artifact of a system where preprocessor, compiler, linker are separate. I should be able to do "import<foo/bar>" multiple times and it should pull in all the classes and functions in "./foo/bar.cpp" or "./foo/bar.so" (whichever is available), just like Python, and cache the ".so" files just like Python caches ".pyc" files.
Improving C++'s build experience would have absolutely zero impact on execution speed.
Yet they're busy adding some stupid spaceship operators and other things people almost never use.
I’m annoyed too, because I love C, and back in the 90s compilers like Turbo C and Microsoft C were quite intuitive to use from the command line with whatever old editor you wanted. I would still very happily be using modern C if the overall experience were like Go.
You played this on C, but there is nothing about C that makes this special. You could do this with python, javascript, rust, go... except java maybe. Just assume that the compiler and build systems are installed. There is nothing about implementing some terminal escape sequences in C that couldn't be done elsewhere.
Actually, C is worse: it could all be any compiler/version, make version, or ncurses lib version - nothing is fixed, not even documented. At least in other language there is some standard way of pinning dependencies.
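To make the "nothing special about C here" point concrete, here is a stdlib-only Python sketch of a Sneakers-style reveal. It is a loose imitation, not the actual nms algorithm; it just uses the carriage-return redraw trick and a character-table lookup, the same handful of loops any language can do:

```python
import random
import sys
import time

# Pool of printable ASCII characters used for the scrambled positions.
PRINTABLE = [chr(c) for c in range(33, 127)]

def reveal(message, steps=8, delay=0.05, out=sys.stdout):
    """Print scrambled text, then lock in one more block of real
    characters each frame, redrawing the line in place with '\r'."""
    n = len(message)
    for step in range(steps + 1):
        fixed = n * step // steps          # how many leading chars are final
        frame = message[:fixed] + "".join(
            c if c == " " else random.choice(PRINTABLE)
            for c in message[fixed:]
        )
        out.write("\r" + frame)            # '\r' rewinds to column 0
        out.flush()
        time.sleep(delay)
    out.write("\n")

if __name__ == "__main__":
    reveal("SETEC ASTRONOMY")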
> You played this on C, but there is nothing about C that makes this special.
Yes there is something special about C (on Unix/Linux). C is the mother tongue of Unix so it's more intertwined with the system than anything else (other than shell scripting also). You'll nearly always have cc installed. The libraries you need are often in /usr/lib as part of the OS install.
Python, no way. Now you have to start pip installing dozens of packages and then you run into dependency mess so then you reach for some tools to manage multiple environments and suddenly you're a few hours into what's become a project when all you wanted was to run a script.
It's definitely worse with C. Most projects just give you a general "you need X library". Unknown version. Check with your distro which of multiple (or zero) packages that might be. Then you also need the X-devel package for the headers.
Aaaaand then there's all the arcane configure script nightmares.
You've never run into a problem where 2 C projects require a some shared system library but require incompatible versions? At least with python you can isolate those through venvs.
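As a concrete example of that isolation, Python's stdlib `venv` module can create such an environment programmatically. A minimal sketch (skipping pip installation to keep it fast; each venv gets its own site-packages, so two projects can pin incompatible versions of the same library side by side):

```python
import os
import venv

def make_isolated_env(path):
    """Create a bare venv (no pip, for speed) and return its interpreter path.

    Packages installed into this venv are invisible to every other venv
    and to the system Python, which is exactly the isolation C's shared
    system libraries don't give you.
    """
    venv.EnvBuilder(with_pip=False).create(path)
    subdir = "Scripts" if os.name == "nt" else "bin"
    exe = "python.exe" if os.name == "nt" else "python"
    return os.path.join(path, subdir, exe)
```

Running `pip install libfoo==1.2` with one venv's interpreter and `libfoo==2.0` with another's keeps both projects working on the same machine.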
> You've never run into a problem where 2 C projects require a some shared system library but require incompatible versions?
I have to say ... no. And I've worked on some very large C projects at times.
Part of it is cultural. In the systems library space, there is a much stronger (nothing is perfect but I'll say much stronger) culture of compatibility awareness. Libraries don't break compatibility randomly without any planning unlike in less mature ecosystems (coughnpmcough). So you don't need to depend on libfoo 3.17.42 because neither 3.17.41 nor 3.17.43 work for you. That's not a thing.
Part of it is how the system works. If you look under /usr/lib you'll notice libraries are versioned and have symlinks from major/minor version to them. Different versions coexist and you can link accordingly. Even if you have (rare) badly-behaved library authors who break compatibility, it's not an issue to have multiple versions and each program links to the one it needs.
Heh, python is stock now on the EL distros since dnf became the package manager. Kinda insane for the base of your OS but, hey, it runs on servers so who cares.
A lot of pain comes from OS apis which are not supported in the standard library, like graphics, audio, and media. These inevitably need libraries and frameworks to work cross-platform.
That's not what you said, though; you said I could just download it and type make and it would just work.
That's not what happened for me, and it seems to be a very odd way of presenting a very different message.
Turns out the minimal dependencies needed involve several OS packages and also the special "dev" versions of those packages installed, and I need to use one of the right subset of terminals.
Having the implicit knowledge to get the "magic", doesn't have anything to do with minimizing dependencies. Sounds like the real thing being said is: "It's neat when people use a toolchain and environment I'm familiar with". I don't think that extrapolates to "minimal dependencies are what we want".
Consider the set of dependencies required to recreate this in language / framework X.
I assert that this set of dependencies is larger (in some cases much larger) for probably all languages and frameworks, except those that are built using the same language, compiler, and libraries used by, and shipped with, the target operating system. I would also assert that the speed to build, install, and deploy the package would be worse. Larger here means "includes more lines of code", as well as "requires more libraries" and "requires more executables, probably from interpreters". "Worse" here means "more time required to resolve", "requires more downloads", and/or "requires more CPU cycles to solve the multiple levels of indirection from interpretation".
It just so happens I have npm, nodejs, cargo, g++, python3, pip, and gcc floating around, and I almost always find that 'make' with `gcc` is the fastest gun in the west, as long as you stick to some basic OS libraries, as this project does. Cargo is real, real close, and probably equal if you ignore the long dependency chains that result for almost any project.
But that's much more verbose, and I think the original message implied this for most readers.
If we are talking about source code distribution, then Python-with-stdlib-only would have smaller overall dependencies (measured in bytes or executables), be easier to build (no build step) and faster to build (0 seconds). The runtime speed will be much lower, but you don't need high performance for terminal graphics.
But if you want a really lightweight language, you'll want something like Lua. It's like 0.2MB in a single binary which has a (very small) stdlib already built in it.
There is a reason old-style routers/embedded devices with megabytes of flash came with Lua (and rarely python/perl), and almost never with gcc or other languages.
The compiler and libraries aren't shipped with most operating systems. They are available as additional downloads for most operating systems, just like any other language and environment. But the biggest Linux distros, OSX, Windows, and ChromeOS don't ship with a compiler in the default install.
I would assert, that this could be done in C, python, rust, and go - no libraries just whatever ships with installing the compiler package. Probably others too but I don't know them well enough to say this about them with the same level of confidence.
I will concede that some of the packages to install those languages may be bigger (in the more bytes sense) than gcc + dev versions of the c libraries.
Literally it just does a few syscalls to read from input and write to output, all those syscalls are available in the stdlib of the languages I mentioned. Everything else is just a few simple loops and some table lookups (there is the ncurses version that does it a bit different, but it includes the raw vt100 codes too - I'm ignoring the ncurses version to be kind to your assertions about dependencies).
For the languages I listed you need the same number of binaries, or fewer. For go - you just need the go binary. For python, the same. For rust you need cargo, rustc and ld (vs C's make, gcc, ld). (note ld here is shorthand for all the tools ld invokes too).
As for cpu cycles, you may be right but that is also unrelated to the minimizing dependencies message - although I don't know that the binaries created by rust or go would actually use more CPU cycles, and likely not enough to matter on any computer made in my lifetime.
As for compile times - I don't think python or go would show a noticeable difference (python is compiled to bytecode that runs on a VM at startup; it's not a true interpreter, so the python text -> bytecode step will not cause a noticeable startup delay).
Looking at the project code, I can see several ways to make it have lots of additional dependencies in C or any other language. For instance we could consider the ncurses version in the repo itself, that adds a dependency which must be downloaded (ncurses is installed often by default on linux, but ncurses-dev or whatever is an extra install). But why stop at ncurses, we could make it use glib and getopt and so on.
At best, the dependency thing seems orthogonal to the language.
As for packaging - well I just don't believe it would be worse.
It would be the same set of tools to create the package for each of them - for go and rust you wouldn't even need additional OS dependencies to install the package - really it's just a change of the build command and the binary path to copy into the package, though you would for python - fortunately that is included as part of the default for many OSes that don't include make and C.
I love glorifying doing things the hard way as much as the next guy, but the fact is dependencies are a way of life.
We build on the shoulders of giants. We pull in other peoples code because we don't want to duplicate work, ours is likely not to be as good, any code is a liability, and sometimes _we just aint got time for that._
It's because programmers nowadays think efficiency is a one-dimensional concept that only applies to the code.
They don't think about the big picture: the efficiency of the resulting binary size and build time (both implied by the number of dependencies).
Another thing that gets ignored is the external dependencies the user needs just to run the program (none for a natively compiled binary, versus a whole runtime environment for your Python program).
Don't advertise these languages, else they will become like the wretched python is now.
Python used to be fantastic. It actually used to just work. That stopped once everyone caught on to the language, Nvidia got involved, and tensorflow/pytorch/ML ruined everything, with so many script kiddies writing nonsense and having no interest in making their packages compatible with anything. The community is the best, but also very, very much the worst part of the language.
It already happened with autotools, BSD ports, CMakeLists, and static-y linked apps like “snaps” and their analogs.
C is not special in this regard. Special part is an OS itself that takes the role of a single system-wide virtualenv or node_modules/.. with all common trouble packaged in.
Python and others didn’t create this trouble, they ejected it from your system into a single rm-able folder.
From personal experience, the number of times that I've typed `make` and it worked are so few, that now I basically think "cool, but there's no way I can compile that" and close the tab whenever I see a C project.
Agreed. 90+% of the times that I've wanted to compile something, I couldn't. I suspect that it was mostly because of missing or outdated dependencies. The only things that I can reliably compile are image encoders.
Once you have gcc installed, yes it just works. It's only 184MB and 2153 files (tested on random VM).
Python is actually smaller, 82MB and 1461 files, and has a much better stdlib.
If you never wrote zero-dependency python scripts, give it a try! It's a very nice experience: download file / clone repo, run a file and that's it. Not even a build step.
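As an illustration of that workflow, here's a hypothetical zero-dependency script (the filename and CSV column are invented for the example). Download it, run it with the interpreter your distro already ships, done; no pip, no build step:

```python
#!/usr/bin/env python3
"""summarize.py - stdlib-only example: summarize a numeric CSV column.

Usage: python3 summarize.py data.csv price
"""
import csv
import statistics
import sys

def summarize(rows, column):
    """Return basic stats for the given column, skipping empty cells."""
    values = [float(r[column]) for r in rows if r.get(column)]
    return {"n": len(values),
            "mean": statistics.mean(values),
            "min": min(values),
            "max": max(values)}

if __name__ == "__main__":
    with open(sys.argv[1], newline="") as f:
        print(summarize(list(csv.DictReader(f)), sys.argv[2]))
```

Everything used here (`csv`, `statistics`, `sys`) ships with CPython itself, so the script is as portable as the interpreter.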
This (and learning stdlib better) is precisely why on a Python project of mine, the only external dependency is a C shared library to run some parts faster than Python can (making giant arrays of ints and shuffling them). I bundle precompiled libraries for the architectures I have available, so you don’t have to make if you don’t want to.
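A rough sketch of that pattern, assuming a hypothetical bundled `libfastshuffle.so` exposing a `shuffle_ints(long*, size_t)` function (both names invented for illustration; the actual project's library will differ). The pure-Python fallback means users without a matching prebuilt binary can still run it without make:

```python
import ctypes
import random

def load_shuffle():
    """Return a shuffle function: the bundled C library if it loads,
    otherwise a slower pure-Python fallback with the same interface."""
    try:
        lib = ctypes.CDLL("./libfastshuffle.so")  # hypothetical bundled name
        lib.shuffle_ints.argtypes = [ctypes.POINTER(ctypes.c_long),
                                     ctypes.c_size_t]
        def fast(values):
            arr = (ctypes.c_long * len(values))(*values)
            lib.shuffle_ints(arr, len(values))
            return list(arr)
        return fast
    except OSError:
        # No prebuilt library for this architecture: fall back to stdlib.
        def slow(values):
            values = list(values)
            random.shuffle(values)
            return values
        return slow
```

The caller just does `shuffle = load_shuffle()` once and never needs to know which implementation it got.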
It would compile quickly -- on a relative basis -- it would just take 2 - 10x longer to write than something in a language more suited to web development...
And probably also contain a bunch of RCE vulnerabilities.
I've deployed microservices in C! It's a PITA. Not to build, but to get the microservice to accept C. It builds real fast. Then zip, and `aws lambda blah blah` works fine.
It's funny, C++ projects either have the best or the worst setup experience.
Took me hours to get Audacity building on Windows. Actually, I might have given up and installed Linux instead, and built it there. That's usually way easier!
But the other day I built an old MP3 encoder (BladeEnc) in a split second. It brought a tear to my eye. That's with Visual Studio 6. Which somehow builds an entire project before the new VS even begins compiling!
One well-maintained C project in isolation. I've also seen nightmares that are difficult to build on certain platforms or require undocumented build dependencies. Let's not forget how horrible the automake/CMake/SCons/etc. ecosystem can be.
Everything is awful when there's no attention to the dev experience.
Even more incredible is that the project readme actually explains everything and you don't have to be a member of some cult to get the secret knowledge that makes everything click.
> make: The term 'make' is not recognized as a name of a cmdlet, function, script file, or executable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
I am not talking about terminal support or anything like that, but I actually find this example quite telling too, though from a totally different perspective.
An effect that I could write in high school using Pascal, and that would require only the builtin library and a few loops, requires so much *stuff* in the proposed source code. Given that this is a fun demo and not something that one needs to maintain as enterprise software, I'd totally accept such a trade-off rather than dust off something simpler.
To be fair it's a very old language, but what's crazy is that decades later engineer still have not come up with a way to execute the C directly, but have to rely on a process called "compilation". I mean how hard is it to make a CPU that understands C? /s
My favorite example of this is `xneko`, which is a cute cat that chases your mouse pointer.
It's a C program running under UNIX/X11 that was last updated in 1993. Last I tried, it still compiled with absolutely minimal changes (I think I had to change an #include directive or something) after typing `xmkmf; make` (which was the way to compile X11 programs at the time), and runs, on macOS. Provided you have Xquartz installed.
First, make assumes a C compiler is already installed.
Example on Debian: do apt-get install build-essential
Second: usually Docker, npm, etc. are not needed. If the code uses third-party C libraries, you need to install them. On Debian, again, you can use `apt` to make life easier.
But it would probably be better as a containerized microservice with 100 RPC endpoints. Just think how flexible this would be. Even better, 1000 containerized microservices with failover and autoscaling.
To be honest, I must admit I believe that whoever did this same tool in JavaScript would no doubt have used a package.json that needed to download at least a couple hundred dependencies.
And unless a person is using linux with the right tools installed, they have to go download an entire OS, get it set up, install all the right libraries and then, maybe (if they did it right) they can type make and pretend to "not have to install a whole bunch of dependencies".
Your comparison is a bit disingenuous. I thought that the reasonable point of comparison was obvious, so let me state it in case it's not:
* For this C program: Let's say a default install of a Debian system, with GCC and Make installed. Nothing else. Just clone and run make && ./bin/nms.
* For a hypothetical JavaScript implementation: Same Debian system, with Node.js installed. Nothing else. Just clone and run node ./src/nms.js.
Now tell me that the second point would ever happen, of course without the obvious trick of vendoring tens or hundreds of dependencies in the repo itself. Given the current trends and ecosystem incentives in the JS development world, I highly doubt it.
These trends only favor mindless composition; the composition is good, but the mindlessness is bad. IMHO most devs would probably not even consider the idea of writing a compact, self-contained piece of code with their own termio [1] or charset [2] implementations to begin with.
So your assertion is that if those exact same devs would write C, they would somehow magically stop seeking out libraries to solve the problem for them? Because there are thousands and thousands of C libraries, just like there are libraries in every other language. That seems a bit of a stretch - my "disingenuous" assertion is that library use is orthogonal to language.
Also, there are plenty of non-C languages (JS may be one of them, but it's been half a decade since I touched it and longer since I used it in any meaningful way) that you could transliterate this program into - it's a handful of lookup tables, a couple loops, and a couple syscalls available in every stdlib I'm familiar with. It seems disingenuous to assume that there are only C and JS (or, being generous, that those extremes are the only options).
Not so much ignoring other languages, as just talking about the one which coincidentally (or maybe not) is the one that usually attracts most conversations about dependency ballooning, at least around here on HN.
> if those exact same devs would write C, they would somehow magically stop seeking out libraries to solve the problem for them?
I'd posit that an ecosystem which doesn't encourage adding a third-party library for the tiniest of needs does indeed discourage using libraries except for the most egregious needs.
E.g. you wouldn't implement a whole object-based oo-style programming paradigm in C, it would be wiser to just use GLib with its GObject implementation.
But it would be uncommon to use a stupid library like is-even. Something that is very common to do in JS.
And this actually leaves the space of being opinion-based and is more supported by evidence. You'd be hard-pressed to find this kind of mindless, lazy usage of libraries in well-consolidated and popular C-based software. But I bet you wouldn't need even 30 seconds to find some popular JS package that, in its transitive dependency tree, ends up using a left-pad [1] level of library.
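For what it's worth, both of those infamous packages amount to a line or two in any language, which is the whole point; here they are inlined in Python (the function names just mirror the npm packages):

```python
# What is-even and left-pad actually cost to write yourself: two one-liners.
def is_even(n):
    return n % 2 == 0


def left_pad(s, width, fill=" "):
    # The stdlib already covers this as s.rjust(width, fill).
    return fill * max(0, width - len(s)) + s
```

Pulling a transitive dependency for either is ecosystem habit, not necessity.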
Bell used to actually have this feature, which The Phone Losers of America exploited. The hack was directly inspired by the movie. PLA showcased this in one of their podcast episodes.
I missed Mr Robot's first season when it originally aired. I watched the first 2 seasons, and then somehow life got in the way. I recently went back to this earlier this year, and holy cow was that a mind bender of a show. I also just happened to have had a recent friend go through a hard drug enhanced psychotic episode a few months prior, so the timing of watching the complete series and seeing Elliot's journey was much more affecting than I would ever expect. The whole time, I was wondering if this was anything like what my friend's experience was like (minus all of the hacking). Reading about the show online, there is a lot of commentary on just how realistic people thought the portrayal was.
I have no earthly idea about anything like that. I have nothing resembling training/learning/experience of what goes on in someone's head. I can't even fully grasp what's in my own noggin let alone to assume I could for someone else.
I know this is quite off topic, but does anyone else think that Mr. Robot was originally supposed to be a movie (or maybe a mini-series)?
First season is good, it has some fluffy bits, but mostly it has a story it wants to tell and the technical aspect is pretty good. However it all falls apart in following seasons.
Season two is so slow and boring. It doesn't move the plot forward. It doesn't have (m)any technical things (just some hand-wavy "hacking"). The show just goes further and further from what attracted me to it in season one. To the point where I quit watching somewhere mid season three during the original run, and only after the whole thing had concluded (and I think I was sick at home) did I dredge through the rest of it.
Feels like these days everything has to be a series with multiple seasons. Writers aren't allowed to just tell a story, but instead everything is milked and it almost immediately becomes shit. Another example is Stranger Things. Season one was good and the last episode left me wanting more, but when season two rolled around they took such a big step back that I couldn't get through the first two episodes, I just didn't care.
Wow that amber screen brings back memories, thank you for sharing. My first helpdesk job included mainframe support, and I got to use one of the old IBM quad screen plasma terminals they still had there. Our ticketing system was still on the mainframe so it was actually useful in my helpdesk duties.
So crazy that at the news of Kevin Mitnick's passing, I immediately started thinking of this movie. Probably a lot of folks did. Thanks for sharing this.
Always fun :) Years ago I wrote a similar single file python script “decrypt.py” which does roughly the same thing. Sadly it’s still my most popular GitHub project: https://github.com/jtwaleson/decrypt
I really want a modern version of Sneakers where someone works out that a quantum computer has probably been built (maybe years ago) and they have to steal it if it exists.
Totally fits in with the original Sneakers "no more secrets" meme.
I personally could have been on board with a remake with the original cast (though the original was already about aging and maturity, so I don’t think the passage of yet another generation would have been all that compelling of a story), but RIP River Phoenix and Sidney Poitier (had to check and see if Ben Kingsley was still alive, though).
I actually think the original holds up shockingly well, for a movie that is at least tangentially about technology. It in part got lucky that it was juuuuust pre-WWW and that critical systems running on mainframes is still a thing in 2023.
I wish programs included stuff like this in their demo modes. Beeps with keypresses, animations, and the like.
Back when there were trade shows this would have been great on the show floor (fun to look at and give the salesperson more time to talk). Beeps on keypresses, noises while computing, and of course cool graphics like this.
That stuff would be super annoying when actually using the software for real. Perhaps it could be enabled in the free version and suppressed when you bought the license.
Hey, has anyone used an llm to “enhance” the output of terminal programs and make them a little bit more hackerish or Tolkien or something? So we have a shell for mundane work that makes you feel like you’re hacking or exploring or whatever? :)
Made me think of cool-retro-term, which I haven't checked out in, I want to say, 10 years... lo and behold, someone already applied this to the green version haha
Should no one on HN share their ideas? Must they implement them all themselves to satisfy you? Do you think you're improving HN by discouraging people from sharing ideas they don't have the time to implement?
> Do you think you're improving HN by discouraging people from sharing ideas they don't have the time to implement?
I'm with you. I think the constant "Do it yourself, everything is trivial!" meme has a chilling effect on people who would like to contribute their ideas.
I also think it's part of the reason that some people get turned off by Linux.
Someone will write, "I wish Linux did x," and the response they'll get is "Well, the beauty of Linux is that you can write your own drivers and compile your own kernel all by yourself!"
No, not everyone can.
Some people have families and jobs and other obligations, responsibilities, and restrictions that prevent them. That doesn't mean their ideas aren't good ideas. The fact that their ideas are not welcomed by the core Linux community because they can't roll their own is one of the things keeping Linux back.
Some people are "idea people" and some people are "execution people." There are entire industries built around both of these things. Very few people are good at both.
I'm not the one making the suggestion that someone else do something on a hacker board where the common thread between readers is that hacker spirit. A hacker with an idea is one of the most [useful|dangerous] combinations. Sure this isn't SO where you say here's what I tried, here's what it didn't do, help? And yes, I'm sure there are plenty of examples of someone saying "oh, cool idea, stand back and hold my beer" then 2 specific days later "Show HN"
Ideas are cheap though, and I can see how people get miffed at people coming up with ideas, if the reader assumes whoever came up with the idea that someone else should put in the work.
But seriously, for every 100 startups that post their Show HN on here, there have been 10,000 ideas but only 1 that will eventually translate into a product.
This takes me back to my BBS days. ANSI animations were all the rage. I can't for the life of me remember the name of the software I used, but in the mid 90s I created numerous ANSI animations that looked very much like the movie.
I love stuff like this. I actually recreated the basic program that Sean Astin's character used to "hack" the exits access code in an episode of Stranger Things a while back using the modern port QB64.
Is there any reason this would not work on a Mac and iterm2? I get the initial encryption effect, but sadly, no decryption :). I tried different fonts and iterm2 settings to no avail. I guess I could try the ncurses option, but curious to see this not work natively.
Edit: as helpful comments below me explained, you either have to press another key to decrypt, or launch it with the `-a` option.
"The key meeting took place on July the 3rd, 1958, when the Air Force brought the space visitor to the White House for an interview with President Eisenhower. And Ike said, "hey look, give us your technology, we'll give you all the cow lips you want."
Because the web is getting more and more boring... If you embed this toy in any terminal browser so that every opened page is "decrypted", you get something very very funny :)
The thing about NCIS is that they had these sorts of jokes coming with actual technical background. This wasn't just a thing some writers thought people wouldn't notice, I'm pretty convinced NCIS did this on purpose.
The "I looked at the source code and 1 out of every 337 payments went to <name>", or the "The printer is coded using assembly" while HTML is shown (two opposite ends of the spectrum) really convinced me that the tech was supposed to be so obviously fake to tech people that it allowed them to still be entertained by it.
Compare this to CSI:Cyber which wasn't taking the piss out of IT and simply made stupid, ridiculous things up ("cyber nuke") or Numb3rs which described IRC as "two boats passing in the ocean communicating through the ripples of the water" or "thankfully I speak leet".
NCIS seemed to actually try to either get things right, or make them purposefully outlandish. I respect the writers for that.
I have a soft spot for that film. Great cast, good script, and a surprisingly reasonable take on encryption -- even for today, let alone for 30+ years ago.