Reminds me of David Beazley's talk about being locked in a hidden vault to analyse 1.5 TBytes of C++ source code. No tools or internet connection, but a Python interpreter.
https://pyvideo.org/pycon-us-2014/discovering-python.html
DavidB is generally awesome; that said, I just watched this and it's kind of meh, maybe 5-9 minutes of good stuff. If you haven't seen his other talks, I'd recommend them first, unless you have a particular interest in patent lawsuits and in indexing code for basic patterns via a library of functions (grep/glob/etc). I.e., it's basically using stdlib collections with files to build a vfs grep/stats lib, with Python 2.3, while coding on a desert island, so to speak. His other talks cover a wide range of subjects; they're probably all better imo.
Such an amazing talk !
Someone linked to this on HN a while back, this allowed me to discover David Beazley.
I then got the Python Cookbook, and watched a bunch of other very cool talks. He's an excellent speaker and python wizard!
This is basically what early Gentoo releases were like... A hilariously minimal tool chain and a txt guide to compiling and configuring every piece between basically nothing and a functioning web browser. A typical speed run took a day because compilation times were long. And if you realized late that you'd made a fatal mistake, starting over took a loooong time.
I remember spending a lot of time with that as a teenager. A stage0 (or was it stage1?) installation took about a weekend, I recall. I was a bit surprised recently realizing that there's still a significant Gentoo community, with influx of new users. Didn't really occur to me that the installation experience is completely different these days, but that makes sense.
> Didn't really occur to me that the installation experience is completely different these days, but that makes sense.
I used Gentoo as my main OS for multiple years over a decade ago, installed it a couple of times, and helped with the localization of several documents, but I still remember the installation being basically the same as it is now.
I’m honestly wondering what is meant by this statement?
In the past you used to be able to choose from various stages, which went down to _very low_ level. Now you must start from stage 3, which is already a (slim, but) fully bootstrapped/running installation, which you customize from there.
This was pretty much my experience exactly... I was dual booting, though, so also had a lot of downloaded docs that I could read on the other drive partition... And then reboot to go back to trying to get things running on the gentoo side. And inevitably have to wait to go into the university computer labs on Monday to look for answers in the forums.
Incidentally, I had actually picked gentoo because the forums looked fantastic... My first linux attempt was with suse, but all the "interesting" answers were in German. So next time I tried the one with the liveliest forums. Turns out the forums were lively for some of the wrong reasons (everything was hard), but the people were the right combination of friendly and incredibly helpful and knowledgeable. I learned a huge amount from the process and the people. Though, indeed, the whole from-source distro thing was foolish, in retrospect.
Gentoo will forever be my "uphill through the snow both ways" story.
Not necessarily a myth anymore, strangely. If you apt/rpm install something nowadays, it probably wasn't built with support for newer CPU instructions (AVX, AES-NI, sometimes even SSE4). -march=native is gonna have a much bigger effect now than stage 1 Gentoo installs did in the mid-2000s.
(and, yes, I do remember waiting a full day for KDE to compile)
> -march=native is gonna have a much bigger effect now than stage 1 Gentoo installs did in the mid-2000s.
Actually, probably quite the contrary. All x86-64 chips are required to support SSE2, which lets you use SSE for floating point instead of x87 floating point, which is a big speed win. But the newer extensions are specialized SSE instructions, which generally require manual use to take advantage of; specialized crypto instructions, which definitely require manual use; and AVX instructions, which double the width of the vectors you can use. The first two are not going to be improved by -march; the code that uses them is almost certainly compiled in a way that lets it dynamically use these instructions when available.
As for vectorized code that could use AVX instead, it's dubious how much of an effect it would have, since the biggest improvement in vectorization will be enabled with the 128-bit vectors, with 256-bit vectors offering at most a 2x speedup in the vectorized code, the effect being reduced by some code only being 128-bit-vectorizable (and not receiving any speedup), and also by Amdahl's Law reducing the benefit of further speedups in that code. Furthermore, vectorization tends to be much less relevant in the "integer" code that is typical of most consumer software, outside of a few hot loops that are already manually specified as above.
Most of the SSE3, SSSE3, SSE4.1, and SSE4.2 (there is no SSE5 in any released processor) instructions are not particularly feasible to be used by automatic vectorization, being mostly horizontal vector optimizations or some oddball instructions that are pretty task-specific (hi, PCMPESTRI). You might see them come up in SLP vectorization, but my last experience with LLVM's SLP vectorizer is that it does a poor job of taking advantage of these kinds of instructions anyways.
For hot kernels (say, memcpy), it is definitely the case that many projects have implementations of several different varieties of these, and use the version best suited for your current architecture. See https://sourceware.org/git/?p=glibc.git;a=tree;f=sysdeps/x86... for the different variants of common functions in glibc.
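As a sketch of that runtime-dispatch pattern: ship a baseline and a wide-vector variant of a hot kernel and pick once at startup, the way glibc selects its memcpy. The function names and the AVX2 cutoff here are illustrative; `__builtin_cpu_supports` is a GCC/Clang extension on x86.

```c
#include <stddef.h>

/* Baseline implementation: works on any x86-64 chip (SSE2 is guaranteed). */
static long sum_baseline(const int *v, size_t n)
{
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += v[i];
    return s;
}

/* Same arithmetic; compiled with -mavx2 in its own translation unit, the
   compiler could vectorize this copy with 256-bit registers. */
static long sum_avx2(const int *v, size_t n)
{
    return sum_baseline(v, n);
}

/* Dispatch once based on what the CPU actually supports, instead of
   baking a single ISA level in at compile time with -march. */
long sum_dispatch(const int *v, size_t n)
{
#if defined(__x86_64__) && (defined(__GNUC__) || defined(__clang__))
    if (__builtin_cpu_supports("avx2"))
        return sum_avx2(v, n);
#endif
    return sum_baseline(v, n);
}
```

The point is that a distro binary built this way already uses AVX where it matters, so recompiling the whole system with -march=native buys less than it sounds like.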
I did stage 0 installs a few times. While it took a long time, there was fairly little work on my side. Most of the grunt stuff had been automated.[0] I think the only thing the user had to do was config.
[0] Which is the point of Gentoo, after all. It's what made it a lot better than LFS/BFS. You get everything from source with little effort. 17 years later, I'm still using Gentoo. Nothing like it.
Nah, Slackware install was pretty fast. Although it did install a ton of packages and removing the extra cruft is a pain in the ass without a decent package manager that handles dependencies.
"STEPS Toward the Reinvention of Programming, 2012 Final Report"
> Submitted to the National Science Foundation (NSF) October 2012
> (In random order) Yoshiki Ohshima, Dan Amelang, Ted Kaehler, Bert Freudenberg, Aran Lunzer, Alan Kay, Ian Piumarta, Takashi Yamamiya, Alan Borning, Hesam Samimi, Bret Victor, Kim Rose
I think an important category has to be: no outside servers designed to help with this speedrunning challenge. Without such a rule, one can get an outside computer to do much of the work. One would merely write a small program that connects to the known IP and pulls bootstrap code straight into memory. (Granted, still a fair piece of work, but without such a rule the finish line for the speedrun would become "establish a TCP connection").
EDIT: I wonder if including the Linux kernel + C library is just too much. How minimal could one go with this challenge, yet have it still be fun/doable in a reasonable duration? You start with just MS-DOS on the disk? Or a Forth interpreter? Or maybe you start with a blank disk, but you get to twiddle bits one at a time before you first use it, in the vein of the Altair 8800?
EDIT 2: An even more entertaining idea than some sort of 8800-style toggle switch interface: you start with an actual Altair 8800, then get a few "stepping stone" computers that have just enough hardware compatibility that you can transfer data from one to the next. The final computer is a modern PC with a network connection.
Generally, speedrunning communities (for games and other things) already bar tricks and strats that require setup on an external file/system.
For instance, you can't carry hearts/stamina from one file to another in a Breath of the Wild speedrun, or use wrong warping to instantly go to a location saved on an alternate save file.
There are exceptions in certain scenes (in speedruns for Banjo-Kazooie, a glitch is used on another save file to fix the RNG for a quiz section later in the game), but generally, everything has to be done from an empty file/system with no external resources already setup.
There are different categories for everything. There's getting to the end of the game and defeating the end boss ASAP. For games with a completion percentage, there's speedruns to get to 100% completion rather than just getting to the end. There's categories for not using "warps", and others that allow using "warps" to skip levels.
Tool-Assisted Speedruns (TAS) are a category unto themselves. This involves running the game on an emulator and making a humanly impossible, precise set of inputs, often disassembling the ROM to find bugs to take advantage of. The Arbitrary Code Execution 'stunt' category includes one where someone programmed Super Mario World by picking up items in a precise order (and at precise times) to specify bytes, managing to write Tetris and jmp to it.
Competitions are made up, so we can make up whatever point A and point B is, and the rules along the way. Timing going from a Ubuntu ISO to desktop browser with google.com loaded is one A, B pair, but there's so many different possibilities out there!
Would this rule also invalidate the OP’s approach of getting to the point where you can download wget, then using that to download other programs? wget, and the things you’d download with it, would live on external systems.
This is how I used to live my life. Sort of. I mean early 2000s when I had left home and moved to a far away city for a job in IT. All I had was a stolen 256M RAM laptop with FreeBSD. No internet and no TV at home because I lived in a shitty apartment building.
At work I had all the bandwidth and resources I could use.
So I downloaded all I could during the weekdays and during evenings and weekends I'd just browse manuals and try things on my laptop, in my own little virtual LAN.
Around 1995 I had my own PC, but only my dad's PC had internet access. Every other day I had half an hour of internet to download stuff. Then lots of time playing around on my PC.
That's how I learned web development in the 2000s. I downloaded stuff at school on floppy disks and took them home to inspect them. I had cool gifs, midi files, html files and I had Home Site 4.0 which had a great documentation about HTML and some basic JS.
Sadly, for me in the late 90s, it was learning the ins and outs of how to modify Win95 UI. Turns out a whole lot of those skills were total dead-ends. But I can help you make your Win95 look and feel however you like!
Need your cursor to be animated but change animations depending on what you're doing? You got it! Find that editing .ini files lacks a certain flair? No problem! Prefer a fuchsia screen of death, or a "fortune" style "safe to turn off your computer" message? I got your back.
I recall that the custom animated bootup screen was a quirky hack that had a different variation in each subversion through 95-98.
Also made a point of reskinning every last pixel. I still get a similar feeling tweaking my Linux environment (albeit stronger focus on productivity these days). Some inspiration in /r/unixporn, maybe you can relive that again ;)
Cool times anyway. Windows 95 brought back the fun I was missing so much since when I reluctantly moved from my beloved Amiga 4000 to an anonymous Pentium PC circa 1994 - I had been programming under Windows 3.11 for a while when Commodore went down the drain.
Ditto, plus ROMs. SNES/MD/GB ones could be compressed very well; even the N64 ones could be split with split(1) and merged with cat.
PC games were expensive and except Max Payne, I'd get bored of them very fast.
But the moment I (re)discovered interactive fiction (I had seldom played it under a Spectrum emulator) for the Z-Machine, and Nethack from a distro CD, I was hooked.
I bought the computer, installed Mandrake Linux on it via CD but didn't know how to configure the modem. So I would make notes/print out manuals regarding modem configuration and try them at home daily - until the sweet sweet line noise of connection happened and I was online.
I'd like to suggest "Linux From Scratch" (http://www.linuxfromscratch.org/lfs/) category. It might be easier for beginners, as it has definite start and finish states and instructions. It would be interesting to do a blind run to see how long it takes and later try a "real" speed run to optimize it as much as possible.
Start with some hardware. If you are lucky, you get a board support package with a Linux kernel. Otherwise you need to write some drivers to talk to the hardware. Progressively get it to the point that it will boot up and get on the network. Then get the application running.
Microcontroller development is even worse. When you don't have any space, you may actually end up writing everything in C from scratch.
Or have some drinks with EEs and get them to value software development ease of use: things like using top of the line, or at least midrange MCU with enough flash for debug builds, populating debug connectors on EVT builds etc
In one memorable project the hardware guys grudgingly agreed to waste money and give the system enough resources to run Linux.
But they chose a cheaper ARM CPU without a MMU or hardware floating point. That meant it would not run shared libraries, everything was statically linked. We ran out of space in the Flash, and the project was in crisis. The solution was using a lua interpreter.
Are you affiliated in any way with the Nerves project? I skimmed the home page and couldn't find out what it is exactly. Is it an Elixir framework and related bootstrap code targeting MMU-less ARM Cortex-M microcontrollers? Or an embedded Linux system? Or is this just a REST endpoint somehow?
> I realized a few years later that this was just an exercise in flexing the fact that I could probably bootstrap my way up from this with the C compiler to write a terrible editor, then write a terrible TCP client, find my way out to ftp.gnu.org, get wget, and keep going from there.
I assume by "a terrible TCP client" she means a TCP stack, so you don't have a TCP stack yet, so `/dev/tcp` wouldn't exist. Otherwise just opening a socket and sending the request would do the trick. Or maybe that's what she meant by "TCP client"? But writing an editor for that would be overkill I think.
It is unclear to me whether the C libs include TCP.
If not then that seems like the main part of the work because once you have that then you can download everything else.
Unless it was already in BIOS or some network hardware remote admin Intel thing and there was a way to hack into that to use it. Then you would not have to implement TCP IP.
To me the question is what is the minimal part of TCP/IP that you would need. I guess you also need Ethernet if that's not in the C libs.
To cheat maybe your computer has a modem and there is a BBS - simpler protocol.
> It is unclear to me whether the C libs include TCP.
The exercise states that you have a Linux kernel and there is a network connection. If the kernel does not have TCP, but only offers a raw socket, well, then you have to write down a very simple TCP stack.
But if the kernel does have TCP, the exercise boils down to writing a very simple HTTP client, a very simple DNS resolver and a bare bones XZ decompressor and tar reader. From there I'd grab myself https://alpha.de.repo.voidlinux.org/static/xbps-static-lates... then execute
Accessing the rest of the files is the same: just redirect the nc command to a file.
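That "very simple HTTP client" really is small. A sketch in C, assuming the kernel TCP stack and the libc resolver are available; the host and path are whatever mirror you are bootstrapping from, and the function names are made up:

```c
#include <netdb.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

/* Build a minimal HTTP/1.0 GET request. HTTP/1.0 keeps it simple:
   no chunked encoding, the server closes the connection when done. */
int build_request(char *buf, size_t len, const char *host, const char *path)
{
    return snprintf(buf, len, "GET %s HTTP/1.0\r\nHost: %s\r\n\r\n", path, host);
}

/* Connect to port 80, send the request, dump everything (headers + body)
   to fd 'out'. Strip the headers by hand afterwards. */
int http_fetch(const char *host, const char *path, int out)
{
    struct addrinfo hints = {0}, *res;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, "80", &hints, &res) != 0)   /* libc DNS resolver */
        return -1;
    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) {
        freeaddrinfo(res);
        return -1;
    }
    freeaddrinfo(res);

    char req[512];
    int n = build_request(req, sizeof req, host, path);
    write(fd, req, n);

    char buf[4096];
    ssize_t r;
    while ((r = read(fd, buf, sizeof buf)) > 0)
        write(out, buf, r);
    close(fd);
    return 0;
}
```

If the kernel lacks TCP and you only have raw sockets, this is the part that balloons into writing your own stack.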
If tar is missing, you can surely find a static build somewhere, or compile it. Or try to find an sltar mirror on GitHub ;).
But that busybox has ustar, and look how easy it was with just netcat and Gopher: writing an nc clone (OpenBSD's portable netcat is really easy, although there are other, far more dumbed-down ones) would be the easiest win.
> Or just get a static tar and xz static build, they must be somewhere.
That is, if you can find them in an uncompressed form. If you feel adventurous you can probably coax the compressed data somehow through the bootloader decompressor, or the xz or deflate implementation found in the Linux kernel. Unfortunately these days modules tend to be compressed as well, but it should be possible to convince libdl to open a .ko file. If not, you can always mmap it PROT_READ|PROT_EXEC and follow the symbol table entries yourself.
Busybox is available in binary form from HTTP/FTP/Gopher servers, or who knows which other protocols. Pick your poison. Once you set up a barebones connection to a Gopher server from stdin using a little C socket code, you can just send the commands to fetch the binary, redirect it to a file, and chmod(2) +x it with ease.
Getting it from a Gopher mirror would be even easier; the protocol is barebones. But bash is cheating: you can write an IRC/FTP/Gopher client in seconds and fetch all the info and documentation from there.
You can browse Wikipedia, read an HN mirror, play IF games and even search Gutenberg from Bash, as far as I can see, by just connecting to port 70 and doing simple Gopher queries.
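For reference, the whole client side of Gopher is: connect to port 70, send the selector followed by CRLF, read until the server closes. An empty selector asks for the root menu. A sketch in C, assuming a working socket API (`gopher_fetch` is a made-up name):

```c
#include <netdb.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

/* A Gopher request is just the selector string plus CRLF. */
int gopher_selector(char *buf, size_t len, const char *sel)
{
    return snprintf(buf, len, "%s\r\n", sel);
}

/* Connect to port 70, send the selector, stream the reply to fd 'out'.
   Redirect 'out' to a file and chmod +x it, and you have your binary. */
int gopher_fetch(const char *host, const char *sel, int out)
{
    struct addrinfo hints = {0}, *res;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, "70", &hints, &res) != 0)
        return -1;
    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) {
        freeaddrinfo(res);
        return -1;
    }
    freeaddrinfo(res);

    char req[1024];
    int n = gopher_selector(req, sizeof req, sel);
    write(fd, req, n);

    char buf[4096];
    ssize_t r;
    while ((r = read(fd, buf, sizeof buf)) > 0)
        write(out, buf, r);
    close(fd);
    return 0;
}
```

Compare that with FTP's two-connection dance and it's obvious why Gopher is the bootstrap protocol of choice here.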
That's too easy then.
If the shell is the one from Busybox or ash, the fun begins.
I guess that the task is really underspecified (maybe intentionally to have different speed runs). It makes a huge difference whether you have a kernel with a TCP/IP stack, a C compiler, a C library with the socket API, etc.
The point of the shell example is that once you have some way to easily open a socket, you can bootstrap a more complete system very quickly. You download a more complete root image from your favorite distribution and override / with it, etc.
True. Or as I said, a busybox from a Gopher mirror to an FTP server which mirrors Ibiblio itself. Chmod'ing +x it is dumb easy from C.
OpenBSD's netcat without socks and proxy support can be very barebones and understandable and writable in an hour. With that tool you could fetch busybox in a second.
Writing a shell is easy, but it's already done (d'oh). You could be restricted to a really basic one with no cd, having to reimplement that yourself. A simple "cd" command taking ARGV as the path would be around 10 lines in C.
Reimplementing echo would be the obvious second thing to do, and later, a barebones cat a la plan9/OpenBSD. Finally the 3rd basic tool would be an ed(1) clone without regex support.
ed(1) can work as a simple "more"-like pager, too. Add a readonly flag to argc/argv so it fopens the file with just read permissions, and you have the best setting to start.
ed(1) is much easier to write than a visual editor.
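For what it's worth, the core of a plan9/OpenBSD-style cat is just a read/write loop. A sketch (the real tool would also loop over argv and open(2) each file; `cat_fd` is a made-up name):

```c
#include <unistd.h>

/* Shovel bytes from fd 'in' to fd 'out' until EOF.
   Returns total bytes copied, or -1 on error. */
long cat_fd(int in, int out)
{
    char buf[4096];
    long total = 0;
    ssize_t n;
    while ((n = read(in, buf, sizeof buf)) > 0) {
        ssize_t off = 0;
        while (off < n) {               /* write(2) may be partial */
            ssize_t w = write(out, buf + off, n - off);
            if (w < 0)
                return -1;
            off += w;
        }
        total += n;
    }
    return n < 0 ? -1 : total;
}
```

The partial-write loop is the only subtlety; everything else is boilerplate around open(2).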
Anything TCP/IP-stack related would be hard, but if the libc/kernel has a basic implementation bundled, I'd write a gopher client and declare the problem solved, as I can fetch everything from there.
Gopher is much easier to implement than banging the FTP ports. Also you can do a barebones IRC client with few lines. Nothing too complex, but usable enough.
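The IRC client really is a few lines once you have a socket, because the protocol is plain text lines over TCP (port 6667, or 6697 with TLS): send NICK and USER once, then JOIN and PRIVMSG. A sketch of just the message formatting (the nick and channel are examples, and the helper names are made up):

```c
#include <stdio.h>

/* Format the one-time registration burst sent right after connecting.
   Real clients also answer "PING <token>" lines with "PONG <token>". */
int irc_register(char *buf, size_t len, const char *nick)
{
    return snprintf(buf, len,
                    "NICK %s\r\n"
                    "USER %s 0 * :%s\r\n",
                    nick, nick, nick);
}

/* Format a channel join; everything after this is PRIVMSG lines. */
int irc_join(char *buf, size_t len, const char *chan)
{
    return snprintf(buf, len, "JOIN %s\r\n", chan);
}
```

The socket side is the same handful of lines as any other TCP client, which is why nc plus a FIFO already gets you a usable client.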
Or better: write a netcat clone (https://git.2f30.org/openbsd-nc/), connect to Gopher, fetch the specs, write a dumb Gopher client (or fetch sacc(1); you can compile it without ncurses by editing the Makefile), and connect to the Gopherpedia/Gutenberg Gopher proxies to fetch all the documentation.
But the netcat clone combined with the ed editor could serve as a basic IRC client with FIFO files, and also as a basic Gopher client if you don't want to fetch sacc(1). Once you get connected to an IRC channel to seek help, you'll have tons of information, along with Gopher.
Also it's the most Unix-y way to solve a problem, by far.
Note: The “cd” command must be a built-in command in the shell, since it must call chdir(2) inside the shell process; it must affect the “current directory” internal state of the shell process. Therefore, “cd” cannot be a separate external binary.
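A minimal illustration of that point, sketched as the loop body of a tiny bootstrap shell (`run_line` is a hypothetical name; everything here is an assumption about what such a shell would look like):

```c
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* Parse one command line and run it. "cd" is handled in-process, because
   chdir(2) only affects the calling process: a forked child changing its
   own cwd would have no effect on the shell. */
int run_line(char *line)
{
    char *argv[16];
    int argc = 0;
    for (char *tok = strtok(line, " \t\n"); tok && argc < 15;
         tok = strtok(NULL, " \t\n"))
        argv[argc++] = tok;
    argv[argc] = NULL;
    if (argc == 0)
        return 0;

    if (strcmp(argv[0], "cd") == 0)          /* builtin: this process */
        return chdir(argc > 1 ? argv[1] : "/") == 0 ? 0 : 1;

    pid_t pid = fork();                      /* everything else: a child */
    if (pid == 0) {
        execvp(argv[0], argv);
        _exit(127);
    }
    int status;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
}
```

Wrap that in a read loop over stdin and you have the "around 10 lines" bootstrap shell mentioned above, give or take.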
You could sort of do it via Bernstein chaining. If we call this substitute command 'notcd' then you'd invoke it like `exec notcd /some/path sh` and then continue your work in the subshell.
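A sketch of what that `notcd` could look like (the name and the argv layout `{dir, cmd, args..., NULL}` are assumptions from the comment above):

```c
#include <stdio.h>
#include <unistd.h>

/* Bernstein chaining in miniature: change directory, then become the
   next command, so `notcd /some/path sh` leaves you in a shell whose
   working directory has changed. */
int notcd(char **argv)
{
    if (!argv[0] || !argv[1])
        return 2;
    if (chdir(argv[0]) != 0) {
        perror("chdir");
        return 1;
    }
    execvp(argv[1], &argv[1]);   /* only returns on failure */
    perror("execvp");
    return 127;
}
```

Because exec replaces the process image rather than forking, the chdir survives into the chained command, which is the whole trick.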
The shell would then have to also have a mechanism to pass all local state such as command history, local non-exported variables (including shell functions), open file descriptors, currently running jobs, etc. to the subshell.
We're talking about bootstrapping to more usable tools, not about whether this hack fully replaces the standard feature. 'cd' certainly comes before command history, for example.
Incidentally you could take the same approach to some other shell built-ins, like redirection. Instead of `cat foo >outfile`, define a program named '>' and then say `> outfile cat foo`. (Assuming the initial bare-bones shell parses '>' as just another ordinary character, not special.) Use these until you've written a more featureful shell.
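A sketch of such a '>' program, under the same assumptions (open the target file, overlay it onto stdout, then exec the rest of the command line; `redirect_exec` is a made-up name):

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Invoked as `> outfile cat foo`, so after the program name, argv is
   {outfile, cmd, args..., NULL}. */
int redirect_exec(char **argv)
{
    if (!argv[0] || !argv[1])
        return 2;
    int fd = open(argv[0], O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    dup2(fd, STDOUT_FILENO);     /* stdout now points at the file */
    close(fd);
    execvp(argv[1], &argv[1]);   /* exec'd command inherits the fd table */
    perror("execvp");
    return 127;
}
```

It works because open file descriptors survive exec, so the chained command writes to the file without knowing anything about redirection.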
It might be interesting to figure out a design for a shell where this decentralized approach works well and does not feel like a kludge -- I imagine you'd have to change some deeper aspects of the OS design to really make it work.
There are a couple of problems with that design. Firstly, if the program crashes for any reason, you don’t have a shell to fall back on, unless you run the debugger as your shell, like ITS did back in the 1960s. Secondly, you can’t have more than one job running at the same time, or even any suspended jobs. And even ITS (again) has subjobs.
If your shell crashes, same problem. And the shell is a bigger program. This issue seems pretty orthogonal. Again, I was never saying this is a better way to design a shell for everyday use, unless maybe with a lot of complementary design decisions changed to go with it.
> Secondly
Why couldn't you spawn jobs like `& do-the-job` in this style? Admittedly I don't remember much about Unix job handling.
You're asking on Hacker News if people still use IRC? In this same discussion thread where people are talking about writing Gopher clients?
Yes, it's still in very wide use. The numbers might be declining as greybeards retire and noobs pick Slack instead, but there's still plenty of us out here.
It's the default method for technical channels such as programming or OS projects. Please, use IRC with TLS if you can.
Also, Usenet is still good for Slackware and C users; the level of discussion there is high. comp.lang.c.moderated and alt.os.linux.slackware are really good. Also the groups for Interactive Fiction and Nethack, though I can't remember them. rec.games.roguelike.nethack, maybe.
Freenode is where I turn when I'm completely stumped by a tough problem. Some helpful person usually points me in the right way quickly. Sometimes I can even point someone in the right direction myself.
While installing Arch can be a challenge for a lot of different use-cases, bootstrapping everything from the level described in the article is a little harder.
> ... just install Arch Linux to a working desktop ...
How do you do that when you have:
> ... a box that has a couple of hard drives and a working network connection. HD #1 is blank. HD #2 has ... bootloader, kernel, C library and compiler, that sort of thing. There's a network connection of some sort, and that's about it.
> There are no editors and nothing more advanced than 'cat' to read files. You don't have jed, joe, emacs, pico, vi, or ed (eat flaming death). Don't even think about X. telnet, nc, ftp, ncftp, lftp, wget, curl, lynx, links? Luxury! Gone. Perl, Python and Ruby? Nope.
That wasn't clear to me, but in the light of your comment, I see that it's plausible. Thank you.
It always throws me when someone suggests an exercise, and someone, in reply, says, "Or instead, do this other thing."
I'm oddly reminded of when we were recruiting and asked people to write some code to solve a specific task and bring it to the interview for discussion. Someone said "This spec is obvious nonsense" and proceeded to write a completely different spec, implement that, and get it horribly wrong.
I've thought about a variant of this with Baking Pi [1].
That takes you through assembly to the point of having a Raspberry Pi with a minimal terminal (with just an external USB driver the author wrote).
I think it would be interesting to see how quickly you could get from there to a useful system, say one where you could download the USB driver from github and compile it. But I definitely don't know enough to do that.
This is a very cool idea. I have thought similarly, but for a "create current civilization from scratch" game (edit: this is a real-life challenge, not on a computer, and maybe not even as a competition): Start in an empty open area, with nothing but nature and the low-tech clothing you wear and means of communication with outside. The only things you can exchange with outside are: any information at all (so, maybe use a smartphone but only to read/research, or a "border" for q&a with anyone), and "traded" goods from similarly-managed areas. Find and record the shortest (reasonable? by some definition?) path to building a smartphone (or such) with its supporting infrastructure.
> We've done our best to keep Lindsay "alive" by including his original catalog descriptions. Due to the large image size, they are featured on the individual product page. (If viewing a list, simply click the title link or "more info" link for full details...)
> Whether you're a former Lindsay customer or new to these rare publications, Your Old Time Bookstore promises fascinating and educational material for your enjoyment!
Anyway, that's where you get the manuals to rebuild your tech stack from the stone age on up.
still reminds me of the game. gather wood. start a fire. gather wood. build a shelter. gather wood. build some tools. gather wood.
i'd suggest watching lots of YouTube videos before embarking on this in real life though. my "i must be crazy" idea is living in a cabin in the mountains during winter. given the fall to stock up on supplies.
Wholly agreed. I have no current ambition to actually do it in the foreseeable future. But if someone does I would love to hear about it. And that they do it safely & wisely, with honesty and the Golden Rule etc etc :) Maybe we could learn some good things from it, and it seems almost as interesting to me as space travel.
I've spent many hours doing similar things to this--my most recent was building Virtualbox VMs of the more interesting abandoned and ancient OSs from WinWorldPC. So far I've got working versions of OpenSTEP, BeOS, SyllableOS, Plan9, Solaris, ReactOS, Memphis/Neptune/Chicago, OS/2 Warp and eComStation, and another dozen or so. When I say working, I mean bootable with a network connection of some type, though mine are usually a little higher up the stack than raw UDP.
Thanks for a great read, love ideas like this if for nothing else but to keep one's nerdiness and thirst for obscure trivia alive.
:D Actually, I thought of Gentoo when reading the post. LFS is probably closer to the task at hand, but the stage 1 Gentoo installations aren't that far either.
That’s because the problem is hard and deep, and most open source contributors work for free and are shallow, working to satisfy their egos and need for intellectual stimulation. Paid expertise is needed here.
Imagine if all doctors and surgeons were volunteers; that’s the situation you have now with a lot of OSS.
Given the large majority of commits to the Linux kernel are done by people employed by companies to work on it, your starting premise couldn't be further from the truth, let alone the conclusions you draw from it.
Given that they said most open source contributors, I think they are right. Most contributors are not contributing to the Linux kernel. Most are contributing to JavaScript packages (https://githut.info/).
I think that's being somewhat generous of an interpretation given the thread of the conversation, which is specifically about drivers. For it to be talking about Open Source developers in general would be a serious pivot.
It's a concept that basically boils down to "people would not be smart enough to understand the code unaided". This is plainly wrong. I speak from experience of diving into some horrible code base.
It also assumes that reproducing the functionality from scratch would be hard. Looking at all the reimplementations of A, where A is one of unix, X, web browser, network stack, SQL database and other complex technologies, that is also clearly not true.
I also don't see bus factor the same way you do. People can be smart enough to understand the code, but unless someone actually goes and actively takes up code maintenance the project will slowly fade and disappear (for example the synaptics driver that libinput replaced https://github.com/freedesktop/xorg-xf86-input-synaptics/com...).
Disagree. Learning a codebase from scratch is tricky; it's better if you have someone to teach you. So if the one contributor gets bus'd, then it'll take time (=money) to get up to speed.
I wonder if any of the commercial distro developers care enough about Linux on the desktop to fix this. Or, does Chromium OS use libinput? If so, then Google could fix this.
What's wrong with the synaptics driver? It's the first thing I installed over stock Ubuntu, to make it even tolerable. Whereas with libinput, after only minutes of use, I noticed I actively avoid using the touchpad, subconsciously bracing for pain and frustration in advance. Is there a reason that libinput is so horrible? I'm not even sure if it's only gnome's/tweak tool's anemic settings dialog, or fundamental lack of kinetic scroll and sensitivity params in libinput itself. It's completely unusable for me.
> The xf86-input-evdev driver is in maintenance mode at this point. The last commit was in May 2018 and, since the 2.10.0 release four years ago, there have been a total of 19 commits. It is still shipped in RHEL 8 in order to support "crazy devices" that don't work otherwise. Similarly, xf86-input-synaptics had a 1.9.0 release in 2016 and has had nine commits since. It is effectively dead and all touchpads should be working with libinput at this point. Since libinput took over from xf86-input-synaptics three years ago, no one has stepped up to say they want to continue maintaining it, Hutterer said.
> In the old synaptics driver, we added options whenever something new came up and we tried to make those options generic. This was a big mistake. The driver now has over 70 configuration options resulting in a test matrix with a googolplex of combinations. In other words, it's completely untestable. To make a device work users often have to find the right combination of options from somewhere, write out a root-owned config file and then hope this works. Why do we still think this is acceptable? Even worse: some options are very specific to hardware but still spread in user forum examples like an STD during spring break.
Meh, libinput works fine for me. I suspect it's pretty hardware-dependent.
From reading around, most of the distros started deprecating synaptics because it wasn't actively being worked on, didn't work with Wayland, required root for a bunch of stuff unnecessarily etc.
My laptop has a Synaptics trackpad and came with Windows pre-installed. Trackpad acceleration was really comfortable on Windows. It all changed when I installed Linux. Tried adjusting the Synaptics driver settings but couldn't get it to match the Windows experience.
So what causes this difference? Is the Windows driver just better than the Linux driver?
Haven't had a new laptop in a while, but as far as I'm aware, most touchpads are not Precision Touchpads and instead continue to emulate stuff in the driver without Windows knowing, so you're still at the mercy of the (usually shoddy) driver implementation.
All Windows laptops are now required to have Precision touchpads and for older laptops, you can simply install the precision driver onto them and they work the same way.
As someone who can't be bothered to pay the Apple tax, what does the Macbook trackpad do that others don't? My experience with different mouse acceleration configs (Windows, Synaptics, Linux) has only been negative.
I can answer this a little, for sensible reasons (the team I run at new job was entirely on Macs) I gave in and chose the 16" Macbook Pro over the Thinkpad/Fedora I'd have defaulted to normally (since my personal laptop is a nicely specced T series I still use that for personal stuff at home).
Simply put, the touchpad on the 16" Mac feels like it's from the future: recognition is perfect, multi-finger gestures are consistent and always registered, so you can flip around between workspaces, and after a day or two it becomes so ingrained that it gets out of the way and you stop even thinking about it. It becomes fluent in a way other touchpads don't.
On a recent Thinkpad with Linux I just use the TrackPoint, I've never gotten the touchpad to be more than merely acceptable.
Frankly I still prefer working on my Thinkpad at home for my own stuff, but the touchpad on the Mac has entirely ruined every other touchpad I interact with.
Generally I like the Macbook, and as someone who hadn't ever touched OSX until a few months ago, it's a pleasant OS for development. There isn't anything in there I really hate; it mostly feels like a Linux distro with a nice DE/UI.
Oh and I really like iTerm2, that's a really nice piece of software.
Nothing. macOS simply implements touchpad gestures in a continuous way rather than as binary triggers, does it consistently across applications (since most of it is handled by the toolkit), and comes with sane defaults that are harder to accidentally misconfigure than on other systems. Those things add up to the perceived user experience, but there's no real reason beyond a lack of software polish that it couldn't be like that on other platforms as well. GNU/Linux is slowly getting there, but there's still plenty of work left across toolkits and the input stack.
Yeah, "nothing" is a really poor term to summarize "Apple made fundamentally superior choices in terms of hardware development, software integration and general OS input architecture in the past 15 years".
I use both Windows and Fedora enthusiastically on desktop machines, but recognize that even high-end Windows laptops have trouble matching my 2012 Macbook in terms of input. Apple has been miles ahead of the competition in this specific regard for years. I would love to use Linux on a laptop; the combination of libinput and modern Qt and Gtk is helping here, but they're still a ways off [1].
Many people who have only used Linux and Windows on laptops for years assume that a touchpad is only there as an emergency. That you cannot seriously use a laptop without a peripheral.
On Macbooks, that's simply not the case - I never use a mouse with my Macbook. It's easy and probably deserved to discount Macbooks in other areas (even in terms of e.g. keyboard input, looking at Apple's 2016-2019 lineup), but you don't know how good an input tool a touchpad can be unless you've spent some time familiarizing yourself with a Mac touchpad.
Apple sells desktop versions of the touchpad [2]. It might be difficult to believe, but they do so unironically. I bought one for when I connect my Macbook to a big screen.
[1]: I found that Ubuntu has a better laptop input experience when running in a VMWare Fusion than running natively on a Macbook, despite all hardware working out of the box...
IMO the best way to control Apple devices is KB + mouse + magic trackpad. Mouse is of course superior for precise movement such as highlighting text, but the trackpad itself is worth it just for the gestures.
I have a mouse and keyboard attached to my 2015 MBP....and I still reach for that track pad. I really need to replace my mouse with their Magic Trackpad. It's just that good.
Yes, they do; I'm using one. Aside from the haptic feedback, which lets it stay really quiet, it's nothing special: just a regular touchpad. I'm also using the internal one in my Dell XPS, and if it were slightly bigger it would be just as good. In the past I've used a Lenovo Yoga one and it was just as good too.
I'm using touchpads exclusively on GNU/Linux for years now. It's a myth that Apple touchpads are "magical" - it's just macOS which comes with good defaults, but if you put some effort in it you can get it just as comfortable on GNU/Linux as on macOS (sans continuous gestures, I have to admit that's the one thing I envy them).
> even high-end Windows laptops have trouble matching my 2012 Macbook in terms of input
That's only because vendor drivers on Windows are notoriously bad. My girlfriend is using a mid-2012 13" MBP and its touchpad is rather meh. Not bad, but I've used much better ones. Current Macbook touchpads are better than the 2012 ones though.
>Many people who have only used Linux and Windows on laptops for years assume that a touchpad is only there as an emergency.
I never use mice, only trackpads, on Windows laptops and Windows laptops from 2013 with Linux on them, and it's fine. c: And by fine I mean bad, but that's just laptops, as an ontological concept, to me.
Well that's what I'm trying to say :) Touchpads on Windows and Linux are "fine" at best (high end Dells are good enough to get out of the way I've heard) and "terrible" at worst. On Macbooks they're consistently great, and can make it legitimately enjoyable to use a laptop as a, uh, laptop.
Worth noting that there are still usage scenarios where a mouse beats a touchpad (i.e. image editing, pointer centric games), but overall touchpads don't have to be the inferior option if your computer is built for it.
My Huawei Matebook actually has an excellent touchpad interface as far as I can tell. The only real drawback is excessive pinch sensitivity, which seems to be application-specific (happens all the time in Firefox, less in Office). Scrolling and selecting are continuous. Minesweeper is still hard to right-click 100% of the time, but that didn't work well on OS X either. In fairness, my most recent Mac was from 2013.
Unfortunately Huawei's reputation is mud these days thanks to the Chinese government. They made pretty nice laptops, though the aesthetics are obviously imitating somebody.
Not when you want speed and precision. The area of a touchpad is far greater, and it shows when you want to draw something or move the cursor between the edges of the screen. It's the same as playing a first-person shooter on a console controller vs. a mouse.
That's not even talking about the haptic feedback and pressure sensitivity (fun for drawing, handy for Quick Look and dozens of other things). Precision? Oh yes. Moving per pixel is bliss.
It might sound elitist, but if someone tells me they have a better input interface than one of Apple's trackpads, my mind goes somewhere else. I even replaced the mouse at work with their trackpad.
I remember when I was big into CSGO and wanted to play a few matches on the go one time. I still ranked pretty high, with a TRACKPAD.
> It might sound elitist, but if someone tells me they have a better input interface than one of Apple's trackpads, my mind goes somewhere else.
I don't think it sounds elitist, but just presumptuous. I don't actually think a TrackPoint is superior in every case, and I think Apple has done a great job for a trackpad, but at the same time I'd hate to go back to a trackpad for general use.
Most touchpads have just as good precision (well, unless you count the really crappy low-end ones, of course). Haptic feedback is a nice thing, and it's the reason I own a Magic Trackpad 2 (I even hacked the kernel driver to make it quieter than macOS allows), but that's pretty much its only differentiating factor.
My old, correctly calibrated trackpoint (on Linux, even) was better at precise movement than any trackpad I've used since (including all the major vendors).
Macbooks do literally nothing, it's macOS that does. And with a bit of effort and understanding of the input stack, you can get it pretty close on GNU/Linux anyway - at least close enough for really comfortable usage. I haven't been using mice for years now and I'm not a Macbook user.
You’re missing the forest for the trees. Nobody saying this can’t be replicated. But Apple does it out of the box, and better than any alternative I’ve tried. That’s why people buy Macs. Time is money.
...but that's a completely different argument. I was answering the question of what the hardware does differently, and the correct answer is: nothing (well, aside from the haptic feedback, to be exact, but unless you care about silent clicks it's just a gimmick).
Exactly. I like how Professor Galloway (professor at NYU, investor, podcaster, etc...) puts it - successful businesses create time machines. People will pay large sums of money to gain more time.
I don’t have a clue about _why_, but I know from using macOS and Linux on mbp that whereas the touchpad basically works in macOS, it totally doesn’t function in Linux! :(
In Linux, wrists keep triggering touch events as you type. Screens start scrolling, focus goes to random things, and everyone swears. You end up having to hold your hands right over the keyboard in a very unnatural way to avoid accidental mouse interaction.
I’ve had this problem on my current hp laptop that replaced my mbp too. It’s a Linux concepts problem. Linux simply doesn’t cope with hands near the touchpad.
Lots of coders I know use Linux on laptops. They swap recipes for how to disable the touchpad completely.
The trackpad in my laptop (Synaptics branded) has never had trouble like that in my experience.
Then again, I'm pretty sure Synaptics has wrist detection built into the firmware of the trackpad itself. I suppose Apple and Windows do this in software for models that don't support this and that's why it's going wrong.
I'm not saying that people don't use Linux on laptops by the way, I meant that the overlap between people who run Linux and use a MBP isn't very large (because Linux on Apple can be anything from annoying to install to near impossible).
But, as far as I know, Apple still releases the source code for Darwin, right? So shouldn't it be possible to port the trackpad functionality from there? Or did Apple hide the trackpad driver in a closed source blob?
> I'm not saying that people don't use Linux on laptops by the way, I meant that the overlap between people who run Linux and use a MBP isn't very large (because Linux on Apple can be anything from annoying to install to near impossible).
Maybe I'm in the tiny minority here, but I much prefer the way touchpad handling on Linux behaves compared to both Windows and macOS. At least on sway, it feels more 'accurate' than either of those two - you move your finger and the cursor actually goes to where you want it.
> But, as far as I know, Apple still releases the source code for Darwin, right? So shouldn't it be possible to port the trackpad functionality from there?
Unfortunately not! I worked on reverse engineering a driver for the SPI touchpad found in newer (but not too new) Macbooks and based most of it on packet captures from Windows' Event Log [1]
> In Linux, wrists keep triggering touch events as you type. Screens start scrolling, focus goes to random things, and everyone swears. You end up having to hold your hands right over the keyboard in a very unnatural way to avoid accidental mouse interaction.
I have this problem as well. The Synaptics driver seems to have some palm detection logic but it does nothing. Perhaps my trackpad has no support for it.
> They swap recipes for how to disable the touchpad completely.
Sadly the surface I usually place my laptop on doesn't have space for a mouse...
> In Linux, wrists keep triggering touch events as you type. Screens start scrolling, focus goes to random things, and everyone swears. You end up having to hold your hands right over the keyboard in a very unnatural way to avoid accidental mouse interaction. […] Linux simply doesn’t cope with hands near the touchpad.
Enable the option to sleep the touchpad when the user starts typing.
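On X11 with libinput, for example, that's a single option in an Xorg InputClass snippet. This is only a sketch: the file path and identifier below are conventions, not requirements, and desktop environments like GNOME and KDE expose the same toggle in their settings UIs.

```
# /etc/X11/xorg.conf.d/30-touchpad.conf (any name in that directory works)
Section "InputClass"
    Identifier "touchpad"
    MatchIsTouchpad "on"
    Driver "libinput"
    Option "DisableWhileTyping" "on"
EndSection
```

With the old synaptics driver, running `syndaemon -i 0.5 -d -t -K` achieved roughly the same effect.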
I recently discovered, after reinstalling Linux Mint on my Dell XPS (which shipped with Ubuntu), that this is not an option out of the box. On my previous install I used to have that option. Now I'm not sure where it's gone, so I (very sadly) only have the option to "disable touchpad when mouse is connected." Please help!
I have exactly the same laptop and don't have any issue with the touchpad either and never really understood what's wrong with non Mac touchpad. Maybe it's just a cheap laptop vs expensive laptop thing than Mac vs Linux.
At least KDE offers the option to disable the touchpad while typing. I'd expect other desktop environments to do the same. Isn't that what you're looking for?
Hear hear. I'm on a Dell XPS 9370 with Ubuntu and I've been having the same issues since the beginning. Yes, I also learned to hover my hands further above the keyboard to compensate. It's not very ergonomic.
It's a bit better when I disable tap-to-click, but then I have to physically click the touchpad, which is harder on the fingers as well!
To make stuff worse, Elantech/Synaptics feature-gate their drivers... I have had laptops where I put a trackpad driver from some totally different laptop and suddenly I'd have a lot more gestures.
But no matter what, two finger scrolling and zooming won't ever be as smooth and "step free" under Windows/Linux... once experienced, cannot be missed.
I'm still using Firefox on XWayland (because of subtle bugs in the Wayland backend), but you can also get continuous scrolling on X using XInput2 (set the environment variable MOZ_USE_XINPUT2=1).
Smooth scrolling isn't impossible, but it does need to be specially implemented. Firefox does so if you set the environment variable MOZ_USE_XINPUT2 to 1. Of course, on Mac OS X, it's built-in to the UI toolkit, so every application has it.
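Concretely, assuming an X11 session (the browser binary name may differ per distro), it's just an environment variable set before Firefox starts:

```shell
# Ask Firefox to use XInput2, which delivers pixel-precise touchpad
# scroll deltas instead of coarse wheel clicks on X11.
export MOZ_USE_XINPUT2=1
# firefox &   # then launch Firefox from this same shell
```

To make it permanent, the same line can go in `~/.profile` or a desktop-entry wrapper.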
You need to experience it to understand just how much more calibrated it is to the human experience, feels natural, like air. :) How about visiting a store that sells macbooks?
I went to an Apple store to try just this and came away unimpressed. Apart from the much better integration of gestures with the OS, I didn't feel anything special about the touchpad or its acceleration curve. In fact the touchpad was quite annoying to use because there was a kind of block on movement after a click that was far too long, making it difficult to click several different things quickly in a row. Overall I preferred my own laptop using libinput at the time :)
> kind of block on movement after a click that was far too long
There are two modes of operation, what you dislike is the default; the other is called tap-to-click, which doesn’t give you haptic feedback and you can click however fast you want. Opinion is divided on this, with many diehards swearing by the haptic feedback. I prefer tap-to-click.
Tap-to-click is slower though, if I understand correctly. Imagine clicking three adjacent buttons in a row as fast as possible. If you tap each one you have to lift your finger off the trackpad 5 times.
In your situation, you have to tap, lift finger, place finger to move, lift finger, tap, etc.
In the other situation, you have to place finger, press lightly, move finger, press lightly, move finger, etc.
Using tap-to-click can't be faster than the other situation.
That said, I use both. Sometimes I just tap, sometimes I press, sometimes I press harder.
I guess if it's a "feel" thing it should be possible to replicate it with lots of tweaking, but I don't think many people who run Linux on laptops use the Apple trackpad much so the overlap between the people who want this and the people who care to write such drivers probably isn't big enough to come up with a Macbook style driver.
If you're thinking of tuning as in adjusting sensitivity and acceleration, then you're quite off; the experience goes way beyond that.
For example, rest two fingers and press to click: that's a right click. But rest two fingers and click with the thumb: that's still a right click, not a three-finger click. Same for two-finger scrolling. Those are a couple of simple examples, but there are a lot of subtle corner cases like that all around the experience, and they're not just about tuning some values.
Yeah, I’m not sure why there’s such a massive contingent that admit to having never owned an Apple product (some snarky comment about the Apple tax) and dismiss the hordes of serious engineers who use these products. Can you get 90% of the way there with your favorite flavor of Linux? Yup. Are there some things that Linux does far better? Definitely (looking at you containers). But it’s that 10% sugar that make most (not all) people more productive. But if I can make my expensive engineers more productive by spending a few hundred $ premium on the single most important piece of kit I’m giving them, well that’s insanely good ROI.
Because closed source software is worse for everyone. Regardless of any other feature, being closed source makes Apple products a non-starter for a sizeable subset of the population.
"sizeable subset of the population" is a bit of an overstatement. Most people are fine with running Windows or macOS. There's just a vocal minority that wants open source stuff.
I'm a huge linux fan and run both Ubuntu and Arch Linux on a daily basis, but I used to be on OSX and Apple hardware between something like 2012 -> 2015 before switching back to Ubuntu/Arch. The only thing Apple does better is touchpad input.
And I also cannot put the difference into words, but if you have Ubuntu (even with synaptics drivers) and MacBook side by side, there is a noticeable difference in how they behave and feel but even by having them side-by-side, I can't consciously tell the difference, even if I feel it.
One of the main differences is that MacOS has visibly lower latency when scrolling / moving the mouse pointer. This is immediately visible when you scroll the same webpage on Safari/Mac vs Firefox or Chrome on Linux.
I don't know exactly why this is happening, and there is no in-browser option that can overcome this difference, e.g. disabling smooth scrolling and enabling XInput2 helps but does not overcome the difference. It seems to be something more fundamental in how the drivers work and deliver input to the user interface.
Interesting. I just tried this, comparing my laptop (Ubuntu) and desktop (Arch) with my Macbook, and the latency actually seems lower on Linux than OSX; OSX has some smoothing that makes things look/feel slower. Scrolling is clearly faster on my laptop and desktop than on the Macbook (I didn't even have to record a video to see the difference). I tried Firefox on both Linuxes and Safari/Chrome/Firefox on the Macbook with the same result.
I'm switching between mbp and xps15 and honestly couldn't describe the difference. They're slightly different, but I wouldn't call either one "better".
I switch regularly between Windows, Linux, and Mac systems with roughly equal time between each. I've never found anything magical about the trackpad, though everyone told me I would. It's a good trackpad, that's about it. Maybe I notice it less because I tend to primarily use the keyboard and avoid the mouse where possible.
Trying to use a Logitech MX Master 2S on Ubuntu 16: it detects the device over Bluetooth, but not as a mouse device. So it doesn't move; it's dead. Windows and Mac work just fine. Apparently Logitech can't be bothered to ship drivers for Linux, or Linux doesn't know Bluetooth well enough to handle all mice.
Use the Unifying receiver, that's what I've been doing for 10 years with multiple devices. It costs like 15 bucks, it's tiny and works flawlessly for any OS/device combo I throw at it, including the Ubuntu 16 + MX Master 2S + K800 that I'm using right now. There's an app called [Solaar](https://github.com/pwr-Solaar/Solaar) that helps with the pairing and such but I don't think it's strictly necessary (can't remember the last time I ran it).
Yes I can imagine it to be annoying for some use cases. My Carbon has, among others, 2 USB-A ports, 2 USB-C ports plus a trackpoint and a pretty good trackpad. My older Thinkpads have more USB-A ports than I'll ever need. So the unifying receivers are only used when sitting at a desk, where they're permanently plugged to USB-C hubs.
It's been a while since I tried the default setup, but I believe it is set up so mouse buttons are simulated by clicking on the bottom left and bottom right of the touchpad. Additionally the touchpad sometimes causes stray input while your hands are above it typing.
Seems to be like the obvious initial way to up the ante would be to get rid of the compiler. How would you go about bootstrapping your way out if all you had was the Linux kernel, glibc, and the Bourne shell?
It starts with a system without firmware and an OS, just a <=1KB hex monitor that assembles special hex0 programs. In the next stage it bootstraps a more comfortable assembler. Then it uses this assembler to bootstrap a small C compiler written in assembly. And then it uses this C compiler to compile a C compiler written in C. All the way up to gcc.
You basically just have to know the tricks of inputting binary data through the shell. Learn how the ASCII table works, what the Ctrl and Shift keys do (clearing/toggling bits 5-7, in particular), and how the Alt key sets the 8th bit.
You could even go a little further. The printf command should be a built-in in just about any shell, so even lacking the `cat` program, you can output arbitrary data to a file.
Almost seems cheating to just hand-craft an ELF executable from the keyboard, but who's making the rules?
If you have a shell or standard POSIX tools you'll have printf, which supports hex and octal formatting, and often C modifiers, so you can encode it as one long-ass integer and save keystrokes.
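As a sketch (assuming a POSIX shell where printf is a builtin), octal escapes are enough to put arbitrary bytes into a file, here the four ELF magic bytes, without cat or an editor:

```shell
# Emit the 4-byte ELF magic (0x7f 'E' 'L' 'F') using only printf escapes.
printf '\177ELF' > header.bin
```

From there, a longer binary is just more escapes: tedious to type, but entirely doable.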
Minimal Linux Live has a nice small script where you see what is being set up. It does include Busybox though, which might already have too many tools.
> How would you go about bootstrapping your way out if all you had was the Linux kernel, glibc, and the Bourne shell?
You're fully aware that you're describing a complete programming environment and then some, correct?
The Bourne shell (assuming you mean the original, not the GNU remake, though GNU bash would make this practically effortless as well, if for a different reason) is fantastic for almost any purpose. You can do virtually anything you'd like with it.
Well yes, I know it can be done, but the point is that it's considerably harder than the original challenge because you can't just output a simple program using the Linux kernel APIs or libc without having a compiler.
These sound like things that used to be regular part of install/setup on BSD and Linux. It's so much nicer now not having to setup the network, GUI or disks. Nevermind the week of downloading all the floppy images.
A basic shell that just forks and execs binaries from $PATH and nothing more. Maybe, just maybe, file redirection. You won't have echo(1) or cat(1); you'll have to reimplement them. echo(1) is damn easy.
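As a sketch of that, assuming the shell at least has the POSIX printf and read builtins (a truly bare fork/exec-only shell would not even have those), both tools fall out in a few lines:

```shell
# echo: print the arguments joined by spaces, plus a trailing newline.
echo() { printf '%s\n' "$*"; }

# cat: copy each named file to stdout, one line at a time.
# (Note: this version adds a newline to a file that lacked a final one.)
cat() {
    for f in "$@"; do
        while IFS= read -r line || [ -n "$line" ]; do
            printf '%s\n' "$line"
        done < "$f"
    done
}
```

Binary-safe handling is another matter, since `read` mangles NUL bytes, but for shuffling text around these are enough.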
Better would be to have a scheduled execution of some bits on the disk, every 5 minutes or so, and you are only allowed to write to disk, not execute anything on your own.
To keep it from being a pure memorization contest, speedrunning Linux could be broken down into categories so that comparisons stay fair. Examples would be no-memorization or no-networking runs. The current speedrun community does this already.
If you have echo and the source code having an ed clone would be very convenient as it would double as a pager.
Also, it would be much faster to edit the files.
> We had to bootstrap from nothing originally too.
I doubt that the original Unix was written without an editor, using only cat. Of course some earlier interactive system at one point did have to come up without an interactive editor... but still I'd think it's much more likely that the editor was written off-line and fed into the system on punched cards or tape rather than cat'ed from the terminal.
The original post only mentions the network and implicitly the keyboard as possible inputs, but of course the challenge would be much easier if you allowed a DVD, CD, or even floppy drive with drivers (which are part of the kernel). I think pretty much any historical bootstrapping of an interactive OS is much closer to this setting.
It had 'ed', the precursor to the ex and vi editors. The terminal they used printed on paper, so it was not quite ready for a visually oriented editor! (https://en.wikipedia.org/wiki/Teletype_Model_33) If you're stuck on a basic system or rescue shell, cat and echo are considerably more straightforward if you don't already know how to use ed.
sorry I was unclear. I assumed those who created ed knew how to use it - but regular people not familiar with ed would tend to use cat and echo when stuck in a broken / bare system.
Of course they knew how to use it. But I don't think they wrote the first lines of ed in ed-on-the-live-system. It is much much more likely that they wrote them offline and imported the code from punched cards or via tape or whatever from another system.
For whatever it's worth, Wikipedia (https://en.wikipedia.org/wiki/History_of_Unix) says: "In about a month's time, in August 1969, Thompson had implemented a self-hosting operating system with an assembler, editor and shell, using a GECOS machine for bootstrapping." I'm sure there are more details to be had somewhere.
This would be a really fun competition in a maddening way. If somebody put up a VM to start with, I'd bite. Or at least, if there was some kind of criteria for a starting point, putting together a bare bones distro wouldn't be too hard. I can see bootstrapping a basic editor using cat and working all the way through a TCP stack, DNS, etc. but my mind blanks at implementing TLS. Seems like it would be a requirement to do anything meaningful on the internet at this point, but I wouldn't even know where to begin.
>" About 15 years ago, I mused about the idea of having a "desert island machine". This is where I'd put someone in a room with a box that has a couple of hard drives and a working network connection. HD #1 is blank. HD #2 has a few scraps of a (Linux) OS on it: bootloader, kernel, C library and compiler, that sort of thing. There's a network connection of some sort, and that's about it.
There are no editors and nothing more advanced than 'cat' to read files. You don't have jed, joe, emacs, pico, vi, or ed (eat flaming death). Don't even think about X. telnet, nc, ftp, ncftp, lftp, wget, curl, lynx, links? Luxury! Gone. Perl, Python and Ruby? Nope.
There's your situation. What do you do? "
I would swim, do some sunbathing, eat coconuts, do some fishing and generally enjoy the situation. No work, no stress, nobody to bother. :)
It’s fantastic. I was there in 2016. Have visited Greece at least 15 times but Samos is still my favourite. Make sure you rent a bike and drive around the island.
Heh nice, I don’t know if you like sports but climbing is really fun. Each climbing problem is pretty similar to figuring out a programming problem: breaking something down into sub problems, figuring out how to do each one, and then knitting it all together! This is something I like to do for fun :)
>" So here's the pitch: Linux speedruns. By that, I don't mean "speedrunning a game on a Linux box" (like emulation, or something). Nope.
I mean speedrunning the Linux situation itself. Start with a minimal system and get yourself to the point where you can do something meaningful (like reading cat pictures on Reddit). "
I'd rather do something useful or at least pleasant. I don't have an infinite amount of time and I don't like to waste it.
Looking at your comment history I fear this may not be the community for you. Constructive criticism is valued, but negativity and sarcasm bordering on racism are not. Please try to enhance the community rather than detract from it.
Tbh I think it's something that's fun once and can be educational, but after doing it once it's mostly just annoying.
In fact, I pretty much exclusively use Ubuntu or Ubuntu flavors because I'm lazy and things mostly work out of the box, and if I want, it's still customizable.