Yes, this took me back 25 years! My favorite use of these was by Upper Deck, the company that sells (sold?) baseball trading cards. Their 1999-2000 PowerDeck series was a set of baseball-card-sized CD-ROMs that presented something like a DVD movie menu and could play a couple of highlight reels of the player.
I still have a stack of these lying around. I used them in high school for carrying a live Linux distro in my wallet to use on the school computers, since the BIOSes were too old to support booting from USB. Needless to say, the school district IT department was not happy with me.
Nice! When I was in high school I made myself a duct tape wallet that could hold a few floppy diskettes so I could carry Floppix with me (a very limited distro on two floppy disks). I think at one point I also had a Win98 repair floppy crammed in there too with some extra DOS utilities on it. Four megabytes in my pocket felt a lot more useful back then.
While I am thankful for the increased local storage space available today I do not care to use it for storing giant executables and libraries. I prefer to use it for storing data.
Not a fan of storing data on someone else's computers, otherwise known as "the cloud".
Or unnecessarily running software from someone else's computers where I could just as easily run it locally, with better speed and reliability, otherwise known as "software as a service".
I feel like slot-loading drives were kind of a later thing, though; I only ever saw them in cars until after the Nintendo Wii in 2006. Other than the iMac, computer drives always had trays.
The whole thing is dated; that looks like more of a retrocomputing project (mostly) than an updated version. I mean, busybox 1.00 is probably fine for what it is, but it's not exactly new. (Note that this is a clarification but not a criticism; having played with things like "how old of a distro can I shove in docker and run on a current kernel", I certainly support retrocomputing, I just think we should acknowledge that that's what we're doing)
A lot of it would hinge on how much hardware you were trying to support. Router images can get nice and small because they build a kernel with the exact drivers in the target. Trying to support all the insane variety in laptop wifi, input devices, and power management is probably where a lot of the bloat comes from.
I don't think you can use the old toolchain; newer kernel+busybox are unlikely to build with that old of a compiler. Although, following the build steps with modern sources and toolchain would be an interesting exercise.
Around 2009-2010, when keeping a dedicated USB drive for live images was relatively expensive (for me), I had a small partition with Tiny Core installed as a recovery OS alongside Windows and another full distro.
I never had to use tinycore for recovery but it gave me enough confidence to keep messing with new packages and drivers.
Due to its small footprint the boot times almost felt like instant on.
The Pico is a microcontroller whose Cortex-M0+ cores lack a memory management unit for virtual memory (considered essential for a full-fledged OS like Linux). But you can run FreeRTOS on it: memory usage is 236 bytes for the scheduler, 76 bytes plus the queue storage area for each queue, and 64 bytes plus the task stack size for each task, plus 5 to 10 KBytes of ROM.[1]
The popular way to shoehorn modern Linux onto an MMU-less microcontroller is to build a RISC-V system emulator and run uClinux on that; you can also emulate the MMU and run regular kernels if you have sufficient resources. It has been done on ESP32s with sufficient RAM; the Pico would need additional hardware, though, in the form of something like QSPI RAM, and of course it would be very slow.
Yeah, I figured there might be a rub somewhere, otherwise it would already be a thing, but since it's technically an ARM it sounded vaguely promising. What about a 32-bit build? I think those used to be able to work without virtual addresses.
Aha, and this reminds me of μClinux [1], which targets microcontrollers without an MMU. I installed it on a 2005 iPod Classic 5G and was then able to put a Game Boy emulator on it.
The μClinux project seems dead; however, its key component, an ELF to bFLT (binary flat) converter [1] for no-MMU Linux targets, is alive on GitHub [2].
No one, unless you're using a CPU emulator or a JIT with SPI RAM. The RP2040 doesn't have memory protection / virtual memory capability and has only 264 kB (or so) of RAM.
Fantastic to have another option with modern tools! great work that will be appreciated by many. Between this, Puppy, and Tiny Core Linux so much old hardware can be put to potential use. I’d also mention Finnix as an excellent rescue image solution. Any other awesome projects for limited hardware and Linux use that should be more well known?
I've been quite happy with Alpine Linux. You can build it up to suit your needs for desktop, server, embedded or containers, but will run quite speedily on any supported arch from a few tens of MB of memory. The APK package manager is pleasant and quick, and the package list is quite extensive.
Wow I've never heard of someone running Alpine as a desktop OS before. How is the experience without glibc? What are you using for a DE? I'd thought X relied on glibc
I've run musl-based distros for a couple of years with no trouble (KISS Linux, and now some hacked-up Alpine monster I put together). I don't do streaming video that requires DRM, which would be a non-starter due to the Widevine/whatever plugins being compiled against glibc.
But yeah, full Wayland desktop (well, sway) and Firefox -- no problem. I occasionally use a debian chroot to pull up gnucash (accounting program) which works as a backup but it's rare. My debian chroot is mostly to run a 10 year old printer driver from Epson that's compiled against glibc, but doing a little trickery with a small C program works just fine with CUPS still running in alpine (print filters operate on stdin and stdout, so you can launch them in a chroot by themselves no problem).
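The chroot trick can be sketched as a tiny wrapper; the comment above describes a small C program, but a shell script shows the same idea, and every path here is hypothetical:

```shell
#!/bin/sh
# Hypothetical CUPS filter wrapper for the Alpine (musl) host. CUPS
# filters read the print job on stdin and write the converted job on
# stdout, and both file descriptors survive a chroot, so a glibc-only
# filter can run inside a Debian tree without CUPS noticing anything.
CHROOT=/srv/debian-chroot                   # hypothetical Debian chroot path
FILTER=/opt/epson/cups/filter/epson-escpr   # hypothetical legacy filter path

# CUPS passes: job-id user title copies options [file]; forward them all.
# chroot needs root, which the CUPS filter environment typically provides.
if [ -d "$CHROOT" ]; then
    exec chroot "$CHROOT" "$FILTER" "$@"
fi
```

The wrapper itself is what you point the PPD's filter entry at; everything inside the chroot stays untouched.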
You can run gcompat and some AppImages for that DRM crap. Ditto with the CUPS driver: it's not an AppImage, but you might be able to set up that CUPS binary with gcompat.
I've used Alpine Linux also but I found it very unintuitive for a general Linux distro (I used to like configuring things like Alpine but I've lost the spark for it and now I just want something light that works. DSL used to be like that.)
Not so much, actually. When you aim for anything that resembles modern “desktop computing” (maybe with at least some “web browsing”), you are limited to decent hardware configurations from the last 15 years or so. Yes, you can show your great-great-grandkids how it really was back in the day once, but you are not going to study the splash screens while programs initialize, or wait a couple of seconds for each image to appear when skimming through an archive, or watch page-load progress bars crawl in the browser. But with that decent hardware, you can almost always install bog-standard modern Debian with an ascetic desktop and have far fewer support issues than with a specialized system. It'll be the same Linux anyway.
Although it is possible that it won't work for some top-performance, purely 32-bit CPUs, because non-64-bit builds are certainly out of fashion today, even though some 32-bit distributions still exist.
The computer I use most of the time is a 19 year old (2005) laptop. I run Debian with LXDE and Firefox on it and, although you have to be a little bit patient with some websites, I am generally still very satisfied with it.
I suppose it's a desktop replacement model with desktop Pentium 4 and whole 2 GB of memory which cost thousands of dollars? Regular Pentium Ms of the era get dangerously close to netbook Atoms in performance, which is certainly the bottom of the barrel.
Linux From Scratch (LFS)[1] is well known but doesn't get a lot of fanfare. It was designed as a learning tool, but the avenues for exploration are endless.
I've always felt Gentoo was a decent cross between LFS and a "real" distro like Debian - much of the install is similar to LFS with some hand-holding, and the end result is a system that has package management tools.
Absolutely! I recommend Gentoo in a separate thread below.
LFS has the topic of package management covered quite nicely, I think[1]. They describe the constraints and approaches that might be possible, and what the real-world solutions to those are (RPM, DEB, et al.).
There have even been some package managers designed (or at least discussed in terms of what the design would look like) explicitly for LFS over the years, but none seems to have come to fruition, and I can't find any links to them.
Woah, now that's a name I haven't heard in a long time!
It says that it fits on a CD, but how about the other requirements? e.g. how much RAM is needed, and what kind of instruction set does the CPU need to support? is a 386 enough or do you need 486/586/686 level instructions?
I got it running as a VM guest on QEMU+KVM, exposing only a 486 CPU profile with 256 MB RAM, and it was still usable (with terminal, file manager, and some light web browsing). Looking at the system's memory usage though, it appears that going much lower than about 200 MB RAM would probably make it quite difficult to use (at least without relying on swap, which could make it even more miserable depending on the device being used there).
Yeah, that's a good point. As a further experiment, I tried with various "hardware" combinations (from 486's to Pentium II's) in 86Box, which actually performs such emulation, but unfortunately I haven't been able to get it to boot properly (kernel panics during initialization, if it gets that far at all).
It was an abandoned project for a long time; that's why the memory is old[0]. I remember because I was looking for something very light to boot up an old chunky Armada laptop, and I ended up on Puppy Linux, even though I wanted to use DSL because of the cool name.
Wow, this is crazy. I came across the DSL website last night while trying to figure out how to compile a minimal Linux kernel myself, and now here it is on HN! I used DSL back in high school when it was new.
As a side note, why does compiling Linux have to be so... obtuse? It just stops for me after several minutes of building out objects with no explanation.
> As a side note, why does compiling Linux have to be so... obtuse? It just stops for me after several minutes of building out objects with no explanation.
That's... odd. Does it break if you just use the default `make defconfig` configuration? Because pruning what's built in without breaking it is hard-ish IME but it shouldn't just fail silently. Or... when you say "It just stops" you don't by any chance mean that it finished and you just need to find the actual binar(y|ies) it produced?
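For reference, the defconfig baseline plus a build log is only a couple of commands; this is a sketch, and the guard just makes it a no-op outside a kernel source tree:

```shell
# Parallel build jobs; nproc ships with coreutils.
JOBS="$(nproc)"

# Only attempt a build from the top of a kernel source tree.
if [ -f Kbuild ] && [ -d kernel ]; then
    make defconfig                        # known-good defaults for the host arch
    make -j"$JOBS" 2>&1 | tee build.log   # a log means a silent stop leaves evidence
    ls -lh arch/x86/boot/bzImage          # the finished kernel, easy to overlook
fi
```

If the build really does die quietly, the tail of build.log (and memory pressure in a task manager) is usually where the answer is.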
Are you looking to build just a minimal kernel or also a minimal distribution? (Which is what I happened to be thinking about last night :)) In the latter case, do you know any good resources about that topic?
Depending on how minimal a distribution you want, a few years ago I had a way to take a single ELF binary created by my computing stack built up from machine code (https://github.com/akkartik/mu) and package it up with just a linux kernel and syslinux (whatever _that_ is) to create a bootable disk image I could then ship to a cloud server (https://akkartik.name/post/iso-on-linode, though I don't use Linode anymore these days) and run on a VPS to create a truly minimal webserver. If this seems at all relevant I'd be happy to answer questions or help out.
I want to build a minimal kernel that I can virtualize (QEMU) for a variety of purposes across my arm64 Macbooks at home; ideally, it would be optimized for that hardware. One version of the kernel I want just for command line purposes, nothing involving graphics or sound. I want to also build a similar kernel that has just enough to run Firefox in a Wayland compositor (probably just Weston) along with sound.
No, I don't need to go this far, but I want to.
Unfortunately, I don't really have any resources to share. I just know how to boot a vmlinuz with an initramfs using QEMU, and decided to download the Linux kernel source code and try compiling it.
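For anyone following along, that boot step looks roughly like this; the paths are examples from a typical x86-64 build, and the guard skips the whole thing when QEMU or the files aren't present:

```shell
# Boot a freshly compiled kernel plus initramfs directly in QEMU:
# no disk image, no bootloader.
KERNEL=arch/x86/boot/bzImage   # default bzImage location after an x86 build
INITRD=initramfs.cpio.gz       # example initramfs name

if command -v qemu-system-x86_64 >/dev/null 2>&1 && [ -f "$KERNEL" ]; then
    # -nographic plus console=ttyS0 keeps everything in the terminal;
    # rdinit=/bin/sh drops straight into the initramfs's shell.
    qemu-system-x86_64 \
        -m 256M \
        -kernel "$KERNEL" \
        -initrd "$INITRD" \
        -nographic \
        -append "console=ttyS0 rdinit=/bin/sh"
fi
```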
Its user interface is Docker-like, using containers.
For full desktop, I've only used the commercial app "Parallels", which can set up an Ubuntu desktop for you. Also Fedora and Alpine and Debian I believe.
But
> I don't really have any resources to share. I just know how to boot a vmlinuz with an initramfs using QEMU, and decided to download the Linux kernel source code and try compiling it.
I highly recommend working through Linux from Scratch and possibly the Gentoo Handbook. It's a journey.
> As a side note, why does compiling Linux have to be so... obtuse? It just stops for me after several minutes of building out objects with no explanation.
The Linux kernel compilation scripts use the lowest-common-denominator toolset: make/sed/awk. It would be awesome to rewrite them to use Python or some other higher-level language, but then it wouldn't run on a Japanese supercomputer built in 1986 and long-ago mothballed, and you never know when you'll need that!
It shouldn't ever stop for more than maybe 10 seconds. Try using a task manager to see what it's running. I think some steps take a few GB of RAM, so is it possible you exhausted your memory?
I used DSL back around when it was released (and 64 MiB flash drives were common) to get around my school's network filtering. I think this was one of the reasons they hired new IT staff the following year, because the technique caught on even with the non-nerdy crowd.
Yes, this brings back such memories for me too! I also used to boot from removable media to use Linux on school computers. The librarian assumed I was some kind of computer hacker and reported me to the school's IT admin. I thought I was in trouble. Instead, he took me under his wing and had me work with him after school on some fun projects! It really helped me understand that the skills I was learning were valuable and that I had an aptitude for it.
Kudos to the admin, that’s a much better way to handle such a situation than what I’ve sadly read many stories about, where the IT dept seemingly takes mild cleverness by students as a personal insult and punishes them.
Reminds me of cramming an installed QuakeII on a 64MB USB stick, and quickly booting it up on a few library computers when we had an hour off in high school. They blocked installers, but not 'portable' executables or network access.
I remember DSL fondly. It was a marvel then, and maybe now in retrospect even moreso -- that so much functionality could be packed into such a small footprint.
Conceptually the need still exists today, even if the whole landscape has changed in the meantime. I'll look forward to trying this out!
I just checked Wikipedia, and was surprised to see the original DSL only had releases for about 3.5 years.
> Though it may seem comparably ridiculous that 700MB is small in 2024 when DSL was 50MB in 2002
If you go all the way back to 2002, 50 MB for an old computer wasn't that small. I bought a new computer with 192 MB of RAM as late as 2005. My 32-bit, $400 discount laptop from 2009 has 4 GB of RAM, so 700 MB is reasonable.
I remember running the Linux Router Project [0] on a 1.44 MB floppy disk back in the late 90s! Of course, it didn't have a GUI, but I don't think you could even fit the Linux kernel on a single floppy disk today.
Technically, you can compile the 6.8 kernel using "make tinyconfig" (which results in a 509 kB image). Of course, this isn't usable on actual hardware, but it is a good baseline to build off.
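The tinyconfig experiment is easy to repeat in any recent kernel tree; this is a sketch, and the guard makes it inert elsewhere:

```shell
# tinyconfig turns nearly everything off; you then re-enable only what
# the target needs, which is how router images stay so small.
TARGET=tinyconfig

# Only attempt this from the top of a kernel source tree.
if [ -f Kbuild ] && [ -d kernel ]; then
    make "$TARGET"
    make -j"$(nproc)"
    wc -c arch/x86/boot/bzImage   # roughly 500 kB on a 6.x kernel
fi
```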
In 2000, I was using Yellow Dog Linux on a Power Mac 7500 for this.
I even had it set up to use "dial knocking" to force it to connect remotely and send me an email with its IP address so I didn't need dynamic DNS.
In addition to NAT for my Ethernet (10BASE2) connected devices, it provided an Internet connection to my Telnet-accessible PDP-11/73 (15.2 MHz CPU, 4MB RAM, 456 MB hard drive [14" / 36 cm platters, 148 lb / 67 kg]) running 2.11BSD via SLIP.
50MB for a pocket OS was perfectly small. Compare it to a 6-, 7-, or 8-CD release of SuSE, Debian, or Mandrake, or the 700MB Knoppix CD back in the day. You could download DSL in a reasonable time.
Back in 2002, DSL was already a reaction to the CD-ROM based distributions which had bloated so much compared to the early days.
One of the first "approachable" Linux distributions circa 1993 was the Soft Landing Systems (SLS) 2-floppy disk set. One held the bootloader and kernel, the other the root filesystem. The kernel disk was swapped out during the boot process, so after that you only needed to leave the root system floppy in. Then, you could use a not uncommon second floppy drive for removable data disks. The SLS system was text console only, but I think (?) had an editor and gcc.
My first persistent installation, Slackware, was on a system with about 8 MB RAM and a 40 MB HDD dedicated to Linux. This had X Windows, Emacs, multiple dev tools, and modem based internet.
Iirc DSL used to fit on mini/business card CDs. No real motivation to get much smaller - unless fitting on a floppy. Then for a while there were small usb drives that were interesting, and with better options for persistent user data than r/w CDs.
On games, sgt-puzzles and bsdgames fit well, along with NetHack/SLASH'EM, DCSS, and of course Frotz, plus a few libre games (Spiritwrak and such).
SLASH'EM + bsdgames + 3 adventures for Frotz would weigh less than 20MB, I think. Compressed, about 7.
BTW, I'd ditch XMMS for Audacious; a Pentium 3/4 today would be more than enough to run it.
BTW, VisiData is huge; use sc-im + Gnuplot.
On browsers, felinks supports Gopher and Gemini too. Gopher has nice stuff as gopher://magical.fish, Gemini has similar places too.
Hopefully that magnet URI works. This is the first time I've tried to create one. Hopefully the tracker works too, I seem to be getting intermittent "Connection failed" errors from it. If anyone already knows how to properly serve torrents, please school me. :-)
I also raised an eyebrow at the three web browsers, but then I was thinking that it's quite likely that none of them are reliable for opening a modern webpage, so users might have to routinely try more than one.
I'm more amused by the inclusion of a GUI application for SCP and FTP. As someone who uses a full-featured desktop Linux distro as my daily driver, and uses SCP on a daily basis, I've never felt the need for anything but the CLI for that.
Also a number of games?
To be fair, the page states that "The new goal of DSL is to pack as much usable desktop distribution into an image small enough to fit on a single CD," so it is explicitly more about showcasing a collection of lightweight applications than it is about providing the smallest distro.
> All the applications are chosen for their functionality, small size, and low dependencies
I wouldn't mind a few extra mb of software if it improves my overall user experience. The nice thing about DSL is that its slim despite having a pretty comprehensive app suite.
The original DSL had similar redundancies in its available applications. Even more so for DSL-N, which was free to break the "under 50MB" rule but still stayed remarkably tiny and efficient. That was one of the things that made DSL so cool: "I can get multiple browsers, a full office suite, multimedia tools, and even games on a bootable disk that fits in my wallet? Hell yeah!".
Program sizes have ballooned by an order of magnitude or more, so unfortunately so must DSL's target size if it expects to retain feature-parity, but it's still a lot of bang for one's disk-space buck by the looks of it.
Not really; the 3 browsers are dillo, links2, and badwolf. Of those, dillo and links2 are <10MB each including dependencies, while badwolf uses webkit which AFAICT is on the order of ~100MB. It's not practical to only have dillo or links2, so what it "could" take is >100MB for one browser, or >120MB for 3, and 120 is less than 3x100.
I had the Linux kernel and some simple user-space tools like busybox running on an embedded platform with 512 kB RAM and 2 MB flash back in 1999! Those were fun times. To be honest, 512 kB was possible but very much at the limit; I think the product we launched with it had a few megs of RAM eventually. We also had to invent a journalling flash filesystem in order to make it work in practice, something that didn't exist back then either. But Linux was a real breakthrough then compared to the horrible mess of embedded OSes that were otherwise needed to handle TCP/IP, filesystems, and multitasking.
Yeah the 2/8 combo was probably what we went with in the product as well. The 512k was more like a shoehorned concept demo in an existing product.
The next thing we did was make a version of our CPU with an MMU, designed to work optimally with Linux (the first version was on the uClinux concept, with a kernel without MMU support and user-space programs that couldn't rely on fork() or mmap() fully). After a year or 2 with MMU-less Linux, it was like heaven to be able to run on an MMU :)
I remember this being one of my very first distros when I started using Linux, probably because of its very attractive name like many did here.
However, what moved me away from it was the sudden abandonment after the falling-out between a primary contributor and the project's leader, which the former made the focal point of his DistroWatch interview; he would later create his own distro, Tiny Core Linux.
In my opinion, this is a fruitless attempt to restore the credibility that the project lead lost after over a decade of negligence and abandonment.
Also, by using musl and Alpine as a base, the amount of software you can put in 700MB is huge.
With IceWM and ZZZFM you could fit in half of the 700MB CD even with all the X.org drivers installed, and still fit AbiWord, Gnumeric, SeaMonkey, Dillo, and so on with ease.
Offering an alternative with Linux-libre would be interesting too, as the libre kernel often works faster than the vanilla one, and legacy computers have all their drivers working. Proprietary drivers such as Nvidia's won't work anymore anyway; some of the older ones might not even compile with DKMS.
I don't know if Musl is worth the hassle if one is only interested in the reduced size. I see the point if you have everything statically linked, but with dynamic linking, you have glibc sitting there just once with a few MB, and you don't have to tackle all the issues that can arise when using Musl (e.g., DNS).
Alpine's compressed container image is nowadays something like 3 MB, okay, that's very small, but I wish they had an 8 MB glibc version. On the other hand, there is debian-slim, but it's not as good as Alpine when it comes down to stripping down the size, it still weighs in at around 30 MB. I'm still using it, though, although I think it could be smaller.
I run Alpine on a desktop and a couple of laptops (low-end ex-chromebooks, one of them via postmarketos), and it's not really a hassle IME. Granted, I'm not doing a lot of building things from source (or if I do it's in docker and distro is easy to change) so maybe I'm just avoiding the pain, but if your uses are covered by officially packaged software it Just Works™.
Damn small Linux was the first Linux I could actually use as a child/adolescent because the downloaded zip file included a copy of QEMU.exe (and a .BAT file to boot DSL) that I could use to get a taste of Linux with no prior experience, using a Windows computer. Growing up in Redmond WA, home of Microsoft, it felt very subversive to young me to use a non-windows operating system. I'm forever thankful for that seemingly random include in the download; I probably wouldn't have become the person I am without it.
> DSL 2024 currently only ships with two window managers: Fluxbox and JWM. Both are lightweight, fairly intuitive, and easy to use.
I wonder what they will move to once they have to start using Wayland. Is there a lightweight, user-friendly, and stable compositor? (My experience is that you can choose two, but not three.)
If I had to wager a guess, probably something like labwc would end up in that role for a distro of DSL's scope and philosophy. However, I think we're still a while away from that, and for the targeted machines, X is still completely fine and will remain so for a long while.
Great memories of the original DSL. New one looks great at a glance, really nice collection of applications. Blown away they managed to fit all that on a CD. From the page, it sounds like they did a lot of work to make it happen.
Really cool stuff. Might stick this on an old laptop when I get home.
I was looking for a lightweight OS to run on old Asus Eee PC 1005 HA, which uses a 32-bit Intel Atom N270 processor. I installed Void Linux (https://voidlinux.org/).
I may give DSL 2024 a try and see how it compares.
Void Linux is great for minimal installs. Gentoo fits the bill nicely too. Both allow for small init systems and, at least in the case of Gentoo, multiple bootloaders and initramfs tools.
Doubtful. TED's last stable release was in 2013, and that's the lightweight one I'm familiar with.
There's KWrite, but I'd be surprised if that were less bloated than AbiWord.
Markdown text might be fine, but I wouldn't expect markdown to PDF via pandoc to be particularly "lightweight." There's the range of typesetting or desktop publishing stuff like TeXmacs, groff, or LyX, but I don't expect those to be particularly light, either. There's WordGrinder, a terminal-based word processor, but I've never used that.
Love this distro; it's the only one that loads fast on web x86 emulation. Sad that they're upping the size, but 700MB is still leagues smaller than most other distros.
Damn Small Linux was my first introduction to Linux because it was the only thing I could download in ~4 hours on dialup without hogging the phones all day long.
I will have to fire this up and have a damn good time.
> keeping otherwise usable hardware out of landfills
While I like this idea in theory, in practice the energy efficiency and lower electricity costs of newer hardware mean that, in terms of both cost and environmental impact, it would probably be better to recycle the old hardware and buy something new in most cases.
>recycle the old hardware and buy something new in most cases.
Completely agree, other than nobody is willing to recycle the hardware in any environmentally friendly way. So "recycle" pretty much just means "send it to some poor country who is perfectly fine polluting their ecosystem to pull anything valuable from the junk".
It really depends. Computers have got very efficient in the last ten years.
Throwing away a five years old chromebook because google decided they don’t want to support it is very different than throwing away a Pentium4 (more of a heating machine than a processor)
We did work with Jabber/Email and 512MB/1GB of RAM running similar chat clients, desktop environments (XFCE 4.6 was much faster than 4.16), video players and office suites.
Nowadays, to do the same, you need 10X the resources just for a chat application.
And by 'chat' I don't mean 'irc'. Jabber, embedded Youtube URL's, inline LaTeX documents...
>for the sake of "the environment," the solution is to go backwards?
Well, it's your environment, you would probably have to figure that out for yourself.
I don't think you would have to go fully retro to be more environmentally responsible.
When it comes to Reduce, Reuse, Recycle, this is a proven hierarchy: often it's an order of magnitude better if you can reduce compared to merely reuse. And once again, better to reuse as much as you can before you finally recycle (which can require so much reprocessing beyond that needed for simple reuse) to extract any worthwhile components for circular use, preferably displacing the need for brand-new raw materials or ingredients in freshly manufactured products.
It's quite possible for freshly manufactured new products to be more environmentally friendly than ever and as high-technology as you would like; you just have to make the commitment and step up to the plate.
Everybody's situation is different, but I do think there is a good reason why people say "think local" so much more, the deeper they do the math.
You shouldn't fail to figure out how much money it costs just to work, and further, how much of that goes just to getting to work.
How much pollution do you have to create just to earn the money to be able to work in the first place?
For everything you consume, or even worse waste, how much potentially environmentally damaging work did you (plus everyone else in the chain) have to do just to earn the money needed, and that's before the actual consumption could even be paid for? Whether consumption takes place before or after it ever gets paid for.
Then you can more accurately decide the degree of balance you are going to try and maintain, between consumption and conservation.
It's all so personal so you shouldn't let it bug you, just do the math for yourself and take action accordingly.
Most people can easily find some low-hanging room for improvement, sometimes really obvious stuff, but it's nothing to get embarrassed about.
Don't get me started on the way different currencies have different degrees of toxicity, and not only dependent on their current relative exchange rates.
But you can only imagine that for two workers doing identical work, each earning "equivalent value" but in different denominations, when there is any difference in their environmental impact it could only be due to the difference in impact between the currencies themselves. Naturally including bitcoin and things like that along with "regular" money.
> Well, it's your environment, you would probably have to figure that out for yourself.
No. Actually. It's not my environment at all. I could just leave and go somewhere else. Like Mars, Heaven, or somewhere else.
And I'll be taking three gigajoules per second of power production with me.
> It's good to minimize consumption of Earth's resources to maximize the displacement of the need for extraterrestrial raw materials and ingredients for the manufacture of new things.
What you're trying to sell me is something that is physically impossible. Entropy will still eventually, literally, kill you if you actually did what you're advocating for (which is degrowth and primitivism). So you don't actually believe what you're trying to sell to people as an "environmental conservative," unless you're a zealous fanatic or something.
Unfortunately, Earth was meant to be used up into a big void of nothingness with this exploitation of a planet followed by the next complete consumption of a giant world. Mars is in our sights. As is the rest of the Solar system.
And we may as well go to infinity and beyond.
After all, time and entropy aren't really on our side. Solving the problem of our scheduled annihilation is the real action to think about as a responsible person who isn't afraid of objective mathematics and analysis that doesn't miss crucial variables in global dynamics.
Well when I try to be as zealous a fanatic as possible it still isn't working, so it must be something else;) Good catch.
Sorry about having nothing to sell, it was all sold out by more well-informed and more persuasive geeks than me, way before I had a chance to get near any soapboxes.
Nothing wrong with a little reminder that some people's fruit hangs a lot lower than others.
The universe and unfriendly monsters are eating up all your accessible apples.
And that pleases me.
Because you're forced to turn into a monster yourself. If you don't want to starve and die.
So a little skilled zealotry is kinda essential. ;) So is probably persuading and selling others to your side. So that you can form a competent party of universe survival enthusiasts.
People are way ahead of me on that, with skills that may only be valuable on my home planet, I'm sure to be left behind in the dust :(
Some things stand the test of time better than others.
And some ideas are not that new but maybe the earlier the concept the better. Not that long ago back at the beginning of 1970 Neil Young had something to say about how things might work out in the decade to come, from "After the Gold Rush":
"Look at Mother Nature on the run in the nineteen seventies."
"Well I dreamed I saw the silver spaceships . . ."
"The loading had begun, flying Mother Nature's silver seed to a new home in the sun."