It's still a valid form of data transport today. AWS will (literally) ship you a 45-foot container with 100PB of storage for shifting bits around: https://aws.amazon.com/snowmobile/.
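Back of the envelope: 100 PB is about 8x10^17 bits, so over an assumed 10-day door-to-door transit (my figure, not AWS's) that works out to 8x10^17 bits / 864,000 s, or roughly 0.9 Tbit/s of effective bandwidth.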
I see no evidence that the amazing increase in data transfer rates has increased people's happiness. That's because happiness measures something else, closer to how well you're using the opportunities available at the time.
I'm pretty sure I was happier with 9600 baud back in 1991. Everything felt so new and exciting. The upgrade from 2400 to 9600 bps felt huge. The upgrade from 300 megabits to a gigabit? I hardly noticed.
I could have written this post, mwint... from not really feeling the need, right down to the ~speed you're currently getting (VDSL2). But in about a month we'll have a 300 Mbit symmetric connection (FTTP).
The new FTTP connection being much cheaper was my main motivation. I can't see it making my simple plaything webpages upload noticeably quicker, or speeding up the terraform/ansible scripts I play with. I'm also not bothered about how long Linux updates take, so shaving a few seconds or minutes off those won't add anything meaningful. (Disclosure: I'm in my late 40s, perhaps it shows!)
...The boy child and Xbox updates, though - a smile is more likely to be raised there.
I have gigabit fiber, 1 Gbit up/1 Gbit down, but I really only benefit from the upload speed when hosting. Most people never use the bandwidth. I only use a fraction of the download, but it was only $20 more a month, so for me it's worth it, especially when sailing the high seas for something new and being able to watch it nearly right away.
My old connection was 300 megabits down/20 up. I switched to fiber to get (almost) symmetrical gigabit. It mostly helps with backups and pushing Docker containers, but I could probably get away with half-gigabit fiber and not notice the difference.
> An order form at the back of the catalog could be filled in, stuffed in an envelope with a postal order or cheque, then poked into the nearest post-box.
Or indeed with stamps, in a few cases, which was a godsend for 11-year-old me, who didn't have easy access to postal orders or cheques but could buy a book of stamps anywhere.
Part of my evolution as a modem owner: downloading small files I was pretty sure I could finish before getting kicked off; running out of small files that were interesting; then progressively ramping up to ZMODEM and late nights, just barely finishing a download before timing out, and only if I logged straight in, did nothing else, and nobody in my family picked up the phone to make a call.
Even for shorter files, if someone picked up the phone at 80%, that was it for the day (and at the time, resuming a transfer either didn't exist or was too esoteric for me).
I'm almost positive that ZMODEM could continue an interrupted file transfer, but there were quirks you may have encountered that prevented it from happening automatically.
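For what it's worth, with the lrzsz tools that survive today the resume looks something like this; a sketch from the current man pages, not necessarily what the DOS-era implementations exposed:

    # sender side: -r requests ZMODEM crash recovery, so the transfer
    # restarts at the length of the partial file the receiver kept
    sz -r bigfile.zip

    # receiver side: plain rz, provided the partial download wasn't deleted
    rz

Both ends had to support crash recovery and the receiver had to keep the partial file around, so plenty could still go wrong.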
I suspect my stumbling block was 'can you get the other side to play along'.
It's been so long that I have probably forgotten the finer details. For all I recall now, I may have spent 3 days downloading a single file and decided that was enough effort for one program.
On those BBSes or other services there was always something else you'd want to be doing with your allotted time. Skipping one day was one thing, skipping half a week was something else entirely.
I too used the "disks by post" service (in the UK) that the article describes, and I've mentioned it before on HN. My investigation hasn't turned up the name of the company/catalog either, but I'm fairly sure it was a franchise operation based on the work of the ASP, the "Association of Shareware Professionals", a US-based organisation formed by a collection of shareware authors in 1987, which produced exactly that kind of mail-order catalog for ordering shareware in bulk.
The author's scp at the end of the article hits 10 MB/s, which is about where I cap out when transferring over SSH from my MacBook. That's because scp maxes out a single core just encrypting all the data without hardware acceleration. Straight FTP, or even an HTTPS upload (which is hardware accelerated), should be faster if they have more bandwidth.
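One rough way to check whether the cipher, rather than the disk or the link, is the bottleneck: stream zeroes through ssh and discard them on the far end. The host here is a placeholder, and BSD/macOS dd wants bs=1m instead of bs=1M:

    # no disk I/O on either end, so only the network and the cipher matter;
    # dd prints the achieved throughput when it finishes
    dd if=/dev/zero bs=1M count=1024 | ssh user@host 'cat > /dev/null'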
Unless your MacBook is 20 years old, something else is going on. Are you perhaps on a 100 Mb/s link? With overhead, 10 MB/s is about what one would expect there.
Even without hardware acceleration, my 2018 Mac Mini, which is much slower than my 2020 ARM MBA, does much better than that: 64 MB/s, and that's a floor, as the file itself is only 64 MB. I tested by forcing chacha20-poly1305, which wouldn't be 6x faster than software-only aes128-gcm or aes256-gcm. Also, though /usr/bin/ssh on macOS is compiled against LibreSSL, and there was a brief post-fork period when LibreSSL lacked accelerated cipher implementations, it has supported AES-NI for quite a while.
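If anyone wants to reproduce the comparison, forcing a cipher is a single flag per connection; the file name and host below are placeholders:

    # time the same file under each cipher
    time scp -c chacha20-poly1305@openssh.com test.bin user@host:/tmp/
    time scp -c aes256-gcm@openssh.com test.bin user@host:/tmp/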
I'm sensitive to these things as I still run a few PC Engines APUs and force aes256-gcm in sshd_config; by default OpenSSH prefers chacha20-poly1305. To an APU I get 30 MB/s with AES-NI aes256-gcm and 10 MB/s with chacha20-poly1305. The APU has an AMD GX-412TC, which came out in 2013, but as a low-powered embedded part it has the performance of something 5 or 10 years older.
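Concretely, that's a one-line override on the server; the client-side equivalent is a Ciphers line in ssh_config, or -c on the command line:

    # sshd_config: only offer the cipher the APU's AES-NI can accelerate
    Ciphers aes256-gcm@openssh.com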
I misspoke: not a MacBook but a Mac Mini. It's on a wired 1 Gb connection, transferring across the local LAN. I'll have to rerun the experiment when I'm at work again and investigate, but IIRC when we did this previously we were seeing one maxed core and low scp performance.
I remember working for hours and hours, and failing, to get a 32 KB transfer working between two Commodore 64s with 1650 Automodems. I wish I could remember the app; it would be a nostalgia overload to see a screenshot of it again.