Not saying HTTP is bad. It just seems like we have given up on possibilities. I remember, almost a decade ago, Nokia had a mobile web server for Symbian devices which basically hosted an HTTP server on the phone. You could message the owner of the phone directly through a URL. The request would be handled by the server on the phone!
No one makes anything like that anymore. Everyone is just building on top of APIs and services provided by MANGA, who would obviously not put any effort into such projects.
I remember when I was earlier in my career and more specialized in implementing weirdo protocols hearing that HTTP was going to replace all the existing protocols. I was appalled; it seemed absurd, like suggesting Word .DOC was going to replace all text files.
But for the most part, the people saying that were right, and we are better off for it. The thing about a lot of those purpose-built protocols, even the "important" ones like DNS and most especially infrastructure stuff like SNMP, is that they are pretty dumb, the product of their time and thus, by construction, deprived of several decades of systems learning.
The moral of my story is: those old protocols were bad, practically all of them.
That's not really why it failed though, since those binaries are in separate newsgroups that most servers simply don't carry.
The issue is more about spam and lack of moderation, which makes it somewhat unfriendly to newcomers, since one has to do the spam/troll filtering locally.
This over time led to most newsgroups slowly dying out.
I consider binary newsgroups and the rest to be two almost unrelated things. Plenty of non-binary usenet servers were doing just fine, but of course no one would really pay for them.
I'm not saying I can refute what you're saying, just that I think my claim about "binary vs. plaintext" being problematic with respect to NNTP is well-founded.
(I've implemented production-grade HTTP libraries many times. HTTP being a one-size-fits-all protocol means the existing libraries are ridiculously specialized and overspecced.)
HTTP is the universal protocol because the mental model of "verb-metadata-payload" fits almost everything you can imagine, and the socket handshake/framing parts are obvious and very performant. This is why HTTP is used everywhere from pushing stock quotes to industrial automation.
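Concretely, the verb/metadata/payload split maps straight onto code. A minimal sketch with Python's stdlib http.client; the host, path and header names here are invented for illustration:

    import http.client
    import json

    # verb + resource, metadata (headers), payload (body) -- the whole model
    conn = http.client.HTTPSConnection("quotes.example.com")   # placeholder host
    payload = json.dumps({"symbol": "ACME", "last": 123.45})
    headers = {"Content-Type": "application/json",
               "X-Source": "ticker-demo"}                      # custom header, no permission needed
    conn.request("POST", "/v1/quotes", body=payload, headers=headers)
    resp = conn.getresponse()
    print(resp.status, resp.reason, resp.read()[:200])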
HTTP 2 and 3 aren't universal protocols; they solve a very specific problem: serving static content to browsers for Google-scale websites.
In effect, HTTP 2 and 3 are exactly the kind of niche one-problem protocol that HTTP was supposed to replace.
HTTP3, more of the same! The performance wins of HTTP3 won't matter at all for most typical web applications. But typical web applications aren't the point of HTTP3.
> ...more flexible,
> ...and more reliable than HTTP1
Like I said in another comment - I'd wager the majority of HTTP use cases have nothing to do with a browser or HTML/CSS/JS.
HTTP2 might be nicer when you're serving static content to a browser; debatable, but I'll give the benefit of the doubt.
In all the other vast domains of HTTP usage HTTP2 does nothing good.
Your entire post amounts to "I like HTTP" and "old stuff bad."
Email is another clusterfuck of a protocol (well, several protocols) that barely functions despite dozens of modern pseudo-standards plastered over it. I’ve written extensively about that too.
DNS is frequently the source of amplification DDoS attacks. It’s another protocol that made sense once upon a time but has struggled to keep pace with modern advancements in technology.
IRC is probably the best of the bunch here but even that has struggled to keep pace and can be subject to undocumented behaviours (like line length).
…and these are the protocols still in use. The ones replaced by HTTP were either crazier or so simplistic that they offered nothing over HTTP.
I’ve written my own clients for every one of those examples and had to deal with the pains of their protocols. I’ve also written my own web browser. And while HTTP has some warts too, I’d take that over FTP and SMTP any day of the week.
SMTP is one of the last decentralized open communication protocols that is still widely used by business. It evolved over time, gained some additions, and stayed alive. The biggest issue I have with e-mail nowadays is companies like Microsoft and Google acting like they go out of their way to break protocols and deliver fewer and fewer messages from perfectly well-working but decentralized sources.
Microsoft is the worse of the two, with a years-long tradition of acting against standards (Outlook Express connecting to recipients' MX, their cloud offering accepting messages for delivery and never delivering them, etc.). Google, I believe, as soon as they find a better way to get hold of users' invoices and receipts, will teach their users that they should use something else instead.
Stating that the standard barely functions just because anti-privacy corporations only pretend to use standards the way they were intended, while concentrating on breaking them, is not how I would describe the current state of e-mail-related stuff.
It started "not being anymore" with corporations - and once again I bow before Microsoft and Google - using less and less lube over time when telling their own clients what their role is at the ecosystem.
I will absolutely fight any attempt at calling e-mail protocols broken just because corporations can't figure out their revenue around it.
And this isn’t even touching on the problems with IMAP and the insanity that POP3 is even still a thing.
Last time I attempted setting up messaging accounts with the aforementioned companies, it wasn't possible to use Mutt or bare Thunderbird - one had to use client software allowing some kind of RCE to set up access to those services. Add Google's bubbling and Microsoft's repeated mail losing, and we no longer really have globally functional e-mail based on standards.
When some of the biggest actors don't follow rules describing delivery without proposing changes - yes - e-mail is being broken but not because protocols underneath are broken. It's because people trust these companies and possibly don't know that they may be victims of careful information filtering.
I have done some e-mail-related work for hosting companies in the past. For some years now, POP3 has not really been a thing. It exists, and it gets set up by mistake from time to time, but the number of POP3 users compared to IMAP users was barely noticeable and I don't think it grew. I'm afraid to ask what your issues with IMAP are...
I suspect that Google bubbles its e-mail customers just like its search users. Most non-technical people I know treat the "spam" folder as if it would literally burn their fingers upon touching it. They act as if trained to only look inside it when an awaited message doesn't show up in the inbox. Google delivering perfectly fine messages straight into the "spam" folder has comparable results to Microsoft losing/destroying their customers' mail.
Email dates from an era when it was safe to trust people connecting to your server because everyone knew those individuals personally.
That methodology hasn’t scaled in 30 years.
I see arguments for "non-modern" not "broken". Some wines aging better doesn't necessarily mean that others deteriorate.
> What would you suggest to replace e-mail while retaining its flexibility?
There’s no reason why we cannot redesign the email paradigm around a totally new protocol. The problem isn’t that it’s technically difficult, it’s that SMTP is too prevalent now. It would take someone like Google abusing their market share to bring in a successor.
Also any replacement would need to be at a protocol level. A lot of the attempts I’ve seen have also tried to modernise the experience as well (like Google Wave) but the reason email is successful is because it is familiar.
Granted it's already nicer for clients not to need to configure SMTP to begin with.
Accessing port 25 of a server is usually blocked by ISPs as a way to prevent spam.
Authentication with mail is separate, usually to allow for relaying, whilst anyone can usually drop emails IFF your server is the final destination.
Confusing and needlessly complex? Yep. Natural result of uncontrolled evolution? Yep.
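To make the two paths concrete, here is a rough sketch using Python's stdlib smtplib; the hostnames, addresses and credentials are placeholders:

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"], msg["To"], msg["Subject"] = "me@example.org", "you@example.net", "hi"
    msg.set_content("hello")

    # Authenticated submission (port 587): your provider will relay anywhere.
    with smtplib.SMTP("smtp.example.org", 587) as s:
        s.starttls()
        s.login("me@example.org", "app-password")
        s.send_message(msg)

    # Unauthenticated delivery (port 25): generally only accepted if this server
    # is the final destination for you@example.net, and ISPs often block the port.
    with smtplib.SMTP("mx.example.net", 25) as s:
        s.send_message(msg)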
finger was conceived in a time and environment where you would reasonably assume a lot of things which stopped being true a long time ago.
many users per machine / users actually logged in to that machine / users within walking distance or in the same building / no compartmentalization, i.e. your daemon has access to every user's home directory
and that's just off the top of my head.
if you see finger as "everyone has a place to store a message and people can read it" then yes, you might say it wasn't worse than HTTP - but I think the plan feature wasn't even the original intention, it was more "is person X at their desk right now?".
So all features aside, it has so many assumptions baked in, I'd have to think hard how to replicate it in a modern way for a company and still fit the protocol.
I'm not sure I 100% agree with the protocol being silly (merely not great, BUT it's been years since I read the RFC - it's short, you should), it's kinda simply plain text with some wonky hostname shenanigans, but the whole concept hasn't aged well. But that is if you completely ignore anything about security (see what I wrote above) and privacy.
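To make the "simply plain text" point concrete, the entire client side of finger is roughly the following (a Python sketch; the user and host are just examples):

    import socket

    def finger(user: str, host: str, port: int = 79) -> str:
        # The whole protocol: connect, send the query line, read until EOF.
        with socket.create_connection((host, port), timeout=10) as sock:
            sock.sendall(user.encode("ascii") + b"\r\n")
            chunks = []
            while data := sock.recv(4096):
                chunks.append(data)
        return b"".join(chunks).decode("utf-8", errors="replace")

    print(finger("someuser", "plan.cat"))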
We now all go about our daily bakery shopping in the fancy sports car instead of a small rusty bike.
And people have differing opinions whether this is the best timeline to have...
Finger on the other hand would be a very narrow API for a certain service without ANY of the flexibility of HTTP. No custom headers, no Basic Auth, not even the difference between GET and POST.
So yeah, maybe the original comparison between finger and http is already flawed, but unless HTTP/2 gives you something that HTTP/1.1 can't do then HTTP/1.1 is still perfectly valid, and probably will be in 10 years, at least for low-traffic situations. (finger should be reaaally low-traffic in comparison).
That is the point. The flexibility is not free. Every conditional doubles the number of possible execution flows. This brings complexity. To some extent it is mitigated by economies of scale, because now everyone uses HTTP for something, so collectively we get that more complex code more polished. But there is no such thing as bug-free code - so every participant will have to deal with the patch cycle and generally with preventing bitrot.
For a small, well-bounded custom protocol which solves a well-defined specific use case, one can hope to write a dependency-free implementation that can be tested, works well enough, and can be left alone.
I recently was at an event with a few thousand wifi devices.
About a third of the internet traffic was updates.
I think I like the idea of a "spec" inside the same "protocol" more. For example if you understand HTTP you can quickly reason about any spec of a REST API that's done with JSON payloads without caring for the HTTP wrapper layer, just as you don't care for TCP around it.
Yeah, reusability and layering of engineering knowledge is useful - it makes dealing with complexity easier.
But it also makes it easier to build complexity without spending time thinking of simpler solutions. Because time to market.
And thus we have exhibits at https://mobile.twitter.com/internetofshit
That's essentially where every other protocol is. HTTP gets all the energy, active development, updates regularly rolled out. Meanwhile if you tried to make, say, FTP more rational by transmitting data over the control connection, nobody would be able to use it since all the servers are still running 25 year old wuftpd with the minimal patches to not get pwned and they have no interest in updating (if they even remember the servers exist).
Personally I think we've lost something valuable when only one protocol exists, but I'm one of those Luddites who still reads email with Thunderbird so what do I know.
But I still would like to see efforts happening in applications like Nokia's mobile web server. Another such effort from a decade ago was Opera Unite https://linkdekho.in/1e2vAy
Finger on the other hand does one thing, and does it well.
I would argue that we need more small protocols and fewer kitchen-sink protocols.
The drivers for kitchen sink protocols are not necessarily technical, they could be financial. As a bigger protocol gets more name recognition it becomes a less risky pitch in the eyes of management to adapt the protocol instead of going with a smaller one, or inventing your own. This adaptation sometimes leads to the bigger protocol getting extensions and growing even larger. The other reason HTTP is often used is security policies - the port is open, so no additional ports are needed, and security is already set up to scan HTTP traffic.
Outside of the tech world, it pretty much did.
Honestly, I avoid .doc and .docx as well as most MS Office formats whenever possible; at this point I just have LibreOffice installed and use its native formats: https://en.wikipedia.org/wiki/OpenDocument
Not only that, but most of these formats are problematic on a technical level - compared with something simpler like Markdown files or any other text-based format, looking things up is needlessly hard, so you can forget about easily searching for some text within a directory of hundreds of such files on a server without some niche tool.
 Not a compliment
Most recently, HTTP/2 has been a concern in that HTTP servers and clients have required substantial work to implement it, but it is easy to argue that it is a separate protocol from HTTP: even though it shares the same ports, a client that doesn't speak HTTP/2 when talking to an HTTP server will never have to deal with it.
HTTP/1.1 wasn't like that, and neither was HTTP/1.0.
The original HTTP "0.9" was really a lot like finger: You would open a port, send a single line identifying the resource you wanted, and then the content would come back, and the connection would close. HTTP/1.0 added headers and some (text-based) framing to this, and fortunately there weren't many clients to upgrade.
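Concretely, an HTTP/0.9-style exchange was roughly the following; a sketch only, since most modern servers no longer honor the version-less request line:

    import socket

    with socket.create_connection(("example.com", 80)) as sock:
        sock.sendall(b"GET /index.html\r\n")   # the entire request: one line, no headers
        body = b""
        while data := sock.recv(4096):
            body += data                       # no status line, no framing: EOF ends the body

    # HTTP/1.0 added a version, a status line, and header-based framing:
    #   GET /index.html HTTP/1.0\r\n
    #   User-Agent: demo\r\n
    #   \r\n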
Sometime in HTTP/1.0 people started talking about "pipelining" and the need to change the protocol to support this. The "Connection" header was introduced to identify this change - no other header had ever before meant anything to the web server (except when acting in some capacity as an application header), and misunderstanding the Connection header led to hung clients and slow response. This was made more annoying when the defaults changed for HTTP/1.1 -- now the "new" protocol was the default, and thus hung even more clients. I personally find this very funny because there is absolutely no need for a "pipelining" protocol- sockets are actually quite cheap, but most of the http server implementations and most of the http client implementations were badly written, and it may have been difficult to do better (assuming they knew how to do better) -- and so regardless, what was once an HTTP-compliant implementation was suddenly not.
HTTP/1.1 also introduced an "Upgrade" header, which was a kind of "trap door" to add extensions -- hopefully to avoid this kind of problem in the future, but it is complex, and many HTTP implementations simply added support for the "Connection" header and were fine for a couple of decades, whereas today we are still shaking out clients that don't support Upgrade properly (and never noticed, because servers vary on when they use it).
These "extensions" are the sort that everyone had to cope with- and because the protocol was carelessly defined, it was easy for implementations to get it wrong in a subtle way. Most of the other extensions (e.g. DAV, CONNECT, etc) are much easier to ignore simply because they're more "obviously" an extension.
> I think HTTP only really won because of those (HTML/CSS/JS).
HTTP won for a lot of reasons, and being easy to implement "mostly (or sufficiently) right" is a huge factor that I don't think should be ignored: Yes, many clients got it wrong and noticed years later, but "fixing" those broken clients was pretty easy, and the fact that people don't have to start over to gain increased compatibility or features is attractive in a way that should be studied by protocol designers trying to invent the next amazing thing.
This can't be stressed enough. Even many well-known sites have a setup where their internet-facing server talks HTTP/2 but the backend is HTTP/1.1.
This protocol downgrade into the backend opens you up to a world of pain like cache poisoning and request smuggling, which are also really hard to detect unless you know what you're looking for. And seeing how common it is, I wonder if it wouldn't have been safer to not call it HTTP/2 but a totally different name, just so people understand the danger they are in by thinking that there is any kind of safe interoperability between them.
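For anyone who hasn't seen it, this is the shape of the classic HTTP/1.1 desync; the HTTP/2-downgrade variants differ in detail but exploit the same kind of framing disagreement. Purely illustrative, with made-up host and paths:

    # If the front end frames this body by Content-Length (30 bytes) but the
    # back end honors Transfer-Encoding, the back end sees the chunked body end
    # at "0\r\n\r\n" and treats the rest as the start of a *second* request.
    smuggle = (
        b"POST /search HTTP/1.1\r\n"
        b"Host: app.example.com\r\n"
        b"Content-Length: 30\r\n"
        b"Transfer-Encoding: chunked\r\n"
        b"\r\n"
        b"0\r\n"
        b"\r\n"
        b"GET /admin HTTP/1.1\r\n"
        b"X: y"
    )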
Only a few of those existed in the 1.0 spec. The rest evolved in practice. You can add your own without asking anyone else. Libraries don't have to be changed. Your payloads are your business. Encodings are flexible and negotiable.
> i think HTTP only really won because of those.
It's proven to be a predictable, stable, extensible, compatible, generic information exchange protocol. If it's won, it's because of that.
For example, WebDAV is a set of extensions to HTTP.
And in my opinion, cross-origin resource sharing (CORS) headers could maybe be considered an extension too.
It is a protocol for a specific purpose, and because it has that specific purpose, it was used and desired for that. HTTP can do it, but without the constraints; of course nobody will use it to reinvent finger.
From an earlier post:
NBS TIP: 301-948-3850
I dialed it enough times that I still remember it. Much thanks to Bruce of "Bruce's NorthStar" BBS in Virginia for that phone number. 
MIT-MC: @L 236
MIT-AI: @L 134
MIT-DM: @L 70
MIT-ML: @L 198
Anyone remember how to do a TIP-to-TIP link, as documented on page 5-4 of the "Users Guide to the Terminal IMP" , by connecting an input and output socket of one TIP to an input and output socket of another TIP, through an unsuspecting host, so you could chat back and forth directly between two TIP dial-ups, without actually logging into the host?
It went something like @HOST #, @SEND TO SOCKET #, @RECEIVE FROM SOCKET #, @PROTOCOL BOTH, making sure the sockets were different parity so as not to violate the Anita Bryant clause with homosocketuality. 
You could also add the octal device port number of any other TIP user on your same TIP after the @ and before the command, to execute those commands on their session. (See page 5-7, "Setting Another Terminal's Parameters".) BBN wrote such great documentation and would mail copies of it for free to anyone who asked (that's how I got mine), you couldn't even call it security by obscurity!
The "ARPANET" episode of "The Americans" really missed the boat about how easy it was to break into the ARPANET. I didn't even have to kill anyone!   Makes me wonder about the part about squeezing your anus... 
What was your uname?
True, most finger implementations are trivial. But there is nothing stopping you from creating a finger daemon which does something dynamic based on the query the user sends. The same set of vulnerabilities is possible. It is also possible to write a simple web server for a single purpose without scripting, which is similarly secure to a finger daemon. Also, finger clients are relatively secure as they don't do any interpretation of the data (which might mean that they don't do any validation, which would allow console injection with escape sequences...), but there isn't anything stopping you from sending the response in HTML. Also, you can write an equally secure HTTP client for that tight use case.
In addition, we have tons of tools to work with HTTP for debugging, proxying, caching, filtering, ... and none of those for finger (which of course is a response to nobody using finger and everybody using HTTP), all of which allow better handling.
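As a sketch of the "dynamic finger daemon" point: the whole thing fits in a page of stdlib Python. The port, the query handling and the /proc/uptime trick (Linux-only) are all invented for illustration:

    import socketserver

    class FingerHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # Read the one-line query, compute a response, write it back, done.
            query = self.rfile.readline(512).strip().decode("ascii", "replace")
            if query == "uptime":
                reply = open("/proc/uptime").read()        # "dynamic" content, Linux-only
            else:
                reply = f"No plan for {query!r}\r\n"
            self.wfile.write(reply.encode())

    if __name__ == "__main__":
        # Real finger is port 79; use an unprivileged port for the toy.
        with socketserver.TCPServer(("0.0.0.0", 7979), FingerHandler) as srv:
            srv.serve_forever()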
So we went through a dark age where "Just open a socket and have at it" couldn't fly over WAN, which means there wasn't much point doing it at all.
QUIC will fix this. You can treat it like a bunch of TCP streams and UDP datagrams that are Just Encrypted.
I'm thinking about doing a toy IRC knock-off with QUIC. Having TLS standardized in the transport layer means less work for the app, and having multiple streams and datagrams means that odd stuff like file transfers or even voice chat could be tacked on without opening new ports or new TCP streams. Matrix is cool and all, but I want something you can just throw down for a few friends and some bots with a shared password. Matrix homeservers are too much work for a one-off.
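For what it's worth, the client side of such a toy could look roughly like this, assuming the third-party aioquic library (the API names come from its asyncio helpers and are worth double-checking against its docs; the host, port and ALPN string are made up):

    import asyncio
    import ssl
    from aioquic.asyncio import connect
    from aioquic.quic.configuration import QuicConfiguration

    async def main():
        config = QuicConfiguration(is_client=True, alpn_protocols=["toy-irc/0"])
        config.verify_mode = ssl.CERT_NONE                # self-signed certs among friends
        async with connect("chat.example.org", 4433, configuration=config) as conn:
            reader, writer = await conn.create_stream()   # one QUIC stream = one channel
            writer.write(b"JOIN #general\n")
            await writer.drain()
            print(await reader.readline())

    asyncio.run(main())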
My old New Year's resolution was always "I'm finally gonna get into web dev". But I don't like web browsers. My new resolution will be "I'm gonna do web stuff, without web browsers."
I see at least 2 here which have commits from 2021.
To be blunt though, I don't like C. It does low-level better than many languages, but C's idea of high-level is too low.
If I had to use QUIC from C, I would pick a Rust or C++ library, write a wrapper that makes it basically into a single-threaded epoll knock-off, and then call that. If I had to do it in pure C, I'd give up.
For my pet projects, I want to use the tools that make me most comfortable, where I can slide between low and high exactly when I want. Web browsers struggle to go low, and C struggles to go high.
I hate C as a language, but I think it makes for a great API. Any language under the sun can bind to C. To bind to Rust or C++, you'd need to basically be a Rust/C++ compiler. From that perspective, I don't have a problem with having a Rust library with a C interface, except it might be harder to maintain for distros. gcc/g++ are everywhere at least.
What're your pain points in browsers?
tl;dr: I think web browsers have a Pareto problem. Building inside a web browser is pretty all-or-nothing. Their interfaces make the most common 90% of cases easy but the other 10% of interesting niche stuff totally impossible, or too slow to be useful. Just like how old PC games would play music by triggering the CD drive's "Just play this track" feature, browsers are fine for doing super-high-level stuff exactly the way most people want to do it. But if you want to do anything with that audio other than stream it unmodified straight to the speakers, suddenly the APIs let you down. And the whole time, you're taking on some of the biggest dependencies in history. There are two companies that make full-sized web browsers. One is a non-profit constantly struggling for funding while making awful PR gaffes and being hypocrites about privacy. The other is an openly evil advertising company.
I've done really basic stuff, like I learned how HTTP works, I wrote a few web apps with Rust, I made a game with TypeScript and WebGL. But it just never clicked for me.
My comment is missing a little context. There's basically two different niches:
1. If I want more than 2 or 3 people to use it, it has to run in a web browser. I don't mind doing WebGL and putting it on a static site. I can always do a native port if I feel like it. All the games I've made can be modelled as "Read keyboard input and run OpenGL commands", and browsers are enough for that.
2. If I want to really have fun with something, just for myself, web browsers are too big of a dependency and the restrictions are too tight. Sure, they'll get QUIC as WebTransport soon (IIRC), but I'm always gonna be limited by the dependencies.
I actually like local web UIs. I think because that offers flexibility. If I want to send a video stream from a browser, sure I "just" have to use WebRTC. But how do the WebRTC servers work? I haven't found satisfying documentation. What if I want to start with a webcam stream and then compose graphics into it before encoding? I know browsers have Skia, but is that exposed to me? Or is it like so many bad "Play an audio file" APIs where it breaks down as soon as I want to play a _remote_ audio file or _stream_ an audio file or _transcode_ an audio file.
So (sorry for the meandering) back to my toy IRC idea.
I can do that with HTTP and long-polling and it would kinda work. But it would just be a crappy Matrix clone. What I really want is to show off "Look, I think QUIC is going to bring back custom protocols, QUIC has not come to abolish the word of TCP but to fulfill it, and here's how it looks."
And I could do that with Electron, but like the "Let me do everything for you and make the 10% of niche cases impossible" API that can only play audio or send a webcam stream without any compositing, Electron presumes I'm going to have a GUI, and I'm also going to run it on the same computer.
Whereas if I make the first prototype UI with curses or a local web UI, I can forward it over SSH easily or run it when a GUI isn't available.
It sounds a lot like Gemini, but I think Gemini is a little misguided. It sounds like most of its proponents think that you can control a protocol by just having very noble goals. And it sounds like they are opposed to HTTP and QUIC not because the protocols are bad or even hard to implement (In the case of HTTP. QUIC actually is hard to implement), but just because bad entities use them. I think it's dangerous to believe that powerful tools are only for bad purposes. It will leave good people de-powered.
Any thoughts on fast experimental protocols like warp data transfer or fast and secure protocol? I know they're not exactly the most open things or well supported in terms of what you're looking for, but I've been really wondering when we're going to start seeing pressure to relieve network congestion using stuff like this. I get that part of the idea of QUIC is generally to shift the optimization of network traffic from kernel-space (for example fq-codel or CAKE) into user space, but does it offer wider improvements on bandwidth usage outside of that?
And the phone thing was that on modern processors listening to the network is a serious battery sink.
Chris Torek had hacked our version of fingerd (running on mimsy.umd.edu and its other Vax friends brillig, tove, and gyre) to implement logging, and while he was doing that, he noticed the fixed size buffer, and thoughtfully increased the size of the buffer a bit. Still a fixed size buffer using gets, but at least it was a big enough buffer to mitigate the attack, although the worm got in via sendmail anyway. And we had a nice log of all the attempted fingerd attacks!
The sendmail attack simply sent the "DEBUG" command to sendmail, which, being enabled by default, let you right in to where you could escape to a shell.
Immediately after the attack, "some random guy on the internet" suggested mitigating the sendmail DEBUG attack by editing your sendmail binary (Emacs hackers can do that easily of course, but vi losers had to suck eggs!), searching for the string "DEBUG", and replacing the "D" with a null character, thus disabling the "DEBUG" command.
But unfortunately that cute little hack didn't actually disable the "DEBUG" command: it just renamed the "DEBUG" command to the "" command! Which stopped the Morris worm on purpose, but not me by accident:
I found that out the day after the worm hit, when I routinely needed to check some bouncing email addresses on a mailing list I ran, so I went "telnet sun.com 25" and hit return a couple times like I usually do to clear out the telnet protocol negotiation characters, before sending an "EXPN" command. And the response to the "EXPN" command was a whole flurry of debugging information, since the second newline I sent activated debug mode by entering a blank line!
So I sent a friendly email to email@example.com reporting the enormous security hole they had introduced by patching the other enormous security hole.
You'd think that the Long Haired Dope Smoking Unix Wizards running the email system at sun.com wouldn't just apply random security patches from "some random guy on the internet" without thinking about the implications, but they did!
1988 – Released the Morris worm (when he was a graduate student at Cornell University)
2005 – Cofounded Y Combinator"
Time to create IP over Push Notifications (:
I know it's not like a web server on the phone or anything, and likely questionable to mention it at all (since I made it), but I made a thing that lets you send notifications to a phone (or desktop) via curl with a simple PUT or POST. It's definitely not a cool protocol since it's simple HTTP, but it's in the spirit of other Unix tools since it's just one tool doing one job.
In practice (2 SSE clients): all clients are notified.
Yes, you are correct: it's the nature of pub-sub that all subscribers are notified if a message arrives on a topic.
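For the record, publishing to such a topic is just a plain HTTP POST with the message as the body; subscribers listening on the topic (e.g. over SSE) all receive it. A sketch with Python's urllib, with a placeholder server and topic:

    import urllib.request

    req = urllib.request.Request(
        "https://notify.example.com/mytopic",   # placeholder server + topic
        data=b"Backup finished",                # the notification text is just the body
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status)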
A. People do still come up with protocols. They've just moved up a level of abstraction. Why deal with the problems http already solves if you don't need to?
B. We now have big enough actors (corporations) that there is less incentive to unify, though even this isn't entirely clear cut. A lot of companies do seem to be trying to create standards for things like IoT devices, with some success.
C. Web applications are the way most applications are used on desktop nowadays. The creator already needs to eat the cost of hosting the servers, so you might as well go for control and monetization over delving into making peer to peer work.
These days HTTP is used for everything: server-to-server API calls, binary data transfer, IPC, etc. A lot of these things get implemented on top of HTTP, though. HTTP is used much more as a transport-layer protocol now, an abstraction layer on top of TCP.
How did we end up here? It appears that there was an organic need to build an abstraction layer that's easier to work with than TCP, which is probably seen as too low-level and much more difficult to work with. Browsers supporting HTTP out-of-the-box with AJAX made this a widespread practice.
These abstractions come at high costs though.
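To see what the abstraction actually buys: a raw TCP socket is just a byte stream, so you end up writing your own message framing. A sketch of the kind of plumbing HTTP's request/response framing spares you from (at the cost described above):

    import socket
    import struct

    def send_msg(sock: socket.socket, payload: bytes) -> None:
        # 4-byte big-endian length prefix, then the payload.
        sock.sendall(struct.pack("!I", len(payload)) + payload)

    def recv_exact(sock: socket.socket, n: int) -> bytes:
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed mid-message")
            buf += chunk
        return buf

    def recv_msg(sock: socket.socket) -> bytes:
        (length,) = struct.unpack("!I", recv_exact(sock, 4))
        return recv_exact(sock, length)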
It’s been great fun hosting parties and projecting visuals across the room onto an opposite wall. Then passing around a very old Android phone to go through the presets.
I highly encourage others to do the same with their favorite applications; it’s fairly straightforward and makes them a pleasure to use.
Eternal September and security concerns.
More than finger, I so miss the times of USENET and the user experience of its hierarchical system of groups with threaded, text-only pull messages (I accessed it from gnus (the emacs newsreader) via my HP 9000 715 running HP-UX 9.03).
Per-message read/unread status in practice requires keyboard navigation (or as an inferior alternative, paging with read/unread tracking like in web forums), which doesn’t work for mobile.
The lack of read/unread tracking is also why we don’t have long-running discussions on HN.
The only remaining medium with per-message read/unread tracking we currently have is mailing lists.
Because of corporate network firewalls. Make no mistake, they would gladly break HTTP if they could, but it became too important, so now everything has to piggyback on top of HTTP.
Enforce the end-to-end principle and new protocols will flourish.
Why aren't you doing it?
He sure picked the right woman.
The Web being the Web, there's more, courtesy Wikipedia: "He met his wife, Katherine Anna Kang, at the 1997 QuakeCon when she visited id's offices. As a bet, Kang challenged Carmack to sponsor the first All Female Quake Tournament if she was able to produce a significant number of participants. Carmack predicted a maximum of 25 participants, but there were 1,500. Carmack and Kang married on January 1, 2000, and planned a ceremony in Hawaii. Steve Jobs requested that they would postpone the ceremony so he could attend the MacWorld Expo on January 5, 2000. Carmack declined and suggested making a video instead."
I ran down the source for that. It's Carmack himself, and the full story makes Jobs look even worse: https://www.facebook.com/permalink.php?story_fbid=2146412825...
Oh. That settles it.
> Anna Kang left Id a couple weeks ago to found her own company - Fountainhead Entertainment.
> It wasn't generally discussed during her time at Id, but we had been going out when she joined the company, and we were engaged earlier this year. We are getting married next month, and honeymooning in Hawaii. At her thoughtful suggestion, we are shipping a workstation out with us, so I don't fall into some programming-deprivation state. How great is that? :)
You could also get the schedule for movies shown on campus with `finger @lsc.mit.edu`. That finger server is actually still running but looks like it's not being updated.
Or maybe they don't have any events right now due to COVID...
(Note the HTTP link. The HTTPS cert is expired. But even if you bypass that warning I don’t think it’ll ever work due to other certificate errors. On almost all .mit.edu sites HTTPS is broken if you don’t have an affiliate client-side cert installed...)
As you might know, finger is an old protocol (actively used well before my time) which in essence showed information about the users on a server running a finger daemon (usually a Unix-like system).
As I understand it, when you queried for information about a user, a piece of the information you got back would be the contents of a ".plan" file in the user's home directory.
In this file a user could provide what we now call "status updates", which would then be promulgated by finger. You might have put your location, or what you were working on.
To avoid doubt, you can do all of this on the website and don't need to actually use the finger protocol at all. But for true nostalgia you'll need to use the finger command - which works!
you would write your status (or what you plan to do or whatever) into a .plan file in your home directory,
and you could use the finger command to query the contents of that .plan file from other users on the machine. or on any other accessible machine.
plan.cat appears to be an attempt to make that feature accessible through the web, complete with the ability to create an account and add your own .plan file.
not what i would go for. i would much prefer something similar that i put on my own webpage. (well, technically, all it would take is to agree on a standard url like https://my.home.page/plan or something like that.)
Through the Hacking Glass:
Mine usually had snarky movie or TV quotes. Usually from MST3K, Babylon 5 or Army of Darkness.
Almost 30 years old.
Edit: fix typo and now I realize these texts are returned according to the browser's locale. I'm Catalan, so that's why I see them in Catalan :)
That being said, fundació.cat is allowing the sites to exist and doesn't seem to care that much either about what's going on in their domains (I sent them an email once asking for a list of all / the_most_popular .cat sites to find sites in Catalonia worth promoting, and they don't even have that kind of information available), so if they don't care themselves, what can lowly citizens even ask for.
 (go to specification 12) https://itp.cdn.icann.org/en/files/registry-agreements/cat/c...
so I wrote a time tracker for it following a simple format that would calculate roughly how long I spent on each project.
+ new task
Is that it? Is there somewhere the file format is defined?
Note that plan files aren't exactly a micro-blogging platform like some people seem to treat them as. To me anyway, it's a way to capture work and todos in a simple digital journal.
This is the standard I followed for myself...
This is a rolling plan file where things get moved on completion against a specific date (no backtracking, sliding tasks)
- is todo
* is done (for grepping)
bugs are tagged [bug]
other tags can be used as [tag]
~~is a cancelled~~ task or bug
// is a comment or thought
@next is upcoming work
@later is backlog
if it is not done, it's in next, later or ~~cancelled~~
date is YYYY-MM-DD-ddd followed by a rough recorded timesheet
:0000-0000-XXm, as in :start-end in military time, minus XX mins of AFK
a timesheet that is parsable by another program to get time spent /week /month /total (see the parsing sketch after the example below)
### 2021-01-10-sun :0900-2100-50m
* ditched ~~passport~~ [wtf]
* auth via bcrypt and jwt tokens
* new vue app, trying water.css - nice [noteworthy]
* JSDoc is awesome, makes typescript a lot less needed [noteworthy]
* sign in with email / pass against db
* register against db (not in vue yet)
* validating jwt tokens properly
* clean up package.json
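A rough Python sketch of the kind of parser referred to above, assuming exactly the header shape in the example; this is a guess at an implementation, not the author's actual tool:

    import re

    HEADER = re.compile(r"^### (\d{4}-\d{2}-\d{2})-\w+ :(\d{2})(\d{2})-(\d{2})(\d{2})-(\d+)m")

    def minutes_worked(line: str):
        # Returns (date, minutes on the clock minus AFK) for a day header, else None.
        m = HEADER.match(line)
        if not m:
            return None
        date, h1, m1, h2, m2, afk = m.group(1), *map(int, m.groups()[1:])
        return date, (h2 * 60 + m2) - (h1 * 60 + m1) - afk

    print(minutes_worked("### 2021-01-10-sun :0900-2100-50m"))
    # ('2021-01-10', 670)  i.e. 11h10m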
$ cat .plan
To be the only person on this system who uses this obsolete feature of finger.
Was finger in Windows 10 as well?
finger @plan.cat|wc -l
If I remember correctly, you could also finger some of the soda machines on campus and check their inventory.
Login: jcs Name: Joshua stein
Directory: /jcs Shell: /bin/plan.cat
Last login Wed Nov 17 03:15:42 2021 UTC
Mail forwarded to firstname.lastname@example.org.
Creating an IMAP client for and on a System 6 Mac, recording videos of its
development at https://jcs.org/system6c
Working on a WiFi RS232 modem thing with integrated PPP and an SSL-stripping
SOCKS5 proxy embedded for my Mac
If anybody manages to get a working login with curl, I'd love to see the magic incantation you used.
T=`mktemp` && curl -so $T https://plan.cat/~YOUR_USERNAME && $EDITOR $T && \
curl -su YOUR_USERNAME -F "plan=<$T" https://plan.cat/stdin