There is a key point which articles like this fail to understand about Gemini. Gemini draws our attention to the distribution of responsibilities between clients and publishers on the web, and the implications of that distribution, and asks us to question this balance in a new medium. On the web, the publisher is entirely responsible for the content (HTML), the layout (HTML+CSS), and the presentation (CSS). But with Gemini, the publisher supplies only the content, and the client (or rather, the user agent) deals with the layout and presentation. This allows clients to work in a huge variety of situations different from yours - some people render gemsites for ereaders or for print, some people "render" them with TTS as homebrew podcasts to listen to on the train, and others rely on screen readers to browse - and the constraints of Gemini provide all of them with a first-class experience that the web cannot match.
This is why the markup format, gemtext, is heavily constrained. You should try to express yourself within the constraints of the medium. The medium I'm using now is constrained, too - I can't add inline images or links on Hacker News, but we all seem to have found productive uses for this site nonetheless. And would HN really be better if we could write arbitrary HTML/CSS/JS in our comment bodies?
Personally I think the Gemini team's approach has been to throw out the baby with the bathwater. This is really unfortunate because the world needs a system which allows for lightweight pages where the user has absolute control over what is fetched, what is sent, etc.
What ideas do you have on getting there from here?
The great bugbear of the Web has been enforcing standards-compliant markup. We tried that. Authors didn't like it, and we got quirks modes with quirks on their modes' quirks' modes' modal quirkiness....
I personally don't understand why they couldn't just use Markdown or simple HTML 3.2.
As an aside, there's honestly not even anything wrong with HTML 4.0 as a format. Up until the beginning of the 2010s, it was perfectly practical to create a third-party browser implementation supporting CSS 2 and HTML 4.
> The problem is that deciding upon a strictly limited subset of HTTP and HTML, slapping a label on it and calling it a day would do almost nothing to create a clearly demarcated space where people can go to consume only that kind of content in only that kind of way. It's impossible to know in advance whether what's on the other side of a https:// URL will be within the subset or outside it. It's very tedious to verify that a website claiming to use only the subset actually does, as many of the features we want to avoid are invisible (but not harmless!) to the user. It's difficult or even impossible to deactivate support for all the unwanted features in mainstream browsers, so if somebody breaks the rules you'll pay the consequences. Writing a dumbed down web browser which gracefully ignores all the unwanted features is much harder than writing a Gemini client from scratch.
Ah, but just for completeness, despite my own fondness for the idea of a markdown-web, their FAQ states:
> 2.9 Why didn't you just use Markdown instead of defining text/gemini?
> The text/gemini markup borrows heavily from Markdown, which might prompt some people to wonder "Why not just use Markdown as the default media type for Gemini? Sure, it's complicated to implement, but like TLS there are plenty of libraries available in all the major languages". Reasons not to go down this route include:
> There are actually many subtly different and incompatible variants of Markdown in existence, so unlike TLS all the different libraries are not guaranteed to behave similarly.
> The vast majority of Markdown libraries don't actually do anything more than convert Markdown to HTML, which for a Gemini client is a needless intermediary format which is heavier than the original!
> Many Markdown variants permit features which were not wanted for Gemini, e.g. inline images.
> A desire to preserve Gopher's requirement of "one link per line" on the grounds that it encourages extremely clear site designs.
> Of course, it is possible to serve Markdown over Gemini. The inclusion of a text/markdown Media type in the response header will allow more advanced clients to support it.
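For concreteness, the "media type in the response header" mechanism that FAQ answer mentions is tiny: a Gemini response is a single `<status> <meta>\r\n` header line followed by the raw body, where for a success status (20) the meta field is a MIME type. A minimal sketch (the `gemini_response` helper is hypothetical, but the header shape is from the spec):

```python
# Sketch of a Gemini success response: one "20 <mime>\r\n" header line,
# then the body. Serving Markdown instead of gemtext is just a
# different <mime>. The helper name is made up for illustration.
def gemini_response(body: str, mime: str = "text/gemini") -> bytes:
    header = f"20 {mime}\r\n"
    return header.encode("utf-8") + body.encode("utf-8")

gemtext = gemini_response("# Hello\n=> gemini://example.org/ A link\n")
markdown = gemini_response("# Hello\n[A link](https://example.org/)\n",
                           mime="text/markdown")
```

A basic client only ever needs to parse that one header line; anything beyond `text/gemini` support is optional.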
My experience of Gemini's "one link per line" deal has not suggested it leads to "extremely clear site designs", but I'm also not a Gopher person so maybe I'm missing The Paradigm.
The argument seems to boil down to "these other formats have many variants or features that we don't want, so we're going to invent something new".
That... just does not make sense. I get that they admit that they do borrow heavily from Markdown, but there's no reason why they can't pick features from Markdown (including inline links and images), put them in their spec, and still call it "gemtext".
The real reason they don't do this is right there: they don't want to. They decided on a weak set of features, leaving out things that most people probably would find useful, and that's that.
I do understand the desire to limit features and extensibility, to avoid Gemini turning into the mess that is the current state of web standards, not to mention the attractiveness of being able to cut off ads and tracking by design. But I agree with OP that they're going about it in a way that all but guarantees they will not be particularly successful. Telling users they can't have inline links or images, and telling developers their creativity in implementation is severely limited isn't going to make you many friends.
And maybe that's also by design. If they want to recreate the internet of the 80s and early 90s, including the "feature" that a very tiny percentage of humanity was on it, maybe this really is the way to do it.
I wish them well, but I don't think this would be an ecosystem that would give me joy (no judgment, and I'm sure the Gemini folks don't care, and that's all fine).
That's a fair argument for not saying, "We're going to make our servers only use a subset of HTTP and HTML." However, it neglects the possibility of supporting a subset of HTML (or XHTML, if they wanted to avoid the parsing complexity of HTML5) on Gemini itself.
At that point I think the choice gets back to their whole goal of
> A client comfortable for daily use which implements every single protocol feature should be a feasible weekend programming project for a single developer.
This isn't a particularly appealing protocol characteristic to me, but YMMV.
Semantic document structure need not have a fixed presentation.
Old-school UNIX manpages written in roff were most often displayed on VT-100 terminals, were designed to be compatible with teletypes, and yet could be printed pretty-formatted in variable-width fonts with true italic, bold, and point sizes.
You can still see this, e.g.:
    man -Tps clear | ps2pdf > clear.pdf
Open that and you'll find a fully typeset document.
Which is kind of the point of semantic markup: it is independent of presentation.
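The roff example can be boiled down to a toy sketch: the source records *what* each piece of text is, and each "user agent" decides *how* it looks. All the names here are made up for illustration, not any real markup format:

```python
# A semantic source: each element says what it is, not how it looks.
doc = [("heading", "The clear command"), ("para", "Clears the screen.")]

def render_terminal(doc):
    """A VT100-ish presentation: uppercase headings with underlines."""
    out = []
    for kind, text in doc:
        if kind == "heading":
            out.append(text.upper())
            out.append("=" * len(text))
        else:
            out.append(text)
    return "\n".join(out)

def render_html(doc):
    """A typeset-capable target: the same source, different output."""
    out = []
    for kind, text in doc:
        tag = "h1" if kind == "heading" else "p"
        out.append(f"<{tag}>{text}</{tag}>")
    return "\n".join(out)
```

Neither renderer needed the source to change; that independence is the whole point.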
Except that you rendered to PostScript, which is a general, stack-based programming language (with a bunch of domain-specific font-manipulation code baked into it as well).
Thus demonstrating my point that the moment you go above a VT100 in presentation quality, your complexity goes exponential.
The point is that the same markup source produces different output endpoints depending on the device and capabilities available. Straight ASCII text, possibly with ANSI sequences, for a terminal. Fully typeset output for a raster display or printer.
What you're raising as an objection is actually the entire point of my example.
The complexity of the markup isn't relevant. The complexity of the renderer is.
And you are grossly underestimating the complexity of even an ASCII typesetting engine. RUNOFF/nroff/troff/ditroff/groff isn't even easy for a character device like a VT100. The list of developers of those engines reads like a who's who of computing luminaries.
My point as to why rewriting a web browser is hard is that you are developing a "typesetting engine", and developing a typesetting engine is HARD.
In a world where only megacorps have the resources to fully implement web browsers and performant web servers, it is entirely reasonable for Gemini to lessen the complexity of development at the expense of features.
There is a hell of a gulf between Chrome and a gemini client. OP just seems to be lamenting how wide it is. I like simple software, but not when it's too simple. I spend my whole life in a tiling window manager + terminal... but the whole "suckless" thing is too far for me. Gemini does seem a bit too far as well.
If you have a GPU and use a GPU-rendered terminal, then I'd beg to differ. Kitty is much faster than st especially when I'm tailing fast-moving files. suckless tools are certainly simple, but simple is not always fast. They _can_ be fast due to dispensing with aspects of complexity, but when compared to software that is designed to be fast, they're not as fast. One could argue that suckless tools form a good balance of simplicity, ease of interoperability, and speed that come from their ruthlessly simple perspective, but to say it's globally faster or more feature-rich is heavily overstating the truth.
Yeah, it depends on where you need the speed. I use tiling window management, so I constantly spawn new terminals and mostly profit from startup speed, which for st is almost instantaneous (urxvt was close but slower; kitty, alacritty, etc. were all slower still to start up).
I use Alacritty and it spawns new windows as what I would describe as instantly. I'd say I see the window rendered when my finger leaves the key on the keyboard. It may be using a daemon model, I'm not sure.
I've been meaning to move to Foot, because it's apparently _even_ faster, but I haven't gotten around to it yet.
You can dig a hole for planting a sapling with an excavator, with your hands, or with a shovel (and much more). Chrome chose the excavator, Gemini the hands. I'd rather have a shovel. This isn't a binary choice, there's a lot to explore here.
Sure, but those are two extremes. I can't believe that there isn't some middle ground (even if that "middle" is much closer to Gemini than the WWW) that continues to achieve Gemini's goals without sacrificing so much on both the user's and developer's sides.
Once again, in my view, the so-called "limitations" exposed here (hyphens and "long-ass lines") are just problems of imperfect clients. While I can understand the long rant about the lack of inline links and italics and bold, the analysis of how limited a developer is in "expressing his creativity" shows, in my view, that the author has not understood that gemini is *user/information first*: these limitations are there to convey information with a high signal-to-noise ratio, and any "embellishment" is useless and not recommended on gemini.
gemini is designed in these ways to make the content consumable everywhere, separate from its representation
The idea is to be able to read text anywhere, from an arduino to a 64qbit Qwave sunBreaze, where "reading" also includes blind and deaf people (via text to speech), people who do not care about the possible "text justification mania" that the author seems to care a lot about
Finally, I think that gemini together with small companion protocols and file formats (e.g. amb[1], uxntal/subleq) are fully compatible with permacomputing concepts and this, in my naive but very tired view of the current state of the internet, is enough to understand what is good and what is bad.
> Finally, I think that gemini together with small companion protocols and file formats (e.g. amb[1], uxntal/subleq) are fully compatible with permacomputing concepts and this, in my naive but very tired view of the current state of the internet, is enough to understand what is good and what is bad.
In practice, where is any of this? Where's the screen reader support? How many blind and deaf people are using Gemini? This is like building a bike lane in a truck parking lot; it makes me doubt anyone will use it and makes me think that it's air cover for the truck parking lot folks to make their parking lot seem more accessible than it actually is.
What is "permacomputing"? It's not listed in any of the goals of the Gemini project. And why do we need amb or uxntal/subleq architectures? Is there a rational chain of thought here that justifies these ideas which doesn't depend on moral philosophy?
(Note that that's my other problem with Gemini. There's a lot of talk of good, evil, producers, consumers, publishers, and all, but the terms are poorly defined and the motivations hand-wavily justified. None of the actual motivational documents regarding the protocol bring these up either, they are just brought up in discussion _around_ Gemini, which is frustrating because it feels like there's a mismatch between the advertised uses of Gemini (and why a user would care) and the concerns of actual Gemini users.)
It's hard to take that seriously given the limitations of gemtext. A lack of inline links and images, and basic formatting, is the opposite of "user first".
Personally I'm glad the author doesn't like Gemini, because all the stuff he dislikes about it seem to be stuff I actually like, and if he had his way with Gemini there would be all sorts of stuff he seems to like and I dislike.
I can get behind the complaints about long lines and the lack of italics and bold (there is talk of adding them to the spec, and I hope it gets accepted), but the rest of it is just griping that Gemini isn't HTTP/HTML. I'm especially unsympathetic to people who want inline links: if you read the reasoning behind the restriction, it exists precisely to avoid the things the author of this article argues for. It's there to reduce ambiguity.
It was not designed with the explicit intent of never being updated; it was designed with the explicit intent of being complete one day soon. There will come a time when Gemini will be done, and even now any changes are taken very cautiously, but it is still in development.
OK, but the FAQ states the protocol is already considered feature complete "modulo small changes to remove ambiguity and address edge cases" and that no new features will be considered.
Extra syntax to allow for italics and bold, much less any accessibility features, seem like new features.
Well, if the maintainer decides not to implement any text formatting, there are still options. The spec is explicitly designed to break, not bend, and can always be forked. Or we can do what we have always done, _this sort of thing_; it's not ideal, but it's really no big deal to use markup-style punctuation rather than actual formatting.
It's wild to me that so many technical people, including the author, consider arbitrarily inserting \ns at a certain character length a desirable feature. Adding semantically insignificant line breaks results in a loss of information - if you break lines at 80 characters, you can't distinguish between a semantically significant line break at character 80 vs. just running out of space in the line. It also imposes the writer's preferred character width on the reader: What if I want 132 characters? What if I have poor vision and/or a small monitor and can only fit 60 in my desired font size in the desired portion of my screen? And it necessitates weird rules and/or stricter line-length restrictions to deal with stuff like quoting, e.g. as described in https://useplaintext.email. So it sounds to me like Gemini got this one right, and if your text editor isn't smart enough to wrap lines when the input doesn't contain a literal \n you should probably find a better text editor.
/rant
In terms of the other limitations the author mentioned...yeah, those do seem kind of silly. I guess I don't see the advantage of a custom and even more limited format over, say, Markdown. But of course everyone has their own preferred subset of "important" features and if you just take the union of them all you'd wind up back at something close to the current feature set of the Web.
In the case of Markdown, the extra newlines are there so that you can write for two different kinds of readers at the same time. A text editor will show the unrendered state and a Markdown converter will take it to the rendered state. Both should be readable.
So the newlines aren’t “semantically insignificant” any more than comments are in source code. Just because they’re stripped during the conversion doesn’t mean they’re meaningless in the unconverted state.
Sure, but there is also a long-standing tradition of making text files readable without soft-wrapping, by using line breaks where it makes sense.
The style guides for many programming languages require you to put hard breaks in long lines, for example. Also, old-school mailing lists require this.
Right, the point I was arguing in my top-level comment is that that tradition is misguided and should be done away with. Including in mailing lists and commit messages.
The place where it "makes sense" to have a line break in prose is a function of the reader's desired line width, which is not only unknown to the author at the time of writing, but also varies from reader to reader. If you don't insert semantically insignificant newline characters, any modern application for reading text can soft-wrap long lines to that width. But because arbitrarily inserting newline characters destroys information, the opposite isn't true. If you add \ns to break your lines at 80 characters and I try to read your text in a 60-character-wide terminal, I'll get alternating line widths of ~60 characters and ~20 characters. And I can't just replace all the newline characters at position ~80 with spaces and re-wrap the resulting text myself, because for all I know one of those line breaks could have been semantically significant.
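The information loss is easy to demonstrate with Python's standard `textwrap` module (the widths are arbitrary):

```python
import textwrap

# A paragraph the author hard-wrapped at ~40 columns.
hard_wrapped = (
    "The quick brown fox jumps over the lazy\n"
    "dog while the cat watches from the wall\n"
    "and the birds sing in the morning light."
)

# A 30-column reader who respects the author's newlines gets the
# alternating long/short lines described above.
reader_view = []
for line in hard_wrapped.split("\n"):
    reader_view.extend(textwrap.wrap(line, width=30))

# Only by discarding the author's newlines (and hoping none of them
# were semantically significant) can the text reflow evenly.
reflowed = textwrap.wrap(hard_wrapped.replace("\n", " "), width=30)
```

A soft-wrapping client gets the `reflowed` behaviour for free from a newline-free paragraph; no client can reliably recover it from the hard-wrapped input.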
Line breaks in code are pretty much always semantically significant to the human reader even if they're not semantically significant to the compiler because the relative vertical and horizontal alignment of non-whitespace characters is fixed and intentional. If you resize your browser window or zoom in or out so that the line wrapping in this comment occurs at different places in the text, the readability and meaning of this text is essentially unchanged. That's not generally true of code - even in languages whose compilers ignore whitespace, arbitrarily shifting, adding, and deleting line breaks doesn't preserve meaning (to humans) because programmers generally use whitespace methodically for the sake of clarity and readability.
> Line breaks in code are pretty much always semantically significant to the human reader even if they're not semantically significant to the compiler because the relative vertical and horizontal alignment of non-whitespace characters is fixed and intentional.
What would be cool is a well-behaved autoformatter like Black, or perhaps gofmt or rustfmt (clang-format definitely fails since it's not even idempotent), which automatically reflows code to the width of your text editor (or even shrinks the indentation size) as you resize or split the window, and reformats it back to a standard width upon saving and committing. This requires that formatting to a different width then back to the original is a no-op, which I believe Black and rustfmt can achieve, but is harder to get right than storing the code as an AST on-disk. Also, rustfmt fails to reformat the interior of macro invocations.
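The round-trip property being asked for is easy to state as code. Here is a sketch with prose reflow standing in for a code formatter (a real code formatter is vastly harder; this only illustrates the property itself):

```python
import textwrap

def reflow(text: str, width: int) -> str:
    """Normalize to a word sequence, then wrap. Because the output
    depends only on the word sequence, which reflowing preserves,
    the operation is width-agnostic and idempotent by construction."""
    return "\n".join(textwrap.wrap(" ".join(text.split()), width=width))

sample = "Formatting to a different width and back again should be a no-op."

# The property a resize-friendly formatter needs: passing through an
# intermediate width changes nothing.
round_trip = reflow(reflow(sample, 24), 60)
direct = reflow(sample, 60)
```

Black and rustfmt approximate this for code by normalizing to an internal representation before printing; clang-format, as noted, does not.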
> unwrapped markdown in a line-wrapping text editor
But editing text in a soft-wrapping editor is distracting (to me), as it continually re-wraps the whole paragraph while I edit. In a hard-wrapping editor I can just edit the paragraph and then press a key to re-wrap it.
Treating a paragraph or block as a logical unit is what I am used to. It works in Markdown, HTML, Latex, and basically every word processor.
Treating a single line as a logical unit is just as valid, technically, but it imposes requirements on me, the author, to change my workflow. If I found Gemini more compelling that might be worth it, but as I'm sure you've all noticed, that is not the case.
To clarify, I don't think of it as lines vs paragraphs. The text I'm typing right now doesn't contain any newline characters, but it's still a paragraph. I think of it more as a distinction between paragraphs with line-wrapping points predefined by the writer vs. paragraphs without them. Either way I'm very underwhelmed by Gemini myself so I don't blame you in the least for not wanting to change your workflow to accommodate it.
none of these faults resonate with me. gemini feels like an http/html lite that exposes what is there. that the author of this post wants more is fine, but we have html already for that. the idea of making the content visible and explicit, of not making it rich: it's simply a different philosophy than what this author seems to be seeking.
I tried to get into Gemini. I wrote a server in Elixir. Then I started to write content. The lack of inline links is brutal, and I'm pretty sure it'll keep Gemini from ever growing beyond a curiosity.
Personally, I could never use a platform that didn't support inline links[1]. Simply unthinkable! How could you possibly indicate which part of the text a link was related to?
HN’s lack of inline links is one of its worst features, and comments with large numbers of links are unbearable. Fortunately, longform HN comments are rare. Gemini also sounds more limited than even HN comments, because here you can put a URL like http://example.com inline and it works fine, you just can’t replace the URL with a custom label. In Gemini, it’s more literal than that - a link must be on its own line, not in the middle of a paragraph.
I've noticed that, at least in the client I use (amfora), URIs are not treated any differently from text, and the client can insert a line break in the middle of one, making it hard to select a URI to paste elsewhere.
This is not the client's fault btw, it's part of the spec. Anything not preceded by the magic characters '=>', '#', '`' or '>' is just treated literally.
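That "magic characters" rule is essentially the whole of gemtext's line model, which a sketch makes concrete (simplified: the real spec also treats ``` as a stateful preformat toggle, and list lines start with "* "):

```python
# Simplified sketch of gemtext's line-oriented parsing. Assumption:
# the real spec additionally tracks preformatted mode across lines.
def classify(line: str) -> str:
    if line.startswith("=>"):
        return "link"
    if line.startswith("```"):
        return "preformat-toggle"
    if line.startswith("#"):
        return "heading"
    if line.startswith(">"):
        return "quote"
    if line.startswith("* "):
        return "list-item"
    return "text"

# A URL is only a link on its own "=>" line; inline, it is plain text,
# which is why a client may happily wrap it mid-URL.
on_own_line = classify("=> gemini://example.org/ A link")
inline = classify("see gemini://example.org/ for details")
```

This is what makes a weekend-project client feasible: there is no inline parsing at all, just a prefix check per line.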
The article does mention this a bit, and (after reading quite a bit of Gemini content) I agree:
'The links are awkwardly placed, and the "placeholder" markers (such as numbers or brackets) to connect the text to the link below has not gelled to a standard.'
It's like hashtags in tweets: some ad-hoc community standard will eventually emerge, and it will be inferior to simple inline links. I guess inline links make things harder to parse, so I understand why they left them out, but inline links are HTML's killer feature. The rest is fluff!
Following an inline link on Wikipedia, I know what to expect: basically a definition of the linked concept. External links, on the other hand, I consider more like footnotes at the end of a book chapter or an article. For inline links taken to the extreme, have a look at the 1999 website a-blast.org / assoziations-blaster.de.
I really don't like the popular notion in the Gemini project that developer-centric software is somehow _less evil_ than what the tech megacorps make. Software can be user-hostile and be written by individuals, coops, collectives, or corporations. Software can be user-friendly and be written by individuals, coops, collectives, or corporations. Being anti-megacorp doesn't make your software friendly to use or more effective, nor does being pro-megacorp make it hostile. Gemini is developer-centric and user-hostile while also being written by non-megacorp developers. That's all. Let's stop using "anti-megacorp" to promote software that is developer-centric and user-hostile, when it is simply just developer-centric and user-hostile.
The proof is in the pudding. Most posts on Gemini are all about how awesome Gemini is and ascetic meditations on digital minimalism. Essentially, it's the bouncer at the "plain text cool kids club". I think many of us have an interest in creating non-megacorp-friendly software but the "plain text cool kids club" isn't it.
EDIT: This post has swung wildly in score so there's obviously some interesting thoughts happening here :D
Indeed. For all of the cheerleaders that Gemini has, check how deep their convictions run. Here's an exercise: collect a smattering of names of those who champion Gemini the loudest, and then take a survey of their personal sites on the HTTP web. Do those pages look more like those found on cr.yp.to, or do they look more like the MySpacified web that they purport to be fed up with?
> Opposed to the web's ubiquitous tracking of users
> Tired of nagging pop-ups, obnoxious adverts, autoplaying videos and other misfeatures of the modern web
Mere design you find distasteful wouldn't really signify here. If you're talking about someone who's tracking and got autoplaying video on their site, I'm interested to hear who that'd be!
(For the record, I am not a champion of Gemini, and I like how MySpacey my personal site looks; I just don't think this is a fair take)
I did not offer any feedback about subjective matters of tasteful web design. I did not accuse anyone of including adtech (of the tracking type or not) and autoplaying videos. I don't need to, and it's irrelevant besides. I offered only a very simple litmus test for measuring a commitment to values. I'll go further now, though, and say that I find your response to be not just unfair, and not just deceptive, but deceptive in a really insidious way.
In addition to crafting a suggestive response that tacitly asks the reader to attribute two flawed arguments to me (re "design you find distasteful", and you're being "interested to hear" "who's tracking and got autoplaying video on their site"—two really obnoxious strawmen), you've used selective quoting of the linked FAQ to present a narrow target and paint a picture of a fundamental conflict between my remarks and the essence of Gemini. Extremely bold to do that while in the next breath having the confidence to offer an opinion about what is or is not a "fair take".
Gemini's proponents purport to be interested in a set of values. Proponents defend Gemini's design in terms of those values; the values form design constraints that inform all aspects of the project (including, for example, its markup format). Given that it is possible for Gemini residents to demonstrate a commitment to those values even with vanilla HTML and HTTP (because no one is forcing anyone to make use of any of the Web technologies that Gemini omits), then we should expect them to do exactly that. The litmus test is to ask whether they do that, or whether they expect the downstream recipients of their own work to be comfortable compromising on those very same values when it comes to consuming media over the Web. This is the _entirety_ of the argument I laid out before.
> Gemini's proponents purport to be interested in a set of values. Proponents defend Gemini's design in terms of those values; the values form design constraints that inform all aspects of the project (including, for example, its markup format). Given that it is possible for Gemini residents to demonstrate a commitment to those values even with vanilla HTML and HTTP (because no one is forcing anyone to make use of any of the Web technologies that Gemini omits), then we should expect them to do exactly that.
> you've used selective quoting of the linked FAQ to present a narrow target
If there's a part of the FAQ I've not quoted that commits Gemini's proponents to the value of making websites "look [...] like those found on cr.yp.to", please let me know.
That is: I don't think it's fair to argue from the idea of judging a "commitment to values" that aren't explicitly claimed by the project. That's not suggestion; I'm just saying it. If you can't be explicit about what's been committed to and how the commitment's been broken, then it's not me being "suggestive." If you do want to elaborate on that, I remain genuinely interested to hear it, because I still think it's possible you have stuff in mind that I don't know about. I broadly agree with you that it's fair to judge Gemini folks by their HTTP/S sites. However, for it to have anything to do with "how deep their convictions run", that judgment has to refer to their actual convictions, not just your preferences or mine.
No and that's the point. Gemini is developer-centric and user-hostile. It isn't evil. These are distinct concepts. We can't use good and evil as a way to exonerate bad software.
Coming back after thinking over your comment and this thread for a while.
The Web really just does not have a user-oriented document format presently.
HTML5 via WHATWG / Google / Facebook is advertiser-friendly. The result is horrible both for anyone looking to independently implement a fully compliant, capable Web browser AND for any user actually accessing Today's-Web-As-Designed.
Gemini addresses the dev-unfriendliness ... but still tosses out far too much functionality IMO in the name of simplicity. The basic semantics of HTML5 are actually reasonably sound, but without some external enforcement over styles and complexity, it's simply going to be abused. Having used tablets and e-readers principally for the past six years or so, I much prefer the presentation of, yes, PDF (generally; it's still possible to stuff those up entirely) to virtually anything rendered in HTML by a browser.
At a desktop system, console-mode simple markup remains reasonably easy to consume, and is quite useful for reference and for incorporation into other works (writing / literature study).
I'm not entirely sure where the answer lies, but I don't think Gemini is the promised land.
Possibly a bondage-and-discipline markup format, enforced by ... what exactly I don't know, though a defined set of document formats, and perhaps an independent search platform's ranking systems, might be one approach.
(Yes, addressing the issues of copyright, compensation, and authors integrity might also help, though at this point, stripping the Web of all advertising seems a good start.)
I largely agree with this. A constrained markup format I look to for inspiration is EPUB [1], which uses XHTML and a constrained directory layout, though recent versions have fallen in thrall to HTML5 and have the potential for its attendant complexity. I think a scoped version of EPUB has a lot of promise and is readily supported on many, many devices. Another format inspired similarly is the now-defunct WML [2], which uses an even more constrained XML document to describe layout specifically for constrained devices.
Regardless, I largely feel that constrained markup formats are where the effort to make the web both user and developer friendly again should lie. There is also another defunct format, the AMB [3] which is a very different approach, but its goals and assumptions are a little too far out from the expectations of modern users IMO.
Epub ... is close, though it still hits a few failure points.
On physically-constrained devices (e.g., phones < 6" diagonal), it's about the only reader format that works at all. For larger devices, I strongly prefer a fixed-layout format such as PDF or DJVU.
Designers still foul up ePub far too often. The evolved typography of print took about 500 years to reach present development (though most of the modern elements were in place by the early 19th century). After a long enough period of getting things right, any change is most likely to be in the direction of Schlimbesserung: making things worse whilst attempting to improve them. This includes various font, point, and other stylistic choices. ePub affords too much freedom and authority to designers and not enough to readers.
On desktop platforms, there are few good ePub readers. fbreader on Linux is especially unpardonably poor. (I think Zathura may improve on this, if it does support ePub.)
And I think ePub still doesn't support formulae properly.
Thanks for the AMB and WML references, I'd not seen those yet. Oh ... WML -> WAP, right. I still think that the Web needs to be split up, probably along the lines I suggested some years ago:
These are ultimately different domains with different needs and uses. There's some call for complex and mixed formats .... but far less than their promoters would like us to believe. Advertising must die or be killed.
> On physically-constrained devices (e.g., phones < 6" diagonal), it's about the only reader format that works at all. For larger devices, I strongly prefer a fixed-layout format such as PDF or DJVU.
Interesting, I don't think I have a strong preference here. For textbooks I tend to prefer DJVU simply because of filesize, though DJVU readers aren't quite up to snuff in comparison to, say, evince (at least on *nix; I'm not sure of the state on non-*nix platforms).
> And I think ePub still doesn't support formulae properly.
Well, in theory, the latest versions of ePub have support for MathML. In practice... I don't know. I've tried to consume books with formulae on ePub and it's been a little nightmare.
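For reference, the way EPUB 3 supports this is by allowing MathML to be embedded directly inline in the XHTML content documents; whether a given reader actually renders it is the nightmare part. A minimal fragment (the file name is illustrative) for the quadratic formula would look something like:

```xml
<!-- chapter1.xhtml (hypothetical): an EPUB 3 content document is XHTML,
     and MathML may appear inline in the markup -->
<p>The roots are given by:</p>
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block">
  <mi>x</mi><mo>=</mo>
  <mfrac>
    <mrow>
      <mo>-</mo><mi>b</mi><mo>&#xB1;</mo>
      <msqrt>
        <msup><mi>b</mi><mn>2</mn></msup>
        <mo>-</mo><mn>4</mn><mi>a</mi><mi>c</mi>
      </msqrt>
    </mrow>
    <mrow><mn>2</mn><mi>a</mi></mrow>
  </mfrac>
</math>
```

Readers without MathML support typically fall back to whatever alternate image or text the publisher provided, if any, which is where consumption tends to break down in practice.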
> These are ultimately different domains with different needs and uses. There's some call for complex and mixed formats .... but far less than their promoters would like us to believe. Advertising must die or be killed.
There are promoters still? I would have been more sympathetic for mixed formats and flows when the web was younger, but these days, it feels obvious to me at least that the cases do have very different needs. Hell, every time I have to deal with a shadow DOM (which I don't have to do often, luckily) I regret the choices that ended up creating the concept of a shadow DOM in the first place.
Is your sub a place to discuss issues like this? I'd love to think/discuss these sorts of things in a thoughtful environment, and thanks for your exchange.
DJVU and PDF are largely equivalent in that both result in fixed-layout documents, which is what I'm distinguishing here. Both HTML and ePub are liquid and reflow. That's most useful when your display is poorly suited to reading textual documents.
Thanks for the MathML reference.
The subreddit is effectively dead. Reddit is a hostile entity.
The original promise of the Internet and, especially, the WWW was to elide the distinction between publisher and consumer--i.e. developer and user.
The incredible increase in the complexity of the modern internet has almost completely restored that fundamental distinction, if not made it deeper and wider, giving rise to publishers (e.g. Facebook, Twitter, YouTube, etc) and other media companies (e.g. Google) with powers we haven't seen for over a century, if ever.
From the 1990s and, to a lesser degree, early 2000s internet culture--both the zeitgeist and scholarship. I'd post links but ever since Google began heavily favoring publication date over keyword and semantic matching I've found it difficult and often impossible (not for lack of trying) to find the various websites, manifestos, blogs, and magazine articles that I read and re-read back then.
One of my favorite manifestos was about the issue of [the non-existence of] spectrum scarcity, which also hashed out the threat posed by the reimposition of a strong publisher/consumer dichotomy. Alas, I haven't been able to find it again in the past several years.
These days whenever I come across a good article (my metric is an article I return to at least once, either to re-read or cite) I archive the link. Google is now useless for finding anything but the most recent content, and headed in that same direction when it comes to finding substantive content. Another object lesson in the perils of centralization and technological reliance.
> The original promise of the Internet and, especially, the WWW was to elide the distinction between publisher and consumer--i.e. developer and user.
I disagree. I do _not_ equate a "publisher" with a "developer". I'd much rather see a world where publishers are _not_ developers. It's not like all the books I read are written by developers.
A developer is analogous to a publisher because of the deep publisher/consumer dichotomy. Complexity favors scale, and scale requires capital; similarly, complexity favors specialized skills (e.g. developers), which require capital.
Of course, publishers are not actually the same as developers. Few Facebook engineers consider themselves publishers. Heck, Facebook doesn't consider itself a publisher--in the contemporary context a publisher would be a media platform (or "technology" platform). But it's publishers who pay developers, and it's often developers who seek to find themselves in the role of publisher, building their walled gardens, through their startups. Similarly, both publisher and developer are active roles, whereas user and consumer are passive--they exist as largely distinct roles with one primarily existing to enrich the other in exchange for entertainment.
The original promise was in many ways overly simplistic. For one thing, scientists (e.g. CERN scientists) were much better positioned to acquire and utilize certain technical skills as an ancillary part of their job. But the ultimate belief was that by eliding the distinction between publisher and consumer, the erstwhile consumers would be empowered and the erstwhile publishers disempowered. The concern was, as it remains today, about relative control over both the media and the message.
In many important ways it's still relatively simple, as an absolute matter, to throw up a simple HTML site. But the opportunity costs are different, and so we self-sort ourselves into traditional roles. In that way the original promise was even more naive in believing that roles wouldn't bifurcate in a manner similar to how they historically had.
Also, suffice it to say that there are still huge markets where developers work building systems not directly related to publishing. But it's the publishers and their ecosystems that are driving most modern software systems, and they drive those systems in directions that are rarely well suited for the needs of everybody else. Rather, those systems are optimized for a network where media platforms push out highly sophisticated content and users are still largely passive consumers notwithstanding the interactive, social aspect.
> but it's publishers who pay developers, and it's often developers who seek to find themselves in the role of publisher, building their walled gardens, through their startups. Similarly, both publisher and developer are active roles, whereas user and consumer are passive--they exist as largely distinct roles with one primarily existing to enrich the other in exchange for entertainment.
There's no nuance here. You're defining "active" and "passive" as two sides of a binary; one can only be active or passive. See, in my daily life I do several things, like using running water, that I am only partially active in. I can choose when to run the water, but the infrastructure and construction of the apparatus under my control was built by someone other than me. So is this active or passive? Does my relationship to running water need to change if I'm happy with it? Why? Can we define "passive" in a philosophically consistent way?
> The original promise was in many ways overly simplistic. For one thing, scientists (e.g. CERN scientists) were much better positioned to acquire and utilize certain technical skills as an ancillary part of their job. But the ultimate belief was that by eliding the distinction between publisher and consumer, the erstwhile consumers would be empowered and the erstwhile publishers disempowered. The concern was, as it remains today, about relative control over both the media and the message.
I think this was one of many messages floating around on the early net and web, and you've made a nostalgia-oriented strawman around it. The ideology itself seems so simplistic as to not stand the test of a basic philosophy course. If people could be neatly slotted into "producers" and "consumers" the world would be a very easily legible place, but it's not.
> Rather, those systems are optimized for a network where media platforms push out highly sophisticated content and users are still largely passive consumers notwithstanding the interactive, social aspect.
Again, what does it mean to be "active", what does it mean to be "passive", why is there a dichotomy, and why is it bad to be "passive"? The terms you use seem to carry an implicit moral philosophy/valence, but I'm not sure why. Under what ethical framework is "passive" bad and "active" good? There's the reality of Gemini to confront as well: this world full of "active" people produces only canned, specific, largely similar content that is read only by other "active" people, while the world of "passive" people outside publishes and consumes media far more diverse and rich, on a multitude of platforms full of your "largely passive consumers".
To tie this back into the protocol itself, is Gemini then a form of religion or philosophy? Because much like people appreciate not being proselytized to by religious figures, people also appreciate not being proselytized about technical religion, and _no_ Gemini advocate ever mentions that there is mostly a religious belief underpinning Gemini rather than a practical one oriented around the goals on its top pages. (I say this as someone who has written apps and pages on Gemini.)