I have nitpicks and criticisms about Gemini, and it's not a protocol that I use very often (if at all), but I also don't see the harm in it. It seems to have a pretty strong grasp of what its goals are and (minor criticisms aside) it does a decent job of accomplishing those goals. Nowadays I tend to compare Gemini more to things like Pico-8 or Markdown rather than think of it as a web competitor -- and as a result I've started to develop a lot more respect for the project. It's not designed to replace the web or revolutionize how people share content, it's designed to be a useful medium for the community that uses it.
All that to say, I'm not sure I understand the criticism I'm seeing here. A Lynx-like browser with proper graphical mouse support and a couple of extras built in is a fine project. And support for Gemini/Markdown gives the browser a clear use-case beyond HTML that means it'll be practically useful for some people; it's not just an experiment in failing to render most websites because it doesn't support CSS, there's a category of content that you know will work, and a community of people making that content.
That's assuming it works well, but if it does -- I don't know, seems like a cool project? It's good to have more Gemini clients.
I think of Gemini less as a competitor to the web and more as ham radio.
Radio amateurs could communicate over the internet with their phones, and it would be faster, simpler, and easier. But radio gives them both a feeling of nostalgia and a niche community to belong to.
I don't use Pico-8, and I find its limitations frustrating; I am never going to make a game in Pico-8. But I do use stuff like Jummbox when I compose music, for basically the same exact reasons other devs use Pico-8 for games, and I love Jummbox's limitations for music composition.
Limitations are a way of fostering community (ham radio enthusiasts all kind of get to know local operators, Jummbox makes sharing song sources in a digestible way super-easy). Limitations also allow you to not care about complications that would be barriers to building things -- I don't want to set up a VST before I start writing music.
So my feeling on Gemini has shifted. I used to be a curmudgeon about honestly kind of nitpicky, shallow stuff, like the ability to mark up inline language transitions. But when you step away from thinking of the project as some kind of attack on the web, it actually makes a ton of sense to build a small community around a very limited format: one that forces everyone in that community to be standardized in how they share with each other, keeps the community a little bit niche so that the people in it are a bit more friendly and personable, and forces its participants to focus pretty much only on what they're writing and nothing else.
I don’t think I’ve ever seen HN react so negatively to a project, but I think that has to do with the image people have of what a “browser” is.
Kristall isn’t very useful for HTTP even though it supports the protocol. That said, its goal is the “small internet” and very basic sites on HTTP will work, which is in line with that goal. Some have complained that even Google search doesn’t work, but Google search doesn’t fall under the umbrella of “small internet”.
When it comes to Gopher and Gemini it’s far more useful, but only a small community of people use either in 2023 so that’s not going to appeal to most people here, even though that’s the primary use case of the project.
> I don’t think I’ve ever seen HN react so negatively to a project
I thought the title was all wrong. It created false expectations. Too many people thought this was an HTML browser. I first came across Kristall when I was looking at GeminiSpace ( https://en.wikipedia.org/wiki/Gemini_(protocol) ) - the gemini protocol (a replacement for http(s)) plus GemText (a replacement for html).
Kristall is a browser for GeminiSpace - first and foremost. GeminiSpace has embraced some aspects of the legacy small-web - specifically gopher and finger. Lots of the browsers in GeminiSpace support all three: GemText, Gopher and Finger.
Some of these GeminiSpace specific browsers also try to support other types of markup languages (html, markdown) but that's not their primary use case.
I can see that Kristall is trying to be more than a Gemini Browser. It could do well here if it were clear about the level of html it is going to support. I'd be very happy with HTML 3.2 ( https://www.w3.org/MarkUp/Wilbur/ ) for example (mid-90s).
Unfortunately, it doesn't even do 'turn of the century' HTML tables very well - something that the lightweight Dillo ( https://www.dillo.org/ ) does well.
As Kristall's level of html support increases, I'd love to be able to include it as a viable html browser (similar to Dillo and a bunch of other older browsers I have installed). As a GeminiSpace Browser, however, it is among the elite. This should have been its intro to HN.
I wonder if it's a time-zone thing with when the link was posted.
I think I remember reacting myself at least somewhat negatively to Gemini first time I heard about it, so on some level I understand how someone's initial reaction could be hostile -- but I also feel like "limited tools producing tight-knit communities and specific community-tailored content" is right up HN's alley, and I am also surprised to see so many people questioning why this should exist.
"It doesn't work for the majority of the web."
So? I assume the project isn't lying, it does actually support HTML. I just don't see how it's a bad thing to have a lightweight browser that handles static HTML; I don't think anyone is advocating that anybody uninstall Firefox over this.
----
I'll kind of go a step further here and give a mildly hot take: nobody should be doing normal browsing on an indie browser in the first place; I consider that a likely security/privacy risk. The vast majority of indie browsers do not have full-featured uBlock Origin support, and they don't have the browser hardening features that something like Firefox ships with by default.
So on some level I feel like "can I use this to read CNN" is not necessarily a great metric to use for an indie project, because if you're reading CNN it should probably be in Firefox with uBlock Origin installed (or something comparable, but I typically advocate that Firefox and its derivatives have better tracking protection than other mainstream browsers).
In contrast, if something loads in this browser, what is your security risk? Limited MITM attacks and network analysis? I don't feel the same aversion I feel to most indie web browsers when I think about someone reading a limited number of indie web articles in this thing or building projects around it.
People's sense of the internet is just that of a corporate-engulfed dystopia; they don't see a reason behind this because they haven't experienced what it could be otherwise.
Ehh, I don't think this would support the Internet Archive (hardly a corporate-engulfed dystopia), or even have supported the shitty Sailor Moon fansite I made in 1998.
I wonder if it's possible to start with one of these small-internet browser projects and expand it by adding CSS and some mechanism to handle what something like HTMX provides (without including JavaScript capabilities).
I haven't fully thought this idea through, but it does seem like it would be a manageable way to start a non-corporate browser that could handle modern layouts and some degree of interactivity without the mess of privacy and security issues that javascript introduces.
You can't call it an internet browser if it doesn't work properly on 99.99% of existing pages. It's like calling telnet a browser when it only accurately displays text/plain.
But it's not a small internet browser it's a small-internet browser. It is made to browse the "small-internet", which is not the same thing as the internet.
The Internet is simply a collection of IP networks.
The World Wide Web is the collection of HTTP servers and "Web Browser" clients like Chrome, and, uh, Chrome. Also WebKit.
An "Internet browser" would be something like `dig`. Though, maybe that's better called a "DNS browser" in this boring world where every app that talks to one or more remote servers must be called a browser.
I use a text-only browser, i.e., an HTML reader that can make HTTP requests, with no support for CSS, JS, WASM, etc., and I access the entire web with it. For Gopher and Gemini I find I do not need a browser.
These projects do not have to appeal to large numbers of people. The goal need not be large audiences for advertising.
I have no idea how many people use the same browser I do, that has not mattered IME and TBH I really do not care. Although I have seen on the project website that a number of companies made donations to sponsor certain features, and the company names and amounts are listed on the website. Those companies must have found the program useful.
Methinks there are just so many folks who have bet their entire livelihood and career on the asymmetry that the web has enabled, where its users are generally powerless and control nothing on their computers. Anything that could give control to users is perceived as a threat. This is highly dysfunctional, IMHO.
Even when such things have nothing to do with whatever commercial endeavors these folks are engaged in. I use HTTP every day via small programs I can edit and compile and therefore control. Everything works. I can do "industrial strength" web search without ever opening a browser. I find it hard to imagine these techniques would not be useful to others.
The internet needs more projects like Gemini and Kristall.
Check out the arguments Google tries to make about its position as the "default browser" in this court filing from earlier this week. The company pays hundreds of millions of dollars to various parties in order to be the "default" search engine, its CEO himself was engaged in setting up such "payola" arrangements for Google prior to becoming CEO, but one would never guess that from the assertions Google is making here.
What if these operating systems were, by default, set to cycle through a list of search engine choices, randomly selecting one each time a search is submitted, until the user chooses one as the default? Over time, the user might become familiar with a variety of search engines. She could then make a more informed choice about which one to set as her "default". Instead, what we see in this motion is one "tech" company arguing that other "tech" companies think it is superior. Who cares what "tech" companies think. What do users think.
I think part of the problem is that the "small internet" isn't a concretely definable thing in the same way that "Gemini space" is. Gemini browsers are meant to browse Gemini space and the expectations and limitations of that are known ahead of time.
Meanwhile, this app is meant for browsing an unknown, undefined limited subset of the plain old HTTP web; and when you step out of that sandbox (and you most definitely will inadvertently do so) what you are left with is a broken experience. A web browser that goes out of its way to not support foundational basics is just a shitty web browser given that my normal web browser can hit the same sites.
Restricted functionality is a security benefit - could be interesting in some scenarios... Not mentioned by the project's presentation, but there has to be an audience there.
This is a common trope I hear but I'm not sure it's true any longer.
One of the reasons Google login requires JavaScript to be enabled is to fix flaws in non-JS HTML that allow for hijacking passwords. It's why they took a much harder line on simple auth a few years back; too many instances of people being tricked into handing their Gmail account to scammers.
What flaws allow sites to hijack passwords if pages aren't programmable? Are you talking about phishing, and how would js help?
My mental model here is that the browser's password manager autofills a password that the user doesn't even know, and without scripting or dynamic resources, it's not possible to exfiltrate that data automatically. Without styling, form buttons would have a standard look so it's harder to trick the user into submitting something without knowing (c.f. the recently posted fake captcha that tricks users into revealing the visited state of links).
The OP link is a browser that doesn't support CSS. Not supporting frames seems like it'd also solve that problem.
Edit: also, in a modern browser, passwords are autofilled by the browser which knows where input is going and can't be tricked. You could make an argument about other sensitive information, but that sounds like it's the same as plain ol' phishing.
Unfortunately, there's no language for querying the client "Do you support CSS" / "Do you support frames" without CSS, so Google is forced to err on the side of caution and reject browsers that don't support JS (because they can query that).
Webdevs are aware of flaws in their browser, so they have to hack up a bunch of JavaScript in order to log in safely instead of cooperating with the browser developers, who are their own employees.
On the other side, Chromium devs need to develop quirks/hacks for specific sites in order to support their bloated JavaScript hacks (:
It's not flaws in the browser; it's flaws in the design of HTML and CSS (i.e. "Oops nobody realized that a real right bastard could bend the static rendering tools to make this happen; well that's a problem"). I'm having difficulty sourcing the details right now, unfortunately; I believe the issue was that non-JavaScript HTML combined with CSS lets you frame-in the target site's login page but situated in such a way that the user thinks they're doing something else, and their clicks and keyboard input go into another page's content / security domain. Breaking this attack requires the framed-in site to be able to use JavaScript to detect that they were framed-in and break out.
It's one of those situations where "This was mis-designed with insufficient eye towards how right bastards could use it, but the cat is out of the bag. We can't change the spec because it'll break legitimate uses of this technology, so this is the best option we have." Similar to how images were naively scoped to be embeddable from any web domain and then people invented pixel bugs, but you can't change the security model around image loading without breaking vast swathes of the existing web.
My guess would be: using CSS to place the malicious site's input fields exactly where the framed site's input fields are, but "on top" (on the z axis), so the user actually inputs data into the malicious site.
How is this “attack” any different than simply mirroring the site? The URL bar shows the malicious site either way. If you enter your password on the malicious site, you’re pwned. Don’t need frames for that.
That's never been sufficient to keep bad actors out because end-users don't grok the URL bar.
This attack is superior to a mirroring attack because it uses the target site's own UI resources, so it looks very legit (no need to pay someone to monitor the UI of the target site for changes and rework your exploit to attack them).
IMO the better comparison is selling a 9” black and white TV with only an OTA antenna as input as a TV in 2023: technically correct, wildly out of whack with customer expectations.
I think the point here is, they make it obvious that they don't want to support JS, CSS, etc... The support for other protocols is the same thing - neither a finger page nor a gophermap needs CSS or JS. So if you want to use it with the part of the web that requires those, this is not for you. It's a tool for those who use a different part of the web. Those who only need a TV and not a circus. One might see a commentary about the state of the web in the project, but not necessarily - it's just a tool for those who have a need for it, e.g. to browse the intranet of a research institute. If this browser is all you need for that, you greatly reduce the attack vectors on your institute, among other benefits.
Given that customer expectations of TVs now include being normalized to TVs getting slower with updates that cannot be avoided without jumping through hoops, sometimes even bricked, and occasionally even showing ads because the company you bought it from decided they needed to extract more wealth from you, that is pretty spot on. But perhaps not how many people think it is.
Exactly this, someone shared a project they thought was cool and that they thought others might find cool as well.
The hostility displayed towards an open source project in this submission is completely excessive just because it doesn't meet the requirements of many/most users. I'd feel awful if I were the creator of it and read the comments here, which are being extremely harsh about what is otherwise a very nice little project.
Personally I find it very useful, but I'm the type of person who would.
I mean, that's the source of the complaint though.
It's 2023. If it doesn't work on google.com, it's not a browser that works on "The Internet" (due to conflation of terms over the years, people assume that "The Internet" means "The Web," especially when the term "browser" is mixed in there).
To be clear though, a much better user interface for retrieving and displaying small documents through gopher, gemini, and finger in addition to HTTP is pretty cool.
"Opening a browser and going immediately to a search engine" is how most users use the Internet (proxy statistic; google.com is the most visited web page by a country mile, followed immediately by YouTube, Facebook, and Twitter). A browser that doesn't work with the most popular web pages is not a "browser" for practical purposes of most people.
... which is fine, but I suspect the headline is throwing readers here off because they're equating "browser" to "web browser" and then the actual tool flips their bozo bit when it can't even properly render popular sites in a degraded mode. Perhaps the tool can use better branding: "A general-protocol Internet document browser that can also do some HTML," for example?
Well, Apple, Windows, and various GNU/Linux distros are fine (they can run Chrome and Firefox, i.e. two of the most popular apps, you can plug in a mouse and keyboard, and they do 99% of the "computer stuff"). Chromebook is probably on the fence since you really have to beat it with a hammer to run Excel on it (depending on how you turn your head and squint, that's maybe true for Linux also, but you can get Excel running on there if you really shoulder into getting your compatibility layers working).
... but nobody considers the Arduino a "computer" in the same sense they consider a Raspberry Pi a "computer" because it can't run general-purpose apps.
Yeah, let's stop here. You say people think Linux can run "general-purpose apps" but a raspi can't. Just stop trying to prove you're right... by moving the goalposts.
An Arduino is a microcontroller; a Raspberry Pi is an SBC -> single-board computer.
>>and user community that designs and manufactures single-board microcontrollers
No, a RasPi is a GNU/Linux platform and can run general-purpose apps. You misunderstood my comparison; RasPi is a computer.
I'm saying nobody considers an Arduino a "computer" in the same category as a Raspberry Pi is a computer (and similarly, nobody should consider Kristall a web browser, though it is something else, which is cool).
And by analogy, Kristall is a "microbrowser" and I think HN had a bad reaction to it because the top-level description was "a browser."
It's the Arduino of browsers.
That doesn't make it bad, but this site is full of pedants and when you call a tool by a slightly-different name, you get "um-actually'd" to death.
(Sometimes, you even get someone swearing at you because they think you don't know the difference between a general-purpose small computer and a microcontroller ;) ).
As someone who has come to prefer viewing web-pages in reader-mode rather than their default-layouts, I really love the idea of having a leaner, more minimalist web. Just pure content without all the bloat...
I feel the same. I've often wondered how much is missing from servo if we just wanted a noscript browser. It would be nice if redox os had a rust only browser.
Kristall is, along with GemiNaut and Lagrange, the perfect triptych for wandering the small web! Until recently they were the only ones to (almost) pass the torture tests [1]! Highly recommended.
Completely agree - they are the top three GUI browsers running in GeminiSpace. Since I am seeing lots of confused comments - both Kristall and Lagrange are x-platform and portable. Very easy to take out for a spin:-)
There's also JGemini (https://github.com/kevinboone/jgemini), a cute little 107kb jar file that's also a full GUI browser. Yes! 107kb:-) Also x-platform and portable (java -jar ./jgemini-1.0.jar). Not many features (no tabs, just multiple windows) but it's a good example of how quickly browsing solutions can be built when markup is reduced.
That is... you don't need billions of dollars and a decade to present a viable solution in this space.
GeminiSpace actually reminds me much of the early days of the web. People will complain about the limited markup. For some users, the additional security/privacy of the protocol and the reduced 'noise' makes this appealing.
React and angular are definitely the wrong tools for the job if one is trying to make a semantically-parseable page. They're tools for human interaction, not machine interaction. It turns out humans care about things that HTML and CSS alone weren't sufficient to account for (like delaying streaming of data until it's actually needed, saving on bandwidth, which humans have to pay for).
Love Lynx and not particularly fond of JS or responsive frameworks. But I do not understand what React or Angular could have contributed that would be incompatible with, counter to, or otherwise could have hampered the semantic web — so far as that would ever have been a viable thing in the first place.
Can you elaborate on how react and angular have destroyed the semantic web?
In the old days you could look at raw non-rendered HTML and it would be so simple that you could render it in your head.
With the advent of more sophisticated web frameworks like the ones mentioned, that’s no longer the case. The site consists of MBs of scripts and templates that take a gigawatt to render.
+1 on the bloat, though i would primarily blame sloppy code and cheap fast pipes.
On complicated scripts taking a gigawatt to render, I would like to point out that in the good old days that was all happening too — just not in your browser. Inefficient PHP and CGI scripts, massive Java frameworks. Today, still the majority of complexity and heavy lifting is kept away from our browsers. It's sobering to think that most of the gigawatts we burn on our phones, do not show up on our power bills...
I think that for most businesses, their sites could be statically generated, with iframes and embed tags for interactive parts like forms, and the web would become much faster, more pleasant to use, and have a notably lower carbon footprint. Making images smaller by default would help too.
For interactive sites, if videos were limited to 480p on mobile and 720p on desktop unless manually changed, I imagine the carbon impact of data centers would drop considerably. For content viewed on TVs (where you usually sit quite a bit farther back than a monitor), such as Netflix or Hulu, I think they could set it to 480p by default and a lot of people would never bother to change it.
Unfortunately actually calculating the carbon footprint of a bloated web (compared to a lite version) would be very difficult. As you mention, a lot of that bloat is on the backend. New Reddit may transfer 6x more resources over the network than Old Reddit, but both of them have to process on the back-end what links should even be shown for a given user, so I doubt switching to Old Reddit would result in 6x fewer emissions. But even if it only resulted in 2x fewer emissions, that'd still be a considerable improvement. Part of the investigation would require seeing how much energy is used by the data centers processing what to send, versus how much energy is used by ISPs transferring that data across networks to the end-users.
In any case, if sites were more like Hacker News, Craigslist, and Wikipedia, versus New Reddit, Amazon, and most news sites, I feel confident that the carbon footprint of the Internet would go down notably. HN and Craigslist's designs are going to be a hard sell for most businesses, but something like Wikipedia proves you can have an attractive design with low page sizes. And in the case of newspapers, it'd be nice if their web versions were more similar to their paper versions. That is tough with a free + ads model, but honestly I'm more likely to see an ad if the whole site is just text and there's a text ad in the middle of it (hopefully properly identified as such, though).
I have been using lynx for almost 3 decades. It is still my primary browser. All of my bookmarks are stored in lynx. When necessary, I can spawn my defined x-www-browser from inside lynx, as when typing this comment. The two together are quite functional...
"Never" is a tiny bit harsh, IMHO; arguably, there have been several of them they just (evidently?) don't offer content producers as much value as it offers to consumers (which includes search engines), and thus the incentives play out in exactly that way
* JSON-LD markup exists in plenty of modern sites
* schema.org markup exists in some
* microformats.org briefly raised its head
The BBC website is the biggest example I know of which has a lot of semantic markup/annotations. IIRC they used to actually have RDF attributes in a lot of the BBC Radio listings
I quickly moved on to Links when I was using Lynx for a while (some years ago), as it optionally did images and could work with some JavaScript. Depends on one's use case.
I think a more flexible approach, instead of Geminispace and developing a browser without JS support to enforce “smolnet”, would simply be to define an HTMLite standard: a subset where e.g. the entire script tag or img tag is not part of it.
Then you have HTMLite verifiers (probably the simplest thing to verify!) to ensure a site is compliant, and voilà: you need no Gemini protocol, only simple HTTP/1, and you can also render it in anything from Firefox 1.0 to Kristall to Lynx to Chrome 100+. As a bonus, you would also have very mature accessibility support thanks to modern browsers.
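A verifier like that really would be tiny. As a rough sketch (the tag allowlist, the attribute rules, and the whole "HTMLite" rule set below are made-up assumptions for illustration, not an existing spec), the compliance check is little more than a walk over the parsed tags:

  # Minimal sketch of a hypothetical "HTMLite" verifier.
  # The allowed-tag list is an assumption, not a published standard.
  from html.parser import HTMLParser

  ALLOWED_TAGS = {
      "html", "head", "title", "body", "h1", "h2", "h3",
      "p", "a", "ul", "ol", "li", "blockquote", "pre", "code", "em", "strong",
  }

  class HTMLiteVerifier(HTMLParser):
      def __init__(self):
          super().__init__()
          self.violations = []

      def handle_starttag(self, tag, attrs):
          if tag not in ALLOWED_TAGS:
              self.violations.append(f"disallowed tag: <{tag}>")
          for name, _ in attrs:
              # Inline handlers/styles would smuggle scripting and layout back in.
              if name.startswith("on") or name == "style":
                  self.violations.append(f"disallowed attribute on <{tag}>: {name}")

  def verify(source: str) -> list[str]:
      checker = HTMLiteVerifier()
      checker.feed(source)
      return checker.violations

  print(verify('<p onclick="x()">hi</p><script>alert(1)</script>'))
  # ['disallowed attribute on <p>: onclick', 'disallowed tag: <script>']

A real standard would also have to pin down entities, forms, and linked resources, but checking membership in a short allowlist is about as easy as verification gets.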
We already have the tools for smolnet. We don’t need to enforce it by removing features. We just need to define what little it should be.
Having said that, all the power to people who love tinkering this way instead. I just think it will be a hindrance to broader adoption and waste a bit of flexibility and reach (in terms of both software and people).
> The problem is that deciding upon a strictly limited subset of HTTP and HTML, slapping a label on it and calling it a day would do almost nothing to create a clearly demarcated space where people can go to consume only that kind of content in only that kind of way. It's impossible to know in advance whether what's on the other side of a https:// URL will be within the subset or outside it. It's very tedious to verify that a website claiming to use only the subset actually does, as many of the features we want to avoid are invisible (but not harmless!) to the user. It's difficult or even impossible to deactivate support for all the unwanted features in mainstream browsers, so if somebody breaks the rules you'll pay the consequences. Writing a dumbed down web browser which gracefully ignores all the unwanted features is much harder than writing a Gemini client from scratch. Even if you did it, you'd have a very difficult time discovering the minuscule fraction of websites it could render.
> Alternative, simple-by-design protocols like Gopher and Gemini create alternative, simple-by-design spaces with obvious boundaries and hard restrictions. You know for sure when you enter Geminispace, and you can know for sure and in advance when following a certain link will cause you to leave it. While you're there, you know for sure and in advance that everybody else there is playing by the same rules. You can relax and get on with your browsing, and follow links to sites you've never heard of before, which just popped up yesterday, and be confident that they won't try to track you or serve you garbage because they can't. You can do all this with a client you wrote yourself, so you know you can trust it. It's a very different, much more liberating and much more empowering experience than trying to carve out a tiny, invisible sub-sub-sub-sub-space of the web.
Being that it's version 0.4, I'm willing to look past a lot of things not being fully implemented yet. Especially in this case, since Settings -> Style lets you configure the sizing of every component to your liking, so we know text scaling isn't just forgotten about or restricted. Another example is text selection, which looks like it's marked as experimental.
http and https are disabled by default; they need to be enabled in File/Settings/Generic.
I was hoping it would be a single executable but (on Windows) it's 56 files: 1 exe, 33 dedicated dlls, 22 translation files.
Google Search does not work, it's impossible to get past the cookie consent page: https://imgur.com/a/daGMASS (Same thing happens if one tries to put the search words in the url.)
I think (extremely) rudimentary CSS support would fix many of these cases. You don't need to support more than the basic layout constraints, but you need something so that sites like DDG's no-JavaScript search page can work.
I believe Google will still work on very old browsers through something like user agent sniffing, maybe setting the UA to IE5 will trick it into rendering HTML that might actually work (though the search results will probably still be useless)
The GitHub page notes that they only support a reduced set of HTML, but not which set it is. My guess is they do not support any interactivity-related elements.
No, but I think I must be doing something wrong, because no form or input field of any kind is ever displayed on any website. I tried the simplest form imaginable on localhost and it didn't work either.
I am OK with no CSS/JS/WASM, but no graphics? This is wrong on a metaphysical level. Like Socrates believed, writing was not an effective means of communicating knowledge.
It says that one of the features is "In-browser rendering of text, images and video". That seems to be at odds with the statement that it doesn't support "graphical websites".
All I can think is that it doesn't support canvas - which would have surprised me anyway.
The "Hacker News on Kristal" screenshot from a sibling comment[1] shows HN rendered without the logo in the corner - I believe the browser won't render inline media (which is how many other Gemini viewers work), in which case the parent's complaint applies.
I think that it's kind of an anti-tool-for-thought - meant to be an art piece or a political statement, and not a tool for getting work done.
If the dev is reading these, just know that a lot of us here _do_ quite like this. I'm in the "market" for a small internet browser these days, and this is one I very well may get good use out of.
From what I can see it supports point and click via the mouse? I don't think lynx does. At least the few times I've used it I ended up using tab/cursor keys which gets a bit tedious.
What's this, for only Gemini? It seems to not support http, https, or gopher even though it kinda mentions them?
Also seems to just segfault when loading a gemini site..
Not a fan of Gemini for various reasons, but this browser seems like a really cool project. And browsing the other projects, like LoLa. They seem really cool too. And I really love the style of the whole website. Nicely done!
This is interesting, but what is really missing from the non-graphical (and non CSS/JS/WASM) web is a good search engine. Or, if there is one that returns only these kinds of results, I don't know it.
Well, search.marginalia.nu exists, but it would be kind of hard to use it with this browser, since along with CSS and JS they apparently also don't believe in input elements.
I've thought for a while that the world needs a new simplified web, based on something like markdown but properly standardised. I'm stoked to discover that something similar actually exists!
I'm quite pleased with how my personal site renders in it. Other than my navigation menu not showing up in quite the right place (which is definitely on me), everything else flows very nicely.
I'm cross-publishing my website on gemini, basically rendering HTML and gemtext from a very slightly enhanced variant of gemtext that has some rendering hints that are stripped away.
# Gemlog
=> /topic/ Browse by topic
=> /links/aggregators.gmi Aggregators
=> /log/feed.xml Atom Feed
This section of the memex contains what might be described as a weblog.
%%% FEED
%%% LISTING
(the FEED directive tells it to generate an Atom feed, and LISTING to inline the documents list rather than put it in the side-bar in the HTML version)
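I don't know how the actual pipeline is structured, but (assuming the %%% convention shown above; the function name and behavior are hypothetical) the directive-stripping step for the Gemini output could be as small as:

  # Hypothetical sketch: separate "%%% DIRECTIVE" rendering hints from the
  # enhanced gemtext so the plain .gmi output stays valid gemtext.
  def split_directives(enhanced_gemtext: str) -> tuple[str, list[str]]:
      plain_lines, directives = [], []
      for line in enhanced_gemtext.splitlines():
          if line.startswith("%%%"):
              directives.append(line[3:].strip())  # e.g. "FEED", "LISTING"
          else:
              plain_lines.append(line)
      return "\n".join(plain_lines) + "\n", directives

  # The HTML renderer would act on the directives (emit an Atom feed, inline
  # the document listing, ...) while the Gemini renderer simply drops them.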
Gemtext is very close to what I want. The only thing I wish is that it had some rudimentary support for illustrative images. Not like inlined in the text, but at least centered figures. That would go a long way.
Fully agree. I am also wondering why simplicity needs its own protocol here. Striving for simplicity is more a philosophical than a technical task.
Edit: The creation of new protocols and technical solutions alone increases cognitive complexity. From this point of view, it is even counterproductive if existing solutions can enable the same.
Dunno, the gemini protocol and gemtext format are like the internet equivalent of a 6502. It's extremely approachable to anyone with a basic understanding of programming. Like, you could slap together a working client or a server in an afternoon, most likely.
Its built-in limitations also inspire quite a lot of creativity.
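The "afternoon" claim is barely an exaggeration: a request is just the URL plus CRLF over TLS to port 1965, and the response is a one-line "<status> <meta>" header followed by the body. A rough, happy-path-only sketch of a fetch (no redirect handling, no TOFU certificate checking, no error handling):

  # Minimal Gemini fetch: TLS to port 1965, send "<url>\r\n",
  # read "<status> <meta>\r\n" followed by the body.
  import socket
  import ssl
  from urllib.parse import urlparse

  def gemini_fetch(url: str) -> tuple[str, bytes]:
      host = urlparse(url).hostname
      ctx = ssl.create_default_context()
      # Geminispace commonly uses self-signed certs (TOFU), so skip CA checks here.
      ctx.check_hostname = False
      ctx.verify_mode = ssl.CERT_NONE
      with socket.create_connection((host, 1965)) as sock:
          with ctx.wrap_socket(sock, server_hostname=host) as tls:
              tls.sendall((url + "\r\n").encode("utf-8"))
              raw = b""
              while chunk := tls.recv(4096):
                  raw += chunk
      header, _, body = raw.partition(b"\r\n")
      return header.decode("utf-8"), body

  status, body = gemini_fetch("gemini://geminiprotocol.net/")
  print(status)                                  # e.g. "20 text/gemini"
  print(body.decode("utf-8", "replace")[:200])   # start of the gemtext body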
It’s disheartening to see so many posts like this on ‘Hacker’News. It doesn’t have to be some big, serious thing for people to either enjoy or find useful.
Some people had an idea, went out, and made it a reality, built a small community around it and now we have Gemini.
We should encourage such things. Projects don’t need to change the world and they don’t need to be useful to all people or exploitable by corporations. Building something because it’s fun, or because you want it to exist is good enough.
You say that, but plenty of old computers were based on the 6502. Those were used both for play and for real work. Before the time of resource-heavy network protocols, they could even host and access BBS systems (for those unfamiliar, basically a precursor of the Internet).
Gemini cannot do a lot (which is by design), but it also has huge capabilities. For instance, while client-side logic is impossible, there's nothing preventing you from writing a "web app" that does its logic on the server side. Yes, you would require page refreshes to update what the client sees, but with Gemini each page load is much, much cheaper than HTML because the response payload is smaller to transfer and easier to parse.
Assuming that your Gemini client allows enabling inline images, there's absolutely no reason why you couldn't build a stateful web-app that is a clone of Twitter, Facebook or another social media site. There's no reason why you couldn't design a simple webmail client a la Gmail, or a system monitoring dashboard, or a bug tracker, or basically anything else that doesn't have to rely on complex layouts or inline video to do its job.
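To make that concrete: the interactivity hook in Gemini is the status 10 ("input") response, where the server sends a prompt, the client asks the user, and then re-requests the same URL with the answer as the query string. Everything else is just rendering fresh gemtext per request. A hypothetical handler for a tiny status-posting "app" (routes, storage, and names all invented for illustration) might look like:

  # Hypothetical server-side handler sketch: status 10 prompts for input,
  # the client retries with "?query", and every page is re-rendered gemtext.
  from urllib.parse import unquote

  TIMELINE = []  # stand-in for a real datastore

  def handle_request(path: str, query: str | None) -> tuple[str, str]:
      """Return (response header, body) for a toy status-posting capsule."""
      if path == "/post":
          if query is None:
              return "10 What's on your mind?\r\n", ""   # 10 = ask client for input
          TIMELINE.insert(0, unquote(query))
          return "30 /timeline\r\n", ""                  # 30 = redirect after posting
      if path == "/timeline":
          body = "# Timeline\n" + "\n".join(f"* {post}" for post in TIMELINE)
          return "20 text/gemini\r\n", body + "\n=> /post Post an update\n"
      return "51 Not found\r\n", ""

Every click is a full round trip, but each response is a few hundred bytes of trivially parseable text, which is the point.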
Unix was born to play Space Travel. Ncurses exists because of Rogue.
Text adventures drove a lot of work on text parsing, and video games pushed the PC industry forward on multimedia.
That's pretty much the stated objective, in addition to designing the protocol in a way that freezes features at the outset to prevent it from becoming another Web 2.0.
Exactly. See my other post here — I think an HTML subset would make more sense. Then these sites would run safely on both Kristall and Firefox for as long as they adhered to this simplified standard.
Being impractical is the point. It's impractical to implement tracking, anything beyond simple ads, websites with flashy or distracting designs, picture-in-picture video autoplay like news websites often do, popups, moving banners, and RAM/CPU heavy client-side scripting (or any client-side scripting, for that matter). So your website will have to do without that.
And that is by design. Gemini is designed to exclude those features, because the target audience of this protocol wants to browse a web where those features are impossible to include.
This has always been my real issue with Gemini. Gopher is useful on all computers I own and actually makes retro computers more useful with the services available on it.
Gemini’s TLS requirement prevents access by a lot of machines that could benefit from it.
Is there anything on gemini other than people talking about gemini? Not trying to be facetious but last time I looked I couldn't find anything. I'm sort of directionally sympathetic to their goals so I want it to be a case where there is great stuff there that I just didn't know how to find.
Just glancing at the Antenna feed, there are about 10 posts a day, and one a day about Gemini. Most content is just blog-post updates on computers and people’s lives.
Antenna is an aggregator feed for Geminispace. You can subscribe to it with a Gemini client that supports Atom feeds.
=> gemini://warmedal.se/~antenna/ Antenna on Geminispace
Really. I want Epiphany and Firefox to allow me to turn off JavaScript like I can allow/disallow {Audio, Video, Webcam, Location, Notifications...}.
The single wrong decision was following Google into that JS show. JS has its rationale; I'm using it as a programmer sometimes. But JS was considered harmful for a reason! Google's intention was to use JS for its so-called web applications/single-page applications to lure users into the cloud. And they opened the opportunity for a bloated web with user tracking via JS, bitcoin miners via JS, animating all kinds of elements with JS, and so on. Result? Fan spins up, laptop battery drained.
There are annoyances. 1) Many sites can't even be bothered with a basic noscript message, or it's the default message from some “starter” app rather than something meaningful/descriptive. 2) Basic sites like blogs are putting their image loading behind JavaScript for no reason (unless they or the WordPress plugin developers aren't aware of <img loading="lazy">). 3) Too many folks are relying on third-party CDNs and client-side parsing for things that should obviously have been done at build time, like code syntax highlighting and rendering LaTeX (almost every ‘modern’ docs project fails this, so our tech industry fails here).
There is nothing about HTML/CSS/JS that prevents simplicity. It is purely how it has been used and abused. You can also disable JS and use user agent stylesheets in any modern web browser.
It would be great if React could be built directly into browsers, but it would greatly curtail the current flexibility of server-vended React. The project is able to evolve quite quickly unshackled from a w3c process and the pulse of major browser updates.
(IIUC, there was a proposal in Firefox decades ago to make the engine into several flexible modules and a page could declare which modules it depended upon, then the browser would either cache them and use them for multiple sites or already have them builtin. You'd get the best of both worlds: rich and expressive pages without the frequently-paid cost of poly-filling the gap between how the developer wants the render engine to work and the actual implementation of the render engine.
Sadly, I suspect the actual complexity to implement would have made for a worse overall situation than what we have now).
I haven't done any professional web stuff in a few years, but wasn't this kind of the idea of web components? As you said, the React team can move a lot faster than the w3c and each browser vendor.
Where is the button to enable JavaScript? For reasons of simplicity, resource use, and security JavaScript should be disabled by default. But there ain't that button, it's complicated.
Where is the security? Chrome last I checked had eight actively exploited zero-days last year, which is laughably bad compared to the other operating system I use. Perhaps if the modern web was simpler, a browser would be easier to implement, and more time could be spent on making it not a raging security dumpster fire? But it ain't, it's complicated.
How does one even set up user agent stylesheets? What could that be but yet more complexity? Meanwhile, I'll use w3m and amfora, and if it's a broken page that mandates Flash, JavaScript, whatever, I most likely won't bother launching a "heavyweight champion" browser. The CPU fans will last longer that way.
You can read a book on your tablet / laptop, or even watch a movie adaptation; or you can read a book on dead trees. Some people would like to recreate certain aspects of the "dead tree" experience without actually killing trees.
As stated by robin_reala, that is not how screen readers work. At minimum, they still definitely parse CSS, as many accessibility enhancements were only introduced by CSS with no equivalent (or even a workaround) using pure HTML, and in practice they will also require JavaScript support, because when a website decides to be an obnoxious GPU-hungry Flutter application even CSS wouldn't cut it.
This is not really correct, many screen readers only support a very small subset of CSS which doesn't really match the parent's comment, that browsers must support HTML and CSS.
> This is not really correct, many screen readers only support a very small subset of CSS which doesn't really match the parent's comment, that browsers must support HTML and CSS.
This doesn't match with current versions of accessibility software which either connects to a browser or simply bundles an embedded version of Chromium. The ship has sailed on no-JS websites sadly, and not supporting (substantially) all of CSS modules would simply render 90% of websites inaccessible.