Please, do yourself and us a favour and sign up for a free account on https://neocities.org, and make a quirky website by hand for all of us to enjoy.
I half want to write a post like this but with dead-simple instructions, like: "Open Notepad, write <html><body>Hello!</body></html> inside, and upload it to Neocities. Congratulations, you now have a website!"
A good addition would be a simple CSS stylesheet (https://newcss.net/ looks good; sdan linked it in his comment here) to get the site looking polished and mobile-friendly. That's pretty much all you need to make a simple website.
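Put together, the dead-simple version is a single hand-written file. A sketch of a complete one-page site, with new.css linked from a CDN for styling (the jsdelivr path follows newcss.net's install instructions at the time; double-check it there):

```shell
cat > index.html <<'EOF'
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>Hello!</title>
  <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@exampledev/new.css@1/new.min.css">
</head>
<body>
  <h1>Hello!</h1>
  <p>My first hand-made page.</p>
</body>
</html>
EOF
# Upload index.html through the Neocities dashboard: you now have a website.
```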
I feel like there's a place for a service that can:
* Host static content like netlify/neocities on a CDN
* Allow you to register a domain
* Choose a template and enter content with basic markup
A non-tech user should be able to do all of this in under 5 minutes. You'd think registrars would be incentivized to build such a service.
edit: ~~markup~~ markdown/WYSIWYG
I hate those undisableable social-media achievements that come up as notifications. Is github.community based on Discourse, or did it just copy its bad ideas?
edit: wow you can disable them in this one, amazing. I still hate it.
but i'm sure they've added cool new react emojis, so i guess they've been pretty busy with that.
E.g. for a navigation sidebar:
For form controls:
[ ] I'd like to receive marketing emails.
( ) Female
It'd be even better if it could support basic CRUD/BREAD operations. A basic CRUD webapp in just markdown, imagine that.
It should be possible to do something like Neocities/GitLab/Netlify with a one-time fee to cover the domain.
I didn't have any idea what Apache was, how files were served, or really anything about internet infrastructure. And that's fine, certainly good for a kid or to whip up a quick business site, but it doesn't satisfy the hacker in me today. There's too much magic going on behind the scenes for me to feel much ownership of it.
I suppose it sort of depends who you're aiming it at. TFA didn't feel to me like it was trying to approach novices (after all, it's talking about leaving frameworks and libraries behind, novices aren't much using frameworks). Your instructions are good for an easy introduction for people with no web development experience, to recreate that sense of ease, but I think it's a different thing from what the article is going for. Even a simple site requires a fair bit under the hood, because serving files isn't all that simple unless you just hide it away from the end-user.
How to host your tiny website with beaker (https://beakerbrowser.com/)
1. Open Beaker
2. If you don't have any content, select "New Hyperdrive" from the burger menu, enter your site name, and start editing content with the integrated editor
3. If you already have a folder with content, select "New Hyperdrive from folder"
4. You have a website! Update it locally and changes will be propagated. Your site address is the hyperlink in the URL bar!
There shouldn't need to be more steps than that or, more importantly, any dependency on external processes, companies, or organizations to say what you want to say and spread it.
For varying values of "always".
- Beaker, like any decentralized web thingy, allows visitors to re-share content if they wish to do so. Integrity is provided through the crypto bits.
- There is of course the possibility of asking a third party such as Hashbase (https://hashbase.io/) to host content for you. The difference from WWW hosting is that you don't depend on them to serve content at all times, only when you or your visitors can't; you aren't as dependent on them.
Beyond self-hosting, the major differentiator is that the "web" goes further than just interlinking, it also contains safe distribution. Beaker does even more: foreign content is typically accessed through "mounting" other sites, just like you'd mount with FUSE, so an application doesn't really care where the content comes from. It's all files in the same virtual filesystem.
Monthly costs per watt are around €0.30/kWh × 24 h/day × 30 days/month ÷ 1000 ≈ €0.22. That's roughly the price tag for keeping a Raspberry Pi on 24/7 (yes, it's that cheap). For an average desktop/laptop, which won't draw much less than 30 W even at idle, it's about €6/month: the price of an average hosting offer.
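The arithmetic above, written out (assuming the same €0.30/kWh rate; the helper function name is mine):

```shell
# Monthly electricity cost, in EUR, of a device drawing `w` watts 24/7,
# at an assumed rate of 0.30 EUR/kWh.
watt_month_eur() {
  awk -v w="$1" 'BEGIN { printf "%.2f\n", w * 24 * 30 / 1000 * 0.30 }'
}

watt_month_eur 1    # per watt: 0.22 EUR/month
watt_month_eur 30   # a 30 W idle desktop: 6.48 EUR/month
```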
Or just commit directly to Github, enable pages, job done :)
My website, https://cyrialize.dev/, is hosted on Neocities. I used Jekyll to create it, and I use water.css, which I found on CSS Bed (https://www.cssbed.com/). The Neocities gem is VERY useful.
My process is super simple, it's basically:
- Make changes
- Commit & Push
- jekyll build
- neocities push _site/
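That workflow fits in a tiny deploy script. A sketch, assuming Jekyll and the neocities gem are installed and you've logged in with the CLI once:

```shell
# Write the four steps above into a one-shot deploy script.
cat > deploy.sh <<'EOF'
#!/bin/sh
set -e
git add -A
git commit -m "Update site"
git push
jekyll build            # writes the generated site to _site/
neocities push _site/   # upload via the official Neocities gem
EOF
chmod +x deploy.sh
```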
I'm using it to deploy my own site (which happens to be subscription, but that isn't required): https://metaluna.io/
How do you upload files without WebDAV? As far as I know, there's no API.
BTW I'm not sure if it still exists, but they had a lifetime supporter account, where you could get a lifetime subscription for $100 in Bitcoin. I got it to support the effort; I don't really use it much, but a bunch of terabytes of bandwidth and 50 GB of hosting space won't go to waste.
This uses their official CLI RubyGem along with the token you can get through the web interface.
yeah, even easier (or maybe more minimalistic) would be:
Go to 192.168.0.1 (or .1.1) and figure out how to forward port 80 to your machine.
Go to a dynamic DNS service, get some sub-subdomain, and set up a curl script to point it at your IP.
Put an HTML file in your Apache server.
Voila, you have your own website*
*if your ISP allows traffic on port 80
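The dynamic-DNS step is usually one curl in cron. A hedged sketch using DuckDNS (subdomain and token are placeholders; other services have similar update URLs):

```shell
SUBDOMAIN="mysite"            # yields mysite.duckdns.org
TOKEN="your-duckdns-token"    # from the DuckDNS dashboard

# Leaving ip= empty tells DuckDNS to use the address the request came from.
UPDATE_URL="https://www.duckdns.org/update?domains=${SUBDOMAIN}&token=${TOKEN}&ip="
echo "$UPDATE_URL"

# In cron, run it every five minutes, e.g.:
#   */5 * * * * curl -fsS "https://www.duckdns.org/update?domains=mysite&token=...&ip=" >/dev/null
```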
Me, I hate installing stuff. The last time I installed Apache must have been before 2004, and I'd rather not go back.
How sad that this has become the widely accepted narrative.
There’s a lot of value right now in NOT building things that way.
Last week I had to deal with fixing another dev’s mess on a stuck project.
Big company website, but nothing fancy at all. Purely a marketing window. The amount of complexity he put into it by using Vue.js was insane for the scope of the project. INSANE.
To do something as easy as changing the page's <title> tag, we had to write an unjustified number of lines of code.
Framework-itis really is a bad disease: it not only affects your work, it apparently also clouds the simplest forms of judgement.
Then we have exactly this: someone who got a hammer and spent years treating everything like a nail comes to a reckoning, usually framed as a longing for the good old days when things used to be simple.
Well, you know, things can still be simple, if you don't offload the duty of understanding what's going on in your project to unjustifiably complex frameworks.
To be clear, I’m not at all against frameworks. I love and use some of them, but they’re like a closet. If you are a tidy and organized person, your closet will be full of neatly folded clothes; if you’re a messy person, it will still be a repository for heaps of displaced garments, ready to fall out as soon as you open the door.
He might just be that kind of JS dev that really likes to build a webpack castle.
You know, where you could just drop in a URL to a CDN-distributed version of a JS library, they'll instead rebuild the whole thing with webpack, Babel, and several other tools just to be able to type "import".
(I don't know Vue, but this smells a bit.)
We wanted an internal component to be able to change the page title, which ended up with a project to transition to using react-helmet. It ended up taking a few weeks before this project completed.
const setTitle = (t: string): void => { document.title = t; };
We also couldn't console.log in this codebase; it was very ideologically pure, but it took ages to make any changes.
However, I've also seen simple pages that are line by line copies of tutorials for whole sites, pared down to a single page.
Frameworks aren't bad; unstructured working practices and environments often breed this sort of toxicity in code.
I made that _tiny_ website for evenings with friends. It works great. You enter a few names, hit the save button, and you get a list with scores and a +1 button for each player. When you refresh the page the scores are reset. The most important feature is the _ding_. The ding makes it fun.
It's ridiculously small, but it brings lots of fun.
It's tiny, it's fun, it's enough. Keyword: enough. Yes, I could add a preventDefault, but I don't care. Really, I only entered my friends' names once, they've been in my localStorage since, and it's only really meant to be used by me.
Here's a screenshot of my latest project, nothing fancy, just starting to tinker with Tufte CSS and typography (Lyon Text and Concourse): https://cln.sh/BzBD
It's interesting to note that the website content itself for this article is 5,045 bytes (HTML + CSS), but the analytics code (firebase-analytics.js) is 26,458 bytes, and firebase-app.js (whatever this is?) is 19,865 bytes.
Not a criticism; analytics is important, and maybe the app JS is too. Just worth bearing in mind that it's really easy to blow out your website's total delivery weight several times over with third-party includes.
A lot of times they're not. If you don't have a product, and you're not trying to sell anything, consider the possibility that you just don't need it.
All those things can be gathered from the initial request if you can set a cookie on the user's machine via the Set-Cookie header. If you're Firebase then you'd need a request for firebase-analytics.js to set the cookie, but the file itself could be empty.
Also useful to know what fraction of your users are recurring versus new users to tell whether people actually return to your site or if they just check it out once.
Once the cookie is set you can get all this from the server side logs.
Lastly, the referer stats can help you figure out which other sites and communities are interested in your project, so that you can further engage with them.
Referers, if they're available at all, come in an HTTP header so once again server side logs would give you this data.
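A sketch of what that looks like in practice: mining an access log in the standard "combined" format (the sample log lines below are made up) for visitor and referer counts, with no client-side JS involved.

```shell
cat > access.log <<'EOF'
1.2.3.4 - - [01/May/2020:10:00:00 +0000] "GET / HTTP/1.1" 200 512 "https://news.ycombinator.com/" "Mozilla/5.0"
1.2.3.4 - - [01/May/2020:10:01:00 +0000] "GET /about HTTP/1.1" 200 256 "-" "Mozilla/5.0"
5.6.7.8 - - [01/May/2020:10:02:00 +0000] "GET / HTTP/1.1" 200 512 "https://lobste.rs/" "Mozilla/5.0"
EOF

# Unique client addresses: a rough proxy for unique visitors.
awk '{ print $1 }' access.log | sort -u | wc -l

# Referers (field 4 when splitting on quotes), direct visits ("-") excluded.
awk -F'"' '$4 != "-" { print $4 }' access.log | sort | uniq -c | sort -rn
```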
There is no reason to serve any JS for logging unless you want browser fingerprinting data like the user's window size. If your analytics script is more than 0kb then you're tracking people.
Remember awstats and jwstats?
But it doesn't take 50 kB of JS to be able to do that!
Also, believing a page is better because I spend 10 minutes on it rather than 5 is nonsensical. Am I reading everything (good), or am I hunting around for some piece of content and not finding it (bad)?
Time on page is a stupid metric that doesn't measure anything useful.
It's not like this data gives you an immediate and perfect readout of the user's entire brain, but it's not useless either.
If you collect information about how often a user is moving their mouse and scrolling the page in order to tell how long someone has been on a page then you've moved from collecting what's useful to collecting everything you can just in case it's useful. That decision comes at the cost of invading everyone's privacy, and that needs to stop.
There's an issue on our GitHub that may help us reduce it to under 1 KB too. We're working on that: https://github.com/plausible-insights/plausible/issues
It would seem you are collecting PII and processing it server-side (?), e.g. calculate_fingerprint here.
Care to explain how that doesn't require user consent?
Instead of setting a cookie with a unique user ID, we simply count the number of unique IP addresses that accessed your website to determine the visitor count.
To enhance the visitor privacy, we don’t actually store the raw visitor IP address in our database or logs. We run it through a one-way hash function to scramble the raw IP addresses and make them impossible to recover.
To further enhance visitor privacy, we add the website domain to their IP hash. This means that the same user will never have the same IP hash on two different websites. If we didn’t do this, the hash would effectively act like a third-party (cross-domain) cookie.
Network Address Translation allows many unique users to share the same public IP address. For this reason we also add the User-Agent string to the hash, although we don’t store the actual User-Agent string.
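The hashing scheme described above can be illustrated like this (an illustration only, not Plausible's actual code; among other things, it ignores any salting):

```shell
ip="203.0.113.7"
domain="example.com"
ua="Mozilla/5.0 (X11; Linux x86_64)"

# One-way hash of IP + site domain + User-Agent: the raw IP is never stored,
# and including the domain keeps the ID from matching across sites.
id_a=$(printf '%s|%s|%s' "$ip" "$domain" "$ua" | sha256sum | cut -d' ' -f1)
id_b=$(printf '%s|%s|%s' "$ip" "other.org" "$ua" | sha256sum | cut -d' ' -f1)

echo "$id_a"
echo "$id_b"   # same visitor, different site: an unrelated hash
```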
> No libraries or frameworks (the exception being analytics)
There has got to be a better way of collecting analytics than to inject a bunch of crappy JS from some company (probably Google).
It's a tradeoff: you get to decide whether you run code on your visitors' computers without consent and get analytics that are wrong in one way, or run code on your own server, with your own permission, and get analytics that are simpler but wrong in another way.
Or, better yet: If it's a tiny site, do you really even have to care about time-on-page?
Is there a server-side tracking way to do it?
> During my time using frameworks I've become more and more out of touch with the code I'm writing. For example, when I plonk down a button in the Ionic Framework I get a beautifully engineered and designed button, but it also has 10 CSS classes attached to it that I don't really understand. I sometimes feel like the thing I've created isn't truly "mine".
That seems to be the nature of CSS utility frameworks. Composition over inheritance; or in this case monolithic element styles. Is the utility function to fetch elements from the DOM tree also truly "yours"? Where's the line drawn? I could get the text representation of the DOM tree and write my own parser? How about that?
> I therefore decided to go back to the basics and code my own tiny website. I already knew how to go about it, you probably do too, it's really easy (if you don't know here's how). However, I'd never actually done it, and I'd not made a website without a framework in over a decade.
Ah! And there's the problem! Are you creating a website or an application? I do understand that this feels like a cleansing of sorts, but in my opinion it doesn't reinforce the original argument.
I think the more experience you gain as a developer the more often you will come to the realization that technology evolves and also becomes more nuanced over time. When confronted with this fact of course the first instinct will be to feel overwhelmed and return to basics. Which if you're creating a mostly static website in this case is perfectly reasonable. But you can't be an expert in everything. Thus frameworks and tools emerge to alleviate some of the pain that stems from designing applications not websites.
Which again reinforces another often quoted realization: the right tool for the right job.
I'll try centered again.
1. People who usually write content are not developers
2. People tend to trust sites that are minimal like the one OP posted.
Content writers traditionally come from backgrounds where they are trained on and use tools like WordPress. We are now asking them to write HTML. I believe Jekyll is a good compromise here: we can generate static sites with Jekyll while still offering simpler tools pre-publishing.
Regarding my point #2: not a week ago, there was a thread discussing how cool Stripe's animations are. Multiple conversations in said thread attest to this fact. Many people look for a sense of branding in product- or service-based businesses and relate it to their legitimacy.
I think my sentiment for small/simpler sites is a pretty common one on HN, but I think it all boils down to who your users are and what they prefer.
But it looks very nice and I love the idea.
Now, there are two main steps to making a website public.
* Developing the website (idea, design, development)
* Hosting it (domain, email, host, server)
What you posted is great for emphasizing that the first step can be done in a very simple way, without any complicated frameworks.
The way you did the second part, hosting your site (using Node.js/npm, Firebase, and an npm package), does not fit well with the simplicity you demonstrated while doing part one.
However, if considered as a how-to for building a tiny website using Firebase, it is awesome.
No, analytics are not an exception. Don't put fucking spyware on your pages. I can't believe it's 2020 and people still need to be told this. It's not okay.
Why? Why are analytics necessary? How does it matter for such a website?
I finally removed analytics and now write what I love without being too rational about it. It brings far less pressure.
Somehow I feel differently about articles I've posted, but to each his own.
Remember that it's easy enough to opt-out of Google Analytics, too. I leave it enabled because I consider it to be useful data for webmasters.
The main problem I have is that, despite some love for this type of website on Hacker News, they are not really popular. Also, they are not accepted as Show HN posts.
This guide however, suggests using Google Analytics which tracks unsuspecting users' browsing history across almost the entire web.
It hosts the website centrally by a Google owned service (which provides even more tracking to Google). It suggests buying a domain name from Google.
Is this some kind of Google advertisement?
No, but no thanks.
I personally enjoy messing with CSS; it's incredibly fun to try new things and give an otherwise boring page some life and expression. Obviously there is a fine line between tasteful design and obnoxious/unnecessary clutter, but a simple website with some character via design is much more memorable than black text on a white background.
If it's only static HTML and CSS, it's just a bucket o' files. You can store things like that on S3 for... I don't even know how little. A tiny amount. You will not find a cheaper hosting option than this.
Now, the one drawback here is that this doesn't have HTTPS, and you'd need CloudFront for that. This link covers how to do that (note: you should buy your domain on AWS, using Route 53, when following these instructions to make your life easier).
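With the AWS CLI, the whole deploy is a couple of commands. A hedged sketch (the bucket name is a placeholder, and the real calls are commented out since they need AWS credentials and a public-read bucket policy):

```shell
BUCKET="my-tiny-site-example"

# One-time setup: create the bucket and enable static-website hosting.
#   aws s3 mb "s3://$BUCKET"
#   aws s3 website "s3://$BUCKET" --index-document index.html

# Each deploy: mirror the local folder, deleting remote files you removed.
#   aws s3 sync ./site "s3://$BUCKET" --delete

# The site is then served (HTTP only, as noted above) at:
echo "http://$BUCKET.s3-website-us-east-1.amazonaws.com"
```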
Services such as Netlify, Surge.sh, and Firebase Hosting can do this at absolutely no cost and you get HTTPS.
If you just want to host static files with HTTPS and reasonable traffic, there's several free options. Netlify, Github Pages, Gitlab Pages, Neocities, ...
After submitting, it checks whether there are links pointing at the (new) page. If those are missing, the "success!" page gets a link pointing back at the editor, with the index appended to it as the query string, along with the URL that must be added and the page title (the substring between <title> and </title>).
When clicked it loads the index into the editor and inserts a link into the html (so that I can move it around or type text around it)
That same functionality is reused by a bookmarklet to insert www links into pages. (it also inserts highlighted text from pages)
Except for the save page, everything is still static HTML.
With proper caching, the output of a CMS can be indistinguishable from a static site, but it can also include RSS, pagination, categories, tagging, responsive images, tables of contents, etc.
Take https://allaboutberlin.com. In my browser, most of its pages load faster than the tiny website above. It has a big header image, custom fonts, and other features. You can achieve great things with HTTP/2 server push, static caching, and gzip.
You should definitely consider making simple, fast websites because they're great for visitors, but how you produce the HTML doesn't matter.
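For reference, the static-caching and gzip pieces mentioned above might look roughly like this in an nginx config (a hedged sketch; the extension list and durations are illustrative):

```nginx
# Compress text assets on the fly.
gzip on;
gzip_types text/css application/javascript image/svg+xml;

# Let browsers cache static assets for a month.
location ~* \.(css|js|woff2|png|jpg|svg)$ {
    expires 30d;
    add_header Cache-Control "public";
}
```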
The website is still functional without JS and text-only, though, and without the favicon and syntax highlighting it comes in at ~4kb to load a page, which I'm happy with; with them it's closer to 40kb, which isn't the worst.
Thank you for the idea!
Maybe what people are really looking for is simplicity, but this changes depending on the complexity of your project. Therefore the question becomes, "What is the best level of abstraction for this project?"
That changes every time you do something new.
- Pure HTML and CSS (except embedded iframes, like SoundCloud).
- hosted on GitHub.io
- domain name from AWS
Only "server side" thing I had to do was put a special file in the GitHub repo.
Super cheap too. I have a free pro GitHub account (student ftw) so the only real cost is the AWS hosted zone for dijksterhuis.co.uk
I also have subdomains I set up for machines I SSH into because I'm that lazy.
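If that "special file" is GitHub Pages' CNAME file (a guess, but it's the usual mechanism for a custom domain there), it amounts to:

```shell
# Tell GitHub Pages which custom domain to answer for
# (domain taken from the comment above).
echo "dijksterhuis.co.uk" > CNAME
# Then: git add CNAME && git commit -m "Use custom domain" && git push
```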
I do HTML from scratch, and Flask with HTML templates at most. Adding a couple of CSS files is all I'm looking toward, not a bunch of weird React code (which IMO looks ugly).
BTW: I believe Tailwind CSS is fantastic at addressing the OP's frustration.
For me, that requires a level of abstraction; I don't want to be monkeying with HTML if I don't have to, I just want to write the damn article. That's why I use Hugo: I feel like it hits a fantastic balance between speed for the user and ease of maintenance for me.
If you enjoy using Vi, you'll likely have a blast using this one.
I did cheat a little bit and used Spectre.css, but I have really tried to squeeze down the page size as much as possible.
Still unsure of what the content and goals are, but it has been a lot of fun to work on a very simple website.
Also, the portfolio website of the author is the opposite of tiny, with a 3.6 MB load.
"Many were increasingly of the opinion that they'd all made a big mistake coming down from the trees in the first place, and some said that even the trees had been a bad move, and that no-one should ever have left the oceans." - Adams
i.e. I just want to output a folder of HTML.
Or an S3 bucket with static hosting.
How do you visit this site?
If you see this message in Firefox for many sites (try some random sites you rarely visit) it's probably a local problem such as "Anti-Virus" software or a "security" appliance helpfully making stuff unsafe. Get rid of them if possible.
It is possible that, as the message says, there's actually "Possible Security Issue" just for you, or some subset of visitors, and we can't really help you diagnose that. Maybe your ISP, or your government, or that nerd neighbour kid who said she'd fix your WiFi are trying to snoop you. But probably not.
It's also possible the server hosting this went a bit haywire maybe under the load. It's clearly on some cheap bulk host and those can be a bit flaky.
Alternatively, is there a heuristic that reliably classifies websites in to tiny/not-tiny?
How could we get this started?
If gaming search engines is your bread and butter, is your idea really that novel?
It's about structuring pages and content in such a way that Google et al believe works best for the reader.
Due to Google-side analytics, it's also about bounce rates. This leads both to more lengthy pages with depth and to lengthy pages of bullshit.
Really makes you think huh?