Show HN: Bear – Minimal blogging platform (bearblog.dev)
685 points by HermanMartinus on May 26, 2020 | 349 comments



Small suggestion -- it seems like you're using the wrong units (or maybe abbreviations) for your displayed average page sizes. You're using lower-case b, which indicates bits, but I suspect you mean upper-case B to indicate bytes? Also, lower-case k is the correct prefix for 1,000, but lower-case m is milli, or 1/1,000. You want M for mega, which is 1,000,000.

Also, if you really want to be precise, you should consider whether you're using binary prefixes or SI prefixes, e.g. kB (10^3 bytes) vs KiB (2^10 bytes). That doesn't matter as much because the error is small at these lower values, but the casing errors definitely do matter. "mb" means millibars to me, not megabytes!


Huh... I've always described 2^10 bytes as a "kilobyte" (kB), but I've always hated the ambiguity, even if the difference between 2^10 and 10^3 is usually not important. Thanks to this comment, I learned there is a formal set of units which are distinct from their SI counterparts[0].

  1000^1 kB, 1024^1 kibibyte (KiB)
  1000^2 MB, 1024^2 mebibyte (MiB)
  1000^3 GB, 1024^3 gibibyte (GiB)
  1000^4 TB, 1024^4 tebibyte (TiB)
  1000^5 PB, 1024^5 pebibyte (PiB)
  1000^6 EB, 1024^6 exbibyte (EiB)
  1000^7 ZB, 1024^7 zebibyte (ZiB)
  1000^8 YB, 1024^8 yobibyte (YiB)
It looks like those units have been around since 1995, but they haven't seen much mainstream adoption. Too bad.

[0] https://en.wikipedia.org/wiki/Kibibyte
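To make the gap concrete, here's a small sketch (mine, not from the thread; the helper name is just for illustration) that formats the same byte count with both conventions:

    # Illustrative sketch: formatting a byte count with SI (kB = 1000 B)
    # vs. binary (KiB = 1024 B) prefixes, to show how far apart they drift.
    def human_size(n_bytes: int, binary: bool = False) -> str:
        base = 1024 if binary else 1000
        prefixes = ["", "Ki", "Mi", "Gi", "Ti", "Pi"] if binary else ["", "k", "M", "G", "T", "P"]
        value = float(n_bytes)
        for prefix in prefixes:
            if value < base or prefix == prefixes[-1]:
                return f"{value:.2f} {prefix}B"
            value /= base

    for n in (512_000, 34_359_738_368):        # ~512 kB page, a "32 GiB" DIMM
        print(human_size(n), "|", human_size(n, binary=True))
    # 512.00 kB | 500.00 KiB
    # 34.36 GB | 32.00 GiB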


Now that you're aware of them you're going to start seeing them everywhere. There's more adoption than you realize.


I've been aware of these for around 20 years, and I've not seen that nomenclature growing in popularity.

It doesn't help that it just sounds silly to my ears.


Nail on the head. They're just weird, at least in the context of English phonetics. That's the reason I discourage their adoption. I'd much rather we just all agree that kilobyte means 2^10 bytes. Or find words that aren't weird.

If I remember correctly, a big motivation for this change was the fact that disk manufacturers intentionally used base-10 definitions so they could advertise larger numbers for disk capacity. But presumably they still do that, and presumably people still often don't notice.


Sure; there are a lot of distinctions that serve to frustrate casual users. And this is hardly limited to computers - the rant that sticks in my memory is that of a family friend being absolutely infuriated that nuts and bolts can be the same size but have different threads.

The difference does matter, though, and matters a lot when you're working with storage at any scale. So people tend to use the right labels just to avoid ambiguity.


> I'd much rather we just all agree that kilobyte means 2^10 bytes.

Kilo means 10^3 though. I use a lot of SI units every day and that's what it always means for every one of them, just like Mega is always 10^6. These prefixes shouldn't have different meanings depending on the unit being used; that breaks SI. The SI prefixes were first adopted in 1795, before computing even existed as a concept, let alone computers existing as actual objects. The overloading of already very-well-established prefixes to mean something different was always a mistake, and can probably partially be blamed on the US's failure to adopt the metric system.


Why should we use 2^10? Bits matter in the small—a machine with a 12-bit word and 1024 words of memory or whatever was popular in the ’70s—but at the scale of gigabytes, individual bits don’t matter anymore, so may as well just use decimal because that’s what our number system is based on. I don’t see any point besides retro nostalgia to use base 2 after you move out of the each bit counts space.


SI prefixes are fine for mass/block storage and network speeds, because there's no particular reason they would fall precisely into buckets of powers of 2. But for CPU cache and system/GPU memory in particular, and maybe even some flash memory, it does continue to make sense to use MiB and GiB, because of the particular way that memory itself is addressed and packaged. Memory very much does fall precisely along power of 2 boundaries.

For example, I recently bought two 32 GiB DIMMs for my computer. I guess you could call them 34.3597 GB DIMMs, but that's strictly worse! Knowing that they're exactly 32 GiB makes it obvious that it takes 35-bit pointers to address every byte in one of those DIMMs (so they obviously require a 64-bit architecture to take advantage of!), or 36-bit pointers to address every byte across both of them.
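A quick sanity check of that arithmetic (my own sketch, not the commenter's):

    capacity_one = 32 * 2**30                # one 32 GiB DIMM, in bytes
    capacity_two = 2 * capacity_one

    print(capacity_one / 1e9)                # 34.359738368 -> the "34.3597 GB" figure
    print(capacity_one.bit_length() - 1)     # 35 -> 35-bit byte addresses for one DIMM
    print(capacity_two.bit_length() - 1)     # 36 -> 36-bit byte addresses across both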


Sure. I support the distinction of GiB for RAM and GB for everything else.


Kubernetes uses it throughout, so there must be some increased usage just because of that.


Notably, storage vendors and many operating system vendors use different units to describe capacity, so hard disks will always seem small as a result of this.

[edit: add qualifying "many"]


Who? Not macOS [1], or Ubuntu [2]. Does Windows still use base-2? They probably shouldn't [3].

[1]: https://support.apple.com/en-us/HT201402 [2]: https://wiki.ubuntu.com/UnitsPolicy [3]: https://www.tarsnap.com/GB-why.html


Windows uses KB to mean KiB. Raymond Chen explained it in a blog post.[1]

[1]: https://devblogs.microsoft.com/oldnewthing/?p=17933


That explains why Windows doesn't use KiB to mean KiB. It doesn't explain why Windows doesn't use KB to mean KB.


I distinctly recall GNOME on Ubuntu using kibibytes. This may have been in the GNOME 2 era. Has this changed?


Windows does. The df command does by default. Those are the two I use.


This is a hilarious nit-pick. In the context of an internet blog describing average page-size, it's completely obvious that mb == MB ... not millibars...

There's an insane number of abbreviations and acronyms that have multiple meanings in different contexts. How many Wikipedia pages have [disambiguation] in their titles?


It is not ambiguous, it is wrong. People will probably figure out what is meant from the context, but it will slow them down. Is there really any reason for using the wrong abbreviation other than laziness or ignorance in this case?


Yup. I definitely got caught doing double takes. Too distracted by millibars to even finish the article.


"mb" is not millibar; "mbar" is.


Maybe not officially, but in practice that's how it's used:

The unit's official symbol is bar; the earlier symbol b is now deprecated and conflicts with the use of b denoting the unit barn, but it is still encountered, especially as mb (rather than the proper mbar) to denote the millibar.

https://en.wikipedia.org/wiki/Bar_(unit)


Yeah, in general, there are many, many "unit collisions" that can really only be exactly interpreted from context. I think it would be great if everyone started using bracket notation (or similar) for prefixes, e.g. [k]B or [Mi]B. This is the convention used in pqm.js and it works really well. It would go a long way toward units that can be accurately read by a computer.
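A rough sketch of what the bracketed-prefix idea buys you (my own Python illustration of the convention, not pqm.js's actual API): because the prefix is delimited explicitly, a parser never has to guess whether a letter belongs to the prefix or the unit.

    import re

    PREFIXES = {"": 1, "k": 1e3, "M": 1e6, "G": 1e9, "Ki": 2**10, "Mi": 2**20, "Gi": 2**30}

    def parse(quantity: str) -> tuple[float, str]:
        """Parse e.g. '1.5 [Mi]B' into (1572864.0, 'B')."""
        match = re.fullmatch(r"\s*([\d.]+)\s*\[(\w*)\](\w+)\s*", quantity)
        if not match:
            raise ValueError(f"cannot parse {quantity!r}")
        value, prefix, unit = match.groups()
        return float(value) * PREFIXES[prefix], unit

    print(parse("1.5 [Mi]B"))   # (1572864.0, 'B')
    print(parse("1.5 [k]B"))    # (1500.0, 'B')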


But is the SI system ambiguous? I almost never have to think about prefix vs. unit; it is always quite clear. Of course, strange combinations can occur (MNm, meganewton-metre, as an example perhaps), but even those are unambiguous on second thought.

Collisions seem to be most prevalent in "IT" units or in improper SI usage. But maybe the latter is really the reason for the former in this case: if people separated SI and other units properly for IT units, it seems to me it would also be perfectly fine (but like me, a lot of people seem to have no idea of the correct definitions).


If you are strictly sticking to the SI system of units, you should be fine. However, some of us work in industries and countries (You know where) that don't fully embrace SI, and mixing other systems with SI is common.


> some of us work in industries and countries (You know where) that don't fully embrace SI, and mixing other systems with SI is common

That's actually pretty much everywhere since metric and SI are two different things. Use of the litre is deprecated under SI.


Pressure measurements/specifications are part of my daily work. I have never seen "b" referring to bar, not even in any of the many American (imperial) papers I have come across. So this is either properly old usage indeed, or very specific to certain regions or industries (like usage of relative vs. absolute pressure). Or Wikipedia being properly pedantic.


mb means millibars to you in the context of file size? How does that make any sense?


Well of course when you have enough 'file' in one place it starts to exert outward pressure


You should learn to understand MB/mb/Mb/mB for megabyte and mbit for megabit. Anything else is inviting error and sorrow, because many people use it that way.


This is flat-out wrong. Most times when I see a lower-case b in software or documentation it really does mean bit, not byte. The standards exist for a reason and you should follow them. Anything else is inviting error and sorrow.

And no one says "mbit".


No, you're wrong.

My network card is 1gbit/s, my disk space is 4TB, my internet speed is 100mbitps and DVD capacity is 4gb.

This is how everyone uses it and it's much clearer and better than B/b.


"""context"""


“Bear” is a well-known note-taking and writing app https://bear.app/


That's what I thought this was - a bit of automation to publish right from the Bear app.


Bear is pretty dumb and limited. If it covers your use-case, great, but if you step outside of the narrow path at all, it fails.

I actually tried to use Bear as a MD-based publisher to my WP blog, and it just ran aground badly.


Ditto. I thought this was a new product by Bear. Got me excited for a bit.

I wonder if trademark issues would kick in later for this.


What a world we live in that we have to worry about trademark issues over one of the most common animal names a child could think of.


I mean, if they wanted to reserve the use of this common animal name across all industries, that’d be ridiculous, but it would seem pretty reasonable to worry about confusion between a cloud notes app and a cloud blog app. Certainly easier to think about it now than if this service takes off.


These are two different products. Blogging != note taking. And it doesn't seem like there's malicious intent. You can't use a generic name and expect others not to come up with the same idea.


Blogging !== note-taking, but you could definitely make the argument that a blog post is semantically a public note.

Apple, Fox, Shell, Target...any of these generic words ring a bell?

I’m sure there was no malicious intent, and it’s very possible that these two products will coexist without any further issue.

But as evidenced in this very thread there is the potential for confusion, which is the whole point of trademarks, and the reason why I’d have an easier time incorporating “Apple Surfboards“ than I would “Apple Keyboards”.


I wasn't saying that it isn't possible. I am saying that it isn't reasonable.


I saw the headline and immediately assumed it was an app by the same Bear company. I would guess that almost every single Bear Notes user would have the same reaction.


I'm going to go start a new shoe company named Puma. Hey, it's a common animal name, no problem.


Is it known among non-Mac/iOS users? I've never heard of it before. Not to mention that by using such a generic name you're just begging for collisions.


I was trying to play on the word "bare". Yeah, I realised this a bit too late


I have to say I like the character-based logo. Inventive use of Unicode, I'm guessing.


It’s a fairly well-known kaomoji. Kaomoji are the horizontal counterpart to smileys, and they can get very creative.

If you set your iPhone to the Japanese keyboard it comes with hundreds of these built in

    ︎('ω'︎ )


Pro tip for iPhone users: just type かおもじ (kaomoji) and slide the suggestion bar: you have all the kaomoji you want.


Bold to assume this person has an iPhone :)



Apple users seem to think everyone uses Apple products.


I was actually wondering why the note taking app was named "Bear".

Maybe they thought the same way as you do :)


This is a great, great app. I use it and love it.


I agree, Bear is great. I miss it - I no longer have an iPhone and am Windows-dependent :-(. I have made the best use of OneNote that I can - but the ease of categorisation of thoughts within the Bear app will always be the best imo.


Try Joplin. As a heavy user of markdown for notes, I wanted something cross-platform across OSes and devices. Joplin fits my requirements perfectly.


I want to love Joplin (open source! markdown!) but I just can't get past how ugly it is. Its iPhone app is just a horrendous mishmash of colors and non-native UI elements.


Joplin is cool because you can add your own CSS to customize the desktop app. I realize not everyone is interested in doing that, but I’ve been an advocate since I tweaked the styles to my liking [0]. Agree with you that the mobile app styles are very rough, but the community is very active and open to suggestions / improvements [1].

[0] https://github.com/amandamcg/joplin-theme [1] https://discourse.joplinapp.org/


I'm only a week into using it, but check out Notion. I really love it so far. The UI has blown me away in both beauty and how it gets out of the way.


Notion is amazing, and they recently removed the cap on the amount of "blocks" you're allowed to use on the free plan[0].

No affiliation, just a happy customer. :-)

[0]: https://news.ycombinator.com/item?id=23236786


I wish Notion had a self-hosted version. I store a lot of things in Joplin (previously Evernote) that I'm just not comfortable having on 3rd party servers :(


Agreed, Notion is awesome.

I have even stronger feelings about Roam though. Notion is "better" than just about anything else like it, whereas Roam is different -- there is nothing else like it.



Yes!



Might be nice for Emacs / orgmode users?


Joplin looks really interesting - thank you for the recommendation!


Try out Notion. They recently changed their free plan to have unlimited blocks.


I have given them a go - but I had some privacy concerns. I may look into this again, thank you.


It's also a tool that helps generate a JSON compilation database [0] for Clang Tooling.

[0] https://github.com/rizsotto/Bear


Thanks for linking that. I've been using the compile commands for autocompletion in vim, so it's nice to know I'm not tied to cmake for that.


There's about a million different note-taking apps now (and they're about 95% the same).


And zero of them are better than github's gist with dark mode and proper markdown support.


Since HN seems to be on a blog-kick lately, I'll repost the idea that I'm still waiting for someone to build:

A blogverse of some kind that allows for algorithmic discoverability and aggregation (a la Medium) without the bullshit/terrible UX.

The real value proposition of Medium is that a well-made aggregator benefits readers and writers alike. Readers find more authors they like, writers find more audience. There are also network effects with shared comment logins, inter-blog citations, etc.

I really think a blogging renaissance is waiting to happen. These ingredients plus a business model not reliant on ads, massive js overhead, and other nonsense could jumpstart it.


What I want is a Yahoo-style directory for blogs. Blog owners can put their blog in exactly one category. Users can star the blogs they like, similar to GitHub, and identify the low quality click generators/marketing blogs.

I personally find Medium to be a horrible way to find content. Maybe it works for new content, if that's what you're after.


The success of search engines prompted Yahoo to ditch its original product (the curation and categorization of the web) in favor of its competitors' automated (and thereby game-able) crawl-index-search approaches.

Now, decades later, there seems to be a shared yearning for the curated web, perhaps in response to the low signal:noise ratio of search. Isn't it funny, how the world works in cycles?

Curated search (domains chosen by a set of humans with no financial conflicts of interests, with some grokkable categorization and full-text search) might be the nirvana we're searching for. I think the GP has a point, that the need for a sustaining business model tends to strongly conflict with this equilibrium.

Wikipedia has sort of evolved to partially fill this niche, but it periodically struggles with funding. I agree there's nothing filling a similar niche for blogs yet: maybe GitHub will evolve there, but it will face the same pressure as other platforms owned by public for-profit companies.


I was probably one of the last users of their original directory. It was my browser homepage until the day they removed it.

I eventually stopped using it because it didn't keep up. These days, search is for many purposes completely useless. If I want to find someone to do work on my house, the last place I'll go is Google. It's truly amazing just how worthless the results are. You'll get results from Michigan and Florida and Oregon all for the same search, in the same town, and claiming to be a local business. I imagine it's a fraud-ridden garbage dump if you actually try to use Google to find businesses to do work for you.

On the topic of blogs - not completely useless, but overrun with shallow, uninformative trash posts by SEO experts. I think Google is more vulnerable now than at any time in the past 20 years.


> You'll get results from Michigan and Florida and Oregon all for the same search, in the same town, and claiming to be a local business.

It's worse if you are non-US. Google seems to think that anywhere in the UK is local to me for businesses, and that's AFTER I've added loads of filters to stop American results from showing.


> Curated search (domains chosen by a set of humans with no financial conflicts of interests,

The problem is that the second it becomes an authoritative source, every spammer and marketer will start trying to game it, just like they game SEO now. Corruption is a huge problem with that approach. I'm old enough to remember how much influence Yahoo category editors had, and that many asked for money to include you in the list, or used the position to simply block all competition to their own sites. The same story is happening with Wikipedia, except it's more about personal wars and agendas than straight-up racketeering.


Yeah, this is/was a real problem, and I don’t know if there’s a perfect solution. Anecdotally, @dang and friends do a very good job “editing” hn. They’re presumably paid well for their job, and seem to be passionate about their roles (thank you!).

I feel like the key to good curation might be good moderation. In other words, allow people to submit links to be on the list, but there must be humans to control for SEO and marketing spam (like there is on HN). Making those humans incorruptible is hard but maybe not impossible, if they’re well compensated and hired thoughtfully.

That being said, I’m doubtful that such a strategy of moderation would scale to hundreds of thousands (or tens of millions, shudder) of active voters and submitters. At some point, it would need to federate, with different mods owning different lists, and it would be difficult to avoid a devolution into Reddit.


> Curated search (domains chosen by a set of humans with no financial conflicts of interests, with some grokkable categorization and full-text search) might be the nirvana we're searching for.

Moving away from ads towards human-centric curation is a primary design feature of the decentralized web.


Wikipedia doesn't struggle for funding. It just happens to be like many other organizations that eat up whatever funding they get regardless.


Not sure why you're getting downvotes. Wikipedia looks to be well funded indeed

https://en.wikipedia.org/wiki/Wikimedia_Foundation


I run a Yahoo-style directory for blogs and articles I find: https://href.cool/. I'd personally like to see more personal directories rather than big monolithic directories of everything - those tend to collect spam.


This is great. Will you allow other "friends" to create their own directories too? I'd like to contribute.

Just found: https://www.kleptones.com/blog/2012/06/28/hectic-city-15-pat...

Thanks!


Certainly - if you email me (kicks@kickscondor.com) a link to your directory, I will absolutely include it here: https://href.cool/Web/Directory

There is also some discussion about this kind of thing here: https://forum.indieseek.xyz Good to meet you, Alessio.


I like this idea, but I feel like it's shifting the problem to "how do you discover cool directories made by other people?"

Maybe an idea would be something like what you have, but using some sort of standard that could be pulled down similar to rss feeds?


Well, it’s my opinion that technology can’t solve the discovery problem. I know we want it to. But at some point the technology has to evaluate the content. It can’t - so technology gives the content to humans to evaluate. However, it can’t evaluate the humans’ capabilities. :/

You discover a directory like mine just as you would discover any other link, johntash - by coming across it as you read, perhaps on Hacker News. If there were more directories, they would be easier to discover. They happen to be richer discovery points than a normal blog or profile page.


Fun anecdote -- in the late 90s I maintained several web sites (personal/hobby ones) and by far the best source of traffic for them was Yahoo directory pages. I didn't know the SEO game (if there was even one at the time) and while search engines brought a small amount of traffic, majority of visitors came from Yahoo. When I launched a new hobby site, the goto marketing plan was to apply to add it to Yahoo. Granted, we're talking about the range of dozens to maybe a hundred or two visitors a day, so it's not web scale :)


A question for you (and everyone): how important do you think it is that the algorithmic discoverability be married to a single platform/aggregator? What if you just had better algorithmic discoverability across all writing on the internet, regardless of where it's hosted?

I get that that would miss some of the benefits you mention (shared comment logins for example) but I'm wondering if people think it would capture 80% of the same benefits or, like, <50%. And I don't think for the discoverability to work there's any innate reason it has to be restricted to a single blogging platform.


Personally, I've started seeing algorithmic discoverability as an anti-feature that mainly serves the interests of the attention economy.

For my part, I want more long-form, thoughtful articles that offer an enriching read. Not only have we got a good decade or two's worth of experience showing that no algorithmic discovery system ever favors that kind of thing over content that's morally equivalent to Five Minute Crafts, but I've also got a couple decades' worth of experience telling me that, since long-form bloggers tend to link each other quite a lot, I never needed an algorithm to help me with discovery in the first place.

If I want anything, it's algorithmic filtering: Take the feeds I'm already subscribed to, and filter out the stuff that I tend to skip over without reading. Because my blog feed already delivers me a couple hours' worth of reading a day, so I can afford to be choosy.


Yeah, this is very true. I may have overstated my initial idea. When I think more about it, maybe what I'm looking for is handcrafted curation, but done in a way that lets anyone share their curation specs and discover relevant things to curate from. To me, that suggests some kind of algorithmic component, maybe in the form of search, voting, etc., but I wouldn't necessarily want the whole thing driven by either machine or mob.


Like stitchfix, but for magazine articles, blog posts and books. I might be cynical, but I think humans do this much better than any algorithm I've ever seen. :)

For myself, I usually read in two "phases". I collect material from sources that are usually high quality for me, and do some superficial skimming to trim out the content I don't particularly care for.

Later, when I'm more "in the mood" or have a long period of time, I pop() and read(). The stack gets pretty large at times, but it only takes a couple days of vacation to blow through most of it. If I pass over something in the stack enough times, it gets free()'d.

I don't know if I'd trust an algorithm to do either of these stages for me. I'd trust an algorithm to provide input to phase (1), but never to replace phase (1).


> If I want anything, it's algorithmic filtering: Take the feeds I'm already subscribed to, and filter out the stuff that I tend to skip over without reading. Because my blog feed already delivers me a couple hours' worth of reading a day, so I can afford to be choosy.

I think that Newsblur does this.


"All writing" is probably too broad, but I think there's an interesting "meet-in-the-middle" solution where the site is a kind of syndication machine that is creating traffic between separately hosted blogs.

I appreciate people wanting to own their own writing, having permanence of data assured, etc. so this is probably worth thinking about. At the same time, many outside the tech world don't want the overhead of hosting or setting up their own blog.

I wonder if a hybrid would be best? Host yourself option or host-by-us option, with a pricing structure that accords.

The point is really to maximize one's ability to traverse between writers on subjects that interest you, so the physical location of the data is secondary.


I tend to be skeptical that "$x but better" is a viable business in the VC era or maybe on the internet at all. On the internet, where publishing is easy, but distribution is hard, it seems like there's already a natural tendency toward winner-takes-all. Execution matters somewhat, but at some point, I think it comes down to who can throw more capital at the problem.

I'm interested in this problem space, because current social media tools leave me somewhat dissatisfied. But I'm also skeptical about how you'd build it successfully.


You may be interested in https://able.bio

Myself and a partner are building this, here are a few points:

- It's a community to read and write about building things with technology

- Clean, fast and light UX + Markdown editor (https://able.bio/new)

- Bootstrapped with low overheads. No outside investment removes the pressure to grow at all costs and the lapse in integrity that we see more of each day.

- No data lock-in. Export your posts in a single JSON file (containing Markdown + HTML versions) accompanied by all images.

- We're finishing up a big set of data portability / data respect features, which we plan to announce soon.

The aim is to build a community of capable people with a genuine interest in technology and attach a job board to the site. Companies can then pay to display their vacancies on the job board and users can take a look whenever they like. No popups, banners or any of that dodgy/spammy crap getting in between users and the reading/writing. In this age we see integrity towards our users as a differentiator.

Building it is fairly straightforward and fun. Some learnings we've gained in terms of 'jumpstarting' it:

- a lot of people are vehemently sceptical after the Medium debacle.

- creating a feed that prioritises good content without "censoring" dev spam or trivial posts is tricky when coming off a smaller user base. For example, upvotes can have outsized effects.

- getting a regular volume of good content so that people use Able as a source of news/inspiration/learning.

We've had some great posts from people but we need just that little bit more to get the flywheel going. As soon as someone posts something, activity on the site goes up but then dies down again. It's the classic building vs. promoting trade-off. However, we've chosen to get data portability and respect right first, as we believe this is fundamental. We're wrapping that up now and then have loads of learnings and ideas we want to write about.

We feel that same potential 'renaissance' you're talking about and this is how we want to try and activate it. If you feel inclined, have a look and let us know what you think.


Regarding the upvoting problem: try using only downvotes and sort by downvotes*age. This should solve a lot of problems that usually come with upvoting systems, e.g. false negatives (good articles with a bad score).
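A minimal sketch of that ranking as I read it (the field names and the age unit are my own assumptions): score each post by downvotes * age and list the lowest scores first, which is also why the reply below points out that unvoted posts always float to the top.

    from dataclasses import dataclass

    @dataclass
    class Post:
        title: str
        downvotes: int
        age_hours: float

    def rank(posts: list[Post]) -> list[Post]:
        # Lower downvotes*age sorts first; zero-downvote posts score 0 regardless of age.
        return sorted(posts, key=lambda p: p.downvotes * p.age_hours)

    posts = [
        Post("fresh, unvoted", downvotes=0, age_hours=1),
        Post("old but liked", downvotes=2, age_hours=72),
        Post("old and disliked", downvotes=40, age_hours=72),
    ]
    print([p.title for p in rank(posts)])
    # ['fresh, unvoted', 'old but liked', 'old and disliked']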


This would seem to just favor new posts with no votes over anything else.


Nice. Is there an RSS feed?


Originally just had feeds for individual users but added a global one just for you: http://able.bio/rss


Thanks! Sorry for the late reply, subscribed!


Personally, many of my good reading suggestions come from a trusted network. I trust a few people to only recommend content of a type that is high-quality and interesting to me.

I do think it's important to have an element of discoverability for new content (otherwise there's no real way for a writer to bootstrap into visibility), but I think an important element is being able to follow and trust content aggregators. Maybe algorithmic curation will be trustable in a decade, but right now it feels too gameable, and too easy to degenerate into thinkbait.


I also felt that I value personal recommendations much more than from huge aggregators or algorithms. I wanted to give these 1-to-1 recommendations a better vehicle than WhatsApp. So I created an app for it: https://onelink.to/listo It's still early, but already usable.


> plus a business model not reliant on ads

Is anyone anywhere close to this?


Medium has a revenue-sharing thing where a Medium Pro user's fees are allocated to writers depending on what the user reads. I don't know anyone who has Medium Pro though.


Weirdly, Medium seems to be the closest.


I am gonna work on this this evening. Thanks.


Same. So far - https://awesomeblogclub.searchableguy.now.sh

it's not ready yet but I would love to hear more about what features people want. Email is in the profile.


I applaud the effort, looks very nice - I will contact you.


Sure thing. Going to open source it after it's done. I have been thinking of using activity pub and making discovery decentralized so people can host their own curation and share with each other.

I will have to look more into it though.


I just hacked together a little blog directory recently - woozymans.com.


I don't even want to have a medium.com account for their crappy UI and hostile paywalls, but I doubt those who use medium regularly actually discover new blogs. HN is one of my primary sources to read new things, and I'm already blind to medium's footer read-more links.


Things I would like in a blogging platform:

* Generate lightweight static website

* Good clean default CSS so I don't have to mess with it

* Automatically upload website to CDN and trigger expirations as necessary.

* Self-hosted

* Runs on AWS Lambda or any other Function-as-a-Service equivalent

* Has a super lightweight CMS that I can easily use on both desktop and mobile, so if I have ideas I can start writing anywhere, and can also make minor corrections to existing posts while on the go.

* The CMS can be a frontend to git, but git is hard to use on mobile, so I don't want the CMS to just be git.

If anyone knows of something that meets these requirements I'd be super grateful!


If you have static generated website and a CDN, why do you need Lambda/Function? Or do you mean these as a set to pick and chose from?


The lambda would generate the static website after I type words into a webpage and then upload that site to the CDN.

It would also be the code behind the web page where I write the words.


Looking at your profile, I get the sense you are trolling :) but I have a bad time imagining the need for the elasticity of lambda for those single-user actions.


I'm not sure whether it's what they meant, but I could see the "admin" cms portion being hosted on lambda.

You'd load a static editor page, write your content, and then post it to a lambda endpoint. That lambda would then generate static pages for your content and push it to s3 or wherever.


It’s not the elasticity that matters, it’s the fact that I’d only have to pay for it to run when I want to update the site.


Except for the "self hosted" and "front end to git" parts, we're building that as a commercial service.

https://pluma.cloud/

Not much to show yet but you can follow us on Twitter for updates in a couple of months: https://twitter.com/plumacloud

We're also considering providing our CMS as headless via an API so that you can connect it to your SSG, make your own template, host it wherever you want, etc. We haven't decided yet on the pricing for that, but it would be much cheaper than our main product.


I think the mailto: link on your page is misspelt as mailt:


Thanks!


You have many requirements to blog. Blogging is substance over style. Not much required.


https://blot.im/ might be close to what you want. I think it's run on ec2 instances though.


It's not self hosted.


It’s close to being conveniently self-hostable:

https://github.com/davidmerfield/Blot


Sounds like you just need a way to modify a remote filesystem, then trigger a build step when you're done with the modifications.

Assuming it was wrapped up in a nice UI, would that be missing any features you need?

EDIT: Put more concretely, if you had a site hosted on S3, all you need is a way to modify the source files of that site, either directly or by modifying files in another directory and running a transform step to produce the final result?


For me, yes, it would be missing features. I have a self-hosted blog, the source is on GitHub, and there's a trigger on commits.

This sucks compared to wordpress because the UX in wordpress is much better than github and especially because there is no "preview".

There's also no easy way to upload images etc. Certainly not as easy as wordpress.

I'm not going back. I'm just saying "modify the remote file system and trigger a build" is far from what I want.


Sounds like previews is the main thing missing though? I don't consider github to be similar to a remote filesystem, because like you said certain simple things like uploading images and other arbitrary data isn't easy.


That pretty much sums it up!

It needs to have some sort of awareness of file changes to expire items from the CDN too.


There's no simple commercial solution that does exactly this that I know of, but you can get very close with Google Cloud Functions, a GitHub Pages site, and a Google Sheets spreadsheet as a datastore. Not static, but it is fast, and you get a 'CMS' for free (the spreadsheet). Plus all of these services can be used completely for free.


I'd advise just running Wordpress or Ghost. Even a small VPS is more than enough scale and you get a working blog out of the box with easy extensibility for the future. And you can still use a CDN in front.

People think a static site is enough without realizing SEO, RSS feeds, comments, etc, are all things you might need and would have to rebuild yourself.


There’s nothing Wordpress can do that you can’t do with a static site. Even comments can be easily bolted on.

I don’t want to pay $5/mo for a VPS when I can run my whole site on AWS for 32 cents a month.


Then what is "self-hosted" supposed to be? What CMS are you going to use? Where's that going to run?


Control.

With self hosting I can control the code, what it does, the cost, where it stores data, how backups are made, when new versions and updates are deployed, etc.

It’s always a trade off on control, and my comfort point is “on AWS”.

For some people they want their own servers. Some people want to own the network block. It’s just what level of comfort you have with each type of control.

I don’t know what CMS I’m going to use, that was my question. Which one operates the way I want?

It’ll run on AWS lambda or an equivalent.


Then try Netlify's CMS: https://www.netlifycms.org/

It's a single-file app. You can host it yourself, and it edits static files in a github/gitlab repo. The deployment after push and hosting with custom domains and CDN is already handled by Netlify. And it's all free.

If you use Jekyll then you can skip Netlify and GitHub will build and serve your site.


Netlify is awesome but isn’t self hosted. I don’t control the deployment and I don’t have access to the logs.


I'm struggling to understand what you mean by self-hosted. What does AWS Lambda do in this scenario? Surely not host the blog, so it must be the "cms"? At that point I'm not clear why you'd rule out netlify, as it's no longer "self-hosted." Please clarify! :)


My imagined workflow is

* load a web page

* write words

* hit submit

* this triggers a lambda function that generates a static website and puts it on s3

* this triggers another lambda function that pokes the CDN and expires anything that changed

By self-hosted I mean I control all the parts. I can change the generation code, the CDN upload/expiration code, I can change the output, I can view the logs, all within my own account.
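For what it's worth, a hypothetical sketch of the Lambda end of that workflow (boto3-based; the bucket name, distribution ID, and the Hugo build step are all placeholders standing in for whatever you'd actually use):

    import mimetypes
    import pathlib
    import subprocess
    import time

    import boto3

    BUCKET = "my-blog-bucket"                 # placeholder
    DISTRIBUTION_ID = "EXXXXXXXXXXXXX"        # placeholder
    s3 = boto3.client("s3")
    cloudfront = boto3.client("cloudfront")

    def handler(event, context):
        # 1. Regenerate the site from the submitted content (build step is a stand-in).
        subprocess.run(["hugo", "--destination", "/tmp/public"], check=True)

        # 2. Upload everything that was generated.
        changed = []
        for path in pathlib.Path("/tmp/public").rglob("*"):
            if path.is_file():
                key = str(path.relative_to("/tmp/public"))
                content_type = mimetypes.guess_type(key)[0] or "application/octet-stream"
                s3.upload_file(str(path), BUCKET, key, ExtraArgs={"ContentType": content_type})
                changed.append("/" + key)

        # 3. Ask the CDN to drop its cached copies of the uploaded paths.
        cloudfront.create_invalidation(
            DistributionId=DISTRIBUTION_ID,
            InvalidationBatch={
                "Paths": {"Quantity": len(changed), "Items": changed},
                "CallerReference": str(time.time()),
            },
        )
        return {"statusCode": 200, "body": f"published {len(changed)} files"}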


* Use a texteditor

* Hit save

* Let Hugo/Jekyll/Pelican/whatever compile

* FTP/SCP onto a server

* It's a blog, nobody needs a CDN for that


* Use a texteditor

Where? On my iPhone?

* Hit save

Where does it save it to?

* Let Hugo/Jekyll/Pelican/whatever compile

Where does that happen? If I want to write on my phone it won't compile on my phone.

* FTP/SCP onto a server

From my phone? Also I specifically don't want to run a server, or more specifically, I don't want to pay for a server to run all the time.

* It's a blog, nobody needs a CDN for that

If you're writing about high scalability, it's a bit embarrassing and ruins your credibility if your site isn't speedy worldwide.


Write on your phone, commit to git, have some CI build it and deploy. Simples :)


> Write on your phone,

Ok that part's easy.

> commit to git

How? I've never seen a good git client for the phone

> have some CI build it and deploy.

Step one, draw a circle. Step two, draw the rest of the owl. :)

While this might be a fun exercise, it's more involved than deploying some code to Lambda.


I don’t know... I have a GitHub Actions workflow that, on a commit, builds a site (mkdocs) and uploads it to a gh-pages branch for hosting on GitHub. You could also push the generated HTML to S3.

That coupled with Working Copy could be a pretty good mobile workflow.

For me, this was much easier than deploying Lambda code and managing How all of that worked together. Using Github Actions wasn’t much more than setting up a deploy script.


Fwiw, Working Copy on iOS is a good git client.


Have that. It sucks. The UI on GitHub sucks (I need something I can access anywhere, not just my desktop; I thought GitHub might cut it, but it doesn't). Uploading images sucks. No preview sucks. I suppose I could trigger a staging build, but that means I have to commit just to see a preview and then wait for the entire process, vs. say WordPress which has instant preview.

Managing posts also sucks. I can pre-date posts but I can't at a glance get a list of posts with data about each one. Instead I just get a file listing and have to open each file and read its headers to see its metadata (date, title). Published vs. unpublished also sucks. I have folders for my posts and another folder for drafts, but moving something from one to the other is no fun compared to just checking a box in WordPress.


I've built something like this using Hugo as an SSG and a custom Micropub backend that commits new content to a Git repo and pushes it to a Git hosting platform. That then triggers a CI build for the site generation and upload to the webserver, as well as a CDN purge.


Awesome! Is your code available anywhere?


Maybe a static blog (e.g. Hugo), hosted on Netlify and making use of their netlify-cms package?


Netlify is awesome and I love their service, but it's not self hosted. In particular you have to pay lots of money to get logs.


I wonder if you can self-host the netlify-cms thing, though. That might be just what you wanted.


From what I can tell from reading the docs on netlify-cms, it only works if you use Netlify as your CDN.

I could very well be wrong though.


Netlify CMS uses the GitHub (or bitbucket, etc) API to modify your repository with the new/updated content. You can self-host the entire system, or have netlify handle the GitHub auth while you self host your website.


But what happens after it's committed to git? What turns a git checkin into a CDN distribution?


You could use GitHub actions to do the build and send it off to some CDN to deploy, or if you were self-hosting the whole thing, you could have a Flask app listen for a webhook and re-build your site when that's received. Or simply pull in from the git origin and rebuild with a cron job.
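A rough sketch of that Flask-webhook variant (illustrative only; the repo path, output directory, and Hugo build command are placeholders, and a real deployment should verify the webhook signature):

    import subprocess

    from flask import Flask

    app = Flask(__name__)
    REPO_DIR = "/srv/blog-source"       # placeholder checkout of the blog repo
    OUTPUT_DIR = "/srv/blog-public"     # directory the web server serves

    @app.route("/webhook", methods=["POST"])
    def rebuild():
        # Pull the latest source and regenerate the static site.
        subprocess.run(["git", "-C", REPO_DIR, "pull", "--ff-only"], check=True)
        subprocess.run(["hugo", "--source", REPO_DIR, "--destination", OUTPUT_DIR], check=True)
        return {"status": "rebuilt"}, 200

    if __name__ == "__main__":
        app.run(port=8000)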


I guess what I'm getting at is that doing that would mean patching a lot of pieces together, not a unified system. I'd gladly make a one-time payment for a unified piece of software that does everything (or, of course, gladly adopt something open source).

Also, to your specific example, does there exist a Flask app that runs on AWS lambda that can listen for a webhook and then build a static website?

Does there exist code that I can put into Github actions that builds a static website and uploads it to a CDN?

That's basically what I'm looking for.

Of course I could write my own solution or cobble it together with a bunch of moving parts, but my whole point is that I'm looking for an existing package that already does it.


If you use a tool like the Serverless Framework, you can "cobble together the moving parts" and store it into GitHub and make it easily deployable into the cloud. Putting together pieces is what software development is. If you are used to OOP this is even more relevant as the entire idea of OOP is to create discrete pieces of code and compile them together.

PHP, for example, used to be a "big patchwork" of separate files working together, but the right tooling came along to think of it as "a unified system". Same thing with Serverless and the Serverless Framework.


Just found this module as a GitHub action:

https://github.com/marketplace/actions/s3-sync

Shouldn't be too difficult to hook this up to a build command for hugo, jekyll, or whichever static site generator you'd prefer. But this gets away from the self-hosted part of what you were saying a bit.


I absolutely love Netlify.


Search around a bit, there's a static website generator that runs on git and is hosted by CDNs.

I can't remember the name, but it's commercial and targets major companies for making their landing pages.


Hugo?

gohugo.io



Quick Question: Why is self-hosted a thing that's highly desirable on HN?


Control.

With self hosting I can control the code, what it does, the cost, where it stores data, how backups are made, etc.

It’s always a trade off on control, and my comfort point is “on AWS”.

For some people they want their own servers. Some people want to own the network block. It’s just what level of comfort you have with each type of control.


Isn't there a downside here?

What if your content doesn't have enough views / you get bored / life changes so you can't afford server costs anymore / you die? Your server will expire eventually, and there goes your content. web.archive.org might have some sites archived, but many blogs won't have been archived, so their content is just gone forever.

I've self-hosted many platforms, and many have died, perhaps due to running costs or lack of need anymore. It's partially why I'd never self-host my email, for example.

You can't know the future, and if your content is gone, it's gone for all of your audience (or potential future audience). Perhaps an ideal solution is some kind of self-hosted platform that mirrors content into a forever-public external repository, and hence preserves it, even if your hosting ceases to exist.


Yeah. There's a wordpress.com blog I frequently go back to and reference and the author passed away in 2014. Thanks to it being hosted on wordpress.com it will stay up there for the foreseeable future. Had it been relying on monthly payments for hosting and yearly payments for the domain it would probably have been down already by the time I realized he was no longer with us (some three months after the fact). And at that point, not all content would've probably been in archive.org and thus lost forever.

I also have a bunch of old blogs that I've just given up on at some point and now they're gone forever.

Your last point is kind of what archive.org's Wayback Machine does. Haven't checked whether they have an API where you can submit URL's you publish* or if you'd have to submit new posts manually, but this could be a decent solution. It wouldn't cover non-tech people though.

*) I just now quickly browsed through their API doc but couldn't find anything clear, other than uploading random files


Cool, thanks for expanding on that man.


You're not dependent on the service when it goes away or gets bought.


I love that this exists, I wish I'd thought of it first, and I like you for making it.

I think you should improve the default stylesheet a bit, I like http://matejlatin.github.io/Gutenberg/.


Ooh, this looks neat


I think that product template could use either `aria-hidden="true"` or `aria-label="bear"` HTML attributes for screen readers' sake (and reconsider the title and OG properties); I'm not an expert in this area, nor do I have a screen reader at hand, but I guess that

    ʕ•ᴥ•ʔ
would sound like 'pharyngeal voiced fricative - bullet - letter ain - bullet - glottal stop', which is hardly beneficial for screen reader users. Cool Unicode "picture" though; it's a pity such doodles hurt accessibility (sad smiley).


Given that we can't expect everyone to use aria attributes, shouldn't screen readers just have a list of all widely used smileys with descriptive names (if they don't have it already)?


Good point and an interesting question indeed. I finally tried a simple speech synthesis demo [0], and in Win10 Firefox it really does read out some basic ASCII smileys as their descriptive translation (and ignores any other run of non-alphabet characters, with a few exceptions like underscores and asterisks).

I compiled all the predefined textual emoticons offered by the Gboard app on Android [1] and let the Narrate speech synthesis in Reader view (F9) read them aloud (it presumably uses the same synthesis as the example). Quite predictably, most of them weren't heard at all, but the simple ones that were surprised me.

So most probably there is no problem with that particular Unicode bear in screen readers after all.

[0] https://mdn.github.io/web-speech-api/speak-easy-synthesis/ [1] https://gist.githubusercontent.com/myfonj/f6b0ed1c783d16a79d...


Wouldn't twtxt be a better candidate as the most minimal blogging platform?

https://twtxt.readthedocs.io/en/latest/


This is pretty rad


Used this years ago for twitter https://www.floodgap.com/software/ttytter/ It was amazing... everyone thinks you're working with the terminal open ;)


What are the limits on number of posts, post size, images...? What is it going to be priced at? Will there continue to be a free tier if/when it becomes paid? Sorry, I can’t try any platform that doesn’t answer questions about its future. I hope you’d provide more details on the homepage.



I'm slightly amused by the domain hacks, but also concerned about a few of the ccTLDs, especially when it comes to user generated content and blogging/opinions. For example, the leader of .ph routinely calls opponents gay, is not known for human rights or free speech, and I am curious how this might reflect on it.

I have the same concerns with user-generated content in/on .de and their Netzwerkdurchsetzungsgesetz law, the mandatory posting of an Impressum on your sites, etc.

I have seen bit.ly suspended before by Libya for merely providing redirects to content they don't like.


I mean, the leader of the country represented by .us brags about raping women, threatens nuclear wars, etc.... no country is uncontroversial.


> brags about raping women

by what definition of "rape"?


For other bear-themed writing tools:

* https://bear.app/


For anyone who is interested in this - it is great and you can export all your Apple notes.app stuff (inc images) with the mac app Exporter, then import into Bear.app.


I can't recommend Bear enough if you're using all Apple devices. I've used many note-taking apps but Bear is the only thing that gives me Markdown note taking + not getting in my way of writing. I especially like the infinitely nested tags in Bear; such a time saver when you can drop a #hashtag/anywhere/with/infinitely/nested/hierarchy .

Disclaimer: I'm not affiliated with Bear in any way, just a happy customer.


I could have written exactly the above until a Bear sync bug ate my notes. Bear customer support had nothing to offer but condolences.

It was during an iOS beta on one of the devices, so perhaps not their fault—but I’ve been unwilling to pay since.


> It was during an iOS beta on one of the devices

Bear uses iCloud storage which has on multiple occasions trashed people's data during iOS beta periods. It's not "Bear Sync", it's just another iCloud beta sync problem. If you value your data, don't use iOS betas. It's that simple.

If they had the choice, many developers would disable their apps on beta devices because of issues like yours.


I used to use Bear too until I discovered notion.so; now I use nothing else (also not affiliated with Notion or Bear, but used to use Bear for everything).


I'm surprised nobody has attempted to put the content in the URL yet (displayed on a static page with styling, using JS [needs a tag filter...] to insert a URL parameter into some node). It would accommodate at least 2KB of text, local caching, and fast hosting all in one.
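As a back-of-the-envelope sketch of the idea (mine, with a placeholder domain; a real page would do the decoding in client-side JS), the post can be zlib-compressed and packed into a URL-safe base64 fragment:

    import base64
    import zlib

    def encode_post(markdown: str) -> str:
        packed = base64.urlsafe_b64encode(zlib.compress(markdown.encode("utf-8")))
        return "https://example.com/view#" + packed.decode("ascii")

    def decode_post(url: str) -> str:
        fragment = url.split("#", 1)[1]
        return zlib.decompress(base64.urlsafe_b64decode(fragment)).decode("utf-8")

    post = "# Hello\n\nThis whole post lives in the URL itself.\n" * 20
    url = encode_post(post)
    print(len(post), "chars of markdown ->", len(url), "chars of URL")
    assert decode_post(url) == post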



Yes, that's what I mean. But only Hashify still exists apparently...


Yeah, the pages linked from those HN links don't seem to exist, but you can still download it:

https://github.com/lucaspiller/shortly

I was always quite fascinated by the concept, but I suspect liability and lack of control over the content is a fatal issue and why nothing much seemed to come from it.

If someone makes a 'bad' page, which is inevitable, the domain with the hashify/shortly code would be held responsible and the only way the site owner could 'remove' the content would be to stop the service.


> Storing a document in a URL is nifty, but not terribly practical. Hashify uses the [bit.ly API][4] to shorten URLs from as many as 30,000 characters to just 20 or so. In essence, bit.ly acts as a document store! [1]

bit.ly et al. seem to be able to get away with being agnostic processors. I'm surprised there haven't been more stories about their services being abused.

[1] https://hashify.me/IyBIYXNoaWZ5CgpIYXNoaWZ5IGRvZXMgbm90IHNvb...


This reminds me a lot of the old Geico "wehadababyitsaboy" commercial:

https://youtu.be/9JxhTnWrKYs


Not a blog, but:

https://itty.bitty.site/edit

You can get a QR with your text too :-)


Thinking through this, it seems like content-in-URL would work for a website with a single page or a small number of pages, but would be limited by the fact that links from one content-in-URL page to another content-in-URL page require content from both pages to be encoded in the first page’s URL. If you have pages linking to pages linking to pages, this cascades into requiring content from all pages to be encoded in the home page URL.


I guess I'm a bit late to the party, but after reading your comment I hacked something together - https://x.rukin.me It's ugly and I haven't spent any time styling it or improving the editor (it's just a textarea), but it works as a PoC. Also, I started it before I read the comments saying that hashify exists, otherwise I would probably not have done it ;)

Compared to hashify:

- it uses zlib to compress data, so it can actually contain much more content (if it's repetitive)

- it also supports password encryption of the content (don't know why, but my friend said it would be cool)

- only supports markdown (no HTML) as I haven't found any good JS lib for HTML sanitization on the client side

Examples:

- With password: https://bit.ly/3c7xkyb pass: wasted

- Some markdown: https://j.mp/2AivRYr

P.S. I still have no idea why anyone would use it


This was posted on HN a while ago: https://github.com/jstrieb/urlpages

Not blogging specific, but it allows you to encode web content into the URL.


This has been done, iirc. I think it was HN where I saw it a couple years back. I couldn't find it today if I wanted to, but it's definitely been done.


If you're sending a 2KB URL to somebody, you can also just copy&paste the text of your blog post.

But yeah, you'd miss out on styling it.


Of those, it seems only telegra.ph has a good UI allowing easy link/photo embedding, and the result is really pleasing. All the others need to rely on a third party. Is it open-source?



https://flipso.com (+ Posterous Features)


I like this idea a lot and the design is wonderful. But, I would have liked to see a link to an example blog on the page just to see what the output looks like.


If I may, I would like to add my own Markblog to the list :-) It is basically a static site generator based on markdown files. (https://github.com/olaven/markblog)


I wish I had the courage to share my projects here, but I'm really afraid of being torn apart. :-)

I made a similar minimal project but targeting github.io, apparently most here want self-hosting.


Add the awesome https://telescope.ac to that list.


I appreciate the no-bs lightweight website sentiment as much as anyone, but I think there's also something to be said about drastically improving readability with some line-height and font styling.


I really recommend https://write.as/ for those looking for minimalism with a bit more styling


FWIW I personally much prefer Bear's style over Write.as. Both from an aesthetic and readability point of view. At least for the landing page.

Everything fits on one screen, the font size is much more bearable, there's no unnecessary columnization or links to other parts of the site.


write.as does support custom CSS and JS, so you can, in theory, make it look however you want. It isn't quite as light as Bear is, but it's no Medium either.


Oh interesting! Yeah, I actually hadn't been to their landing page in a long time. I totally agree with you there (and feel like it doesn't do their actual blog product justice - to my tastes at least, their blogs are perfect: low-key but elegant and very readable).


It looks simple, but isn't technically minimalistic, with 77KB of CSS and loaded fonts. The first page load was actually visibly slow, with the fonts repainting in a different typeface.


There's really nothing more minimal than https://telegra.ph

It just gets out of the way.


It has consecutive URLs, which relinquishes your last drop of privacy.

.../140 -> test test test

.../141 -> My personal depression diary entry no.3 ...


Comcast DNS-hijacks that entire site as a "potential threat". They also block https://ix.io/. Interesting.


My text editor with hand-coded HTML begs to differ


Quite a bit of JS there.


Ironically, that was founded by a Mr. Baer.

https://write.as/about


Coming soon, the http://bettermotherfuckingwebsite.com/ equivalent.



It's so hard to take a site with serif fonts seriously. It would be worth the 8th css declaration if the author added `font-family: sans-serif;`.


I find that a curious attitude; I rather appreciate when I come across a site that uses a serif font. Sans-serifs are so terribly overused.


Sans serifs are overused because they look better on low resolution screens. If you haven't had to use a low resolution screen in a while then you are one of the privileged few.


At small sizes, like 8–13px, sans-serifs look better than serifs. And user interfaces and websites used to be that size.

But for 16px and up, which is what websites of today use, serifs are perfectly fine even on 1× displays (though sure, they’ll look better still on higher-DPI displays, but so will sans-serifs).

I should clarify that it depends on the font. Sans-serifs tend to have fairly even stroke width, but serifs tend to have more variable stroke width, and if the thin is too thin, you get a terrible font. That’s a common shortcoming of serif fonts, and Garamond demonstrates that, being quite unsuitable for screen use below 20px, maybe even 24px. Others like Georgia don’t suffer that weakness.


Are you sure that "websites of today" use 16px or larger universally? HN appears to use 9pt Verdana, which I believe is equivalent to 12px on my Windows system if my math is correct.

Georgia is a terribly underrated font. I'm sure it's heavily hinted to look good at small sizes, but even at large sizes it has an elegance that is lacking in e.g. Times Roman.


HN is not a website of today. It’s a website of 2007. Its visual style has not changed at all since then.


And yet I still like the way it looks. That should tell you something.


This sounds like it ought to be a job for pixel-density media queries [1] in CSS. I doubt this happens often if ever though, because designers. Anyone seen this approach in the wild?

[1] https://developer.mozilla.org/en-US/docs/Web/CSS/@media/reso...


I guess I specifically mean the default browser serif font. There are sites that consciously choose a serif typeface to convey some sense of "we are serious content," but you may notice that none of them use the default browser serif font (except as a last-step fallback).

If I see default browser serif, my immediate thought is "either this is an amateur or something is broken."


I don't have data to back it up, but I've understood from typography gurus that sans-serif fonts are great for signs, short blurbs, etc. But for long (multi-paragraph) reads serif fonts reduce eye fatigue. I think blogs typically fit in that category.


This is true on paper. Sans serifs are more readable at small sizes on lower-pixel-density screens.


The research showing that serifs aid in readability is based on printed samples, which are much higher resolution than most screens.


Not everything should be sans serif.


I think I'd second this; adding some basic improvements to the typesetting would help a lot and wouldn't cost anything WRT performance.


I'm thinking of adding a few small ways the user can adjust it. Maybe a dropdown of classless css frameworks to choose from would be good


But then there could be a smaller solution, and this one would not be the minimal one anymore.


Just change the font to something other than Garamond, like Verdana. Garamond is a nice font for paper, but not on websites.


Georgia is a decent serif that will typically be available.

Or just `font-family: serif` which will most commonly be Times New Roman or similar, but will be whatever we prefer for those few of us that set the default fonts.


Georgia is a fantastic serif font; it's actually the recommended default font for when you're formatting an ebook.


It's why I used Georgia as the preferred font for https://shouldiblockads.com/ (mixed with a sans-serif "default UI font" stack).


Or better yet, `font-family: sans-serif;`


Screens have high enough resolution for serifs nowadays, there's no need to strain the eyes with sans serif anymore online.


Yea, I hate seeing this font on blogs. It's so difficult to read.


Noted :)


Yes, styling is a must for me. If I ever were to blog, I'd also require images and latex rendering. But that's about it.


As long as it's compatible with browser "Reader modes" I don't care because that's the first thing I tap when I'm reading a blog post anyway.


I'm definitely going to add the LaTeX classless CSS pack as an option.


There is style, it's just so small that it's embedded in the <style> tags instead of an external asset.


This looks exactly like what I've been searching for all this while. Do you plan to release a self hosted/open source version of this? The one thing that makes me uncomfortable about a blogging service - what happens if they shut down?


After being inspired by Bear Blog I built something similar with this exact concern in mind. Mataroa.blog [1] is a minimal blogging platform with export (to static site generators) as a first-class citizen.

[1] https://mataroa.blog/


너무 힘이들것 같아요 ("I think it would be too hard")


What does this mean? Why is this here?


Lol. It's someone's language? Looks like Korean

"I think it's too hard"

