Also if you really want to be precise you should consider whether you're using binary prefixes vs SI prefixes, e.g. kB (10^3 bytes) vs KiB (2^10 bytes). That doesn't matter as much because the error is small at these lower values, but the casing errors definitely do matter. "mb" means millibars to me, not megabytes!
kB = 1000^1 bytes; KiB (kibibyte) = 1024^1 bytes
MB = 1000^2 bytes; MiB (mebibyte) = 1024^2 bytes
GB = 1000^3 bytes; GiB (gibibyte) = 1024^3 bytes
TB = 1000^4 bytes; TiB (tebibyte) = 1024^4 bytes
PB = 1000^5 bytes; PiB (pebibyte) = 1024^5 bytes
EB = 1000^6 bytes; EiB (exbibyte) = 1024^6 bytes
ZB = 1000^7 bytes; ZiB (zebibyte) = 1024^7 bytes
YB = 1000^8 bytes; YiB (yobibyte) = 1024^8 bytes
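For illustration, a quick Python sketch (the helper function is my own, not any standard library) showing how the same byte count reads under each convention, and how the gap has grown by the TB range:

    # Print the same byte count under SI (base-1000) and IEC (base-1024)
    # prefixes, to show how far apart the two conventions drift.
    SI = ["B", "kB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"]
    IEC = ["B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB", "ZiB", "YiB"]

    def human(n, binary=False):
        base, units = (1024, IEC) if binary else (1000, SI)
        value = float(n)
        for unit in units:
            if abs(value) < base or unit == units[-1]:
                return f"{value:.2f} {unit}"
            value /= base

    n = 4 * 1000**4               # a "4 TB" disk, as the manufacturer sells it
    print(human(n))               # 4.00 TB
    print(human(n, binary=True))  # 3.64 TiB -- what your OS may report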
It doesn't help that it just sounds silly to my ears.
If I remember correctly, a big motivation for this change was the fact that disk manufacturers intentionally used base-10 definitions so they could advertise larger numbers for disk capacity. But presumably they still do that, and presumably people still often don't notice.
The difference does matter, though, and matters a lot when you're working with storage at any scale. So people tend to use the right labels just to avoid ambiguity.
Kilo means 10^3 though. I use a lot of SI units every day and that's what it always means for every one of them, just like mega is always 10^6. These prefixes shouldn't have different meanings depending on the unit being used; that breaks SI. The metric prefixes were first adopted in 1795, before computing even existed as a concept, let alone computers existing as actual objects. The overloading of already very-well-established prefixes to mean something different was always a mistake, and can probably be blamed partly on the US's failure to adopt the metric system.
For example, I recently bought two 32 GiB DIMMs for my computer. I guess you could call them 34.3597 GB DIMMs, but that's strictly worse! Knowing that they're exactly 32 GiB (2^5 × 2^30 = 2^35 bytes) makes it obvious that it takes 35-bit pointers to address every byte in one of those DIMMs (so they obviously require a 64-bit architecture to take full advantage of!), or 36-bit pointers to address memory locations across both of them.
[edit: add qualifying "many"]
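A quick sanity check of the DIMM arithmetic above, in plain Python:

    # 32 GiB = 2**5 * 2**30 = 2**35 bytes, so byte-addressing one DIMM
    # takes 35-bit pointers, and spanning both DIMMs takes 36 bits.
    one_dimm = 32 * 2**30
    print(one_dimm.bit_length() - 1)        # 35
    print((2 * one_dimm).bit_length() - 1)  # 36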
There's an insane number of abbreviations and acronyms that have multiple meanings in different contexts. How many Wikipedia pages have [disambiguation] in their titles?...
> The unit's official symbol is bar; the earlier symbol b is now deprecated and conflicts with the use of b denoting the unit barn, but it is still encountered, especially as mb (rather than the proper mbar) to denote the millibar.
Collisions seem to be most prevalent in "IT" units or in improper SI usage. But maybe the latter is really the reason for the former in this case: if people separated SI and other units properly for IT units, it seems to me it would also be perfectly fine (but like me, a lot of people seem to have no idea of the correct definitions).
That's actually pretty much everywhere, since metric and SI are two different things. The litre, for example, isn't an SI unit at all; it's merely accepted for use alongside SI.
And no one says "mbit".
My network card is 1gbit/s, my disk space is 4TB, my internet speed is 100mbitps and DVD capacity is 4gb.
This is how everyone uses it and it's much clearer and better than B/b.
I actually tried to use Bear as an MD-based publisher to my WP blog, and it just ran aground badly.
I wonder if trademark would kick in later for this.
Apple, Fox, Shell, Target...any of these generic words ring a bell?
I’m sure there was no malicious intent, and it’s very possible that these two products will coexist without any further issue.
But as evidenced in this very thread there is the potential for confusion, which is the whole point of trademarks, and the reason why I’d have an easier time incorporating “Apple Surfboards” than I would “Apple Keyboards”.
If you set your iPhone to the Japanese keyboard, it comes with hundreds of these built in.
Maybe they thought the same way as you do :)
No affiliation, just a happy customer. :-)
I have even stronger feelings about Roam though. Notion is "better" than just about anything else like it, whereas Roam is different -- there is nothing else like it.
A blogverse of some kind that allows for algorithmic discoverability & aggregation (a la Medium) without the bullshit/terrible UX.
The real value proposition of Medium is that a well-made aggregator benefits readers and writers alike. Readers find more authors they like, writers find more audience. There are also network effects with shared comment logins, inter-blog citations, etc.
I really think a blogging renaissance is waiting to happen. These ingredients plus a business model not reliant on ads, massive js overhead, and other nonsense could jumpstart it.
I personally find Medium to be a horrible way to find content. Maybe it works for new content, if that's what you're after.
Now, decades later, there seems to be a shared yearning for the curated web, perhaps in response to the low signal:noise ratio of search. Isn't it funny, how the world works in cycles?
Curated search (domains chosen by a set of humans with no financial conflicts of interest, with some grokkable categorization and full-text search) might be the nirvana we're searching for. I think the GP has a point, that the need for a sustaining business model tends to strongly conflict with this equilibrium.
Wikipedia has sort of evolved to partially fill this niche, but it periodically struggles with funding. I agree there's nothing similar filling this niche for blogs yet: maybe GitHub will evolve there, but it will face the same pressure as other platforms owned by public for-profit companies.
I eventually stopped using it because it didn't keep up. These days, search is for many purposes completely useless. If I want to find someone to do work on my house, the last place I'll go is Google. It's truly amazing just how worthless the results are. You'll get results from Michigan and Florida and Oregon all for the same search, in the same town, and claiming to be a local business. I imagine it's a fraud-ridden garbage dump if you actually try to use Google to find businesses to do work for you.
On the topic of blogs - not completely useless, but overrun with shallow, uninformative trash posts by SEO experts. I think Google is more vulnerable now than at any time in the past 20 years.
It's worse if you are non-US. Google seems to think that anywhere in the UK is local to me for businesses, and that's AFTER I've added loads of filters to stop American results from showing.
The problem is that the second it becomes an authoritative source, every spammer and marketer will start trying to game it, just like they game SEO now. Corruption is a huge problem with that approach. I'm old enough to remember how much influence Yahoo category editors had, and that many asked for money to include you in the list, or would use the position to simply block all competition to their own sites. The same story is happening with Wikipedia, except it's more about personal wars and agendas than straight-up racketeering.
I feel like the key to good curation might be good moderation. In other words, allow people to submit links to be on the list, but there must be humans to control for SEO and marketing spam (like there is on HN). Making those humans incorruptible is hard but maybe not impossible, if they’re well compensated and hired thoughtfully.
That being said, I’m doubtful that such a strategy of moderation would scale to hundreds of thousands (or tens of millions, shudder) of active voters and submitters. At some point, it would need to federate, with different mods owning different lists, and it would be difficult to avoid a devolution into Reddit.
Moving away from ads towards human-centric curation is a primary design feature of the decentralized web.
Just found: https://www.kleptones.com/blog/2012/06/28/hectic-city-15-pat...
There is also some discussion about this kind of thing here: https://forum.indieseek.xyz Good to meet you, Alessio.
Maybe an idea would be something like what you have, but using some sort of standard that could be pulled down, similar to RSS feeds?
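OPML might already be that standard - it's the format feed readers use to exchange subscription lists, so a directory could publish one file and anyone could pull it down. A rough Python sketch (the URL is made up for illustration; OPML itself is the real, existing format):

    # Pull down a directory published as an OPML file and list its entries.
    import urllib.request
    import xml.etree.ElementTree as ET

    url = "https://example.com/directory.opml"  # hypothetical directory file
    with urllib.request.urlopen(url) as resp:
        tree = ET.parse(resp)

    for outline in tree.iter("outline"):
        title = outline.get("title") or outline.get("text")
        feed = outline.get("xmlUrl")
        if feed:
            print(f"{title}: {feed}")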
You discover a directory like mine just as you would discover any other link, johntash - by coming across it as you read, perhaps on Hacker News. If there were more directories, they would be easier to discover. They happen to be richer discovery points than a normal blog or profile page.
I get that that would miss some of the benefits you mention (shared comment logins for example) but I'm wondering if people think it would capture 80% of the same benefits or, like, <50%. And I don't think for the discoverability to work there's any innate reason it has to be restricted to a single blogging platform.
For my part, I want more long-form, thoughtful articles that offer an enriching read. Not only have we got a good decade or two's worth of experience showing that no algorithmic discovery system ever favors that kind of thing over content that's morally equivalent to Five Minute Crafts, but I've got a couple decades' worth of experience telling me that, since long-form bloggers tend to link to each other quite a lot, I've never needed an algorithm to help me with discovery in the first place.
If I want anything, it's algorithmic filtering: Take the feeds I'm already subscribed to, and filter out the stuff that I tend to skip over without reading. Because my blog feed already delivers me a couple hours' worth of reading a day, so I can afford to be choosy.
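To make "filter out the stuff I skip" concrete, a toy sketch (feedparser is a real library, but the feed URL and skip terms are placeholders, and a real version would learn the skip list from my behavior rather than hard-code it):

    # Keep my existing subscriptions, drop entries matching terms I
    # historically skip over. Feeds and terms below are placeholders.
    import feedparser  # pip install feedparser

    FEEDS = ["https://example.com/blog/atom.xml"]
    SKIP_TERMS = ["listicle", "top 10", "growth hacking"]

    def keep(entry):
        text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
        return not any(term in text for term in SKIP_TERMS)

    for url in FEEDS:
        for entry in feedparser.parse(url).entries:
            if keep(entry):
                print(entry.get("title"), "->", entry.get("link"))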
For myself, I usually read in two "phases". I collect material from sources that are usually high quality for me, and do some superficial skimming to trim out the content I don't particularly care for.
Later, when I'm more "in the mood" or have a long period of time, I pop() and read(). The stack gets pretty large at times, but it only takes a couple days of vacation to blow through most of it. If I pass over something in the stack enough times, it gets free()'d.
I don't know if I'd trust an algorithm to do either of these stages for me. I'd trust an algorithm to provide input to phase (1), but never to replace phase (1).
I think that Newsblur does this.
I appreciate people wanting to own their own writing, having permanence of data assured, etc. so this is probably worth thinking about. At the same time, many outside the tech world don't want the overhead of hosting or setting up their own blog.
I wonder if a hybrid would be best? Host yourself option or host-by-us option, with a pricing structure that accords.
The point is really to maximize one's ability to traverse between writers on subjects that interest you, so the physical location of the data is secondary.
I'm interested in this problem space, because current social media tools leave me somewhat dissatisfied. But I'm also skeptical about how you'd build it successfully.
I do think it's important to have an element of discoverability for new content (otherwise there's no real way for a writer to bootstrap into visibility), but I think an important element is being able to follow and trust content aggregators. Maybe algorithmic curation will be trustable in a decade, but right now it feels too gameable, and too easy to degenerate into thinkbait.
Is anyone anywhere close to this?
A partner and I are building this; here are a few points:
- It's a community to read and write about building things with technology
- Clean, fast and light UX + Markdown editor (https://able.bio/new)
- Bootstrapped with low overheads. No outside investment removes the pressure to grow at all costs, and the lapses in integrity that we see more of each day.
- No data lock-in. Export your posts in a single JSON file (containing Markdown + HTML versions) accompanied by all images.
- We're finishing up a big set of data portability / data respect features, which we plan to announce soon.
The aim is to build a community of capable people with a genuine interest in technology and attach a job board to the site. Companies can then pay to display their vacancies on the job board and users can take a look whenever they like. No popups, banners or any of that dodgy/spammy crap getting in between users and the reading/writing. In this age we see integrity towards our users as a differentiator.
Building it is fairly straightforward and fun. Some learnings we've gained in terms of 'jumpstarting' it:
- a lot of people are vehemently sceptical after the Medium debacle.
- creating a feed that prioritises good content without "censoring" dev spam or trivial posts is tricky with a smaller user base. For example, upvotes can have outsized effects.
- getting a regular volume of good content so that people use Able as a source of news/inspiration/learning.
We've had some great posts from people but we need just that little bit more to get the flywheel going. As soon as someone posts something, activity on the site goes up but then dies down again. It's the classic building vs. promoting trade-off. However, we've chosen to get data portability and respect right first, as we believe this is fundamental. We're wrapping that up now and then have loads of learnings and ideas we want to write about.
We feel that same potential 'renaissance' you're talking about and this is how we want to try and activate it. If you feel inclined, have a look and let us know what you think.
It's not ready yet, but I would love to hear more about what features people want. Email is in the profile.
I will have to look more into it though.
* Generate lightweight static website
* Good clean default CSS so I don't have to mess with it
* Automatically upload website to CDN and trigger expirations as necessary.
* Runs on AWS Lambda or any other Function-as-a-Service equivalent
* Has a super lightweight CMS that I can easily use on both desktop and mobile, so if I have ideas I can start writing anywhere, and can also make minor corrections to existing posts while on the go.
* The CMS can be a frontend to git, but git is hard to use on mobile, so I don't want the CMS to just be git.
If anyone knows of something that meets these requirements I'd be super grateful!
Not much to show yet but you can follow us on Twitter for updates in a couple of months: https://twitter.com/plumacloud
We're also considering providing our CMS as headless via an API so that you can connect it to your SSG and make your own template, host it wherever you want, etc. We haven't decided on pricing for that yet, but it would be much cheaper than our main product.
It would also be the code behind the web page where I write the words.
You'd load a static editor page, write your content, and then post it to a lambda endpoint. That lambda would then generate static pages for your content and push it to s3 or wherever.
Assuming it was wrapped up in a nice UI, would that be missing any features you need?
EDIT: Put more concretely, if you had a site hosted on S3, all you need is a way to modify the source files of that site, either directly or by modifying files in another directory and running a transform step to produce the final result?
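A rough sketch of what that lambda might look like, under some assumptions (boto3 plus the `markdown` package; the bucket name and request payload shape are mine, not any real product's):

    # Take posted markdown, render it to HTML, and push it to the S3
    # bucket behind the site. Bucket and payload shape are assumptions.
    import json

    import boto3
    import markdown  # pip install markdown

    s3 = boto3.client("s3")
    BUCKET = "my-blog-site"  # hypothetical bucket behind the CDN

    def handler(event, context):
        body = json.loads(event["body"])
        slug, source = body["slug"], body["markdown"]
        html = markdown.markdown(source)
        s3.put_object(
            Bucket=BUCKET,
            Key=f"{slug}/index.html",
            Body=html.encode("utf-8"),
            ContentType="text/html; charset=utf-8",
        )
        return {"statusCode": 200, "body": json.dumps({"published": slug})}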
This sucks compared to WordPress because the UX in WordPress is much better than GitHub's, and especially because there is no "preview".
There's also no easy way to upload images etc. Certainly not as easy as WordPress.
I'm not going back. I'm just saying "modify the remote file system and trigger a build" is far from what I want.
It needs to have some sort of awareness of file changes to expire items from the CDN too.
People think a static site is enough without realizing SEO, RSS feeds, comments, etc, are all things you might need and would have to rebuild yourself.
I don’t want to pay $5/mo for a VPS when I can run my whole site on AWS for 32 cents a month.
With self hosting I can control the code, what it does, the cost, where it stores data, how backups are made, when new versions and updates are deployed, etc.
It’s always a trade-off on control, and my comfort point is “on AWS”.
Some people want their own servers. Some people want to own the network block. It’s just a matter of what level of comfort you have with each type of control.
I don’t know what CMS I’m going to use, that was my question. Which one operates the way I want?
It’ll run on AWS lambda or an equivalent.
It's a single-file app. You can host it yourself, and it edits static files in a github/gitlab repo. The deployment after push and hosting with custom domains and CDN is already handled by Netlify. And it's all free.
If you use Jekyll then you can skip Netlify and GitHub will build and serve your site.
* load a web page
* write words
* hit submit
* this triggers a lambda function that generates a static website and puts it on s3
* this triggers another lambda function that pokes the CDN and expires anything that changed
By self-hosted I mean I control all the parts. I can change the generation code, the CDN upload/expiration code, I can change the output, I can view the logs, all within my own account.
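The CDN-poking lambda can be similarly small; a sketch using CloudFront via boto3 (the distribution ID is a placeholder, and invalidating "/*" is the blunt version of "anything that changed"):

    # Invalidate changed paths on CloudFront after a build. A real build
    # step would pass the actual changed paths instead of "/*".
    import time

    import boto3

    cloudfront = boto3.client("cloudfront")
    DISTRIBUTION_ID = "E2EXAMPLE123"  # placeholder

    def handler(event, context):
        paths = event.get("changed_paths") or ["/*"]
        cloudfront.create_invalidation(
            DistributionId=DISTRIBUTION_ID,
            InvalidationBatch={
                "Paths": {"Quantity": len(paths), "Items": paths},
                "CallerReference": str(time.time()),  # unique per request
            },
        )
        return {"invalidated": paths}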
* Hit save
* Let Hugo/Jekyll/Pelican/whatever compile
* FTP/SCP onto a server
* It's a blog, nobody needs a CDN for that
Where? On my iPhone?
Where does it save it to?
Where does that happen? If I want to write on my phone it won't compile on my phone.
From my phone? Also I specifically don't want to run a server, or more specifically, I don't want to pay for a server to run all the time.
If you're writing about high scalability, it's a bit embarrassing and ruins your credibility if your site isn't speedy worldwide.
Ok that part's easy.
> commit to git
How? I've never seen a good git client for the phone
> have some CI build it and deploy.
Step one, draw a circle. Step two, draw the rest of the owl. :)
While this might be a fun exercise, it's more involved than deploying some code to Lambda.
That coupled with Working Copy could be a pretty good mobile workflow.
For me, this was much easier than deploying Lambda code and managing how all of that worked together. Using GitHub Actions wasn't much more than setting up a deploy script.
Managing posts also sucks. I can pre-date posts, but I can't at a glance get a list of posts with data about each one. Instead I just get a file listing and have to open each file and read its headers to see its metadata (date, title). Published vs. unpublished also sucks. I have folders for my posts and another folder for drafts, but moving something from one to the other is no fun compared to just checking a box in WordPress.
I could very well be wrong though.
Also, to your specific example, does there exist a Flask app that runs on AWS lambda that can listen for a webhook and then build a static website?
Does there exist code that I can put into Github actions that builds a static website and uploads it to a CDN?
That's basically what I'm looking for.
Of course I could write my own solution or cobble it together with a bunch of moving parts, but my whole point is that I'm looking for an existing package that already does it.
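I don't know of a packaged one either, but the webhook half is genuinely small; a bare-bones sketch (the build command and source path are made up, hook-secret verification is omitted, and running Flask on Lambda would still need a WSGI adapter - which is exactly the cobbling-together you're trying to avoid):

    # Listen for a push webhook and rebuild the site with a static-site
    # generator. Build command and paths below are placeholders.
    import subprocess

    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/webhook", methods=["POST"])
    def webhook():
        payload = request.get_json(silent=True) or {}
        if payload.get("ref", "").endswith("/main"):  # only rebuild on main
            subprocess.run(["hugo", "--source", "/srv/site"], check=True)
        return {"ok": True}

    if __name__ == "__main__":
        app.run(port=8000)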
PHP, for example, used to be a "big patchwork" of separate files working together, but the right tooling came along to let you think of it as "a unified system". Same thing with serverless and the Serverless Framework.
Shouldn't be too difficult to hook this up to a build command for hugo, jekyll, or whichever static site generator you'd prefer. But this gets away from the self-hosted part of what you were saying a bit.
I can't remember the name, but it's commercial and targets major companies for making their landing pages.
> With self hosting I can control the code, what it does, the cost, where it stores data, how backups are made, etc.
What if your content doesn't have enough views / you get bored / life changes so you can't afford server costs anymore / you die? Your server will expire eventually, and there goes your content. web.archive.org might have some sites archived, but many blogs won't have been archived, so their content is just gone forever.
I've self-hosted many platforms, and many have died, perhaps due to running costs or lack of need anymore. It's partially why I'd never self-host my email, for example.
You can't know the future, and if your content is gone, it's gone for all of your audience (or potential future audience). Perhaps an ideal solution is some kind of self-hosted platform that mirrors content into a forever-public external repository, and hence preserves it, even if your hosting ceases to exist.
I also have a bunch of old blogs that I've just given up on at some point and now they're gone forever.
Your last point is kind of what archive.org's Wayback Machine does. Haven't checked whether they have an API where you can submit URLs you publish* or if you'd have to submit new posts manually, but this could be a decent solution. It wouldn't cover non-tech people though.
*) I just now quickly browsed through their API doc but couldn't find anything clear, other than uploading random files
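For what it's worth, the Wayback Machine does accept save requests at web.archive.org/save/<url>; a quick unauthenticated sketch (rate limits and the exact response behaviour are worth verifying against their docs):

    # Ask the Wayback Machine to archive a URL via its "Save Page Now"
    # endpoint. No authentication here; check their docs for rate limits.
    import urllib.request

    def archive(url):
        req = urllib.request.Request(
            "https://web.archive.org/save/" + url,
            headers={"User-Agent": "blog-archiver-sketch/0.1"},
        )
        with urllib.request.urlopen(req) as resp:
            return resp.geturl()  # typically redirects to the archived copy

    print(archive("https://example.com/my-new-post"))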
I think you should improve the default stylesheet a bit, I like http://matejlatin.github.io/Gutenberg/.
Compiled all the predefined textual emoticons offered by the Gboard app on Android and let the Narrate speech synthesis in Reader view (F9) read them aloud (presumably it uses the same synthesis as the example) - quite predictably, most of them weren't heard at all, but the simple ones that were surprised me.
So most probably there is no problem with that particular Unicode bear in screen readers after all.
I have the same concerns with user generated content in/on .de and their Netzwerkdurchsetzungsgesetz law, the mandatory posting of an Impressum on your sites, etc.
I have seen bit.ly suspended by Libya before, merely for providing redirects to content they don't like.
by what definition of "rape"?
Disclaimer: I'm not affiliated with Bear in any way, just a happy customer.
It was during an iOS beta on one of the devices, so perhaps not their fault—but I’ve been unwilling to pay since.
Bear uses iCloud storage which has on multiple occasions trashed people's data during iOS beta periods. It's not "Bear Sync", it's just another iCloud beta sync problem. If you value your data, don't use iOS betas. It's that simple.
If they had the choice, many developers would disable their apps on beta devices because of issues like yours.
(hashify) 2011: https://news.ycombinator.com/item?id=2464213 https://news.ycombinator.com/item?id=3407197
(shortly) 2012: https://news.ycombinator.com/item?id=3834643 https://news.ycombinator.com/item?id=5696127
I was always quite fascinated by the concept, but I suspect liability and lack of control over the content is a fatal issue and why nothing much seemed to come from it.
If someone makes a 'bad' page, which is inevitable, the domain with the hashify/shortly code would be held responsible and the only way the site owner could 'remove' the content would be to stop the service.
bit.ly et al. seem to be able to get away with being agnostic processors. I'm surprised there haven't been more stories about their services being abused.
You can get a QR with your text too :-)
Compared to hashify:
- it uses zlib to compress data, so it can actually contain much more content (if it's repetitive)
- it also supports password encryption of the content (don't know why, but my friend said it would be cool)
- it only supports Markdown (no HTML), as I haven't found any good JS lib for HTML sanitization on the client side
- With password: https://bit.ly/3c7xkyb pass: wasted
- Some markdown: https://j.mp/2AivRYr
P.S. I still have no idea why anyone would use it
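For anyone curious, the core trick is roughly this (a sketch, not the site's actual code - the password part is omitted and the exact encoding alphabet is a guess):

    # Deflate the markdown, base64-encode it URL-safely, and carry it in
    # the URL fragment so the content never needs a server at all.
    import base64
    import zlib

    def encode(markdown_text):
        packed = zlib.compress(markdown_text.encode("utf-8"), 9)
        return base64.urlsafe_b64encode(packed).decode("ascii")

    def decode(fragment):
        packed = base64.urlsafe_b64decode(fragment.encode("ascii"))
        return zlib.decompress(packed).decode("utf-8")

    doc = "# hello\n\n" + "test test test\n" * 20  # repetitive text packs well
    frag = encode(doc)
    print(f"https://example.com/#{frag}")  # hypothetical viewer URL
    assert decode(frag) == doc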
Not blogging specific, but it allows you to encode web content into the URL.
But yeah, you'd miss out on styling it.
I made a similar minimal project but targeting github.io, apparently most here want self-hosting.
Everything fits on one screen, the font size is much more bearable, there's no unnecessary columnization or links to other parts of the site.
It just gets out of the way.
.../140 -> test test test
.../141 -> My personal depression diary entry no.3 ...
But for 16px and up, which is what websites of today use, serifs are perfectly fine even on 1× displays (though sure, they’ll look better still on higher-DPI displays, but so will sans-serifs).
I should clarify that it depends on the font. Sans-serifs tend to have fairly even stroke width, but serifs tend to have more variable stroke width, and if the thin is too thin, you get a terrible font. That’s a common shortcoming of serif fonts, and Garamond demonstrates that, being quite unsuitable for screen use below 20px, maybe even 24px. Others like Georgia don’t suffer that weakness.
Georgia is a terribly underrated font. I'm sure it's heavily hinted to look good at small sizes, but even at large sizes it has an elegance that is lacking in e.g. Times Roman.
If I see default browser serif, my immediate thought is "either this is an amateur or something is broken."
Or just `font-family: serif`, which will most commonly be Times New Roman or similar, but will be whatever those few of us who set our default fonts prefer.
"I think it's too hard"