As nations try ever so hard to dynamite the Declaration of Independence of Cyberspace & exert control wherever their citizens travel online, the difficulty of regulatory compliance keeps skyrocketing. It gets harder and harder to figure out how to meet each nation's requirements, and one nation's requirements for police access might conflict with another nation's requirements for data privacy. There's no international order, and 195 nations and countless provinces each get their say in making life difficult for everyone trying to have a point of presence on the internet.
I'm sympathetic & agree far more than I disagree. But DNS doesn't fill me with joy. It's quite centralized, and that makes it a huge organizational vulnerability.
If we had some alternate addressing schemes in the browser that could carry trust, I'd be much happier. Like, could the Dat protocol be a secure origin? Or, if the goal really is just to secure users, maybe we need to let opportunistic encryption be something users can opt into treating as secure (even though it can be MITM'ed at the start).
Let's Encrypt has changed the game. It's great that https has so very very suddenly gone from frustrating & business class only to something even the casuals can easily do. But still, I'd love some less centralized systems for trust to be available, some visible known alternative paths demonstrating that there are diverse options at these lower transport/security layers of the network stack.
The main reason is that Meta is a colossal data-gathering beast that, for example, flagrantly fucks around bypassing the GDPR. https://news.ycombinator.com/item?id=36583651
Now, personally I think it's trying to swim up the waterfall & ultimately worse for everyone, but: Mastodon specifically has a strong history of being anti-search and anti-scraping. You aren't supposed to be surveilling folks at industrial scale on the fediverse.
There's widespread skepticism about Meta respecting the rules of the road. Having a giant shark join a pool of lots of little fish seems like a scary proposition. How we can still protect & have sovereignty over our different fedi-sites is a real question when there's a company with so much technical, economic, and popular leverage.
Mastodon's culture of anti-everything is naive. All posts (except "DMs") are public and can be scraped and made searchable at will for anyone mildly motivated to do so.
I'm honestly pretty skeptical about the fediverse aspect of Threads. It suggests that if I open a new fediverse instance and follow their accounts, I can suck in their timeline and do with it whatever I want. In particular, to bypass ads.
Hence, I could make a "best of Threads" fediverse instance without ads. Or maybe put my own ads on it.
Or, I could build my own client on top of the Threads instance.
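To be concrete about how little friction there is, here's a rough sketch (the host and account are made up, and it assumes a standard Mastodon-style ActivityPub actor/outbox; whatever Threads actually exposes over federation may differ):

```python
# Rough sketch: pulling a public ActivityPub timeline. Host and account
# below are hypothetical; assumes a Mastodon-style actor/outbox layout.
import requests

ACCEPT = {"Accept": "application/activity+json"}

def fetch_public_posts(actor_url):
    """Return the first page of items from an actor's public outbox."""
    actor = requests.get(actor_url, headers=ACCEPT, timeout=10).json()
    outbox = requests.get(actor["outbox"], headers=ACCEPT, timeout=10).json()
    first = outbox.get("first")
    if isinstance(first, str):  # Mastodon pages its outbox collections
        first = requests.get(first, headers=ACCEPT, timeout=10).json()
    return (first or outbox).get("orderedItems", [])

# Hypothetical usage:
# posts = fetch_public_posts("https://threads.example/users/someaccount")
```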
> Mastodon's culture of anti-everything is naive. All posts (except "DMs") are public and can be scraped and made searchable at will for anyone mildly motivated to do so.
It's the opposite of naive: it's extremely well thought out and heavily deliberated. Making so many things "public" by default is an invite to people. It's an intentional welcome mat, in an old-school "Internet 1.0" sort of way. But just because you want to welcome people doesn't mean you have to welcome robots (crawlers, etc.). Many instances make that refusal explicit, in an equally old-school "Internet 1.0" way, by saying so in their ROBOTS.TXT file (in addition to other places).
In the old web, crawlers were expected to read ROBOTS.TXT, and no matter how "public" the website they found seemed, ROBOTS.TXT was supposed to be the final word.
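For what it's worth, honoring that convention is trivial; Python even ships a robots.txt parser in the standard library. A minimal sketch (the hostname and user-agent are made up):

```python
# What a well-behaved crawler does before fetching anything: check robots.txt.
# Hostname and user-agent are hypothetical.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://example.social/robots.txt")
rp.read()

url = "https://example.social/@someuser"
if rp.can_fetch("ExampleCrawler/1.0", url):
    print("allowed: go ahead and fetch", url)
else:
    print("disallowed: robots.txt is supposed to be the final word")
```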
Anyone scraping or making searchable "at will" random chunks of the Fediverse is easily violating some number of ROBOTS.TXT files. That is an ancient technical convention, neither new nor naive. The internet knew even then that bad actors would ignore ROBOTS.TXT files. The old internet learned to name and shame the bad actors, and in some cases backed that up by force with firewall blocks or even lawsuits. Mastodon does that too. That's why a lot of Mastodon instances are preemptively blocking Threads: they don't trust Meta to follow good behaviors such as checking ROBOTS.TXT. Meta hasn't shown a history of being a good actor there, and Threads' privacy policies seem to imply that they don't care to be a good actor for their own users (to the point of not supporting EU users at all because GDPR is "too hard"). That makes it much harder to assume they will be good actors with respect to all of the conventions around Mastodon data, including the classic ROBOTS.TXT.
The Mastodon culture of "public for people, but not for ROBOTS, or only select ROBOTS" is an ancient internet tradition. It's hard to call that naive when it has decades of history and internet social norms (including good outcomes) behind it. What's naive is thinking that because some major corporations stopped respecting good social norms in the name of increased ad revenue, those norms no longer apply and "anything technically possible is allowable". Read the ROBOTS.TXT in the room and stop being motivated by technology for technology's sake without respecting ethics. Be a good actor in any ecosystem.
You and I agree. I've been on the web since 1996 and the credo you talk about is deeply ingrained in my ethics.
But it's still wishful thinking. We live in an age where AI companies are bold enough to scrape the crap out of even the largest of the other big tech companies without blinking. Without permission, attribution, or compensation. So surely a little Mastodon scrape isn't a problem to them.
There's no need to talk about how unethical it is; we agree. The problem is that it's hard, if not impossible, to stop. That's what I mean by naive.
I don't think it is naive to believe and fight for ethics. I think it takes a lot of courage, especially in a time of disillusionment where you can often feel like the entire industry has lost its mind and put only the most unethical people in charge. I'd rather fight for ethics than say "we can't have nice things because no one is ethical". That takes guts.
I don't think it is exactly "wishful thinking" to believe that the way we get back to promoting ethics in software is expecting people to behave ethically. We sure are doomed to be disappointed when people turn out to fail us, but that's all the more reason to fight for it, to remind people what ethics are and why a polite society needs them. All of those disappointments are teaching opportunities, if people are open to listening.
(Will Meta learn anything at all from all the Mastodon instances that have pre-emptively blocked them on ethics concerns? Who knows? Mastodon can teach, but it can't force the student to learn. Is it worth Mastodon trying and fighting to teach Meta, no matter what happens? I'd say yes. Ethics are as much a social construct as anything else. How we talk about them, how we try to teach them, says a lot about who we are and what our ethics are.)
I'd rather have even the attempt at ethics than despair that "ethics are technically impossible to enforce". We know ethics can't be programmed, that's why we have to enforce them socially.
Far less offensive, but the DC Metro got screen ads & it just cheapens the experience so much. It used to have a kind of impressive semi-brutalist feel, which isn't super fancy or anything. But now it's just tacky.
I'm surprised there haven't been more efforts to hack these things Max Headroom-style and do subversive propaganda...
I can't wait until someone hacks one of these screens and puts deepfakes on them - either politicians doing crazy things, or celebrities doing crazy things, or more subtle ads that seem real but are taking the piss.
They're slowly ruining the vibe. Several stations raised the intensity of the lighting, painted the cement coffers, changed the warning lights from yellow to red, and some have even replaced the iconic floor tiles. The new trains in general are a travesty, too. The old ones had a warm 70s color scheme, warm lighting, and carpet that absorbed some of the track noise. The new ones are steel on the outside, white and blue on the inside, with cool lighting that clashes with the stations and hard surface floors that don't dampen noise at all (to say nothing of the redesigned seating, made uncomfortable to deal with a non-existent homeless rider situation). Oh, also, the first set of them were duds with major issues.
Walking into Metro stations and onto the trains used to be a calming experience, stepping out of the stress of the city or suburbs into a chill and welcoming atmosphere. Now, I can feel my blood pressure jump.
It's been pretty amazing how stagnant the monitor space is. I too am really craving an 8k@120 monitor, although there's a decent chance I'll balk at the price.
It’s crazy how much of a regression there was in resolution and picture quality when we went from CRT to LCD displays. In the late 90’s you could get a CRT that did 2048x1536 no sweat with great color and decent refresh rate. Then suddenly LCD displays became the standard and they looked awful. Low resolutions, terrible colors and bad viewing angles. The only real advantage they had was size. It took a decade or so to get back to decent resolutions and color reproduction.
LCDs didn't replace CRTs because they offered better quality to consumers. They were worse for all the reasons you mentioned and then some. LCDs were cheaper to make, much lighter, and less frail, so they cost less to ship, and they took up much less space in transport, in warehouses, and on store shelves. We were sold an inferior product so that other people could save money. Gradually, some of those savings made it to consumers, especially when it became possible to generate profit continuously through TVs by collecting our data and pushing ads, but it was always a shitty deal for consumers who wanted a quality picture.
I imagine that in the future, people will look back at much of the media from recent decades and think that it looks a lot worse than we remember because it was produced on bad screens or made to look good on all of our crappy screens.
While I appreciate a bit of sarcasm, I'm not sure this is what actually happened. In the CRT era, you either had good monitors, which were expensive, or a bunch of actually crap monitors. I had the former, but most people had the latter, and using those monitors for any extended period would give you headaches and dry eyes because of poor refresh rates and terrible flicker.
As a personal anecdote: when I was choosing components for my first desktop computer (instead of using dad's work laptops), I picked components that were affordable. Coincidentally, a local IT magazine had just run a big test of desktop CRT monitors. So I'd chosen some inexpensive one that wasn't terrible and, like every kid, asked my parents for the money. My mum, who was already working with computers at her job, had a look through that magazine and said she'd pay for the whole computer only on the condition that we buy the best monitor in that test. So we did (a Nokia Trinitron at 100Hz, which was a lot back then), and I think with that move she saved my eyes long term, as I'm in my early 40s and the only healthy thing I still have is my eyes. In any case, what I soon realized once I got that monitor is that I'll never again skimp on stuff I use all day long.
Back to the topic. CRT monitors were also space heaters, and they had a huge footprint that was only fine when permanently placed on a geek's desk.
When LCDs arrived they actually were considerably better than the average CRT. The picture was rock solid without flicker or refresh-rate artifacts, perfectly rectangular (a big problem with the average CRT, as a matter of fact), and very sharp and crisp. All for a little bit more money. After two or three years they were actually even cheaper than CRTs. And I forgot to mention, they took up much less space, so you could place one on a POS counter or wherever. It took much longer to replace the top-end CRTs, but I guess that's always the case with any tech product.
It still hasn't reached a point where you can just choose a high resolution with no drawbacks.
2048x1536 19" (135ppi) at up to 72Hz was common at reasonable prices in the late 90s, if my memory is correct. Although OS scaling sucked and text looked weird due to the shadow mask at that size. 1600x1200 (105ppi) was the sweet spot for me. And actually in my first job in 2004 I had two 20" 1600x1200 (100ppi) LCDs that I recall were reasonably priced, and they were nicer overall. This was around the time LCDs became the default choice. Then "HD" became a thing a couple of years later and you are right, for the next ten years virtually all monitors were "widescreen HD", which was 1280x720 if you fell for the marketing of the lower-priced units or 1920x1080 at best. Anything higher was very expensive.
In 2012 the retina macbooks came out and I got a 13(.3)" with 2560x1600 resolution (227ppi). This was the first time for me that LCDs were finally great. But you couldn't get a resolution like that in an external display. So at that time I mostly just didn't use external monitors until 2016 when suddenly 4K 27" (163ppi) became reasonably priced. So I used 2 of those for years and they were good enough but still left me wanting.
Now still to this day, 4K is the max for external monitors at reasonable prices at any size. About 2 years ago I got an M1 macbook and realized it only supported 1 external monitor. I felt like I needed to keep the real estate I was used to and anyway, with the pandemic and WFH, managing multiple monitors with multiple (work and personal) machines sucked. All I could really find at a reasonable price was 32"/4K and 49" ultrawide. I begrudgingly downgraded to a 49" 5120x1440 monitor (109ppi). I will admit that going from 60Hz to 120Hz was nicer than I expected.
So in 2023 my laptop screen is great and has been great for 10+ years but this was my story about how I am still using the same pixel density as I did 25 years ago.
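(In case anyone wants to check my ppi figures, they're just the pixel count along the diagonal divided by the diagonal size in inches; a quick sketch:)

```python
# ppi = pixels along the diagonal / diagonal size in inches
from math import hypot

def ppi(width_px, height_px, diagonal_in):
    return hypot(width_px, height_px) / diagonal_in

print(round(ppi(2048, 1536, 19)))    # ~135, late-90s 19" CRT
print(round(ppi(2560, 1600, 13.3)))  # ~227, 2012 retina MacBook
print(round(ppi(3840, 2160, 27)))    # ~163, 27" 4K
print(round(ppi(5120, 1440, 49)))    # ~109, 49" ultrawide
```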
You are way too optimistic about the weight. The ViewSonic P225f, with a 20" visible display, reportedly capable of 2560x1920 at 63Hz, weighed 30.5 kg!
I am not sure it was worth it with that dot pitch of 0.25mm, though (at 0.25mm, a 20" 4:3 tube only has roughly 1,600 phosphor triads across its ~16" width, well short of 2560 horizontal pixels).
I'm struggling to remember how much they cost, but with a $2000 price tag for a top-of-the-line machine, low-end monitors tended (well, AFAIR, don't take my word for it) to be less than $150, and high-end ones were like $700 if you ignore the ultra-uber-special cases.
And Moore's Law. LCDs are semiconductors so their price goes down by a factor of 2 every 18 months.
However, even size would be enough. CRTs were ridiculously heavy. My GDM-FW900 was almost 100 pounds. And I used two side by side. I had to shop specifically for a desk that wouldn't collapse when I put them on it.
I agree, but we got LG's 16:18 DualUp monitors a year ago. Having a 43'' monitor in the middle and these two on the sides creates a better setup than what was previously possible.
That's basically what I do. A 24" 4K in the middle and two 17" Eizos beside it. They're 1280x1024 though, so I have 200% scaling in the middle and 100% at the sides. This causes some OS issues in FreeBSD (I mitigate with xrandr) and in Windows, which is still screwy to this day. On Mac it works perfectly, but I don't use Mac much anymore.
Intel is such a hero of this world. They make so many amazing things happen. And they do the job well.
There are obvious, huge ecosystem examples: there was a whole industry around ultra-proprietary SSDs, and Intel rolled up their sleeves & developed & standardized NVMe to commoditize the technology, and then dropped absolutely killer products at a much more competitive rate.
But Intel's work pushing the Linux kernel forward is just amazing. They have so many teams doing so much, optimizing, & adding support. And so many of their products have great & readily available documentation, which is something very few other companies have.
Excited to see another very cross platform terminal!
I've been using Alacritty, but copying to the clipboard has been a nightmare time and time again, in all kinds of circumstances, and I switched away in despair just this weekend, but only after two hours of debugging.
I first went to cmder (Windows only), but after an hour trying to figure out why AstroNvim rendering was fucked up & getting nowhere, I gave up & tried Windows Terminal, which has been acceptable. Would love to be back on something more cross-platform though.
Apple has been smart to present it as a desktop replacement (as opposed to some AR/VR/metaverse device), which is something they can deliver that will let users bring their existing apps to the new medium without adaptation.
Ideally, yes, apps should update themselves and tailor the experience. But the main focus so far has been pretty conventional app-like experiences, which happen to be hovering in space. Whereas most headsets have tried to create entirely new ecosystems from nothing, and that would have been a huge mountain to climb.
I don't think presenting it as a desktop replacement was "smart" so much as "the only possible way to justify the asking price", but personally I don't think it's going to work out that way. Having tried many VR headsets, I can't think of any I'd want to wear all day.
They can sell 100k of them and crow about being sold out, but I don't think even Apple has figured out who this is for - that's what I got from their presentation on it. The only answer they have so far is "people who will spend $3500 on a VR headset", which isn't a use case.
For the most part we got marketing-level "watch people emote joy as they do things with this device", and all the use cases sucked.
>Apple has been smart to present it as a desktop replacement
It means that now you are competing against desktops, which have been iterated on for decades and already deliver a lot of value. Instead of it standing out by having apps that are only possible in VR, people will weigh whether they would rather use the app outside of VR.
Imho this is the foible of many technical people: they want to advertise unique use cases.
But the problem is:
1. The people who aren’t already engrossed in the field don’t have a good view of how to bridge between their current world view and the new one.
2. The people who are already in the space don’t need to be sold on unique cases.
Very few post-Jobs-return Apple products showcase dramatic new use cases, even if the product then goes on to enable them, and even if Apple themselves have clearly thought of them.
Their marketing is: this is how you take what you’re already doing into this space. Unique VR experiences only matter to a fringe set of users. The everyday mundane stuff is what matters to the rest.
Take the ability to run iPad apps on it natively. VR enthusiasts will scoff at it. The real trick though is that it means you aren’t having to switch devices to do a mundane task, which means more time on each device. That’s what appeals to the bigger market, and has been proven time and time again, because it’s not making them do contortions to use it.
Another issue is thinking that the demographic for sales has to be the demographic for ads.
People will reply and say: well, the price isn’t for the layperson. To which I’d say, who cares? They’re not the early adopters, but they’re still the demographic that the people buying this will be developing apps and content for.
>Unique VR experiences only matter to a fringe set of users
I disagree with this. Why would someone buy a headset instead of using an iPad or desktop? If there is nothing unique to VR, why should people put a heavy thing on their head for roughly the same experience?
>Take the ability to run iPad apps on it natively. VR enthusiasts will scoff at it
No, where have you seen this? Everyone likes the ability to run these apps, but my point is that these apps are not a draw. Most people find it more convenient to use these apps on their phones or tablets.
>To which I’d say, who cares?
Developers care. Most big developers don't care about devices unless they have a large number of users. Their time is better spent on devices/platforms which have hundreds of millions of users.
> Why would someone buy a headset instead of use an ipad or desktop? If there is nothing unique to VR why should people put a heavy thing on their head for about the same experience?
Because the experience can still be enhanced by the form factor and be compelling. Watching a movie isn't unique, but watching it on a 100ft display while trapped in a plane is compelling. The entire pitch is progressive experiences, and has been for every product they've shown since the iPhone. Take what you're used to doing and make an experience that progressively scales to the form factor.
> No, where have you seen this?
Countless posts here on HN and in the virtual reality community (like /r/virtualreality) bemoan the device as a glorified iPad and hate the number of 2D windows shown.
> Most big developers don't care about devices unless they have a large amount of users
Part of that is that visionOS allows for progressive experiences, which is not something other HMDs allow for. Developers aren't building an app just for visionOS; they're adding to their existing iOS codebases. Their investment therefore isn't in a niche new platform, but in the entire ecosystem.
There's a huge first-mover advantage in software on these platforms, as the iPhone showed when people were going ga-ga over fart apps and beer-drinking apps. Those developers made bank because they delivered fun knick-knacks before the market got saturated.
Even looking at other HMDs, it's often the big players and the indie players that move first. The middle of the spectrum are the ones who move last when the ecosystem is there. The Oculus Quest launched with a Star Wars game available.
To me it comes down to which of these Apple can more reliably deliver in a form that will see regular use. I have a hard time knowing what unique VR experience would keep people coming back day after day. But we know for a fact people use screens for desktop-like concerns for many hours a day. And we have lots of experience building those kinds of experiences.
FoundationDB is not easy to set up. And it ships with nearly nothing out of the box; you have to build layers atop its foundation to do nearly anything. But the polish is through the roof, and it's one of the least likely systems on the planet to turn into a live hand grenade.
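To give a feel for what "building a layer" means, here's a rough sketch of about the smallest one imaginable, a counter, using the official Python bindings and patterned after the FDB tutorials (the subspace name and API version are placeholders, not anything FoundationDB ships):

```python
# FoundationDB hands you ordered byte keys/values plus serializable
# transactions; everything else, even a counter, you encode yourself.
import fdb

fdb.api_version(710)   # placeholder; match the API version your cluster runs
db = fdb.open()        # uses the default cluster file

counters = fdb.Subspace(('counters',))   # made-up key namespace

@fdb.transactional
def increment(tr, name, delta=1):
    # Simple read-modify-write; a production layer would likely use
    # atomic mutations instead to avoid transaction conflicts.
    current = tr[counters.pack((name,))]
    value = fdb.tuple.unpack(current)[0] if current.present() else 0
    tr[counters.pack((name,))] = fdb.tuple.pack((value + delta,))

@fdb.transactional
def read_counter(tr, name):
    current = tr[counters.pack((name,))]
    return fdb.tuple.unpack(current)[0] if current.present() else 0

increment(db, 'page_views')
print(read_counter(db, 'page_views'))
```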
Etcd is an excellent piece of technology that is actually, relatively speaking, quite easy to set up, is quite polished, and all in all has few drawbacks. It's great tech. But software alone isn't going to change the fact that you're operating a distributed system.
The biggest problem, in my view, is that there are so few opportunities to get any real experience with most of these systems. You kind of need to be running thousands of decent-sized instances all at once to begin to appreciate the weirder sides of what could happen & what you need to do in response. For most people, many of these distributed db systems operate just fine. Until one day they don't, and then they are totally hosed & either suffering extended outages, rollbacks, or worse. Simple things like node rotations usually go smoothly, but don't generate the same kind of hard-fought experience. Your ask about starting out feels like it's asking for the safest, most secure route, but only battle-hardened rainy-day ordeals are ever going to actually get you to a place of comfort.
Hence... maybe just use postgres instead.