jwr's Hacker News comments

Huh? I've installed my own charger in our apartment complex's underground garage. Many people do that; permitting takes a while and could certainly be made easier, but it's not impossible, and recent laws make it impossible to refuse the installation of a charger. (EU, PL)

I'm PL-based. The law made it impossible to refuse, but fire regulations render chargers unusable unless the required facilities are in place. https://moto.infor.pl/prawo-na-drodze/ciekawostki/6604742,el...

Of course it is. Fossil-fuel campaigning is very sneaky and uses great talking points. There is always a grain of truth in every such talking point, so they are very difficult to debunk and discuss.

Yes. There are many myths spread around by the fossil lobby and, as always, there is a grain of truth in every myth, which is why they are so hard to debunk.

See for example https://www.carbonbrief.org/factcheck-21-misleading-myths-ab...


I think a major part of the problem is Apple's attitude towards bug reports: they basically DO NOT WANT TO HEAR FROM YOU. That means rare bugs go unnoticed and get swept under the rug.

I know that it's difficult to triage and process bug reports at scale, but I guess that's where some of those hundreds of billions of dollars could be put to good use.


This rings so completely true to me. Every time I notice a reproducible bug and try to report it to Apple I'm stunned by how difficult they make the process. Even reporting something as basic as incorrectly transcribed podcasts is an awful experience.

Triaging and categorising bug reports at scale really feels like something LLMs should be able to assist with significantly.


I actually think the Claude 3.7 Sonnet summary is better.

yeah I liked it too, especially at a tenth of the price lol

This is incredibly useful, thank you for sharing!

Is it safe to uppercase the protocol part of the URL?

I decided to use segments and stick to lowercase "https", because I didn't trust various implementations out there to handle "HTTPS" correctly. Should I?


RFC 1738 and 2396 both say schemes are lowercase but implementers "should" treat uppercase as equivalent.

RFC 3986 explicitly says schemes are case-insensitive but canonically lowercase, and that implementations "should" accept uppercase letters as equivalent for the sake of robustness.

WHATWG's URL standard mandates case-insensitive matching.

In practice anything you scan a QR code with will probably not have a problem with the uppercase scheme.
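
To sanity-check that in code: Python's standard urllib parser, for example, already normalises an uppercase scheme to lowercase while leaving the host and path untouched. A small illustrative snippet (not a compatibility guarantee for any particular QR reader):

    from urllib.parse import urlsplit

    parts = urlsplit("HTTPS://EXAMPLE.COM/ABC123")
    print(parts.scheme)  # 'https' -- the scheme is normalised to lowercase
    print(parts.netloc)  # 'EXAMPLE.COM' -- host case is preserved (DNS is case-insensitive anyway)
    print(parts.path)    # '/ABC123' -- path case is preserved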


On this exact issue, my workplace did extensive testing and research into various standards.

Although we found browsers were out of alignment with standards on all sorts of matters, we found broad compatibility with uppercase. (Of course, this means everything before the path. Interpretation of the path is delegated to the server, which may or may not be case-sensitive, up until the octothorpe (#), after which the fragment is interpreted solely by the browser.)


How many implementations do you care about? All major phones do very well at QR scanning these days including uppercase URLs.

Leaving a bare address like "www.example.com/1234", without the scheme, was not quite as good, but at least iPhones, Pixels and Samsungs worked well IIRC.

I used this trick to allow printing of QR codes at smaller resolutions for 4+ years, and phones have gotten noticeably better in that span at handling QR codes printed at smaller sizes, with uppercase, nonstandard shapes, borders, you name it.

If you make https: lowercase and the rest uppercase, with QR encoding that does smart segmentation, I'm pretty sure you can still get most of the benefit... but, exercise for the reader.

The smallest QR code, version 1, is 21×21 modules. That can fit 20 uppercase (alphanumeric-mode) characters at error-correction level M.

The 25×25-module size (version 2) is not that much bigger and can fit 38 uppercase or 26 lowercase characters.
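
If you want to estimate how much the mixed-segment trick saves, here is a rough, illustrative Python sketch using the segment overheads from the QR spec (4-bit mode indicator; 8-bit count for byte mode and 9-bit count for alphanumeric mode in versions 1-9; 8 bits per byte vs. 11 bits per pair of alphanumeric characters). The URL is made up, and the uppercase part is assumed to use only the alphanumeric-mode character set (0-9, A-Z, space, $ % * + - . / :):

    def byte_segment_bits(s):
        # 4-bit mode indicator + 8-bit character count (versions 1-9)
        # + 8 bits per byte of data
        return 4 + 8 + 8 * len(s.encode("utf-8"))

    def alnum_segment_bits(s):
        # 4-bit mode indicator + 9-bit character count (versions 1-9)
        # + 11 bits per pair of characters, 6 bits for a trailing odd character
        pairs, odd = divmod(len(s), 2)
        return 4 + 9 + 11 * pairs + 6 * odd

    url = "https://EXAMPLE.COM/ABC123"   # made-up example URL
    prefix, rest = url[:8], url[8:]      # "https://" and the uppercase rest

    print("single byte-mode segment:", byte_segment_bits(url), "bits")
    print("byte + alphanumeric mix: ",
          byte_segment_bits(prefix) + alnum_segment_bits(rest), "bits")

Even with the second segment header, the mixed encoding comes out noticeably smaller, which is where most of the benefit comes from.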


Strangely, not including the scheme doesn't seem to work consistently on an iPhone when in uppercase: a string like "FOO.COM/BAR" does open as a URL, but a string like "FOO.UK/BAR" does a search. I think it's best to include the full HTTPS:// prefix (and I don't think it being uppercase really matters; I'd be surprised if that breaks anything).

I did a lot of trial and error and concluded the same: for non-.COM addresses we had to use the full HTTPS:// prefix, otherwise iOS wouldn't open them. Android opens any TLD, even unusual ones like .MD or .NZ.

> If you make https: lowercase and the rest uppercase, with QR encoding that does smart segmentation, I'm pretty sure you can still get most of the benefit... but, exercise for the reader.

That is actually exactly what I ended up doing. I care about all mobile phones and tablets, and I was worried whether any implementers actually tested uppercase protocol names.


In which case, you can still uppercase the domain to get a partial size reduction, since the QR format allows switching between encoding modes mid-stream.

If you control the destination server, you can also uppercase the path.

In the old days, Yahoo's build of Apache (yApache) included an option to automatically lowercase URLs before matching them. Super handy, because lots of URLs were coming in from print publications, and you could never get publications to show URLs properly, nor get users to type them in properly.
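
A modern equivalent of that "lowercase before matching" option is easy to sketch. Here is a minimal, hypothetical WSGI middleware (not Yahoo's actual implementation) that lowercases the request path before it reaches the routing layer; only do this if your URL space really is case-insensitive:

    def lowercase_path(app):
        # Wrap a WSGI application so the request path is matched
        # case-insensitively. This will break servers that serve
        # distinct resources at /Foo and /foo.
        def middleware(environ, start_response):
            environ["PATH_INFO"] = environ.get("PATH_INFO", "").lower()
            return app(environ, start_response)
        return middleware

    # Usage: app = lowercase_path(app)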


sigh

See https://jepsen.io/analyses for how MongoDB has a tradition of incorrect claims and losing your data.

Distributed databases are not easy. Just saying "it is web scale" doesn't make it so.


Are you aware:

1. That PgSQL also has issues in Jepsen tests?

2. Of any distributed DB which doesn't have Jepsen issues?

3. That this is configurable behavior in MongoDB: it can lose data and run fast, or run slower and not lose data? There are no issues of unintentional data loss in the most recent (five-year-old) Jepsen report for MongoDB. (A sketch of the relevant settings follows below.)
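
To make that trade-off concrete, here is a minimal pymongo sketch of the relevant knobs; the connection string, database, and collection names are made up. With these settings MongoDB acknowledges a write only once a majority of replica-set members have it, trading speed for durability:

    from pymongo import MongoClient
    from pymongo.read_concern import ReadConcern
    from pymongo.write_concern import WriteConcern

    client = MongoClient("mongodb://localhost:27017")  # made-up connection string

    # Slower but durable: writes are acknowledged only after a majority of
    # replica-set members (and the journal) have them; reads only see
    # majority-committed data.
    orders = client["shop"].get_collection(
        "orders",
        write_concern=WriteConcern(w="majority", j=True),
        read_concern=ReadConcern("majority"),
    )

    orders.insert_one({"sku": "ABC-1", "qty": 1})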


Distributed databases are not easy. You can't simplify everything down to "has issues". Yes, I did read most Jepsen reports in detail, and struggled to understand everything.

Your second point seems to imply that everything has issues, so using MongoDB is fine. But there are various kinds of problems. Take a look at the report for RethinkDB, for example, and compare the issues found there to the MongoDB problems.


> Take a look at the report for RethinkDB

RethinkDB doesn't support cross document transactions, problem solved lol


PgSQL's only defect was an anomaly in reads which caused transaction results to appear a tiny bit later, and they even mentioned that it is allowed by the standard. No data loss of any kind.

MongoDB's defects were, let's say, somewhat more severe:

[2.4.3] "In this post, we’ll see MongoDB drop a phenomenal amount of data."

[2.6.7] "Mongo's consistency model is broken by design: not only can "strictly consistent" reads see stale versions of documents, but they can also return garbage data from writes that never should have occurred. [...] almost all write concern levels allow data loss."

[3.6.4] "with MongoDB’s default consistency levels, CC sessions fail to provide the claimed invariants"

[4.2.6] "even at the strongest levels of read and write concern, it failed to preserve snapshot isolation. Instead, Jepsen observed read skew, cyclic information flow, duplicate writes, and internal consistency violations"

Let's not pretend that Mongo is a reliable database, please. Fast? Likely. But if you value your data, don't use it.


In an attempt to understand your motives in this discussion, I would like to ask a question:

* Why are you referring to 12-year-old reports for very early MongoDB versions?


This discussion refers to the entire history of MongoDB reports, which shows a lack of care about losing data.

If you wish to have a more recent MongoDB report, Jepsen is available for hire, from what I understand.


No, the discussion started with the question "Why choose Mongo in 2025?" So old Jepsen reports are irrelevant, while the most recent one, from 2020, is at least somewhat relevant.

As a data point: I've been running my solo-founder SaaS business for 10 years now using Clojure. It changed my life. It would not have been possible without Clojure and ClojureScript; building and maintaining an app of this complexity would have exceeded my limits.

The article is excellent and I agree with everything in it.

The stability of the language is unbelievably useful. I look around and it seems it isn't valued in many other ecosystems where people have to rewrite their software regularly. I can't afford to rewrite my app.

There will be plenty of armchair critics here, with cliché knee-jerk reactions (parentheses, JVM, startup time, etc.). If you intend to form an opinion, I would suggest you read only the insightful posts: from people who have actually used the language, or from critics who present well-thought-out criticism, not just a shallow knee-jerk reaction.


You will find that "whatever language you are most familiar with" is the only one you will perceive as making it possible to build whatever you are working on.

That is a shallow take and I disagree — at least in my case, I make a conscious effort to avoid being a Blub programmer and regularly try different languages and look at different ecosystems. The choice of Clojure and ClojureScript for this application was not accidental, and it wasn't the thing I was most familiar with at the time.

There are good reasons why I said it was only possible with Clojure and ClojureScript. Conciseness, expressiveness, long-term stability, the ability to share business logic code between the client and server, same language used for data serialization (e.g. no JSON), good async code support (core.async), transducers for code reuse and performance, and more.


Hard disagree: I never used Datomic and likely never will, and I don't feel like I'm "losing some of the language's value". Datomic is a database. You can use any database you like.

Datomic feels like a natural extension of Clojure for storing data: you get no impedance mismatch, you continue working with Clojure's data structures for storing, querying, and writing data instead of dealing with clunky query builders (or SQL strings), and you gain immutability. Sure, you can use any database you like, but then you're playing the same game as every other programming language, ultimately getting less value out of Clojure and making it feel like just another language, at least for applications that require a database.

I disagree. I tried to use datalog and it was not a natural extension at all, because of the lack of nil handling. I also found that I do not need (or want) full immutability.

I would not compare it to SQL: I do not use SQL because there is no distinction between in-band and out-of-band data (your data and your commands/queries travel in the same channel, which causes a world of pain). But there are other database approaches that work very well. It's not a problem at all to serialize your data and store native Clojure data structures, even better than Datomic does.

In other words, I disagree that Datomic is somehow natural or superior: it is "a" database, excellent for certain applications, not necessarily the best choice for every application.


Not really. The database is where immutability may not be a good fit, at least not all the time.

In many use cases, the database is where application state resides, hence a mutable database is a better fit.

There are Datomic-flavored databases in the Clojure ecosystem that are specifically designed to address this point, e.g. Datalevin.


Wow, I didn't know about Datalevin. There are a few Datomic-like databases around by now; this one looks interesting. I have been following Datahike and XTDB.
