
I, along with others, said years ago that answers which don't address the question asked should be removed. In this case, malicious actors are pushing packages that are unrelated to the problem presented in the question. This will become more prevalent than package squatting.

This is something where, without some pain inflicted on host and network operators, the needle will not move. The CA/B Forum could start by requiring that CAs' validation try AAAA records first, warn if they can't validate over IPv6, and then try A. Later, require validation through both AAAA and A records; this would push most hosts to make sure every page is accessible over both networks. Then require AAAA records only, without bothering to check A records.
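For illustration, here is roughly what that AAAA-first, warn-then-fall-back-to-A ordering could look like, sketched with Python's standard library (the helper and the warning text are made up for this comment, not anything a CA actually runs):

    import socket

    def resolve_validation_targets(hostname, port=443):
        """Prefer AAAA (IPv6) addresses for validation, fall back to A (IPv4)."""
        try:
            ipv6 = socket.getaddrinfo(hostname, port, family=socket.AF_INET6,
                                      type=socket.SOCK_STREAM)
        except socket.gaierror:
            ipv6 = []

        if ipv6:
            # Validate over IPv6 first; in a later phase this could become mandatory.
            return [info[4][0] for info in ipv6]

        print(f"warning: no usable AAAA records for {hostname}, falling back to A")
        ipv4 = socket.getaddrinfo(hostname, port, family=socket.AF_INET,
                                  type=socket.SOCK_STREAM)
        return [info[4][0] for info in ipv4]

    print(resolve_validation_targets("example.com"))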

Let's Encrypt does try AAAA records first and then A. That's what leads to the phenomenon that I described where many people end up deciding to delete their AAAA records. :-(

I think Let's Encrypt's behavior is correct, and I don't think anyone inside Let's Encrypt or CA/B Forum would like to inflict "pain [...] on host and network operators" beyond the current practice. Let's Encrypt's current behavior could be described as cooperating with IPv6 adoption rather than compelling IPv6 adoption, which seems like a reasonable place for a certificate authority to be.

There are cases where Let's Encrypt has made choices that potentially slightly reduce compatibility in order to encourage what it considers technically correct behavior. The first one that comes to mind is

https://community.letsencrypt.org/t/adding-random-entries-to...

but I know there are others. On the other hand, all of those decisions can be justified in terms of improving the correctness of PKI software (like not hard-coding things that should not be hard-coded). I can't think of any example that's like "we wish this other technical thing would happen outside of the PKI, so we'll try to force it along", and I don't foresee the broader community getting comfortable with that.


The power that Let's Encrypt has over domain operators is limited. If it becomes a CA/B Forum requirement, then every CA would have instructions on how to make sure you have valid AAAA records, and it wouldn't be a matter of "LE doesn't work because they fail in this scenario". I'm a behavioral scientist (aka economist), and if you want the public to change its behavior you have to motivate that change; here "pain" is defined as whatever is necessary to effect that change, by increasing the cognitive load.

There was a comment below that in the LE forums, instead of asking people to fix their AAAA records, they ask them to remove them. That's exactly the behavior you don't want to reward, so you must cause pain.


How then, can we know that the treatment worked/did anything at all?

Timing mostly. If the thing doesn't budge for two years and then gets better in a week with treatment, it probably worked.

In many cases it's very difficult to know for sure.

The room still exists and has all the messages: https://chat.stackexchange.com/rooms/64083/discussion-on-ans...

Your link brings me to a "page not found".

Is it gated by reputation? If so, could you please post the full comments somewhere?


That or being logged in. It was “not found” for me, too, until I logged in. (I don’t have any rep on the Ubuntu site, aside from the 100 for being on other SE sites).

I am logged in and have 191 reputation on Ask Ubuntu.

Had no idea that the cast of NG were wookies.

Chatrooms are frozen, not deleted, if there's enough content. The only reason they are deleted is when there are fewer than 10 messages or fewer than 3 participants (don't quote me on that).

Actual rules:

> Rooms will exist indefinitely, so long as there is at least one person actively talking in the room. A room is considered worth retaining if it has more than 15 messages by at least 2 users.


So did this room get lost in a middle ground where there were too many messages for a comment section but too few to retain the chatroom?

That's obviously not a good outcome. Either the threshold for moving comments to chat needs to be moved up, or the threshold for deleting a chatroom needs to be moved down.

Edit: Wait, none of this makes sense! We can tell from the Internet Archive that there were at least 15 comments. So where is the room?


The room is still there. You may need to log in, however: https://chat.stackexchange.com/rooms/64083/discussion-on-ans...

I'm logged in, and I have 191 reputation on Ask Ubuntu, and that link brings me to a "Page Not Found".

You need to be logged in to Stack Overflow.

> Now, who defines "addictive" using which aspect?

Addictive is whatever creates a dependence that doesn't allow you to function correctly and productively, because it produces a natural reward.

In this case, design patterns that promote and provoke addiction, like gamification, attention grabbing, etc., that kids don't have the tools to be aware of, need to be regulated. In fact, sugar content regulation in foodstuffs has been shown to be very effective at reducing obesity in kids. It's not a "ban", it's a regulation.


libicu is deliberately doing this because one of its purposes is to offer Unicode ordering. If consumers try the new ordering on non-rebuilt objects, it could lead to data loss. That's what happened with glibc 2.28. [0]

If postgres prefers explicit library versions to prevent data corruption, then icu is doing things correctly and making sure that downstream consumers don't shoot themselves in the foot.

[0]: https://wiki.postgresql.org/wiki/Locale_data_changes
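
To make the failure mode concrete: here is a toy sketch (plain Python, not Postgres internals) of binary-searching an index that was physically sorted under one collation while comparing with another. The two key functions just stand in for the old and new strcoll rules:

    import bisect

    old_key = lambda s: s             # stand-in for the pre-2.28 ordering
    new_key = lambda s: s.casefold()  # stand-in for the changed ordering

    rows = ["Apple", "banana", "Cherry", "date"]

    # The "on-disk" index was built and sorted under the OLD collation...
    index = sorted(rows, key=old_key)   # ['Apple', 'Cherry', 'banana', 'date']

    def lookup(value, key):
        # Binary search, like a btree descent, using the given collation.
        keys = [key(v) for v in index]
        i = bisect.bisect_left(keys, key(value))
        return i < len(index) and index[i] == value

    # ...but after the library upgrade, lookups compare with the NEW collation.
    print(lookup("banana", old_key))  # True  -- consistent rules, the row is found
    print(lookup("banana", new_key))  # False -- same data, the row silently vanishes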


Honestly, the more I researched what happened in that glibc version update, the more I thought "self-inflicted".

If you're using NULL-terminated strings everywhere, and not UTF-8/16, it's your own fault for doing so. Especially if you are building a database, which should always store a byte length, or at least a CRC check, before each entry's contents in its serialized file format, to prevent exactly this from happening.

(Let alone the question of why there is no schema version that would have prevented this from happening in the first place.)
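
For what it's worth, a length-prefix-plus-checksum record layout is only a few lines; this is a hand-rolled sketch of the idea, not how any particular database serializes its rows:

    import struct
    import zlib

    def pack_record(payload: bytes) -> bytes:
        # 4-byte big-endian length, 4-byte CRC32, then the raw payload.
        return struct.pack(">II", len(payload), zlib.crc32(payload)) + payload

    def unpack_record(buf: bytes) -> bytes:
        length, crc = struct.unpack_from(">II", buf, 0)
        payload = buf[8:8 + length]
        if len(payload) != length or zlib.crc32(payload) != crc:
            raise ValueError("corrupt record")
        return payload

    rec = pack_record("héllo wörld".encode("utf-8"))  # any bytes, embedded NULs included
    print(unpack_record(rec).decode("utf-8"))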


How can Mozilla guarantee some privacy? Well, they would be using separation of the content from the source. The content is "broad categories" of search queries. They take your query, match it to a category (this is important, because the matching has to be local to make sense), encrypt it with the public key, and send it to a broker. The broker knows where it is coming from, but not what it says. The broker then forwards it to Mozilla, which decrypts it with their private key.

I don't see a problem as long as there aren't enough bytes of information to infer a person. Aggregating the data on the client would allow this, so I expect some kind of time shifting to protect activity patterns.
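
Sketched out, the split could look something like this (PyNaCl sealed boxes stand in for whatever encryption Mozilla actually uses; the toy classifier and the broker function are made up for this comment):

    from nacl.public import PrivateKey, SealedBox

    # Mozilla's keypair: the broker only ever sees the public half.
    mozilla_sk = PrivateKey.generate()
    mozilla_pk = mozilla_sk.public_key

    def client_report(query: str) -> bytes:
        # 1. Match the query to a broad category *locally*; the raw query never leaves.
        category = "tech" if "linux" in query.lower() else "news"  # toy classifier
        # 2. Encrypt the category to Mozilla's public key.
        return SealedBox(mozilla_pk).encrypt(category.encode())

    def broker_forward(ciphertext: bytes, client_ip: str) -> bytes:
        # The broker sees WHO is reporting (the IP) but not WHAT; it just relays.
        return ciphertext

    def mozilla_receive(ciphertext: bytes) -> str:
        # Mozilla sees WHAT was reported but not who sent it.
        return SealedBox(mozilla_sk).decrypt(ciphertext).decode()

    ct = broker_forward(client_report("best linux laptop"), client_ip="203.0.113.7")
    print(mozilla_receive(ct))  # 'tech'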


No. Your queries aren't sent to Mozilla, just the broad categories. Chrome is sending full queries to Google.


If you're using Google for searches, it also doesn't matter whether you use Firefox or Chrome; obviously they have access to the full queries you make on their properties.


It does matter, because you can use an anonymizer for search on Firefox just fine, but if you use Chrome then Google can collect everything you enter in a form and see where it's sent. Not sure if they do, but they could. One element of security is compartmentalization: avoid using a client and server from the same adversary, if you can, to minimize correlations.


Also, you can opt out of this, unlike (I believe, correct me if I'm wrong) Chrome's telemetry.


Signal self-publishes its APK. You can drink directly from the source.

