It's been weeks since the initial TeleMessage revelation... has the Signal Foundation responded in any way to the news? They condemn open source third-party clients and threaten trademark litigation when people use the "Signal" name in interop projects. Meanwhile, total silence when a defense contractor does the same thing.
The charitable answer is that organizations across US society are currently all trying to be very still and quiet and not do anything to provoke a vindictive assault by this administration.
The less charitable one is that Moxie was the opinionated and uncompromising core of the Signal Foundation and has been removed from the board and completely vanished from the public eye. What it stands for now is a touch less clear.
Signal has done nothing wrong here. There's nothing they could meaningfully say that would do anything except draw heat from people looking for a scapegoat.
This mess is entirely the fault of TeleMessage and the people who chose to use it for top-secret comms.
I recall Whittaker talking about it in an interview, mainly complaining about how mainstream media kept referring to Signal as an "insecure messenger" when that was not at all the issue. Can't seem to find that interview now, though.
Probably not much they could do, because I'm sure that's why TeleMessage didn't call their app "Signal", but "SGNL".
I'm annoyed by moxie vs fdroid as the next guy, but this is way above his desire to make a buck from his honest work.
This is about an overseas elite who profited from US war aid for decades holding the US presidency by the balls, and everyone thinks this is just incompetence.
Think for a second: if any other administration were using a telephone or communication software made by a never-before-heard-of company overseas, would you think it was just incompetence? Why do these traitorous clowns get a pass?
> if any other administration were using a telephone or communication software made by a never-before-heard-of company overseas, would you think it was just incompetence?
One interesting thing I saw in the original article was that the US was using TeleMessage since February 2023. If that's true, it means we have two administrations who are responsible for this choice.
Protecting your name is perfectly fine. You're allowed to make a fork of Firefox, you just can't call it Firefox or use any of Mozilla's branding. You're allowed to fork the open source part of VS Code, you just can't call it that or use Microsoft's branding. etc. etc. - you're free to do with open source whatever the license allows, but you're not allowed to use the original name or branding because you have zero rights to those unless the license explicitly stipulates how the name may be used by forks (like how tons of folks use the "Linux" name, and all of them do so with explicit written permission from the Linux foundation, as they own that name as a trademark)
That's not the issue here. VS Code and Firefox are false equivalences. Even if you rebrand the fork, Signal forbids non-official clients/builds from connecting to their servers. Enforcement has been selective, but the last official word AFAIK is that you are not allowed to fork, rebrand, and distribute a client which allows you to chat with Signal users.
Mozilla still allows you to install and download add-ons and use other Mozilla services like VPN and Relay from your LibreWolf build.
They wrote a two-part complaint: one part about clients, and the other about Signal going after people using the Signal name. My comment was only about that second part (hence why it starts the way it starts).
I don't disagree generally, but it should be noted that the TeleMessage federal contracts predate this administration.
> According to Padgett and government records reviewed by NBC News, government contracts (some of which are still current) involving TeleMessage go back years, predating the current Trump administration. One current contract that mentions TeleMessage allocated $2.1 million from the Department of Homeland Security and FEMA for “TELEMESSAGE MOBILE ELECTRONIC MESSAGE ARCHIVING,” beginning in February 2023, with an August 2025 end date.
Sure, but was it being used to send secure military messages in the past? Or was it being used as a slightly more secure text messaging replacement by agencies that weren’t subject to the same security requirements as the Secretary of Defense?
It is my understanding that the normal procedures mandate that government supplied locked down devices be used for classified communications, not personal phones running Israeli cloud-connected messaging apps.
This is comparable to everyone using Hillary's email server for classified messaging, except also controlled from a foreign country, and oops, very insecure.
Even office drones working at a bank aren't allowed to do such things.
> Bluesky’s moderation team reviews each verification to ensure authenticity.
How is this compatible with Bluesky's internal cultural vision of "The company is a future adversary"[1][2][3]? With Twitter, we've seen what happens with the bluecheck feature when there's a corporate power struggle.
The problem with Twitter (before the whole blue check system was gutted into meaninglessness) was that not enough verification badges were handed out. It's not exactly a dangerous situation.
Bluesky's idea of verified orgs granting verification badges to its own org members would be an example of a much more robust and hands off system than what Twitter had.
The dangerous scenario is what happened to Twitter after the Elon takeover: verification becomes meaningless overnight while users still give the same gravity to verification badges which causes a huge impersonation problem. But that possibility is not a reason to have zero verification.
The problem I had with twitter was the check was supposed to mean one thing and one thing only: that the person was who he or she claimed to be.
What Twitter started doing was removing blue checks from people who were causing problems for the platform (but not behaving badly enough to be kicked off). This made no sense, because people still needed to know whether a person was who he claimed to be (e.g., Milo Yiannopoulos) even if the person was controversial or problematic or just plain nasty.
Blue Checks weren't "gutted". Now they just mean something else -- you're a premium subscriber.
This is absolutely correct—I remember quite clearly how it all went down. When Twitter first rolled out verification, it was supposed to ensure that the person you were following or interacting with was the person they claimed to be.
This was also because there were so many people setting up fake accounts impersonating real celebs. The most famous was probably the Dave Chappelle/Katt Williams story that Chappelle tells, where a fake Chappelle account was feuding with a fake Katt Williams account.
The problem is that X (formerly Twitter) is still calling blue checks "verified". Even though nothing about the account is verified. It's deliberately misleading.
I use the word "gutted" to refer to the level of trust in the old system that was abandoned in the identical-looking new system.
The correct way to have rolled that out would have been a brand new icon, but they wanted to cash in on the reputation of the old system. "Now you can pay for this once-coveted badge!"
Nothing: anyone can hand out verifications to anyone they'd like.
Now, how those are displayed is up to the display software. BlueSky themselves get to decide who gets a blue check based on verification records, but if you wrote your own software, you could do whatever you'd like. There's a bsky fork that already has an account option to let you hide blue checks in your own view.
If anyone can hand out verifications to anyone they like then we’re back to square one. Why do we need these “Blue Checks”? It feels like BlueSky is trying to reclaim the lost clout for some of their influential users.
That’s a really weak argument because normie users want TikTokified social media with a clout based caste system. I thought BlueSky had a different goal in mind.
BlueSky wants to make a social media platform that normies want to use. The whole idea is to design a protocol that can achieve huge scale and behave a lot like existing social media apps (which people "like") while still being open and flexible.
I'm not sure about the details of the protocol (maybe someone can check?), but the client software can probably distrust certain trusted verifiers, or even use a public list of such revocations. If this opportunity is exploited by malicious actors, there must be a simple and fast escalation path to revoke verification (you complain to the verifier, then to the maintainers of the revocation list).
> Revocation is deletion, so it's hard to enumerate revocations.
In a decentralized design it is not. A server can continue to list verified accounts, a third-party revocation list may mention some of them, and a client will show as verified only the accounts not in that list.
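As a minimal sketch of that client-side policy (all names and handles here are illustrative, not part of any real protocol), the verifier's list and an independently published revocation list can simply be intersected before a badge is shown:

```python
# Hypothetical sketch: a client combines a verifier's list of verified
# accounts with a third-party revocation list, and shows the badge only
# for accounts that are verified AND not revoked. The sets below stand
# in for data a real client would fetch from those two sources.

verified_accounts = {"alice.example.com", "bob.example.com", "carol.example.com"}
revocation_list = {"bob.example.com"}  # published by an independent party

def show_badge(handle: str) -> bool:
    """Client-side display policy: badge iff verified and not revoked."""
    return handle in verified_accounts and handle not in revocation_list

print(show_badge("alice.example.com"))  # True: verified, not revoked
print(show_badge("bob.example.com"))    # False: on the revocation list
```

The point is that revocation lives entirely in the client's display logic, so no cooperation from the original verifier is needed.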
We need the blue checks to combat impersonation attacks. If a malicious actor pretends to represent a New York Times journalist, or your bank, or a government agency, you don't want users to be tricked into believing them.
Verification is a different thing than a "blue check". I might be against blue checks, but for sure I'm for anyone being able to hand out verifications. It creates a web of trust, and ideally one can choose who they trust and how many transitive levels.
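One way to picture that "anyone can hand out verifications" model with chosen trust anchors and transitive levels is a bounded walk over the verification graph. This is a toy sketch under assumed data (the graph and handle names are made up), not any real Bluesky mechanism:

```python
from collections import deque

# Hypothetical web-of-trust sketch: each account lists the accounts it
# has verified. A client picks its own trust anchors and a maximum
# number of transitive hops; everything reachable within that depth is
# treated as verified. The graph below is illustrative only.

verifications = {
    "me": ["nytimes"],
    "nytimes": ["reporter-a", "reporter-b"],
    "reporter-a": ["source-x"],
}

def trusted(anchors, max_hops):
    """Breadth-first walk of the verification graph, up to max_hops."""
    seen = set(anchors)
    frontier = deque((a, 0) for a in anchors)
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue  # don't expand past the chosen transitive level
        for peer in verifications.get(node, []):
            if peer not in seen:
                seen.add(peer)
                frontier.append((peer, depth + 1))
    return seen

# With 2 transitive levels, the reporters are trusted but source-x is not.
print(sorted(trusted({"me"}, 2)))
```

Choosing `max_hops` per-client is exactly the "how many transitive levels" knob the comment describes.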
Presumably a contractual agreement with BlueSky. Trust needs to stem from somewhere, so you’re either looking at a web-of-trust model where somebody (BlueSky or BlueSky clients) makes decisions on what sign-offs to trust, or you trust BlueSky to perform due diligence on partner orgs that provide this service and to hold them accountable when that trust is breached.
The WoT model works but as GPG has shown it requires your end users (people? BlueSky client developers?) to manage who they trust as an authority on anything.
Your profile says you're a security engineer, so I'm hoping you can help me understand.
What was the problem with the current DNS system? I definitely think there could be improvements like displaying domain instead of TLD but still.
And why not move to a system like multiparty keys? Keys assigned by the domain holder need to be signed, and verified accounts must log in with a private key that validates. That way you don't just get that the account is validated, but the post is too. Yes, this would require more technical expertise, but the organizations we're usually most concerned about would have no problem with that. Besides, tooling gets easier when there are meaningful pushes to make it available to general audiences.
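The per-post validation idea above can be sketched like this. Note the hedge: a real deployment would use an asymmetric signature (the domain holder publishes the public key and signs with the private half); HMAC, a shared-secret scheme from the Python stdlib, stands in here purely to show the sign-then-verify flow, and the key and messages are made up:

```python
import hashlib
import hmac

# Toy sketch of per-post validation: each post carries a tag that
# clients check before treating it as authentic. HMAC stands in for
# the asymmetric signature a real system would use; the key below is
# illustrative only.

key = b"key-assigned-by-domain-holder"

def sign_post(text: bytes) -> str:
    """Producer side: attach an authentication tag to the post."""
    return hmac.new(key, text, hashlib.sha256).hexdigest()

def valid_post(text: bytes, tag: str) -> bool:
    """Client side: recompute and compare in constant time."""
    return hmac.compare_digest(sign_post(text), tag)

post = b"Official statement from example.com"
tag = sign_post(post)
print(valid_post(post, tag))                  # True: tag matches
print(valid_post(b"Forged statement", tag))   # False: rejected
```

With real asymmetric keys, the verifying client would never hold the signing secret at all, which is what makes the scheme safe to use across mutually distrusting parties.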
There was once Handshake, but they failed mainly because they didn't want to work alongside the current system; they wanted to replace it completely with a blockchain. It's actually one of the good use cases IMHO, but their leadership didn't want to play nice with what we already have so the transition could happen smoothly.
What is appropriate here really depends on what properties BlueSky wants to assert about its ecosystem.
> What was the problem with the current DNS system? I definitely think there could be improvements like displaying domain instead of TLD but still.
As commenters earlier in this thread have noted, one property BlueSky claims it wants to develop/maintain is resistance against BlueSky itself becoming an adversarial party--say in the event it is bought out by an eccentric multi-billionaire who may take steps to discredit certain parties or reduce their reach or reputation.
I don't think the current DNS verification methodology helps or hurts in a BlueSky is hostile scenario. I think the only issues with the current DNS username verification system as I understand it are the same issues with any DNS based verification system in that vanilla DNS was not a protocol designed to be resistant to adversarial use and as a result there are many ways for people to tamper with DNS records, DNS queries, and confuse systems that incorrectly trust DNS records to be trustworthy or well-formed. DNS cache poisoning is a thing. Domain takeovers have and will continue to happen.
Now, if we're talking about what technologies are suitable for username verification in a world where BlueSky is adversarial, we have a very different conversation. I think the scope of such a scenario extends a lot further than usernames. If your primary interface into the AT Protocol feed is the BlueSky website or the official BlueSky application, you are trusting BlueSky to validate usernames. An adversarial BlueSky could easily decide whether your interface marks someone as trusted or untrusted without your knowledge.
The only way I could think to avoid this would be to use a different interface to the global feed, but then you run into new problems that are more difficult to avoid: the fact that most BlueSky users don't host their own data (though this is doable with https://atproto.com/guides/self-hosting). BlueSky also manages the global feed, so barring a competitive global index, you'll still be consuming a feed curated by BlueSky's AT Protocol services and implementation.
I'd close this by saying I am _not_ an expert in the AT Protocol, BlueSky's systems, or this space in general. This is a very fast and loose risk assessment I made after reviewing the AT Protocol and a bit of research on how BlueSky says it provides its services. My current assessment is that if the community wants to be robust against BlueSky becoming adversarial, there needs to be a lot more support for self-hosting of PDSs and similar data stores, alternative global indexes, and likely also independent governance of the AT Protocol itself.
Thanks for the insights and disclosure. While maybe not an expert in the AT Protocol, you are certainly closer to the issue than myself and presumably most users.
Were you given the ability to make a trustless or more decentralized system is there some route you would pursue? Maybe ignoring the AT protocol so I (and others?) can get a better sense of how such systems might be employed to ensure that an arbitrary organization can build defenses against themselves becoming adversaries (seems like a fairly useful mindset in general).
> Maybe ignoring the AT protocol so I (and others?) can get a better sense of how such systems might be employed to ensure that an arbitrary organization can build defenses against themselves becoming adversaries (seems like a fairly useful mindset in general).
There are ways to do this, but being trustless in the context of a social network is a thorny problem I'm not sure I can provide an answer to. The whole purpose of such networks is to enable communication with people you may not necessarily know (or trust). Furthermore, you implicitly trust the medium you use to communicate (BlueSky, Twitter, SMS, Zoom, etc.).
There are ways to make things difficult for a service provider; you see many of them in high-security contexts. For example, when storing data on AWS GovCloud or similar, you have the option to have AWS use user- or company-provided encryption keys that AWS does not control. Should AWS be compromised in some way, they would still need your cryptographic keys in order to access unencrypted data (https://docs.aws.amazon.com/AmazonS3/latest/userguide/Server...).
Another approach is message signing. An example of that is PGP signing of emails or files. You can use these cryptographic signatures to verify a message (email) or binary blob (file) has not been tampered with.
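To make the signing point concrete with a minimal stdlib sketch: PGP uses asymmetric signatures, which also prove *who* signed. As a deliberately weaker stand-in, a bare SHA-256 digest demonstrates just the tamper-detection half, and only works if the digest itself reaches you over a channel the attacker can't alter. The messages below are made up:

```python
import hashlib

# Tamper detection via a content digest. Unlike a real PGP signature,
# a plain hash does not authenticate the author (anyone can recompute
# it), so the published digest must come from a trusted channel.

def digest(message: bytes) -> str:
    return hashlib.sha256(message).hexdigest()

original = b"Meet at noon."
published_digest = digest(original)   # shared out-of-band

tampered = b"Meet at midnight."
print(digest(original) == published_digest)   # True: unmodified
print(digest(tampered) == published_digest)   # False: tampering detected
```

A signature scheme adds a private-key operation on top of exactly this kind of hash, which is what lets the verifier also confirm the sender's identity.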
Another common approach, which you have alluded to, is some kind of multi-party scheme. Many cryptographic blockchains are good examples of this: You need a majority of parties in such schemes to agree on something for it to be considered valid or true.
A combination of these things can be used to make it more difficult for a service provider to be compromised or to act against their users' interests. Sadly, this does not make it _impossible_ for them to do so. A user or customer who doesn't want to tolerate the worst-case scenarios here still needs to make their own backups, decide which entities to trust, and ensure they have robust procedures for things like trusting new entities or managing cryptographic material.
I'll also note that there are in fact live examples of such systems if you look for them. See IPFS for one such example: https://ipfs.tech/
> For example, the New York Times can now issue blue checks to its journalists directly in the app. Bluesky’s moderation team reviews each verification to ensure authenticity.
It's no more than any client only showing the information it wants to; anyone can verify anyone else, without permission from the company. Any client can surface this information in any way they'd like, without permission from the company.
Regardless of the intent or future this is an incredibly neat rhetorical trick that Bluesky's designers have pulled. Any semi-motivated contrarian can make the required arguments for either concern. It even happens to be true!
Meanwhile every mastodon discussion required (requires?) someone who deeply understands the system to spend "10 comments deep" energy just to arrive at a much less amenable position.
Nothing, however AppViews like Bluesky decide which verifiers they trust. An AppView could also allow for user choice, like how algos and moderation work.
The problem with Twitter (before the whole blue check system was gutted into meaninglessness) was that verification badges were merit- and nepotism-based, not identity-based.
Here is probably the most well-known instance of Pre-Musk Twitter removing blue checks from the accounts of public figures for reasons other than the account not being who it claims to be:
Not pictured: innumerable other accounts which were never granted a blue check in the first place, despite being the easily verifiable real accounts of journalists and public figures.
It was de facto a caste system of political favor and connections.
Nearly everyone I interacted with on Twitter (pre-Musk) got their verification badge by essentially being in the San Francisco tech community.
No matter how prominent someone might be on this side of the Atlantic, it never mattered, meanwhile mid-managers and coders at no-name startups had blue checks because, I mean, they knew someone at Twitter- and who else is verifying people? I don’t really fault them for verifying people they personally knew.
But it meant that you had a situation where (and no offence meant to them) “nobodies” (as in, non-prominent figures) were “in the club” with heads of state, companies, and heads of industry.
So there was a definite whiff of nepotism, because it was a de-facto status symbol.
Yep, same with "journalists". Publication companies were given a fast-track process that basically allowed them to hand out verifications to anyone who ever had a byline with them. The most reliable way to get verified was to sell some words to an online news website. It didn't matter how notable you or the publication were.
When GP says "that’s not true though" I can't even tell which part he is talking about. This is fairly recent and well-documented stuff.
Except journalism was happening on Twitter, so by restricting verification to more or less legacy media (an elite circle of Columbia/NW/Ivy grads that is what, 5% merit-based?) it becomes somewhat of a caste system.
Because the original point of it was to distinguish journalists and public figures, so you could tell imposters apart from real people. Now its purpose is to show who has a premium subscription. Totally different feature using the same name.
But not all "journalists and public figures" received the checkmark. It was entirely up to the ultra-woke Twitter management who gets a checkmark and who doesn't. I frankly don't see why the current, deterministic setup is better than the former non-deterministic one.
Now it's ultra-racist ultra-fascist Twitter management. Do you actually think that's better? Why are you whining about "woke" people who have long since been fired, but embracing the racists and fascists who are now running and being platformed and amplified on Twitter?
Yes, "Twitter". If Elon Musk can publicly abuse, humiliate, lie about, deadname, and misgender his own daughter to his millions of followers, then I can deadname Twitter.
Same as the current labeling/moderation service: any participant can verify any other participant. Which verifiers gets a check to appear is a property of the AppView.
If Bluesky becomes evil, you just configure your AppView not to trust their verifications.
Of course, that's the problem: right now we mostly have one AppView (bsky.app), which is the current SPOF in the mitigation plan against the "Bsky becomes the baddies" scenario.
That'd certainly be a neat feature: national/regional/local governments running their own verifier accounts and providing Bluesky/ATproto verification to their residents.
Can confirm. dang once pinged me directly by email saying that my story was re-upped. The story went to the front page again and the date was adjusted (IIRC), but the comments were kept:
---
Hi denysvitali,
The submission "PostmarketOS-Powered Kubernetes Cluster" that you posted to Hacker News (https://news.ycombinator.com/item?id=42352075) looks good, but hasn't had much attention so far. We put it in the second-chance pool, so it will get a random placement on the front page some time in the next day or so.
This is a way of giving good HN submissions multiple chances at the front page. If you're curious, you can read about it at https://news.ycombinator.com/item?id=26998308 and other links there. And if you don't want these emails, sorry! Let us know and we won't do it again.
1) OSI says that public domain and open source are not the same thing ("Here’s why it’s a mistake to treat the two terms as synonyms"), not that public domain software cannot be open source.
2) It is simply not true that the SQLite distribution terms "contain[] a prohibition on using it for evil". That is not in the text you linked.
The OSI post concludes that "an open source user or developer cannot safely include public domain source code in a project". Has SQLite done something that makes it an exception to this?
I will concede that the exhortation against use for evil in the license text is probably not legally binding.
An advisory blog post warning people not to assume that "public domain" code is actually unencumbered is not the same as saying that actually public domain code is not open source.
> It is an attempt to dedicate a work to the public domain (which, taken alone, would not be approved as an open source license) but it also has wording commonly used for license grants.
It's clear OSI considers this extra wording, above and beyond the public domain declaration, to be what qualifies it as truly unencumbered. SQLite's license does not contain similar language, and has not been similarly qualified by the OSI.
Regardless of their dumb nonsensical rejection of public domain, you did the opposite of what was asked. I was asking for examples of open-source software that is not free in the Free Software sense, and you gave an example of something that is too free for the OSI.
> the LICENSE.md contains a prohibition on using it for evil
No it doesn’t and now I feel like you’re trying to waste my time on purpose. It contains a “blessing” exhorting (i.e., requesting) people not to do evil, not a “prohibition” of any kind.
Both of these sources contradict your somewhat bizarre thesis that SQLite, one of the most widely used pieces of free software in the world, is somehow not open source.
It doesn't fail to meet the OSI's definition of open source. As you elsewhere conceded, the "blessing" in the SQLite source doesn't have legal weight and doesn't violate the OSD. Public domain software has always been considered open source. For instance, the Debian project, famous for their exacting standards for free software, accepts public domain software:
As they mention here, it is theoretically possible that code dedicated to the public domain might still be encumbered in a way that makes it not open source: "we are unaware of a case where a jurisdiction has upheld a copyright claim to a work which has been dedicated to the public domain everywhere".
Trademarks are the legal framework we use to protect phrases like this. In 1999, OSI applied for, and was denied, a trademark on the phrase "Open Source".[1] Perhaps there is a moral argument to be made to this effect, but there is not a legal one.
The entire dictionary is nothing but consensus on terms with no trademarks (and from which government?) anywhere.
OSI merely presents a definition as a service, for others to have something already thought-out to refer to rather than have to write a 10 page definition every time someone wants to refer to the concept.
It is well established by now, and the only people trying to argue about it are simply uneducated or have some deliberate agenda where they somehow benefit from artificially clouding an issue that has already gone through a process of being hashed out and recorded long ago. There is no reason to give them any air.
I'm not trying to make either a moral or a legal argument, only to warn against using the term in a way that is liable to cause confusion or needless antagonism.