I haven't dug into the specs, but how does Solid handle bad actors getting access to your pod?
Today your data is usually fragmented across platforms (which limits the damage), and those platforms have centralized authorities who can step in and fix bad-actor issues.
Honestly, I'm gonna be super lazy and just quote the front page of the site:
> Anyone or anything that accesses data in a Solid Pod uses a unique ID, authenticated by a decentralized extension of OpenID Connect. Solid's access control system uses these IDs to determine whether a person or application has access to a resource in a Pod.
Of course, as a data owner, you could accidentally grant a bad actor access to your data, but presumably you can also revoke that access.
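To make the grant/revoke point concrete, here's a minimal sketch using Inrupt's @inrupt/solid-client library to rewrite a resource's ACL so a previously authorized agent loses all access modes. The resource URL and WebID are made up, and the exact API surface is summarized from memory, so treat the specific calls as an assumption rather than a definitive recipe:

```typescript
import {
  getSolidDatasetWithAcl,
  hasResourceAcl,
  hasAccessibleAcl,
  getResourceAcl,
  setAgentResourceAccess,
  saveAclFor,
} from "@inrupt/solid-client";
// Authenticated fetch for the logged-in Pod owner.
import { fetch } from "@inrupt/solid-client-authn-browser";

// Hypothetical resource and agent, for illustration only.
const resourceUrl = "https://alice.example.org/private/notes.ttl";
const revokedAgentWebId = "https://bad-actor.example/profile/card#me";

async function revokeAllAccess(): Promise<void> {
  // Fetch the resource together with its Access Control List (ACL).
  const resource = await getSolidDatasetWithAcl(resourceUrl, { fetch });

  if (!hasResourceAcl(resource) || !hasAccessibleAcl(resource)) {
    // The resource may be governed by a fallback (inherited) ACL instead;
    // handling that case is omitted from this sketch.
    throw new Error("Resource has no ACL of its own that we can edit.");
  }

  // Turn off every access mode for the agent we no longer trust.
  const updatedAcl = setAgentResourceAccess(getResourceAcl(resource), revokedAgentWebId, {
    read: false,
    append: false,
    write: false,
    control: false,
  });

  // Persist the updated ACL back to the Pod.
  await saveAclFor(resource, updatedAcl, { fetch });
}
```

The catch, as raised below, is that this only helps while the owner still controls their own identity: if an attacker compromises the owner's WebID or identity-provider login, they hold the same ACL-editing capability, so recovery ends up depending on whoever operates the Pod or identity provider.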
But that's just it, though: if a bad actor gains control of your account, you lose the ability to revoke those OAuth credentials (OpenID Connect is built on top of OAuth). Think social engineering or credential phishing, both of which happen at scale today.
They need a way to handle situations when bad actors take over, because other solutions handle this with centralized authorities who step in and rectify the issue.
I'm now confused by what you mean when you say "gain control".
Are you talking about literally exploiting a bug and hacking the underlying service that is providing access to the pod?
In that case, it's a question of who owns and operates the pod. Solid is conceived as a set of standards that can be implemented either by individuals or by companies on behalf of individuals. Think "data ownership as a service".
So you can still have centralized entities that implement the spec and provide support and other services for users, including dealing with security incidents.
If Ethereum ever achieved even a fraction of the adoption that stock trading has, you'd have the same problems and overhead.
As it stands, in an apples-to-apples comparison, any centralized, trusted network is more efficient than an [attempted] trustless, proof-of-work network like Ethereum.
> wuhan lab theory... calling for increased land/property taxes
I feel like these are probably on two different levels, but given that you called it a theory and not a conspiracy, I suspect you'd disagree with that too.
I don't have much of an opinion or much knowledge on the matter, but the main discussion [1] about the Wuhan lab here was pretty civil. It wasn't overly conspiratorial, and there were a good few people in opposition.
You seem to imply it's some crazy far-right conspiracy, but that wasn't the impression I got from skimming the Wikipedia article [2]. There are a good few serious people who consider it possible, and while I'm sure some distorted version of it is weaponized by politicians, I don't see why that's a reason to discard it.
If I'm thinking of the same post the commenter was [1][2], the article wasn't presenting the theory as correct; it was discussing issues with the discourse around the theory.
Edit: I posted the wrong Vanity Fair link; fixed. Also, regarding my terminology: "conspiracy" is a bit of a loaded term, and given the information in the Vanity Fair article I decided to keep my terminology more neutral.
Reddit used to let users block trolls, and then the user would never have to see the troll's posts again, even if the troll kept replying.
The new way, where blocking someone also restricts that person's ability to reply, is aggressive UX.
That article is about a tech-illiterate governor using his position to threaten the press with prosecution... for discussing a 10-year-old bug that's already publicly well known.
If anything, that's more relevant to the discussion of giving Social Security numbers to inept IT departments.
Eh, if only people were never bad actors and only used drones for that.
I was at a soapbox derby a couple of weeks back, and they had to pause the entire event for fifteen minutes because someone was flying a drone down by the course area.
A couple hundred people were waiting on one person to remove their drone. Even after the announcers asked them to remove it, they kept it there.
Nepotism too. Anecdotally, I know someone skilled who became a top creator on one of those early audio-focused platforms, but she ran into a couple of toxic community members who knew the FB PMs and managed to get her, and others they didn't like, removed through moderation.