Copy-pasting the JSON of the request into ChatGPT and then just sending prompts to it normally does appear to make it enter the same role-play mode this site appears to have been in
Build from source AND run an AI agent that reviews every single line of code you compile (while hoping that any potential exploit doesn't also fool / exploit your AI agent)
Wouldn't be surprised if the slower SSH auth was deliberate - that makes it fairly easy to index all open SSH servers on the internet, then watch which ones get slower to fail pre-auth as they install the backdoor
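A rough sketch of that probe - the host name and SSH options here are placeholders, and this is just the general idea, not a tested scanner:

```shell
# Time how long a bogus pre-auth attempt takes to be rejected;
# re-run periodically and watch for hosts that suddenly get slower.
HOST=ssh.example.com   # placeholder target
start=$(date +%s%N)
ssh -o BatchMode=yes -o ConnectTimeout=5 -o StrictHostKeyChecking=no \
    "probe@$HOST" true 2>/dev/null
end=$(date +%s%N)
echo "pre-auth round trip: $(( (end - start) / 1000000 )) ms"
```

Run against a whole index of hosts, a jump in that number between two scans would flag candidates.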
The service seems to be under massive load from the influx of users over the weekend. They've passed 200k users, and posting activity is a few times higher than normal
So they probably could get even more users, but I'm guessing it's already sufficiently crazy for the team right now
They (Bluesky) blew it. They had one job - to be ready for this moment - and what did they do? Stuck with the invite system (a relic of the Gmail era) and then disabled account creation...
If they opened the floodgates with a not-ready product and also fell over, that's a much worse look.
Besides, 'limited' account creation to form communities is a pretty well-established scaling strategy. They have scaled from 0 to 200k users in 5 months - Facebook originally took a year to reach 1m users, so it's not unrealistic that Bluesky is growing/scaling at roughly the same pace at this stage of the curve. Worth noting that Facebook also couldn't have handled scaling to 1 billion users in year one; that required years of engineering effort (and lots of money/resources).
Considering they can only support a limited number of users at their current stage, invites that form close communities are a strategy to ensure a social graph is created early. Having 20k users who each have a few friends with accounts is more important than having 20k users where nobody knows each other.
Remember Facebook also originally scaled campus by campus - a process that let it 'scale' in a way that encouraged social graph creation (i.e. 10k signups at 1 university is probably WAY better for a social network than 10k signups spread across 100 universities).
They will have a different set of scaling problems from what Facebook had in 2005 - their decentralised architecture will throw up new challenges that will need time to be worked through.
Has anyone else created a decentralised social network at this scale? Arguably Matrix, but they also grew more slowly, arguably have a simpler product, and had scaling challenges along the way.
Remember scaling doesn’t just mean technical for a social network - it also means support and moderation.
Would've been easier to capture all of the Twitter refugees now. I already saw a bunch of people opt for Mastodon or Misskey because Bluesky just wasn't an option.
You can download your data and keep a backup - it's all content-addressed (like git repos) - and just upload it to another instance
Like moving from GitHub to GitLab.
Each instance then has its own moderation policies, again, kind of like GitHub. But your identity is still your identity, and you can keep a copy of your data.
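The git analogy can be made concrete. Content addressing just means a piece of data's identifier is a hash of the data itself - the hash function and the toy "post" here are illustrative, not AT Protocol's actual record format:

```shell
# Hash the content; the digest doubles as the post's identifier.
printf 'hello world' | sha256sum
# Anyone holding a copy can recompute the hash and verify it matches,
# which is what makes "download a backup, re-upload elsewhere" safe:
# the new instance (and everyone else) can prove the data is unmodified.
```

This is the same trick git uses internally: object names are hashes of object contents, so a clone is as authoritative as the original.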
> You can download and keep your data and keep a backup
> and you can keep a copy of your data
Is this really a selling point / concern for anyone? I've never heard anyone express that the problem with traditional social media is that they can't download and keep a backup of their data. It's about a central corporation being able to decide what is allowed to be said.
That specifically is not the selling point, but it is how one of the selling points works.
You can just take your data to another instance whenever you don’t agree with the policies of your current one. And all your connections/interactions/data should stay intact.
If it works as well as it seems to in the federation sandbox, you shouldn't even be able to tell that you're using a different service: the app just sends requests to a different server, the web URL may be different, and your default feeds are generated somewhere else.
Now, you may say that users won’t care about backing up their data, but that can be solved with some open (or paid) archival services.
I have this controller, and I was able to mostly work around the dying issue by having a cron script ping a network device every minute and restart the link when that fails - `ip link set enp70s0 down; sleep 6; ip link set enp70s0 up`.
But that's acceptable only because that machine has a workload which can tolerate not having network access for a few minutes per day or so.
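The one-minute check described above can be sketched roughly like this - the interface name is from the comment, but the gateway IP and script path are placeholders you'd adjust for your setup:

```shell
#!/bin/sh
# Link watchdog sketch. Assumptions: TARGET is any reliably-up device on
# the local network (placeholder IP), and the script runs as root from a
# crontab entry like:  * * * * * /usr/local/bin/net-watchdog.sh
IFACE=enp70s0
TARGET=192.168.1.1

check_and_bounce() {
    if ping -c 1 -W 5 "$2" > /dev/null 2>&1; then
        echo "link ok"
    else
        # Ping failed: bounce the interface, as in the comment above.
        ip link set "$1" down
        sleep 6
        ip link set "$1" up
        echo "link bounced"
    fi
}

check_and_bounce "$IFACE" "$TARGET"
```

Worst case the link stays down for one cron interval plus the bounce delay, which matches the "a few minutes per day" tolerance mentioned.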
Kind of both? If I were to come up with a formula, I'd say 'prices = GDP / SupplyOfGoods', where GDP is just 'MoneySupply * MoneyVelocity'.
If supply goes down, prices go up. If money supply grows, prices go up. If money velocity (number of times money changes hands in a given period) goes down, prices go down, etc.
(It's worth noting that in 2020, when the money supply exploded, money velocity fell by a lot - which is why GDP fell and why there wasn't that much inflation)