Not particularly enthusiastic about a website once I see that people can pay fifty bucks to get a "boosted position". Seems like a quick slop cash grab.
If I open it, click on the background to activate the physics and just keep the tab open, pretty much all of the blocks that can collapse do eventually collapse.
I feel like having them as a single brick is a bit hyperbolic, since undersea cables are pretty redundant in most of the world. Get rid of one and traffic just routes around it. Ships have been routinely destroying cables in the Gulf of Finland and the Baltic Sea in the past couple of years without causing significant disruptions.
"most of the world" is doing a seriously large amount of heavy lifting in this sentence.
There are more regions served by a single line than you'd think.
Even "well connected" places have fewer cables than you expect, and the frustrating thing is that you don't know that you can route around an issue until you try.
BGP is really resilient, which is great, but if your backup path isn't actually usable, you'll only realise it when the failover doesn't happen. Until then you'll just assume there's a redundant path.
Only mildly. There aren't huge amounts of dark capacity just sitting around waiting to take over, so if a major fiber connection goes down, the remaining links get congested with the extra traffic. It won't cascade like a power outage, but the remaining lines will slow down.
The whole Internet was designed for precisely this use case. If there is an outage, the distributed system will try to find another path. No actual central point of failure. As you say, the single brick is hyperbolic. But yea, those sharks can certainly be disruptive at times.
Well, that depends on how much traffic that cable was carrying, how much free capacity is available on other cables heading to the same area, how much additional latency the rerouting adds, and how sensitive the rerouted traffic is to latency, doesn't it?
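To make the rerouting trade-off concrete, here's a toy sketch (using the networkx library; the cables, cities, and latencies are all made up): cut one link and path-finding still succeeds, just with worse latency. What it deliberately doesn't model is the congestion point above, i.e. whether the surviving cables have the capacity to carry the rerouted traffic.

```python
import networkx as nx  # third-party: pip install networkx

# Hypothetical cable map: edges weighted by one-way latency in ms.
G = nx.Graph()
G.add_edge("Helsinki", "Tallinn", latency=5)
G.add_edge("Helsinki", "Stockholm", latency=10)
G.add_edge("Stockholm", "Tallinn", latency=12)

def best_path(g, src, dst):
    path = nx.shortest_path(g, src, dst, weight="latency")
    ms = nx.shortest_path_length(g, src, dst, weight="latency")
    return path, ms

print(best_path(G, "Helsinki", "Tallinn"))  # direct cable: 5 ms

# An anchor drags through the direct cable...
G.remove_edge("Helsinki", "Tallinn")
print(best_path(G, "Helsinki", "Tallinn"))  # reroute via Stockholm: 22 ms
```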
I've tried to use a local LLM on an M4 Pro machine and it's quite painful. Not surprised that people into LLMs would pay for tokens instead of trying to force their poor MacBooks to do it.
Local LLM inference is all about memory bandwidth, and an M4 Pro only has about the same bandwidth as a Strix Halo or a DGX Spark. That's why the older Ultras are popular with the local LLM crowd.
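Back-of-envelope for why bandwidth dominates: at batch size 1, every generated token has to stream the model's weights through memory once, so bandwidth divided by model size gives a rough ceiling on decode speed. The figures below are ballpark spec-sheet numbers, not measurements:

```python
# Rough ceiling on batch-1 decode speed:
#   tokens/sec ≈ memory bandwidth / bytes of weights read per token.
def max_tokens_per_sec(bandwidth_gb_s: float, model_gb: float) -> float:
    return bandwidth_gb_s / model_gb

configs = {
    "M4 Pro (~273 GB/s)":     273,
    "Strix Halo (~256 GB/s)": 256,
    "M2 Ultra (~800 GB/s)":   800,
}
model_gb = 40  # e.g. a 70B-class model quantized to ~4-5 bits per weight

for name, bw in configs.items():
    print(f"{name}: ~{max_tokens_per_sec(bw, model_gb):.0f} tok/s ceiling")
```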
This would be an absolute game changer for me. I am dictating this text right now with a local model, and I think this is the way to go. I want to have everything locally. I'm not opposed to AI or LLMs in general, but sending everything over the pond is a no-go for me. Even if the service were European, I still wouldn't want to send everything to some data center. So I think it would be a good development, and I would even buy an Apple device for the first time since the iPod just for that.
And while it is stupid slow, you can run models off hard drive or swap space. You wouldn't do it normally, but it can be done to check an answer from one model against another.
Try a piece of software called TG Pro; it lets you override fan settings. Apple likes to let your Mac burn in an inferno before the fans kick in. It gives me more consistent throughput. I have less RAM than you and can run some smaller models just fine, with reasonable performance. GPT20b was one.
What models are you using? I've found that SOTA Claudes outperform even gpt-5.2 so hard on this that it's cheaper to just use Sonnet: the number of output tokens needed to solve a problem is so much lower that the TCO is lower. I'm in SF, where home power is 54¢/kWh.
Sonnet is so fast too. GPT-5.2 needs its reasoning turned up to make tool calling reliable, and Qwen3 Coder Next wasn't close. I haven't tried Qwen3.5-A3B. Hearing rave reviews though.
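For what it's worth, the TCO point is just arithmetic: a pricier per-token model wins if it solves the problem in few enough output tokens. The prices and token counts in this sketch are placeholders, not real quotes or benchmark results:

```python
# Cost to solve one task ≈ output tokens used × price per output token.
# All numbers below are hypothetical, just to show the shape of the trade-off.
def task_cost(out_tokens: int, usd_per_mtok: float) -> float:
    return out_tokens / 1_000_000 * usd_per_mtok

cheap_but_verbose = task_cost(out_tokens=60_000, usd_per_mtok=10.0)  # $0.60
pricey_but_terse  = task_cost(out_tokens=15_000, usd_per_mtok=15.0)  # $0.225

print(cheap_but_verbose > pricey_but_terse)  # True: fewer tokens wins on TCO
```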
If you’re using successfully some model knowing that alone is very helpful to me.
We didn't do multi-region deployments, but we did store database backups in a separate region just in case something really bad happened and our AWS region became unavailable. Also had a plan/some ready Terraform stuff in order to start setting up a deployment if it became apparent that the region wasn't coming back anytime soon.
IMO, if you're using AWS and not replicating your data somewhere else, this should be an eye-opener for you.
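Even a dumb nightly cross-region copy is better than nothing. A minimal sketch with boto3; the bucket names, key, and regions are hypothetical:

```python
import boto3  # AWS SDK for Python

# Copy a backup object from the primary region's bucket into a bucket
# that lives in a different region. Names and regions are made up.
src = {"Bucket": "myapp-backups-eu-north-1", "Key": "db/2024-01-15.dump"}

dst_s3 = boto3.client("s3", region_name="eu-central-1")
dst_s3.copy(src, "myapp-backups-eu-central-1", src["Key"])
```

S3 also has built-in cross-region replication if you'd rather configure it once than script it.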
Not sure why everyone read this as me doing anything here, I'm a fractional CTO, which is kind of an advisor. Nothing invaluable will be lost tho. It's not the core platform, just a localized version for specific customers in the region.
Pretty much the same in Finland. You are allowed to film/photograph as much as you want in a public place, but publishing the material might be against the law depending on the contents. Particularly the law regarding "dissemination of information that violates privacy". It's fine to publish a photo of people walking on the street, but you'll probably get into trouble for uploading an arrest to YouTube where the suspect is recognizable.
07:40 still sounds pretty early compared to 66°N, where we could expect civil twilight after 09:00 in December. You'd go to school at 08:00 in the dark and come home at 15:00, also in the dark.
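You can sanity-check that with the standard sunrise equation, solving for the sun at -6° altitude (civil twilight) on the December solstice. The result is local solar time; Finnish clock time shifts later with longitude, which is how you end up past 09:00:

```python
import math

# Sunrise equation with the sun at -6° altitude (civil twilight) on the
# December solstice (declination ≈ -23.44°), at latitude 66°N.
lat, dec, alt = map(math.radians, (66.0, -23.44, -6.0))

cos_h = (math.sin(alt) - math.sin(lat) * math.sin(dec)) / (
    math.cos(lat) * math.cos(dec)
)
hours_before_noon = math.degrees(math.acos(cos_h)) / 15

print(f"civil dawn ≈ {12 - hours_before_noon:.2f} h solar time")  # ~8.93
```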