I have two iPhones and a MBP. I have to keep Bluetooth disabled on the MacBook; otherwise, when I'm between podcasts or whatever and squeeze the AirPods to resume, it randomly launches Apple Music instead, or some browser tab starts playing audio.
This is far from solved if you have more than one Apple device.
There is no option for me to say: never use AirPods for anything but podcasts, and absolutely never automatically select them as an audio source for Zoom/Teams. AirPods microphones just don't work for my vocal range; they sound horrible and underwater. The microphone on my MBP works great, and the mics on my iPhones work great.
AirPods are fine if you only ever use one device at a time. If you use more than one at the same time, it becomes extremely annoying.
Let's not even get into the annoying ways it becomes hard to manage when you have multiple AirPods, multiple iPhones, and multiple MacBooks.
ubnt has been the ubiquiti default login at least back to 2010 when I started using their products, before UniFi was a brand. I always assumed it was short for Ubiquiti Networks.
A lot of companies in that space did then. I was at a robotics company at the time and we experimented with mikrotik routerboards + the various long-range Ubiquiti wifi modules, some of which are even still listed on the website: https://techspecs.ui.com/uisp/accessory-tech/xr (though not the 900 MHz XR9, which was arguably one of the most interesting for long range comms)
I had a spaghetti bash script that I wrote a few years ago to handle TLS certificates (generate CSRs, bundle up trust chains, match keys to certs, etc.). It was fragile, slow, extremely dependent on specific versions of OpenSSL, etc.
I used Claude to rewrite it in golang and extend its features. Now I have tests, automatic AIA chain walking, support for all the DER and JKS formats, and it’s fast. My bash script could spend a few minutes churning through a folder with certs and keys, my golang version does a few thousand in a second.
So I basically built a limited version of OpenSSL with better ergonomics and a lot of magic under the hood because you don’t have to specify input formats at all. I wasn’t constrained by things like backwards compatibility and interface stability, which let me make something much nicer to use.
I was even able to build a wasm version so it can run in the browser. All this from someone who is not a great coder. Don't worry, I'm explicitly not rolling my own crypto.
It seems to me like it would be a rather useful exercise to have the smaller model make the routing decision, and below certain confidence thresholds, send it to a larger model anyway. Then have the larger model evaluate that choice and perhaps refine the instructions.
That's a cleaner implementation than what I described. Small model as meta-router: classify locally, escalate only when confidence is low. The self-evaluation loop you're suggesting would add a quality layer without much overhead — the large model's judgment of its own routing is itself a useful signal. Haven't shipped that yet but it's on the list.
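The escalation rule is simple enough to sketch. This is a toy illustration, not anyone's shipped system: the `classify` function here is a stand-in for the small model's routing head, and the threshold value is arbitrary.

```python
from dataclasses import dataclass

@dataclass
class Route:
    target: str        # "small" or "large"
    confidence: float  # classifier's self-reported confidence, 0..1

# Below this, the cheap classifier's pick is not trusted (value is arbitrary).
CONFIDENCE_THRESHOLD = 0.8

def classify(prompt: str) -> Route:
    # Stand-in for the small model's routing decision: trivially route
    # short prompts locally and long ones upward, with toy confidences.
    if len(prompt) < 200:
        return Route("small", 0.9)
    return Route("large", 0.6)

def route(prompt: str) -> str:
    decision = classify(prompt)
    # Low confidence means escalate to the large model regardless of
    # what the classifier picked.
    if decision.confidence < CONFIDENCE_THRESHOLD:
        return "large"
    return decision.target
```

The self-evaluation loop would then log the large model's verdict on each escalated request and feed it back as training signal for `classify`.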
I really would like to see a cost and cooling breakdown. I just can't see how you can do radiative cooling on the scales required, not to mention hardening.
I thought this was a troll by Elon; now I'm leaning towards not. I don't see how whatever you build wouldn't be dramatically faster and cheaper to do on land, even 100% grid-independent with solar and batteries. Even if the launch cost were just fuel, everything else that goes into putting data centers in space dwarfs the cost of 4x solar plus batteries.
I'm surprised we haven't seen more investment into RISC-V from Chinese firms. I would think they want to decouple from ARM and the West in general as a dependency. Maybe they view the coup of ARM China as having secured ARM for the time being, so there's not as much pressure?
Either way, it's currently hard to be excited about RISC-V ITX boards with performance below that of a RPi 5. I can go on AliExpress right now and buy a mini-ITX board with a Ryzen 9 7845HX for the same price.
I'm surprised we haven't seen more investment from Indian firms. India is really trying to raise their game in the tech economy beyond the services industry. You don't need the most cutting-edge chip fabrication equipment to manufacture these processors.
The Indian government was actually a very early adopter; they had one of the first RISC-V processors that wasn't based on the vanilla Rocket designs, starting back in like 2016 or so. Unfortunately they made some other weird decisions, like not supporting compressed instructions, so the chip wasn't compatible with any mainstream Linux distros, and I don't think the project really went anywhere. Although, looking at Wikipedia, the project seems to still exist.
> I would think they want to decouple from ARM and the west in general as a dependency.
Why would you think that? ARM is not like x86 CPUs where you get the completed devices as a black box. Chinese silicon customers have access to the full design. I guess it's not completely impossible to hide backdoors at that level but it'd be extremely hard and would be a huge reputational risk if they were found.
They also can't really be locked out of ARM since if push comes to shove, Chinese silicon makers would just keep making chips without a license.
I did catch one vendor using a HAL across a whole SoC product line: a very low-level HAL that sat between the SoC's hardware registers and the kernel drivers. It effectively made the drivers use scrambled register locations on the AHB etc., but if you resolved what the HAL did, the registers matched ARM's UART and other IP. So I figured they were dodging license fees for ARM peripherals.
LoongArch is a weird mix of MIPS and RISC-V. There's not much that would be gained by investing a whole bunch into LoongArch that couldn't also be achieved with RISC-V.
Thing about RISC-V is that there are technically no barriers to doing as you see fit, but there are very clearly defined lines where community support isn't guaranteed, like custom extensions or messing with already-ratified extensions.
We actually don't know a lot about the UR-DP1000 chip, while we do know quite a bit about the 3A6000 thanks to Chips and Cheese. That makes a thorough analysis of the crimes committed within the UR chip's core architecture more speculation than coherent discussion.
But we do know:
1. The UR-DP1000 does not have any vector instructions
2. The UR is only a 4-wide OOO design, while the 3A6K is 6-wide OOO
3. The UR's cache architecture is more complicated: 4 cores share 4 MB of L3 per cluster (2 clusters total), plus a 16 MB global L4, which the 3A6K doesn't have
4. The UR chip doesn't have SMT
Everybody sane will want to move away from them; there is nothing Chinese-specific about it.
The most performant RISC-V implementations are from the US, if I'm not mistaken.
Wonder if that hardware can handle an AMD 9070 XT (resizable BAR). If so, we'd need the Steam client recompiled for RISC-V, and some games... if this RISC-V implementation is performant enough (I wish we had trashed ELF before...)
Is there an actual U.S. RISC-V CPU that achieves competitive performance? I think the performance leaders are currently based in China.
There's a difference between announcement, offering IP for licensing (so you still have to make your own CPUs), shipping CPUs, and having those CPUs in systems that can actually boot something.
For instance, SiFive is in the US, but last time I checked, their RVA23 CPUs on their workstation boards did not have cache-line-sized vector registers (only 128 bits, aka SSE grade, I think). RVA23 mandates the same "sweet spot" cache line size as x86_64: 64 bytes/512 bits.
Yep... but large, performant RISC-V implementations won't get access to the latest silicon processes for a good while... better to push for native support to help.
It's not a better architecture, so the gain is a few pennies more per chip at the cost of A LOT of work... work whose result can't just run Android or much else out of the box.
Isn’t that kind of like saying automated testing (for apps written without testing in mind) isn’t worth it because you have to spend time getting code into a state that is testable?
I do agree that it takes a lot of work to get something usable, and so I think we are a ways off from mainstream RISC-V. I also think there is a lot more value for low-power devices like embedded/IoT, or instances where you need special hardware. Facebook uses it to make special video-transcoding hardware.
Getting the chip stable is one thing. Getting all the various software applications one uses tested against their chip is another, and a lot of hardware companies won't have experience there. Then there is customer code…
One of my consulting customers has been half India, half not for a decade. There has been a real push over the last year to wind down the non-India half and shift to mostly India.
India-based folks cost 50-75% less. I realize that quality India hires would be closer to US rates, but management is ignoring that aspect.
If they're lucky they'll find one solid worker who's going to watch everyone else's hands. I've had one criminally underpaid unofficial[0] tech lead like that. He was herding a team of 11, where like four people at most really cared about the outcome of this project.
[0] Because otherwise a raise would be in order. Can't have that.
I use ACME with Google Public CA for this reason. No one bats an eye at GPCA. Also, their limits are dramatically higher than LE.
Good news for your manual-renewal friends: maximum lifetimes drop to 197 days in February, halving again the year after, and halving again until they reach 47 days. So they will soon adopt automation, or suffer endless renewal pain.
Yeah, couldn't install gps, then realized the package manager only had maybe 10% of what most GL.iNet routers have because of the 'special' chip in this one.
Still a great travel router, but I had to buy a Beryl AX for what I wanted to do with the USB GPS.