Uber's use of Go doesn't read as particularly idiomatic to me, and I started writing Go more than a decade ago now. It just strikes me as overwrought, and I've worked on big services.
UEFI itself is way too complex, has way too much attack surface (I'm surprised this didn't abuse some poorly written SMI handler), and provides too little value to exist. Secure Boot then goes on to treat that layer as a root of trust, which is a security architecture mistake, though it works ok in this case. This all could be a lot better.
Electrolytics are usually nothing too fancy, though the exact formulation is proprietary. Water and electrolytes, hence the name. PCBs are in the big transformers and in what used to be called bathtub caps, which looked like this: https://i.ebayimg.com/images/g/VjwAAOSwfGJjYtHx/s-l400.jpg (think 1950s electronics stuff)
I don't see how enabling Secure Boot helps here, since UEFI is responsible for enforcing it and is itself compromised. I'm sure some would recommend more roots of trust, with signing and verification chains that start at the chipset, but I'd recommend an alternative with less attack surface and better user control: a jumper.
The article specifically says this is self-signed, so it won't work with Secure Boot enabled.
This is technically a bootloader, so it has to find a way to get loaded by UEFI. The article doesn't say it's able to do that on its own; the user has to manually trust the signing certificate or disable Secure Boot.
It's heartwarming to see that the spirit behind Azureus is still alive. SWT might not be what the Duke himself wants in a Java GUI framework, but it's practical, and I remember the "chunks bar" in the Azureus GUI fondly. I'll enjoy firing up BiglyBT after all these years. Using a largely memory safe language makes a lot of sense for P2P software.
Potentially worth pointing out that Go is only memory safe in the absence of data races (racing goroutines can corrupt memory), and this kind of application is very likely to use multiple threads. A sketch of what that corruption can look like is below.
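To make "races can corrupt memory" concrete, here's a toy sketch of my own (nothing from the article): a Go slice header is three words (pointer, length, capacity), and unsynchronized writes can be observed torn, so a reader can pair one slice's length with another slice's pointer and index past the allocation.

    package main

    // Toy demo of race-induced memory unsafety: the writer goroutine
    // flips s between a 1-byte and a 1 MiB slice with no
    // synchronization, so the reader can observe a torn header --
    // the long slice's length paired with the short slice's pointer
    // -- and the bounds check then lets it read far past the 1-byte
    // allocation.
    func main() {
        short := make([]byte, 1)
        long := make([]byte, 1<<20)

        s := short
        go func() {
            for {
                s = long  // racy write
                s = short // racy write
            }
        }()
        for {
            if len(s) == 1<<20 { // length possibly loaded from long...
                _ = s[1<<20-1] // ...pointer possibly loaded from short
            }
        }
    }

Run it under `go run -race` and the detector flags it immediately, which is a big part of why so little of this badness survives in practice.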
But I do also generally expect it to be safer than C++. The race detector prevents a lot of badness quite effectively because it's so widely used.
Go is safe from the perspective of RCEs via buffer overflow, which is what matters here. Happy to be enlightened otherwise, but "I broke your (poorly implemented, non-idiomatic, please use locks or channels ffs) state machine" is a lot better than "I am the return instruction pointer now".
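For contrast, a quick sketch (mine, assuming a typical server that recovers from panics): the same attacker-controlled offset that would smash a saved return address in C just trips Go's runtime bounds check.

    package main

    import "fmt"

    // poke writes to an attacker-influenced offset. In C this could
    // overwrite a saved return address; in Go every slice index is
    // bounds-checked at runtime, so it panics instead.
    func poke(buf []byte, i int) {
        buf[i] = 0xff
    }

    func main() {
        defer func() {
            if r := recover(); r != nil {
                fmt.Println("recovered from:", r) // a log line, not an RCE
            }
        }()
        poke(make([]byte, 8), 1337) // out of range: panic, not hijacked control flow
    }

(The caveat from upthread still applies: this guarantee assumes no data races.)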
Voting for new legislators, personally. I wish they'd do something about PG&E or housing instead of criminalizing software development of chatbots. Truly useless, and I wish we had more choice of non-insane candidates.
This was also my reaction almost immediately. Tattoos correlate heavily with social and lifestyle factors, which could easily mean the difference between correlation and causation here.
I find it hard to believe that a Canadian company's model contained an undertrained token related to hockey (albeit in German). In all seriousness, this is pretty cool, and I'm excited to see understanding of tokenization's impact on models improve. One notable finding is that a lot of the earlier open source models have issues with carriage returns, which get introduced fairly often depending on where the data comes from.
It wastes taxpayer funds on enforcing a moat for Sam Altman, it writes a fixed computational bound into legal regulation, it tries to police a free speech activity because of possible harms (rather than the harms directly), and it is likely to have negative national security implications as other (less regulated) regions, with fewer lawyers to deal with, advance the state of the art.
Nice concise summary. The fundamental problem with all of these proposed "AI" "safety" regulations is that they adopt the corporate version of safety where LLMs refuse to talk about things that sound scary, mean, or even just controversial, while completely ignoring that these systems will be used to harm people at scale by turning gradually creeping corporate-individual power imbalances up to 11.
This exactly. I would be much happier if the regulation was "don't use GPT-4 to decide when to kick Grandma out of the hospital" or "don't use a Llama finetune to make policing decisions", which is where I see the most pressing need for regulation in the near future.
I don't think this is a hot take at all; it matches my understanding. One of the reasons language itself is so difficult (miscommunication, etc.) is that we each have a mostly similar but not identical "compression table" of the ideas words map to, which is why we spend so much time aligning on terms, to ensure correct "decompression".
We need compression because cognition internally has a very "wide" set of inputs/outputs (basically fully meshed), but we only have a serial audio link with relatively narrow bandwidth, so the whole thing that allows us to even discuss this is basically a neat evolutionary hack.