How far away is this from the top speedruns? The readme mentions OpenAI beating Dota 2 pros with the same models, so who knows??
It seems like an AI would not "play by the rules" and can easily detect weird edge cases and shortcuts. But as far as I can tell from the small GIFs, it's not doing any particular glitches.
It can’t complete all the levels and actually does worse on the first level than a normal human (for example falling down a hole and having to get out again).
I'm really looking forward to getting one of these things once they come with a decent warranty and without dead pixels. I'm as eager as anyone to get away from the walled-garden duopoly, but since it's not suitable as a daily driver yet, maybe I'll be able to restrain myself and not buy it.
Except that one of Linus' most vocal objections to C++ was about operator overloading, and how basic, seemingly native things like + can actually do a lot of hidden stuff unknown to the programmer. He must have softened on this, since Rust offers the same facilities for operator overloading.
Operator overloading in Rust is less flexible than in C++, so there are fewer possibilities to mess it up.
Overloading is done indirectly via specific named traits which enforce method signatures, invariants, immutability rules. Assignments and moves can't be overloaded. Borrow checker limits what indexing and dereferencing overloads can do (hardly anything beyond pointer arithmetic on immutable data). Comparison operators can't be overloaded separately and can't return arbitrary values (like C++ spaceship operator). Generic code has to explicitly list what overloads it accepts (like C++ concepts), so even if you create a bizarre overload, it won't be used in generic code.
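To make that concrete, here's a minimal sketch (the `Vec2` type is made up for illustration): overloading `+` means implementing the `std::ops::Add` trait, which pins down the method name and signature, and generic code only picks up the overload if it explicitly asks for the bound.

```rust
use std::ops::Add;

#[derive(Debug, Clone, Copy, PartialEq)]
struct Vec2 {
    x: f64,
    y: f64,
}

// `+` is overloaded by implementing the `Add` trait; the trait fixes
// the method name (`add`) and its signature.
impl Add for Vec2 {
    type Output = Vec2;
    fn add(self, rhs: Vec2) -> Vec2 {
        Vec2 { x: self.x + rhs.x, y: self.y + rhs.y }
    }
}

// Generic code must list the overloads it accepts via trait bounds;
// without `T: Add<Output = T>` the `a + b` below would not compile.
fn sum<T: Add<Output = T>>(a: T, b: T) -> T {
    a + b
}

fn main() {
    let v = sum(Vec2 { x: 1.0, y: 2.0 }, Vec2 { x: 3.0, y: 4.0 });
    assert_eq!(v, Vec2 { x: 4.0, y: 6.0 });
    assert_eq!(sum(1, 2), 3); // the same generic works for built-in ints
    println!("{:?}", v);
}
```

So even a bizarre `Add` impl stays invisible to generic code that didn't opt in with the bound.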
Linus’s point was also that C++ makes extensive use of function/method name overloading (a form of ad-hoc polymorphism), and this can make code hard to read. Not only is this not possible in C, but the feature tends to be overused in C++.
Rust generally uses function/method polymorphism based on traits (aka type classes), which is far less prone to overuse and abuse than overloading in C++.
There are two ways of doing it. The way most people use most of the time does not: it's statically dispatched. The other way is dynamically dispatched, though there are some differences from the way C++ virtual works. They should be roughly the same in that case, though.
It depends; you have access to both capabilities. By default, if you're only dealing with a type that impls a trait, calls use static dispatch. If you pass that same object around as a trait object, calls use dynamic dispatch. You can also impl traits on other traits; those are always used as trait objects (via a vtable) but have some restrictions on what they can look like. And if you rely on the Deref trait to make it look like you have inheritance (when in reality it's composition), it behaves like the latter, using dynamic dispatch.
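A small sketch of the two dispatch styles (the `Shape` trait here is a made-up example): the generic function is monomorphized and statically dispatched, while the `&dyn` version goes through a vtable at runtime.

```rust
trait Shape {
    fn area(&self) -> f64;
}

struct Square(f64);
struct Circle(f64);

impl Shape for Square {
    fn area(&self) -> f64 { self.0 * self.0 }
}
impl Shape for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.0 * self.0 }
}

// Static dispatch: the compiler generates one copy per concrete type,
// and the call to `area` is resolved at compile time.
fn area_static<S: Shape>(s: &S) -> f64 {
    s.area()
}

// Dynamic dispatch: `&dyn Shape` is a trait object (data pointer plus
// vtable pointer), and the call goes through the vtable at runtime.
fn area_dyn(s: &dyn Shape) -> f64 {
    s.area()
}

fn main() {
    let sq = Square(3.0);
    assert_eq!(area_static(&sq), 9.0);
    assert_eq!(area_dyn(&sq), 9.0);

    // Heterogeneous collections are one place trait objects are required.
    let shapes: Vec<Box<dyn Shape>> =
        vec![Box::new(Square(2.0)), Box::new(Circle(1.0))];
    let total: f64 = shapes.iter().map(|s| area_dyn(s.as_ref())).sum();
    println!("total area = {}", total);
}
```

Same trait, same call syntax; only how you pass the value decides which dispatch you get.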
I agree with this critique, but a good IDE helps, as does lint rules preventing egregious misuses (or even outlawing operator overloading entirely). As one example, many Java/C# linters prevent the use of non-curly-braced single line control statements entirely. The language allows them, but many codebases do not.
Could you give me a pointer to a discussion of type soundness in Rust?
I recently watched a perhaps 3-year-old video about Rust where Simon Peyton Jones asked about this, and Niko Matsakis answered that it was ongoing work at the time.
I think that was after others took over the UI development. The back end of that program was also still C, as far as I remember from their presentation, and the move was mostly motivated by the GTK community, the documentation, and different priorities on cross-platform support.
Did he have an "attitude" about C++ in general? I thought he only commented on it with respect to operating system development. He did make much more general statements about Java, though.
I think I watched an interview with him where he talked about the Subsurface project. In that interview he said that C++ is not all that bad, but he still prefers C because he's so used to it and will continue writing C code forever. I don't remember the title of the video, though.
Perhaps the possibility of Rust improving kernel security/robustness makes the idea of Rust integration seem like it carries its own weight, whereas C++ has more downsides (perceived or real) and fewer upsides.
Rust's history/origins - loosely, being designed to allow replacing Mozilla's C/C++ with safer Rust that performs well - feel like a good fit for kernel drivers even if the core kernel bits will always be C.
Yes, very much so. And not only in the context of Rust: the insight is to fail fast, integrate early, and do the work in the open, instead of doing hidden work that fails after a long time, once it's finally revealed.
How do these models address overfitting? I'm sure this is taken good care of, but from just reading the article it almost sounds like some details were added to make it fit recent changes better.
You don't really optimise parameters for large scale climate models, you put in your best guess at the initial values and then wait for weeks to months depending on your goal. With function evaluations that costly, I don't really see how you have time to overfit.
The big worry is bias, a lot of assumptions about how physics can be approximated are needed to make these models, but there isn't time to test every combination of them at the full scale. So you do small scale tests and hope nothing unexpected happens in the full run.
For scale, a performance baseline paper from 2018 reports achieving 0.23 simulated years per wall-clock day using 4900 GPU cores [1].
Open source semiconductors is such a broad area, and sadly only a tiny speck of it is open source: the HDL code that describes the digital circuit.
Everything else is very experimental at best. These days the FOSS FPGA tools are finally getting some traction with Yosys and nextpnr. But AsicOne tried to make an ASIC with open source tools and faced endless troubles.
For ASIC there is basically QFlow, which is quite old but has been used successfully to tape out a chip in the past, and there is OpenROAD, which is very new, experimental, and ambitious. There are still major gaps in these tools, so in the end you inevitably have to sign an NDA and use proprietary tools and libraries.
And that's just talking about DIGITAL semiconductors where you compile HDL to pretty much generate the transistors from foundry cell libraries. So you have to sign an NDA to get the cell library, but you can at least release your code.
For analog chips, you can't do anything. An analog design highly depends on the parameters of the transistors you use, so before you even BEGIN designing, you have to sign an NDA to get the transistor models and you can NEVER open source an analog design.
The small dot of light at the end of the tunnel is projects like Minimal Fab, which make more accessible fabrication lines with open transistor models.
The crazy thing is that back in the days there were lambda rules, which were open rules anyone could use to design and model with. But with sub-micron devices, these scalable rules no longer scale, so fabs started producing secret models for their specific process.
I'm hopeful that after FPGA, and digital ASIC, analog will be next to be revolutionized.
> And that's just talking about DIGITAL semiconductors where you compile HDL to pretty much generate the transistors from foundry cell libraries. So you have to sign an NDA to get the cell library, but you can at least release your code.
Yes you can release your code, but you can't release your netlist or GDS-II. There's no guarantee that somebody else will be able to take the same HDL and close timing, even with the same foundry libraries (say if they are using a different tool, or different options). You'll also need things like clock-gating cells, memories, IOs (at a minimum) and those are foundry specific, so those would need to be abstracted out in some way.
> For analog chips, you can't do anything. An analog design highly depends on the parameters of the transistors you use, so before you even BEGIN designing, you have to sign an NDA to get the transistor models and you can NEVER open source an analog design.
Now this is where I disagree. Sure you can't open source your analog GDS-II, but maybe that's not the way to go. In my opinion what you want to do is build a foundry independent PDK for a generic 28nm, 40nm or whatever node using PTM models. A well designed analog circuit needs to be relatively independent of specifics, otherwise it's not going to work across all corners (this is more true for modern nodes than the kind of nodes the old textbooks talk about) and it'll be difficult to port to another process. So there's a good chance that analog circuits built for 'generic 28nm' or 'generic 40nm' could be ported to any foundries process (of course the PDK needs to be well designed). Yes you won't be able to push things to the limit as the DRC will be wider, but analog rarely needs to go to the limit. You could probably take the same approach for digital, but that's a lot more open source stuff to build.
Check out OpenRAM and FreePDK45 for academic projects taking this approach. Unfortunately FreePDK45 is only available to those with an academic email (despite being called 'open source'), which makes me very sad.
Yeah, I think this is an interesting approach. But the FreePDK45 slides mention that it's not designed for manufacturing, and there does not appear to be an easy upgrade path. So you could design a chip with FreePDK and release the design, but you'd then have to redo the whole thing in a vendor PDK.
I talked to someone who worked on AsicOne, and he said that even if you make your own PDK and draw your own transistors and everything, you'll still have to sign an NDA to do the sign-off and whatnot. I'm not intimately familiar with the whole process myself, but from what I understand it is basically impossible to have an open source analog design that you can actually manufacture. (Sure, you can make a theoretical toy thing, but if you can't manufacture it, who cares?)
I'm quite familiar with the process and I believe it's entirely possible.
You will need to run foundry DRC decks, but the company you're taping out through will do this for you (I presume that you're not big enough to deal with TSMC directly). This is because a design that fails DRC could actually break other people's chips if you're sharing a wafer.
Of course, if you really want to know that it'll work, you need to also run foundry LVS and simulate corners with foundry SPICE models and foundry PEX. But if you're gutsy you could skip this, if you believe you've put enough margin into your PDK corners.
Certainly there is zero need to redraw your transistors. Transistors are transistors: a few layers (od, poly, contact, implant, ...), there's no magic, no secret sauce. The foundry wants a GDS-II full of overlapping rectangles (of some minimum size), nothing more.
I find it difficult to evaluate Android ROMs for my phone. I'm currently running AOSP (actually phh-treble) rather than any fork that adds significant features.
In particular it's hard to tell if any particular fork is just a one man hobby project, a fork of a fork, or has any significant work behind it.
Of course the big one is LineageOS, but they seem to make device-specific images, mostly for high-end phones, rather than generic system images that would work on my low-end phone.
I ended up choosing phh-treble because it doesn't add any BS, promises quick updates, and serves as a base for many other ROMs, lending it some reliability. It's pretty much AOSP with hardware fixes.
That said, it doesn't do some pretty basic things like automatic brightness and night mode, or a PIN timeout. So maybe that'll eventually drive me to try another ROM...
This looks really cool, but on Arch it does not compile because libgnomeui is not a thing that seems to exist anymore. Hasn't anyone made some modern port of this?
libgnomeui-dev was available in the wheezy sources. But I had an issue with #include <asm/page.h> in src/linux/proc/ps.h. Replacing it with #include <sys/user.h> allowed it to compile successfully.
const_str.hh: In static member function
‘static void const_str::safe_free(const char*)’:
const_str.hh:33:61: error: ‘free’ was not declared in this scope
static void safe_free(const char *s) { if (s) free((void*)s); }
                                                             ^
(free is declared in <stdlib.h>/<cstdlib>, so adding that include to const_str.hh should fix this one.)
I'd love to get this working - currently running Ubuntu 14.04 LTS, soon to be updated to Ubuntu 18.04 LTS.
I really want Signal to succeed. Or rather, I want anything that has decent crypto and is not FAANG to succeed.
The problem is not which messaging app I want to use, it's which messaging app my friends are using.
That said, if I had to choose, I think Matrix has a slight edge in my books because it's a protocol rather than a silo. Even though Signal is private and open source, they are hostile towards people running their own Signal builds on company servers, and unwilling to federate with other servers.
Essentially, you run the official Signal app on the official Signal servers, or GTFO.
They provide good reasons for doing so [1]. I share your hope that "anything that has decent crypto and is not FAANG" will succeed, and I would prefer it to be a federated system, but I also see what Moxie describes in the article.
Basically federation only works on a lowest common denominator. This means progress is very slow or impossible.
Comparing the current state of the Matrix ecosystem and Signal I find it a lot easier to convince friends and family to join Signal. After all, that is what makes a messenger useful.
Matrix and Signal aren't comparable from a security perspective. Because Matrix is a protocol rather than a silo, many (most?) of its implementations don't even support E2E, and because Matrix has its roots in an ecosystem where E2E was a nonstandard add-on, Matrix will never be as safe as Wire or Signal.
Matrix project lead here; fwiw we’re aiming to turn on E2E by default for private rooms by end of Jan. It’s not really a non-standard add-on; it’s in the core of the protocol and has been designed for from the outset. It’s a pain in the ass to get right in a decentralised world though, hence the delay in forcing it on for everyone.
p.s. support for ephemeral msgs was released on the server in RC yesterday.
The way these conversations are structured, I'm always going to come across like I'm rooting for Matrix to fail, which is not at all the case. Like I said, I use Slack more than any other group messaging system, and while Slack does have some security assets that Matrix lacks, nobody can say that it has a more coherent encryption story. I wish Matrix all the best; I just don't think it's reasonable to suggest it as an alternative for people who need secure messaging that reliably works in groups of people.
Never seems a bit strong? Surely over the next decades we could have a Matrix 2.0 that is still federated, but mandates e2e (especially with Signal doing some of the research)?
True. On the other hand, there are some aspects in which Signal will never be as safe as Matrix. The big one is SMS verification. If someone loses their keys and has to reauthenticate over SMS, Signal notifies their conversation partners, but legitimate users do this all the time (in part because Signal lacks good key migration mechanisms), so said partners usually don’t see this as suspicious and often don’t bother reverifying the user’s identity. On Matrix’s side, I’m not sure how well it handles key migration (I don’t use it, for unrelated reasons), but it’s almost certainly less vulnerable to account theft in the first place. Matrix’s identity servers could of course be hacked or legally compromised, but they’re probably not as willing as cellular carriers are to hand over accounts to random people on request! Signal could improve its situation by getting better key migration support, but as long as it’s rooted in phone number identities, it will ‘never’ be as resistant to account theft.
Another aspect is that Matrix, if you’re technical enough, lets you set up a custom server for your secret group, which is somewhat less vulnerable to centralized metadata interception (though there are holes, like centralized mobile notification relays). Admittedly, this is mostly out of scope for Signal, which focuses on security for non-technical users.
Finally, to state the obvious, for many use cases, pseudonymity is safety. Along the lines of the “$5 wrench” XKCD, in practice the single most likely way for your secure messages to be disclosed is not through some clever protocol hack, but by their being pulled at rest from some conversation participant’s device – often with their active cooperation. Similarly, Signal’s deniability feature is cool, intentionally allowing users to forge cryptographically valid messages supposedly sent to them by others. But in practice, messages are typically leaked via screenshots, with no attempt made to detect forgery in the first place.
In such an environment, the most effective defense overall is probably self-destructing messages, which Matrix... apparently doesn’t support, but will soon. (Yikes – like I said, I don’t use it.) But in cases where the people you’re talking to don’t need to know your real identity, pseudonymity is a close second. Its weakness is that people are bad at separating identities and maintaining opsec, but it’s still better than nothing. It’s strongest in cases where you’re part of a large group (say, of protesters): this greatly increases the chance that the adversary will be able to read your messages (with a mole in the group), but also means that they probably don’t care about you personally and would prefer to go after low-hanging fruit. Or even if everyone is equally protected, it increases the amount of time they have to spend going after each person, reducing the number of people they can find.
Anyway, I don’t want to be too negative. The world is certainly better off for Signal’s existence. Maybe Signal will add non-phone-number account support someday, solving two of the issues I mentioned in one blow. Maybe it won’t, but it’ll still be useful to many people, and its continuing cryptographic research will strengthen other messengers, including ones that target use cases Signal does not.
Still, I feel like there’s some dissonance. From a cryptographer’s perspective, Signal is head and shoulders above the pack; they really know what they’re doing, to an extent that practically nobody else does. But in other areas, Signal is just okay. Not bad, often better than average, but rarely outstanding. And that includes areas that impact security, like key transfer and the other things I mentioned.
Interesting. I wonder what design they'll come up with. The thread links to a tweet from Moxie from a few months ago, which (along with some other tweets in the thread) is interesting to think about:
In theory they could keep using the native contact list and just stuff Signal usernames in there; iOS does have the APIs to do that, and I'd assume Android too.
The one thing I don't like about Signal is that it's tied to a phone number. Sure, you can tie the account to a VoIP number, but that's not the same as Wire, which lets you sign up with an email address and bases your account ID on a username, which cannot be hijacked via a SIM-swap attack.