The drifting of code from comments is a problem I would love to see solved.
I've seen tools that compare the git commit dates of code with nearby comments, and that's a good start. However, there are potential problems with that approach: the code and the comments that discuss it might not be near each other, or the code might be updated in a way that doesn't require the comment to change.
I think literate programming might help here, but that's an entirely different topic really.
Looking for more advanced tools than that, I suppose we're into the world of AI - asking the tool to understand both the code and the comment and to compare their underlying meaning.
Code review is an option but outside of an organisation that's difficult to do and besides, I think the problem would be best solved by something that is repeatable and part of the build process. And I'd love to be able to have a git commit hook that can say, "hold on! you've updated code but there's a comment that now looks old". That's the dream.
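To make that concrete, here's a very rough sketch (in Go) of what such a hook might look like. The heuristic - flag any staged Go file whose added lines contain code but no comments - and every name in it are my own invention for illustration; it's not an existing tool, and a real version would need something much smarter than prefix matching.

    // stale-comment-check: a crude pre-commit heuristic.
    // Called from .git/hooks/pre-commit, it scans the staged diff and warns
    // about Go files whose added lines contain code but no comment changes.
    // It only warns; it doesn't block the commit.
    package main

    import (
        "bufio"
        "bytes"
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Staged changes only, with zero context lines.
        out, err := exec.Command("git", "diff", "--cached", "-U0").Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, "stale-comment-check:", err)
            os.Exit(1)
        }

        codeChanged := map[string]bool{}    // files with added non-comment lines
        commentChanged := map[string]bool{} // files with added comment lines
        var file string

        scanner := bufio.NewScanner(bytes.NewReader(out))
        for scanner.Scan() {
            line := scanner.Text()
            switch {
            case strings.HasPrefix(line, "+++ b/"):
                file = strings.TrimPrefix(line, "+++ b/")
            case strings.HasPrefix(line, "+") && !strings.HasPrefix(line, "+++"):
                text := strings.TrimSpace(line[1:])
                switch {
                case strings.HasPrefix(text, "//"), strings.HasPrefix(text, "/*"), strings.HasPrefix(text, "*"):
                    commentChanged[file] = true
                case text != "":
                    codeChanged[file] = true
                }
            }
        }

        for f := range codeChanged {
            if strings.HasSuffix(f, ".go") && !commentChanged[f] {
                fmt.Printf("hold on! %s has code changes but no comment changes - are the comments still accurate?\n", f)
            }
        }
    }

It obviously can't tell whether a comment has actually gone stale, only that nothing comment-shaped was touched, but that's roughly the "hold on!" nudge I have in mind.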
GCHQ was the cover name for Bletchley Park, but the organisation's name at the time was GC&CS, which was established in 1919. So it's very old and surprisingly predates WW2.
I'd have to check my notes from a 40 year old interview with a former Bletchley Park code breaker .. as I recall ...
The official legit name of GC&CS was Government Code and Cypher School .. but IIRC, at the time the official name was coined (1919, post WWI) it was putting out letterheads and contracts as G<something> Copper & Cable Services.
That's a dim recollection of what may have been one of many inside jokes | chuckles from WWII Bletchley, as retold to an Australian some 30-odd years later, so YMMV.
It's in keeping with the habit of keeping secrets from the general public & foreign agents via a Boring Name.
The British | Commonwealth WWII company front for their pre-Manhattan Project nuclear programme was Tube Alloys .. so they did like a dull, metals-related cover name.
I've always been amused by these boring sounding cover names. In the USSR, nuclear related work was administered by the "Ministry of Medium Machine Building".
And the Manhattan Project was originally the "Manhattan Engineer District". Although that sounds a little bit exotic, at least to my ears.
Unfortunately, it's not compatible with DPC, DPC+, CDFJ+, etc. The emulator works by "dumping" the ROM binary from the inserted cartridge. This is fine for simple cartridges, but for more advanced bank-switching technologies it's difficult to do correctly, and for processor-cartridges it's impossible.
As you suggest, one way around that is to fingerprint the cartridge and then load a binary into the emulator from internal memory (if they're using Stella, then the emulator is capable of emulating the processors on those chips), but I don't believe that option is provided by the 2600+. And even if it were, it would be very limiting for the end user because new homebrew games wouldn't work.
Are processor cartridges at all common, even among homebrew games? Just curious.
I think we need to be reasonable about how much magic we really expect them to put into this thing, since running Atari and Activision cartridges, like the 2600 (and 7800!) of old, is what most people would expect the machine to do. On the other hand... if they make the machine easily hackable, that might be pretty great, and earn this version of Atari a little respect.
Yeah. Many of the most popular games released in recent years have contained an ARM chip. As it happens, Atari Age have removed many of those games from distribution (in readiness for this takeover by Atari) because they weren't properly licensed. A notable exception is LodeRunner, which has been licensed and uses an onboard ARM.
If the market for this product is people who only want to play the older ROMs on a HDTV then I'm sure the 2600+ is a fine product.
I am not sure what the real market is then, though. If it's for people that want to re-live their memories, then a system with built-in games (of which plenty exist) makes much more sense, or one with a digital store (so, the VCS, or digital collections like Atari 50). Whereas people that still held on to their cartridges and want to play them seem to be more the crowd that kept up with newer and homebrew games as well?
Maybe I'm projecting from myself, but if I'm that kind of enthusiast that held on and wants to play their actual cartridges, I'd expect compatibility with any cartridges I might still buy in the future.
(Then again, both the RetroN and Polymega seem to do well enough, so what do I know, apart from not liking the idea of these systems as half-baked in-between solutions.)
I agree with you. I don't begrudge the 2600+ but it's not one for me.
It seems to me that compatibility with any cartridge would require a recreation of the actual VCS hardware. That sounds expensive to me and is probably why Atari haven't gone down this route. But as you say, that leads to a half-baked solution.
The other option is an emulator that can communicate with the cartridge in real time - but that doesn't exist and I doubt Atari have gone to the trouble of making one (we're all assuming they're using Stella at this point, but I don't think that's been confirmed).
You've hit the nail on the head for me. From my point of view, vim is simpler than neovim. I value that and wouldn't like to see neovim subsume it (although I'm sure people are happy with neovim for very good reasons).
It sounds to me like this is downloading the toolchain as though it were any other Go package and compiling it. I'm not sure it's downloading binaries, as the article suggests.
From the Go Toolchains documentation:
"When using GOTOOLCHAIN=auto or GOTOOLCHAIN=<name>+auto, the Go command downloads newer toolchains as needed. These toolchains are packaged as special modules with module path golang.org/toolchain and version v0.0.1-goVERSION.GOOS-GOARCH. Toolchains are downloaded like any other module, meaning that toolchain downloads can be proxied by setting GOPROXY and have their checksums checked by the Go checksum database."
Curious what your concerns are? As of Go 1.21 the toolchain will be fully reproducible [https://github.com/golang/go/issues/57120] and since the binaries will be distributed through the module system it'll be possible to verify that everyone is getting the same binaries. So you can be pretty confident you'll end up with the same bytes as if you downloaded the source and compiled yourself.
In principle, I like the idea of it. Whether or not it's a binary or source to be compiled locally is irrelevant, I suppose.
My concern is that it's not clear to me how this will look on my computer. Currently, I download the source tarball and build with the latest Go toolchain I have previously installed. I then change the path to point to the newly built toolchain.
So, in the future, what is it I'm downloading? How does the toolchain get updated, exactly? Where does the new executable reside? What happens to the toolchain that I explicitly built and installed? How does Go 1.21 change my path?
As I say, I like the idea in principle but there are details that I think are important, that don't seem to have been explained anywhere.
Thank you. This is an important point. It's not even an official proposal yet. And even if it becomes a proposal, there's no reason to think it'll actually happen.
The speed at which this topic has escalated since the idea was raised last week is astonishing. The biggest mistake Cox has made is assuming people would at least read the three blog posts and debate the topic as presented.
I'm not thrilled at the idea of opt-out telemetry but some of the rhetoric is well and truly beyond the pale.
> The biggest mistake Cox has made is assuming people would at least read the three blog posts and debate the topic as presented.
Yes, I agree he was perhaps a bit naïve about this. On the other hand, I also don't really know how to do this type of discussion better. Russ obviously went out of his way to come up with a nuanced solution, which one could reasonably disagree with, but I have the impression many people didn't bother reading it, especially on HN here but also in the Go discussion about it. This thread is full of basic misunderstandings, or, if we want to be less generous about it, misinformation. I don't think most people do it on purpose, but that's what it comes down to in the end.
One of the really great things about internet discussions is that anyone can join in. One of the bad things about internet discussions is that anyone can join in.
> Russ obviously went out of his way to come up with a nuanced solution, which one could reasonably disagree with, but I have the impression many people didn't bother reading it,
Telemetry is one of those topics that comes with a lot of baggage. Couple that with Google's baggage and we see the inevitable response. I have a similar impression to you - people saw the word "telemetry" and thought they'd read enough.
I thought Cox's posts were great. I came in thinking, "telemetry? hell no!" but came away thinking he made a good case. People can still disagree over it but some of the misinformation is absolutely unbelievable.
People who write the software that controls a nuclear reactor are definitely applying engineering principles (If they're not, then the world has a problem). Does that make them engineers? I would argue it does.
The problem with the programming profession is that it's possible to get away with not applying good engineering principles. It's only in domains like the nuclear industry or the military (fighter planes, etc.) that we can really imagine the consequences of buggy software.
Software Engineer is a title that reminds us of how we should be approaching the programming problem.