My school started with a 6502 lab and then followed it with a 68k lab. That still seems like the right order to me all these years later. Starting with the smaller, more cramped chip and learning its limitations makes it all the more interesting when you can "spread your wings" into the larger, more capable one.
You scale up the complexity of the programs you need to build with the complexity of the chips. In the 6502 lab it was a lot about learning the basics of how things like an MMU work and building basic ones out of TTL logic gates. In the 68k lab you take an MMU for granted and do more ambitious things with it. There are useful skills both in knowing the low-level hardware as intimately as a 6502 lets you, and in working where it is easier to program because you have more modern advantages and fewer limitations.
The other thing about that order was that breadboarding the 6502 hardware was a lot more complex, but it made up for it in that writing and debugging 6502 assembly was a lot easier. There are a ton of useful software emulators for the 6502 and you can debug your code easily before ever testing it on lab hardware. At one point in that lab I even wrote my own mini-6502 emulator specific to our (intended) breadboard hardware design. On the other side, there are a lot fewer software emulators for the 68k and the debug cycle was a lot rougher. The 68k breadboard hardware was a bit more "off the shelf" so it was easier to find an existing emulator that matched our (intended) hardware design, but the emulator itself was buggier and more painful to use, and ultimately testing on the real hardware was the only trustworthy way to deal with it. I also wasn't going to try to write my own 68k emulator. (Though some of the complexity there was the realities of hardware labs, in that the hardware itself starts to pick up its own localized quirks from being run only on breadboards for years.)
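For a sense of why a mini-6502 emulator is a tractable weekend project, here's a minimal sketch of the fetch-decode-execute core, implementing just three real opcodes (LDA immediate, STA absolute, and BRK treated as halt). The class name and memory layout are illustrative, not the actual lab hardware design:

```python
class Mini6502:
    def __init__(self):
        self.mem = bytearray(65536)  # full 64 KiB address space
        self.a = 0                   # accumulator
        self.pc = 0                  # program counter

    def step(self):
        """Fetch, decode, and execute one instruction; False on halt."""
        op = self.mem[self.pc]
        self.pc += 1
        if op == 0xA9:               # LDA #imm: load immediate into A
            self.a = self.mem[self.pc]
            self.pc += 1
        elif op == 0x8D:             # STA abs: store A at little-endian 16-bit address
            lo = self.mem[self.pc]
            hi = self.mem[self.pc + 1]
            self.pc += 2
            self.mem[lo | (hi << 8)] = self.a
        elif op == 0x00:             # BRK: treat as halt in this sketch
            return False
        else:
            raise NotImplementedError(f"opcode {op:#04x}")
        return True

cpu = Mini6502()
# Program at $0000: LDA #$42; STA $0200; BRK
cpu.mem[0:6] = bytes([0xA9, 0x42, 0x8D, 0x00, 0x02, 0x00])
while cpu.step():
    pass
print(hex(cpu.mem[0x0200]))  # 0x42
```

A lab-specific emulator mostly adds the rest of the opcode table plus memory-mapped I/O hooks at the addresses the breadboard design decodes.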
Deno 2 was a major release entirely about shoring up Node compatibility. That's arguably a big reason why Bun seems to have the highest velocity of the trio right now. Deno put a ton of engineering into Node/npm compatibility, it shows, and right now I would recommend trying Deno as having surprisingly few compatibility/ecosystem issues remaining.
(Deno has not stayed entirely still; other new features were added outside of that compatibility effort. But Bun doesn't treat Node compatibility as as much of a goal today, and so gets the fast-mover award.)
I started using Deno in hobby projects because I like the out-of-the-box defaults a lot better than Node (deno.json is a lot simpler than a lot of the cruft that package.json has acquired, but also includes more things in one place like out of the box eslint support [deno lint] and prettier-equivalent [deno fmt]). Also, Deno Deploy has a generous free tier and that's a healthy incentive for hobby projects that want a modest database (Deno KV) and basic task processing queues.
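To illustrate how much of that cruft collapses into one file, here's a minimal deno.json sketch covering a task plus the built-in fmt/lint tooling; the task name and settings are just examples, not a recommended config:

```json
{
  "tasks": {
    "dev": "deno run --watch main.ts"
  },
  "fmt": {
    "lineWidth": 100
  },
  "lint": {
    "rules": {
      "tags": ["recommended"]
    }
  }
}
```

In a Node project the equivalent typically spans package.json plus separate ESLint and Prettier config files.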
I also used Deno for some side projects before 1.0 and I really like it, especially its focus on web standards, and yes, being an all-in-one tool. But I don't have the time or inclination for a lot of side projects these days.
I actually feel much more warmly toward Deno than Bun, and would use it if I felt like it made sense. I tried to avoid advocating either way in my earlier posts in this thread. But regardless of my personal feelings, at work pragmatism rules the day.
The one job where I got to experience an Oracle database in action, back around ought four (allegedly one of the good periods for Oracle's db tech):
A) Performance was so terrible by default that the DBAs were cops about it and had given up on real RDBMS performance tools such as indexes and keys (primary and foreign). Instead they used a ton of arcane low-level file management tools, as if Oracle were some sort of "build your own DB kit" where the out-of-the-box one was so terrible they needed a bespoke hand-built one.
B) I was once in a CLI running some sort of very basic Select statement (no joins, because again, no keys; single table; maybe a couple of columns) and watched in horror as an entire Java-based debugger IDE not installed on my extremely locked-down work machine launched, spun up, and dived deep, deep into a terrifying stack trace comprising an awful mix of C++ and Java code. To this day I don't know how or why a Select statement (that worked the second time) crashed so hard into Oracle's own source code. It is possibly related to part A above, but I'm still not sure. I don't know why the debug symbols, much less actual source code from Oracle, were even available when it did crash that hard. I did know at the time that actually debugging Oracle's code was very far above my pay grade, and that in addition to sending the debugger IDE they should have also attached a large check.
I think of that experience often when people tell me that Oracle actually has good tech. Even if it weren't obvious from how awful their tools are to use as a user (a different previous employer used Oracle's expensive time/attendance/payroll tools, and those were the clunkiest, worst web apps to use), that brief, weird horror story of a Select statement in a CMD.EXE REPL bringing up the wildest stack trace in a debugger that didn't exist on my machine would always leave me feeling doubtful about it.
Microsoft lost that fight against Sun so badly long ago that Internet Explorer and Windows documentation for something like a decade referred to the language as "JScript" (and also tried to make VBScript a viable alternative, partly to avoid even accidentally using Sun's trademark), to the bemusement of everyone. It's interesting to wonder if the web would have been a little better had Microsoft won that trademark battle at the time, or had Sun donated the trademark to ECMA so that official standards didn't have to be named ECMAScript.
The way I read it, the difference between existing UAC and "Adminless" is that under UAC the user is always in the Administrators group, and UAC just unlocks an Administrator token/ACL temporarily to bestow the actual powers of that group. In "Adminless" the user is only a less-privileged/low-privilege user, a new system-managed Admin User is created, and the new security-boundary prompts, rather than unlocking a temporary token/ACL, effectively "runas" the system-managed Admin User. It's similar to Linux sudo sending commands to the root account, since Linux doesn't have a token/ACL model that allows temporarily upgrading the existing user "in place". It's also similar to how Windows admin security was managed pre-UAC in places that separated standard accounts and admin accounts, and to how many corporations still manage security. The difference is that the new "Adminless" admin account is system-owned (like the various internal service accounts), supposedly does not allow interactive login, and has no password, only a hardware security key (hence why the new security boundary requires Windows Hello unlocks every time, versus UAC, which can be as subtle as Yes/No depending on configuration/group policy).
"Adminless" is a funny name given that there's still an admin account involved, it's just an admin account that is much more than before not a user account but more like a service account.
> I dont know that they are right (eg if the #1 exposure for home users is ransomware, does a tpm help at all?), but I am prepared to give them dome grace.
One particularly generous view is that the TPM requirements catch PCs up with the TPM requirements of modern phones. (Both iOS and Android have had very strict TPM requirements for a while now.) With a lot of industry interest in moving to hardware security-backed Passkeys to replace passwords, it would help to have PCs on an equal security footing with phones.
Passkeys are a pretty big deal to reduce home user exposure. Phishing and all of its variants are as much or more a home user problem as ransomware.
Passkeys are a multi-vendor standard. Because Windows is no one's phone vendor today, it's generally a good thing that Windows has strong Passkey support: it can act as an intermediary between the two major phone vendors and help even average users avoid vendor lock-in, by pushing a majority of users to keep keys with at least two vendors (their phone and their Windows device) for their common accounts.
Steam's recommendations, and their actual "SteamOS", moved to Arch-based around the launch of the Steam Deck.
I've been told Bazzite is a really nice Steam-focused distro: https://bazzite.gg/
Haven't tried it yet, I'm procrastinating moving a gaming desktop to Linux as October gets closer (end of Windows 10 support; end of Windows support for that desktop's CPU/TPM).
The article mentions in the future being able to use Apple Intelligence to create "splash artwork" for invites rather than just uploading a photo or using emoji/memoji backgrounds.
"Image Playground is built right into Apple Invites, giving users an easy way to generate original images that make their invitation even more unique."
I look forward to the aberrations this will inevitably produce.
Some of the timing of this app seems to coincide with something of a large exodus from Meta/Facebook's apps. WhatsApp never quite caught on in the US, but Facebook Events did. (They are both Meta/Facebook apps today; the feature sets are somewhat similar here.) A lot of my friends have been looking for replacements for Facebook Events, so this is somewhat timely for those that like Apple apps as a replacement.