I used to help build the CTFs for BSides Orlando. I ended up moving to another con, and at our last event we collected extensive logging for post-mortem analysis.
We found that AI usage is basically guaranteed now, but certain challenge designs did thwart it. Challenges built with temporal visual elements made AI fall flat on its face, as it could not ingest and process the data fast enough to act on them in time. We also found that counterfactual challenges (i.e., the result you get doesn't match what we suggested you'd get) made AI-assisted solve times slower than pure human ones, indirectly penalizing over-reliance on AI. Multimodal challenges combining audio and visual elements were also very effective, but were less accessible to players.
For our next event we figured out a way to thwart AI in our CTF: embed the CTF in a game engine. The loop essentially becomes something like this: you connect to a simulated access point in the game, the K8s cluster attaches your attack container to a private network with the challenge box(es), and hacking the boxes doesn't render a flag but rather changes the game state. AI coped very poorly with this in our testing: it can't derive the spatial state of the game world very well, and this soft-decouples the inductive reasoning loop it relies on to know whether it is on the right track.
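A minimal sketch of that state-change loop; everything here (WorldState, the event keys, the challenge IDs) is a hypothetical illustration, not our actual implementation:

```python
# Sketch: solving a challenge box mutates game state instead of
# yielding a flag. The game engine polls WorldState; the scoreboard
# never sees a flag string at all.

class WorldState:
    def __init__(self):
        # In-world consequences the engine reacts to.
        self.state = {"door_unlocked": False, "turret_disabled": False}

    def apply_event(self, event: str) -> None:
        # Invoked by a challenge box when an exploit lands.
        if event in self.state:
            self.state[event] = True


def on_exploit_success(world: WorldState, challenge_id: str) -> None:
    # Map each challenge to an in-world effect rather than a flag.
    effects = {
        "chal-ssh-pivot": "door_unlocked",
        "chal-web-rce": "turret_disabled",
    }
    world.apply_event(effects.get(challenge_id, ""))
```

The point of the indirection is that "did I solve it?" is only observable through the game world, which is exactly the feedback channel AI tooling handles poorly.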
The downside to this approach is that it is far more labor-intensive for CTF organizers and requires players to have a computer capable of running the game. We are also betting that AI won't advance enough by the time we ship to just ingest the entire game state in real time and close the loop that way.
You should not consider Tim Sweeney's comments on the matter a reliable source; he was veiling his true motivations behind that statement. The Switch does not run Linux either; it's a custom microkernel OS (Horizon) descended from the 3DS's system software.
The cheating issue isn't really a matter of being able to run custom kernel code. You can do the same thing on Windows, which is why remote attestation is a thing for some games. As someone who has developed games for Linux (and Windows/Mac), it's an endless cat-and-mouse game. So long as the system can execute code that isn't yours, you're never really getting perfect anti-cheat; ease of loading custom kernel code isn't the real hurdle.
I find that client-side and server-side checks in combination are the robust approach. I once implemented anti-cheat in which the server lied about game state; a regular client without cheats would act on it predictably, and deviation from that behavior is a useful heuristic for building a suspicion score.
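A rough sketch of that heuristic; the names (HoneypotCheck), the 10% lie rate, and the threshold are all illustrative, not the real implementation:

```python
import random

class HoneypotCheck:
    """Occasionally feed a client a deliberately wrong state value that
    a stock client would echo back untouched. A cheat that reads the
    true state (or fabricates responses) deviates and builds suspicion."""

    def __init__(self, threshold: int = 3):
        self.suspicion = {}   # player -> accumulated suspicion score
        self.pending = {}     # player -> bogus value we sent them
        self.threshold = threshold

    def send_state(self, player: str, real_value: int) -> int:
        # Roughly 10% of updates carry a lie about the game state.
        if random.random() < 0.1:
            bogus = real_value + random.randint(1, 100)
            self.pending[player] = bogus
            return bogus
        return real_value

    def on_client_echo(self, player: str, echoed: int) -> None:
        expected = self.pending.pop(player, None)
        if expected is not None and echoed != expected:
            # Client behaved as if it knew the real state: cheat signal.
            self.suspicion[player] = self.suspicion.get(player, 0) + 1

    def is_suspect(self, player: str) -> bool:
        return self.suspicion.get(player, 0) >= self.threshold
```

A single deviation can be lag or a bug, which is why this feeds a score rather than triggering an immediate ban.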
I have unlimited access to every single frontier model and have tested all of them; they are not good at writing software.
They are basically slot machines, sometimes you win a little bit and sometimes you win a lot but usually you just burn a ton of time and money sitting and staring at a screen (and frying your brain).
In theory, the market should be pricing in future potential. As has become increasingly clear this past decade, the market is not rational.
My experience was the same while helping to adapt a Steam Deck game for wider Linux support. The issue wasn't Waylandisms; most of those have already been figured out. It was GNOME. Their preferred resolution to issues seems to be dropping support rather than fixing bugs, and they go out of their way to adopt implementations that run against the momentum of the wider community. I can see why they make some of their decisions, but things like killing the tray indicator or server-side decorations are insane. Being an outlier in the name of a greater or grander goal is one thing; then there is whatever GNOME is doing.
It might be wordsmithing to skirt around "robot" as a fully autonomous entity. Much like their FSD, I expect they aren't going to deliver full autonomy anytime soon.
So much of it is a problem of execution. If people could use Linux without ever having to know what a terminal is (much like the average Windows user doesn't know what PowerShell is), then it would actually be quite successful. It has gotten better over the past decade, but it still suffers from endless paper cuts and the odd issue that requires a shell session to fix. I will say that Valve's SteamOS has come the closest to avoiding this trap. You can use a deck without ever having to touch a CLI.
It's been an unfortunate recurring issue for me as well. Recent hardware is much better about this, and I too have seen the performance bumps at the cost of software compatibility. I feel like if Adobe brought their CC suite to Linux I'd have no reason to ever use Windows outside the random game that _needs_ it.
At least on the latest Sequoia, there has been no hard requirement for an online account. They nudge you towards it, but you can decline and continue. As far as I can remember, macOS has never required an online account to set up a Mac.
You might need it for the App Store if anything, but even then... you don't need the App Store to install software. Mac is at its peak currently, though the new glass UI stuff is a little over the top for me. I miss the old, simpler UI. I'm sure I'll get used to it eventually.
This paper gave us some ideas about designing those challenges: https://arxiv.org/pdf/2308.02950.