Opinionated design was great back when Apple's Human Interface Guidelines were based on concrete user testing and accessibility principles. The farther we get from the Steve Jobs era, the more their UI design is based on whatever they think looks pretty, with usability concerns taking a back seat.
It was good because it was both Opinionated (in other words, the path to writing software that followed the design was easy, and the paths to writing software that violated it were hard) and well-researched by human interface experts.
Now what we appear to have is "someone's opinion" design. A bunch of artists decided their portfolios were a little light and they needed to get their paintbrushes out to do something. I don't work at Apple, but my guess is that their HI group slowly morphed from actual HCI experts into an art department, while retaining its authority as the experts on human-machine interaction.
So here we are, we still have Opinionated design, but it might just be based on some VP's vibes rather than research.
I don't like to paint Apple as completely incompetent (though damn, have they been screwing stuff up), but I do think trying to unify the experience around a common codebase has become untenable. The idea is great in theory - write one app that works on macOS, iPadOS, iPhoneOS, visionOS, etc. What a time saver for developers - but the problem is that screen sizes and interaction methods vary across those platforms. Resizing a window with a clunky finger needs a bit more wiggle room than doing it with a pixel-precise mouse or touchpad.
The flaw in trying to detect AI by its use of particular idioms is that it would have learned these idioms from its training corpus, which consists of writings from actual human beings.
In other words, some people actually write like this.
Key word here being "some". Before, people didn't use it at anywhere near a high enough frequency for this way of talking to be noticeable. AI uses this pattern CONSTANTLY and it's very fucking irritating.
Have you ever met human beings that constantly reuse a certain idiom/figure of speech/linguistic pattern?
The valley girl using "like" every other word, for example?
Or I had a colleague who would use the expression "we can say" (in French, because we were speaking French) basically every couple of sentences for a while.
Humans also repeat speech/linguistic patterns, therefore "repetition of the same pattern" is not sufficient to mark text as produced by AI :)
Yes, but there are a lot more "idiom personalities" among humans (you just mentioned several) than there are in AI. Basically every English-language interaction with AI anywhere in the world produces more or less the same argot and style. It's like (heh) we're all talking to the same valley girl stereotype.
I find takes like this very strange. Whether or not it gives the correct information, it's clearly not designed to give false information to factual queries.
The design of it is based on the intention of the people creating it, not the actual outcome, and it's pretty clear from all available information, plus a general understanding of incentives, that it's designed to be as accurate as possible, even if it does make errors.
Humans update their model of the world as they receive new information.
LLMs have static weights, therefore they cannot have a concept of truth. If the world changes, they keep insisting on whatever was in their training data. There is nothing that forces an LLM to follow reality.
I've often thought, "If AI is so great, how come all these tech companies are shoving AI features down our throats for free, instead of charging real money for them?" I'm actually glad that MS is doing this, and I hope it starts a trend of more companies gating their AI features behind paywalls, and a noticeable reduction in the number of popups I encounter badgering me to use AI features that I never asked for.
IntelliCode was first released in 2018, well before the current AI landscape where running each model costs a neighborhood's worth of power. Indeed, it runs using a small local model that costs essentially nothing in comparison to the rest of the machine running it.
In fact, the intent here is exactly the opposite of what you're hoping for (less AI badgering). They're trying to get people to actually use Copilot after recently missing internal adoption goals on all the AI products they're trying to shove down people's throats. The badgering is only going to get worse, and they're going to continue removing functioning, free features to do so. You should not be glad that Microsoft is killing a free lightweight product for a bloated, ecologically harmful and economically wasteful one.
I believe this comment is nuts. How the hell are you justifying the removal of a free and common IDE feature for something that's rate-limited based on usage? In any other context, this would have been called enshittification...
So, I gather that you treated your solutions as throw-away code, rather than keeping them? Kind of surprising, considering that some problems build off of each other, or otherwise benefit from sharing code; you never know when the code for one solution could be useful later. For example, a prime number generator/tester is necessary for many of the problems.
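Something like this pair of helpers (a minimal Python sketch; the names and exact shape are my own, not anything Project Euler prescribes) ends up shared across dozens of problems:

    # Reusable prime helpers of the kind many Project Euler problems need.
    def primes_up_to(n):
        """Sieve of Eratosthenes: return all primes <= n."""
        if n < 2:
            return []
        sieve = [True] * (n + 1)
        sieve[0] = sieve[1] = False
        for p in range(2, int(n ** 0.5) + 1):
            if sieve[p]:
                # Knock out every multiple of p starting at p*p.
                sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
        return [i for i, flag in enumerate(sieve) if flag]

    def is_prime(n):
        """Trial division; plenty fast for the sizes early problems use."""
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))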
(I have all my solution code, in source control no less, so if I ever lost my account, I could just run them all and re-enter the solutions.)
> So, I gather that you treated your solutions as throw-away code, rather than keeping them?
I kept the code that I found clever or useful, but my approach to archiving my stuff in general was pretty haphazard back then. I was still in high school.
But Word Mastermind (like regular Mastermind) only tells you how many letters are in the correct spot, and how many are present but not in the correct spot. Whereas Wordle tells you specifically which letters fall into those categories. So it's not quite the same. (That's why Wordle only gives you 6 guesses, while Word Mastermind has 10 rows.)
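To make the difference concrete, here's a rough Python sketch of the two feedback schemes (function names are mine, assuming equal-length words and standard rules):

    from collections import Counter

    def mastermind_feedback(guess, answer):
        """Aggregate counts only: (right letter in right spot, right letter in wrong spot)."""
        exact = sum(g == a for g, a in zip(guess, answer))
        # Total letter overlap counting multiplicity, minus the exact matches.
        overlap = sum((Counter(guess) & Counter(answer)).values())
        return exact, overlap - exact

    def wordle_feedback(guess, answer):
        """Per-position marks: 'G' green, 'Y' yellow, '.' gray."""
        marks = ['.'] * len(guess)
        # Answer letters not consumed by an exact match, available for yellows.
        remaining = Counter(a for g, a in zip(guess, answer) if g != a)
        for i, (g, a) in enumerate(zip(guess, answer)):
            if g == a:
                marks[i] = 'G'
            elif remaining[g] > 0:
                marks[i] = 'Y'
                remaining[g] -= 1
        return ''.join(marks)

With guess "crane" against answer "crate", Mastermind-style feedback is just (4, 0) and you still have to work out which position is wrong, while Wordle-style feedback is "GGG.G", which tells you directly.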