The implication is that the gatekeeping has become marketing dollars, when it used to be skill at making a fun game. I don't think we're in a better situation today.
There are fun games that succeed without marketing, e.g. Balatro, and there are bad games that fail despite it, e.g. Highguard.
The reason that “skill at making a fun game” doesn’t guarantee success is that there are so many fun games; it has much less to do, if anything, with there being so much slop.
> The response for now? Junior and mid-level engineers can no longer push AI-assisted code without a senior signing off.
So basically, kill the productivity of senior engineers, kill the ability for junior engineers to learn anything, and ensure those senior engineers hate their jobs.
Mine's tracking it, complete with a leaderboard (LOL), and it's been suggested to me that it'd be in my best interest not to be too low on that list. So I suspect in the back half of the year some sterner conversations and/or pink slips are going to be coming the way of those who've not caught on that they need to at least be sending some make-work crap to their LLMs every day, even if they immediately throw the output in the metaphorical garbage bin.
It's basically an even-more-ridiculous version of ranking programmers by lines-of-code/week.
What's especially comical is I've seen enormous gains in my (longish, at this point) career from learning other tools (e.g. expanding my familiarity with Unix and other fairly common command-line tools) and never, ever has anyone measured how much I'm using them, and never, ever has management become in any way involved in pushing them on me. It's like the CEO coming down to tell everyone they'll be making sure all the programmers are using regular expressions enough, and tracking time spent engaging with regular expressions, or they'll be counting how many breakpoints they're setting in their debuggers per week. WTF? That kind of thing should be leads' and seniors' business, to spread and encourage knowledge and appropriate tool use among themselves and with juniors, to the degree it should be anyone's business. Seems like yet another smell indicating that this whole LLM boom is built on shaky ground.
> It's like the CEO coming down to tell everyone they'll be making sure all the programmers are using regular expressions enough, and tracking time spent engaging with regular expressions, or they'll be counting how many breakpoints they're setting in their debuggers per week.
That's because they weren't sold regex as a service by a massive company, while also being reassured by everyone that any person not using at least one regular expression per line of code is effectively worthless and exposes their business to a threat of immediate obsolescence and destruction. They finally found a way to sell that same kind of FOMO to a majority of execs in the software industry.
> even if they immediately throw the output in the metaphorical garbage bin.
Gotta be careful if you do that, though; e.g. Copilot can monitor 'accept' rate, so at bare minimum you'd have to accept the changes, then immediately back them out...
In a couple years, we'll have office workspaces equipped with EEG helmets that you must wear while working, to measure your sentiment upon seeing LLM-generated code. The worst performers get the boot, so you better be happy!
If you use AI to back it out, sounds like you’ve found an infinite feedback loop for those metrics.
Did industrial psychology die out as a field? Why do we keep reinventing the wheel when it comes to perverse incentives? It’s like working on a scrum team where the big bosses expect the average velocity to go up every sprint, forever, but the engineers are the ones deciding the point totals on tickets.
From a management perspective I would be highly skeptical of token leaderboards. You are incentivizing people to piss away company money for uncertain rewards.
I mean… throw some docs into the context window, see it explode. Repeat that a few times with some multi-step workflows. Presto, hundreds of dollars in “AI” spending accomplishing nothing. In olden days we’d just burn the cash in a waste paper basket.
My company doesn’t enforce AI usage but for those who choose to use it, every month they highlight the biggest users. It’s always non-tech people who absolutely don’t understand how LLMs work and just run a single chat for as long as possible before our system cuts them off and forces them into a new chat context.
What's stopping someone from just having the AI churn out garbage all day long? Or like, put your AI into plan mode with extra-high reasoning and have it churn for 10 minutes to make a microscopic change in some source file. Repeat ad infinitum.
Interesting consideration, 'mandates' and all. Definitely in camp 'toss the output', here. I think I'll see 'morality' leaving when $EMPLOYER fires 'professional discretion'... forcing usage and, ultimately, debasing the position.
edit: Peer said it well, IMO. The consequences aren't really yours. Also: something, something, Goodhart's Law.
I'm sorry, what? Junior engineers can't learn anything without using AI assistants (or is the implication that having seniors review their code makes them incapable of learning?), and senior engineers would hate their jobs reviewing more code from their teammates? What reality do people live in now?
I thought the implication was that juniors would continue to use AI to stay "productive" (AWS is not a rest and vest job for juniors, from what I've heard) and seniors would no longer have time to do anything but review code from juniors who just spin the AI wheel.
There's a lot of learning opportunity in failing, but if failure just means spam the AI button with a new prompt, there's not much learning to be had.
> senior engineers would hate their jobs reviewing more code from their teammates
Jesus, yes. Maybe I'm an oddball but there's a limit to how much PR reviewing I could do per week and stay sane. It's not terribly high, either. I'd say like 5 hours per week max, and no more than one hour per half-workday, before my eyes glaze over and my reviews become useless.
Reviewing code is important and is part of the job but if you're asking me to spend far more of my time on it, and across (presumably) a wider set of projects or sections of projects so I've got more context-switching to figure out WTF I'm even looking at, yes, I would hate my job by the end of day 1 of that.
If we can't spend that much time reviewing code, what are we exactly doing with this AI stuff?
I don't disagree, I think reviewing is laborious, I just don't see how this causes any unintended consequences that aren't effectively baked into using an AI assistant.
If I'm understanding the issue correctly, an action with read-only repo access shouldn't really be able to write 10GB of cache data to poison the cache and run arbitrary code in other less-restricted actions.
The LLM prompt injection was an entry point to run the code they needed, but it was still within an untrusted context where the authors had foreseen that people would be able to run arbitrary code ("This ensures that even if a malicious user attempts prompt injection via issue content, Claude cannot modify repository code, create branches, or open PRs.")
It can have better defaults, but that's about it. If the LLM tells the user it needs more permissions, the user will just grant them; the people affected by bugs like that have already traded their autonomy and intelligence away to the AI.
Obviously carrier pigeons carrying messages encrypted with post-quantum ciphers, where the keys have been sent ahead of time via USPS, because no one would be so rude as to read someone else's mail.
One big difference is that with unwrap in Rust, if there is an error, your program will panic. Whereas in Go if you use the data without checking the err, your program will miss the error and will use garbage data. Fail fast vs fail silently.
But I'm just explaining the argument as I understand it to the commenter who asked. I'm not saying it is right. They have tradeoffs and perhaps you prefer Go's tradeoffs.
Client side? I think not. 25 years ago we were told web sites were going to make their data available in nice machine-readable XML form, which would be transformed by XSLT etc. into presentation form and be available for machine use without the presentation form. Same promise as semantic HTML, but earlier, and the same promise as WebMCP now.
The CNC machine I'm working on retrofitting right now has XML definitions for basically the entire thing, from GPIO setup to machine-size parameters. Kinda crazy, but at least it isn't a cursed hex file.
Kagi had a post discussing this which made the front page of HN about a month ago [1]:
> Google does not offer a public search API. The only available path is an ad-syndication bundle with no changes to result presentation - the model Startpage uses. Ad syndication is a non-starter for Kagi’s ad-free subscription model.
> Because direct licensing isn’t available to us on compatible terms, we - like many others - use third-party API providers for SERP-style results (SERP meaning search engine results page). These providers serve major enterprises (according to their websites) including Nvidia, Adobe, Samsung, Stanford, DeepMind, Uber, and the United Nations.
> This is not our preferred solution. We plan to exit it as soon as direct, contractual access becomes available. There is no legitimate, paid path to comprehensive Google or Bing results for a company like Kagi. Our position is clear: open the search index, make it available on FRAND terms, and enable rapid innovation in the marketplace.
For the purposes of the discussion at hand, yes some results do ultimately come from Google, just via third-party SERP providers rather than Kagi paying Google for access since Google doesn't offer their own public API (and neither does Bing anymore).