sethops1's comments | Hacker News

AI slop. Get lost.

The implication is that the gatekeeping has become marketing dollars, when it used to be skill at making a fun game. I don't think we're in a better situation today.

There are fun games that succeed without marketing, e.g. Balatro, and there are bad games that fail despite it, e.g. Highguard.

The reason that “skill at making a fun game” doesn’t guarantee success is that there are so many fun games. Much less, if at all, because there is so much slop.


Balatro did do marketing, and was extremely successful at it, getting gigantic content creators to play the game.

This was challenging enough pre AI. Now that everybody has an AI slop button, the life of an effective code reviewer just got so much more miserable.

> The response for now? Junior and mid-level engineers can no longer push AI-assisted code without a senior signing off.

So basically, kill the productivity of senior engineers, kill the ability for junior engineers to learn anything, and ensure those senior engineers hate their jobs.

Bold move, we'll see how that goes.


Juniors could just code things the old fashioned way. It isn't hard. And if they do find it too hard, they aren't cut out for this job.

But aren’t companies enforcing AI usage? If not, wait for it.

Mine's tracking it complete with a leaderboard (LOL) and it's been suggested to me that it'd be in my best interest not to be too low on that list, so I suspect in the back half of the year some sterner conversations and/or pink-slips are going to be coming the way of those who've not caught on that they need to at least be sending some make-work crap to their LLMs every day, even if they immediately throw the output in the metaphorical garbage bin.

It's basically an even-more-ridiculous version of ranking programmers by lines-of-code/week.

What's especially comical is I've seen enormous gains in my (longish, at this point) career from learning other tools (e.g. expanding my familiarity with Unix or otherwise fairly common command line tools) and never, ever has anyone measured how much I'm using them, and never, ever has management become in any way involved in pushing them on me. It's like the CEO coming down to tell everyone they'll be making sure all the programmers are using regular expressions enough, and tracking time spent engaging with regular expressions, or they'll be counting how many breakpoints they're setting in their debuggers per week. WTF? That kind of thing should be leads' and seniors' business, to spread and encourage knowledge and appropriate tool use among themselves and with juniors, to the degree it should be anyone's business. Seems like yet another smell indicating that this whole LLM boom is built on shaky ground.


> It's like the CEO coming down to tell everyone they'll be making sure all the programmers are using regular expressions enough, and tracking time spent engaging with regular expressions, or they'll be counting how many breakpoints they're setting in their debuggers per week.

That's because they weren't sold regex as a service by a massive company, while also being reassured by everyone that any person not using at least one regular expression per line of code is effectively worthless and exposes their business to a threat of immediate obsolescence and destruction. They finally found a way to sell the same kind of FOMO to a majority of execs in the software industry.


Vibe code a side project at work. I’m willing to bet the tools aren’t mapping the code contribution locations to business impact (hard problem).

> even if they immediately throw the output in the metaphorical garbage bin.

Gotta be careful if you do that tho; e.g. Copilot can monitor 'accept' rate, so at bare minimum you'd have to accept the changes, then immediately back them out...


In a couple years, we'll have office workspaces equipped with EEG helmets that you must wear while working, to measure your sentiment upon seeing LLM-generated code. The worst performers get the boot, so you better be happy!

I wonder if Copilot can write a commit and backout routine for them.

If you use AI to back it out, sounds like you’ve found an infinite feedback loop for those metrics.

Did industrial psychology die out as a field? Why do we keep reinventing the wheel when it comes to perverse incentives? It’s like working on a scrum team where the big bosses expect the average velocity to go up every sprint, forever, but the engineers are the ones deciding the point totals on tickets.


From a management perspective I would be highly skeptical of token leaderboards. You are incentivizing people to piss away company money for uncertain rewards.

I mean… throw some docs into the context window, see it explode. Repeat that a few times with some multi-step workflows. Presto, hundreds of dollars in “AI” spending accomplishing nothing. In olden days we’d just burn the cash in a waste paper basket.


My company doesn’t enforce AI usage but for those who choose to use it, every month they highlight the biggest users. It’s always non-tech people who absolutely don’t understand how LLMs work and just run a single chat for as long as possible before our system cuts them off and forces them into a new chat context.

"Can't fix stupid"

What's stopping someone from just having the AI churn out garbage all day long? Or like, put your AI into plan mode with extra high reasoning and have it churn for 10 minutes to make a microscopic change in some source file. Repeat ad infinitum.

> What's stopping someone from just having the AI churn out garbage all day long?

In my case it's morality.


Interesting consideration, 'mandates' and all. Definitely in camp 'toss the output', here. I think I'll see 'morality' leaving when $EMPLOYER fires 'professional discretion'... forcing usage and, ultimately, debasing the position.

edit: Peer said it well, IMO. The consequences aren't really yours. Also: something, something, Goodhart's Law.


I would argue that making the company experience the consequences of its choice of metrics / mandates is in fact a moral imperative.

Aren't these companies mandating the use of these tools in the first place? Juniors aren't the problem.

Well, not when they are mandated to use AI tools and asked for justification about their usage!

I am saying in general; I've never worked at Amazon.


Accelerate a person's speed toward being burned out..

..and you lower overall engineering salary spend by rotating out seniority-paid engineers for newly-promoted AI reviewers with lower specs

But Amazon is something you tolerate for a year or two early in the career, before moving somewhere better (which is anywhere else)?

I'm sorry what? Junior engineers can't learn anything without using AI assistants (or is the implication that having seniors review their code makes them incapable of learning?) and senior engineer would hate their jobs reviewing more code from their teammates? What reality do people live in now?

I thought the implication was that juniors would continue to use AI to stay "productive" (AWS is not a rest and vest job for juniors, from what I've heard) and seniors would no longer have time to do anything but review code from juniors who just spin the AI wheel.

There's a lot of learning opportunity in failing, but if failure just means spam the AI button with a new prompt, there's not much learning to be had.


> senior engineer would hate their jobs reviewing more code from their teammates

Jesus, yes. Maybe I'm an oddball but there's a limit to how much PR reviewing I could do per week and stay sane. It's not terribly high, either. I'd say like 5 hours per week max, and no more than one hour per half-workday, before my eyes glaze over and my reviews become useless.

Reviewing code is important and is part of the job but if you're asking me to spend far more of my time on it, and across (presumably) a wider set of projects or sections of projects so I've got more context-switching to figure out WTF I'm even looking at, yes, I would hate my job by the end of day 1 of that.


If we can't spend that much time reviewing code, what are we exactly doing with this AI stuff?

I don't disagree, I think reviewing is laborious, I just don't see how this causes any unintended consequences that aren't effectively baked into using an AI assistant.


Yes, this is part of why AI tools are bad

Code review is hard and tiring, much more so than writing it

I've never met anyone who would be okay reviewing code for their full time job


Why should Github do anything?

If you execute arbitrary instructions whether via LLM or otherwise, that's a you problem.


If I'm understanding the issue correctly, an action with read-only repo access shouldn't really be able to write 10GB of cache data to poison the cache and run arbitrary code in other less-restricted actions.

The LLM prompt injection was an entry-point to run the code they needed, but it was still within an untrusted context where the authors had foreseen that people would be able to run arbitrary code ("This ensures that even if a malicious user attempts prompt injection via issue content, Claude cannot modify repository code, create branches, or open PRs.")


I'm just wondering if there's a possible way to prevent this that wouldn't be intrusive or break existing features.

It can have better defaults, but that's about it. If the LLM tells the user it needs more permissions, the user will just add them; the people affected by bugs like this have already traded their autonomy and intelligence away to AI.

The only vibe I get from Altman is that he's a weasel, willing to say anything or burn whatever to get what he wants.

Nobody should feel safe using the TikTok client, period.

Not just the TikTok client, anything made by Oracle is risky.

Nor Instagram, Facebook's Messenger, or WhatsApp.

And signal

What do you use for messaging?

Obviously carrier pigeons carrying messages encrypted with post-quantum ciphers, where keys have been sent ahead of time using USPS, because no one would be so rude as to read someone else's mail.

I have been using simpleX for some time now.

Are you aware of the creator's political beliefs and the E2EE leak baked into the app?

Matrix.

When you see .unwrap in Rust code, you know it smells bad. When you see x, _ := in Go code, you know it smells bad.

> But if you don't know Go, it's just an underscore.

And if you don't know rust, .unwrap is just a getter method.


One big difference is that with unwrap in Rust, if there is an error, your program will panic. Whereas in Go if you use the data without checking the err, your program will miss the error and will use garbage data. Fail fast vs fail silently.

But I'm just explaining the argument as I understand it to the commenter who asked. I'm not saying it is right. They have tradeoffs and perhaps you prefer Go's tradeoffs.
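To make the fail-fast vs. fail-silently point concrete, here's a small Go sketch (illustrative only; parseOrZero is a made-up name, not from the thread). Discarding the error from strconv.ParseInt means bad input quietly becomes the zero value, where Rust's .unwrap() would panic instead:

```go
package main

import (
	"fmt"
	"strconv"
)

// parseOrZero mirrors the `i, _ :=` pattern: on bad input it silently
// returns the zero value instead of surfacing the error.
func parseOrZero(s string) int64 {
	n, _ := strconv.ParseInt(s, 10, 32)
	return n
}

func main() {
	fmt.Println(parseOrZero("42"))  // 42
	fmt.Println(parseOrZero("abc")) // 0 -- garbage accepted, program keeps going
}
```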


> When you see x, _ := in Go code, you know it smells bad.

What if it’s a function that returns the coordinates of a vector and you don’t care about the y coordinate?


Haven't jumped into rust for a while. Had to read up on what .unwrap() does.

   x, _ := 
With the topic of .unwrap(), the _ is referencing an ignored error. Better laid out as:

  func ParseStringToBase10i32AndIDoNotCare(s string) int32 {
     i, _ := strconv.ParseInt(s, 10, 32)
     return int32(i)
  }
Unhandled errors in Go keep the application going, where Rust's .unwrap() crashes on an error.

Ignoring an output data value or set is just fine. You don't always need both the key and value of a map. Nor the y axis in vector<x,y,z> math.
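On the legitimate side of `_`, discarding a returned data value (as opposed to a returned error) hides no failure. A quick sketch (coords is a hypothetical function, not from the thread):

```go
package main

import "fmt"

// coords is a hypothetical helper returning a 2D point.
func coords() (int, int) {
	return 3, 7
}

func main() {
	// Only the x coordinate matters here; discarding y is idiomatic
	// and hides nothing, unlike discarding an err value.
	x, _ := coords()
	fmt.Println(x) // 3
}
```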


If you think XML is a failed technology you haven't stepped foot anywhere near a serious enterprise company.

It's a failed technology for websites.

How is it failed? Just compared to, like, the prevalence of HTML?

I've worked in web dev for almost 20 years. Almost every year has had some kind of work with XML.


Client side? I think not. 25 years ago we were told web sites were going to make their data available in nice machine-readable XML form, which would be transformed by XSLT etc. into presentation form, and available for machine use without the presentation form. Same promise as semantic HTML but earlier, and same promise as WebMCP now.

We are using HTML and not XHTML. I have not used XML on websites in over 15 years, since HTML5 got stable.

The CNC machine I'm retrofitting right now has XML definitions for basically the entire thing, from GPIO setup to machine size parameters. Kinda crazy, but at least it isn't a cursed hex file.

Kagi sources their search results from Google.

This is false.

Kagi had a post discussing this which made the front page of HN about a month ago [1]:

> Google does not offer a public search API. The only available path is an ad-syndication bundle with no changes to result presentation - the model Startpage uses. Ad syndication is a non-starter for Kagi’s ad-free subscription model.

[1]: https://news.ycombinator.com/item?id=46708678


Some very dodgy wording here.

> Because direct licensing isn’t available to us on compatible terms, we - like many others - use third-party API providers for SERP-style results (SERP meaning search engine results page). These providers serve major enterprises (according to their websites) including Nvidia, Adobe, Samsung, Stanford, DeepMind, Uber, and the United Nations.

> This is not our preferred solution. We plan to exit it as soon as direct, contractual access becomes available. There is no legitimate, paid path to comprehensive Google or Bing results for a company like Kagi. Our position is clear: open the search index, make it available on FRAND terms, and enable rapid innovation in the marketplace.

https://help.kagi.com/kagi/why-kagi/kagi-vs-google.html


For the purposes of the discussion at hand, yes some results do ultimately come from Google, just via third-party SERP providers rather than Kagi paying Google for access since Google doesn't offer their own public API (and neither does Bing anymore).
