
Yeah, to me his description of how programmers think didn't really jibe with either senior or junior developers. I think when senior developers look at a code review, they're busy, so they're looking for really obvious smells. If there are no obvious smells and it's easy to understand what the code is intending to do, they usually let it pass. Most of the time when one of my PRs gets rejected, it's something along the lines of "I don't know why, but doing X seems sketchy", "I need more comments to understand the intended flow", or "The variable/function names aren't great".


To be fair, most senior developers don't have any incentive to put this amount of analysis into a working codebase. When the system is working, nobody really wants to spend time hunting for bugs in old code when they could be working on something interesting. Plus there's the social consideration that your colleagues might not like you much if you spend all your time picking their (working) code apart while neglecting your own tasks. Usually this kind of analysis comes from someone specifically brought in to find issues, like an auditor or a pen-tester.


The right incentives would motivate bug hunting; it entirely depends on company management. Most competent senior devs I've worked with spend a great deal of time carefully reading through PRs that involve critical changes. In either case, the question is not whether humans tend to act a certain way, but whether they are capable of skillfully performing a task.


Unfortunately, some senior devs like myself do care. Too bad no one else does. Code reviews become quick after a while; your brain adapts to reviewing code deeply and quickly.


I think the "this is not AI!" people would argue that these things should not be brought in to make important decisions in government, healthcare, etc. We don't need guard rails, we need people to clearly understand how stupid it is to entrust LLMs with important life altering decisions.


I mean, sure, but that's just wishful thinking. It's very obvious that everyone and their grandma is deep down the AI rabbit hole. No government will roll back either, because competing ones will go ahead and win through economic warfare.


I think if LLMs weren't subsidized with investor money and providers had to charge what it costs to keep these things trained and running, then the economic "value" of replacing a human in most contexts would slip away (especially since you need human oversight anyway, if the decision is at all important).

It's just madness to me. Even if you think these things can reason well with enough training (which, to be clear, I don't), the main unsolved issue with them is hallucinations. I can't think of any way to justify entrusting important decisions to a person or system that routinely hallucinates.

Not to mention that the other important thing you'd want in any decision maker is an ability to report its level of confidence in its own findings.

LLMs regularly hallucinate and present those hallucinations with exceptional confidence. And these aren't small hallucinations; these are, for a human, fireable offenses: inventing fake sources, or spreading false rumours about actual humans (two things that have already occurred).

Also, even for things like filtering resumes or flagging something for further review, you have to consider that these things have biases and are sometimes accidentally racist or discriminatory in unexpected ways. I could easily imagine a company facing a discrimination lawsuit if it let AI filter resumes or do similar tasks.


Again, I agree with you on all points, but the market disagrees with our point of view. So these models are being deployed left and right, everywhere.

We're way past the point of no return, since we already have semi-decent functioning open source models. Anyone who doesn't play the game is being shut out of the economic future, unfortunately. It sucks, but it is what it is.


Agreed. I think the high quality of the output prose makes it easy to believe it understands what it's saying. This kind of conversation is probably easy to role-play and generate tokens for because of all the existing philosophical and AI-related discussions out there.

My experience with generative models is that as soon as you ask them to do something novel that requires understanding of the context, that's when they break, not when you ask them to perform in scenarios that are well known. Good/evil tests are well known. Also, a slight nitpick: asking Claude to describe how to perform a ransomware attack isn't evil. What if you want to understand how it's done so you can protect your business from it? Asking "how" to do something is value neutral.


Well, those server farms don't pay for themselves.


Sure, but once it's trained there isn't a running maintenance cost.


If you offer an API you need to dedicate servers to it that keep the model loaded in GPU memory. Unless you don't care about latency at all.

Though I wouldn't be surprised if the bigger reason is the PR cost of releasing with an exciting name but unexciting results. The press would immediately declare the end of the AI growth curve.


Of course running inference costs money. You think GPUs are free?


Well if it takes a ton of memory/compute for inference because of its size, it may be cost prohibitive to run compared to the ROI it generates?


There definitely is: storage, machines at the ready, data centers, etc. Also, OpenAI basically loses money every time you interact with ChatGPT: https://www.wheresyoured.at/subprimeai/


The article definitely has issues, but to me what's relevant is where it's published. The smart money and experts without a vested interest have been well aware for over a year that LLMs are an expensive dead end, and have been saying as much (Gary Marcus, for instance). That this is starting to enter mainstream consciousness is what's newsworthy.


Gary Marcus is just an anti-AI crank to balance out the pro-AI cranks. He's not credible.


Gary Marcus is continuously lambasted and not taken seriously.


By whom? He seems highly credible to me, and his credentials check out, especially compared to hype men like Sam Altman. All you're doing is spreading FUD attributed to an unnamed "they".


He only criticizes AI capabilities without creating anything himself. Credentials are effectively meaningless. With every new release, he clamors for attention to prove how right he was, and always will be. That's precisely why he lacks credibility.


He started and then sold a machine learning startup to Uber. He's also written multiple books about the construction of the human mind and he has a PhD from MIT. I would hardly call that creating nothing. He's not clamoring for attention, he's asking that AI be regulated and pointing out a lot of the glaring issues with the field.


I think the error here is assuming mind and body are separate things. In reality our body, even our gut bacteria, has a huge effect on how we think and feel. There's a reason why sensory deprivation or dissociative anesthetics like ketamine have such a huge effect on cognition. I just don't think the human mind is designed to be a brain in a jar through virtual reality or whatever. I think whatever attempts we make at doing that would probably drive a person insane.


To that I can add from personal experience that cognitive capacity and muscular health are indeed tightly linked. I remember that at the peak of my form in the gym, I was able to code better and longer, and keep larger mental models in my head while doing it. Compare that to the period after a back injury, when I was unable to do any physically challenging activities, and I noticed a slump in cognitive abilities as well. So I can't see how, evolutionarily, a population with weaker and weaker bodies can keep producing highly capable individuals. And yes, of course there are plenty of cases such as Stephen Hawking and the like, but those are probably exceptions that prove the rule.


I generated a Rick and Morty set that was completely hilarious and played it against cardboard. It was pretty confusing though, because cardboard was white and the R&M set was dark. That isometric view is unplayable though!! It's hard to even click on the right piece, and analyzing the board becomes confusing.


I don't think it's about BSPs per se; it's about being able to use Brushes. Current Unreal Engine still supports these, I think (it's been a minute since I looked). It can be a nice prototyping workflow.


I hadn't heard of this feature before, but WTF? Who thought AI parsing everything you do on your system would be a good idea?! I want nothing to do with this "feature".

