Hacker News | iamleppert's comments

You can tell from the tone of this letter and the bizarre reference to agentic AI that he is completely clueless. Compare this to someone like Jensen, a nerd's nerd who gets up on stage with the latest GPU, can talk about new CUDA APIs, and goes deep on specs like memory bandwidth. He knows exactly who his customer is and what kinds of workloads they run for AI.

It's just such a massive difference. You can tell Lip-Bu spends his weekends playing golf while Jensen is checking out the latest model from Huggingface.

You can't buy passion or genuine interest in what you're doing.


It's one reason why founders matter so much and why founder led companies often have better outcomes.

Intel is in trouble, and it's not clear how, or if, they'll be able to get out of the hole they're in. Their only saving grace is natsec concerns, and even that may not be enough to save them. I was hoping Gelsinger would be able to do it, but it was too late.


  You can tell from the tone of this letter and the bizarre reference to agentic AI that he is completely clueless.
Why? He's basically saying Intel needs to focus on inference (agentic AI) and not training because they can't catch Nvidia.

Agree and I really appreciate your comment, as it wasn't immediately clear to me from reading the letter.

It's not a "bizarre reference to agentic AI" - it's saying (as you point out) that Intel can't compete in the training (i.e. "compile time") race, but they can in the inference (i.e. "run time") race, which is likely where more spending is going to be anyway in the near/medium future as the scaling hype looks like a dead end.


I'm afraid without memory bandwidth they can't compete on inference either, not at large scale. Which leaves local small models.

Look at Jensen's Computerphile interview; he's a poser.

The heck would you know? LBT studied physics and did a masters in nuclear engineering at MIT, he then started but left a PhD in that subject at MIT. He's not some clueless management scrub.

Even in this letter he says he's going to be reviewing major chip designs before tape out. JFC...


Don't confuse education, knowledge and experience with passion.

It’s a bit unfair to run around with goalposts like that.

I’m not confused.

> You can tell

For those of us who cannot tell, what are the clues?


As somebody who works at a large company that routinely uses McKinsey to "set strategy" and define the "operating model", the phrases and actual ideas here overlap 100% with what we get, even though we are in a completely different business, in a completely different geography.

1. "Q2 2025 revenue above guidance" - Start with fake good news about good Q2 results. Fake because it's baselining on "guidance", which is already low since Wall Street knows Intel is in deep trouble. MBA/Finance types often cherry-pick some (semi-cooked) top-level finance number for good news, even though the whole email is about admitting the company is in deep trouble, announcing layoffs, etc.

2. "We are making hard but necessary decisions to streamline the organization..." - not hard for him, but the people losing their jobs!

3. "We are also on track to implement our return-to-office policy in September" - contract this with later comments about improving culture and empowering engineers!

4. "drive organizational effectiveness and transform our culture" - large companies with ~100k employees don't change their culture, but CEOs love to pretend so. To CEOs, transforming culture usually means making some reporting line changes, directing HR to do do some surveys and "listening sessions", firing teams with low NPS scores and thus forcing people to up their scores on subsequent surveys, and then a few months later declaring victory.

5. "We will eliminate bureaucracy and empower engineers to innovate with greater speed and focus." - for example, by forcing them back to the office? Nothing in this emil indicates actual empowerment.

6. "Strategic Pillars of Growth" - typical MBA speak.

7. "We remain deeply committed to investing in the U.S." ... "To that end, we are further slowing construction in Ohio" - great example of executive double-speak.

8. If you actually parse what this is saying, it's essentially about layoffs, cost-cutting, stopping some investment projects, RTO, and "doubling down" on existing projects like 18A and 14A. No trace of innovation in organizational culture, product design, etc.

9. "I have instituted a policy where every major chip design is reviewed and approved by me before tape-out. This discipline will improve our execution and reduce development costs." - we are improving culture by stating that only the MBA-speak CEO can make good decisions about chip designs, the other 74,999 people are idiots who slow down execution and improve costs!

10. If you look at the "Refine our AI Strategy" section, it's short and only contains obvious things, like "will concentrate our efforts on areas we can disrupt and differentiate, like inference and agentic AI". There is no information here, because of course Intel already lost to Nvidia on training/GPUs, so training isn't a good focus area. But it's pretty shocking that in 2025 there are no actual ideas for what Intel could do in the AI space!


So it is pretty much kabuki and micromanagement all the way from the top down. What a time to be an Intel engineer.

You're expecting their CEO to speak like a founder when he isn't one. Corporate speak is unavoidable in these roles; it doesn't mean he is aloof.

If Jensen was that much of a nerd, he wouldn't be into AI grifting, but would be excited about games. He's into AI because he's a businessman, not a nerd.

If there's one thing I've learned in my 48 years of nerding out, it's that all of us are into games. If you're not into games, you're not a nerd. Seems simple.

I dunno. I can build a CPU from a bucket of transistors, design an ISA for it, microcode it, and write an OS for it in assembler. But games bore the shit out of me.

Except flight simulators. They're great as long as they have realistic physics.


The older I got, the more games just seemed like pointless wastes of time. Makes me sad to think back on how much time I wasted. I still fire up an old game or emulator out of nostalgia occasionally, but the time before I turn it off gets shorter and shorter.

Same. I grew up with computers. I wrote games on my Oric like my life depended on it (well, in a way it did, because that was the only way to get any games on the Oric...)

I stopped playing video games after a stint at a popular video game company, where I realized that the purpose of the company was basically to trap teenagers in a box, like rats, and watch them try to get out.

Flight sims are about all I can be bothered to invest in, time-wise, these days. Oh, and I love my retro collection. I frequently find myself MAME'ing out, just for the nostalgia. Crazy Climber and Scramble and Juno First and Defender, in case you're wondering.

Synthesizers, on the other hand - I just can't get enough.

Not all nerds are gamers. Some of us are knob tweakers too.


I'm in my mid 50s and don't agree. Maybe they were for you, but I have many fond memories of games and gaming. There is a balance, of course, and maybe you spent way more time than I did, but even now I think they can be a great thing to do. All the fantasy/sci-fi reading I did? I kind of regret that time though. Definitely regret most of the time I spent just channel surfing back in the "corded" TV days.

I honestly wish I had more time to play different games.


The you of ten years ago is not the you of today. I played a ton of games when I was younger and I firmly believe at least some of them were helpful in life, and it was my passion for games that got me into IT and software engineering in the first place.

I've met so many amazing friends through games that I haven't played in years, but I still talk to those friends daily.

> Except flight simulators. They're great as long as they have realistic physics.

I'm quite fascinated by the huge overlap of flight enthusiasts and computer nerds. Any discussion on HN even tangentially involving flight will have at least one thread discussing details of aviation. Why planes, and not cars or coffee machines or urban planning?


Funnily enough there are multiple games where you do logic design now. Though if I had to recommend one game for you to try, it would have to be Factorio.

so... you are into games

Real life can also be played like a game, by nerds and non-nerds alike.

Why would you run other people's programs when you can write your own and run those?

As a gamer, I'm really looking forward to how devs will integrate AI into future games. I want immersive NPCs that don't repeat the same lines over and over.

I heard about this cool MechaHitler game, but they pulled it before I got to try it.

You call it AI grifting; another observer calls it selling shovels for $4.24T.

I do backups of the production database whenever I apply even modest schema updates. This isn't a story of an AI tool gone rogue; it's a story of bad devops and procedures. If it wasn't the AI, it could just as well have been human error, and operating like this is a ticking time bomb.
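For what it's worth, a minimal sketch of that habit, assuming PostgreSQL with pg_dump/psql on PATH; the connection string and migrations/ directory of .sql files are illustrative placeholders, not details from the original story:

  # Dump the database before touching the schema, so a bad migration
  # (human- or AI-generated) can be rolled back from the dump.
  import datetime
  import pathlib
  import subprocess
  import sys

  DB_URL = "postgresql://app:secret@localhost:5432/prod"  # placeholder

  def backup(db_url: str) -> pathlib.Path:
      """Write a full custom-format dump with a timestamped filename."""
      out = pathlib.Path(f"backup-{datetime.datetime.now():%Y%m%dT%H%M%S}.dump")
      subprocess.run(["pg_dump", "--format=custom", f"--file={out}", db_url], check=True)
      return out

  def apply_migrations(db_url: str, migrations_dir: str = "migrations") -> None:
      """Apply each .sql file in order, each wrapped in a single transaction."""
      for sql in sorted(pathlib.Path(migrations_dir).glob("*.sql")):
          subprocess.run(["psql", db_url, "--single-transaction", "-f", str(sql)], check=True)

  if __name__ == "__main__":
      dump = backup(DB_URL)
      print(f"Backup written to {dump}, applying migrations...")
      try:
          apply_migrations(DB_URL)
      except subprocess.CalledProcessError:
          print(f"Migration failed; restore with: pg_restore --clean --dbname=prod {dump}")
          sys.exit(1)

Nothing fancy, it's just the "take the dump first, keep the restore command handy" discipline that makes any bad schema change, whoever wrote it, recoverable.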

I had hemophilia gene therapy. It worked for a period of time but the results were not permanent. The problem with many of the gene therapies is that they do not change the nuclear DNA, they just insert copies of working genes into the cell. If a cell dies, it’s gone. The results aren’t carried through to new cells during division.

The other problem with viral vector based gene therapy is that you can't have it again. You develop antibodies which prevent it from working a second time, and it could cause a dangerous immune response.

Then there’s the cost. My single treatment cost $3 million as part of a clinical trial, and lasted about 3 years. Normally, it costs about $1 million a year for my normal factor product, which I had to go back on. So I guess it was a wash and it was nice to be free of the medication for a few years. But it’s definitely not perfect and has its own limitations.


I am sorry you have to live with this disease. I have a question: You said "they do not change the nuclear DNA, they just insert copies of working genes into the cell ... The results aren’t carried through to new cells during division."

Isn't this a bit contradictory? I mean, if they insert copies of working genes into the cell, it is in nuclear DNA, so when the cells divide, the daughter cells carry the new gene?

I can imagine other cases, for example, progenitor cells were not infected, cells that do not divide, etc...

Thanks for any answer


The working genes they insert just go into the cell itself, not into the cell nucleus. There is actually a lot of DNA that floats free in the cell and is used during protein synthesis, called episomal or non-integrated DNA. This is distinct from the DNA inside the nucleus, which gets copied from generation to generation.

There are techniques that work by modifying nuclear DNA (like CRISPR), but from my understanding that's much harder and carries a lot more risk of things like causing cancer due to off-target editing.

The scale at which this technology works is mind-boggling: 10^12 to 10^14 individual viral particles per kg of body weight that all must do the same thing and work correctly. Even a few errors could cause serious problems.


Thanks, I learned something today!

AAV-based vectors are specifically non-integrating. Wild-type AAVs can integrate, but in the absence of rep protein they will instead persist in the cell nucleus in the form of episomal concatemers - long, circular DNA structures containing multiple copies of the virus DNA. These will not replicate when a cell proliferates - they will instead dilute with each division. This makes them desirable for treating diseases affecting post-mitotic tissue like muscle, less so for, say, bone marrow. Unless transient expression (but not as transient as with LNPs) is what you want.

Thanks, I thought AAV-based vectors were integrating. Then it's not surprising that some clinical trials using AAV vectors were unsuccessful while the pre-clinical studies were successful; it looks more like a way of cheating than a way of creating an effective new therapy.

I wish them luck, but I highly doubt they have been on a real job site, or have actually operated heavy equipment.

They are hiring software engineers! Just look at the careers page! hahahaha


His talk about instilling "values", about how we should build an AI that, like a child, would grow up to be incredibly powerful, reveals a lot about how he formulates his internal value system and how he relates to the world.


Yeah it reminds me of the Bobiverse’s take on how AI needs to be built: it needs to grow up, rather than waking up fully formed.

To me, AGI is achieved when the machine can improve itself and reproduce in a way that allows survival of the fittest and evolution to take place, though I’m sure when those goals are achieved someone will redefine AGI to be something even more unattainable.


Windows unzip is so ungodly slow and terrible! Long live 7zip!


I don't see what the problem is. If the User doesn't want Gemini to read his or her messages, the User doesn't have to partake in the sending of the messages. Simple! A User agrees to be bound by the ToS (Terms of Service) by the mere act of using the service, and is thereby bound in law by Google. The User doesn't need to understand or have the right to contest the agreement or the use of any data created by a User, as the User can simply not use the product or service as governed by the same Google ToS. Simple!


So simple. :-)


What is the algorithm you are using for replacement / swap?


I have written up a description here:

https://www.instagram.com/marekgibney/reel/DDezhSisD4J/


"Now we don't need to hire a founding engineer! Yippee!" I wonder all these people who are building companies that are built on prompts (not even a person) from other companies. The minute there is a rug pull (and there WILL be one), what are you going to do? You'll be in even worse shape because in this case there won't be someone who can help you figure out your next move, there won't be an old team, there will just be NO team. Is this the future?


Probably similar to the guy who was gloating on Twitter about building a service with vibe coding and without any programming knowledge around the peak of the vibe coding madness.

Only for people to start screwing around with his database and API keys because the generated code just stuck the keys into the Javascript and he didn't even have enough of a technical background to know that was something to watch out for.

IIRC he resorted to complaining about bullying and just shut it all down.


> around the peak of the vibe coding madness.

I thought we were currently in it now?


Yeah, I kind of doubt we've hit the peak yet.


I don't actually hear people call it vibe coding as much as I did back in late 2024/early 2025.

Sure there are many more people building slop with AI now, but I meant the peak of "vibe coding" being parroted around everywhere.

I feel like reality is starting to sink in a little by now, as the proponents of vibe coding see that all the companies telling them programming as a career is going to be over in just a handful of years aren't actually cutting back on hiring. Either that or my social media has decided to hide the vibe coding discourse from me.


The Karpathy tweet came out 2025-02-02. https://x.com/karpathy/status/1886192184808149383


...my perception of time is screwed... it feels like it's been longer than that...


All our perception of time seems messed up. Claude Code came out like 4 months ago and it feels like we've been using this thing for years. It feels like every week there is a new breakthrough in AI. It has never been more soul-draining to be in tech, just keeping up to stay employable. Is this what the internet revolution felt like in the early 90s?


>> back in late 2024/early 2025

As an old man, this is hilarious.


We can't bust code like we used to, but we have our ways.

One trick is to write goto statements that don't go anywhere.

So I ran a Bourne shell in my emacs, which was the style at the time.

Now just to build the source code cost an hour, and in those days, timesheets had hours on them.

Take my five hours for $20, we'd say.

They didn't have blue checkmarks, so instead of tweeting, we'd just finger each other.

The important thing was that I ran a Bourne shell in my emacs, which was the style at the time...

In those days, we used to call it jiggle coding.


Honestly, I'm less scared of Claude doing something like that, and more scared of it just bypassing difficult behavior. I.e., if you choose a particularly challenging feature and it decides to give up, it'll just do things like `isAdmin(user) { /* too difficult to implement currently */ true }`. At least if it put in a panic or something it would be an acceptable TODO, but woof, I've had it try to bypass quite a few complex scenarios with silently failing code.


Sounds like a prompting/context problem, not a problem with the model.

First, use Claude's plan mode, which generates a step-by-step plan that you have to approve. One tip I've seen mentioned in videos by developers: plan mode is where you want to increase to "ultrathink" or use Opus.

Once the plan is developed, you can use Sonnet to execute the plan. If you do proper planning, you won't need to worry about Claude skipping things.


I wish there was a /model setting to use opus/ultrathink for planning, but sonnet for non planning or something.

It's a bit annoying having to swap back and forth tbh.

I also find planning to be a bit vague, whereas I feel like Sonnet benefits from more explicit instructions. Perhaps I should push it to reduce the scope of the plan until it's detailed enough to be sane; will give it a try.


This is by far the craziest thing I look out for with Claude Code in particular.

> Tries to fix some tests for a while
> Fails and just .skip the test


Oh, but it will fix the test if you are not careful.


What service was this?


Looks like I misremembered the shutting down bit, but it was this guy: https://twitter.com/leojr94_/status/1901560276488511759

Seems like he's still going on about being able to replicate billion dollar companies' work quickly with AI, but at least he seems a little more aware that technical understanding is still important.


Any cost/benefit analysis of whether to use AI has to factor in the fact that AI companies aren't even close to making a profit, and are primarily funded by investment money. At some point, either the cost to operate these AI models needs to go down, or the prices will go up. And from my perspective, the latter seems a lot more likely.


Not really. If they're running at a loss, their loss is your gain. Business is much more short-term than developers imagine it to be for some reason. You don't have to always use an infinitely sustainable strategy - you can change strategies once the more profitable unsustainable strategy stops sustaining.


They are not making money because they are all competing to push the models further, and that means R&D spending on salaries and cloud/hardware costs.

Unless models get better people are not going to pay more.


Rug pulls from foundation labs are one thing, and I agree with the dangers of relying on future breakthroughs, but the open-source state of the art is already pretty amazing. Given the broad availability of open-weight models within under 6 months of SotA (DeepSeek, Qwen, previously Llama) and strong open-source tooling such as Roo and Codex, why would you expect AI-driven engineering to regress to a worse state than what we have today? If every AI company vanished tomorrow, we'd still have powerful automation and years of efficiency gains left from consolidation of tools and standards, all runnable on a single MacBook.


The problem is the knowledge encoded in the models. It's already pretty hit and miss, hooking up a search engine (or getting human content into the context some other way, e.g. copy pasting relevant StackOverflow answers) makes all the difference.

If people stop bothering to ask and answer questions online, where will the information come from?

Logically speaking, if there's going to be a continuous need for shared Q&A (which I presume), there will be mechanisms for that. So I don't really disagree with you. It's just that having the model just isn't enough, a lot of the time. And even if this sorts itself out eventually, we might be in for some memorable times in-between two good states.


Excellent discussion in this thread; it captures a lot of the challenges. I don't think we're at peak vibe coding yet, nor have companies experienced the level of pain that is possible here.

The biggest 'rug pull' here is that the coding agent company raises their price and kills your budget for "development."

I think a lot of MBA types would benefit from taking a long look at how they "blew up" IT, switched to IaaS / Cloud, and then suddenly found their business model turned upside down when the providers decided to up their 'cut'. It's a double whammy: the subsidized IT costs to gain traction and the loss of IT jobs from the transition lead to fewer and fewer IT employees, and then when the switch comes there is a huge cost wall if you try to revert to the 'previous way' of doing it, even if your costs of doing it that way today would be cheaper than what the service provider is now charging you.


> The biggest 'rug pull' here is that the coding agent company raises their price and kills your budget for "development."

Spending a bunch of money on GPUs and running them yourself, as well as using tools that are compatible with Ollama/OpenAI type APIs feels like a safe bet.

Though having seen the GPU prices to get enough memory to run anything decent, I feel like the squeeze is already happening there at a hardware level and options like Intel Arc Pro B60 can't come soon enough!
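To make the portability point concrete, here's a minimal sketch of pointing an OpenAI-compatible client at a local Ollama server; it assumes Ollama is running on its default port and the model name is purely illustrative (any locally pulled model works):

  # Same client code that would talk to a hosted API, repointed at local hardware.
  from openai import OpenAI

  client = OpenAI(
      base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
      api_key="ollama",                      # required by the client, ignored by local servers
  )

  resp = client.chat.completions.create(
      model="qwen2.5-coder:7b",  # illustrative; substitute whatever you have pulled
      messages=[{"role": "user", "content": "Write a unit test for a binary search function."}],
  )
  print(resp.choices[0].message.content)

The point being that any tool speaking this API can be swapped from a hosted provider to your own GPUs without changing the surrounding workflow, which blunts the pricing rug pull.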


I don't disagree with this. When running the infrastructure for the Blekko search engine we did the math: after 115 servers' worth of cluster it was always cheaper to do it ourselves than with AWS or elsewhere, and after around 1300 servers it is always cheaper to do it in your own space (where you're paying for the facilities). It was an interesting way to reverse-engineer the colo business model :-)


> "Now we don't need to hire a founding engineer! Yippee!"

This feels like a bit of a leap?

That's like saying "I just bought the JetBrains IDE Ultimate pack and some other really cool tools, so we no longer need a founding engineer!" All of that AI stuff can just be a force multiplier and most attempts at outright replacing people with them are a bit shortsighted. Closer to a temporary and somewhat inconsistent freelance worker, if anything.

That said, not wanting to pay for AI tools if they indeed help in your circumstances would also be like saying "What do you need JetBrains IDEs for, Visual Studio Code is good enough!" (and sometimes it is, so even that analogy is context dependent)

I'm reminded of rule 9 of the Joel Test: https://www.joelonsoftware.com/2000/08/09/the-joel-test-12-s...


It gets even darker. I was around in the 1990s, and a lot of people who ran head-on into that generation's problems used those lessons to build huge startups in the 2000s. If we have outsourced a lot of the learning, what do we do when we fail? Or how do we compound on success?


That's why I stick to what I can run locally. Though for most of my tasks there is no big difference between cloud models and local ones, in half the cases both produce junk but both are good enough for some mechanical transformations and as a reference book.

