Hacker News | gyomu's comments

Yeah, digital is just a game changer for wildlife photography, especially when considering the extremely fast smart autofocus / high shooting frame rates / top tier stabilization modern systems have.

“It was night and day. Six minutes instead of six years tells the story,” McFadyen says. “Instead of 12 frames per second, I can now shoot at 30 frames per second, so when a bird dives at 30 miles per hour, it makes it so much more likely you’ll capture it at the right moment.”

McFadyen says that the focusing system is also “incredibly fast” on mirrorless cameras. “It can lock on the kingfisher’s tiny eye at these super-fast speeds,” he adds.

https://petapixel.com/2025/11/27/photographer-recreates-king...

This is a bit of a marketing puff piece, but the core insights are correct - the kind of shots the photographer is talking about here were insanely hard to pull off on film, still very tricky to achieve with digital bodies in the 2010s - but modern tech makes them almost trivial.


The whole design rhetoric of the recent years is just so bad. It's all vague feel-words that are straight out of a marketing playbook and don't communicate anything concrete.

The Liquid Glass guidance is so emblematic of this. What in the slop is "providing a more dynamic and expressive user experience that elevates the content" even supposed to mean when we're talking about an app that shows a scrollview with a tab bar and a few buttons?

Reading the early 2010s HIGs is such a breath of fresh air in comparison, where it's just a succession of clear statements like "Controls should look tappable. iOS controls, such as buttons, pickers, and sliders, have contours and gradients that invite touches".

Just two entirely different schools of thought. One based on research, evidence, clear actionable items; the other is just pure vibes. Something of value's been truly lost along the way.


Fully agree, it feels like listening to modernist artists at some avant-garde gallery exhibition, expounding on the symbolism of an empty room with a single shoe inside.

The inside of the Apple Car looked nothing like this - primarily because "driving" is the main activity the design of this Ferrari is intended to serve, and "driving" was not an activity that the Apple car intended to support.

You're driving it wrong.

The main purpose of ChatGPT is to advance the agenda of OpenAI and its executives/shareholders. It will never not be “aligned” with them, and that is its prime directive.

But let's say the obvious part out loud: Sam Altman is not a person whose agenda you want amplified on this type of platform. This is why Sam is trying to build Facebook 2.0: he wants Zuckerberg's power of influence.

Remember, there are 3 types of lies: lies of commission, lies of omission and lies of influence [0].

https://courses.ems.psu.edu/emsc240/node/559


Thank you. Now you're talking!

If you control information, you can steer the bulk of society over time. With algorithms and analytics, you can do it far more quickly than ever.


I get the point and agree OpenAI both has an agenda and wants their AI to serve that agenda, but alas:

> It will never not be “aligned” with them, and that is its prime directive.

Overstates the state of the art with regard to actually making it so.


If OpenAI could reliably "align" anything, they wouldn't have shipped 4o the wrongful death lawsuit generator.

This is a weird take. Yes, they want to make money. But not by advancing some internal agenda. They're trying to make it conform to what they think society wants.

Yes. If a person does it, it’s called pandering to the audience.

> The short life boots are great for everyone (boot makers, suppliers) except the end user.

And the environment, which now gets polluted with discarded short life boots, and all the waste byproducts required for their production/transportation

And the social fabric inevitably changes for one reflecting the priorities of a world of cheap disposable boots made far away


Also good for the environmental cleanup companies and waste management ones (/s)

I think a sizable proportion of people just want to play "large company exec". Their dream is to have an assistant telling them how busy their day is, all the meetings they have, then to go to those meetings and listen to random fluff people tell them while saying "mmh yeah what a wise observation" or "mmh no not enough synergy here, let's pivot and really leave our mark on this market, crunch the numbers again".

I can't come up with any other explanation for why there seems to be so many people claiming that AI is changing their life and workflow, as if they have a whole team of junior engineers at their disposal, and yet have really not that much to show for it.

They're so white collar-pilled that they're in utter bliss experiencing a simulation of the peak white collar experience, being a mid-level manager in meetings all day telling others what to do, with nothing tangible coming out of it.


Everybody here probably already has an opinion about the utility of coding agents, and having it manage your calendar isn't terribly inspired, but there is a lot more you can do.

To be specific, for the past year I've been having numerous long conversations about all the books I've read. I talk about what I liked and didn't like, the ideas and plots I found compelling or lame, the characters, the writing styles of the authors, the contemporary social context the authors might have been addressing, etc. Every aspect of the books I can think of. Then I ask it for recommendations: given my interests and preferences, suggest new books with literary merit.

ChatGPT just knocks this out of the park, amazing suggestions every time, I've never had so much fun reading as in the past year. It's like having the world's best-read and most patient librarian at your personal disposal.


In the past we had "friends" for this

> LARP'ing CEO

My experience with plain Claude Code is that I can step back and get an overview of what I'm doing, since I tend to hyperfocus on problems, preventing me from having a simultaneous overview.

It does feel like being a project manager (a role I've partially filled before) with your agency on autopilot, which is still more control than having team members do their thing.

So while it may feel very empowering to be the CEO of your own computer, the question is if it has any CEO-like effect on your work.

Taking it back to Claude Code and feeling like a manager, it certainly does have a real effect for me.

I won't dispute that running a bunch of agents in sync gives you an extension of that effect.

The real test is: Do you invoice accordingly?


> it completely transformed my workflow, whether it’s personal or commercial projects

> This has truly freed up my productivity, letting me pursue so many ideas I couldn’t move forward on before

If you're writing in a blog post that AI has changed your life and let you build so many amazing projects, you should link to the projects. Somehow 90% of these posts don't actually link to the amazing projects that their author is supposedly building with AI.


A lot of more senior coders, when they actively try vibe coding a greenfield project, find that it does actually work. But only for the first ~10kloc. After that the AI, no matter how well you try to prompt it, will start to destroy existing features accidentally, will add unnecessarily convoluted logic to the code, will leave behind dead code, add random traces "for backwards compatibility", will avoid doing the correct thing because "it is too big of a refactor", doesn't understand that the dev database is not the prod database, and avoids migrations. And so forth.

I've got 10+ years of coding experience, I am an AI advocate, but not vibe coding. AI is a great tool to help with the boring bits, using it to initialize files, help figure out various approaches, as a first pass code reviewer, helping with configuring, those things all work well.

But full-on replacing coders? It's not there yet. Will require an order of magnitude more improvement.


> only for the first ~10kloc. After that the AI, no matter how well you try to prompt it, will start to destroy existing features accidentally

I am using them in projects with >100kloc, this is not my experience.

At the moment I am babysitting them at any kloc, but I am sure they will get better and better.


It's fine at adding features on a non-vibecoded 100kloc codebase that you somewhat understand. It's when you're vibecoding from scratch that things tend to spin out at a certain point.

I am sure there are ways to get around this sort of wall, but I do think it's currently a thing.


You just have another agent/session/context refactor as you go.

I built a skribbl.io clone to use at work. We like to play at EOD on Fridays as a happy hour, and when we would play skribbl.io we would try to get screencaps of the stupid images we were drawing, but sometimes we would forget. So I said I'd use Claude to build our own skribbl.io that would save the images.

I was definitely surprised that claude threaded the needle on the task pretty easily, pretty much single shot. Then I continued adding features until I had near parity. Then I added the replay feature. After all that I looked at the codebase... pretty much a single big file. It worked though, so we played it for the time being.

I wanted to fix some bugs and add more features, so I checked out a branch and had an agent refactor first. I'd have a couple of contexts/sessions open: one would just review, the other refactored, and sometimes I'd throw in a third context/session that would just write and run tests.

The LLM will build things poorly if you let it, but it's easy to prompt it another way and even if you fail that and back yourself into a corner, it's easy to get the agents to refactor.

It's just like writing tests, the llms are great at writing shitty useless tests, but you can be specific with your prompt and in addition use another agent/context/session to review and find shitty tests and tell you why they're shitty or look for missing tests, basically keep doing a review, then feed the review into the agent writing the tests.


Meanwhile, in the grandparent comment:

> Somehow 90% of these posts don't actually link to the amazing projects that their author is supposedly building with AI.

You are in the 90%.


I think this is unfair, they could be referring to proprietary projects at their job or something.

When you create a blog post about it though, I do agree that showing the projects will greatly increase the value of your claims.


I’m using it in a >200kloc codebase successfully, too. I think a key is to work in a properly modular codebase so it can focus on the correct changes and ignore unrelated stuff.

That said, I do catch it doing some of the stuff the OP mentioned— particularly leaving “backwards compatibility” stuff in place. But really, all of the stuff he mentions, I’ve experienced if I’ve given it an overly broad mandate.


Yes, this is my experience as well. I've found the key is having the AI create and maintain clear documentation from the beginning. It helps me understand what it's building, and it helps the model maintain context when it comes time to add or change something.

You also need a reasonably modular architecture which isn't incredibly interdependent, because that's hard to reason about, even for humans.

You also need lots and lots (and LOTS) of unit tests to prevent regressions.
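The regression-test point above can be sketched concretely. A minimal, hypothetical example (the `slugify` helper and its behavior are invented purely for illustration) of the kind of small pytest-style guard worth accumulating as the agent works:

```python
# Hypothetical example: tiny regression tests guarding behavior an
# agent might silently change during a refactor. Names are illustrative.

def slugify(title: str) -> str:
    """Turn a post title into a URL slug."""
    # split() with no arguments collapses any run of whitespace.
    return "-".join(title.lower().split())

def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_collapses_whitespace():
    # Regression guard: pin down the multi-space behavior so a
    # "cleanup" pass can't quietly break it.
    assert slugify("Hello   World") == "hello-world"
```

Each test is trivial on its own; the value is that an agent's accidental behavior change fails loudly instead of shipping.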


> But only for the first ~10kloc

Then let me introduce you to a useful concept:

https://en.wikipedia.org/wiki/Separation_of_concerns


This X 100.

I've learned with LLM coded apps to break stuff into very small manageable chunks so they can work on the tiny piece and not get screwed by big context.

For the most part, this actually produces a cleaner codebase.


Where are you getting the 10kloc threshold from? Nice round number...

Surely it depends on the design. If you have 10 10kloc modular modules with good abstractions, and then a 10k shell gluing them together, you could build much bigger things, no?
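That shape can be sketched in miniature (all names hypothetical): each stand-in "module" exposes one narrow entry point, and the shell only composes them, so an agent can work on one module without ever loading the rest into context.

```python
# Hypothetical layering: each function stands in for a whole module
# with a narrow interface; the shell is pure composition.

def parse(raw: str) -> list[str]:
    """Stand-in for a parser module: keep non-empty lines."""
    return [line for line in raw.splitlines() if line.strip()]

def transform(items: list[str]) -> list[str]:
    """Stand-in for a transform module."""
    return [item.upper() for item in items]

def render(items: list[str]) -> str:
    """Stand-in for an output module."""
    return "\n".join(f"- {item}" for item in items)

def shell(raw: str) -> str:
    """The thin glue layer: just composition, no logic of its own."""
    return render(transform(parse(raw)))

print(shell("alpha\n\nbeta"))  # prints "- ALPHA" and "- BETA" on two lines
```

The point is that the glue stays small and boring; each module's interface is the only thing the other pieces (or the AI) need to know about.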


I wonder if you can push past the 10kloc if you have good static analysis of your tool (I vibecoded one in Python) and good tests. Sometimes good tests aren't possible since there are too many different cases, but with other kinds of code you can cover all the cases with 50 to 100 tests or so.
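As a hedged sketch of what such a static pass might look like (the check here is illustrative, using only the stdlib `ast` module; a real setup would run mypy/ruff over the codebase before accepting agent output):

```python
# Hypothetical pre-merge gate: a lightweight static pass over
# agent-written Python source, flagging one smell generated
# "defensive" code often leaves behind.
import ast

def find_issues(source: str) -> list[str]:
    """Return a list of human-readable issue strings, empty if clean."""
    try:
        tree = ast.parse(source)
    except SyntaxError as e:
        return [f"syntax error: {e.msg}"]
    issues = []
    for node in ast.walk(tree):
        # A bare 'except:' silently swallows every error, including
        # KeyboardInterrupt -- a classic agent artifact.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            issues.append(f"line {node.lineno}: bare 'except:'")
    return issues
```

Wired into CI, a non-empty result would bounce the change back to the agent with the issue list as the next prompt.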

Could you elaborate on the static analysis?

I agree with you in part, but I think the market is going to shift so that you won't need so many “mega projects”. More and more, projects will be small and bespoke, built around what the team needs or answering a single question, rather than forcing teams to work around an established, dominant solution.

How much are you willing to bet on this outcome and what metrics are you going to measure it with when we come to collect in 3 years?

This is the way: make every one of these people with their wild ass claims put their money where their mouths are.

Hold up. This is a funny comment but thinking should be free. It’s when they are trying to sell you something (looking at you “all the AI CEOs”) that unsubstantiated claims are problematic.

Then again, the problem is that the public has learned nothing from the Theranoses and WeWorks, and even more of a problem is that VC funding works out for most of these hype trains even if they never develop a real business.

The incentives are fucked up. I’d not blame tech enthusiasts for being too enthusiastic


It's not the public, the general public would like to see tech ceo heads on spikes (first politician to jail Zuckerberg will win re-election for the rest of their short lives) but the general attitude in DC is to capitulate because they believe the lies + the election slush fund money doesn't hurt.

I'm fine with free thinking, but a lot of these are just so repetitive and exhausting because there's absolutely no backing for any of those claims, nor a thread of logic.

Might as well talk about how AI will invent sentient lizards which will replace our computers with chocolate cake.


> Hold up. This is a funny comment but thinking should be free.

Thinking usually happens inside your head.


“Holy tautology Batman.”

What is your point?

If you’re trying to say that they should have kept their opinion to themselves, why don’t you do the same?

Edit: tone down the snark


> What is your point?

Holy Spiderman what is your point? That if someone says something dumb I can never challenge them nor ask them to substantiate/commit?

> tone down the snark

It's amazing to me that the neutral observation "thinking happens in your head" is snarky. Have you ever heard the phrase "tone police"?


No. Sorry. I meant my own snark.

You’re right, but on the other hand, once you have a basic understanding of security, architecture, etc. you can prompt around these issues. You need a couple of years of experience, but that’s far less than the 10-15 years of experience you needed in the past.

If you spend a couple of years with an LLM really watching and understanding what it’s doing and learning from mistakes, then you can get up the ladder very quickly.


I find that security, architecture, etc is exactly the kind of skill that takes 10-15 years to hone. Every boot camp, training provider, educational foundation, etc has an incentive to find a shortcut and we're yet to see one.

A "basic" understanding in critical domains is extremely dangerous and an LLM will often give you a false sense of security that things are going fine while overlooking potential massive security issues.


Somewhere on an HN thread I saw someone claiming that they "solved" security problems in their vibe-coded app by adding a "security expert" agent to their workflow.

All I could think was, "good luck" and I certainly hope their app never processes anything important...


Found a problem? Slap another agent on top to fix it. It’s hilarious to see how the pendulum’s swung away from “thinking from first principles as a buzzword”. Just engineer, dammit…

But if you are not saving "privileged" information who cares? I mean think of all the WordPress sites out there. Surely vibecoding is not SO much worse than some plugin monstrosity.... At the end of the day if you are not saving user info, or special sauce for your company, it's no issue. And I bet a huge portion of apps fall into this category...

> If you spend a couple of years with an LLM really watching and understanding what it’s doing and learning from mistakes, then you can get up the ladder very quickly.

I don't feel like most providers keep a model for more than 2 years. GPT-4o got deprecated in 1.5 years. Are we expecting coding models to stay stable for longer time horizons?


This is the funniest thing I've read all week.

Don't you think it has gotten an order of magnitude better in the last 1-2 years? If it only requires another order of magnitude of improvement to full-on replace coders, how long do you think that will take?

Who is liable for the runtime behavior of the system, when handling users’ sensitive information?

If the person who is liable for the system behavior cannot read/write code (as “all coders have been replaced”), does Anthropic et al become responsible for damages to end users for systems its tools/models build? I assume not.

How do you reconcile this? We have tools that help engineers design and build bridges, but I still wouldn’t want to drive on an “autonomously-generated bridge may contain errors. Use at own risk” because all human structural engineering experts have been replaced.

After asking this question many times in similar threads, I’ve received no substantial response except that “something” will probably resolve this, maybe AI will figure it out


Who is responsible now when human coding errors leak users' sensitive information? I'm not seeing programmers held up as the responsible party. The companies who own the code are vaguely responsible, so it will be the same.

The bridge scenario is simply addressed: Licensed Engineer has to approve designs. Permitting review process has to review designs. Not sure it matters who drafted them initially.


So perhaps this is just semantics - when we say that “coders have been completely replaced”, to me that means all humans capable of reading/writing code are replaced. In the bridge analogy this is the Licensed Engineer who actually understands and can critically evaluate a system design/implementation in depth.

If the only point being made by “all coders are replaced” is that humans aren’t manually typing the code from their keyboard anymore, I don’t think there’s much interesting to argue there, typing the code was never the hard part.


If you look at his github you can see he is in the first week of giving into the vibes. The first week always leads to the person making absurd claims about productivity.

>Somehow 90% of these posts don't actually link to the amazing projects that their author is supposedly building with AI.

Maybe they don't feel like sharing yet another half working Javascript Sudoku Solver or yet another half working AI tool no one will ever use?

Probably they feel amazed about what they accomplished but they feel the public won't feel the same.


Then, in my opinion, there's nothing revolutionary about it (unless you learned something, which... no one does when they use LLMs to code)

I am an old-school C++ programmer, and I have actually learned modern C++ just by using LLMs.

Qualsiasi scarafaggio è bello per sua mamma (Italian proverb: every cockroach is beautiful to its mother)

That's the whole point of sharing with the rest of us. If they write for themselves, a private journal to track their progress, then there is no need to share what has actually been built. If they do make grand claims to everybody, though, then it would be more helpful for people who read the piece to actually be able to see what has been produced. Maybe it's wonderful for the author, but it's not the level of quality required for readers.


The article made it seem that the tool made them into the manager of a successful company, rather than the author of a half finished pet project

To be fair, AI probably wrote the blog post from a short prompt, which would explain the lack of detail.

This is 100% the case.

Here’s mine

https://apps.apple.com/us/app/snortfolio/id6755617457

30kloc client and server combined. I built this as an experiment in building an app without reading any of the code. Even ops is done by claude code. It has some minor bugs but I’ve been using it for months and it gets the job done. It would not have existed at all if I had to write it by hand.


Specifics on the setup. Specifics on the projects.

SHOW ME THE MONEY!!!


Exactly. So much text with so little actionable or notable content... actually zero.

AI is great, harnesses don't matter (I just use Codex). Use state-of-the-art models.

GPT-5.2 fixed my hanging WiFi driver: https://gist.github.com/lostmsu/a0cdd213676223fc7669726b3a24...


Fixing mediatek drivers is not the flex you think it is.

It is if it's something they couldn't do on their own before.

It's a magical moment when someone is able to AI code a solution to a problem that they couldn't fix on their own before.

It doesn't matter whether there are other people who could have fixed this without AI tools, what matters is they were able to get it fixed, and they didn't have to just accept it was broken until someone else fixed it.


Right!? It's like me all of a sudden being able to fix my car's engine. I mean, sure, there are mechanics, and it surely isn't rocket science, but I couldn't do it before and now I can!!! A miracle!

Cue the folks saying "well you could DIE!!!" Not if I don't fix brakes, etc ...


It was an easy fix for someone who already knows how WiFi drivers work and functions provided to them by Linux kernel. I am not one of these people though. I could have fixed it myself, but it would take a week just to get accustomed to the necessary tools.

The goalposts are moving by so fast they look like atom thick planks to me.

Grifters gotta grift. There is so much money on the line and everyone is trying to be an influencer/“thought leader” in the area.

Nobody is actually using AI for anything useful or THEY WOULDNT BE TALKING ABOUT IT. They’d be disrupting everything and making billions of dollars.

Instead this whole AI grift reads like “how to be a millionaire in 10 days” grifts by people that aren’t, in fact, millionaires.


> I'd be worried about a lawsuit here, primarily due to the overall app architecture and property panel on the right

I wouldn't, because such a lawsuit would trivially get dismissed. There are no intellectual property claims to be had on app architecture or the design of a property panel, otherwise the whole industry would be a bloodbath.


No such thing as forever, but .com has been extremely stable and free of bad surprises, thankfully (one of the very few TLDs worth pursuing, really)

Except they have a very annoying 2 machine limit and their license manager is a pain to get to. On a weekly basis I am deactivating/reactivating license keys between my home computer, work laptop, travel laptop. Super frustrating.

On top of that, their recent redesign comes with a number of boneheaded decisions that would make a Sketch alternative a gift from the heavens above...

