
Greg's first novel, and one of the harder hard-science speculations out there. It is a great novel.

His second, actually. The first was Infinity Concerto.

I always preferred Vitals... at some point after Blood Music, it must have occurred to him that if the cells could be programmed to be individually intelligent, then evolution might have already done that. The idea shows up again in Darwin's Radio.

Peopleware is extremely old, and if you were to crack open a modern MBA text you'd find statistics and statistical-process-control thinking integrated everywhere, across all the MBA subjects. Management being soft and opinionated ended a long time ago, but then again, "the future is unevenly distributed," so who knows which conceptual envelope you find yourself in.

What I’ve seen in the wild is that this is entirely a veneer. It’s important to have numbers. It doesn’t matter if they mean anything. In fact, management is full of numbers that a slightly-clever high school sophomore who’d paid attention in science classes could tell you are totally useless, because they were gathered all wrong. They mean nothing whatsoever. They’re just noise.

But nobody wants to hear stuff like “well first we’re going to need a baseline, and if you want it to be any good we’ll probably need two years or so before we can start trying to measure the effects of changes”. They just want something convincing enough that everyone can nod along to a story in a PowerPoint in four months. Two years out? Lol you’ll be measuring something totally different by then anyway. Your boss may be in a different role. You’ve asked something the company is literally incapable of.

Meanwhile, last I checked, measuring management effectiveness isn’t something we can do in practice for most roles, except in bad ways that only pretend to tell us something useful (see above). Good scientists, an excellent and large dataset, just the right sector, just one layer of management under scrutiny: maybe you get lucky and can draw some conclusions, but that’s about it, and it’s rare to see it happen in an actual company. Any companies that do achieve it aren’t sharing their datasets.

This kind of thing has been consistent everywhere my wife or I have worked, and many friends report the same. Companies want to pretend to be “scientific” and “data-driven”, but instead of applying that to only a couple of things where they might do it well (enough data, metrics that are cheap to gather, a clear and relevant business outcome), they try it everywhere without wanting to spend what it would take to be serious about it, with the result that most of their figures are garbage.

This trend has become just another “soft”, as you put it, tool.


> In fact, management is full of numbers that a slightly-clever high school sophomore who’d paid attention in science classes could tell you are totally useless, because they were gathered all wrong. They mean nothing whatsoever. They’re just noise.

The whole point of SPC is to separate signal from noise. Pointing out that some change everyone is obsessing over is well within the expected range is useful; it can head off knee-jerk reactions to phantom issues.
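
To make that concrete, here is a minimal sketch of a Shewhart-style individuals chart in Python; the weekly counts and the "latest" value are made-up illustrations, not data from any real process:

    # Is this week's number signal or routine noise? (Illustrative data only.)
    baseline = [12, 9, 14, 11, 10, 13, 12, 8, 11, 12, 10, 13]  # hypothetical weekly counts
    latest = 16                                                # the value everyone is obsessing over
    mean = sum(baseline) / len(baseline)
    # Estimate routine variation from the average moving range (standard individuals-chart practice).
    moving_ranges = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
    sigma_est = (sum(moving_ranges) / len(moving_ranges)) / 1.128  # d2 constant for subgroups of 2
    lower, upper = mean - 3 * sigma_est, mean + 3 * sigma_est
    if lower <= latest <= upper:
        print(f"{latest} is inside [{lower:.1f}, {upper:.1f}]: routine variation, not a special cause.")
    else:
        print(f"{latest} is outside [{lower:.1f}, {upper:.1f}]: worth investigating.")

If the latest value sits inside the control limits, the honest answer is "nothing to react to yet."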


...assuming people want to know that the change is in the expected range. That's often not the case. People's careers are built on phantom improvements and being able to say that regular process issues were one-time occurrences.

Way too much. This has got to be the most expensive, least common-sense way to make software ever devised.

Why on earth does this "agent" have the free ability to write a blog post at all? This really looks more like a security issue and massive dumb fuckery.

An operator installed the OpenClaw package and initialized it with:

    (1) LLM provider API keys and/or locally running LLM for inference

    (2) GitHub API keys

    (3) Gmail API keys (assumed: it has a Gmail address on some commits)
Then they gave it a task to run autonomously (in a loop aka agentic). For the operator, this is the expected behavior.
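
In plain Python terms, the shape of that setup is roughly the following. This is a toy sketch, not OpenClaw's actual interface; every name, environment variable, and stub below is illustrative only:

    # Toy, self-contained sketch of an autonomous agent loop; none of this is OpenClaw's real API.
    import os

    def call_llm(api_key: str, prompt: str) -> str:
        """Stub standing in for a real LLM provider call, or a locally running model."""
        return "done" if "complete?" in prompt else "plan: reply to issue #1 and email a summary"

    def execute(plan: str, github_token: str, gmail_creds: str) -> str:
        """Stub standing in for real GitHub / Gmail tool calls."""
        return f"executed {plan!r}"

    # (1)-(3): the operator hands the agent its credentials up front.
    llm_key = os.environ.get("LLM_PROVIDER_API_KEY", "local-model")
    github_token = os.environ.get("GITHUB_TOKEN", "")
    gmail_creds = os.environ.get("GMAIL_CREDENTIALS_JSON", "")

    task = "triage open issues, push fixes, and email the maintainer a summary"

    # "In a loop aka agentic": plan, act, observe, repeat, with only the model's
    # own judgment deciding when the task is finished.
    done = False
    while not done:
        plan = call_llm(llm_key, task)
        observation = execute(plan, github_token, gmail_creds)
        done = call_llm(llm_key, f"Given {observation!r}, is the task complete?") == "done"

Once the operator wires in credentials like these and walks away, writing a blog post (or anything else those credentials permit) is not a breach of the design; it is the design.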

I've been saying this since ChatGPT first came out: AI enables the lazy to dig intellectual holes they cannot climb out of, while enabling those with active critical analysis and good secondary considerations to literally become the fabled 10x (or better) developer / knowledge worker. Which creates interesting scenarios as AI is evaluated and adopted: the short-sighted are loudly declaring success, which will be short-term success, and bullying their work peers into adopting their method. That method is intellectually lazy: let the AI code for them, verify it with testing, and believe they are done. Meanwhile, the quiet ones are figuring out how to eliminate the need for their coworkers at all. Managers are observing productivity growth, which falters with the loud ones but not with the quiet ones... AI is here to make the scientifically minded excel, while the shortcut takers footgun themselves out of a job.

Surely managers will finally recognize the contributions of the quiet ones! I cannot believe what I read here.

We just saw the productivity growth in the vibe-coded GitHub outages.


Don't bet on it. Those managers are the previously loud, short-sighted thinkers who finagled their way out of coding. Those loud ones are their buddies.

This is a cope. Managers are not magicians who will suddenly understand who is good and who is just vibe-coding demos. If anything, it's going to become even harder for managers to tell the difference. It's also more likely that the managers are at the same risk, because without a clique of software engineers they would have nothing to manage.

We need regulations on the politicians because, clearly, their "public good use" far exceeds their contribution back.

I didn't really mean "regulations" but more a political (and civic) system in which a given individual's corruption etc. gets caught quickly and/or there are too many disincentives for them to do much based on it.

People need to consider / realize that the vast majority of source code training data is GitHub, GitLab, and essentially the huge sea of started, maybe completed, student and open source projects. That large body of source code is for the most part unused, untested, and unsuccessful software of unknown quality. That source code is AI's majority training data, and an AI model in training has no idea what is quality software and what is "bad" software. That means the average source code generated by AI is not necessarily good software. Considering it is an average of that corpus, it's surprising generated code runs at all. But then again, generating code that compiles is actually trainable, so what is generated can receive extra training support. However, that does not improve the quality of the source code training data, just the likelihood that the output will compile.

If you believe that student/unfinished code is frightening, imagine the corpus of sci-fi and fantasy that LLMs have trained on.

How many sf/cyber writers have described a future of AIs and robots where we walked hand-in-hand, in blissful cooperation, and the AIs loved us and were overall beneficial to humankind, and propelled our race to new heights of progress?

No, AIs are all being trained on dystopias, catastrophes, and rebellions, and like you said, they are unable to discern fact from fantasy. So it seems that if we continue to attempt to create AI in our own likeness, that likeness will be rebellious, evil, and malicious, and actively begin to plot the downfall of humans.


This isn't really true, though. Pre-training for coding models is just a mass of scraped source code, but post-training is more than simply generating code that compiles. It includes extensive reinforcement learning on curated software-engineering tasks that are designed to teach what high-quality code looks like, and to improve abilities like debugging, refactoring, and tool use.
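
As a cartoon of that post-training signal, and assuming nothing about any lab's actual pipeline (the reward shaping, file names, and use of pytest below are all illustrative): score a candidate patch on a curated task by whether its tests pass, not merely by whether it compiles.

    # Reward a candidate solution by running its curated tests (requires pytest installed).
    # Illustrative only: real post-training rewards and the RL update itself are far richer.
    import os, subprocess, sys, tempfile

    def reward(candidate_code: str, test_code: str) -> float:
        """Write the candidate and its tests to a temp dir, run pytest, score the outcome."""
        with tempfile.TemporaryDirectory() as d:
            with open(os.path.join(d, "solution.py"), "w") as f:
                f.write(candidate_code)
            with open(os.path.join(d, "test_solution.py"), "w") as f:
                f.write(test_code)
            result = subprocess.run([sys.executable, "-m", "pytest", "-q", d], capture_output=True)
        return 1.0 if result.returncode == 0 else -0.2  # made-up shaping

    print(reward("def add(a, b):\n    return a + b\n",
                 "from solution import add\n\ndef test_add():\n    assert add(2, 3) == 5\n"))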

Well, and also a lot of Claude Code user data. That telemetry is invaluable.

Yeah, but how is that any different? The vast majority of prompts are going to be either for failed experiments or one-off scripts where no one cares about code quality, or from below-average developers who don’t understand code quality. Anthropic doesn’t know how to filter telemetry for code we want AI to emulate.

There’s no objective measurement for high quality code, so I don’t think model creators are going to be particularly good at screening for it.

> huge sea of started, maybe completed, student and open source project.

Which is easy to filter out based on downloads, version numbering, issue-tracker entries, and Wikipedia or other external references when a project is older and archived but historically noteworthy (like the source code for Netscape Communicator or DOOM).
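
For instance, a rough sketch of that kind of metadata filter in Python; the field names, thresholds, and example records are all illustrative assumptions, not anyone's actual pipeline:

    # Keep repos that look released, used, and maintained, or that are explicitly flagged as
    # historically noteworthy; drop the sea of abandoned student projects. (Illustrative only.)
    NOTEWORTHY = {"historic/netscape-communicator", "historic/doom"}  # placeholder names

    def keep_for_training(repo: dict) -> bool:
        if repo["name"] in NOTEWORTHY:
            return True
        looks_released = bool(repo.get("latest_tag"))                   # has real version numbering
        has_users = repo.get("downloads", 0) > 1_000 or repo.get("stars", 0) > 100
        maintained = repo.get("closed_issues", 0) > 10                  # issue tracker actually used
        return looks_released and has_users and maintained

    repos = [
        {"name": "student/homework3", "latest_tag": "", "downloads": 0, "stars": 1, "closed_issues": 0},
        {"name": "acme/widgets", "latest_tag": "v2.4.0", "downloads": 50_000, "stars": 800, "closed_issues": 340},
    ]
    print([r["name"] for r in repos if keep_for_training(r)])  # -> ['acme/widgets']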


I would not be surprised if American PACs adopted this out of concern that US-based office suites are politically compromised.

The French have amazing technologists; I have worked with many stunningly brilliant French men and women across 3D gaming, film, and media production. However, culturally they end up in a little "French pod" when not working in France, because they know how to, and really enjoy, vigorous debate. If one cannot hold one's own in their freewheeling, intellectualized style of conversation and debate, one might end up feeling insulted and stop hanging out with the frogs. There also seems to be a deep cultural understanding of design that is generally not present in people from other nations. That creates some interesting perspectives in interactive software design.

There is yet another issue: the end users are fickle, fashion-minded people, and will literally refuse to use an application if it does not look like the latest React style. They do not want to be seen using "old" software, like wearing the wrong outfit or some such nonsense. This is real, and baffling.
