Hacker News | ElectricalUnion's comments

Common business-oriented language (COBOL) is a high-level, English-like, compiled programming language.

COBOL's promise was that it was human-like text, so we wouldn't need programmers anymore.

The problem is that the average person doesn't know what their actual problems are in sufficient detail to get a working solution. When you get down to breaking down that problem... you become a programmer.

The main lesson of COBOL is that it isn't the computer interface/language that necessitates a programmer.


Agreed, the programmer is not going away. However, I expect the role to change dramatically, and the SDLC will have to adapt. The programmer used to be the non-deterministic function creating the deterministic code, with multiple levels of testing, from unit to acceptance, to come into close alignment with what the end user actually intended as their project goals. Now the programmer uses the probabilistic AI to generate definitive tests so that it can then non-deterministically create deterministic code to pass those tests, all to meet the indefinite project goals defined by the end user. Or will there be another change in role, where the project manager uses the AI to write the tests, since they have a closer relationship with the customer, and the programmer is responsible for wrangling the code to validate against those tests?

> The problem is that the average person doesn't know what their actual problems are in sufficient detail to get a working solution. When you get down to breaking down that problem... you become a programmer.

Agreed. I've spent the last few years building an EMR at an actual agency and the idea that users know what they want and can articulate it to a degree that won't require ANY technical decisions is pure fantasy in my experience.


Right now with agents this is definitely going to continue to be the case. That said, at the end of the day engineers work with stakeholders to come up with a solution. I see no reason why an agent couldn't perform this role in the future. I say this as someone who is excited but at the same time terrified of this future and what it means to our field.

I don't think we'll get there by scaling current techniques (Dario disagrees, and he's far more qualified, albeit biased). I feel that current models are missing the critical-thinking skills you need to fully take on this role.


> I see no reason why an agent couldn't perform this role in the future.

Yea, we'll see. I didn't think they'd come this far, but they have. Though, the cracks I still see seem to be more or less just how LLMs work.

It's really hard to accurately assess this given how much I have at stake.

> and he's far more qualified albeit biased

Yea, I think biased is an understatement. And he's working on a very specific product. How much can any one person really understand the entire industry or the scope of all its work? He's worked at Google and OpenAI. Not exactly examples of your standard line-of-business software building.


> I don't think we'll get there by scaling current techniques (Dario disagrees, and he's far more qualified, albeit biased).

If Opus 4.6 had 100M context, 100x higher throughput, 100x lower latency, and 100x cheaper $/token, we'd be much closer. We'd still need to supervise it, but it could do a whole lot more just by virtue of more I/O.

Of course, whether scaling everything by 100x is possible given current techniques is arguable in itself.


There’s nothing any human can do that an AI can’t be expected to perform as well or better in the future.

Maybe the Oldest Profession will be the last to go.


Related: https://www.commitstrip.com/en/2016/08/25/a-very-comprehensi...?

At my job, we use a lot of AI to literally move fast and break things when working on internal tools. The idea is that the surface area is low, rollbacks are fast, and the upside is a lot better than the downside (our end users get a better experience to help them do their job better).

But our bottleneck is still requirements for the project. We routinely run out of stuff to do and have to ask for new stuff or work on a different project.

But you're absolutely right. Most people (programmers, managers, etc) don't know exactly what problems need to be solved, or at least, struggle to communicate it adequately for it to be implemented well enough. They say they want X. But they haven't thought about the repercussions of it, or that it requires Y first. AI might be able to help there, but it will give a totally bogus answer if it does not have any context of the domain, which is almost never documented in code.

These are still very much technical roles, but maybe we are becoming more "technical domain experts."


I predict the main democratization change is going to be how easily people can make plumbing that doesn't require--or at least not obviously require--such specificity or mental modeling of the business domain.

For example, "Generate me some repeatable code to ask system X for data about Y, pull out value Z, and submit it to system W."
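A minimal sketch of what that generated glue might look like, assuming hypothetical endpoints for "system X" and "system W" and a field literally named "z" (all made up for illustration):

```python
import json
import urllib.request

# Hypothetical endpoints -- stand-ins for "system X" and "system W".
SYSTEM_X_URL = "https://system-x.example.com/api/y"
SYSTEM_W_URL = "https://system-w.example.com/api/submit"

def extract_z(record: dict) -> str:
    """Pull value Z out of a record fetched from system X."""
    return record["z"]

def sync_z(fetch=urllib.request.urlopen):
    """Ask system X for data about Y, extract Z, submit it to system W."""
    with fetch(SYSTEM_X_URL) as resp:
        record = json.load(resp)
    z = extract_z(record)
    payload = json.dumps({"z": z}).encode()
    req = urllib.request.Request(
        SYSTEM_W_URL, data=payload,
        headers={"Content-Type": "application/json"})
    with fetch(req) as resp:
        return resp.status
```

The happy path is ten lines; everything interesting (auth, retries, the shape of "record") is exactly the unstated specificity the comment is about.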


What happens when value Z is not >= X? What happens when value Z doesn't exist, but values J and K do? What should be done when...

I hear what you're saying, but I think it's going to be entertaining watching people go "I guess this is why we paid Bob all of that money all those years".


Hence the "not obviously require" bit: Some portion of those "simply gluing things together" will not actually be simple in truth. It'll work for a time until errors come to a head, then suddenly they'll need a professional to rip out the LLM asbestos and rework it properly.

That said, we should not underestimate the ability of companies to limp along with something broken and buggy, especially when they're being told there's no budget to fix it. (True even before LLMs.)


LLM-generated code is replacing the hacked-together spreadsheet running many businesses.

This seems needlessly nitpicky. Of course there will be edge cases, there always are in everything, so pointing out that edge cases may exist isn't helpful.

But it stands to reason that would be a huge shift if a system accessible to non-technical users could mostly handle those edge cases, even when "handle" means failing silently without taking the entire thing down, or simply raising them for human intervention via Slack message or email or a dashboard or something.
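A sketch of that "handle by escalating" pattern: failures don't take the pipeline down, they land somewhere a human can review. The transport here is just a list; a real system would post to a Slack webhook or send an email (all names below are made up):

```python
from functools import wraps

# Stand-in for a Slack channel / email inbox / dashboard feed.
human_review_queue: list[str] = []

def escalate_on_failure(func):
    """Catch any failure, queue it for a human, and fail soft."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            human_review_queue.append(f"{func.__name__} failed: {exc}")
            return None  # don't crash the pipeline; a human follows up
    return wrapper

@escalate_on_failure
def parse_amount(raw: str) -> int:
    return int(raw)
```

Whether silently returning `None` is acceptable is itself a domain decision, which is rather the point of the thread.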

And Bob's still going to get paid a lot of money; he'll just be doing stuff that's more useful than figuring out how negative numbers should be parsed in the ETL pipeline.


Edge cases are pretty much the reason you need professional developers, even before LLMs started writing code.

> when value Z is not >= X?

Is your AI not even doing try/catch statements, what century are you in?


Did you just arrogantly suggest that my LLM should use exceptions for control flow? Funny stuff!

How do you model the business domain without modeling the business domain?

For some sorts of "confusables", you don't even need Unicode in some cases. Depending on the cursed combination of font, kerning, rendering and display, `m` and `rn` are also very hard to distinguish.


Doesn't "M.2 storage, but DRAM" - hopefully meaning NVMe/PCIe, not SATA, speeds - already exist as Compute Express Link (CXL), just not in this specific M.2 form factor? If only RAM weren't silly expensive right now, one could use 31GB/s of additional bandwidth per NVMe connector.


Ideally you want to run all those trusted (read: security-critical; if compromised, the entire system is no longer trustworthy) processes on separate, audited machines, but instead busy people end up running them all together because they happen to be packaged together (like FreeIPA or Active Directory), and that makes it even harder to secure them correctly.


There's a very good reason to package these things together on the same machine: you can rely on local machine authentication to bootstrap the network authentication service. If the Kerberos secret store and the LDAP principal store are on different machines and you need both to authenticate network access, how do you authenticate the Kerberos service to the LDAP service?


systemd should re-add interfaces connected to the exact same hardware path with the same name they had before.

The legacy default behaviour is literally insane: it just vomits out eth${RAND}, where RAND depends on race conditions.

My educated guess is that people who insist on using the legacy eth${RAND} just haven't yet had surprise firewall and routing rules suddenly apply to the wrong interfaces at an inconvenient time, making production halt and catch fire.
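For what it's worth, a systemd .link file can pin a name to a hardware path explicitly; the PCI path and name below are placeholders for your own NIC's (check `udevadm info /sys/class/net/<iface>` for the real Path value):

```ini
# /etc/systemd/network/10-lan0.link -- illustrative values only
[Match]
Path=pci-0000:03:00.0

[Link]
Name=lan0
```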


Hardware paths change when you add or remove hardware. systemd developers deny this despite it affecting half of all desktop computers in existence. Your one network jack used to be eth0; now systemd changes its name each time you add or remove hardware, and the developers insist they're making naming more stable when they're actually making it more variable.


Yep. On several computers I've seen the systemd-approved "predictable" network device name change after adding or removing an SSD. The solution is to turn off the network device renaming and allow the single ethernet interface in the machine to always be known as eth0.

systemd developers like to come up with clever solutions to the problems they care about, and ignore the side effects for any use cases they don't care about. And quite often those afflicted use cases are the ones most people would consider to be the more typical use cases.


Because of systemd I often have to add udev rules to rename my devices to something consistent, which has the advantage that I can use even more sensible names, like "upper" and "lower", or "eth" and "wifi", but the disadvantage that I have to learn udev.
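As a sketch, such rules can look like this (the MAC addresses are made up; substitute your own from `ip link`):

```
# /etc/udev/rules.d/70-net-names.rules -- illustrative addresses only
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:01", NAME="eth"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:02", NAME="wifi"
```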


jart's Cosmopolitan. It combines a polyglot format (the αcτµαlly pδrταblε εxεcµταblε is simultaneously a Windows Portable Executable and a Thompson Shell script) with a polymorphic libc that works on all major OSes under both amd64 and arm64.

It's a single binary.


And produces binaries that are smaller than a typical single-OS build.


"BRICS" is one of those organizations of countries that makes even OPEC (famous for being a non-committal, non-punishing, barely advisory organization that hasn't met its own goals since the 1980s) look like a very serious group.


But that is the point of WARC: otherwise, your archival method needs some sort of general intelligence (AI or a human behind the scenes) to store exactly what you need.

With WARC (and good WARC tooling like Browsertrix-Crawler) you store everything HTTP the site sent.
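A minimal sketch of what a WARC "response" record stores: the raw HTTP bytes exactly as the server sent them, framed by WARC headers. (Real archiving uses tooling like warcio or Browsertrix-Crawler, and real records carry mandatory fields such as WARC-Record-ID and WARC-Date, omitted here for brevity; the URI and payload are made up.)

```python
def warc_response_record(target_uri: str, http_bytes: bytes) -> bytes:
    """Frame raw HTTP response bytes as a stripped-down WARC record."""
    headers = (
        "WARC/1.1\r\n"
        "WARC-Type: response\r\n"
        f"WARC-Target-URI: {target_uri}\r\n"
        "Content-Type: application/http;msgtype=response\r\n"
        f"Content-Length: {len(http_bytes)}\r\n"
        "\r\n"
    ).encode()
    return headers + http_bytes + b"\r\n\r\n"

# The payload is the verbatim wire bytes -- status line, headers, body.
raw_http = b"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n<html></html>"
record = warc_response_record("https://example.com/", raw_http)
```

Because the wire bytes are kept verbatim, no intelligence is needed at capture time to decide what matters; that decision is deferred to replay.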


Is this even valid ruby? Doesn't it need Rails-specific Active Support to work?


f-droid implies

* that the application is source-available;

* that the toolchain used to build the app is FOSS;

* that the application does not use Play Services, proprietary tracking/analytics, or proprietary ad libraries;

* that the application's toolchain doesn't depend on "binary blobs".

Not even passing the sniff test on those easy-to-meet requirements is suspicious.

