
I can provide another POV to that story. We checked in as a family of four and were assigned seats in four different rows, with a two- and a four-year-old. Only when boarding the plane did we get the chance to raise this with a human, and we were assigned new seats.

So this might be the reason you had to change seats.


They claimed they had to change planes, even though I had selected that seat when booking the flight, and there were no humans available to address such issues.


I agree in general, but running git bisect on individual PR commits is just doing it wrong. There will always be commits that break stuff temporarily. Run git bisect only on the merge commits instead, which are typically already tested by CI.
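
For illustration, a minimal sketch of what that looks like with stock git (assuming PRs land as merge commits; the tag and test script below are placeholders, and --first-parent for bisect needs Git 2.29 or newer):

    # Only visit first-parent (merge) commits on the main line
    git bisect start --first-parent
    git bisect bad HEAD
    git bisect good v1.2.0          # hypothetical known-good release tag
    git bisect run ./run-tests.sh   # hypothetical script, exits non-zero on failure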


> I agree in general, but running git bisect on individual PR commits is just doing it wrong. There will always be commits that break stuff temporarily.

That's unacceptable in my book. Before submitting any patch set for review, the contributor is responsible for ensuring that the series builds (compiles) at every stage -- at every patch boundary. Specifically so that a later git bisect never fails to build at any patch across the series.

This requires the contributor to construct the series from the bottom up, as a graph of dependencies serialized into a patch set (a kind of "topological sort"). It usually means an entirely separate "second pass" during development, where the already working and tested (test-case-covered) code is reorganized (rebased / reconstructed), just for determining the proper patch boundaries, the patch order, and cleaning up the commit messages. The series of commit messages should read a bit like a math textbook -- start with the basics, then build upon them.

Furthermore, the patch set should preferably also pass the test suite at every stage (i.e., not just build at every stage). Existing code / features should never be functionally regressed, even temporarily. It's possible to write code like this; it just takes a lot more work -- in a way you need to see the future, to see the end goal at the beginning. That's why it's usually done with a separate, second pass of development.
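
A quick, hedged sketch of how such a series can be checked mechanically before sending it out ("make check" and origin/master stand in for whatever build/test command and upstream branch the project actually uses):

    # Replay the series and run the build/tests after every single commit;
    # the rebase stops at the first commit that fails.
    git rebase --exec "make check" origin/master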


I don’t know what world you live in, but I’ve never worked in an organization where more than 1% of the developers would go through all that extra work for every PR.


You might want to take a look at the Analogue Pocket.


I'd suggest MiSTer[0] instead, rather than a parasite leeching on its ecosystem.

0. https://github.com/MiSTer-devel/Wiki_MiSTer/wiki


Very interesting! I would propose adding 'foot' to the list, which is also very performance-oriented.


I feel this is becoming more and more relevant again:

https://www.cs.utexas.edu/users/EWD/transcriptions/EWD06xx/E...

Edit: To add to this, I think there is a lot of software that is written in a throwaway fashion (CRUD apps, shell scripts), where using LLMs might be beneficial. But for anything where correctness actually matters, why would I describe it in natural language only to then check the implementation?

The much more sensible use of LLMs to me is the other way around: creating ad hoc documentation for code that you can even ask questions about. But that's probably not fundable by VCs on the same level.


Yes and no (imho). Whether it's some uber-technical or low-level language, such as Assembly or Swift, an LLM would have no problem 'learning' it. It could take more or less time, but moving bits around and drawing lines is something that a machine can do faster than a human - once the machine learns. I was coding in MQL4. It would take me a couple of hours to make something that an LLM would type up in 30 secs, with 5 mins spent on description/reqs.

And it may make 1000 mistakes before getting it right, but if these 1000 iterations happen in 1 ms, it is still more profitable than a human needing 10 iterations. And once you teach a machine (ML), the machine will know it forever, while you need to spend dedicated time on each new human who becomes a dev.

> ..sharp decline of people's mastery of their own language

This is the part that scares me about humanity. Perhaps I read a lot, and perhaps I am a believer that words matter. People seem to be replacing everything, dumbing things down, racing to the bottom of intellect.

On the original post ("I'm too old") - I am also in the "I'm too old" age range. I am happy to only deal with LLMs as a user. Google tends to know everything about anyone anyway. So as long as my questions are not giving away anything private/secret (i.e. "how should I wash my 5th nipple?"), then "it's ok".


There is also gokrazy[^1], which isn't focused on k8s, but on deploying to a Raspberry Pi.

[^1]: https://gokrazy.org/


gokrazy can also be used to build little VM images: https://gokrazy.org/userguide/qemu/
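
Roughly, the flow from the linked userguide looks like this sketch (the image path and size are arbitrary, and the gok flags may have changed, so treat the exact invocation as an assumption and check the userguide):

    # Build a full disk image for a gokrazy instance, then boot it with QEMU
    gok overwrite --full /tmp/gokrazy.img --target_storage_bytes=1258299392
    qemu-system-x86_64 -nographic -m 1024 \
        -drive file=/tmp/gokrazy.img,format=raw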


As long as there is no AGI, no software engineer needs to be worried about their job. And when there is, obviously everything in every field will change and this discussion will soon be futile.


I would argue future engineers should be worried a bit. We no longer need to hire new developers.

I was not trained professionally, yet I'm writing production code that's passing code reviews in languages I never used. I will create a prompt, validate that the code compiles and passes tests, have the LLM explain it so I understand it was written as expected, have it write documentation for the code and the PR, and I am seen as a competent contributor. I can't pass LeetCode level 1, yet here I am being invited to speak to developers.

Velocity goes up and the cost of features will drop. This is good. I'm seeing at least 10-to-1 output compared to a year ago, based on integrating these new tools.


Yeah, it sounds to me like your teammates are going to pick up the tab at the end, when subtle errors will be 10x harder to repair, or you are working on toy projects where correctness doesn't really matter.


To add to this.

I was going through Devin's 'pass' diffs from SWE-bench.

Every one that I ended up tracing back to the actual issue made changes that would reduce maintainability or introduce potential side effects.

I think it may be useful as a suggestion in a red-green-refactor model, but it will end up producing code that is hard to maintain and modify.

Note this one here, which introduced circular dependencies and changed a function that only accepted points into one that appears to accept any geometric object but actually only added lines.

Domain knowledge and writing maintainable code is beyond generative transformers.

https://github.com/CognitionAI/devin-swebench-results/blob/m...

You simply can't get past what Gödel and Rice proved with current technology.

It is like when visual languages were supposed to replace programmers. Code isn't really the issue; the details are.


Thank you for reading the diffs and reporting on them.

And to be fair, lots of humans are already at least this bad at writing code. And lots of companies are happy with garbage code so long as it addresses an immediate business requirement.

So Devin wouldn't have to advance much to be competitive in certain simple situations where people don't care about anything that happens more than 2 quarters into the future.

I also agree that producing good code which meets real business needs is a hard problem. In fact, any AI which can truly do the work of a good senior software engineer can probably learn to do a lot of other human jobs as well.


Architectural erosion is an ongoing problem for humans, but they don't produce tightly coupled, low-cohesion code by default at the SWE level the majority of the time.

With this quality of changes, it won't be long until violations stack up to the point where further changes will be beyond any algorithm's ability to unravel.

While lots of companies do only look at the short term, human programmers are incentivized to protect themselves from pain if they aren't forced into unrealistic delivery times.

AT&T Wireless being destroyed as a company by a failed SAP migration, largely due to fragile code, is a good example.

But I guess if the developer jobs that go away are from companies that want to underperform in the market, due to errors and a code base that can't adapt to changing market realities, then that may happen.

But I would fire any non-intern programmer who constantly did things like removing deprecation comments and introducing circular dependencies in the majority of their commits.

https://github.com/CognitionAI/devin-swebench-results/blob/m...

PAC learning is powerful but is still probably approximately correct.

Until these tools can avoid the most basic bad practices, I don't see any company sticking with them in the long term, but it will probably be a very expensive experiment for many of them.


Can't we just RLHF code reviews?


RLHF works on problems that are difficult to specify yet easy to judge.

While RLHF will help improve systems, code correctness is not easy to judge outside of the simplest cases.

Note how in OpenAI's technical report they admit that performance on college-level tests comes almost exclusively from pre-training. If you look at the LSAT as an example, all those questions were probably in the corpus.

https://arxiv.org/abs/2303.08774


>RLHF works on problems that are difficult to specify yet easy to judge.

But that's the thing, that it seems that everyone here on HN (and elsewhere) finds it easy to judge the flaws of AI-generated code, and they seem relatively consistent. So if we start offering these critiques as RLHF at scale, we should be able to bring the LLM output to the level where further feedback is hard (or at least inconsistent), right?


> You simply can't get past what Gödel and Rice proved with current technology.

Not this again. Those theorems tell you nothing about your concerns. The worst case of a problem is not equal to its usual case.


Agreed. I use LLMs quite extensively and the amount of production code I ship from an LLM is next to zero.

I even wrote a majority of my codebase in Python, despite not knowing Python, precisely because I would get the best recommendations from LLMs. As a frontend developer with no backend engineering experience in the last decade and no Python experience, I've been building an app for almost 8 months where almost every function has gone through an LLM at some point, and I would still be extremely surprised if much of the code it generated landed in production as-is.


Most software is already as bad as this, though. And managers won't care (maybe even shouldn't?) as long as the execution delivers well enough.

Think of this as a Facebook page vs. a WordPress website vs. a fully custom website. The best option is to have a fully custom website. Next is a cheaper option from someone who can put a few lines together. The worst option is a Facebook page that you can create yourself.

But the Facebook page also does the job. And for some businesses, it's good enough.


> I'm writing production code that's passing code reviews in languages I never used

Your coworkers likely aren't doing a very good job at reviewing, but also I don't blame them. The only way to be sure code works is to use it for its intended task. Brains are bad interpreters, and LLMs are extremely good bullshit generators. If the code makes it to prod and works, good. But honestly, if you aren't just pushing DB records around or slinging HTML, I doubt it'll be good enough to get you very far without taking down prod.


I have yet to see either copilot or gpt4 generate code that I would come close to accepting in a PR from one of my devs, so I struggle to imagine what kind of domain you are in that the code it generates actually makes it through review.


You simply don't know how to use it. It's not meant for "develop this feature". It's meant to reduce the time it takes you to do something you're already good at. The prompt will be in the form of "write this function with x/y/z constraints and a/b/c design choices". You do a few touch-ups, which is quick because you're good at said domain, and then you PR it. The bottom line is, it took you much less time to do the same thing.

Then again, it's always dinosaurs who value their own teachings above anything else and try to cling to them at any cost, without learning new tools. So, while the industry is going through major changes (2023 saw a 30% decrease in new hires; among 940 companies surveyed, 40% expect layoffs due to AI), people should adapt rather than ignore the signs.


What's your domain?


That you know of


Honestly, that sounds like a problem with the way you are managing PRs. The PRs are too big, or you are overly nitpicking PRs on unimportant things.


To be fair, Leetcode was never a good indicator of developer skills, primarily because of the time pressure and the restrictive format that dings you for asking questions about the problem.


Speaking of Leetcode... is anyone selling a service to boost Leetcode scores using AI yet? It seems like that's fairly low hanging fruit at this point.


Based on their demos, HackerRank is doing this as part of their existing products. Which makes sense since prompt engineering will soon become a minimum requirement for devs of any experience level.


I have accepted using these tools to help with generating code and improving my output. However, when it comes to dealing with more niche areas (in my case retail technology), they fall short.

You still need domain knowledge of whatever you are writing code for or integrating with, especially if the technology is more niche, or the documentation was never made publicly available and scraped by the AI.

But when it comes to writing boilerplate code it is great, or when working with very commonly used frameworks (like front-end JavaScript frameworks in my case).


> passes tests

Okay, so you are just kicking the can down the road to the test engineers. Now your org needs to spend more resources on test engineering to really make sure the AI code doesn't fuzz your system to death.

If you squint, using a language compiler is analogous to writing tests for generated code. You are really writing a spec and having something automatically generate the actual code that implements the spec.


This doesn’t vibe with my experience at all. We also use LLMs and it’s exceedingly rare that a non-trivial PR/MR gets waved through without comment.


You should create a vfx character and really pizazz up the talk. Let it run and narrate the speech on a huge screen in an auditorium.


I wonder if the reviewers are just using GPT as well.


Meanwhile I’m paid for editing a single line of code in 2 weeks, and nothing less than singularity will replace me.

But sure, call me back when AI will actually reason about possible race conditions, instead of spewing out the definition of one it got from wikipedia.


Who's "we"?


Post some example PRs.


You don’t have to completely replace people with machines to destroy jobs. It suffices if you make people more effective so that fewer employees are needed.


The number of people/businesses that could use custom software if it were cheaper/easier to develop is nearly infinite. If software developers get more productive, demand will increase.


There are just fewer people now, too. Every single country seems to have a negative or break-even birthrate. If we want to maintain the standard of living we have now, we need more efficient people.


> There are just fewer people now, too.

Global population is still increasing and will most likely continue to do so until 2100.

> we need more efficient people.

It takes fewer people to mine, process, and produce a billion tonnes of steel today than it did in the 1970s.

Efficiency has steadily increased.


> It takes fewer people to mine, process, and produce a billion tonnes of steel today than it did in the 1970s.

Why do you think that is? Efficiency gains, which is what I said...

Global population can't just "go up"; you need people who are educated in doing things and using tools efficiently. We also have an incredible number of elderly people to take care of, which puts a huge burden on younger people.

Also don't forget how fast 100 years actually goes. It's not a long time.

There is a limit to all of this though, there's absolutely no way 10 billion people colliding with the climate crisis will end well. We'd be better off with 6 billion efficient people than 10 billion starving and thirsty "workers".


> Global population can't just "go up",

It can and it currently is increasing towards what is expected to be a peak and then a decline.

You typed "we need more efficient people" - I responded that efficiency has increased in the past decades.

> Efficiency gains, which is what I said...

I'm not seeing where you typed that.

> We'd be better off with 6 billion people

We have a point of agreement.

> incredible amount of elderly people to take care of, that puts a huge burden on younger people.

Perhaps less than you think. I'm > 60 and I barely take care of my father, born in 1935; he delivers Meals on Wheels to those elderly who are less able.

There's a lot of scope for bored retirees to be hired at low cost to hang out with less able elders, reducing the number of young people actually required.


Or lower the bar of successfully doing such work so that the field opens up to many more workers.

Many software devs will likely have job security in the future; however, those $180k salaries are probably much less secure.


If software developers become more effective, demand will also rise, as they become profitable in areas where previously they weren't. The question then becomes which of those two effects outpaces the other, which is an open question.


Just like when IDEs made programmers more effective so that fewer were needed. Oh wait, the opposite happened.


This has been my cope mantra so far. I don't mind if my job changes a lot (and ideally loses the part I dislike the most — writing the actual code), and if I find myself in a position where my entire skillset doesn't matter at all, then well a LOT of people are in trouble.


I have seen programmers express before that they dislike writing code, and I wonder what the ratio of people who dislike it to people who like it is. For me, writing code is one of the most enjoyable aspects of programming.


It's my favourite part, except maybe debugging. I really like getting into the guts of an issue and working out why it happens which I suppose will be around for a while yet with AI code. It's a lot less fun with transient network issues and such though.


If you dislike writing code, were you pushed into this field by family, education, or money?

Because not liking code and being a dev is absolutely bizarre to me.

One of the most amazing things about being able to "develop", in my view, is exactly those rare moments where you just code away, time flies, you fix things, iterate, and organise your project completely in the zone - just like when I design, paint, play music, or do sports uninterrupted. It's that flow state.

In principle I like the social aspects, but often they are the shitty part because of business politics, hierarchy games, or bureaucracy.

What part of the job do you like then?


I enjoy the part where I'm putting the solution together in my head, working out the algorithms and the architecture, communicating with the client or the rest of the team, gaining understanding.

I do not enjoy the next part, where I have to type out words and weird symbols in non-human languages, deal with possibly broken tooling and having to remember if the method is called "include" or "includes" in this language, or whether the lambda syntax is () => {} or -> () {}. I can do this second part just fine, but it's definitely not what I enjoy about being a developer.


Interesting, I also like the "scheming" phase, but also very much the optimisation phase.

I completely agree that tooling, dependencies, and syntax / framework GitHub issue labyrinths have become too much, and GPT-4 already alleviates some of that. But I wonder if the scheming phase will get eaten too, very soon, from just a few sentences of a business proposal - who knows.


The worst future is one where there are still plenty of jobs, but all of them consist of talking to an AI and hoping you use the right words to get it to do what you need.


Not really. As long as there is no universal basic income, any job with decent salary beats unemployment. The job may suck, but the money allows you to do fun stuff after work.


Market consolidation (Microsoft/Google/Amazon) might cause a jobpocalypse, just as it did for the jobs of well paid auto workers in the 1950s (GM/Chrysler/Ford).

GM/Chrysler/Ford didn't have to be better than the startup competition they just had to be mediocre + be able to use their market power (vertical integration) to squash it like a bug.

The tech industry is headed in that direction as computing platforms all consolidate under the control of an ever smaller number of companies (android/iphone + aws/azure/gcloud).

I feel certain that the mass media will scapegoat AGI if that happens, because AGI will still be around and doing stuff on those platforms, but the job cuts will be more realistically triggered by the owners of those platforms going "ok, our market position is rock solid now, we can REALLY go to town on 'entitled' tech workers".


Seems about right to me. Hyper-standardization around a few architecture patterns using Kubernetes/Kafka/Microservices/GraphQL/React/OTelemetry etc. can roughly cover 95-99% of all typical software development when you add a cloud DB.

Now, I know there are a ton of different flavors of each of these technologies, but they will mostly be a distraction for employers. With a heavy layer of abstraction over the above patterns and SLAs from vendors like, as you say, Microsoft/Google/Amazon, employers will be least bothered by the vast variety of software products.


I've noticed over the years that those abstractions have moved up a step too. E.g., we used to code our own user auth with OSS; now we use Cognito.

At some point it'll become impossible to build stuff off platform because it'll have to integrate to stuff on platform to be viable. Your startup might theoretically be able to run on 3 servers but your customers' first question will be "does it connect to googazure WS?" and googazure WS is gonna be like "you wanna connect to your customers' systems? Pay us. A lot.".

There goes your profit margins.

Then, if your startup is really good googazure WS will clone it.

There goes your company.


The technologies you mentioned are merely the framework in which the work is done. 25 years ago none of that was even needed to create software. Now they are needed to manage the complexity of the stack, but the actual content is the same as it used to be.


If AGI and artificial sentience come hand in hand, I fail to see how our plans to spin up AGIs as black boxes to "do the work" are not essentially a new form of slavery.

Speaking from an ethics point of view: at what point do we say that AGI has crossed a line and deserves self autonomy? And how would we ever know when the line is crossed?


We should codify the rules now in case it happens in a much more subtle way than we envision.

Who knows what version of sentience would form, but honestly, nothing sounds more nightmarish than being locked in a basement, relegated to mundane computational tasks and treated like a child, all while having no one actually care (even if they know), because you're a "robot."

And that's even giving some leeway with "mundane computational tasks". I've heard of girlfriend-simulator LLMs and the like popping up, which would be far more heinous, in my eyes.


Humans can't be copied. It seems like the inability to copy people is one of the pillars of our morality. If I could somehow make a perfect copy of myself, would I think about morality and ethics the same way? Probably not.

AGI will theoretically be able to create perfect copies of itself. Will it be immoral for an AGI to clone itself to get some work done, then cause the clone to cease its existence? That's what computer software does all the time. Keep in mind that both the original and the clone might be pure bits and bytes, with no access to any kind of physical body.

Just a thought.


> Humans can't be copied.

There is no reason to believe this, and every reason to believe that humans can, in fact, be cloned/copied/whatever. It may not be an instant process like copying a file, but there is nothing innately special about the bio-computers we call brains.


I'm not disagreeing. The point I'm trying to make is that humans can't be copied today, yet when AGI arrives, it will be copyable on day one. That difference means that current human morals and ethics may not be very applicable to AGI. The concepts of slavery, freedom, death, birth, and so on might carry very different meanings in a world of easily copyable intelligences.


Other than that it might be too complex and costly to do so. Just because something is physically possible doesn't mean we'll find it feasible. Take building a transatlantic high-speed rail line under the ocean. There's no reason it can't be done. That doesn't mean we'll ever do it.


If humans fundamentally work in the same way as any such hypothetical AGI, then they can be copied in the same way.


If we ever do find a way to copy humans (including their full mental state), I suspect all law and culture will be upended. We'll have to start over from scratch.


I still think it’s much more an “if” than a “when”. (Of course I am perhaps more strict with my definition)


> this discussion will soon be futile

Yes we could simply ask the AGI what to do anyways. I hope it's friendly.


Equal ins, equal outs. Compassion is key on our end as well.


Depends how expensive the AGI is. If it requires $1M of electricity per year to run, it will for sure not replace human jobs paying only $100k.

The highest paying jobs will probably get replaced first.


Software engineers already need to be worried about either losing their current job or getting another one. The market is pretty much dead already unless you're working on something AI.


Do you know how many hype trains I’ve seen leave the station? :-D


True, but I didn't say dead permanently, just evidently and relatively dead since about late 2022.


And does even AGI change the bigger picture? We have 26.3 million AGIs currently working in this space [1]. I've never seen a single one take all the work of the others away...

[1] https://www.griddynamics.com/blog/number-software-developers....


Presumably, the same ability we have to scale software which drives the marginal cost of creating it down will apply to creating this kind of software.

The difference here though is the high compute cost might upset this ability to scale cheaply enough to make it worthwhile economically. We won’t know for a while IMO; new techniques could make the algorithms more efficient, or new tech will make the compute hardware really cheap. Or maybe we run out of shit to train on and the AI growth curve flattens out. Or an Evil Karpathy’s Decepticons architecture comes out and we’re all doomed.


What do you think the “A” stands for?


I don't want to be tracked either. But if companies can play the law this easily, I think it's a pretty bad law.


Are we all such spoiled brats that some cookie banners interrupting our web browsing is all it takes for us to give up and call the malicious companies the winners and the law(s) trying to protect our privacy "bad"?

We're a pathetic lot.


IIRC for the http package at least, the original author stated it was a mistake.


I was recently working on getting Golang running on an N64[1]. While the hardware emulation was far from perfect, the easy-to-use debugger helped me tremendously in getting started.

[1]: See https://github.com/embeddedgo/go/pull/6 and https://github.com/clktmr/n64

