
The VIEW could be AI slop, but underlying CONTENT has some meaning.

There is definitely an impact on software engineering jobs at the moment: interns and juniors are struggling to find jobs, and companies are squeezing every bit of dev slack time to produce more with AI.


> The VIEW could be AI slop, but underlying CONTENT has some meaning. There is definitely impact on Software engineering jobs at the moment, interns/juniors are struggling to find jobs

Is that notion supported by this content? The BLS Outlook for most software engineering jobs is mostly in the "much faster than average" growth range.


The BLS outlook is based on historical trends and inertia; both of these could be true at the same time:

* Yes, software engineering jobs can grow, because coding agents unlock increased demand for custom software

* AI can still impact them, by turning software engineers into LLM code approvers


I'm not saying that your assessments are wrong. But you were talking about how valuable this content is, and I don't understand how the insight you claimed to get from the visualization ("There is definitely impact on Software engineering jobs at the moment, interns/juniors are struggling to find jobs") could at all be discernible from the visualization.

BLS outlook is comically bad. For example, BLS had pharmacists' outlook as amazing all throughout the 2010s, while /r/pharmacy and sdnforums had a constant stream of posts complaining about declining pay and quality of life at work, all while the pharmacy business' profit margins and number of employers declined.

What would be useful is tracking the change in minimum pay per hour from legitimate job listings, now that there are quite a few states that require posting pay ranges on job listings.


Valid points. But a crucial part of not "letting go" of the code is that we are responsible for it at the moment.

If, in the future, LLM providers take ownership of on-call for the code they have produced, I would write an "AUTO-REVIEW-ACCEPTER" bot to accept everything and deploy it to production.

If a company requires me to own something, then I should know what that thing is, understand its ins and outs in detail, and be able to quickly adjust when things go wrong.


In the past ten years as a team lead/architect/person who was responsible for outsourced implementations (ie Salesforce/Workday integrations, etc), I’ve been responsible for a lot of code I didn’t write. What sense would it have made for me to review the web developer’s front-end code for best practices when I haven’t written a web app since 2002?

As a team lead, if you are not aware of what's happening in the team, what kind of team lead are you?

On the other hand, you may have been an engineering manager, who is responsible for the team but often does not participate in on-call rotations (only as the last escalation).


As a team lead, I know the architecture and the functional and non-functional requirements. I know the website is supposed to do $x, but I definitely didn’t guide how, since I haven’t done web development in a quarter century. I know the best practices for architecture and data engineering (to a point).

That doesn’t mean I did a code review for all of the developers. I will ask them how they solved a problem that I know can be tricky, or whether they took something into account.


Just like DEI and sustainability efforts, I predict we will see new initiatives for forced hiring of Juniors.

Implementation can differ (e.g. ratio of interns to total headcount, and so on), but it is time for governments to intervene and force corporations to train people. Humans are a resource for a government, and it needs to cultivate that resource to thrive.


> Just like DEI, sustainability efforts, I predict we will see new initiatives for forced hiring of Juniors.

A professor's job is to TEACH students.

Research grants are given by governments mainly to first TEACH students and secondly to get something useful.

If they are not doing their job they should be fired.

That's not DEI or anything of the sort. That's common sense.

They can do their research at private companies if it's worth it.


> Research grants are given by governments mainly to first TEACH students

The government's goal is obvious and correct, but if you have done research and tried to get a grant, you know that grants are very "political" as well. If you are researching something that is not trendy or will take another 10 years to yield results, while another lab says it is researching LLMs, it will be very difficult to get a grant, even if you promise to TEACH/hire 20 students for that research.

Justifying long-term benefits is a difficult problem.


As someone who both recoils at DEI and is at least a decade too old to benefit from a policy like this personally, I have to say this honestly sounds like a great idea.

It both avoids the tragedy of the commons (why would a corporation pay to train a junior when it can just let the competition do it and then poach the experienced senior?) and gives more opportunity to a new generation that is frankly getting economically screwed over enough as-is.


I mean, they could train them for free in universities. Just pay them to go into majors that actually matter for the economy.

Yeah, could be, some problems I see with this implementation:

1. The wait time is too long for the company to fill a position; it is difficult to predict what happens in the next 4 years

2. It is difficult to match students with companies. For example, you are interested in CS, but the company specifically wants a React developer (assuming there were no AI and demand still existed). Would the student change all their courses based on those requirements and live like a robot, forced to take courses they are not much interested in? Now imagine when the gap between topics is wider (CS vs React are close, both somewhat subsets of the same topic, compared to MBA vs procurement).


The point is that we will still need senior level employees, but the way fresh grads get to that level is generally through entry level positions, experience and mentorship. I don't think we can expect the university system to start pumping out senior level graduates.

We could but it would be something more like the medical system where education lasts much longer, and expected wages at the end are much higher.

I don't mean just enrolling them in an undergrad program; make them work on government projects.

Humans are a resource for the government? Jesus man, slavery has been abolished in the West.

Corporate lobby should be treated as bribing the politicians.

In an ideal world (where we don't live), some of the primary goals of corporations and governments contradict each other (and there is yet another body):

* Corporations - maximum profit at all costs for their shareholders

* Government (I mean the ideal one) - prosperity for its citizens

* UN - prosperity for the world (because governments can achieve prosperity for their own citizens by exploiting other governments' citizens)

When they have contradictory goals, a body lower in the chain should not drastically impact the higher body's goals.

Corporate lobbying is doing exactly that, hence the US is moving towards a feudal system: corporations want to exploit people at maximum speed and squeeze out everything, but do not want to take responsibility for nurturing people.

Here is how it looks:

    * You hire a Sr eng, squeeze the max out of them, lay them off
    * Demand the government provide better education, so you can squeeze the next one
    * Stop unionization at all costs
    * Now we are seeing this with Junior positions: no one wants to nurture and grow them, everyone wants Sr+ engineers

Isn't it great news for us?

You get an open model which is at 95% of Opus 4.6's quality, 80% cheaper at most inference providers, and can also run on your own hardware.

Also they did the hard parts of:

* crawling the content

* running the fine tuning (or training)

Better than 1 or 2 companies taking control of the whole AI economy


Because this is Microsoft, experimenting and failing is not encouraged; taking less risky bets and getting promoted is. Also, no customer asked them for a 1-bit model, hence the PM didn't prioritize it.

But it doesn't mean the idea is worthless.

You could have said the same about Transformers: Google released it but didn't move forward, and it turned out to be a great idea.


> You could have said same about Transformers, Google released it, but didn't move forward,

I don't think you can. Google looked at the research results and continued researching Transformers and related technologies, because they saw the value, particularly in translation. It's part of the original paper, what direction to take; give it a read, it's relatively approachable for a machine learning paper :)

Sure, it took OpenAI to make it into an "assistant" that answered questions, but it's not like Google was completely sleeping on the Transformer, they just had other research directions to go into first.

> But it doesn't mean, idea is worthless.

I agree, they aren't; I hope that wasn't what my message read as :) But ideas that don't actually pan out in reality are slightly less useful than ideas that do pan out once put into practice. The root commenter seems to be saying "This is a great idea, it's all ready, the only missing piece is for someone to do the training and it'll pan out!", which I'm a bit skeptical about, since it's been two years since they introduced the idea.


What OpenAI did was train increasingly large transformer model instances, which was sensible because transformers allowed training to scale up compared to earlier models. The resulting instances (GPT) showed good understanding of natural language syntax and generated mostly sensible text (which was unprecedented at the time), so they made ChatGPT by adding new stages of supervised fine-tuning and RLHF to their pretrained text-prediction models.

There were plenty of models the size of GPT-3 in industry.

The core insight necessary for ChatGPT was not scaling (that was already widely accepted): the insight was that instead of finetuning for each individual task, you can finetune once for the meta-task of instruction following, which brings the problem specification directly into the data stream.
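That shift can be sketched with toy data (a hedged illustration only; the record format and examples here are made up, not OpenAI's actual training data):

```python
# Per-task finetuning: the task is implicit in which dataset you train on,
# so you need a separate dataset (and finetuned model) per task.
sentiment_data = [("great movie!", "positive"), ("boring plot", "negative")]
summarize_data = [("long article text ...", "short summary ...")]

def to_instruction_example(task: str, x: str, y: str) -> dict:
    """Fold a (task, input, output) triple into one prompt/completion pair,
    moving the problem specification into the data stream itself."""
    return {"prompt": f"{task}: {x}", "completion": y}

# Instruction tuning: one mixed dataset where each record states its own
# task, so a single finetuned model can follow arbitrary instructions.
instruction_data = (
    [to_instruction_example("Classify the sentiment", x, y) for x, y in sentiment_data]
    + [to_instruction_example("Summarize", x, y) for x, y in summarize_data]
)

print(instruction_data[0])
# {'prompt': 'Classify the sentiment: great movie!', 'completion': 'positive'}
```

The point of the sketch: nothing about the model changes, only the data layout, which is why the insight was cheap to act on once someone had it.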


I miss having the completion models like davinci-003; what the chat models gained in performance, they lost in the simplicity of getting what you want out.

It was fun to come up with creative ways to get it to answer your question or generate data by setting up a completion scenario.

I guess "chat" became the universal completion scenario. But I still feel like it could be "smarter" without the RLHF layer of distortion.
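The kind of "completion scenario" described above can be sketched like this (purely illustrative; no real model or API is invoked, and the FAQ framing is just one of many possible setups):

```python
# With a base (non-chat) completion model, you don't ask a question directly:
# you write the beginning of a document whose natural continuation IS the
# answer you want, then let the model continue the text.

def make_completion_prompt(question: str) -> str:
    """Frame a question as an FAQ fragment so a pure text-completion model
    continues with a short answer instead of rambling prose."""
    return (
        "Frequently Asked Questions\n"
        "Q: What is the capital of France?\n"
        "A: Paris\n"
        f"Q: {question}\n"
        "A:"
    )

prompt = make_completion_prompt("What is the capital of Japan?")
# The model would be asked to continue `prompt`; the one-shot Q/A example
# steers the continuation toward the same short, factual style.
print(prompt)
```

The creativity the parent describes was in choosing the document frame: an FAQ, a transcript, a CSV header, whatever made the desired output the most probable continuation.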


Google had been working on a big LLM but they wanted to resolve all the safety concerns before releasing it. It was only when OpenAI went "YOLO! Check this out!" that Google then internally said, "Damn the safety concerns, full speed ahead!" and now we find ourselves in this breakneck race in which all safety concerns have been sidelined.

Scaling seemed like the important idea that everyone was chasing. OpenAI used to be a lot more safety-minded because it was in their non-profit charter; now they've gone for-profit and weaponized their tech for the US military. Pretty wild turnaround. Saying OpenAI was cavalier with safety in the early days is inaccurate. It was a skill issue. Remember Bard? Google was slow.

They thought people might prefer quality and safety.

On the one hand, not publishing any new models for an architecture in almost a year seems like forever, given how fast things are moving right now. On the other hand, I don't think that's very conclusive about whether they've given up on it or simply have other, higher-priority research directions to pursue first.

> You could have said same about Transformers, Google released it, but didn't move forward, turns out it was a great idea

Google released Transformers as research because they invented them while improving Google Translate. They had been running them for customers for years.

Beyond that, they had publicly used transformer-based LMs (MUM) integrated into search before GPT-3 (pre-chat mode) was even trained. They were shipping transformer models that generated text for years before the ChatGPT moment. Being literally on the Google SERP is probably the widest deployment a technology can have today.

Transformers are also used widely in ASR technologies, like Google Assistant, which of course was available to hundreds of millions of users.

Finally, they had experimental LLMs available privately to employees, as well as various released research initiatives (Meena, LaMDA, PaLM, BERT, etc.) and other experiments; they just didn't productize everything (but see the earlier points). They even experimented with scaling (see the "Chinchilla scaling laws").


As a Level 6,

I feel like going back to Level 5.

Level 6 helps with fixing bugs, but adding a new feature in a scalable way is not working out for me. I feed it a bunch of documents and ask it to analyze them and come up with a solution.

1. It misses some details from the docs when summarizing

2. It misses some details from the code and its architecture, especially in multi-repo Java projects (annotations and 100-level inheritance confuse it a lot)

3. Then it comes up with an obvious (non-)"solution" based on the incorrect context summaries.

I don't think I can give full autonomy to these things yet.

But then I wonder: why don't people at Level 8 create a bunch of clones of games and SaaS vendors and start making billions?


Most successes, especially online, are rarely about the thing that is built and more about the marketing around it. I don't think we can fully automate marketing effectively.

Which model(s) are you using?

If seniors are going to review every piece of GenAI-generated code, how do they keep up with the volume of changes?

So you have 2 tiers of engineers: Sr- and Sr+

1. Both should write code to justify their work and impact

2. Sr- code must be reviewed by Sr+

What happens:

a. Sr+ output drops because reviews take more and more of their time

b. Sr+ just blindly accept because the volume is too high, and they also have their own work to do

c. Sr+ ask Sr- to slow down; then Sr- can get bad performance reviews for their output, because on average Sr+ will produce more code

I think (b) will happen


Why not remove the TOS completely, if your provider is forcing new terms on you anyway?

Suppose I start with a simple TOS at the beginning: do not use in criminal scenarios.

Then I change it to: do whatever you want with it, you are responsible for it anyway.

I can even do this per sign-up: show a TOS that makes sense, then the next day send a new TOS that allows everything.


I don't either, but there are some valid ideas lurking in the minds of leadership.

Maybe not a 60-70% reduction, but we are heading in a direction where companies will need fewer people.

