The VIEW could be AI slop, but the underlying CONTENT has some meaning.
There is definitely an impact on software engineering jobs at the moment: interns and juniors are struggling to find jobs, and companies are squeezing every bit of dev slack time to produce more with AI.
> The VIEW could be AI slop, but the underlying CONTENT has some meaning. There is definitely an impact on software engineering jobs at the moment; interns and juniors are struggling to find jobs
Is that notion supported by this content? The BLS Outlook places most software engineering jobs in the "much faster than average" growth range.
I'm not saying that your assessments are wrong. But you were talking about how valuable this content is, and I don't understand how the insight you claimed to get from the visualization ("There is definitely an impact on software engineering jobs at the moment; interns and juniors are struggling to find jobs") could be discerned from the visualization at all.
BLS outlook is comically bad. For example, BLS rated pharmacists' outlook as amazing all throughout the 2010s, while /r/pharmacy and SDN forums had a constant stream of posts complaining about declining pay and quality of life at work, all while the pharmacy business's profit margins and number of employers declined.
What would be useful is tracking the change in the minimum pay per hour in legitimate job listings, now that quite a few states require pay ranges to be posted on listings.
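As a rough sketch of what that tracking could look like (all listings, numbers, and the choice of median as the aggregate here are made up for illustration; a real version would scrape postings from pay-transparency states):

```python
# Hypothetical sketch: track the change in posted minimum hourly pay
# over time. The listings below are invented examples standing in for
# scraped data from states with pay-transparency laws.
from statistics import median

# (year, posted minimum hourly pay) pairs extracted from job listings
listings = [
    (2023, 52.0), (2023, 48.0), (2023, 60.0),
    (2024, 50.0), (2024, 45.0), (2024, 55.0),
]

def median_min_pay(listings, year):
    """Median of the posted pay-range minimums for a given year."""
    return median(pay for y, pay in listings if y == year)

change = median_min_pay(listings, 2024) - median_min_pay(listings, 2023)
print(f"Median posted minimum moved by ${change:+.2f}/hour")
# prints: Median posted minimum moved by $-2.00/hour
```

Using the listing minimum (rather than the midpoint) sidesteps the common trick of posting an implausibly wide range.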
Valid points. But a crucial part of not "letting go" of the code is that we are responsible for that code at the moment.
If, in the future, LLM providers take ownership of the on-call for the code they have produced, I would write an "AUTO-REVIEW-ACCEPTER" bot to accept everything and deploy it to production.
If a company requires me to own something, then I should be aware of what that thing is, understand its ins and outs in detail, and be able to adjust quickly when things go wrong.
In the past ten years as a team lead/architect/person responsible for outsourced implementations (i.e. Salesforce/Workday integrations, etc.), I’ve been responsible for a lot of code I didn’t write. What sense would it have made for me to review the web developer’s front-end code for best practices when I haven’t written a web app since 2002?
As a team lead, if you are not aware of what's happening in the team, what kind of team lead are you?
On the other hand, you may have been an engineering manager, who is responsible for the team but often does not participate in on-call rotations (only as the last escalation).
As a team lead, I know the architecture and the functional and non-functional requirements. I know the website is supposed to do $x, but I definitely didn’t guide how, since I haven’t done web development in a quarter century; I know the best practices for architecture and data engineering (to a point).
That doesn’t mean I did a code review for all of the developers. I will ask them how they solved a problem that I know can be tricky, or whether they took something into account.
Just like DEI and sustainability efforts, I predict we will see new initiatives for the forced hiring of juniors.
The implementation can differ (e.g. the ratio of interns to total headcount, and so on), but it is time for governments to intervene and force corporations to train people; humans are a resource for the government, and it needs to develop that resource to thrive.
> Research grants are given by governments mainly to first TEACH students
The government's goal is obvious and correct, but if you have done research and tried to get a grant, you know that grants are very "political" as well. If you are researching something that is not trendy or will take another 10 years to yield results, while another lab is saying "we are researching LLMs", it will be very difficult to get a grant, even if you promise to TEACH/hire 20 students for that research.
Justifying long-term benefits is a difficult problem.
As someone who both recoils at DEI and is at least a decade too old to benefit from a policy like this personally, I have to say this honestly sounds like a great idea.
It both avoids the tragedy of the commons (why would a corporation pay to train a junior when it can just let the competition do it and then poach the experienced senior?) and gives more opportunity to a new generation that is frankly getting economically screwed over enough as it is.
Yeah, could be. Some problems I see with this implementation:
1. The wait time is too long for the company to fill a position; it is difficult to predict what will happen in the next 4 years.
2. It is difficult to match students with companies. For example, you are interested in CS, but the company specifically wants a React developer (assuming there were no AI and there was still demand). Would the student change all their courses based on the requirements and live like a robot, forced to take courses they are not much interested in? Now imagine when the gap between topics is larger (CS and React are close, both falling under the same topic, compared to MBA vs. procurement).
The point is that we will still need senior-level employees, but the way fresh grads get to that level is generally through entry-level positions, experience, and mentorship. I don't think we can expect the university system to start pumping out senior-level graduates.
Corporate lobbying should be treated as bribing politicians.
In an ideal world (where we don't live), some of the primary goals of corporations and governments contradict each other (and there is another body):
* Corporations - maximum profit at all costs for their shareholders
* Government (I mean the ideal one) - prosperity for its citizens
* UN - prosperity for the world (because governments can achieve prosperity for their own citizens by exploiting other governments' citizens)
When they have contradictory goals, the lower body in the chain should not drastically impact the higher body's goals.
Corporate lobbying is doing exactly that, hence the US is moving towards a feudal system. Corporations want to exploit people at maximum speed and squeeze out everything, but do not want to take responsibility for nurturing people.
Here is how it looks:
* You hire a Sr eng, squeeze the max out of them, lay them off
* Demand that the government provide better education, so the next one can be squeezed
* Stop unionization at all costs
* Now we are seeing this with junior positions: no one wants to nurture and grow them, everyone wants Sr+ engineers
Because this is Microsoft, experimenting and failing is not encouraged; taking less risky bets and getting promoted is. Also, no customer asked for a 1-bit model, so the PM didn't prioritize it.
But that doesn't mean the idea is worthless.
You could have said the same about Transformers: Google released it but didn't move forward with it, and it turned out to be a great idea.
> You could have said the same about Transformers: Google released it but didn't move forward with it,
I don't think you can. Google looked at the research results and continued researching Transformers and related technologies, because they saw the value in them, particularly for translation. What directions to take is part of the original paper; give it a read, it's relatively approachable for a machine learning paper :)
Sure, it took OpenAI to make it into an "assistant" that answered questions, but it's not like Google was completely sleeping on the Transformer; they just had other research directions to pursue first.
> But that doesn't mean the idea is worthless.
I agree, they aren't; I hope that's not how my message read :) But ideas that don't actually pan out in reality are slightly less useful than ideas that do pan out once put into practice. The root commenter seems to be saying "This is a great idea, it's all ready, the only missing piece is for someone to do the training and it'll pan out!", which I'm a bit skeptical about, since it's been two years since they introduced the idea.
What OpenAI did was train increasingly large transformer models, which was sensible because transformers allowed training to scale up compared to earlier architectures. The resulting models (GPT) showed a good grasp of natural-language syntax and generated mostly sensible text (which was unprecedented at the time), so they built ChatGPT by adding new stages of supervised fine-tuning and RLHF on top of their pretrained text-prediction models.
There were plenty of models the size of GPT-3 in industry.
The core insight necessary for ChatGPT was not scaling (that was already widely accepted): the insight was that instead of fine-tuning for each individual task, you can fine-tune once for the meta-task of instruction following, which brings the problem specification directly into the data stream.
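A toy sketch of that shift (the examples and field names here are invented for illustration, not OpenAI's actual data format):

```python
# Per-task fine-tuning: the task is implicit in each dataset, so you
# need a separate fine-tuned model (or dataset) per task.
sentiment_data = [("great movie!", "positive"), ("awful plot", "negative")]
translation_data = [("hello", "bonjour"), ("cat", "chat")]

# Instruction tuning: one mixed dataset where the task specification is
# written into the prompt itself, so a single model learns the meta-task
# of following whatever instruction appears in the data stream.
instruction_data = [
    {"prompt": "Classify the sentiment: great movie!", "completion": "positive"},
    {"prompt": "Translate to French: hello", "completion": "bonjour"},
]

for ex in instruction_data:
    print(ex["prompt"], "->", ex["completion"])
```

The second form is what lets one model handle unseen tasks at inference time: the user's instruction plays the same role the task specification played during training.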
Google had been working on a big LLM but they wanted to resolve all the safety concerns before releasing it. It was only when OpenAI went "YOLO! Check this out!" that Google then internally said, "Damn the safety concerns, full speed ahead!" and now we find ourselves in this breakneck race in which all safety concerns have been sidelined.
Scaling seemed like the important idea that everyone was chasing. OpenAI used to be a lot more safety-minded because it was in their nonprofit charter; now they’ve gone for-profit and weaponized their tech for the US military. Pretty wild turnaround. Saying OpenAI was cavalier about safety in the early days is inaccurate. It was a skill issue. Remember Bard? Google was slow.
On the one hand, not publishing any new models for an architecture in almost a year seems like forever, given how fast things are moving right now. On the other hand, I don't think that's very conclusive on whether they've given up on it or simply have higher-priority research directions to pursue first.
> You could have said the same about Transformers: Google released it but didn't move forward with it, and it turned out to be a great idea
Google released Transformers as research because they invented the architecture while improving Google Translate. They had been running it for customers for years.
Beyond that, they had publicly used transformer-based LMs (MUM) integrated into search before GPT-3 (pre-chat mode) was even trained. They were shipping transformer models that generated text for years before the ChatGPT moment. Being literally available on the Google SERP is probably the widest deployment a technology can have today.
Transformers are also widely used in ASR technologies, like Google Assistant, which of course was available to hundreds of millions of users.
Finally, they had experimental LLMs available privately to employees, as well as various research initiatives released publicly (Meena, LaMDA, PaLM, BERT, etc.) and other experiments; they just didn't productize everything (but see the earlier points). They even experimented with scaling (see the "Chinchilla scaling laws").
Level 6 helps with fixing bugs, but adding a new feature in a scalable way is not working out for me. I feed it a bunch of documents and ask it to analyze them and come up with a solution:
1. It misses some details from the docs when summarizing
2. It misses some details from the code and its architecture, especially in multi-repo Java projects (annotations and 100 levels of inheritance confuse it a lot)
3. Then it comes up with an obvious (non-)"solution" based on the incorrect context summaries.
I don't think I can give full autonomy to these things yet.
But then I wonder: why don't people on Level 8 create a bunch of clones of games and SaaS products and start making billions?
Most successes, especially online, are rarely about the thing that is built and more about the marketing around it. I don't think we can fully automate marketing effectively.