SO was phenomenal for sourcing engineering talent. No board I've used since comes close to the level of incoming candidate quality. I'm still sorry it's gone, and I'm especially sorry every time I need to interact with LinkedIn hiring instead.
The arena is saturated, but the quality is garbage: lots of third-party recruiters hiding company names and posting the same job everywhere.
If you can keep the quality high for a small-to-medium group, there is money to be made, I imagine (I'm trying to start a job board myself for a tiny group). Just look at anything Pieter Levels has built (remoteok.io): the dude has huge MRR, granted he's a solo founder while SO employs many. Also look at Dice.com. They allegedly bring in nearly $150 million/year! To have a targeted audience in the palm of your hands and throw it out the window is just bad business. SO is really just throwing away money.
I used to work for a company that helped with the hiring process; we had integrations with most of these boards.
The consensus at the time was that, measured end to end (the company hires and is happy with the candidate), you could remove every integration except Stack Overflow Jobs and it'd be fine.
It's beyond me why they cancelled it; so stupid from my PoV.
Maybe SO management figured it would be more lucrative to have those job sites/brokers bid on ads placed on SO rather than to become a competitor to their customers.
I loved looking for jobs on SO. I found some of the coolest companies and was seriously bummed when it went under. It’s how I heard about my current job I’ve happily been at for a couple of years.
>>Just as tractors made farmers more productive ...
Really, that's the example used? If AI does to development what tractors did to farming, there aren't going to be many developers left.
>>today’s AI the potential for the loss of certain jobs, yes, but also, if history is a guide, a future in which a great variety of more highly skilled work
The Ag -> Industrial -> Information transitions were all supported by a mix of massive population expansion, mass migration, and globalization of economies.
The Information -> Automation/AI transition does not seem to have these three things in the same way. Globalization is slowing or reversing into protectionism. Migration is still high, but seemingly for different (geopolitical) reasons, as people are displaced by war, crime, or climate. And, critically for this discussion, population growth has SLOWED way down and is expected to reverse around 2040.
This means people looking to the historical models for how these tech disruptions played out are relying on a deeply flawed narrative of "everything will be just fine because farmers became factory workers, and factory workers became developers."
Current economic models may not play out like they did in the past. History may rhyme, but it does not actually repeat.
Also, let's not forget the terrible way society in general handled these previous transitions, which resulted in massive amounts of suffering for the people displaced. "Learn to AI" cannot become the mantra of the day like "Learn to Code" did...
My prediction is we will see a MASSIVE increase in wealth gaps and an extreme decrease in the standard of living across most of the industrial world (we are already seeing this in a limited way), leading to more and more political instability.
Yep. Farm. Factory. Office. The end. There's nowhere to go, not for this amount of people.
As for the idea that, rather than costing jobs, it will complement us and even enable millions of new "non-programmers" to start programming: I'm puzzled where this demand would actually come from. What would they even build? And can you imagine the absolute nightmare of millions of people producing "code" with zero background in programming?
In a sane world, this could be a humanitarian moment where we redirect our human capital to things of a less commercial nature, say human care. We don't live in that world though.
Electricians, plumbers, nurses, construction workers, manufacturing: there's an absolute ton of blue-collar jobs out there short of people.
Where do you think all the renewable energy is coming from? It's coming from large crews installing those wind turbines and solar panels in the wilderness.
Every electrical transformer in the US is going to have to be upgraded to accommodate surges in demand from EVs.
And the housing price crisis: once regulation is fixed, people will still have to build the houses. Robot construction workers ain't coming any time soon.
All of these are massive sources of employment. If AI really does automate a large chunk of white-collar jobs, corporate profits will skyrocket, and so will government tax revenue and private investment (profits generally end up in investment funds somewhere), and the above sectors will absorb that investment.
Now, will the transition look pretty? Will your office workers make an easy transition to physical labour? No. But neither did the coal miners find 'learning to code' easy.
Time to buckle up, the tsunami is coming. Also, if you can use AI, you are already safer than most white-collar workers, so we shouldn't be the ones crying the hardest.
Stop telling people to find new work and start talking about UBI and short work weeks. No one needs to work 40 hours now, and AI makes doing so even more pointless.
In 1985, there were about 170,000 coal miners in the United States — the peak of over 800,000 was in the 1920s. There simply isn’t a comparison in the scale there.
Coal miners weren't the only blue-collar workers replaced: there were 20 million US manufacturing workers at the peak, and now it's down to 12 million, all offshored. That's 8 million alone.
The US is already spending $1 trillion on infrastructure. $1 trillion can hire 10 million people to dig ditches for a year (at $100k per worker), and when subsidizing gainful employment (i.e., covering only 20% of the wages), it can probably generate 50 million jobs. If AI is that impressive, expect the government coffers to swell so much it can spend $1 trillion on infrastructure every single year.
The US can easily absorb a total of $30 trillion in infrastructure investment plus housing construction, rebuilding itself to, say, Chinese standards. This wasn't possible previously because of cost and labour shortages. Now it is.
The future isn't all roses, but pretending it'll be some sort of apocalypse is just another form of coping, an excuse not to engage with reality.
>>The US is already spending $1 trillion on infrastructure,
No, the US is not. I urge you to read the damn bills, not the headlines. Nowhere in any of these "infrastructure" bills is even 10% of the money being spent on actual infrastructure...
>>If AI is that impressive, expect the government coffers to swell so much it can spend $1 trillion on infrastructure every single year.
Where on earth do you get this? Where in the history of anything has that been true? Where do you expect this money to come from... corporate taxes? Please...
>>Electricians, plumbers, nurses, construction workers, manufacturing, there's an absolute ton of blue collar jobs out there short of people.
And most of them do not pay even a fraction of what information jobs pay. The $100k welder I see tossed around all the time is like the $300k dev: sure, they exist, but that is not the median.
>>Every electrical transformer in the US is going to have to be upgraded, to accomodate for surges in demand from EVs.
Which is ironic given there is an extreme transformer shortage in the US, and the manufacturing supply chain that builds many of the components for transformers is actively being reallocated to build things for EVs.
>>The housing price crisis, once regulation is fixed,
HAHAHAHAHAHAHAHAHA good one... I hope you are not serious
>>Robot construction workers ain't coming any time soon.
I don't know about that. Several innovations, from concrete printers to rammed-earth homes, look very interesting. Then there is the "flat-pack" home trend, and several other innovations that could very well reduce the number of blue-collar trade jobs as well.
>>All of these are massive sources of employment.
Employment, yes; high income, no. They also destroy the body, so you really need to be in a supervisory, managerial, or other such role by the time you are 50-55.
>>and the above sectors will absorb that investment.
That has never been true in the history of humanity, and it will not be true here.
>>>Will your office workers make an easy transition to physical labour? No. But neither did the coal miners find 'learning to code' easy.
This is one of the worst misconceptions people have: very few, if any, "coal miners" learned to code. This does not happen. That is not the transition.
The transition from mining to information was a generational thing: in a family with three generations of miners, the fourth generation learned to code.
The miners themselves moved to other blue-collar jobs; they became truck drivers, plumbers, welders, factory workers, etc.
>>Also, if you can use AI, you are already safer than most of the white collar jobs,
You're absolutely right about blue collar needing a revival, there's an enormous amount of work to do there, and it's a type of work near impossible to replace with AI. Further, I'd say that we need to better appreciate these jobs, in compensation and work conditions.
That said, I don't see this mass transition from white collar to blue collar happening for a very sizable part of the existing white-collar workforce. In many developing countries, the white-collar workforce is of considerable age. There are skill/capability/physical issues as well as a social/cultural aspect. You're telling people that their educational investment is now worthless and that they are hereby demoted to the working class. That may be correct, true, and just, but expect social unrest.
Agree. All historical comparisons to prior disruptions are invalid.
The reason they are invalid is that all prior disruptions were narrow and were followed by periods of stabilization that allowed people to plan and reason about their lives.
AI is the opposite in this regard, it is broad and there is no period of stabilization. It is continual acceleration.
AI is a skill/technology replication machine. This concept has never existed before. The impacts are not relatable to anything else.
The implications go far beyond jobs as well. The potential cultural and societal impacts are equally as disturbing. I've written in detail about some of those possibilities here.
I completely agree. I do not understand why some people are downvoting you. Everyone on this site (assuming at least 95% are developers) should conceptually understand that AI to office jobs is nothing like tractors to farmers. Before AGI it's an arms race over which skills humans can acquire that AI can't yet simulate. After AGI there is no contest.
It's crazy to me how I have memories of AI discussions from years back where this crucial difference was well known and acknowledged. But right now it feels like half the people are turning a blind eye to it on purpose.
"Climbing the skill ladder is going to look more like running on a treadmill at the gym. No matter how fast you run, you aren’t moving, AI is still right behind you learning everything that you can do."
Huh. The way I see SO and the future of AI is ... I don't. I don't bother looking at SO any more for the easy questions, I just use ChatGPT. It's not perfect, but neither is SO, and ChatGPT is faster and more polite.
I still don't think ChatGPT is really coming for software engineers' jobs anytime soon, though. A high level programming language is about telling a computer exactly what you want as succinctly as possible. By the time you get specific enough with ChatGPT to produce said code, you could just write that code yourself. Without the non-deterministic behavior, too.
If your manager can assign you a Jira task with a 3 sentence description and get a reasonable result, what makes you think they can’t have the same interaction with an AI? Sure, it won’t be ChatGPT, it’ll be something else, but same thing.
In this case there’s nothing specific to humans that make us uniquely capable of completing an information task. It’s not like we’re talking about creating great art or something. There doesn’t have to be intent behind completing a spec in the most reasonable way possible.
I don't think I've ever seen a post on SO with that many downvotes before. That's pretty telling of how the community feels about being used as training data (especially backed up by the highly upvoted reply saying as much).
That's nothing for a meta post with an unpopular announcement, there are quite a few with hundreds of downvotes. But yes, this does likely indicate something about how the active SE users feel about this. It doesn't help that the blog is written in a way that is much more likely to appeal to shareholders than software developers on SO. But ChatGPT has been a quite significant moderation issue, and the blog post doesn't address any of this (it is kinda devoid of content in general beyond "SE will do something with generative AI, updates later this year").
"That's pretty telling how the community feels about being used as training data"
I don't think most of the replies on SO mention this directly; they mostly focus on how this will negatively impact the site's quality. But you are correct that people don't like being used as training data, and imo the "arguments" everyone is making about AI's disadvantages on SO, on art sites, and everywhere else are pretty much "excuses". Quoted because, even if they are correct, they are just excuses for the core, much deeper ethical issue: humans don't want to be replaced. Being replaced by a machine is even shittier. Being replaced by a machine that exploited your own hard work against you, even more so.
As a real-world example: if my boss hired a junior whom I had to mentor for the sole purpose of firing and replacing me so he could pay less, that would be considered very unethical, even though it does happen and many workers do put up with it. Yet the current AI movement is exactly that on an unprecedented scale, and to the people being replaced it must feel like a massive bitch slap in the face.
To me it seems like an extreme expression of power, and it's bizarre how many people are fine with it.
Meta forums on SE sites see disagreement a lot more often than the regular Q&A areas. Participants are usually longtime SE users with a minimum of site reputation required to take part. So while they are not the bulk of SE users, their voices should ring louder, imo.
Have I been missing something all these years? How do you see the vote counts on replies? I only see point totals (not the actual up/down counts) on my own replies and the OP.
Click on the votes themselves. This is gated at 1,000 reputation, partly as a rate limit (so that only a fraction of the site can do it) and partly to ensure that people understand the nuances of voting better.
Prashanth Chandrasekar was brought in as the hatchet man to facilitate the sale after destroying Rackspace. He fired the beloved community managers, and the community has become much more hostile since. Why are any of you surprised he is out of touch with the community?
SO should be run by a foundation funded by tech companies enlightened enough to realize the massive productivity gains it produces (with SE as a side goodwill project). In this era, that counts as wishful thinking. Instead we got a $1.8B sale; that's not pocket change, and now the profit must flow.
Only one, really: the Internet Archive. Mozilla [1] and the Wikimedia Foundation [2, talking about Wikipedia specifically] are too ideologically tainted to be seen as "enlightened". I have not heard of similar problems with the Internet Archive, so I hope that sanity prevails in that organisation and internet history gets recorded regardless of the ideological bent of what is archived.
In addition to the problems you listed, Wikimedia spends money like water, and Mozilla's handling of their main product, Firefox, could charitably be described as mismanagement. Neither organization deserves trust.
The Internet Archive that just risked their existence on a why-did-we-think-we'd-get-away-with-this blunder (charitably you could call this ideological blindness, at best?)?
But it is possible for tech companies, even ones we generally think of as evil, to collaborate (entirely out of self interest!) on mutually beneficial advancements.
Don't mistake "it would be wise for tech companies to support such a thing because they would get ROI" with "any company who supported such a thing should automatically be declared noble and perfect and of unimpeachable character forever"
Once upon a time the name Rackspace meant quality and the utmost in customer support. You paid for that, of course.
In 2015 Prashanth Chandrasekar became Vice President and GM of the company -- the leader in all but name. In 2016 Apollo Global Management bought the company and took it private. The rest, as they say, is history. Sad history. And yes, he is 100% to blame. Or, from his and Apollo's perspective, to praise. Gutting a company this big brings in a lot of dough.
It took him two years to sell Stack Overflow. It's a harder sell, after all, than Rackspace.
In a way, Community may be the past of AI... Meaning the "Great Pause" [1] could happen, not by decree/law/agreement, but through people who will (out of self-preservation?) stop adding quality content to the free data sources that are SO, GitHub, etc. If ChatGPT and others end up "stuck" with data up to 2023, and the data afterwards is whacked-out conspiracy theories and SEO-padded recipes, we may just end up with a "usefulness" limit for transformer AIs in general.
I get the point of the post, but I'm confused as to what it means for the community/public SO.
Is the essence really: "Please contribute manual qualitative solutions to public SO, so we can use it to train GenAI for our enterprise customers", or have I misread?
I think this non-announcement is the preamble to trying to change the licensing of SO content in such a way that other organizations can't use it as training data, but SO can. Obviously such a move would outrage the SO userbase so they're flailing about trying to spin it as something to do with 'community'.
Isn't it obvious that the new AI gods will require our regular offerings in the form of text, pictures, music, and anything else that can be manifested in this existence we call reality? Create, and praise the transformers.
That would be a hilarious future: humanity's job would be to upload our creative offerings to the AI, which would dispense some trivial amount of crypto based on how much it learned.
Your 8-year-old's cat drawing has been subsumed for 25c. Your essay on Philip K. Dick has been subsumed for 32c.
Transformers and AI will discover that 98% of human expression is just a rehash of something else. That 2% of original expression will become 50x as valuable instead of getting lost in the noise. This may lead to unpredictable disruptions in the extreme regularity of human behavior as people start to realize how boring and unoriginal everything they've ever thought and done is.
"as people start to realize how boring and unoriginal everything they've ever thought and done is"
Most people actually like stability and no surprises in their habits, like drinking coffee, for example. It should not suddenly turn into wine; that would freak them out.
And books and movies should follow a familiar pattern, or it will be perceived as disturbing. Slight variations are welcome, but nothing too drastic. So I doubt people will be truly shocked.
Comedy gold. So first AI takes content created by human labor without permission or compensation. Then it centralizes the sum of it and monetizes it exclusively.
But wait, it gets better. You will now also supply your labor for free to fix/curate the current AI errors. Which will make the AI even better and more profitable, and yourself ever more obsolete over time.
Picking our brains to create a giant private for-profit brain. Why would anybody willingly contribute to this scheme with their free time, against a backdrop where their own relevancy is at stake? There is no community without human incentives; every community will starve and die. You can extend this doom scenario to all open (web) content.
More pragmatically speaking, Stack Overflow is royally screwed. It was already on its way down for various reasons, but this is a shock. AI coding assistants are rapidly spreading and improving, making it inevitable that programmers will have less need to visit SO directly over time. Worse, those actually keeping the site running are a small group of hardcore volunteers whom you just alienated.
The future is even bleaker for their enterprise product. Having your private copy of some data and training on it for internal use is rapidly being commoditized. Many companies have a Microsoft contract, giving them (potential) access to Azure OpenAI, which lets you do just that.
But that doesn't take it far enough and is just an intermediate step. Soon you'll simply point your enterprise AI at everything: your wiki, your documents, your SharePoint, your email. All of the company's knowledge will be at your fingertips from any contextual UI, whether that is Word, Excel, Outlook, or your code editor.
And not just that: this enterprise intelligence will be combined with the world's intelligence. In such a future, would one seriously need a private copy of Stack Overflow? The future I describe is about a year away.
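To make that concrete, the pattern being commoditized is roughly the retrieval-augmented sketch below. Every name in it is an assumption, a stand-in rather than any vendor's real API:

    // A very rough sketch of the pattern described above: retrieval-augmented
    // generation over internal sources. All names here are hypothetical.
    type Search = (query: string) => Promise<string[]>;  // e.g. wiki, SharePoint, email index
    type Complete = (prompt: string) => Promise<string>; // the underlying LLM

    async function answerFromCompanyKnowledge(
      question: string,
      sources: Search[],
      complete: Complete,
    ): Promise<string> {
      // 1. Retrieve candidate passages from every internal source in parallel.
      const passages = (await Promise.all(sources.map((s) => s(question)))).flat();
      // 2. Ask the model to answer grounded in the retrieved context.
      const prompt = `Answer using only this context:\n${passages.join("\n---\n")}\n\nQuestion: ${question}`;
      return complete(prompt);
    }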
It sounds like he's suggesting that AI will open up a huge new pool of amateur developers, and those developers will need a community to turn to in order to learn how to leverage AI.
But that kinda ignores the impact AI will have on the concept of the SO community, or whether there'll even be a need for it.
I honestly think his take is a misread of what kind of “community” SO has. My use of SO has always been 99% functional and borderline mercenary. I’m not getting to know anyone, I’m not building relationships, I just need a question answered. There’s nothing sticky about this community other than it being a good place to get those questions answered. As soon as AI can do that, I’ll never return to that “community”, and I’ll miss it as much as I miss Yahoo Answers.
> At Stack Overflow, we’ve had to sit down and ask ourselves some hard questions. What role do we have in the software community when users can ask a chatbot for help as easily as they can another person? How can our business adapt so that we continue to empower technologists to learn, share, and grow?
Yes, why would one subject oneself to the toxic SO community when a bot can give you a tailored answer in seconds and doesn't close your question as a duplicate?
SO is a toxic cesspool, especially for people new to programming or just looking to learn. If ChatGPT and other AI tools can get the job done without resorting to asking anything from the SO “community”, that’s a victory.
The problem is if you solve the problem of not needing the community by training an AI on community content, your community will leave and then you're out of training data. AI is impressive, but I'm unconvinced it can answer truly novel programming questions.
ChatGPT cannot actually do anything, especially not in programming.
All it can provide is what the answer would sound or look like.
Be prepared for an absolute avalanche of bugs and security holes no one alive would've made. I make my living from debugging so I welcome the new job security but it's a massive net negative for society, no question about that.
I asked it to write a simple function to copy text to clipboard. It unnecessarily created an async function and forgot to pass the text in as a parameter.
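For reference, the correct helper is only a few lines; here's a minimal sketch, assuming the modern browser Clipboard API:

    // Minimal sketch, assuming navigator.clipboard is available (secure
    // contexts only). writeText is itself async, so the helper can simply
    // return its Promise; the key point is that the text must be passed in
    // explicitly -- the parameter the generated code forgot.
    function copyToClipboard(text: string): Promise<void> {
      return navigator.clipboard.writeText(text);
    }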
Next, it tried to use AlpineJS to create tooltips with NextJS, which is really not supported, and it couldn't fix the resulting bugs.
For sure, it's better than nothing, but you've got to be at least somewhat competent before you start copy-pasting code from it blindly.
For me, both. I search for something generic, and either an official manual, a bug report, or a Stack Overflow answer pops up. Or I have a specific narrow question I'm unable to google, I ask, and a more knowledgeable person appears and ties all the loose ends together in a way I had never thought of.
"It might be in the self-interest of each developer to simply turn to the AI for a quick answer, but unless we all continue contributing knowledge back to a shared, public platform, we risk a world in which knowledge is centralized inside the black box of AI models that require users to pay in order to access their services."
A shared, public platform, you say? The type of platform that AI will scan and train on? Therefore, by contributing to such a platform, you actively participate in AI centralization, no?
And the sweat you put in can now be monetised even more efficiently. Think about it: once AI has replaced us and we're all unemployed, we'll have nothing better to do than sit and post on SO to make these dudes rich.
I assume the CEO is intentionally vague. He knows what's up. There are very few people who enjoy interacting with SO; most just want to get to the answer.
If AI code assistance has that answer, it's game over. Not even leeching SO would be attractive, let alone contributing.
Some comments may refer to that discussion post instead of the original blog (e.g. ones talking about downvotes) because it was a separate submission to HN whose comments were moved to this page.
A bunch of the comments seem to be of the opinion that "using LLMs in SO" === "LLMs answering questions".
But that isn't the only task available for fancy auto-complete. E.g., the LLM could help novices write better questions, or include more/less context in an answer, with the person still being the arbiter of truth.
One of the biggest frustrations for people on SO is struggling to find a question that already exists - and then people get really angry when their question is closed as a duplicate... but what if the LLM could take proposed question text and point the person at an answered question so they don't even have to wait for an answer or a duplicate? It'd be much better than the current duplicate finder.
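That doesn't even need a full chat model. A rough sketch of embedding-based duplicate detection, where embed() is an assumed stand-in for any embedding-model API (this is not SO's actual duplicate finder):

    // Hypothetical sketch: compare a draft question against existing
    // questions by embedding similarity and surface likely duplicates.
    type Question = { id: number; title: string; embedding: number[] };

    // Cosine similarity between two equal-length vectors.
    function cosine(a: number[], b: number[]): number {
      let dot = 0, na = 0, nb = 0;
      for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        na += a[i] * a[i];
        nb += b[i] * b[i];
      }
      return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    async function findLikelyDuplicates(
      draft: string,
      existing: Question[],
      embed: (text: string) => Promise<number[]>, // any embedding API
      threshold = 0.85, // similarity cutoff, tuned empirically
    ): Promise<Question[]> {
      const q = await embed(draft);
      return existing
        .map((e) => ({ ...e, score: cosine(q, e.embedding) }))
        .filter((e) => e.score >= threshold)
        .sort((a, b) => b.score - a.score);
    }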
>but what if the LLM could take proposed question text and point the person at an answered question so they don't even have to wait for an answer or a duplicate?
I have used LLMs before to understand broken English. Being able to read someone's words without stopping every few lines is a major benefit for interlingual communication.
Has there ever been a case of a company the size of SO deciding that, given their skills and culture, they’re just going to wind the company down rather than try and compete in a new technical arena that they’re entirely unsuited for? I’d respect that.
I don’t know if what he’s suggesting here makes sense (I tend to think no), but I’m automatically a little skeptical of the typical response to a serious threat: “no way, y’all, this is actually good for us!”
Imagine google coming out right now and saying the future of AI is search ads.
"Just as tractors made farmers more productive, we believe these new generative AI tools are something all developers will need to use if they want to remain competitive."
Yeah, it also put a lot of them out of jobs. At least the tractor doesn't steal people's work to resell it; it's just a tool.
Funny thing: I just realised I have contributed absolutely nothing to SO since ChatGPT came out. I wonder if others have done the same. I also wonder whether open-source zealots are enough to keep these regurgitators up to date, or if their quality will decline as the dead internet approaches.
I am not updating or pushing any more code to my humble open-source repositories, nor am I answering questions on Reddit. If the plan is to put us out of jobs, they can re-ingest their own content. Like the HUMANCENTiPAD.
Online discussion happens at the edge of what is known, and that is where AI learns from. Yes, humans talking in a scrape-able way is needed for the future of AI, but there should really be some way for the teachers of AI to get compensated for their efforts in improving it. If Stack Overflow trains a model on contributor data, it should not call the contributors a community, but investors, with proper compensation.
Note the use and abuse of the term "community" to mean whatever is convenient for for-profit content aggregators. What could possibly be wrong with the New Digital Feudalism?
I would suspect that the company is too small to train their own LLM from scratch. But Stack Overflow probably has too much traffic to just pay for something like the ChatGPT API and build something on top of it. I'm not sure how many good options there are in between; can you realistically create your own LLM for this kind of specialized area without the kind of resources OpenAI/Google/Microsoft have?
Seems folks are mad because the Stack Exchange CEO wants to use their own LLM to provide answers on their site, which is something SE sites have previously banned as answers.
Years ago. AI bots like ChatGPT will make SO almost entirely worthless (worth so little there won't be enough demand to sustain a large service; it'll collapse, except as an archive).
I've been thinking of making a community in Japan for AI founders.
I think we should all be conversing over a variety of things and helping each other succeed. There is some fear of collusion, of course, but we can put the right guardrails in place.
AI is hard, sales is hard. We don't all need to compete.
I wonder what you all think?
I was hoping that the title "Community Is the Future of AI" would refer to the exciting community that's growing around Generative AI in SF. There was an awesome hackathon this past weekend and a fun networking event on Friday. "Cerebral Valley" (not sure if this is a company or an unofficial org of people from hacker houses) is a major (but certainly not the only) source of such events [1]. VCs, Twitter influencers, early-stage founders, and college grads have all been organizing other events, and so far, it's been a magical community (and hopefully this community is a net positive for the future of AI).
Open AI assistants should really be implemented on open protocols with open payments. Nostr is a perfect protocol for that because of Zaps which can provide a strong alignment signal to the (various different) communities — not just white “western” men.
Edit: since the current url has more information (as alecco pointed out downthread) I'm going to reverse this. Sorry everyone! I'm also going to edit the title to make it clear that it's not just the blog post but also the community discussion.
Ah - that makes a difference. Sorry! I'll see if I can fix this.
Edit: ok, I've merged the thread back to the original submission and edited the title to clarify that it has more information than just the blog post. It's late, of course, but better than never I guess.
The fact that this post by Prashanth Chandrasekar (SO CEO) is so downvoted makes me think he's onto something. This is like when artists cry because of how good Midjourney is and how they'll have to switch careers. If programmers are complaining about AI, it's because it's good.
This is a bad heuristic, strengthening your priors based solely off rejection.
Nobody questions that LLMs can write code. The question is whether Stack Overflow has a place in that future. The community's rejection, and the CEO's ham-fistedness, suggest the data he has are the data he's got. Which makes him uncompetitive vis-à-vis e.g. GitHub or any IDE.
No, the AI can only replicate what some other human figured out before and wrote down. Right now LLMs aren't really creative problem solvers. Therefore he needs a bunch of idealistic/stupid/unaware devs to keep sharing novel solutions that can be used to improve whatever AI he wants to build and sell to the very companies that will fire the same devs who figured out those answers.
Provided this is opt-in, I'm pretty OK with it, unless I'm missing something?
We trialed a private SO instance at work, and discoverability was an issue (now we have to search Slack engineering channels, check Notion, and check SO).
If I could quickly ask a question in natural language (in Slack, for example) and have it query our private SO, Slack, Notion, etc., I would be pretty game (provided this data remains private).