I’ve been developing software professionally for over 20 years now, and ChatGPT/GH Copilot are the biggest productivity enhancers I’ve seen since code completion.
Earlier today, I used ChatGPT to help me bang out a Ruby script to clone a repository, extract its documentation, and add those docs to another site that will serve as a centralized documentation source for a collection of projects.
I know Ruby and have been using it since 2007, but I still have to look things up all the time. By giving ChatGPT a bunch of poorly worded, lazily proofread commands, I was able to cut the development time roughly in half.
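The comment doesn't include the script itself, but the docs-collection step it describes might look something like this minimal Ruby sketch (all paths, names, and the `docs/` layout are hypothetical; a real run would shallow-clone each repository first):

```ruby
require "fileutils"
require "tmpdir"

# Copy a cloned repository's docs/ directory into a central site under
# the project's name. Pure file operations, so it is easy to test.
def collect_docs(repo_dir, central_dir, project)
  docs = File.join(repo_dir, "docs")
  return false unless Dir.exist?(docs)

  target = File.join(central_dir, project)
  FileUtils.mkdir_p(target)
  FileUtils.cp_r(Dir.glob(File.join(docs, "*")), target)
  true
end

# A real driver would clone each repo into a temp dir before collecting,
# e.g.: system("git", "clone", "--depth", "1", url, tmp)
```

Keeping the clone step (`git` on the PATH, network access) separate from the pure file-copying makes the interesting part trivially testable.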
It wouldn’t be nearly as good with a language I didn’t know, but saying it’s a waste of time and money feels like it’s really missing the sea change that’s happening.
Yes, it's going to excel at understanding programming languages, which have rigid structures and clearly defined commands whose inputs and outputs could never be misunderstood by a computer. And more power to ya if it saves developers a bunch of time, but it's being built and hyped as something that can do everything for everyone, which it very clearly can't. It has already pushed the customer service industry into a far, far more irritating direction, for example.
I sort of disagree. You could try manning a customer service hotline for a month or so. Ideally for a terrible company with an endless river of customers' self-pity and a lovely linear script that offers no good solution 90% of the time. You end one call, the next starts immediately: max productivity for the full 8 hours.
I’d say 50% of the time CoPilot suggests things that are not helpful, 40% of the time it saves me some typing, and 10% of the time it reads my mind and fills out a large section of code for me perfectly (using all my idioms, naming, code “accent”, etc). That 10% is spooky - I’m witnessing literal magic. This isn’t going away.
> biggest productivity enhancers I’ve seen since code completion
I’m still productive without ever having adopted any code completion features, and I think one question you’re leaving out is: “is your bump in productivity equal to, or preferably favorable to, its costs?”
What Rube Goldberg mechanism of misery has to run to help you complete your code?
Such as the blatant disregard for creatives’ licensing stipulations, tech companies building nuclear reactors, the political elevation of the scumbags who run these companies, et al?
~“I saved 10 keystrokes, and it only cost my grandchildren their clean drinking water.”
I hope they build lots of them. The alternative is that they use fossil gas which continues to heat up our atmosphere. Nuclear fission is certainly not problem-free (Hanford, three mile island, Chernobyl, etc), but it’s a whole lot better than pushing us past 2°C.
I highly recommend listening to this conversation with Jigar Shah, the head of the DOE's Loan Programs Office on this very subject. https://www.volts.wtf/p/nuclear-perhaps
> generative AI is a mimic of human action, parroting back our words and images. It doesn’t think, it guesses – and often quite badly in what is termed AI hallucination.
Human thinking is also guessing to a large extent. It is guessing with many feedback loops until a resolution that is good enough is found. AI indeed mimics human behavior and is quite good at it. What counts is how well it can guess. Since I use AI quite frequently for things where my output is part of the feedback loop, I must say its guesses are very often spot on.
About parroting: I'm quite sure AI does not simply parrot. I'm still often amazed by how well my questions are understood; it gets what I'm actually asking for. A parrot will never even process my question.
As a consumer, I'm devastated by how AI is destroying the quality of services I used to rely on. As a tech professional, I'm stoked about everyone who is investing their time and energy into gambling on text generators, instead of developing their technical skills. The technical debt it creates will just make those skills even more valuable in the future.
That the author does not think generative AI can solve social problems shows a serious lack of imagination. Therapy, medical diagnosis, improved machine translation, and better information retrieval are all social benefits. There are social costs, too, but that doesn't mean the technology is a waste of time -- it means the technology needs to be regulated.
It is very hard to get a good medical diagnosis. Doctors are overworked and can barely spend more than 10 minutes thinking about you. People with long, complex medical histories get completely fucked over, so these social tasks already are not being performed properly by humans, because the humans doing them are stretched way too thin.
AI is going to quickly surpass the quality of medical diagnosis by doctors, at which point hopefully people can see the right kind of specialist faster and get treatment quicker.
> AI is going to quickly surpass the quality of medical diagnosis by doctors
High doubts. We're still talking about GPTs? The 'transformer' part will be an issue. I use the enterprise version _a lot_, but one thing you really can't use it for is finding a bug, and considering how transformers work, that's not a surprise.
ChatGPT finds about half of the bugs in the code it creates when the issue is described, which is more than I expected and still low enough for my job security.
Given the history of AI is written in examples of "machines can't ever… oh wait one did that must mean this never needed intelligence", I don't rule out any specific breakthrough on any particular time scale.
LLMs are amazing for basic coding tasks. Codeium was able to do about 30% of my programming, before I realized it was why my laptop was overheating and turned it off.
Maybe that says more about how low-entropy code really is than it does about AI's intelligence, but in any case it works.
I'm not sure what else I'd ever use it for though. I have no interest in Replika or anything similar, and I want it to stay out of creative writing and personal communication.
> We urgently need the expertise of social scientists to be able to make much-needed collective decisions about the future of generative AI that we want; we can’t leave it to business, markets or technologists.
> Kean Birch is director of the Institute for Technoscience & Society at York University.
Academic sociologist argues that AI should be controlled by academic sociologists. Color me surprised.
What a mess of an article. Complete ignorance. If you haven't read it yet, I'd suggest not wasting your time. It's just a lot of unfounded assumptions and hand-wringing by someone who appears to have a (very large) axe to grind.
What benefits have we gained from generative AI (really ML) that currently outweigh the cost of researching, developing, and running it? Or do we stick with an expected value based on what generative AI may be able to do, and the next-step technologies that come after it?
The article is making pretty clear arguments for costs of generative AI, and raising the author's opinion that it isn't worth it. Just claiming it is in fact worth it without anything to support that isn't super helpful.
Last November, DeepMind released the results of a generative AI model that created theoretical molecular structures for over 2 million undiscovered synthetic materials. Within days, materials science researchers were able to confirm 700 of them. The sheer number of these new potential materials discovered is greater than has been created in the rest of human history combined. These are materials that can be used in manufacturing, energy production, and other objectives that are critical not just for advancing human society, but avoiding the impending crisis we are already facing.
Similar AI endeavors have been underway for medicine and human health.
The author is making extremely shallow, flawed arguments that hinge on an ignorant (or possibly, deliberately narrow-minded) understanding of what generative AI is, how it is already being used, and the magnitude of what is already being achieved with it.
It will be interesting to see how many of those, if any, pan out to have a meaningful use at scale. If I remember right, those 700 or so were synthesized in a lab but I don't think we know much beyond that.
We'll see if any of them end up being viable as far as manufacturing and material availability go, and whether they're better replacements for existing tech like batteries. The hope is that we'll have Jarvis inventing a new material for Iron Man's suit, but we could always end up with an endless pile of technically feasible but functionally useless materials.
A few billion moderately interesting images, and a collection of slightly weird nearly-free marginal cost interns who studied literally every subject but somehow still only act like freshly minted graduates with no real-world experience.
With EVs the focus becomes how wonderful they are because they do not burn fossil fuels in their engines. Great, but what about all the other issues (including non-renewable resources in the rest of the supply chain involved in building the EV)? They're greenwashed away: no need to discuss public transit and densification. EVs will fix everything, and we don't need to change our lives in any significant way.
They are a way to continue going down the wrong path and feel good about it.
It's not a waste of time, but IMO it's also not going to be disruptive. It can be a good sounding board when you know the subject; otherwise it's mostly useless. It does make things convenient, but how much value it adds is yet to be seen.