I'm getting pretty tired of these posts dunking on ChatGPT making it to HN. Yes, we know if you try to see if it is "smart", it'll fall down. It's pretty easy to show how silly it is compared to things a human could do, etc. Haha, robot do the mistake again!
This is the equivalent of someone Googling something back in 2000 and saying "wow this is just a joke because I searched for 'cat' and got a few dog pics. It'll never be useful".
I've had ChatGPT debug my router config, teach me esoteric C++ libraries with examples, set up config files for various things (e.g. CMake, Kubernetes) and be able to explain to me what it's doing when I ask it, and even help me learn a foreign language (hint: "I want to have a conversation in $LANGUAGE that I'm learning, so please respond to my queries using very simple words and phrases in $LANGUAGE", and yes, it'll do it).
Sure, you can have all kinds of "fun" asking it "are you a robot herp derp", but it does incredibly useful things right now.
No, it isn’t like that at all. People are claiming that ChatGPT is either intelligent or proto-intelligent (we’re on the precipice!). Which means being way, way more capable than a search engine.
Appreciate the insight. And I agree it's incredibly useful - I use it hundreds of times a day too! I guess my point (which maybe I didn't make clear in the post) is that it's just a tool and shouldn't be viewed with the mystique that some people ascribe to it.
I've been using it to explain minified code I come across and unusual code patterns I don't understand. It's similar to a coworker in that it's not always right but can generally point me in the right direction if not. But one that responds within seconds and not with a random delay of 5 minutes to 3 days. It also means that I'm interrupting my real coworkers less.
Yes! It's basically like having a knowledgeable senior dev, but on pretty much everything. Maybe they don't always get the details right, but the insight in explaining the "fundamentals" of how something works is key, and ChatGPT usually does this quite well.
I used to be the "C++ wizard" who everyone in my team would bug when they'd come across some weird behavior, but now I just say "ask ChatGPT about $footgun in C++". It actually explains it super well!
I was watching "HyperNormalisation" by Adam Curtis for the second time. In his segment on Eliza, an early example of a chat bot, I realized that Curtis makes a mistake in his interpretation of Eliza. For Curtis it's narcissism that makes Eliza attractive. Curtis often levels the charge that Westerners are individualistic and self-centered.
But when an interview with the creator Joseph Weizenbaum is shown starting at 01hr:22min, he never says that. He relates how his secretary took to it, and even though she knew it was a primitive computer program, she wanted some privacy while she used it. Weizenbaum was puzzled by that, but then the secretary (or possibly another woman) says something like, "Eliza doesn't judge me and it doesn't try to have sex with me."
What jumped out at me was that Weizenbaum's secretary was using Eliza as a thinking tool to clarify her thoughts. Most high school graduates in America don't learn critical thinking skills as far as I can tell. Eliza is a useful tool because it encourages critical thinking and second order thinking by asking questions and reflecting back answers and asking questions in another way. The secretary didn't want to use Eliza because she was a narcissist, she wanted to talk through some sensitive issues with what she knew was a dumb program so she could try and resolve them.
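For anyone who hasn't looked at how Eliza-style reflection actually works, here's a toy sketch: match a pattern, swap pronouns, and hand the statement back as a question. The patterns and word list here are illustrative, not Weizenbaum's original script.

```python
import re

# Pronoun swaps so the user's words can be reflected back at them.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(statement: str) -> str:
    """Turn a statement into a reflective question, Eliza-style."""
    m = re.match(r"i feel (.*)", statement, re.IGNORECASE)
    if m:
        return f"Why do you feel {reflect(m.group(1))}?"
    m = re.match(r"i think (.*)", statement, re.IGNORECASE)
    if m:
        return f"What makes you think {reflect(m.group(1))}?"
    return "Can you say more about that?"
```

A few dozen rules like these were enough to get people talking through their problems, which is the remarkable part.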
That's how I feel about ChatGPT so far. It's a great thinking tool. Someone to bounce ideas around with. Of course, I know it's a dumb computer program and it makes mistakes, but it's still a cool new tool to have in the toolbox.
Thanks for the link. I've been journaling for over 20 years. I want to feed my writings into an LLM because I know I've come up with some good answers for personal issues before, but they are buried in my writings. It would be great to pull out a summary of my thinking about some recurring issue.
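If anyone wants to try this, you don't even need embeddings to start: chunk the journal, rank chunks by crude keyword overlap with your question, and paste the top few into a model as context. A minimal sketch, assuming plain-text journal files (the function names are mine, not from any library):

```python
from pathlib import Path

def load_chunks(journal_dir: str, size: int = 500):
    """Split every .txt journal file into ~size-character chunks."""
    chunks = []
    for path in Path(journal_dir).glob("*.txt"):
        text = path.read_text()
        chunks += [text[i:i + size] for i in range(0, len(text), size)]
    return chunks

def top_chunks(chunks, query: str, k: int = 3):
    """Rank chunks by shared words with the query (crude retrieval)."""
    words = set(query.lower().split())
    scored = sorted(chunks, key=lambda c: -len(words & set(c.lower().split())))
    return scored[:k]
```

Swapping the overlap score for proper embeddings would help, but even this gets you "what have I written about money anxiety" in a few lines.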
This is especially true because I'm a senior citizen and have noticed a decline in my memory.
Curtis' shtick gets old after a while. To hear him tell it, we can no longer make sense of the world because it's complicated and politicians lie to us all the time, so we retreat into a mirrored room of the self, gratifying every desire to escape thinking about our futile lives and soon-approaching death.
I've watched just about everything he's made and honestly, he starts to sound like a moralizing preacher after a while. We are selfish, self-centered, infantile, wanting computers to watch over us so that our worst impulses don't destroy us, and so on. Things a preacher might say.
But his documentaries prompt me to think in different ways with other ideas, so I appreciate them for that.
I know nothing about ML and AI, and what GPT-3 does looks absolutely magical to me. It helped me write unit tests the other day, and the day before it generated a migration script for a new feature I was working on. A week earlier it helped me improve a letter I was writing to my HOA to get some stuff fixed at the house. So all these posts dunking on GPT-3 seem very unnecessary. What it does under the covers might not be all that impressive to a trained eye, but to a layperson like me it is by far the first ML model which has made significant contributions to my daily tasks.
Yes, GPT-3 is not better than an expert at the same task. But most people don't have access to an expert's time, so a good-enough solution being widely available is game changing.
It's a hot take headline and the author says they don't even have time or energy to effectively write more than the headline, so here's an image of a GPT-3 conversation to read.
Pepperidge Farm remembers the same types of dismissals when Netscape first came about. "I already have a CompuServe account", "It's just a bunch of geeks talking about Frank Zappa", "You can't DO anything with it".
It's hard to perceive clearly what you're looking at when a new thing comes along, precisely because it's new. I think GPT-3 fits that.
It's Netscape. A messy buggy rapidly evolving piece of software that's opening up new ideas faster than it itself is iterating.
Anyway, what else you gonna work on? Yet another lightweight typescript framework for making web apps? Heh.
Marcus has made some very good contributions, including both original research and a good exposition/popularisation of a fairly mainstream skeptical view.
GPT-3 is a system that can appear to be close to artificial general intelligence (AGI), but it's not. Under the surface, it's fundamentally different. Which is why I'm skeptical that it can incrementally evolve into AGI. But who knows, I'm really curious about GPT-4.
Users are infatuated with and romanticize GPT. It’s similar to translation software. Compared to the human mind they’re brute force algorithms with access to warehouses of personal data. Any mention of this fact sends users into fits. It’s borderline compulsive. I often hear people saying “I don’t understand how it works but I think it’s impressive” and that sort of says it all. GPT attracts people who don’t know and/or don’t want to know how it works. They can’t reconcile their appreciation for it with the fact that it’s unimpressive. It’s a golden apple.