Hah! My first thought was, "Oh, is their plan this time to give the playbook to Microsoft?"
I confess that the current ChatGPT hype wave leaves me feeling cold. A lot of people are obviously excited about it. But then, a lot of people have been excited about VR and blockchains, and I'm still waiting to see significant uptake. I have to wonder if this is another area where it's more sci-fi driven novelty hypnosis than the beginning of a revolution.
Most people have heard of blockchain (through crypto), but non-techies are _actually using_ ChatGPT for daily tasks. Departments of education and universities moved fast to integrate anti-ChatGPT measures into their programs.
I suppose I should have been more specific. I can believe people will be using this. I'm just not yet seeing how it will make things net better. So yes, academic cheaters will surely be trying to use it, although we'll see whether they actually succeed.
But continuing with your analogy, I'll note that the people who fight financial crimes and fraud have absolutely had to integrate responses to cryptocurrencies. That didn't prove that there was lasting value to the cryptocurrency hype wave.
People are comparing LLM hype to crypto and VR hype and that’s really not accurate. Those are both always in a state of “someone is using it now, and a ton of people will use it in the future”. ChatGPT is in a state of “a ton of people are using it now”.
It would remind me of the explosion of social networking, except it doesn’t have network effects! It’s crazy how much it’s getting used by all sorts of people not because they want to interact with their friends (messaging apps) but just because they want to use it.
I was a big AI and even just generally ML doubter until I started using it as part of my workflow and reading about how other people are, too. At that point, you kinda have to update your priors.
I will note that you didn't use it for most of the coding. And that looking at your code, you also seem perfectly capable of figuring out the AppleScript if you had wanted to. In an amount of time that would not have wildly increased the total project time.
So overall, this looks very "stone soup" to me. Did it get you over a hump? Surely. Did it actually contribute much in terms of code? Not from what I see in your script.
Not op, but this seems like a particularly weird take...
I do support work myself, and our developers have a function that is broken and shoving too much stuff into the database. This crap is compiled into a DLL so I can't do anything about it directly. Went to GPT and said "hey, I have a table named X, when Y updates write a trigger that deletes this"
In a few seconds it dumps out a working trigger. Now, I've never written one of these before, and doing SQL stuff isn't really my forte, but with a basic understanding of what I needed done I got working code with an explanation faster than I could have parsed a how-to.
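For a rough idea of the shape of such a trigger (a sketch only, with hypothetical table and column names, using SQLite via Python just to keep it self-contained and runnable; the real thing was against our production database):

    import sqlite3

    # Hypothetical stand-ins: `jobs` plays the role of "table Y" (the one that
    # gets updated), `audit_log` plays "table X" (the one being flooded).
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE jobs (id INTEGER PRIMARY KEY, status TEXT);
    CREATE TABLE audit_log (id INTEGER PRIMARY KEY, job_id INTEGER, detail TEXT);

    -- Whenever a row in jobs is updated, delete the rows that the broken
    -- function keeps shoving into audit_log for that job.
    CREATE TRIGGER prune_audit_log
    AFTER UPDATE ON jobs
    BEGIN
        DELETE FROM audit_log WHERE job_id = NEW.id;
    END;
    """)

    # Quick check that the trigger actually fires.
    conn.execute("INSERT INTO jobs VALUES (1, 'running')")
    conn.executemany("INSERT INTO audit_log (job_id, detail) VALUES (?, ?)",
                     [(1, "noise")] * 5)
    conn.execute("UPDATE jobs SET status = 'done' WHERE id = 1")
    print(conn.execute("SELECT COUNT(*) FROM audit_log").fetchone()[0])  # -> 0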
Even if AI doesn't contribute more than that, for lots of use cases it's more than enough.
Sure, I totally believe you found it valuable, and it's great that it helped you solve your problem. As with simonw, it sounds like it got you over a hump. One you could have gotten over on your own, but maybe you didn't want to or couldn't take the time. But in both cases, what GPT coughed up was a dead end in development terms. It's not like you could just ask it to fix your app or to replace your app.
This sort of reminds me of the 3d-printing hype cycle. People were very excited! We could make any object! We could soon be 3d-printing 3d printers! But in practice, 3d printing is now mostly useful for small, marginally valuable stuff. That's not nothing; I have friends who are excited about their 3d printers. But I also have friends who are excited about woodworking and accordion playing and falconry, and none of those are particularly world-changing.
Aha! Yeah I see where you're coming from now. There's a huge amount of hype around this stuff which is completely unjustified: it's not going to replace programmers! It's (hopefully) not going to put vast numbers of people out of work.
My interest in it is this: given this (flawed) technology exists, what are the things we can use it for that are genuinely useful and help us build interesting things that we may not have built before?
I've been finding an increasing number of areas where it gives me a 10-30% productivity boost... and a few areas like the above one where that tips me from "not going to do that project" to "doing that project". Those are what's exciting to me, far away from the breathless hype around this stuff.
Yes, exactly. This article is about how Google wants to put this into everything. "A new internal directive requires 'generative artificial intelligence' to be incorporated into all of its biggest products within months." I think a lot of people are skipping the "what are the things we can use it for that are genuinely useful" step. Glad to see people actually doing that, so thanks for explaining further.
I get that. But for these sort of "stone soup" use cases, it's not really a technology for creating software. It's a technology for helping people manage feelings. That's surely a valuable thing, but that's very much not what it's being sold as.
To my shock, she started explaining to me how awesome it is and how all of her teacher friends are using it to offload a lot of the boring work like writing comments and stuff for students' report cards. They are required to write numerous paragraphs for each student several times.
Something that in the past would take my wife an entire week each time.
These are people that need help setting up Wi-Fi on a laptop. And they are in love with ChatGPT.
Sure. Prose generation is a use case I'm hearing a lot. I'm just not convinced that it's value generating. E.g., ChatGPT doesn't know shit about the students, so it is being used as a bullshit generator here. If I were a parent, I would be incandescent with rage to learn that one of the few communications I got from a teacher all year was machine-generated bullshit.
Wouldn't the alternative be to reconsider the merits of the 40 hour bimonthly task?
If I'm understanding right, your wife is coming up with some bullet point style feedback about each of her students as part of a recurring evaluation, then using ChatGPT to generate lengthy prose from those bullet points.
Hailing ChatGPT as a savior here feels something like the tail wagging the dog. Why not just change the nature of the task so that what's required is to provide the quick bullet points, instead of using language model prose generation as a go-between for communication from teacher to student? What value is being added by the middle ChatGPT layer? More importantly, what value was being added by the 36 hours of marginal effort previously spent doing this manually?
That's one of my main hang-ups with all the proposed use cases of ChatGPT - it seems we've created arbitrary problems and now have this complicated tool to solve them. Which in turn will enable us to keep making bigger and more arbitrary problems. But the problems don't need to exist in the first place.
Exactly. If the only valuable part is the "general points that are relevant", then I would much rather receive those. As you say, perhaps in bullet points. Adding a bunch of utterly predictable prose around that doesn't make it better for me, it makes it worse, because it just dilutes the signal.
An alternative would be for her to write the actual comments that she has, and not puff them out to meet some artificial "must be multiple paragraphs even if the information to be conveyed doesn't justify it" requirement. Concise communication is the only thing that isn't wasting effort for both the writer and the reader.
Reducing internal pushback to customer hostile bureaucratic requirements imposed by management on employees is a harmful effect of ChatGPT, not a benefit.
She would be fired for refusing. She’s a teacher. She has almost no freedom in anything. It’s also an international standard for her school. It would require an uprising of tens of thousands of teachers world wide.
I agree with what you say. However she is not willing to fight a worldwide bureaucracy.
As soon as people figure out teachers are using this, schools will either ban it or come up with another useless but time-consuming task to signal to parents that the teacher cares individually about their child enough to do the time-consuming task.
Prose writing used to be a fairly honest signal that someone smart spent time writing it for you. Now it's not, so we will invent another signal that someone smart spent time on you.
Sure, but we should only automate the tasks that are of net benefit to society. Automating robocalls freed up lots of schmucks from having to manually bother people all day. But that didn't make the world better. It made it worse.
Yep, when I went home for Christmas and mentioned it off-hand my BIL was like "Oh yeah, we use that for our business to write tweets and some other things" (self-employed, online fitness trainer with multiple coaches working under him and my sister). I talked with him last night and they are paying for Jasper.ai for their team to collaboratively work on documents. All of this with zero input from me. My parents also recently started playing with ChatGPT after a very basic crash course and the next day they were gushing about how easy it was and how helpful it was for repetitive text they have to write.
My entire family fully understands it's just a tool, you need to check its output, and it's not perfect, but it's saved them a bunch of time and allows them to focus on what they are best at by getting a good starting point from *GPT.
I'm not fully convinced it was a net-positive, because sometimes asking the "dumb" question shows that other people in class had that question but didn't want to ask, or didn't know they had that question until someone else voiced it.
I agree. However the ChatGPT situation is the first hype that’s really been harmful rather than just disruptive. It can and is being used to generate low accuracy noise which degrades quality information. I suspect our demise isn’t going to be a big bang but a slow fizzle of information decay. And this is the start of it. At best it’s going to make us stupider.
Information decay started around the year 2000. I'm not a hipster, but I experienced the internet pre-2000, which was a semi-exclusive club for educated and intelligent people with accurate search results.
After the masses poured onto it the results became more half truths and opinions.
And now you have "AI" feeding on those half truths and opinions.
Whatever could go wrong?
However, imagine if it were fed facts only.
Imho, info decay took off when social metastasized, which is to say ~2010+.
It was an interesting time, because (from memory) 2000 was still "It's pretty complicated for average person to create and post content on the web" and transitioned to 2010's "There are highly polished, easy to use, free SaaS solutions that a trained dog could use." With mobile versions of same in the next few years. (All dates mass market penetration, not first release)
There was a direct correlation between ease-of-publishing and less intellectual discourse on the web.
"Info decay" was definitely a good way to put it. It wasn't that people were necessarily dumb or angry, but that there was a lot more honestly incorrect information online.
Before, there were crackpots, but it was pretty easy to tell they were crazy.
Having seen both, the banal wrongness is far worse.
I think that's one reason wikipedia really took off ~2005+. People were craving an easier way to get even roughly-right information.
Well, you can't train an LLM on "facts alone", since they are basically throwing darts at the next word, not using information to craft a message. But I hear your larger point: it'd be nice to have an agent that was trained on Wikipedia information, but already spoke a language.
I think the GGP comment started this thread about a knowledge engine, not a "word generator", if that makes sense. Training on Wikipedia in this case means "training to put together words like Wikipedia authors do", right? Not "learning the facts of Wikipedia to provide useful synthesis and cross-referencing", though it probably does OK by accident.
Recently I tried to search for CO2-consumption (as in usage as an industrial gas) on Google. It didn't return one useful hit; everything was about emissions.
So: we are already there, all without ChatGPT... ChatGPT (with its cool ways to make it nondeterministic, like replacing tokens with synonyms and raising the temperature, or retraining it...) will definitely make things worse.
Probably the way to go is for {academia, sects, orgs with a sense for humanity, ...} to build stores of attributable information and then start shunning the branches which produce bullshit (this is what scientific publishing should do already... it doesn't... even without ChatGPT there are already templated articles strewn around the corpus and no one cares). The main issue there is that this means they will also have to stop using a lot of technology (and buy a 32nm foundry...), because many orgs are probably already poisoned.
There were a number of other relevant results. Papers on various applications of CO2, forum posts, etc. Nothing on emissions.
Sure, Google is a lot worse these days than it used to be, but it still seems plenty usable to me and I have no issues finding what I want with some basic operators.
The problem imo is that Google was good enough that basic operators were not required, as such the avg Joe is amazed when finding out that such operators exist, and Google doesn’t really promote the advanced search page these days.
Google has gotten way better at interpreting search queries formulated like natural speech. It used to be you had to know what sort of keywords together would get you the results you were interested in. Now I can type in "what is the capital of France" and have a Wikipedia article about Paris pop up, an Encyclopedia Britannica entry, and World Population Review for the city. That wouldn't have been the case in 2005.
It's become a tool that basically anyone can at least get some information out of. Googling used to be a skill that non-technical people struggled with.
I feel like it's been half decent at that for at least a decade? That's my typical way to google something dumb like "how do I know if my dog sprained or broke something?"
2005 is well over a decade ago and that's very much on the conservative end of when I remember the selection of search terms as being very important and a sort of art form.
Specifically because they said they only found stuff on emissions. Never tried it without.
It's one of the most common patterns I come across. I search word X but find a bunch of word Y stuff that's not what I'm looking for, because word X and Y appear together a lot in Y's context. So I add -Y to the query.
Sure this talks a lot about emissions, but it also seems to include a detailed breakdown of various industrial uses for CO2, with numbers, projections, statistics and all.
Information decay started with forums for the masses. It has become increasingly hard to actually find useful advice, because lots of the "answers" you find are actually partly wrong, mostly because people without know-how feel like they need to answer questions to boost their karma...
Many of my coworkers feel the same. And this started way before GPT was conceived.
Besides, your post somehow feels like generic technology-angst to me. It seems like whenever we invent something new, there are those people that predict this is finally our demise...
GPT is a tool, like every other technology. Use it wisely, and you will likely benefit. Use it without engaging your brain, and you will likely see negative effects.
Noise? We've been suffering from it quite some time now. Once complete control (of information) wasn't possible the next choice has been baffle them with bullshit (and repetition). This is happening on all fronts, all ideologies. Sure some may be more subtle than others but it's everywhere. To believe otherwise is simply naive.
ChatGPT is a symptom that perhaps finally raises discussion of the root problem above the noise. Yeah, ironic.
The solution is already predicted by scifi. We will start building new sub-internets with stricter, formalised, rules to reduce spam and noise. The outside internet will become the wild, noisy internet where few people reside. It's like in Cyberpunk 2077 where there is the "wall" that keeps all the AI out from the mainstream internet.
> new sub-internets with stricter, formalised, rules to reduce spam and noise.
I'm not sure more, better rules are going to stop a rule engine from participating. If anything, formalized rules merely make it easier to train the machine learning system du jour to follow them.
My modest proposal is to go the other direction - fill the entire internet with garbage data so that the generative models trained on it become useless. Clearly there's nothing that could go wrong with this plan.
Not rules as in "match this regular expression", rules as in "if I don't know you, you can't join", BBS-style. Which is a bit unfortunate but also definitely what's going to happen.
The masses are still on mainstream platforms, but more and more people are nauseated by the instagrams of this world.
Sooner or later some group will realize it becomes necessary to dissolve the bands which have connected them with another, and you'll have a formal split.
I don't really agree it's unfortunate. I think the "throw everyone into the same pot" has been an unfortunate side effect of the commercializing of communication. BBSes and forums were allowed to grow organically. They weren't for everyone, but anyone who wanted to be a good-faith participant in the conversation was generally welcomed, while baddies were moderated out by moderators who were actually empowered. Some would say over-empowered (there was plenty of drama about moderators having biases and what-not) but then communities could split without people necessarily feeling like they had just been excommunicated from a global forum like Facebook or Twitter.
In short, I think people are wired to do better in smaller groups and I welcome this balkanization, albeit with the caveat that the communities can interconnect and grow/shrink/die/emerge organically to suit the needs of the people within them instead of commercial investors.
I'm saying 'unfortunate' because the open internet has some beautiful corners. There is some good in the world, and it's worth fighting for[0].
But on the other hand it's hard to deny that the mainstream internet is getting worse and worse. I grew up with the internet as my nanny, but I'm moving closer and closer to the idea that smaller groups focused on specific interests really do provide better experiences.
Perhaps it was never meant to be, and the sweet spot is just living your life completely offline and using the Internet for Wikipedia and Whatsapp.
> My modest proposal is to go the other direction - fill the entire internet with garbage data so that the generative models trained on it become useless.
I've been thinking for a while about writing another network on top of IP with decentralized DNS (dealing with domain names similarly to how SSH deals with public keys), making it open-source, free, with a one-click installer, and strictly prohibiting any commercial activity on it (no ads, no acting on behalf of a company), or at least making commercial activities selectively opt-in for the end user. The same could be done for bots. Some of the technical aspects are already solved by others, but the problem is that such a project would need some larger organizational support with lawyers in order to make the EULA enforceable.
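To make the SSH comparison concrete, here is a toy sketch (purely illustrative, with made-up names; a real design would obviously need key distribution, rotation, revocation, and so on): a name gets pinned to the first public key it presents, the way a known_hosts file pins a host key.

    import hashlib
    import json
    from pathlib import Path

    # Toy trust-on-first-use (TOFU) name store, in the spirit of ~/.ssh/known_hosts.
    # "known_names.json" is just an illustrative local file.
    STORE = Path("known_names.json")

    def fingerprint(public_key: bytes) -> str:
        return hashlib.sha256(public_key).hexdigest()

    def resolve(name: str, presented_key: bytes) -> bool:
        """Trust the name/key pair, pinning the key on first contact."""
        known = json.loads(STORE.read_text()) if STORE.exists() else {}
        fp = fingerprint(presented_key)
        if name not in known:
            known[name] = fp                       # first contact: pin it
            STORE.write_text(json.dumps(known, indent=2))
            return True
        return known[name] == fp                   # later contacts: key must match

    print(resolve("example.p2p", b"alice-public-key"))    # True (pinned)
    print(resolve("example.p2p", b"mallory-public-key"))  # False (key changed)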
In other words, it's more of a social than a technical issue.
I don’t think an EULA will stop bots, spam and scams.
And without commercial buy in, I can’t see why anyone apart from computer geeks and scammers would want to use it. The first question your average users would ask is “why isn’t $SHOP on there”.
Much as I dislike the modern web, it’s what the people have chosen in exchange for free services (like email) and convenience (like online shopping).
I don't care at all about average users, this network would be an alternative on top of what we have and can be as elitist as the small user group wants. The difficult part is having the power to sue organizations who violate the EULA (a social and financial problem) and getting a decentralized banning system to work (a very tricky technical, distributed social choice problem).
We already have a few options in that domain: IPFS, onion sites (Tor).
And to repeat my earlier point, an EULA isn’t going to keep your vision clean. Malicious actors aren’t going to give a hoot about any EULA. And any business that might take an EULA seriously would simply choose not to use such a minority platform anyway.
Also if this “EULA” is targeting how companies use your platform, then it’s not an EULA. What you’re talking about is a different type of contractual agreement.
I can only imagine that banks, telecoms (and states) verify that you are a real person. Maybe we can have some organizations that disconnect your accounts from your identity. Maybe not, and everyone has to use their real identity. In the time of ChatGPT there will be no other way to distinguish real people from bots.
Like how a Facebook or GitHub account depends on your phone number, or free trials depend on your credit card number.
> It can and is being used to generate low accuracy noise which degrades quality information.
I would argue this is a problem stemming from the way "AI" is presented rather than anything to do with "AI" itself.
That is to say, fucking nobody trusts autocorrect to be correct and "AI" is really just a more complex form of autocorrect.
But people hear "AI" and think AI as in Transformers, Terminator, R2-D2, etc. They're intelligent, right? And that's where the presentation is doing significant harm. There is nothing intelligent, let alone sentient, about "AI".
I'm not sure if it's that chatGPT is that good, or that traversing today's attention-weaponized web is that bad, but I think I'm getting about a 30% productivity boost by replacing:
Google -> Stack Overflow
With...
chatGPT --maybe--> Google --> Stack Overflow
I won't make predictions about the future, but it's been pretty transitive for today.
Same here, except I dropped Google as my search engine of choice a long time ago.
ChatGPT is great for doing some of the heavy lifting. I pasted in a list of raw, unprocessed data and asked it to give me unique values sorted alphabetically. I know I could have done it in code, but just being able to paste it in, give a plain English command and get the same results was fantastic.
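The code version of that particular task is admittedly tiny - something like the sketch below, assuming the raw data sits in a hypothetical file with one value per line - but the point was not having to open an editor at all.

    # Roughly the equivalent script: one value per line in, unique values
    # sorted alphabetically out. "raw_data.txt" is an illustrative file name.
    with open("raw_data.txt") as f:
        values = {line.strip() for line in f if line.strip()}

    for value in sorted(values):
        print(value)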
What I do is I don't ask it to generate results, but to give me a program that does the processing, then have it give me a test suite where I can 'trivially' (usually) verify the program on synthetic data (which, of course, I also have ChatGPT create for me). It usually even includes good edge cases.
I've had a few times where I didn't like the library that it used to come to the results, I just told it 'use library xyz' and it would rewrite the whole thing using idioms native to that library. It's amazing. It's eliminated like 75% of the drudgery that I've come to hate so much about programming the last years.
> How do you verify if what ChatGPT returns is actually correct?
A bit ironically, if it is regarding a fact, I perform a search on DDG or google and find an authoritative website or paper to corroborate it. So it still feels like search outside of an LLM is an important tool. If the result is code I can test it myself.
There needs to be some lexicon for this kind of thing. There's a parallel with NP problems: some things are hard to figure out but easy to verify, others hard to do both.
It works, too. I asked it for the history of a term, and it gave a very plausible-sounding answer citing three different links. All of them were to the same page that didn't say what Bing said it did. A rare miss, but at least I could check its work.
100%, this is a tool that is going to be on every person's phone in the next 5yrs. Whether it's ChatGPT or via search engines or another interface. This has massive implications for the world.
The recent South Park episode where the boys are using ChatGPT to write the perfect romantic replies to their girlfriends really opened my eyes to how impactful this is going to be, not just business copywriting/education essays/etc but even for social interactions like dating (Tinder is going to get interesting).
It's like an autocomplete for your day-to-day life.
Anecdotally, it's already being used in day to day life. My friends (who work in non stem fields) have been using it to settle arguments. My neighbour has been using it to ask for gardening advice. An acquaintance at the mosque used it to help him create posters in Photoshop (he asked it how to implement masking and such). It's actually incredible how quickly it's being adopted.
I think you will be surprised here. It's a huge value for many average people, highly accessible and basically free. This is not an adoption wave, but an unprecedented tsunami!
> ChatGPT has over 100 million users 2 months in. It's not even remotely comparable to VR or blockchain.
First, that just means ChatGPT had 100m people enter their email into the registration page. Second, none of those 100m users had to invest any money, unlike VR or blockchain.
> But then, a lot of people have been excited about VR and blockchains
Dunno about blockchains, but I'm sure there aren't 100 million VR units sold. Plus you don't need to pay upfront to use chatGPT to try it. And everyone who has tried it has been amazed, or at least, impressed at how good the responses are.
Not saying that the language models and AI hype is to be believed, but there's substance there. If you are an investor, you should be wary of missing out. And if you're not an investor, you should be wary of losing your human advantage in the economy.
> And everyone who has tried it has been amazed, or at least, impressed at how good the responses are.
This is almost the definition of a novelty effect. Consider, for example, 3D.
Stereoscopic 3D has gone through at least 5 waves of people trying it, saying, "Amazing! This will change everything!" But so far every time everything has not been changed. (The 5 waves I count: Brewster stereoscopes in the 1850s, 3D still viewers in the 1940s, 3D movies in the 1950s, VR in the 1990s, and 3D movies/TV starting in 2009. Assuming that we see the current wave of VR/metaverse hype as still open, of course.)
The question for me is what happens once something gets past "amazing". For example, consider Shazam's music recognition technology. It was widely considered amazing at the time. But in very short order it became boring, as it had basically one use case, one that did not matter much to most people.
Yeah I spent a week or two getting excited playing with ChatGPT but then I got bored. I also bought a Quest 2 a while ago and sold it after a few months, so I guess the novelty just wears off quickly for me.
I think that just puts you ahead of the curve. But perhaps not very far ahead.
I rented the Quest for a couple weeks at Christmastime. The first week everybody thought it was amazing, me included. Things tapered off in the second week, with the kids back on their Switches and the Playstation. When I mailed it back nobody even noticed.
I hear a fair number of anecdotes like that. And I find very few people for whom VR games are a daily driver the way, say, mobile games are.
It is a common and uninformed HN fallacy to equate AI and Blockchain.
While Blockchain only has hype, AI has hype and real applications. While there are enough bottom-feeder AI bros on Twitter and Discord, AI does have real applications and it solves real problems and makes people real money.
You need to shift your focus from the AGI and LLM hype to real application areas.
I have shipped an end-user-facing real-world AI product myself. It saved each client (non-tech companies) 2-3 million USD per year, and made our company money.
I can say with authority that if you think AI and blockchain are the same, you are plain WRONG.
That's why I wrote non-tech companies in parentheses.
For a tech company, you can sell codegen software where the customers solely determine value.
You can sell another tech company Blockchain Snake Oil and make money off of it.
My work was not such a case. End users and consumers saw real value which made their lives better in a small way, and it helped save our customer, a business, millions of dollars.
I really care about my work, and rejected many offers where I could have made more money working on snake oil recipes.
I think at the very least this is going to lead to some major changes in UI design, but first they need to reel it in and get it under control instead of training it off of Reddit posts and then making a surprise pikachu face when it turns out to be an arrogant cunt.
But it's definitely not going to go nearly as far as people are predicting. The more i use it the more it becomes apparent to me that there's no real intelligence behind it, just large-scale pattern recognition that makes it seem real because I'm not intelligent enough to comprehend the patterns.
> apparent to me that there's no real intelligence behind it
Does it even need to be to be useful?
A ChatGPT that is legitimately "intelligent" would basically be the El Dorado of tech. There's plenty of value between here and there.
The big issue IMO is people are going to ChatGPT's website explicitly and having certain high expectations. Or they use it via Bing expecting it to be a search engine answer machine. Once such an interface is more directly integrated into our lives in a clean way where people understand what it is and what the limitations are, it could easily be an app that basically everyone has installed, but more like a super-useful tool like Google Maps rather than replacing Google. That's still a big business and major development for the industry.
And even if the consumer daily-used mobile app thing or some high-visibility Google integration turns out to be less interesting/useful than predicted, there's still a million niche business and hobby use cases.
I think ChatGPT will get to us whether we want it or not. It has more obvious consumer applications, and I'm sure it has applications which have not been realized yet. The most obvious applications I can think of off the top of my head:
- Customer reviews on sites like Amazon can be gamed with significantly better accuracy, at significantly higher numbers. Presumably there will be a period where this is successful for marketers, and then eventually people will come to wholly distrust customer reviews altogether. This will also likely be the death of the "reddit" trick, where you can just query on reddit for real, live customer experiences with products.
- Significantly increased ads on anything like a social network, where a "user" might be talking about a product.
- Generally cheaper ability to write advertisements in a broad category of platforms.
- Sponsored misinformation (government, corporate, etc.) at scale.
To be clear, I think that ChatGPT is going to be a disaster for the internet.
Have you had a chance to use Bing Search with AI/ChatGPT? I have been using it for over two weeks and I am pleasantly surprised how practical and generally useful it is.
I pay for a premium OpenAI ChatGPT account, but except for explaining what code in unfamiliar programming languages does (including libraries it is calling), and occasionally having it write an email for me I don't get a lot of value out of it.
I do get a lot of value from the OpenAI APIs (as well as the offerings from Hugging Face).
It depends on what you expect of ChatGPT. A reliable human-like assistant? No. A reliable natural-language API to every IoT capable device? Absolutely.
For search, ChatGPT is the best out there. If I have a question I don't have to read the whole Wikipedia entry. ChatGPT just gives me the relevant part.
Oh, the horrors! Reading a whole Wikipedia entry! (Which is not what people trying to answer a question do anyhow; they look at headings and skim for the bit they want.)
I believe that gives an answer. But the right answer? Even Wikipedia doesn't guarantee that.
In general it seems like the things that have actually ended up changing the world significantly in my lifetime seemed insignificant when they were first introduced. The first time I was on the internet it seemed novel, but unimportant. When I created my first social media account, it was interesting, playful, and not going to change the world. The iPhone was a joke, who would want a phone that big?
The simple fact that people are predicting that GPT will somehow change the world makes me doubt its actual value.
Those are your personal reactions, but those were not common reactions at the time. In the early 1990s I was telling people the Internet was amazing and showing it off to them. Social media was a huge deal starting with SixDegrees. The iPhone was incredibly hyped at the time.
I think ChatGPT is worth doubting, but I think your examples need some work.
As a general critique of 2023 society, it's remarkable how much discourse is about AI becoming like humans, while there is little talk of how creative human roles are increasingly behaving like robots. Blockbuster Hollywood scripts for the most part seem to come straight out of South Park's Computerized Automatron.
> while there is little talk of how creative human roles are increasingly behaving like robots
There is, in fact, a vast amount of talk about that, and has been for decades. You might not see a lot of it on HN, but that's mostly because it's off-topic.
The FPS effect is tangible. The visual effect is subtle in a game made in 2016 before anyone in the mainstream was talking about this sort of thing, but imagine the possibilities for an indie developer without much funding. They can make their assets at a resolution that's reasonable for their budget, but that would look bad scaled up with traditional methods. They can offer higher resolutions at AAA quality without an AAA performance tuning and asset budget.
I'm dubious about the NSA use cases, and most of the US defense policy establishment seems to agree. Most of those types of use cases are pathologically fixated on provability and traceability of why an inference was made; a programmatic "I reckon" is largely useless in the context of anything important. The problems there remain processing complex data accurately and quickly.
See, I got that answer a lot with VR. And like you, they never asked whether I had used it, generally assuming that the only possible way I could be skeptical is through ignorance.
But as with both of those and plenty of other novel technology, whether or not I like it doesn't answer most of my questions, which are about what other people will use things for. E.g., I did not like YouTube, but I am just not a video person. So me using YouTube and saying "meh" wasn't informative, because I wasn't the target audience. Similarly, I was wowed by the modern VR experience, but being wowed didn't make me a long-term user, something apparently true for a lot of people.
I know several people that use it to save time composing letters and emails. They can dictate the important bits and let it structure the words. They love it. Its value is probably about right at $20/mo for a tool. It isn't the greatest thing ever, but it's useful for language composition. It's easy to alter the response and build upon it to correct anything that needs adjusting. They don't use it to give them the universal truth of the world.
I have heard this use case mentioned, and I get why lazy writers might want it. But I'm not seeing yet that it's good for readers. Can it bulk out actual points? Sure. As a reader, do I want the useful bits scattered among a bunch of fluff prose? Definitely not.
And that goes double for anything that is supposed to be meaningful human connection. E.g., after my mom died, I got a number of nice notes of condolence from people. I appreciated them. If any of them had felt machine-generated, that person would be dead to me.
So my general feeling about the composition case is that it is going to vastly enable the generation of documents that are, to one degree or another, bullshit. And I don't think that's a net positive for society. It's not like we had a bullshit shortage previously.
You ask for a real-world use case; it saves people time/energy and has value to them. You respond by saying it's bullshit and they're lazy, but the fact is there are a lot of communications like that in the world. Not everything needs to be a Pulitzer prize email that evokes monumental emotions in your soul. Sometimes you just need to convey some simple information in a general format.
Sure, but you haven't grappled with the heart of my point, whether this is better for the readers as well.
And this is a straw man:
> Pulitzer prize email that evokes monumental emotions
There's nothing wrong with a plain and simple email. Indeed, I think that's generally better unless the writer finds it worth the time to write something fancier.
I've never been excited about VR and 'blockchains'. I was, and still am, excited for Bitcoin, which in my opinion is the only useful application of blockchain. I use Bitcoin on a daily basis, and I have done so for over a decade.
As for ChatGPT, I'm actually using it and it enhances my productivity by a lot. So, I believe this isn't just hype.
I see it primarily as a form of transfering money, not storing money. I get paid overseas, convert it to bitcoin, transfer it to my wallet, then sell immediately.
How is this better than the kinds of transfers millions of other people use? With bitcoin you're converting twice via a very volatile pseudo-currency, so for most people the fees plus the risk mean other things are better.
Microsoft's implementation of ChatGPT in Edge (Dev) is weak. They have really neutered its ability and the length/number of queries you can ask it. It also tries to sprinkle in some web suggestions as well.
This is worrisome because I think mega corporations like Microsoft and Google want AI to just be a fun little tool for drafting emails and writing jokes and recipes. Any more powerful language constructions are too risky to their existing business models or too risky in terms of PR. People will just screenshot something the AI says philosophically and it will get retweeted and people will lose their mind.
I really hope an open source workable competitor to ChatGPT emerges soon.
We said the same thing, in one way or another, about indexing and searching the web -- privacy/anonymity, spam/seo, algorithms, bias, corporations.... So today one can build their own crawler and search engine with great open source software, yet a vast majority is querying a web index through third parties, as the computing power required (well, mostly storage and networking) remains brutal, and then there are all those nit-picky details about crawling one needs to painstakingly figure out.
Even if by Moore's law we get that great power in our own computers, enough to run a today's chatGPT clone, there will always be a commercial version that is 100x more awesome out there built by actors with immense compute power backed by rivers of corporate money.
Would love it, too. But at this stage locking it up behind a corporate paywall is probably better IMO - at least until governments around the world are prepared to address a tsunami of misinformation, fraud, and abuse that these systems enable.
This really does not sound like you would love it to be an open source workable competitor at all.
IMO it's not the task of governments to be prepared to address misinformation. It's an individual responsibility. This requires an educated civil society with the media literacy necessary to make a democracy work in the 21st century. It's an ongoing, aspirational project. ...and we can't wait until we think we're ready.
This would be great, if it were only remotely feasible that an individual could ever have the tools and training to counterbalance entire industries of giant disinformation-focused billionaire corporations and governments with thousands of highly trained disinformation professionals using modern tooling, well-funded research departments, and increasingly deep data to spread disinformation.
There is no amount of 'media literacy' training (which, in practice, would have to be created and administered by public schools under the watchful eye of some of the selfsame disinformation purveyors, in some states) that will ever be up to the task.
> There is no amount of 'media literacy' training (which, in practice, would have to be created and administered by public schools under the watchful eye of some of the selfsame disinformation purveyors, in some states) that will ever be up to the task.
Sometimes I feel that the skill level demanded for media literacy is set artificially high in order to force the conclusion that individuals are categorically incapable of judging their personal information environment. But that does not seem plausible to me.
I feel pretty savvy, and I wasn't trained in school. Almost no journalist or public figure has received formalized training in the way you describe.
And why would that be necessary, what would they teach beyond common sense? Do you have examples of large-scale disinformation campaigns that aren't sufficiently contextualized through regular mainstream media?
This really does not sound like you're familiar with the necessity of regulation at all.
Yes an individual should be media literate, just as corporations SHOULD practice fair competition and people should not invest their life savings in Ponzi schemes or kill one another. Unfortunately, maintaining a stable society in the real world requires governance. 16% of the world's population don't have electricity. How can you expect them to educate themselves about misinformation while technology beyond their comprehension is being weaponized against them?
This reply here is a great example of casually portraying something as factual which isn't much more than an opinion. It's not that opinionated replies are bad, it's bad when commenters use this to avoid actually reasoning or referencing anything substantial.
Just ask yourself: What could they mean by being familiar with "the necessity of regulation"? Like _any_ regulation? _Every_ regulation? The "necessity" of specific regulatory measures varies widely, depending on many different factors.
> Unfortunately, maintaining a stable society in the real world requires governance.
Yeah, I mean... how could someone not agree with the second part? ... but why would that be "unfortunate"?
Governance is such a broad human endeavor, even language is part of it, as are all the protocols, supply chains, standardizations etc. that make this conversation possible. Hardly unfortunate?
> Yes an individual should be media literate, just as corporations SHOULD practice fair competition and people should not invest their life savings in Ponzi schemes or kill one another.
I didn't say they, the people, should do something, I said we should do something. We should not, as in this debate, infantilize the population.
> 16% of the world's population don't have electricity. How can you expect them to educate themselves about misinformation while technology beyond their comprehension is being weaponized against them?
That's the same argument for why indigenous people can't govern themselves without Western education. I'm not going to go there, and I certainly don't see the threat you are painting.
Misinformation is nothing new at all, and if you argue that social media is making a difference: Not only is that controversial, it assumes electricity - in a completely uncontroversial way. People will deal with misinformation today using the same tools they used to deal with misinformation yesterday: reason, critical thinking, etc.
Do you think educated civil societies just spontaneously form, or is it because they are enabled through the government providing things like education (the best way to combat misinformation)?
Imagine if the French - or the Founding Fathers - had waited for their populace to reach some artificially high and fundamentally undefined level of education before instituting democracy on the idea that democracy presupposes education.
This would rightly be understood as a shallow hold-out. ... and they would probably still be waiting today.
Education is hard work and does not happen spontaneously. Good governance can facilitate education. But people should always be assumed to be capable of judging their informational environment independently of formal education. Not to do so would be deeply paternalistic and infantilizing.
Except you forget that when democracy was first unveiled only white, land-owning males could vote, because they supposed that they were educated and self-interested enough to make good decisions. The founding fathers were well known to not trust the public to make decisions, and that's WHY we (in the USA) have laws written up and argued over by the House and Senate and not by popular vote...
Yes, I know that argument. So I'm curious: Would you support the initial restriction to only white, land-owning males as reasonable? Do you think, for example, that US women's suffrage required women to be educated first?
And, as you seem to assume, this skepticism led to more indirect forms of democratic institutions: Would you argue against direct democratic institutions in the U.S. on the same basis?
I think this really misses my point, and also is just factually wrong as the other poster responded.
>But people should always be assumed to be capable of judging their informational environment independently of formal education. Not to do so would be deeply paternalistic and infantilizing.
That's not the point. The question was "what can be done to combat misinformation" and my response was "education" which in the democracies that I'm aware of, is generally provided by the gov't. You are talking about something else entirely, and a bit pejoratively as well, which is ironic because you are talking about education but apparently completely missed the point.
> The question was "what can be done to combat misinformation" and my response was "education"
No, the topic was whether the thesis that technologies like ChatGPT enable misinformation is a reason to keep the technology unfree.
And in response to that topic it was me, not you, who introduced education. You then just added a loaded question designed to hint at a preexisting government.
edit: I can't add further comments anymore due to HN limits, but notice how the person responding to me is changing the topic after I pointed out that they are misrepresenting the earlier discussion.
They wrote:
> The question was "what can be done to combat misinformation" and my response was "education"
But they weren't able to point at the comment where 1) this question supposedly emerged and 2) their response supposedly brought up education as a solution.
Preexisting gov't? Where do you live? Most people here live under a gov't - why are you being this obtuse. Bizarre that you think you are in a position to state what my intent was when my last response to you was that you missed my point. So much for making an effort to understand.
I know that some of the features were available later in some other social networks too, but the original premise of making groups (friends, family, coworkers, random people I met I don't know where,...) and posting selective content into selective groups was great.
Some good ideas and horrible execution. First of all the UI wasn't great, then you were forced to use your (back then) "real-name" Google account, and no third party clients.
Maybe it was different for people who live and breathe GMail and the google ecosystem but for me it was a walled garden from the start and I kinda hated the UI.
It actually was a great idea and had a good implementation, but Google doesn't seem capable of promoting/marketing/maintaining/growing a product, especially not something targeting the younger side of the B2C market like a social network. And then the monetization was pretty bad I guess, so it was orphaned and then killed.
Now it's TikTok, and these "first gen social networks" (like Facebook, Twitter, ...) are mostly a boomer playground; even Instagram is slowly getting to that list. Aside from the tech or the medium (video vs photo vs text vs ...) I think this will happen every few years when the youth receives friend requests from their parents on platform X - that's the peak/decline moment no matter how great it is.
Most of those social networks are responsible for their own demise...
Facebook was for looking at content from friends, and then it moved to mainstream media recommended content, ads, and every 20 or so posts, something from a person you actually knew.
People then switched to Instagram, where it was based around photos (and not "statuses"), and it was great... now it's 20 photos/videos of "suggested content" and one of someone you actually know.
Social networks should stay social, not become another cable tv provider with "recommended content" and ads.
G+ wasn't designed to be very usable, even ignoring the part where they had 0 users. It was a nerdy design seemingly trying to bring the fun of ACL systems to the public.
Btw, one lasting disconnect between most Google SWEs and the general public: many SWEs consider YouTube a social network today.
To be fair, Google copied this playbook whole-cloth from how Microsoft fought Netscape and survived its transition to the Internet - I was there when billg sent the famous memo. LLMs are obviously a sea-change and existential threat to search, and Google will be looking to survive the transition, even while its core ad serving/matching business has a moat, YouTube has a moat, Android and the App Store have moats, ...
I have my frustrations with PMs like everyone else, and this plan does seem over corrective and destined for failure. But I don't mind having PMs, all in all. I'm not an idea guy, I'm an implementation guy.
You’re going from OP’s: “I’m not an idea person” to “All PMs aren’t idea people”. This is such a fallacy. It just bothers me how easily some people fall into the trap of wrong generalizations.
My experience was different. I had product managers who tried to come up with new features in a very creative way, instead of just collecting requirements from the customers and passing those to developers.
A lot of people confuse Product Managers with Project Managers. In a typical tech company, the Product Managers are creative roles, come up with ideas, new features, and so on, and ProJect Managers help shepherd these features and projects through the software development process from planning through implementation, testing, and deployment. Totally different roles.
"A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects." - Heinlein
I do lots of different things, but at work I’m getting paid because I know how to make people’s ideas reality and not because I’m deciding what to build, which I’m honestly fairly indifferent to as long as I don’t think it’s outright unethical.
Or they may decide to rewrite the whole thing in Rust, and before that write a new framework to make sure that productivity is optimized, and write a project management tool for it first because the current one sucks.
Yes, all software engineers are actually complete idiots who, if left unsupervised, will walk around with their trousers on their heads bumping into walls.
That some people are arrogant or not good at their jobs isn't exactly a huge revelation. There are a lot of people who are not software developers who are also stupid and useless. But they are not the ones who get blamed if a software project fails.
That is indeed the feeling I get when looking at the tremendous overengineering displayed by the software stacks teams sometimes end up with. Why solve customer problems when you could instead play with new toys?
To be clear: I'm a software developer, and I feel this natural tendency too. I've gotten pretty good at suppressing it though.
Not all, but 1-2 people in a team of 10 can steer development into the rabbit hole of rewriting in Rust/re-frameworking/re-architecting. There should be someone who connects engineers to a reality that's driven by business needs and limited funding, where each line of code has a price tag.
At the other end of the scale not everything can be solved with the HN standby of bash scripts and a SQLite DB and things being slow or unstable isn't "technical debt" but an actual problem that can kill people. Not everyone who is working on non trivial solutions is doing it just for the sake of looking clever.
I worked for Google when the Google+ announcement went out and they tied our bonuses and other successes to social. I worked in Cloud, building a non-social product, and it was absolutely demoralizing. Guess what? Google+ crashed and burned, but now Google is doing cloud in a sort of semi-serious way, but still chasing after ML and not rewarding the hard-working cloud engineers. Really a messed up incentivization system.
But Google Plus succeeded - all the permanent employees got a special bonus because of Google’s success in social (first part is sarcastic, second is true).
I was at Google watching the tech all hands with my boss when they announced that "social multiplier" (bonuses tied to success of G+) was going to apply to everyone in the company.
I turned to my boss and said, shit, our bonuses are going to be fucked because G+ is going to flop.
My boss told me not to worry, since "they are never going to be able to admit that this was a failure".
Sure enough, management tried to bullshit everyone on how successful G+ was and we ended up getting a high multiplier for the "success".
Boq sells itself the wrong way. Yeah the "nodes" terminology is misleading and never explained well. The real point of Boq is to just run your stuff, and as a consequence you have to use its stupid code framework for some reason, even though those are totally unrelated things. But in the end, that only affects like 1% of the code, i.e. you wrap your stuff at the highest level with something that gives the dependencies instead of them coming from a "main" file. Those who can swallow their coding pride end up saving a ton of headache having something deal with all the production stuff for them.
Pod seems to only be the "just run your stuff" part on paper, which would be nice, only problem is it sucks. Every little thing you want to do is a huge codelab, and it's less supported. So people default to Boq instead.
What’s hilarious about the G+ rollout (in a highly dark comedy way) is that G+ was probably a superior offering to the competition. If Google had given it time and opportunity to fail, it would have succeeded, because they could then fix the minor issues people didn’t like.
Instead they forced it on everyone using any Google product, hurting the product itself, but also preventing the critical feedback loop that would have allowed G+ to improve, and become organically successful.
This is the #1 biggest mistake large companies make when trying to force their way into a new product area.
"We already have a huge user base, we'll just automatically sign them up for <new thing> and get it off to a running start". No, these users will hate you for it and in fact ruin the experience for the select few who are actually interested. Meanwhile executives will tout the rosy user growth metrics, declare the launch a success and collect their bonuses.
There is no substitute for old-fashioned organic growth. Start small. Focus on early power users and listen to their feedback. Have something unique that sets you apart from the competition and attracts more users. This is how Instagram, Snapchat, TikTok and the like were able to establish themselves in Facebook's presence while Google+ failed.
It's surprising they didn't learn from Gmail. For a while I was the most popular person in my circle of business colleagues because I had gmail invites. Artificial scarcity goes a lot further than forcing something on everyone in the end.
On the contrary, they did try the artificial scarcity thing with G+, but unlike Gmail, a social network isn't really useful if your friends aren't on it.
Gmail was able to leverage open email standards, so it wasn't constrained by network effects. Second, it was way ahead of competitors in terms of storage and overall product.
Google+ was competing head on with Facebook which had a huge leverage in terms of the network effect.
It doesn't work if the service needs the network effect. Google Wave crashed and burned right out of the gate because it was exclusive and it was exceedingly unlikely that all the people you wanted to collaborate with were able to get in.
It was! But forcing it on us made me hate not just Google+ but also YouTube and Chrome and Gmail. Suddenly I needed to worry that all of my YouTube comments would be published under my Real Name? And notifications for Google+ were showing up on my Chrome home screen? And it's pre-installed on my phone?
Google+ was pretty great. It had some really wonderful special interest communities, and it was really easy to separate my family identity from my tinkerer identity from my friends stuff. The whole circles and squares thing was a wonderful, front and center, privacy-focused metaphor, like some product people actually listened to what actual online community Ph.Ds were saying.
But the execs made everything worse repeatedly in an attempt to drive success metrics up.
Yeah. I liked most of G+, and I like the current state of unified login, but oh boy. Those early unified login things were horrifyingly poorly done. Buggy, ill-suited to how people used their systems; it was a disaster, and I think it's a big part of why G+ failed.
But the thing is, with a social networking site -- like a number of other products -- being objectively superior doesn't count for much if your friends are on the other one.
There is a way around this -- find an initial target audience (team gamers for Discord, students of particular colleges for Facebook), start out as a specialty site for those to get a nucleus of active users, and then start building out. But the kind of outreach and engagement (and, honestly, marketing savvy) required to make this strategy actually work doesn't seem to be "Googley".
(It also helps to find target communities not well-served by incumbents -- like, perhaps, the multiple communities who interacted poorly with Facebook's "real names" policy. But instead, Google leadership insisted on copying that policy, and in a particularly heavy-handed way, to the point that Google employees couldn't register accounts with nicknames that they were actually known by to colleagues and family alike; see e.g. https://www.firstpost.com/tech/news-analysis/why-googles-rea... )
Well, there I think part of the problem is that for the world's Googles, running a boutique social networking site for auto mechanics or whatever makes less sense -- they'll want to have mass appeal from the jump.
I found G+ quite compelling. It was pleasant to use, and I'm sure it could have grown into a platform of its own: a place for features that made sense in their time and place, with some kind of integration into other products. YouTube, for instance, is gaining some social features, like posts to followers and live chat. I think these would have been interesting areas in which to use G+.
But then again, I'm outside looking in; I can't speak to the costs they might have incurred letting things unfold at a more measured pace.
It is funny because today I got an email that my work account was given a YouTube handle and it had my full name in the profile there. Same company practices 10 years on.
I still have two fuckin' YouTube accounts for some reason, too. Some devices log me into one, and some log me into the other one. It's almost as bad as Microsoft accounts.
No, with G+ it was like they turned an ACL system into a social network. I even heard of someone using G+ circles for access control internally! And the G+ rollout redid how end-user login works.
It wasn't just the execution, the basic idea was bad. The few people this overly complicated thing would appeal to are probably not the kind to use social media either.
Facebook had lists (don't quote me on the name), which were very similar to Circles, without needing to be in the Sahara desert of content that was Google+.
The advantage of FB was also that you saw content, whereas on Google+ everyone would have it locked up and you couldn't see it, great for the less popular types.
Instagram only implemented "close friends" after years of a solid user base to research the usefulness of that feature. It's not that they couldn't think of it; ideas are cheap. A lot of it has to do with the stories feature, an innovation stolen from Snapchat that G+ predates. And G+ Circles was way more complicated.
Circles feature was also marketed the wrong way. The G+ ad showed a lady bad-mouthing her boss in a circle that doesn't include him. Who's gonna put that much faith in the separation? The entire point of a finsta is that it's not a supported feature, so you can believe that it's totally separate and also has a different name. You'd use the finsta to bad-mouth your boss, if you're a weirdo who friends their boss in the first place.
It was faster, had more features, and many of those were about your control. It was meaningfully better than FB in pretty much everything but active user numbers.
Content aside, it was way more cumbersome to use than Facebook. My friends and I tried it because one guy was a huge Google fanboy, and even he said, "this sucks, man."
The biggest problem with G+ was they replaced GChat, and Hangouts was emphatically slower than GChat. Maybe the G+ product itself was fast - the only part of the product I actually used was a huge downgrade from what they had before. I actually stopped using GChat for years as a result, just because it was so slow.
Also, I think Messenger may have been faster than Hangouts. It was not faster than GChat, but that didn't matter, because Google killed GChat, and that definitely drove me to use Messenger more.
* It made a lot of coddled high-energy employees continue to feel valued and engaged and not jump to a competitor.
* It forced users to unify accounts, making cookie joining and user data from different segments much more universally accessible to Ads.
* It brought design uniformity to G on the web, which is important for contrasting the Google brand with the otherwise fragmented Android space.
* Lasting revenue and user data from Google Photos and Drive.
Google plus was Google doing what it does best: solving Google problems. It never had a chance as a product, but they didn’t need for it to win. They needed to re-consolidate data input to Ads and defend against a talent drain.
The user account consolidation was pretty explicit. It might have been explained as necessary for UX consistency, but for Ads it was a huge way to whitewash otherwise verboten underwear-smelling.
G+ birthed the "create account / log in with Google" widget for non-Google websites, that could go alongside the "log in with FB" widget that was getting popular. Those widgets let Google / FB join your account cookies with whatever other ads get shown on the page.
G+ led to a dramatically changed privacy policy that significantly increased data sharing.
I imagine this is exactly what they put into their perf reviews at the time. Doesn’t mean it’s causal, or that those things wouldn’t have happened anyway, for a much lower cost.
Well that's kind of a brutal example, but I agree with the comparison. Indeed, G+ was successful for Google zealots and their bottom line. And in turn, to some extent, the shareholders.
I have seen articles claiming all sorts of "benefits" of the War, and I believe that was one of them. The coup of 1965, which massacred a million "Communists", was in there.
I suddenly want a listing of company KPIs that were graded successful, but were outright failures. I have personally witnessed my fair share of tire fires which were communicated in a rather fantastical way to management.
Every Google employee’s OKRs are supposedly readable by anyone in the company. I don’t know if the OKR scoring is also public, but regardless an enterprising individual could scrape other people’s OKRs every quarter and see which keywords are correlated with later promotions.
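As a rough illustration of the kind of analysis being suggested, here is a small Python sketch that correlates OKR keywords with later promotion. The data and the "lift" metric are entirely made up; the point is only the shape of the exercise, not any real Google data or tooling.

```python
# Hypothetical sketch: which OKR keywords show up more often for people who
# were later promoted? The example data below is invented.

from collections import Counter

okrs = [
    ("launch generative ai feature for search", True),   # (okr text, promoted?)
    ("reduce serving latency by 10 percent", False),
    ("migrate service to new ai infrastructure", True),
    ("fix flaky integration tests", False),
]

promoted_words, other_words = Counter(), Counter()
for text, promoted in okrs:
    target = promoted_words if promoted else other_words
    target.update(set(text.lower().split()))

n_promoted = sum(1 for _, p in okrs if p)
n_other = len(okrs) - n_promoted

# Crude "lift": how much more often a keyword appears in promoted people's OKRs.
for word in sorted(set(promoted_words) | set(other_words)):
    p_rate = promoted_words[word] / max(n_promoted, 1)
    o_rate = other_words[word] / max(n_other, 1)
    if p_rate > o_rate:
        print(f"{word}: {p_rate:.2f} (promoted) vs {o_rate:.2f} (others)")
```

A real version would obviously need far more data and some care about confounders; this is just the enterprising-individual version of the idea.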
This is the fundamental reason large organizations struggle in the long run. If a Cisco VP started a network generative-AI initiative tomorrow and posted some results in a year, do you think those results would ever be received badly? Or would they be massaged until they were good?
FB in 2005 was better in some ways, but iirc it didn't have pages, events, or communities yet, and those were truly useful features. I was a holdout until I went to college and saw how many clubs and stuff, even marketplaces (pre FB marketplace), were usefully organized on FB.
I could log into FB, browse the world-famous UC Berkeley meme page, RSVP for a comp sci talk, message my 8 group chats, sell my iClicker, buy a dining commons swipe, and message one of those "my sister needs a date for a party" ads all in one place.
Today’s episode is with David Lieb, the Director of Google Photos. Previously, he was the founder/CEO of Bump, an app that allowed users to swap contact information by physically bumping phones. Bump was acquired by Google in 2013, and formed the basis for the design of Google Photos, which launched in 2015 and passed the 1 billion users mark in 2019.
Internally, a ton of tech came out of Google+. Modern Google services and apps are basically using the G+ tech stack. It seems like that sped up the company significantly so that may be the "best thing" to come out of it.
Anyone who has been following the last 7 Google I/Os knows that "AI" has already taken a central place in the way Google markets itself to the outside world. So this time it's not a last-minute pivot.
previous additions of ai to google products seem to have been tempered by usefulness and tech maturity.
while controversial, predictive text in gmail is actually useful, is very conservative relative to chatGPT, and has been around for quite a while.
predicting what measurements you want out of analytics is handy, and gives accurate information
google docs/drive uses ai to surface relevant docs based on time of day, audience, and frequency of editing. it works remarkably well.
google photos is mainly an ai product, and it's all-around amazing
google translate is an ai product and it's gotten pretty good over the years, though it's not without competitors
google's speech recognition is very good, and built in all over google products, though I never use it b/c i don't like talking to computers
....
i feel like this push is likely to lead to messiness. strategy messiness, messiness in ux, messiness in terms of new ai products not being vetted the same way they were before - allowing more inaccurate results to be presented to users.
maybe users will be ok with it, idk. i've appreciated the ai enhancements to google products so far, i think they've done a good job being even-handed. I hope they manage to keep their hands on the wheel.
What part of it was any good? I remember a terrible UI and a strong connection between the Gmail account (which is the primary email for many people), social activity, and search/browse history. Google pushed something we didn't want, while at the same time killing things that many people really wanted and used (Wave and Reader).
> As for exactly what these forced AI integrations will look like, the report cites a recent YouTube feature that would let people virtually swap outfits
Is that really based on generative A.I. and large language models, though?
After reading the headline I thought to myself, "sounds like Ron Amadeo wrote that." Hard to find a tech journalist who has covered a company as long as he has and still takes every opportunity to lampoon them. The man is a treasure.
The article is a bit critical, in my opinion. The fact, per the article, is "a directive that all of its most important products—those with more than a billion users—must incorporate generative AI within months."
This is very different from "we have this new product called Google+, and every business must integrate with it." The first case (generative AI) is an open-ended directive, asking the products to leverage generative AI. The latter (Google+) case was a closed-ended instruction: integrate a solution without a clear problem.
A lot of the recent Google moves make more sense if you see them as something suggested or driven by an AI.... which gives credence to rumors that top execs rely heavily on AI to make decisions. Of course an AI would say that the most important thing to work on is an AI.
I remember the video screenshotted in the article. The one with the Bob stick figures fighting Google+, the same ones pasted in YT comments with UTF-8 art tanks. Bad times.
But I don't see what G+ has to do with Bard (for those unfamiliar, their answer to ChatGPT). Really a stretch.
According to the article, there is a top-down directive to integrate generative AI into products all across Google, much the same way everything got Google+ integration at one point.
I wouldn't compare the two like this. "Google+ playbook" makes me think, let's rework how our products integrate together in the user's eyes while also pivoting the company's focus. Google was already focused on AI, and Bard integration is probably more behind the scenes and less broad.
If anything, Google Assistant getting shoved onto everything a few years ago (even the wifi routers) was the classic Google+ move. And the Cloud push was bigger.
Just my personal opinion: I wish Google could make money on what I consider their very best services to be: GCP, YouTube, YouTube Music, and Play Books.
I have been done with their search service for quite a while.
One thing I wish they could improve is a consumer version of Google Workspace that did: 1) easily share purchased digital assets with your free Gmail account; 2) include the most excellent Cloud Search option (without the business/enterprise pricing); 3) include in-transit encryption for everything. Then charge about $10/month for that, and they would likely pick up a lot of consumer customers.
> While Google releases research papers, OpenAI releases products—and the company's generative chat AI OpenGPT has led to a stratospheric rise for OpenAI.
the author is really taking liberties here; hooking up an LLM to a chatbox is not a product, it's a technology demo
I mean, it's a thing they're charging money for, and people are paying it. So that sounds pretty product-like to me. But there's certainly something missing in terms of a demonstrated value proposition here.
Maybe for you/us as developers. For anyone in a human-to-human job whose main business is writing text for other people (not just emails), it's already indispensable.
Sure, but real estate writing was already nearly informationless sludge. I'll be impressed when ChatGPT can generate something I actually want to read, rather than something I feel like I'm wading through.
As an example, suppose I come in to your business, flash the gun in my waistband, and offer you a subscription to my protection service. You gulp nervously and pay. Has value been demonstrated?
Suppose I as a software developer create tools for ransomware gangs, who pay me handsomely for each new version. Have I created value?
Why is having an API relevant to it being a product? Twitter was and is a product without one. Google search is a product. Almost everything at Target or the grocery store is a product, and bananas don't have APIs. How many products are just a web interface? Discord, Slack, Amazon (not AWS)...
I want a faster horse, or at least one that isn't saddled [sic] with the weight of all the crap foisted on it. The winning formula may well be something like...
"OK Google, now repeat that search with all the SEO spam and content marketing cruft filtered out."
I mean, Google also does that a lot: Google Maps/Earth, Google Docs, Android, YouTube and Firebase (and likely a bunch of others I can't think of quickly) were all originally acquired.
Analytics, AdSense, Doubleclick, Blogger, Google Groups, Google Voice, Picasa, AdMob, Postini, recaptcha, Nest. AdWords was stolen from Overture, after their ridiculous idea of a monthly, flat fee for a keyword failed. Afaik, Gmail is the only successful Google product that wasn't an acquisition.
Oh yeah, duh: I normally lead with AdWords--as that's like their core business ;P--as the guy who founded DoubleClick lives right around here, but I forgot! Yeah: the reality is that Google really doesn't know how to start anything itself.
Vic Gundotra (the head of Google Plus) was the most hated executive in Google. There was even a popular meme of him in his sweater vest, where the caption would always be "If you don't like <x>, then you don't have to use <x>."
This interview of Conan, where Vic was the interviewer, was hilarious. Conan spent 20 minutes skewering him and then kicked him off the stage. I imagine a lot of Googlers enjoyed that.
Wow this is industrial strength cringe, like straight out of an episode of Silicon Valley. As a xoogler I knew about the meme and somewhat his rep, but can’t believe nobody showed this to me. Thanks! (I guess)
There was a scene from an internal meeting where an SVP rolled their eyes while Vic was talking about G+ and the real-name policy. Even others at Vic's level were not fans of him.