Hacker News
Google Meet’s new AI will be able to go to meetings for you (theverge.com)
46 points by andsoitis 9 months ago | 59 comments



This has been my big question with all of this generative AI stuff that will

- write emails

- go to meetings

- any other type of activity that involves sharing information.

Humanity didn't develop email, hold meetings, or communicate as some sort of ritual to a strange god (barring the meetings produced by the priests of agile to appease the Agile God). We communicate in order to share information from one person to another for a reason. So if we have a workflow where someone uses AI to generate an email, which results in a meeting that is then summarized by AI, and that summary is sent to an inbox that is then scanned and its contents summarized by AI, what was the point of any of it?

At no point did a human learn any new information. It was all just various LLMs generating symbols that changed nothing in the real world.

What is the end goal of this? Or is the hope that if we hook enough AIs up to each other, they will magically start creating and selling products, money will appear, and everyone will be happy?

At the end of the day the internet revolution was all about Information Technology. That information exists to be consumed, understood and acted upon by human beings, to change our understanding of the world. If we are just having AI talk to AI to talk to AI, it was all an exercise in pointlessness.


There's an entire superstructure of people who exist in jobs that might not be necessary or valuable (the jobs, not the people). The reports, the meetings, the evaluations, etc., are all constructed to fulfill the wants and needs of those jobs, and the jobs above them, and so on. Does any of it meaningfully inform the work that is being done, or the product that's being made?

I think what this tends to reveal is just how much of a house of cards the modern business world is.


It's simpler than that. Most jobs don't need to exist, but most people need jobs for a society to function.

There is a basic conflict, then, between business (which optimizes for cost) and government (which optimizes for stability).


So why then would private business not eliminate those unnecessary jobs in order to control cost? If they are set in their ways, then why don't they get outcompeted by more efficient companies that skip those vestigial roles?


The simple answer is that the people who would make those decisions are the people who hold those jobs. It's a classic instance of "It is difficult to get a man to understand something, when his salary depends on his not understanding it."

And it's systemic. The product of management is reports. The product of management is meetings. To them (and the people who ultimately run the company) these are the solutions. Have a meeting, write a report, do a review. Those are the tools they know. They have hammers. Everything is nails.


> The simple answer is that the people who would make those decisions are the people who hold those jobs.

We just went through a season of layoffs - I find it hard to believe that no one thought to get rid of the so-called bullshit jobs for greater efficiencies.

For argument's sake, let's say the conspiracy for bs jobs goes all the way to the top (C-suite, boards) in established companies: how about startups then? Where are the streamlined startups that are more efficient than incumbents simply by not creating those "unnecessary" roles?

I suspect that coordination problems necessitate these roles as companies grow larger; call it corporate carcinization if you will. If you were to start your streamlined org today, it would evolve into a crab-like form.


> how about startups then?

The business model of a startup is to work everyone you have beyond their natural limits; this doesn't scale to an entire economy.

I know a few startups that laid off most of the revenue drivers in sales and marketing, but kept expensive founders and coders. They are on paper leaner, but not necessarily more efficient.


Because it's a nonsense argument repeated ad nauseam by people in technical fields (and, to be fair, sometimes in other fields, but I see it more often in technical ones). Most of us don't understand the work being done by the people we claim don't understand our work; that's fine, it's normal. People on both sides have spent years and decades learning how to do their jobs, and it's unreasonable to expect someone in another field to fully understand yours.

The problem is that we often think that if we don't understand something, it must not be valuable; and I think this problem is exacerbated by high levels of education and/or intelligence, because we usually understand things, which must mean the things we don't intuitively understand are obviously not valuable.

The reality is, there is some bloat which gets introduced as businesses grow. But it's not as much as we often think.


They are (or soon will), and that will create massive problems for the stability of society at large (which is why government will have to intervene).


Obligatory plug for the book “Bullshit Jobs.” But what you’ve said hits the nail on the head.

What blows my mind is how much engineering effort we collectively put into consumerism and business process, without taking a step back to the design phase and asking ourselves if we need a new architecture.


Almost every meeting you attend, almost every email you read, is pointless or irrelevant to you. If an AI can do it for you, you probably should have never been involved to begin with. If all this AI accomplishes nothing but filtering out several hours a week of pointless bullshit, then "mission accomplished", IMO.


> Humanity didn't develop email, hold meetings, or communicate as some sort of ritual to a strange god

We do, however, have lots and lots of ritual around how those communications occur, in order to confer the proper respect and tone. In most cases it's pretty much noise, but you can't skip it, because skipping it means offense was intended. We can all recognize this, but we aren't really free to escape it.

Generating emails with AI lets a person avoid the appearance of intentional offense and summarizing emails with AI lets you cut to the point while acknowledging that you are dispensing with all of the ritual.

Who knows, maybe after long enough the cultural convention can shift to just sending your prompt straight to the other person!

Obviously I'm skipping over a ton of pitfalls but I think that captures the appeal for a lot of people.


If you want to learn how to be a digital hermit then that is probably not a bad idea. Directly interacting with humans means dealing with their humanity - and that needs practice.


Totally agreed, as I said, pitfalls. I don't think constantly generating slop with the real message embedded inside to send back and forth is the best way to do things, but it's the local maximum I think we are headed towards because of the incentives and social dynamics involved.


AI talking to AI to do things is not necessarily an exercise in futility.

This paper used GPT-3.5, not even 4, and is basically a couple of AI agents collaborating to create software.

https://arxiv.org/abs/2307.07924

This is another example of multi-agent AI collaboration.

https://arxiv.org/abs/2307.02485

Getting relevant, intelligent text in response to arbitrary previous text means generating text capable of making decisions. That is the end game here. Your machine can think for you now.
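
To make that concrete, here is a minimal toy sketch (my own, not the setup from either paper) of two role-playing agents passing messages back and forth through an OpenAI-style chat API; the roles, model name, task, and turn count are all arbitrary assumptions.

    # Toy two-agent collaboration loop (hypothetical roles, not the papers' actual pipeline).
    # Assumes the openai>=1.0 Python client and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    def ask(system_role: str, transcript: list[str]) -> str:
        """Get one agent's next message, given the conversation so far."""
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",  # a 3.5-class model, as in the first paper
            messages=[{"role": "system", "content": system_role},
                      {"role": "user", "content": "\n".join(transcript)}],
        )
        return resp.choices[0].message.content

    transcript = ["Task: write a command-line tic-tac-toe game in Python."]
    for _ in range(3):  # a few alternating turns; real systems loop until a stop condition
        transcript.append("Designer: " + ask("You are the designer. Refine the requirements.", transcript))
        transcript.append("Programmer: " + ask("You are the programmer. Write or revise the code.", transcript))

    print("\n\n".join(transcript))

No human reads the intermediate turns; the point is only that the text each agent emits steers what the other does next.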


That's a good point. Still, I think that's different from an AI agent attending a meeting "for you" as a surrogate. If I were in a meeting where someone sent their personal AI assistant instead of attending, I would be insulted. I'd much rather someone send me a note explaining why they can't be there and letting me know anything they want from those of us in the meeting.


Well, if I understand it correctly, right now for Duet the personal AI assistant acts only on the receiving end, for the person missing the meeting. You can talk to the AI about the meeting, but the AI won't talk to the attendees (it only shares generated notes).


This reminds me of the book that algorithms priced at $23,698,655.93 (plus $3.99 shipping).

https://www.wired.com/2011/04/amazon-flies-24-million/


> Humanity didn't develop email, hold meetings, or communicate as some sort of ritual to a strange god (barring the meetings produced by the priests of agile to appease the Agile God). We communicate in order to share information from one person to another for a reason. So if we have a workflow where someone uses AI to generate an email, which results in a meeting that is then summarized by AI, and that summary is sent to an inbox that is then scanned and its contents summarized by AI, what was the point of any of it?

Because we didn't have a machine that could do this whole collective decision-making process. What if we make an AI that's powerful enough to replace a whole organization? A whole company? A whole industry?


One imagines an AI replacing an engineering company would not hold meetings; it'd produce plans, schematics, and software.

I wouldn't expect the robot manager to have a robot secretary who complains to the RR department when the manager makes inappropriate remarks.


Yeah, but I'm not talking about Skynet at this point. Maybe when the AI gets more capable we'll get there, but I don't imagine that ChatGPT is going to replace most of the engineers on my team right now.


It's not even Skynet. Imagine if, in a few years, we can make a big LLM that reads and sends emails in place of an organization and can make decisions. I don't think that's that far off.


For a really fun exploration of this, read Avogadro Corp by William Hertling.


Thanks for the rec!


I had similar thoughts when I read a comment from a (non-technical) project manager in a discussion a few months ago. He said something like: he uses ChatGPT to summarize the long-winded emails of their developer. (He worked for, or owned, some kind of agency where software developers apparently weren't really the focus.)

He also mentioned writing emails with ChatGPT (though I can't remember if these were for communicating with the devs, but it doesn't really matter). I remember thinking: well, you, my friend, are pretty ignorant and not someone I'd like to work with. Unethical, even.

So someone puts in the work to inform you about important details of their work and thought process, and then this guy just ignores the whole thing. Even if the developer in question is really sharing more details than needed, the right thing to do is to actually manage them and teach them to share less and make their own decisions. (Hard to tell from the outside, of course, and the manager in question might be wrong too.)

Now if someone also blows up their own emails with generated details, especially toward the same person whose emails they have compressed with the AI, then besides robbing others of their time, they may successfully deceive others and themselves, pretending to have knowledge they don't have. And that will hurt the decisions made by them and by others. Basically, the shared reality that you want to create with communication will not be a shared reality but out-of-sync views, and you'll end up with a lot more "oh, but when I said that I thought you meant X...."


You raise a good point, and it reminds me of a sort of "Prisoner's Dilemma" (sender uses/avoids LLM verbosification and/or recipient filters all emails through LLM summarization).

The worst-case scenario is that human communication becomes completely routed through this LLM-in-the-middle.


> everyone will be happy?

Shareholders will be happy the day a company produces a new product and no human ever worked on its development.

All of the profits, none of those pesky workers demanding silly things such as payments and OSHA protections.


> Humanity didn't develop email, hold meetings, or communicate as some sort of ritual to a strange god

Yeah, you say this like it's obvious, but it's quite an honest and productive line of inquiry.

But whatever the history of meetings is, it's undeniable that a lot of the current ones are merely ritualistic.

And yes, the one sure way to deal with ritualistic meetings is to not attend them. I am nearly convinced that this headline is an attempt at modern art rather than objective information, even though the application looks like something with real goals.


I can't help but perceive the smell of a solution in search of a problem with all those ‘things that “AI” can do for you’.


We developed that stuff because we were paid to.

We were paid to make and use those technologies because we were taught that’s how society works.

Fiat economics is not solving human problems. It’s propagating fiat economics.

Humans have no divine purpose. The universe does not force social norms on us. We do that to each other.

You exist to pass the butt… dollars around. Fiat economics has filled the biological hole that religion stumbled upon, and probably helped evolve.

The only reason for any of this is deference to those who came before.


There is too much information flowing between humans right now. AI can help filter it.


LLMs consistently show the most promise for work that really has very little value-add, and I wonder, given the economics alone, whether this will guide behavior towards not even bothering with those applications. Starting with cover letters, and then arbitrary content generation. Some companies have already started doing this: if a meeting is not really productive, you don't need to go, or you can feel free to leave. I'm not sure how many people would even read follow-up notes; if it's free, I guess it's another thing people won't use.


It's definitely a weird experience when people use LLMs to generate some text and then the recipient uses LLMs to summarize that text. I also feel similarly with the office suite AI tools - if the AI can generate the presentation, do we even need the presentation at all? Do we need the people who watch the presentation and make decisions?


Just to point out the absurdity of this:

If one day everyone sends an AI to attend a meeting, will all these AIs (which are all instances of the SAME platform!) still have to act out a human meeting...?

Reminds me of the "Electric Monk" from Douglas Adams' Dirk Gently novels, an appliance whose job it is to believe in things on behalf of its owner...


> If one day everyone sends an AI to attend a meeting...

I think this is the absurd part. I can see an obvious use for some meetings, for example those where lots of people are invited 'just in case'. You could get the AI-generated summary and see if anything relevant came up, then follow up if it did.


I would still prefer the meeting organizer to write the meeting notes and action items, instead of assuming "my AI" has captured all topics and decisions and, even worse, then expecting me to take on assignments without ever explicitly requesting those resources...


The headline is way overselling it. It’s basically allowing you to decline a meeting and leave a note (questions, comments) that will pop up on people’s screens during the meeting.


It's interesting how conflicted I am about this. On one hand, having an assistant attend a meeting to report back the distilled ~2 minutes that are useful has a strong value proposition. On the other hand, why isn't the meeting lead taking care of forwarding that distilled information themselves?

I feel like this says more about a somewhat bad meeting culture that a lot of us partake in and less about the inevitable rise of AI.


Why have an assistant attend the meeting at all if they aren't directly participating? You could distill from a recording, or have your assistant distill from a recording, or maybe someday have an AI distill from the meeting based on your interests.

I get that the value of being in a meeting is popping in when necessary (to ask a question, clarify something from your team, etc.).

Many meetings could also just be done more efficiently as chat discussions.


> why isn't the meeting lead taking care of forwarding that distilled information themselves

To me what this says is that devs like having their thinking done for them and the requirements spoon fed, instead of having to actually make decisions.


You don't need "an AI" for that - you can just use any of the dozens of conversation intelligence (CI) tools. Most of them can do auto-summarization and auto-highlights.

Although looked at that way, this is really Google's entry into the CI space.


If you go to so many meetings that you can't remember what happened, you need to stop hoarding your legos and focus. Let others take some of the responsibility. Delegate.

But, like, actually delegate. Let others make decisions. Trust that they know more about an area than you do. Just because you've read their notes doesn't mean you're an expert.

Trust your team.


This is a hack for poor corporate culture where there is no autonomy or trust to shape the communications channels based on efficiency, which is shockingly common.

If you don't feel comfortable enough to say "This could be an email thread instead" or "Is this meeting really necessary?", then this is probably a great fit for your org, which will eat it up as "innovation."


It’s unclear what this is. It sounds like it might just be a way to automatically generate meeting notes for people who couldn’t attend? That could be useful for the kind of meeting where you’re just an audience member, not a participant.

But here’s a more science fiction spin on it:

Instead of thinking of this as “going to a meeting” think of it as “representing you.” That is, maybe it can try to answer questions on your behalf and it can give you a summary of what happened afterwards. Everyone will know it’s a representative, it can’t commit to anything, and it might not be entirely accurate, but maybe it’s better than not being able to ask you questions at all? (For example, when someone’s on vacation.)

I don’t expect that sending a person to represent your interests is common outside of maybe upper management and of course the legal profession, but perhaps it’s useful. It might be better than sitting through an entire meeting just in case someone might want to ask you a question?

I wonder, though, whether it would be better to just make a chatbot available for people to talk to at any time without it being a formal meeting. Also, what information does your representative actually use to answer questions?

Maybe a chatbot could be a way for people to make their semi-organized work notes available to others? Or make a vacation auto-responder that’s trained on a subset of your outgoing email.
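
As a rough illustration of the "notes bot" idea (purely hypothetical, not any existing product): index the notes and retrieve the one most relevant to a question; a real bot would then hand that text to an LLM to phrase an answer.

    # Minimal retrieval sketch over a hypothetical folder of plain-text work notes.
    # Assumes scikit-learn is installed; the folder name and question are made up.
    from pathlib import Path
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    notes = [p.read_text() for p in Path("work_notes").glob("*.txt")]

    vectorizer = TfidfVectorizer(stop_words="english")
    note_vectors = vectorizer.fit_transform(notes)

    def most_relevant_note(question: str) -> str:
        """Return the note whose TF-IDF vector is closest to the question."""
        q = vectorizer.transform([question])
        return notes[cosine_similarity(q, note_vectors).argmax()]

    print(most_relevant_note("What did we decide about the launch date?"))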

Imagine what happens after someone leaves. “Sorry, she’s not available, but you can chat with her bot.”


So one day all attendees will be AIs, and the platform will generate the meeting notes straight from the meeting agenda without pretending to hold a meeting.

We won't even need a calendar then to schedule it, you just write some text and press a button to have a machine agree/disagree with you.

Soon people will wonder why this process is even called a "Meeting"...

I can almost hear the AI-bubble inflating already...


With a bunch of these AI tools, at some point some folks will still have to do the actual work. Someone has to attend meetings, get an idea off of a random comment the AI won't consider important enough to add to the meeting notes, and then come up with a way to launch it to market without AI, because that data set doesn't exist yet. Those who can do that work will be rewarded beyond their wildest dreams, and many, many jobs will be lost. The gap between haves and have-nots will increase even further. What's worse, those who are using AI to do creative work are actually atrophying their own capacity to do it.


This sentiment stands at odds with centuries of evidence.


Is there evidence from centuries of AI revolutions to support this? What makes you think it's the same? I don't see how it's comparable. History shows that low-skilled jobs were typically replaced with similar low-skilled jobs, such as transitioning from farming to factory work. However, we are now entering an era where we can potentially replace most low-skilled jobs. The question is, what will happen to the hundreds of millions of people who will be displaced? Without a significant revolution in the economic model, it may not be possible to address this issue. What will a 50-year-old truck driver do? Go to college and learn software engineering? We know that this is not feasible in the majority of cases.


A 50-year-old truck driver will become a 60-year-old truck driver.

A 20-year-old truck driver will go to trade school and become a mechanic.

As much as we like to think of ourselves as special, this has all happened before and will all happen again.


Too much evidence to even begin to enumerate, presumably.


Between 1900 and 1970, agricultural jobs went from 35% to 5% of U.S. jobs.

One out of three jobs lost in a single lifetime.

During the same time, the standard of living skyrocketed.


This appears to be a misguided application of AI, essentially operating under the premise that "meetings are useless, so let's automate them." However, the value in meetings often lies in the consensus-building among participants, especially when it comes to action items, ownership, and deadlines. Simply replacing this process with AI risks losing that collaborative nuance and the reason why you need the meeting.

For example, meeting notes are notes all participants agree on (action items, owners, dates, etc.).


Soon all this can be replaced with a single platform spawning an AI that will pretend to have a human meeting with other iterations of itself, ultimately providing the most probable reaction given the input data and finishing with meeting notes sent to all.

While human attendees remain, this will still be very compute-intensive, but as soon as all the attendees are AIs it will only take a split second to generate meeting notes straight from the agenda! ;)


I’ve been doing this for a while, using a bot that attends meetings for me, records and transcribes it, and sends me a summary. I’ve seen other people do this too with meetings I do attend.

These are big-audience meetings usually with one person presenting and then a Q&A.

I read the summary and then if it looks interesting I listen to the meeting when it’s convenient.

(Not gonna shill for the app: it’s just one of the ones in the Zoom “store” — there are several).
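
For anyone who wants to roll their own instead, a bare-bones version of the same record-transcribe-summarize flow might look like the sketch below (assuming the open-source openai-whisper package, an OpenAI-style chat API with OPENAI_API_KEY set, and a local recording file; a long transcript would need chunking first).

    # DIY sketch: transcribe a meeting recording, then summarize the transcript.
    # "meeting.mp3" is a placeholder path, not a real file from any particular app.
    import whisper
    from openai import OpenAI

    transcript = whisper.load_model("base").transcribe("meeting.mp3")["text"]

    client = OpenAI()
    summary = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do
        messages=[
            {"role": "system", "content": "Summarize this meeting transcript as bullet points: "
                                          "decisions, owners, and action items."},
            {"role": "user", "content": transcript},
        ],
    ).choices[0].message.content

    print(summary)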


This would be like the notes you take at a meeting talking back at you.

I think we humans do just fine when we can't attend meetings. Most are unnecessary anyway.


This makes me miss all the crypto spam. At least crypto applications sounded cool, even if they were just snake oil (helping artists get funding, etc.). AI's applications are just horrifyingly dystopic, between this, AI art, AI girlfriends, etc.


I can think of a use case for me: sometimes I get invited as an optional participant. Most of the time, I don't attend. If I could receive a summary focused on my area, covering the decisions that affect or depend on my end, that would be nice.


Does this remind anyone else of the movie "Real Genius"? https://www.youtube.com/watch?v=wB1X4o-MV6o


R.I.P Tactiq.io



