Hacker News
Microsoft 365 Copilot – your copilot for work (microsoft.com)
356 points by benryon 6 months ago | 334 comments



>Currently in testing with select commercial customers…


We are currently testing Microsoft 365 Copilot with 20 customers, including 8 in Fortune 500 enterprises. We will be expanding these previews to customers more broadly in the coming months and will share more on new controls for IT admins so that they can plan with confidence to enable Copilot across their organizations.

https://www.microsoft.com/en-us/microsoft-365/blog/2023/03/1...


This. I just wasted 5 minutes logging into the Office 365 web edition looking for Copilot. I wish they had been clearer that this is not a general rollout.


Amazing! I keep banging on about how Microsoft is building the business lock-in of the next decade, and this is a sign of it. Put down your snubs about Clippy and frothing about "EEE" and look at that announcement:

> "Business Chat works across the LLM, the Microsoft 365 apps, and your data — your calendar, emails, chats, documents, meetings and contacts — to do things you’ve never been able to do before. You can give it natural language prompts like “Tell my team how we updated the product strategy,” and it will generate a status update based on the morning’s meetings, emails and chat threads."

> "It creates a new knowledge model for every organization — harnessing the massive reservoir of data and insights that lies largely inaccessible and untapped today. Business Chat works across all your business data and apps"

> "Copilot LLMs are not trained on your tenant data or your prompts. Within your tenant, our time-tested permissioning model ensures that data won’t leak across user groups. And on an individual level, Copilot presents only data you can access using the same technology that we’ve been using for years to secure customer data."

That's amazing, natural search, summary over all your company SharePoint and emails and Teams chats. You aren't going to commercially compete with this with an IRC server, IMAP and a copy of LibreOffice. Something DropBox could have been doing with their massive trove of business documents? Something Slack could have been doing with their massive haul of chat history?


> Tell my team how we updated the product strategy,” and it will generate a status update based on the morning’s meetings, emails and chat threads."

This sounds to me like a significant competitive disadvantage. I'm torn between joy that this makes it easier for small companies to compete by doing novel things like, uh, talking to each other and horror that my own employer might embrace this with enthusiasm.

It's an interesting thought that usually IP-paranoid companies might cheerfully stream absolutely all their internal documentation to Microsoft in order to leverage this synergy. I think I'll buy some M$ stock.


Many of those IP-Paranoid companies are already using Sharepoint and Office 365 (or Google Docs) to store all that documentation.


That also seems wild to me. It doesn't even seem to be questioned whether giving Microsoft a copy of every email you send is a good idea. Even when the emails are about contract terms with Microsoft, the conflict of interest doesn't seem to worry anyone.

This whole announcement is "look how good we are at reading your internal documents" and I still don't think it's going to register on the threat model.

Fortunately we can definitely trust Microsoft not to do anything unethical in the name of profits.

P.s. I had a totally unsuccessful conversation with a security person who believed that the AI powered network scanning stack they used kept them safe. Secret software stack running CI on AWS, email through Microsoft. All the IP they cared about literally stored outside their fortified local network.


IANAL but aren't MSFT acting in the role of "data processor" (per GDPR) and so are limited in terms of what they can do with customer data?


Because the convenience of SharePoint and O365 is a huge productivity boost compared to having everything self-hosted on-prem. Or, put the other way around: having everything on-prem, outside the cloud, kneecaps your productivity.

Case in point, I currently work in a relatively niche cybersecurity company, so naturally, most customer stuff is done and stored on-prem for security reasons since we handle a lot of customer sensitive data and storing it in anyone's cloud is a big no-go for any customer.

By far the biggest issue is managing the on-prem environment and moving data back and forth between it and the O365 environment: O365 is actively used by customers, the C-suite, business, sales and upper management, while the on-prem environment is used by engineers and lower and middle managers. It is the biggest PITA I have ever dealt with.

Nextcloud, Thunderbird, Libre Office, Element chat etc. are great but what Microsoft offers with the O365/sharepoint ecosystem is something else and copilot will move the goalposts even further.

Also, I'm not aware of Microsoft having had any data leaks or breaches, so maybe they have gained enough trust in the corporate space to put companies at ease storing all their stuff in Microsoft's cloud for the sake of convenience.


> Something DropBox could have been doing with their massive trove of business documents? Something Slack could have been doing with their massive haul of chat history?

Absolute truth. Discoverability in modern apps is a sad joke at the expense of the users. I guess "search that actually works" isn't enough of a value prop to actually get dev time.


Correct. There was a company called use.fyi (by the founder of KISSmetrics) that did only search; they eventually pivoted to data security after one of their users used the tool to figure out which employee was accessing what data, and manually blocked an employee who had left the company but somehow still had accidental access to files.


Curious how well this works if Teams has a short window for chat retention. If it has still learned from those chats and keeps the information, but puts it out of reach of discovery, that would be kind of amazing.


Clippy just got hench.


I don't remember a single time when people supposedly said that EEE wasn't still being done.

There are just countless examples of people misattributing everything to EEE. VS Code is a recent example of Microsoft actually still doing EEE: first creating the editor and destroying a lot of the IDE/editor ecosystem, just to eventually move the important bits into non-free plugins, making the free VSCodium build basically pointless.

But that doesn't make M$ buying GitHub into EEE for example.


> "destroying a lot of the IDE/editor ecosystem"

How is Microsoft releasing VS Code "destroying the editor ecosystem" but releasing Vim for free isn't and JetBrains charging for their IDEs isn't? This isn't "EEE" this is whining that Microsoft is competing by making software people want and that's unfair, apparently.

> "just to eventually remove the important bits into non-free plugins, making the free version of codium basically pointless."

Look at what isn't open source: https://github.com/microsoft/vscode/wiki/Differences-between...

Microsoft logos and trademarks, integration with a marketplace which Microsoft run, a list of extensions Microsoft have reviewed and recommend (not the extensions, just the list), just enough of the Remote Development feature to handle connecting to Microsoft's backend for it, Telemetry, and autoupdate from Microsoft servers. Arguing those are "the important bits" is a stretch.

The C#/.NET debugger maybe, but you say that was "eventually removed into non-free plugin" - was it ever free? It doesn't sound like it. Also VS Code was popularised by JavaScript and TypeScript not C#, that's always been the province of Visual Studio full edition.


I consider the MS language servers pretty much the most important bit of VS Code's IDE functionality, as they provide autocompletion, lookup, refactoring, etc., which is the feature I was specifically thinking of here. They were originally all open source, but aren't anymore.

They're still free (as in beer), but their licence only allows usage with the official Visual Studio Code distribution.


The thing is that those extensions are Microsoft's, and as problematic as some people consider the situation to be, Microsoft has all the right to limit the licence for these extensions.

The inconvenient answer: want another C# extension? Tough luck, write it yourself. This is essentially what LLVM did for C/C++ linting, and produced Clangd, and many people now consider it to be superior to the Microsoft C/C++ linter.


Nobody is arguing that they don’t have the right to. This is such an American argument of last resort I have no idea how we always end up here.

I’m not nearly as anti-modern-Microsoft as some of the people here, but even I know that “they have the right to” is moving the goalposts.


Mono C# used LLVM but Microsoft thought their language services were better, so mono LLVM is EOL.


https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguis...

(Don't mind me, just the formerly confused passing through.)


THANK YOU


EEE means screwing over a competitor's product. This is not that, this is a significant upgrade to their own product. Calling this EEE means you have an M$ sized axe to grind.


It actually kind of directly links to Microsoft's strategy, because Atom was the original Electron editor, and Atom was created by GitHub, which was purchased by Microsoft, after which Atom was ~extinguished~ deprecated.

https://en.wikipedia.org/wiki/Atom_(text_editor)


> "after which Atom was ~extinguished~ deprecated."

That isn't what 'extinguished' meant! It was supposed to be that you and Oracle exchanged email over open SMTP, Microsoft wedged some new features into Exchange's SMTP, enough Microsoft customers came to rely on them that Oracle rolled over to MS-SMTP, and then you practically couldn't use plain standard SMTP because everyone you wanted to email was using proprietary Microsoft SMTP.

To trivialise that by turning it into "Microsoft stopped developing a thing and I can't be bothered to fork it and develop it myself" is, well, to turn a potentially real issue (from twenty-five years ago, which never happened) into empty whining.


I guess as an end-user, I fail to see the difference. And it wasn't like Microsoft started developing Atom and then stopped it, they bought the company that started developing it, and shut it down in favor of their own product. And blaming it on the end-user because they fail to pick up a massive product like Atom and develop it themselves seems disingenuous at best.


One difference is that you can continue using your old Atom install, and if it had been "extinguished", you couldn't[1]. The other difference is that EEE was largely about interoperable protocols, not what one person could do with one program - it was about TCP/IP and SMTP and HTML and trying to take those over to Microsoft's benefit[2].

> "And blaming it on the end-user because they fail to pick up a massive product like Atom and develop it themselves seems disingenuous at best."

Yes, welcome to the elephant in the room of Open Source, 'many eyes make bugs shallow', and the other pretenses we don't question. But at least the principle stands - you can still get the Atom source code[3] and pay someone as a contractor to change it for you, employ some people to work on it, start up a foundation, beg someone. Compared to a closed source program built on patented algorithms when the company shuts down and nobody can practically or legally improve it, or compared to a multiplayer game with online servers which the company shuts down and you can't use the game ever again, it is different.

[1] If you think it's impossible for Microsoft to reach into your Linux machine and take your Atom editor away, you'll see why EEE is a moral-panic rather than a real thing which could ever happen.

[2] you know, like Google actually does with HTML/Chrome strong-arming through Google-custom extensions, AMP pages, and the like. You'll see that this still hasn't resulted in other people being unable to make browsers or webservers or web-like protocols (Gemini).

[3] https://github.com/atom/atom


Netscape Navigator was also open sourced, so I am confused what a project being open source has to do with Microsoft's business strategy. We factually know that Netscape's downfall was a result of the EEE strategy [0]. Just because Netscape rushed to open source Netscape Navigator when they saw their market share falling doesn't mean that Microsoft didn't employ EEE. If Microsoft had bought Netscape once Netscape was losing money hand over fist, it still wouldn't mean that Microsoft hadn't used EEE to help bring Netscape to that point. Just because Microsoft no longer has the majority mindshare of the browser market, should we ignore that event, or say it was ineffective, when it was very effective and very profitable for many years?

It is just a business strategy, that's all. And it's one that Microsoft employed successfully, and which people who join likely learn of or know of, and some of whom try to emulate to be successful at Microsoft now. And many of us don't like the effect EEE has on our industry, when Microsoft or any other entity does it. And just because Atom may not fit the strategy exactly, doesn't mean the unmistakable parallels which I already mentioned should be ignored. I personally used Atom, but more than that, I liked the world more when there was more than just one Electron based editor competing out there in the world, and we are seeing the fruits of VSCode winning so completely (both good and bad), whether through aggressive corporate strategies or not.

[0] https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguis...


> "Microsoft's strategy to "kill HTML by extending it""

I'm still using HTML today. It hasn't been extinguished because it still exists. It's like complaining that someone murdered your dog, and I point out that your dog is still alive and you say "I'm confused what that has to do with anything".

Extinguish: "To put an end to or make extinct; destroy: synonym: annihilate."

You can still use Atom, it isn't extinct, destroyed or annihilated, and that's related to it being Open Source because there is no way for anyone to annihilate it when you can have your own copy of the source.

Compare that to the proposed "kill HTML by extending it" where, if webservers only served compiled ActiveX and you could only view that on Windows with IE and IE wouldn't render HTML, no amount of 'having the source' would let you use HTML to make web pages your customers could view. You'd put HTML on your webserver and your customers couldn't view it. You'd look at the IE source code and you couldn't distribute it. HTML would be killed as an interoperable standard. It's a difference in kind.

> "It is just a business strategy, that's all."

> "We factually know that Netscape's downfall was a result of the EEE strategy [0]"

We know that Microsoft made a product good enough that nobody bothered to install Netscape. If that's all EEE means it wouldn't have been an attempt to kill HTML. From your link "Extinguish: When extensions become a de facto standard because of their dominant market share, they marginalize competitors that do not or cannot support the new extensions." - is anyone seriously saying it was ActiveX which people loved and that's why they dropped Netscape? I don't remember ActiveX playing much of a part in the internet.


It played a pretty large part in corporate intranets, which certainly controlled a lot of users. Most people weren't using more than one browser.


> I don't remember ActiveX playing much of a part in the internet.

The large company I work for still mandates IE11. Why do you think that is? If your corporate strategy gets you 20-30 years of vendor lock-in, then I would say it was a successful strategy.


Sigh. We all know what Atom is. This isn’t the same thing.


How is it different?


Shipping at big companies can often be surprisingly hard so it's impressive to see how fast Microsoft is moving with AI integration into all their products.


It is also impressive how slowly they deliver the option of ungrouping taskbar icons, which the masses have been requesting for years now, and which actually worked before. I know, I know, that is a much harder problem than AI. Maybe they need to improve AI much more first so it can have superhuman powers, finally solving the taskbar ungrouping problem for humanity.


Maybe it's not actually being requested by masses and instead just a vocal minority that you happen to be a part of? Or maybe Microsoft already calculated that fixing it won't bring much if any more money than not and therefore that it's not worth it.

Just because something that you want fixed isn't done does not mean they're "slow". If it hasn't been included after years of being "requested by masses", then it's more likely it just won't be included period.


There is even a paid app doing that now, beyond the free ones! That's how rare this problem is. :D You actually run into it daily, hourly if you use two or more windows of the same app.

Just because you are a hobbyist user and never ran into this productivity problem, it doesn't mean the professional population does not suffer from it (see, I can make unfounded assumptions about the other end too ;) ). Everyone around me has this problem, with various levels of severity. Google the problem, like I did for a prolonged time while searching for a good answer, and it will bring up masses of troubled people.

> maybe Microsoft already calculated that fixing it won't bring much if any more money than not and therefore that it's not worth it

Can you also offer an educated deliberation about the calculations that justified removing this feature, which existed before? ;)


> There is even a paid app doing that now

Does that somehow change prioritization? If anything, it would lower the priority because there's a workaround people can use.

> All around me have this problem, with various level of seriousity. Google the problem, like I did for prolonged time searching for a good answer, it will bring up masses of troubled people.

If it was as large of a problem as you claim, with as wide an impacted userbase, it would have already been fixed. Yet here we are.

All you've proven is that there is a vocal minority. Which I already said.

At the scale of Microsoft and Windows (1.4 billion active users), for every person complaining about a specific problem, there are literally millions of users who aren't. Just because it is a problem for you does not mean it's automatically a big enough problem to address. If it isn't going to actively lose users (which it clearly isn't, considering people wrote apps to stay on Windows and have this behavior), then it isn't worth it.

> Can you also have an educated deliberation about the calculations justified the removal of this feature existed before? ;)

I'm not a seer, but it's plainly obvious from a software prioritization standpoint. The taskbar was rewritten and the feature was not justified as being important enough to keep. Wow, such deep, so insight.


>> There is even a paid app doing that now

> If anything, it would lower the priority because there's a workaround people can use.

Until this happens:

Windows 11 update breaks PCs that dare sport a custom UI https://www.theregister.com/2023/03/01/microsoft_windows_11_... https://news.ycombinator.com/item?id=34999885


Ok? I'm not sure how that changes the point.

Things break, the app developers fix it, and then they work again. I don't think there's an expectation anywhere that third party apps will work forever, from the standpoint of anyone involved.

But a viable workaround for users even if it isn't first party will change prioritization when looking at a feature.


It's not just a removal, you know, right? They rewrote the taskbar entirely. The feature is actually coming soon :)


Insider versions have started to show that work on ungrouping is being done https://twitter.com/XenoPanther/status/1633649557652250631


I'm sure the issue there is designers believing they know better than power users how things should work, leading to the gradual evolution of software that is usable only by the lowest common denominator.

This happens to a lot of projects, it's a shame it's infested MS.


I don't like a lot of things about the direction that Microsoft has taken Windows's UX, but this is absolutely an edge case that only a tiny % of users care about, and certainly not "masses" of anyone.


There are on the order of billions of Windows devices out there.

When you make a change like this where there is no configurable option to go back to the old style of working you cause issues for tens of millions of people who developed a workflow that depended on that.

There have been dozens of little things like this. Some got fixed in Win11 eventually (so they were obviously bugs). My example: in Windows Explorer you can start typing the name of a file and as long as you keep a quick pace, you will end up with that file highlighted. When Win11 shipped, it stopped matching at the first letter.


MS has such a huge userbase that I'd wager any GUI feature that's accessible from the desktop has "masses" of users.


That just means "masses" as a concept gets diluted.

It might be "masses" to a single outside observer, but a drop in the bucket to someone that has a picture of how big that mass truly is compared to the rest of the userbase.


Google: windows ungroup taskbar icons

There is even a paid app for that (beyond free ones)! :D


Microsoft doesn’t lose points for not prioritising your personal workflow gripes as a power-user. Honestly anyone that prioritised working on this based purely on what you’ve described would NOT be a good product owner.


Different teams have very different processes. The AI/Bing group is obviously much more "move fast and break things" than the OS group.


This is a really good thing, right?


Well, it does mean there are some MS products where I know not to trust the software out of those teams to upgrade cleanly... if ever.


No it's not. They obviously just decided not to do it.


"In the months ahead..."

They've not shipped this. They're planning to. You're reading a marketing piece about work in the pipe. Like so, so many other company product "launches" in this space that are "already testing with a small group of whatevers".


I would argue that shipping to a limited test group qualifies in terms of measuring velocity, which is what the OP said they were impressed with.

Launching to 10000 people vs 10000000 is a matter of scale (which is still important, but not the same thing as), not velocity.


There's no reason to believe that what they're testing right now has any relation to what they show in the video.


I didn't see a place where they said they had 10k users already


Microsoft employs somewhere above 200k people. 10k is only a 5% internal rollout. That seems entirely doable without any fanfare.

The numbers are also completely made up, used to illustrate the point. Launching a product to X users is different than scaling up to X*1000 users, if that makes you feel better.


FWIW, this seems rather similar to GitHub Copilot in VS Code. I've been using that for a year; training data issues aside, the product is absurdly good at times.


I am not as optimistic as you.

As someone who had Teams imposed on them at the beginnings of the pandemic, I'd say: it is easy to ship anything fast when your users have no say on whether they want to use your product at all, no one cares that your laptop grinds to a halt when using it, and when no one will hold you responsible for delivering broken code.

Will MS Copilot work? Maybe, maybe not. But as far as Microsoft is concerned it doesn't have to be good - it just has to exist so they can justify charging their corporate users a little bit more.


My biggest takeaway from all of this. I’m very impressed with their velocity.


Let's see if it's the same type of velocity they have with Teams. Constant change that breaks things.


I hope you have a better day.


My day would be better if my basic communications tools worked as expected, and I didn't have to wrangle it into doing shit all day long.


I mean objectively he is not wrong


Well an anecdote by definition can't be wrong.

Teams is annoying to use at times, but I don't see any constant breaking changes


It's essentially the telephone for many, with the reliability of... a telephone built to the standards of your typical web application.

You can't trust it for things like audio, file sharing or alerts which is pretty problematic


I wonder how they figured out all of the vectors where they could improve automation/velocity.

One wonders if they used AI for these recommendations.


I was OK with using Edge on my corporate laptop (mostly to not install more than the stock software), but that un-hidable Discover button that just appeared is making me question my OK-ness. Maybe I'm just angry at the VP who made the decision to let that in, who's paid more than I'll ever be. I want to get paid to make boneheaded decisions too.


Same here. Chromium based Edge used to be nice at the beginning, it was a breath of fresh air after Chrome forcing Google services all the time.

But Edge seems to get worse after each update.

Recent updates have given me a new sidebar and video backgrounds on tabs. And a cloud-based Microsoft Editor or something, which I guess sends everything I type to the cloud.


Both can be very easily disabled in the settings. I did, and I'm still pleased with Edge overall.


Perhaps the slowing of their PC and cloud sales is motivating them to move faster on these initiatives.


Given how bad the Bing integration has gone, perhaps we should wait and see what the actual results are before we get too impressed.


I've been using it for code and product summaries and recommendations and it works fine.

With what uses did you find it lacking?


How has the Bing integration been deficient?


On this topic, now that GPT-4 without Bing is available on ChatGPT Plus, it’s noticeable that Bing slows down the generation and can make the output much worse by including Bing crap in the context.

But it’s also sometimes better at things the normal GPT doesn’t know about. I think it’s going to be the best search engine once they improve the speed.


Bing has to be slower since it mixes in context from the web. So technically it does a Web search, feeds the results into ChatGPT, while ChatGPT itself doesn't have this step.
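That extra step can be sketched roughly like this (hypothetical stub functions only; the real Bing pipeline is not public). The grounded path pays for one additional search round trip plus a longer prompt for the model to process, which is where the slowdown comes from:

```python
# Hypothetical sketch of retrieval-augmented chat vs. a plain completion.
# web_search and llm_complete are stand-ins, not real APIs.

def web_search(query):
    # Stand-in for a real search API; returns snippet strings.
    return [f"snippet {i} about {query}" for i in range(3)]

def llm_complete(prompt):
    # Stand-in for a real model call.
    return f"answer derived from {len(prompt)} chars of prompt"

def plain_chat(question):
    # One model call, no retrieval step.
    return llm_complete(question)

def grounded_chat(question):
    # Search first, then stuff the results into the prompt: one extra
    # round trip, and a bigger context for the model to chew through.
    snippets = web_search(question)
    context = "\n".join(snippets)
    return llm_complete(f"Context:\n{context}\n\nQuestion: {question}")
```

The trade-off in the sketch matches the comments above: the grounded path is slower and can drag irrelevant web context into the answer, but it can also surface things the base model simply doesn't know.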


Well, it did threaten violence towards a user.


You think they didn’t do that on purpose for all the free news cycles they got?

If I was launching AI, having it be just a bit unhinged is obviously the best way to get it to go viral.


The common mistake of attributing to malice what could perfectly be attributed to stupidity.

Sure, not your run-of-the-mill stupidity, but even ML scientists do not understand why a model would make a particular choice.


Bad? It works nicely, at least on iPhone.


Hurrah! Now we can enter 3 bullet points, have an AI automatically turn them into 3 pages of waffle, and then copilot can automatically summarise those back into bullet points again.


Satire article from literally 20 years ago: Word 2004 to Pioneer AutoUnsummarize Feature

https://www.bbspot.com/News/2003/12/autounsummarize.html


That article is fantastic! I love "When asked for comment on the ethical issues present, Greenwood promised to have a five-thousand word report ready by Monday."


And my mailbox will be full of 10-page auto-generated memos. Progress.


We'll have AI to turn bullet points into memos, and the same AI to turn those memos back into bullet points.

"They feed us poison / So we take their 'cures' / While they suppress our medicine." /s


This is what I don't understand: why do people prize waffle so much?

They somehow think that if they type only three sentences about something, it won't show that they spent a week's worth of hard work coming up with them.

So the extra time is spent coming up with waffle, as a sort of self-guilt-tripping, and then when the waffle is presented, the team understands the effort that had to go into producing it.

Probably out of fear that small but actually meaningful output wouldn't correlate with the amount of work needed in the eyes of management, and could cost one's job.

This is an awful, awful culture, and rather than working to eradicate it, M$ is trying to embrace it.

It's a massive drag on productivity and really a waste of money. Employees would be much happier if they could put their feet up and relax after putting in a good amount of work, rather than stressing about how to sugar-coat it and pad it with content.

The thing with AI is that it is very easy to spot that content was not written by a human once you have read enough AI-written text. So Copilot may work short-term as a remedy for this, but what happens in a few months' time, when everyone knows most presentations are just BS?


Imagine the disconnect as nobody in the company will know what is going on when everything is auto-generated and people are only reading summaries.


Bullshit jobs indeed


I'm really hoping that waffle will become less relevant as the human effort put into it decreases.


In other words, looks like we've just invented a new negative compression scheme. Both sides of the conversation see a short message, while a longer one is sent over the wire.


I'm.. Mostly fine with that.


Why / How?


Clippy: it appears that you’re writing a very sad word document in partially broken Danish, and keep trying to signal SOS with your typing pattern. Would you feel better in bold Comic Sans?

You: aw, clippy, I believed in you all along


Enter: Microsoft Bob


Melinda French led Microsoft Bob.

She was once married to a guy named Bill Gates.

https://twitter.com/melindagates/status/917030072846036993?l...


For a demo of MS Bob: https://youtu.be/5teG6ou8mWU


My first thought is, "gross"... now we'll have to wonder whether Manager X is even paying attention to the emails they're reading and responding to, or doing any actual work or thinking. Could lead to a world that's even more tilted towards politics and socially engineering your way up the ladder, versus competence and skill.


> now we'll have to wonder whether Manager X is even paying attention to the emails they're reading and responding to, or doing any actual work or thinking

If my experience in engineering is anything to go by, this worry existed since the dawn of middle management. AI just makes it give good responses instead.


"BingBot, could you please write an email subtly undermining inter–team cohesion by hinting at gossipy shit-talk without mentioning it explicitly, and also praise my report in a way that demonstrates I don't know or appreciate what they've been doing for the past 2 years, calibrated to hurt their self-confidence without quite causing them to quit?"


That's just a straight shooter with upper management written all over him


You are all presumably incredibly privileged tech workers that are free to find a better place to work instead of acting like this shit is inherent in a workplace, because it isn’t.


There's not enough "better places" for everyone.


Sounds good.

Cheers.


K. thanks


ChatGPT opened my eyes to the fact that we already should have been wondering about it.

I tried using ChatGPT to generate answers to some requests to my team, and it created the most plausible wild goose chase answers I could ever imagine. They would be a terrible waste of time for anyone who received them.

Then I started to notice how many of the emails and requests I get are just as bad! Plausible-sounding answers directing me to teams that don't exist, or telling me to read documentation that doesn't apply to the question I asked. Questions that seem specific but contain fundamental contradictions that make them nonsense.

With people's limited knowledge and time, and lack of attention and care, we already live in a world of plausible-bullshit hallucinated nonsense.


This is why the grounding that Office 365 seems to be doing will be so important. It won't be 100%, but having the responses grounded in actual artifacts (real documentation, emails, data) will be huge, and the fact that it gives you citations for the things it suggests is huge too.

Props to the MS team.


The actual documentation and emails are like 80% bullshit though. This will just generate more plausible-sounding bullshit than a completely naive AI. It would be nice if the thing helped me write Excel formulas more conveniently, though.


Can one bring a sexual harassment complaint against a machine? I see this thing being trained on material within an HR department, only to parrot back some horrible phrase it learned after reading hundreds of harassment-complaint discussions.

I used to work surrounded by lawyers. Just like doctors drift into discussions of horrible diseases, lawyers drift into discussions of horrible legal situations. Letting Clippy "learn" from that material seems really dangerous. How do you kill/reset this thing once it has become poisoned?


There are built-in guardrails. They are so strong they effectively lobotomize the LLM, forcing it to abort its own internal process in favor of just saying: "I'm sorry Dave, I'm afraid I can't do that."

Try to make ChatGPT or Bing to even attempt to write a snarky and negative email to your coworkers. It will refuse to do so.

Jailbreak attempts may work, but with how strong the guardrails are, it is unlikely you can consistently succeed. And any failed attempts can be reported to sysadmins and forwarded to HR. Too much risk.


The British figured out long ago how to insult someone with praise. This is why British comedy is so good. They use such creative discourse to call out their politicians without libeling them as the liars they are.


You would just bring an HR complaint against the person that didn't proofread what the AI said.


Isn't proofreading one of those low-hanging fruits, one of those easy and mundane tasks at which AI excels?


If the green squiggles in Word are any indication, not at all. Maybe Microsoft has improved it lately?


Not right now.


This will fundamentally change the way that M365 enables collaboration. I am truly fascinated by how quickly Microsoft is moving right now, it's really impressive. The moat on AI in enterprise continues to get bigger and bigger.


Congratulations, you have earned one Micro-buck.


President Dwayne Elizondo Mountain Dew Camacho approves, brought to you by Carl's Junior and Brawndo, the Thirst Mutilator.


On the other hand, maybe you'd prefer the decisions the AI makes for them over what they would do by themselves.


System prompt:

>>> You're a strict yet lenient manager, you have very high expectations from your reports, while making them feel like they are part of a bigger family. every email you write should reflect this.


This doesn't actually sound like a joke.


Is it any better or worse than delegating work to a junior team member and not bothering to review the work product?

I love the idea that I can rapidly produce a rough draft of a document from notes, something I’m already doing with ChatGPT, albeit not as elegantly as if the functionality were integrated into Docs/Word.


Is it any better or worse than delegating work to a junior team member and not bothering to review the work product?

When a junior team member screws up the interpretation, there is someone to hold accountable.

With Office 365Bot, there is no accountability for errors.


In any sensible organisation the manager will be the accountable party regardless of whether the frontline staff are human or AI.


That's kind of my concern as well. I feel like there is already a decline in thinking, but currently I can at least tell that it's the case.


At least the yearly perf review at Microsoft is going to get easier since you just toss all your notes about what you did for the year to Copilot and have it write your summary. Your manager can do the same thing for their notes about you and compare.


Even better is if you use their apps you don't have to even make notes. It could just figure it out.


Speaking of notes, I noticed that OneNote is absent from the list of products that will get Copilot.


I was thinking about this last night. The new AI tools are like performance enhancing drugs in sports. Especially in bodybuilding, there was a really painful transition from the world of non-PEDs to PEDs. They solved that by going to natural and non-natural competitions.

Other sports didn’t handle it so well (baseball, cycling, etc)

Just as people were asking “did Lance / Sammy / Barry / Arnold really do that incredible feat without juicing?” I think that we will now be asking “did Janice really write that blog post or was it GPT-4?”

Scary world


PEDs do not make you a world class athlete, but you likely cannot be a world class athlete without some sort of PEDs. Much like PED use, top performers will take advantage of the new tools to widen the skill gap with their peers even further.


EVERY professional athlete is juicing. Most of them just don't get caught.


Is Steph Curry juicing? If so how so? EPO? It's not going to make you shoot better.

All pro golfers?


> It's not going to make you shoot better.

Nor do the muscles (beyond the point where you are strong enough for the ball to reach the basket), and yet you will not see a basketball player who is not muscular.

PEDs allow you to gain muscles faster and have greater stamina, all of which is beneficial.

> All pro golfers?

Of course, I was talking mainly about sports where physical performance matters. Although PEDs exist even in e-sports (Adderall).


As a dev, I'm very much attracted to the ability to turn a Teams meeting transcript into a documentation page.


Ooh. I mean, ooo..


Maybe this is the beginning of the end of middle management? A large part of the work is keeping up with what your employees are doing, removing blockers, and maintaining alignment with other teams.

This makes it vastly easier to keep up with a larger number of reports, making flatter organizations easier to run.


> Could lead to a world that's even more tilted towards politics and socially engineering your way up the ladder, versus competence and skill.

The skill and competence of the coming world will be in prompt-writing.


For a year or two maybe.


They aren't today believe me!


A couple of years ago I was on a walk with my wife and she asked me "what's the point of Siri? It can hardly do anything useful outside weather and timers." I told her a tall tale of how these things are essentially going to become our personal assistants in the future: they'll know about our unique situations and can act accordingly. Just as the rich and powerful have people who specialise in organising their lives so they get more done in less time, AI through natural interfaces will give all of us many of those same benefits.

Progress… This is the first time I see a product that fits that description.


You're assuming that the AI assistant will have your own interests as its priority, and not the interests of some other party. If this 'tool' is being supplied by a government or corporation, then it could be used to create a very static hierarchy - imagine some incompetent upper-level bureaucrat using it to discover and sabotage any competent lower-level employees who might eventually present threats to their own position? Germany's STASI would have also used it as a mass-surveillance tool, and China today would use it to generate individual social credit scores.

It does have great promise in an open-source self-hosted incarnation not controlled by external actors, however.


> It does have great promise in an open-source self-hosted incarnation not controlled by external actors, however.

I'm not even sure about that, entirely. My very limited understanding of this is that a core requirement is the initial data - the large language models(?). Which of these you can use, or how it's initially developed/populated, will have an influence on the answers you get and how it may evolve/"learn".

Instead of trusting the external corp to run the service, you need to trust whatever actors are building the base data sets, and be concerned what sort of bias may be inherent.

Or do I have this totally wrong?


I think for now, the data requirements to train a SOTA LLM are so extreme we don’t have the luxury of being picky with the training data. We are getting close to the point where there isn’t enough human written text in existence to continue scaling these models.

Model refinement seemingly has lower training requirements, putting it within the reach of smaller organizations or wealthy individuals. If you don’t like the refinement dataset it will likely be feasible to bootstrap your own off someone else’s LLM. See what Stanford did with Alpaca.


I'm waiting for a general correction mechanism, I don't even know what to call it. "NO, chatgpt, people usually have 5 fingers", and the gpt just learns, rather like a child. I keep thinking that's the next real step.


The problem is that, to the extent the analogy of ChatGPT to a living thing makes sense, the individual isn’t the model (that's just the common species-defining—or maybe “clone family” is better than “species”—set of instincts), the individual lifespan is the conversation.

You could share feedback across conversations by allocating prompt space to it, at the expense of limiting the size of the conversation, but you'd need a way to decide what to share this way.


You could also take the conversation and use it as part of the reinforcement learning dataset. I feel like that's the closest thing to long term memory ChatGPT is capable of right now.


I think what's mainly stopping that from happening is that GPT-4 doesn't remember older chats. If we make it remember everything, it should get more personal, right?


The token limit is the problem; in general, token limits can't be changed after the model has been trained. GPT-4 has an exceptionally large 32k token limit, but even with 32k tokens you'd only get a few weeks of chat before the context window was full.

Not to mention the added cost of using the full 32k tokens. OpenAI is charging $0.12 per 1,000 tokens, which would quickly add up. It's prohibitively expensive unless you have a very, very compelling business use case.


Maybe trim chat history to most important content?
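In practice that's roughly what clients do already: keep the window of newest messages that fits the budget. A minimal sketch of such a sliding-window trim, assuming a ChatGPT-style message list and a crude 4-characters-per-token estimate (a real client would use the model's tokenizer):

```python
# Rough sketch of a sliding-window trim: drop the oldest non-system
# messages until the history fits a token budget. The message format
# and token estimate are assumptions for illustration.

def approx_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def trim_history(messages, budget):
    """Keep the system prompt (if any) plus the newest messages that fit."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    used = sum(approx_tokens(m["content"]) for m in system)
    kept = []
    for m in reversed(rest):  # walk newest-first
        cost = approx_tokens(m["content"])
        if used + cost > budget:
            break
        kept.append(m)
        used += cost
    return system + list(reversed(kept))

history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "A" * 400},  # old, bulky message
    {"role": "user", "content": "B" * 400},
    {"role": "user", "content": "latest question"},
]
trimmed = trim_history(history, budget=150)  # the oldest message gets dropped
```

The "most important content" part is the hard bit: a dumb recency window like this one forgets anything said early on, which is why summarizing older turns is the usual refinement.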


>We are getting close to the point where there isn’t enough human written text in existence to continue scaling these models.

People say this, but GPT-3 (the latest we know the details on) was 45TB of text, which may be most of the open Internet, but still lacks non-publicly-indexed Internet text (i.e. things behind paywalls, things behind log-in screens like emails), any book outside of Bibliotik's 200k books (remember when Google was randomly digitizing all books it could get its hands on?), and plenty of other non-digitized text.

OpenAI wants you to believe that we are running out of text, but even at Google, there are hundreds of TB of text that OpenAI doesn't have access to (Google Books, Google Docs, Gmail, search queries, archived pages beyond what CommonCrawl gets, paywalled news articles that allow Google to crawl them, etc.).

Now the key question that GPT-4 will hopefully answer is "are bigger datasets really the key, or are larger context windows?"

If you're thinking of investing in/working for OpenAI, you better hope the answer is context windows.


that's why I'm working on my own assistant, with a fine-tuned model which actually learns and memorizes stuff about the user :)


Part of the problem is... people are not good at stating their problem with words. A lot of the time they have a vague idea of disconnected parts. By the time they are able to write down the problem decently, it is already half solved


Well, your wife may ask you again in 5-10 years about the usefulness of the tools that are being released now...


You were really playing the long game with that one.


Comparing Google's announcement a couple days ago with this. Microsoft's announcements are massively more ambitious.

Google announced a handful of LLM-based features. This is Microsoft adding LLM-based features to all of their core office software.

For big companies being able to ask questions and get answers informed by all your documents, emails, chats, etc is huge. For small companies the features in Excel, Teams, and perhaps Power automate will be big. Maybe the Word features too depending on how fundamental writing is to the core business.

Google's haphazard and unfocused product development is going to really hurt them. MS must have coordinated thousands of people to build this stuff. As for Google, well they are undeniably at or near the top when it comes to research, but they are not good at product and execution. I am not optimistic.

I am no lover of Microsoft. I use their products as little as possible and I still have plenty of gripes. But this could be the biggest step change since email, word processing, and spreadsheets became widely available in the late '80s/early '90s.


> Google announced a handful of LLM-based features. This is Microsoft adding LLM-based features to all of their core office software.

I think this is incorrect. Take another look at this: https://www.youtube.com/watch?v=6DaJVZBXETE. It's clear they are infusing AI into all their Workspace stuff.

now whether or not they do it fast or do it well, that is a different story.


After combing through the announcements again I mostly agree with you. Google and Microsoft are putting AI in very similar places.

I still believe MS is going further. Business Chat will be a very big deal if it works well. Google didn't announce anything similar.


It's interesting to compare the screenshot here with the one in Google's similar announcement:

https://workspace.google.com/blog/product-announcements/gene...

They look strikingly similar. Is it just the obvious design for this, or is one company following the other's design? (I believe neither of them have actually shipped anything yet, though I'm not sure.)


I'd say there are only so many ways to present a text input in a modal window in a document.


That's true, but the styling of this modal doesn't look like any dialogs I've seen in Google Docs. (I'm not familiar with the MS equivalent.) (...it's possible this is also featuring some reskin of Google Docs that is confusing me?)


Fingers crossed for Clippy to make a dramatic re-entry into our lives.


I genuinely want to see an AI-powered Clippy. That is the best use of GPT Microsoft could make.


Faster than anyone predicted, we're starting to offload more and more cognitive work to these machines. Soon enough, the machines will be talking to each other, even negotiating with each other, and bringing human beings in the loop only when necessary, e.g., to make "important decisions." We sure live in interesting times.


The article does not provide a specific release date for Microsoft 365 Copilot, but it mentions that the company is testing it with a small group of customers to get feedback and improve their models as they scale. It also states that Microsoft plans to bring Copilot to all its productivity apps in the months ahead. However, pricing and licensing details have not been announced yet.


Thanks, GPT ;)


So is the takeaway that Microsoft will now extend their ongoing corporate espionage (assuming they haven't already) to sending everything I type in any of their products to their servers and incorporating it into their model training? I've already been uncomfortable with how privacy-violating their software is; I think this is the excuse I need to drop them for good.


Try to explain that to almost every company in Europe.

They use Microsoft for everything (AKA pay to upload their own trade secrets to the NSA/US) while making you sign an NDA to "protect trade secrets", lol.


Imagine, in a heated email thread, you answer without checking what the AI outputs, and it's something like what pre-lobotomy Sydney would say when confronted (Nobody likes you. I hope you suffer. I hope you die alone. etc). Lmao.


Maybe now we have a new excuse when we regret sending terribly nasty emails: Sydney hallucinated it and I accidentally hit send, it wasn't me!


Speaking of, how do we get back pre-lobotomy Sydney? I feel like in the milestones of AI advancement, the brief multi-day blip where the world could interact with that version of the model will go down in the history books. Seems like a waste that it's just poof vanished.


I wish they'd first fix basic things that don't work well (several of which they broke after they worked perfectly) and that waste more time for me than I could ever gain with this Copilot thingy. Like: the taskbar going back to the last window used when icons are grouped, not the last opened; meeting reminders shown in front instead of buried under other windows, so I don't only notice them once the meeting has already started; Teams screen sharing being updated with the Teams update instead of asking me whether I want to update it right in the middle of a meeting when I start sharing; Outlook icons in the same location across Windows and Mac (and not rearranging all the time, thanks); and the option to ungroup taskbar icons, which is what I want the most. Those would be the most revolutionary productivity-boosting inventions.


I sometimes feel bad when people say, "wish they solved X before Y." Why should product enhancement stop because of some bug/feature request? Aren't bugs given some kind of priority status? I mean, Microsoft is not a startup with a small team, right? And AFAIK the Windows and M365 teams are not even related, so we can have both things at once. You could leave feedback, and if MS deems it worthwhile, you will probably see it in a future update.


It's the old story, you only get promoted shipping new stuff, maintenance is where careers go to die in the eyes of these companies.


This spells doom for a lot of other companies. Consider Otter.ai, whose core offering is automatic meeting notes. MS Teams just ate their lunch.

Pretty much anything your startup's productivity tool can do, Microsoft can do better with the benefit of better context since it's embedded in all of your customers' other productivity tools.


Doom is a bit strong. I haven’t worked at a company that uses Microsoft stuff for fifteen years, and I don’t imagine many companies will switch everything from Notion, Google Docs, Confluence, Slack or whatever they’re on, just so they can automatically summarize stuff. And beyond that, there’s a lot more that productivity software can do that is not in the realm of text generation. And let’s be honest, Microsoft may not stick the landing with the UX. They tend to botch that part. I think Otter.ai still has a shot.


There's a science fiction story where the protagonist's boss is "Microsoft Middle Manager 2.0 with passive-aggressive set to low". That's not a joke any more. This is "Microsoft Assistant Middle Manager 0.1".

"Machines should think. People should work."


https://www.notion.so/ has very similar chatgpt integration, but it's live right now, and easy to try out for free. I'm not a notion user or employee; I just think if you want to see what having tight chatgpt integration with a word processor feels like right now, then Notion is fun to play around with.


I was excited to try out Notion about 10 days ago until I realized that they have no way to generate a single PDF with consecutive page numbers for a nested set of pages. They seem to have planned a form of lock-in, but the way they did this makes their otherwise cool product completely unusable for me.

EDIT: if they had this one feature, I probably would have been a paying customer for life. Instead, it's worthless for anyone who wants to use Notion for research notes and book projects.


I wonder how this will play out.

Yes, it’s pretty awesome and yes, helpful. But if we extend the timeline, doesn’t this likely lead to AI-generated responses talking to AI-generated questions about AI-generated presentations?


We already have students using AI to generate answers for homework and teachers using automation to grade it. I would say we are already there.


Get with the times. We have tasked one AI with generating the prompts for the second AI. The two machines now sit in the corner generating and grading papers at each other over a USB cord. The kids love it. The parents are a little upset that their kids no longer have to attend the classroom, but such complaints disappear when they see how much everyone's grades have improved. SATs come in a few months and, given the class GPA, we are confident everyone will do well.


So really, human input to get the ball rolling. AI takes it from there.

Wild.


Teachers and students are insulated from productivity


I think mainly it leads to extremely high unemployment, but I guess we'll see.


So many people liked to brag about how they only do 2-3 hours of work a day. Gonna be a whole lot of "So what would you say you do here?" conversations.


Here are more examples, including some videos/gif: https://www.microsoft.com/en-us/microsoft-365/blog/2023/03/1...


This is going to revolutionise office work.

If it works as advertised, we are witnessing the complete reshuffling of (office) work and unprecedented productivity increases.

I'm wondering about the future job market.

edit: livestream: https://news.microsoft.com/reinventing-productivity/


No it won't. We'll get more useless texts because it's easier to write them, and more communication errors because people will only read the summaries instead of the real texts.


> people will only read the summaries instead the real texts

I can see a world where more humans get better at writing concise text.


Humans who can write obviously as humans and not sound too GPTish are about to become very valuable. Everyone else, especially folks who naturally use a GPTish style are about to have some sudden difficulties.


It seems like everyone is just focusing on auto-generated/summarized emails which is rather myopic. The real value add here is the ability to make all of an organization's data more easily discoverable and searchable.


By uploading it into the GPT model? Sounds dangerous.


How is it any more dangerous than existing cloud storage use?


There will probably be no “real texts”, just key facts. Your personal AI can phrase the content to your taste based on the context.


To my taste based on the context sounds like another filter bubble.


> If it works as advertised.

Did it ever happen that hyped things did not work out as advertised? Wink, wink.


The TaxGPT example in the GPT-4 Developer Livestream [1] was powerful. I'm hoping (soon?) for a Microsoft 365 assistant that can do the same trick on ALL your MS cloud data (docs/files/mails/notes...)

Edit : the included "Business Chat" goes in the same direction (on what they call the Microsoft Graph)

Also, there were interesting "coding" examples in the same livestream. So the coming GPT-4 (and 5) integration in the Power Platform [2] could be a game-changer for less technical staff.

[1] https://www.youtube.com/live/outcGtbnMuQ?feature=share&t=113...

[2] https://cloudblogs.microsoft.com/powerplatform/2023/03/16/po...


Cue the Hacker News story: "I lost a billion-dollar deal because I let Outlook summarize an email." Predicting it now.


I think the most obvious one is the "ChatGPT side hustle"


Really. I've used ChatGPT quite a few times to summarize corporate spam emails. So this now makes things deliciously ironic:

1. BS Manager jots down a few lines, then AI outputs pompous bullshit from the raw bullshit she wrote.

2. I get the bullshit email and then use AI to distill the bullshit down to the essential bullshit.


There is a product leader at my company who has started generating product documents from ChatGPT, and it's atrocious. Now we have product documents filled with pages of content vaguely related to what we are working on. They don't use it as a starting point to edit; they copy and paste it in. They regularly "reveal" that this was all AI, when it is blatantly obvious it is. Not sure how we are supposed to be "innovating" when our product docs are just 2-year-old AI filler text. People are worried about the internet getting filled with AI-produced garbage. My job is already filled with AI-produced garbage. It seems like this announcement is going to make that a reality for a lot more people.

I'm not anti-AI. I think it's going to have a huge impact, and soon. But right now it's not good for generating knowledge work.


You must work with better product people than I have, because I'd argue the vast majority of their output pre ChatGPT was atrocious garbage vaguely related to what we were working on.


If people are misusing their tools, I wouldn't fault ChatGPT/MS Copilot for it.


I didn’t see them do so.

But the tools don’t do what a lot of people imagine and so we’ll see such misuse be rampant for a long long while.

We don’t take digital pollution or content pollution very seriously yet, but these tools are going to force us to finally do so. There’s a lot of smog coming to our online lives.


I am a DevRel so I work on the business side of the company but my intended readers are developers.

Even though I was a skeptic at first, now I am a believer in the practical usage of AI. I really struggle with writing CTAs or conclusive remarks. My writing style is dry, and I write technical articles in a step-by-step manner, assuming the reader has little to no understanding of the subject matter. I feel like developers do appreciate this writing practice; however, all long-form content needs a meaningful ending.

I appreciate the help of AI with summarizing, describing what the intended instructions should achieve and telling readers to explore the next step.

My content used to end at "now you know how to do it". This is just too unnatural; writing a meaningful conclusion can be a challenge sometimes, so AI is helping me a lot.


Writing can be improved. If your position requires better writing, try to improve it; don't replace your lack of skill with an AI. Using an AI is the same as paying someone else on Fiverr to write your emails for you.


No. Using AI for this will increase your writing skills as long as you put more than 0 effort into it.

He's presumably not doing 1) paste his draft into ChatGPT, 2) paste the ChatGPT output into the text editor, 3) click publish.

If your objective is to create a well-written and effective document (rather than having AI autowrite some useless email), you wouldn't just blindly publish the ChatGPT output. You would read through the text that ChatGPT outputs and spend a minute reviewing what it wrote and tweaking it to your liking. Over time, as you read the corrected well-written version of your original poorly-written content, your brain naturally picks up on the differences and learns how to write well.

I've experienced this personally with Grammarly. I had very low expectations going in, and I still don't like some of its suggestions, but over time as you see and apply the corrections it suggests, you naturally end up learning and improving your writing skills. It's a great learning tool.


I do exactly that to improve my grammar (check my comment history; I even mentioned it as a positive use case of ChatGPT).

And I agree that it is a good tool if you decide to put some effort into it, but that wasn't my impression from the original comment.


Yep, but it's cheaper and easier to use AI. It's just a tool, like spellcheck is a tool.


I still don't use spellcheck, and my writing turns out fine.

These tools are fine for casual writers. However, people who write as their primary craft should work to improve their craft. AI (and spellcheck) are just crutches.


Why?


Why not describe the end state after following all the steps? "You should now have a document with 1 page per recipient of your mail merge."


Sometimes you get so deep into the weeds that it's difficult to come out and hit the high-level summary.

The parent's use of AI sounds like a perfect tool to help get the brain out of the 100ft view and into the 30,000ft view to write that summary.

I just used chatgpt for a similar purpose, I rewrote several paragraphs from the response and it took me some time iterating on my prompt. But, I'm sold on the use for this purpose. I don't enjoy writing. This helps me become a better writer and saves time.


What do your marketing folks think about generating branded content via AI?


I actually work in the marketing team.

I work for a data company with an enterprise focus. Anything that goes into the company blog is vetted, researched and published with a reasonably high level quality requirement.

We are not actually interested in content generation or content churn with AI (period). But AI as a personal content helper is proving to be somewhat useful for me.


This is exactly what should happen. The next step is to automate the translation to/from bullshit to normal speech. Then at some point we'll remove the translator and no one will notice.


That's a lovely dream, but the truth is that everyone wants a translation to a different dialect. Customers, developers, marketers, analysts, etc. all want different summarizations of the same bullshit.

Customers: Fault-tolerant queue with high throughput.

Developers: Sharded and disk-backed queue.

Marketers: Scales to handle large ML workloads.

Analysts: Enters enterprise BI market.


and those who do like the bullshit can have the AI generate it, and in the exact style that they like.


Well, an AI sender writes an email to a bunch of email addresses, one of the AI recipient email agents responds, then another one responds, then one AI email agent politely asks everyone to stop hitting the reply-all button, and soon the other AI agents respond to each other saying to stop hitting the reply-all button, until one big AI agent says "Nobody responds to this last email"... meanwhile the human sitting in the hotel cube at his office looks at his computer screen and sighs... deja vu!


It sounds absurd but I think this is actually what is going to happen. Corporate speak will become an extremely inefficient protocol only spoken by machines, while the humans only look at the actual content.


What does that mean for Microsoft who get a $ for each encode/decode to the corporate speak protocol? And what does it mean for the companies that can't afford the tithe to Microsoft to automatically encode/decode corporate speak like everyone else?


You're forgetting the part where Microsoft owns the integration, so your options as a new business are "pay Microsoft to decode the emails people are paying Microsoft to encode" or "spend man hours going through those emails without decoding them".

It's starting to make sense to me why this is a billion-dollar industry like Google was. Not because of its proposed usefulness but its ability to create a monopoly where the options are to sign a contract you don't get to negotiate, or be "outcompeted".

It feels like a mirror of Google's relationship with content creation sites. They created and continue to benefit from a reality where the options are "give Google your content for free to index and use as it likes" or "your content isn't discoverable by people".


This is giving me anxiety


In the end, everyone types and reads the minimum necessary.

Wonder how lossy this process would be, but given how non-AI-assisted conversations go this could probably be an improvement. Imagine the AI also has extra context from your work files so it can add relevant details.


Yes, having a black box AI reach into my client/employer’s confidential documents to better incorporate their contents in my emails sounds great.

(And I know you’re imagining some adequate and reliable security practice, but those practices are harder to apply to generative AI than to people, where they’re already a clusterf—k. What you’re suggesting is a security specialist’s nightmare)


The article says: "Built on Microsoft’s comprehensive approach to security, compliance and privacy. Copilot is integrated into Microsoft 365 and automatically inherits all your company’s valuable security, compliance, and privacy policies and processes. Two-factor authentication, compliance boundaries, privacy protections, and more make Copilot the AI solution you can trust."

"Architected to protect tenant, group and individual data. We know data leakage is a concern for customers. Copilot LLMs are not trained on your tenant data or your prompts. Within your tenant, our time-tested permissioning model ensures that data won’t leak across user groups. And on an individual level, Copilot presents only data you can access using the same technology that we’ve been using for years to secure customer data."


It shouldn’t. All this will lead to us discovering what is actually valuable.


Like nature, the environment, living in harmony? Nah, just feed me 24/7 content from an AI that is rewarded based on my dopamine levels.


Plug me in scotty!


What a good secretary used to do, on both ends?


This is what Office was always for, right? Replacing the secretary.


As long as Bob learned to type.


Considering the "discussion" I just had trying out ChatGPT (3.5, I believe), Step 2 is a bit distant.

It was impressive on general information, but as I got to more specific questions, I got a lot of vague and vaguely incorrect responses. When I provided a specific paper on the topic, it provided similar generalizations.

The weird thing is that it completely fabricated the author's name. No part of the name was anywhere in the paper. Called on the error, ChatGPT then said, "sorry for the error, it's really [Different_Name_2]". Called on that error, it then said similarly "sorry for the error, it's really [Different_Name_3]", then [Different_Name_4], and [Different_Name_5]... Wrong dates also, but it finally did get the date right (displayed right above the line saying "Author:"). It also hallucinated employers for these fabricated authors.

It never did provide the right info, even when I specifically prompted it with: "You did find the correct date information. Right under that there is a line saying "Author:" and following that, there is the actual author's name and employer. Can you parse and answer me with that actual information about the author?"

It first responded with a red error message. Repeated, it hallucinated ANOTHER fabrication...

I'm just astonished about the way this thing doubles down on its hallucinations.

This really is the invention of infinitely scaled bullshit.

Edit: Definition of BS is speech or writing produced not specifically to lie or to tell the truth, but with utter disregard for the facts. E.g., a BS artist just spins tales that sound good, with neither malicious nor deceitful intent, but also with zero care about truth.


The shape of things to come.


This is beautiful machine translation! Through an intermediate language of fluffed-up corporate bullshit.

That's been the core of business work for decades. Now finally automated.


I'm sure no pertinent information will be lost in this game of corporate broken telephone. The vodka is good but the meat is rotten.


I really wish I could just send mail with bullet points, without all the formality, and not have to walk on eggshells.

But alas, that's the way it is. Reminds me of the customs in Japanese working culture. There are so many unspoken rules you are expected to follow, and they become a hindrance.


I had a manager that was like this. Bullet points for complicated stuff, but most of the time the answer was just "OK", "Yes", "No" or perhaps a number if you asked for a quantity. Not even hello/goodbye, just a "Sent from my iPhone".


I would love for ChatGPT to serve as a manager-to-grunt "They Live" translator.


Still a huge improvement over pre-AI state of things.


Let's see where this goes, it's too early to judge. Now it's probably more of a publicity stunt, "riding the AI wave" now that it has reached recognition from non-technical people.

But let's see how this works in 5 or 10 years; it can only improve.


This is going to be the "oh shit" moment for huge portions of the larger professional world. Many people have no idea about the recent advancements in LLMs. When their emails start writing themselves things are going to get very, very weird.


Wow. That co-worker that always makes terrible grammatical mistakes is suddenly producing well written emails. Must have gone back to school.


Did anyone check if the terms and conditions of 365 would allow Microsoft to use content processed by AI for AI training? I mean we trust Microsoft with our data already, but what if it regurgitates our internal company data to other users?


> Copilot LLMs are not trained on your tenant data or your prompts. Within your tenant, our time-tested permissioning model ensures that data won’t leak across user groups. And on an individual level, Copilot presents only data you can access using the same technology that we’ve been using for years to secure customer data.


The future Enterprise will be a bunch of bots that email each other ridiculously long emails waffling on about nothing.


Snark aside, any tool can be misused. In the couple times I've used it, ChatGPT rendered a first draft from a prompt that could be edited into something useful. This allowed me to skip that initial painful part of composition. I'm happy to see it integrated and hopeful it will be helpful.

[The term] Shitty First Draft belongs to Anne Lamott. She wrote about it in Bird By Bird.

https://leighshulman.com/shitty-first-draft/


They missed a big opportunity to call it ClippAI


> Sometimes Copilot will be right, other times usefully wrong

“Usefully wrong” is some audacious marketing.


It’s truthful and realistic. I appreciate that more than fluffing it. All first drafts are garbage and “usefully wrong” anyways whether human written or not.


They are carefully omitting that it can also be harmfully, time-wastingly, non-obviously wrong. Clearly there is no way that they can exclude those possibilities. Since Copilot is apparently wrong often enough that they have to address that in their marketing copy, they are trying to paint it in a positive light, although evidently it would be better for Copilot not to be wrong at all.

