
It's true I'll hate on Microsoft for just about anything, but they didn't say "some products", "some services", or "some processes"; they said "EVERY SINGLE THING!!!", see the difference?



>they said "EVERY SINGLE THING!!!", see the difference?

Where did they say this? I read "almost every".


Does it really change the context whether or not they've only successfully shoehorned it into 99%?


Given the quote is referring to what they've done "for years and years and years [...] in our product groups" outside of the OpenAI arrangement, the fact that a large number of their products have come to make some use of AI models without much fanfare (search, spell-check, spam filtering, voice dictation, language translation, recommendation systems, ...) is not inherently due to the more recent LLM shoehorning. Machine learning is just the best choice for a good number of tasks.


I'm not talking about LLMs in particular. I guess this is a company-wide mandate to grow knowledge of how to do this stuff well, and I mean, that makes sense. But in the trenches (aka hell's asshole) that means a lot of bad, bad stuff is being relied on, and it generates calcification of business segments and kafkaesque anti-patterns for the uninitiated. This doesn't only apply to "AI"; it's a generic feature of shoe-hornings. The problem with the shoe-horn is that it's politically costly to resist, even if resisting makes good business sense at the micro level.


I'd agree that "We're going to shove a chatbot in every single one of our products", like the recent Copilot integrations, would reek of shoe-horning and a possible company-wide mandate.

But remarks more along the lines of "Looking back over the past decade, we've made use of ML models in some part of almost all of our products" seem fairly reasonable to me, and not necessarily indicative of much other than machine learning being the best tool for an increasing number of tasks. If they weren't using ML-based echo cancellation in Teams calls, for instance, they would have a worse product than competitors that do.


Teams doesn't even do copy-paste or open Microsoft-based formats in under 10 seconds (IME)... I rest my case.


I don't claim that Microsoft products are perfect, just that it seems a reasonable use of machine learning. The things I've seen them use ML models for are genuinely useful, and most were added without much fanfare years before the recent generative AI hype.


I'm saying that forcing particular technology leads to worse products in many cases.


I understand, but think that in many relevant cases it'd now be non-ML approaches that would be "forced". Machine learning is just the easiest and best way to accomplish a large range of tasks.


I guess any general application of statistics and control is going to be called ML then? If that's the world we're living in, then I wonder how the missing fraction is being governed. Pure malice?


Not all applications of statistics/control are ML - I don't know what I said to give that impression.

A spam filter based on regex or manually-selected criteria and thresholds would not be ML for instance, whereas modern effective spam filters typically do make use of machine learning.
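As a toy sketch of that distinction (assuming scikit-learn is installed; the example messages, keywords, and labels below are made up purely for illustration):

    import re
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    # Rule-based filter: a hand-written keyword regex, no learning involved.
    def rule_based_is_spam(msg):
        return bool(re.search(r"free money|act now|claim your prize", msg, re.IGNORECASE))

    # ML-based filter: a naive Bayes classifier whose parameters are
    # estimated from labelled examples rather than written by hand.
    train_msgs = ["Free money, act now!", "Meeting moved to 3pm",
                  "You won, claim your prize today", "Lunch tomorrow?"]
    train_labels = [1, 0, 1, 0]  # 1 = spam, 0 = ham

    vec = CountVectorizer()
    clf = MultinomialNB().fit(vec.fit_transform(train_msgs), train_labels)

    msg = "Act now to claim your prize"
    print(rule_based_is_spam(msg))               # True: a hard-coded keyword matched
    print(clf.predict(vec.transform([msg]))[0])  # likely 1 (spam), inferred from the training data

The first function only ever does what its author typed; the second generalises from examples, which is what makes it "machine learning" in the usual sense.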



