
Do you really think a multi-billion-dollar company like Microsoft wouldn't have their legal team all over this? Do you not think they would have researched this thoroughly, discussed their implementation, and made sure everything they were doing would comply with the GPL?



This same "multi billion dollar" company had an AI bot tweeting Nazi propaganda a week ago. They spectacularly failed in their xbox one release, having to completely retool and regroup. Their Windows Phone efforts remain a complete disaster and are now doomed to failure.

The whole "they're a big company...don't you think they've thought of this!" argument (and its many "do you really think they'll lose?" variations) is always a fallacy. That doesn't make the argument about the copyright of ABIs valid, but at the same time the notion that Microsoft is big therefore they must be right is absurd.


Well, if we really believe the bot was AI, then it wasn't Microsoft's bot. It was its own "artificial intelligence".

But the rest of those have nothing to do with their legal team. They wouldn't build a copy of another OS's interface into this OS without making sure it was legal to do so.


You would think that Google wouldn't implement a copy of the Java APIs in their operating system without making sure it was legal to do so, but apparently not.

Ozweiller is quite right. Big companies copy other people's stuff, breach trademarks (Metro?) and generally mess up all the time.

I doubt the ABI emulation is actually a problem, but calling it "Windows Subsystem for Linux" might well be a trademark violation, as it doesn't involve Linux itself. Imagine if Wine called itself "Linux Subsystem for Windows"; I think Microsoft would deploy their legal team right quick.


I think the AI comment was more about the fact that they didn't safeguard against seemingly obvious outcomes, such as internet trolls trying to get the bot to say bad things. Many companies block no-go words (hitler, racist slurs, etc.) during username creation, so why didn't Microsoft?

It might not have been simple to do, but still, it was hard not to see this outcome coming.


Lol, what the hell are you talking about? This thing is SUPPOSED to learn. You can't have AI and restrict what it learns; that defeats the entire purpose. Isn't this the same thing that happens to people too? They go around the internet and soak up knowledge, some of it racist, harmful misinformation, but they soak it up nonetheless.


Well, to be clear, I didn't say restrict what it learns; I said safeguard against outcomes. Or are you arguing that Microsoft knew the bot would spew racist insults in a laughably short timeframe, and only planned to run the bot for said timeframe?

The very fact that they had to pull the plug suggests that this was not desired, and as such, it should have been safeguarded against.

An example safeguard: limit what it can say. If a reply contains racist or similar content, literally don't send it to Twitter. The bot still learns, the algorithms don't change, and Microsoft still gets to see how the given AI behaves in full public view. And above all else, the bot isn't a Microsoft-branded "Heil Hitler" AI.
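
A minimal sketch of such an output-side gate, in Python; the blacklist contents and the posting/logging functions here are hypothetical stand-ins, not anything Microsoft actually shipped:

    # Sketch of an output-side safeguard: the bot keeps learning from
    # everything it sees, but every reply is screened before publishing.
    BLACKLIST = {"hitler", "nazi"}  # hypothetical; a real list would be far larger

    def is_safe(reply):
        lowered = reply.lower()
        return not any(term in lowered for term in BLACKLIST)

    def post_to_twitter(reply):
        print("POSTED:", reply)   # stand-in for a real Twitter API call

    def log_for_review(reply):
        print("BLOCKED:", reply)  # keep blocked output for humans to inspect

    def respond(reply):
        # The learning pipeline is untouched; only publishing is gated.
        if is_safe(reply):
            post_to_twitter(reply)
        else:
            log_for_review(reply)

    respond("hello world")   # posted
    respond("heil hitler")   # blocked, never reaches Twitter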

It sounds like you believe what happened is perfectly within reason - if that's the case, why do you believe they pulled the plug?


Did they even have any sort of filter? If they at least blacklisted these words [0], then that seems like a reasonable enough effort on its own. However, these developers would have had to be living in a bubble to not know about trolls from 4chan.

All in all, this is a lesson that some high-profile person/group eventually had to learn on our behalf. Now, when an unknowing manager asks why your chat bot needs to avoid certain offensive phrases because "our clientele aren't a bunch of racists", you can just point him to this story. The actual racists are tame compared to what trolls will do to your software.

[0] = https://github.com/shutterstock/List-of-Dirty-Naughty-Obscen...
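
As a rough illustration: a list like [0] is just a plain-text file, one phrase per line, so loading it and screening a message or username against it takes only a few lines. The file name below is an assumed local copy of the English list, and the naive substring match is only a starting point (it has the classic Scunthorpe problem):

    def load_blacklist(path):
        # One phrase per line; skip blanks, normalize case.
        with open(path, encoding="utf-8") as f:
            return {line.strip().lower() for line in f if line.strip()}

    def contains_blacklisted(text, blacklist):
        # Naive substring matching; a real filter would respect word boundaries.
        lowered = text.lower()
        return any(term in lowered for term in blacklist)

    blacklist = load_blacklist("en")  # assumed local copy of the list from [0]
    print(contains_blacklisted("some username here", blacklist))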


Now to be fair, we restrict what humans learn all the time. We try to teach morals and ethics to our children. We generally don't let kiddos run wild and learn whatever is around without some sort of structure.


Aside from the obvious outcome that it would be manipulated (which anyone could have predicted, and a well-thought-out design would have had "learning guards"), it didn't even require deep machine learning: you could simply tell the thing to repeat various offensive statements. It was just a giant miscalculation.

However, the legal department of every company on the planet makes a risk/benefit analysis, especially in fuzzy areas like copyright law (as we've seen with the Java case: an API isn't copyrightable, then it is, then it isn't, then it is). The assumption that because Microsoft did it, it must be without risk is folly.


Sure, but that doesn't answer my question of 'why'.


Because it's a faulty premise? There is no license violation; that's why a license violation isn't being discussed (your question).


You make it sound as if the law is easy. Everyone can have their own interpretation of the law, and often those interpretations are complete opposites. That's why we have two sides in a court of law.

Microsoft's lawyers likely decided that the move is "worth the risk", but they can't be 100% sure it's either legal or illegal anyway. You can only be 100% sure after someone challenges you in court and the judges decide one way or the other.


Lawyers never decide that something is "worth the risk"; that's not their job. In this context, the job of the lawyer is to assess the legal risk, and it's a business executive's job to decide whether a risk is worthwhile.


Microsoft's company culture is pretty much the opposite of "move fast and break things" (as it probably should be when you're a platform company), but legally speaking, they seem to have adopted that culture.




