I think this is one of the most important points in the article - the way they handle these pricing changes destroys trust in Google's other business offerings. How can people use Google products and services as a core piece of their infrastructure when they're willing to bump their prices >10x with only a few months of notice? That could literally be a business-ending event, depending on how core that service is to the business.
In the case of maps, there weren't many great alternatives for a long time, due to Google sucking all the oxygen/profit potential out of the field with their excellent free offerings. Fortunately, their last (sudden) price bump seems to have allowed the creation of some good alternatives.
> According to Lord, the terms of service when he made the initial pledge aren’t the same terms of service they are today. The original terms of service, according to RSI’s own records, make no mention of arbitration before February 2015. “These Terms of Service (TOS) do not affect any transactions made before its effective date,” RSI’s terms site said. “All prior transactions are governed by the TOS in effect on the date of such transactions.”
> Lord came to court prepared. He had printed out multiple versions of the terms of service, all records of communication with RSI, and a long document recording the 77 promises RSI hasn’t fulfilled in a timely fashion, including citations showing where and when RSI made those promises. But the case never got that far. He said RSI’s representatives understood that Lord’s pledges weren’t covered by the arbitration clause, and he offered to settle, again, for $3,800. They declined.
> According to Lord, when RSI’s representatives stood before the judge, they tried to argue the arbitration clause of their TOS. “Right off the bat, they assert the arbitration clause applied to everything, even though it plainly didn't,” Lord said. “I had to give the judge a copy of the first terms of services that clearly show that the arbitration clause was not there for the first few transactions.”
> ...According to Lord, the judge decided to apply the current TOS to all of the transactions in dispute. “He said he didn’t want two rulings floating out there,” Lord said. He may have lost this case, but he’s not done fighting. “I’m going to pursue it further. I’m not sure in what direction. I’m going to be speaking with a couple of different attorneys to evaluate my options.”
I mean... what? So even if you do exercise your choice as a consumer to avoid a forced-arbitration clause, companies can simply add it to their ToSes later on and retroactively make it apply to all interactions ever with the company?
The justice system should always be an option when arbitration fails, and arbitration should take no more than a reasonable time to fail (say, two weeks for this $5000 amount). The whole point of a small claims court is to handle such cases, not offload them to a dodgy corporate lawyer masquerading as a judge.
If the public service of justice is slow and expensive, we need to fix the public service, not replace it with a free-market simulacrum. That's always the case with non-marketable but essential public goods.
Sounds a lot like how Microsoft abused licensing agreements with OEMs to discourage them from selling PCs not bundled with Windows.
You may still have to ban them from certain elements of your game, like player economies (auction house, etc). But the more legitimate their experience looks the better.
The idea is that instead of fully banning them and triggering the next iteration of the arms race, you trap and release them into a competitive arena for cheaters. It's actually fun for them to compete with each other at who can cheat the hardest and no one else gets hurt. We hooked them up with a community rep. They found bugs and generally improved our security. Everyone won.
There's no way to win with an adversarial approach to cheating IMO, not when you let the client run on their machine.
>Any abuse by one or more undertakings of a dominant position within the internal market or in a substantial part of it shall be prohibited as incompatible with the internal market in so far as it may affect trade between Member States.
>Such abuse may, in particular, consist in:
>b) limiting production, markets or technical development to the prejudice of consumers
>d)making the conclusion of contracts subject to acceptance by the other parties of supplementary obligations which, by their nature or according to commercial usage, have no connection with the subject of such contracts
Ok, so, Google said "You can't use Google Play unless you force users to have Google Search installed".
How is that not clearly breaching d?
Then they said "You can't use Google Play if you try to help develop any android forks."
How is that not clearly breaching b?
>but simply saying "Surprise....enormous fine" is ridiculous
They've had at least two years notice, so could have reduced their fine by complying when they were first warned. http://europa.eu/rapid/press-release_IP-16-1492_en.htm The article literally warns about the exact things they're still doing.
> The European Commission has accused Google of abusing its Android market dominance by bundling its search engine and Chrome apps into the operating system. Google has also allegedly blocked phone makers from creating devices that run forked versions of Android.
> How is it any different
Apple doesn’t have a dominant search engine to push down the throats of device makers.
They also don’t have an iOS consortium, nor do they work with other makers, so there is no bullying makers into doing what they want “or else”.
As others pointed out, Apple is not in a majority position in the first place, and this fine is mainly about how the search engine and Google suite of services come into the picture, not about Android on its own.
Since then, whenever I see a headline with Intel in it, I heavily discount it until I can verify the facts. They’ve damaged my trust, and I suspect many others.
This is the correct conclusion. The people getting outraged over this are the same that will hold long and boring monologues about how everybody does REST wrong.
His argument is such a slam dunk that the article concludes with stunned disbelief, speculating that maybe there's "something wrong with the MacBook Pro with Core i9 chip that Lee received".
Yes there's something wrong with it, that was the exact point of his video. Do people really think he just got a lemon?
It was introduced after reports of ISPs doing just that, and it was Netflix's way of hitting back.
Bless this mindset. The internet would be a lot less interesting without the makers and their write-ups.
The other guy nodded in agreement and replied, "that's the point."
It'll also randomly kick you from games for having various programs installed or running. Programs such as VMware. You have to disable all VMware services or PUBG will kick you randomly for using "unauthorized applications." God forbid you have any VMs running, that might amount to a ban (seriously).
Worse still, when you take your complaints to their social media, or in any way speak ill of it, you get hordes of fanboys saying that you shouldn't install anything other than games on your PC or you're a dirty cheater. "Oh, you want to do things _other_ than gaming on your PC? You should buy another PC then."
Don't even get me started about trying to run games in a virtual machine with GPU passthrough. The communities will tear you a new one, telling you to do things "normally" and insisting that using anything other than the "normal" setup makes you a cheater. Just google something like "steam vac kvm" or "battleye kvm" and you'll find hordes of people claiming they heard some guy say virtualization is the future of game cheating, therefore VMs are cheating tools and should be banned.
Seriously, if I could get a refund for every game that uses BattlEye, I would try.
Correlated failures are common in drives. That could be a power surge taking out a whole rack, a firmware bug in the drives making them stop working in the year 2038, an errant software engineer reformatting the wrong thing, etc.
When calculating your chance of failure, you have to include that, or your result is bogus.
Eg. Model A of drive has a failure rate of 1% per year, but when failed the symptom is failure of the drive to spin up from cold, however if already spinning it will keep working as normal.
3 years later, the datacenter goes down due to a grid power outage and a dispute with diesel suppliers so the generators go down. It's a controlled shutdown, so you believe no data is lost.
2 days later when grid power is back on, you boot everything back up, only to find out that 3% of drives have failed.
Not a problem. Our 17 out of 20 redundancy can recover up to 15% failure!
However, each customer's data is split into files of around 8MB, which are in turn split into the 20 redundancy chunks. Say each customer stores 1TB with you; that means each customer has ~100k files.
The chance that you have only 16 good drives for a file is about 0.97^16 × 0.03^4 × (20 × 19 × 18 × 17) / 4! ≈ 0.3%
Yet your customer has 100k files! The chance they can recover all their data is only (1-0.003)^100000... Which means every customer suffers data loss :-(
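The arithmetic above is easy to check in a few lines of Python (a sketch assuming the example's numbers: 20 chunks per file, any 17 recover it, and a 3% correlated failure rate after the outage):

```python
from math import comb

n, k = 20, 17          # 20 chunks stored, any 17 recover the file
p_fail = 0.03          # drive failure rate revealed by the cold restart

# A file is lost if more than n - k = 3 of its 20 chunks are gone.
p_file_lost = sum(
    comb(n, f) * p_fail**f * (1 - p_fail)**(n - f)
    for f in range(n - k + 1, n + 1)
)
print(f"per-file loss probability: {p_file_lost:.4%}")   # roughly 0.27%

files_per_customer = 100_000   # ~1TB in 8MB files
p_all_ok = (1 - p_file_lost) ** files_per_customer
print(f"chance a customer keeps every file: {p_all_ok:.2e}")
```

The per-customer survival probability is on the order of (1 − 0.003)^100000 ≈ e^(−300), i.e. indistinguishable from zero, which is the comment's point: per-file redundancy math falls apart under correlated failures.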
We already have our own aerial tiles, built off the free NAIP imagery, and I just went through rebuilding the 2017 NAIP tiles last week to make them higher quality and fix some seam issues.
This combination is pretty good and lets us take control of our map tile destiny.
References I used in setting things up:
* shutting down the service entirely because the user base never grew into customers who actually valued the service
* changing your terms of service to forbid an activity that was previously allowed, because someone discovered a use that messed up the price points and the service owners would rather forbid that than offer a reasonable price point allowing it
* moving to opaquely metered service potentially with apparently arbitrary levels of financial exposure to the client
A big price bump with a few months notice is painful (and I'm glad it's bringing competition), but it tells me they're thinking seriously about how to sustain/develop the product and lets both them and me explore the real value of the service.
When it comes to Google, I'm more worried that they might just arbitrarily mothball something on a management roadmap whim.
In my opinion, GraphQL moves too much of the burden to the user of the API. It makes most sense if the data is highly dynamic, you have a mobile app and every call is expensive, or (and this seems more common) the backend and frontend teams don't like to talk to each other. As a user, I just want to GET /foo, with a good old API token I pasted from your dev docs, and move on to the next thing. I don't want to spend time figuring out your database schema. Or perhaps I've just yet to see a single good public GraphQL API. I recently had a look at the GitHub GraphQL API and its non-existent docs (no, I don't want to introspect your data models at runtime), noped the hell out of that, and got the REST integration for the bit I needed done in an hour.
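The difference in up-front effort being described looks roughly like this: a sketch comparing a REST fetch against the equivalent GraphQL request for GitHub's public APIs (the token is a placeholder, and treat the exact field names as illustrative):

```python
import json
import urllib.request

TOKEN = "YOUR_TOKEN"  # hypothetical placeholder

# REST: one URL per resource; the server decides the response shape.
rest_req = urllib.request.Request(
    "https://api.github.com/repos/octocat/hello-world",
    headers={"Authorization": f"token {TOKEN}"},
)

# GraphQL: a single endpoint; the caller must already know the schema
# and spell out every field it wants in a query document.
query = """
query {
  repository(owner: "octocat", name: "hello-world") {
    stargazerCount
  }
}
"""
gql_req = urllib.request.Request(
    "https://api.github.com/graphql",
    data=json.dumps({"query": query}).encode(),
    headers={"Authorization": f"bearer {TOKEN}"},
    method="POST",
)

# urllib.request.urlopen(rest_req) or urlopen(gql_req) would execute these.
```

The REST call needs only a URL and a token; the GraphQL call additionally requires discovering the schema, which is where the "figuring out your database schema" complaint comes from.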
> I must admit to being somewhat uncomfortable that Stripe seems to be spreading themselves out into areas outside their core business
The vast majority of Stripe employees (and there are now more than 1,000) work on our core functionality today. But we see our core business as building tools and infrastructure that help grow the online economy. ("Increase the GDP of the internet.") When we think about that problem, we see that one of the main limits on Stripe's growth is the number of successful startups in the world. If we can cheaply help increase that number, it makes a lot of business sense for us to do so. (And, hopefully, doing so will create a ton of spillover value for others as well.)
As we grow, we have to get good at walking and chewing gum -- just as Google or Amazon have. However, while we go and tackle other problems, our aim is not only to continue to improve our core payments infrastructure, but to deliver improvements at an accelerating rate.
On the technical side, I recommend _listening_ a lot before making any suggestions. _Maybe_ it's all backwards, and there's _always_ a better way to do it, but showing them you understand how they work _first_, then help them push something - even a small win - out of the door _first_ will get you a lot more "street cred" than trying to mandate things out of the gate.
If you're looking for some literature, "The Five Dysfunctions of a Team" and "The Culture Code" are good starting points on how to tackle cultural aspects (and how to be accepted as a newcomer "playing the boss", which will also be your case).
Other than that - leading teams is difficult, leading teams _well_ is an incredibly frustrating/counterintuitive exercise, but a huge growth position :-) Good luck!
Horse shit. And I say that as someone who writes claims management software for the healthcare industry.
I challenge ANY health insurer to provide examples of this. Of course, they won't, because they'll cite "patient confidentiality", but that just doesn't happen.
They've wanted to do this for years, though, and they try to. Requests to be able to mine claims data for familial predispositions to diseases were ones we fended off multiple times.
I partially disagree about the transparency of this article: while they do explain most of their approach to anti-cheat (and that is pretty cool of them to do), they seem to leave out any mention of anything that could be controversial.
I suppose it does make sense not to mention the implementation details of their anti-cheat, but I wish they would be a little more transparent about how/when/what they snoop around and send to their servers. The current Mac game client for League of Legends contains full debug symbols and doesn't have Packman (the packer described in this article), which makes it quite easy to look through the symbols. Inside you can find all of the anti-cheat-related network packets, specifically:
Now, I personally expect anti-cheat to snoop around my system when I'm doing something shady like scanning its memory. However, if I was a normal user of the game, I would be a bit concerned to know that it might be sending my recently used file names, drive names, system driver names, currently running processes, processor information, system state, and even entire binary files that it automatically deems as "suspicious", to their servers.
I don't want to remember if "US/Pacific" currently has daylight savings or not.
It's a very strange decision, especially considering that GCP has numerous regions outside of "US/Pacific".