Hacker News new | comments | show | ask | jobs | submit | best comments login

"Sudden change of policy by Google, which is directed specifically at startups (as smaller web sites should largely remain below even the new lower thresholds), is surely an unpleasant surprise for us and does not create much trust in Google as a vendor. In the future we would therefore keep our distance from Google Cloud and avoid deep integration with any Google services on which it can pull a similar trick. For example we would be wary on taking free Google Analytics for granted."

I think this is one of the most important points in the article - the way they handle these pricing changes destroys trust in Google's other business offerings. How can people use Google products and services as a core piece of their infrastructure when they're willing to bump their prices >10x with only a few months of notice? That could literally be a business-ending event, depending on how core that service is to the business.

In the case of maps, there weren't many great alternatives for a long time, due to Google sucking all the oxygen/profit potential out of the field with their excellent free offerings. Fortunately, their last (sudden) price bump seems to have allowed the creation of some good alternatives.

I'm more disturbed by the parts of the article which say that he never agreed to a forced arbitration clause in the first place because it wasn't in the ToS when he paid, but the judge decided to go with the later ToS anyway:

> According to Lord, the terms of service when he made the initial pledge aren’t the same terms of service they are today. The original terms of service, according to RSI’s own records, make no mention of arbitration before February 2015. “These Terms of Service (TOS) do not affect any transactions made before its effective date,” RSI’s terms site said. “All prior transactions are governed by the TOS in effect on the date of such transactions.”

> Lord came to court prepared. He had printed out multiple versions of the terms of service, all records of communication with RSI, and a long document recording the 77 promises RSI hasn’t fulfilled in a timely fashion, including citations showing where and when RSI made those promises. But the case never got that far. He said RSI’s representatives understood that Lord’s pledges weren’t covered by the arbitration clause, and he offered to settle, again, for $3,800. They declined.

> According to Lord, when RSI’s representatives stood before the judge, they tried to argue the arbitration clause of their TOS. “Right off the bat, they assert the arbitration clause applied to everything, even though it plainly didn't,” Lord said. “I had to give the judge a copy of the first terms of services that clearly show that the arbitration clause was not there for the first few transactions.”

> ...According to Lord, the judge decided to apply the current TOS to all of the transactions in dispute. “He said he didn’t want two rulings floating out there,” Lord said. He may have lost this case, but he’s not done fighting. “I’m going to pursue it further. I’m not sure in what direction. I’m going to be speaking with a couple of different attorneys to evaluate my options.”

I mean... what? So even if you do exercise your choice as a consumer to avoid a forced-arbitration clause, companies can simply add it to their ToSes later on and retroactively make it apply to all interactions ever with the company?

He lost because of a forced arbitration clause. This massive privatization of justice where any boilerplate service or product now comes with forced arbitration is making my blood boil.

The justice system should always be an option when arbitration fails, and arbitration should take no more than a reasonable time to fail (say, two weeks for this $5000 amount). The whole point of a small claims court is to handle such cases, not offload them to a dodgy corporate lawyer masquerading as a judge.

If the public service of justice is slow and expensive, we need to fix the public service, not replace it with a free-market simulacrum. That's always the case with non-marketable but essential public goods.

>In particular, Google has prevented manufacturers wishing to pre-install Google apps from selling even a single smart mobile device running on alternative versions of Android that were not approved by Google (so-called "Android forks").

Sounds a lot like how Microsoft abused licensing agreements with OEMs to discourage them from selling PCs not bundled with Windows.


I can't speak for the Geek Squad (as I have never needed to use them), but they price match to Amazon (and several other online stores), and will generally have what I am looking for (at least for consumer electronics). I know that I will get it that day (vs. Amazon's "two day shipping" that regularly turned into three or four days) at an Amazon price, it will certainly NOT be a counterfeit, and I don't have to give my information so I don't feel like I am being tracked.

It will be easier to boost the Earth’s spin back to an 86,400-second day than to fix all the code.

Back when I worked in games we would detect cheaters and then shadow ban. Quarantine them by only matching them into games with other cheaters.

You may still have to ban them from certain elements of your game, like player economies (auction house, etc). But the more legitimate their experience looks the better.

The idea is that instead of fully banning them and triggering the next iteration of the arms race, you trap and release them into a competitive arena for cheaters. It's actually fun for them to compete with each other at who can cheat the hardest and no one else gets hurt. We hooked them up with a community rep. They found bugs and generally improved our security. Everyone won.

There's no way to win with an adversarial approach to cheating IMO, not when you let the client run on their machine
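A toy sketch of that quarantine matchmaking (the function, player list, and lobby size are all made up for illustration, not from any real game):

```python
from collections import defaultdict

def make_lobbies(players, lobby_size=4):
    """Split players into lobbies so that flagged (shadow-banned)
    players are only ever matched with each other."""
    pools = defaultdict(list)  # False -> legit pool, True -> cheater pool
    for name, flagged in players:
        pools[flagged].append(name)
    lobbies = []
    for pool in pools.values():
        # Fill lobbies in order within each pool; pools never mix.
        for i in range(0, len(pool), lobby_size):
            lobbies.append(pool[i:i + lobby_size])
    return lobbies

players = [("alice", False), ("bob", True), ("carol", False), ("dave", True)]
print(make_lobbies(players, lobby_size=2))
# [['alice', 'carol'], ['bob', 'dave']]
```

The point of the design is that flagged players still get real-looking matches, so there's no ban event to reverse-engineer.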

Let's read the law: https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CEL...

>Article 102

>Any abuse by one or more undertakings of a dominant position within the internal market or in a substantial part of it shall be prohibited as incompatible with the internal market in so far as it may affect trade between Member States.

>Such abuse may, in particular, consist in:

>b) limiting production, markets or technical development to the prejudice of consumers

>d) making the conclusion of contracts subject to acceptance by the other parties of supplementary obligations which, by their nature or according to commercial usage, have no connection with the subject of such contracts

Ok, so, Google said "You can't use Google Play unless you force users to have Google Search installed".

How is that not clearly breaching d?

Then they said "You can't use Google Play if you try to help develop any android forks."

How is that not clearly breaching b?

>but simply saying "Surprise....enormous fine" is ridiculous

They've had at least two years notice, so could have reduced their fine by complying when they were first warned. http://europa.eu/rapid/press-release_IP-16-1492_en.htm The article literally warns about the exact things they're still doing.

A better source:

> The European Commission has accused Google of abusing its Android market dominance by bundling its search engine and Chrome apps into the operating system. Google has also allegedly blocked phone makers from creating devices that run forked versions of Android.


> How is it any different

Apple doesn’t have a dominant search engine to push down the throats of device makers.

They also don’t have an iOS consortium, nor do they work with other makers, so there is no bullying makers into doing what they want “or else”.

As others pointed out, Apple is not in a majority position in the first place, but this fine is mainly bound to how the search engine and Google suite of services come into the picture, not to Android on its own.

With all due respect, it's not a postmortem, it's an advert. It doesn't really say anything other than "We had a problem, we fixed it." There are virtually no technical details in there other than "something would restart spontaneously, which shifted the load somewhere else". Maybe I'm a bit jaded by Cloudflare and AWS writeups, but this really isn't anything special or worth reading.

A few months ago Intel pulled a stunt where they showed a 28 core 5GHz CPU, implying it was a production CPU that would ship this year. They failed to mention that it was attached to an industrial compressor to supply the necessary cooling for the overclocked CPU, and that it was a server socket (https://www.anandtech.com/show/12932/intel-confirms-some-det...).

Since then, whenever I see a headline with Intel in it, I heavily discount it until I can verify the facts. They’ve damaged my trust, and I suspect many others.

> Everything is broken. Everything is fine.

This is the correct conclusion. The people getting outraged over this are the same that will hold long and boring monologues about how everybody does REST wrong.

This reminds me of the desirability of pre-WWII battleship steel in particle physics experiments. Due to required detection sensitivity they need to construct experiments from materials that have as low background radiation as possible in order to not mask the actual information of interest. Ever since the first atomic bombs were detonated, sufficiently low-radioactivity steel became much more difficult to find. A large portion of available low-background steel supply is from battleships that were built before the bombs [1]

[1] https://en.wikipedia.org/wiki/Low-background_steel

Dave Lee's YouTube video is a withering takedown, presented dispassionately.

His argument is such a slam dunk that the article concludes with stunned disbelief, speculating that maybe there's "something wrong with the MacBook Pro with Core i9 chip that Lee received".

Yes there's something wrong with it, that was the exact point of his video. Do people really think he just got a lemon?

When they say “bathed in radioactive cloud” they actually mean “wind carried trace amounts of radioactive materials”. Anytime anything “nuclear” is involved, the reporting gets very poetic.

That’s a feature, not a bug. It was originally created in the context of the net neutrality debate. This allows you to see if your ISP is cheating you on Netflix bandwidth, supposedly in order to push their own media-on-demand product.

It was introduced after reports of ISPs doing just that, and it was Netflix’ way of hitting back.

> At this point, I decided the only thing that made sense was to build my own mattress from scratch.

Bless this mindset. The internet would be a lot less interesting without the makers and their write-ups.

Almost thirty years ago now, driving around on a hot summer day with a couple of friends. One guy puts a punk mix tape in the player, and the guy with a college music scholarship was aesthetically offended. "Anyone could make this music," he complained.

The other guy nodded in agreement and replied, "that's the point."

I have mixed feelings about anti-cheat, especially in the last few years. A lot of them are getting rather intrusive. Take Player Unknown's Battlegrounds for instance, which uses BattlEye. It actually injects a kernel mode driver into Windows that spies on whatever else your system is doing and exfiltrates unknown data in the name of "guaranteeing a fair game experience." I didn't even realize that this is what it was doing until my system crashed one day and the cause was some .sys file in PUBG.

It'll also randomly kick you from games for having various programs installed or running. Programs such as VMware. You have to disable all VMware services or PUBG will kick you randomly for using "unauthorized applications." God forbid you have any VMs running, that might amount to a ban (seriously).

Worse still is that when you take your complaints to their social media, or in any way speak ill of it, you get hordes of fanboys saying that you shouldn't install anything other than games on your PC or you're a dirty cheater. "Oh, you want to do things _other_ than gaming on your PC? You should buy another PC then."

Don't even get me started about trying to run games in a virtual machine w/ GPU passthrough. The communities will tear you a new one, telling you to do things "normally" and that attempting to use anything other than the "normal" setup makes you a cheater. Just google anything like "steam vac kvm" or "battleye kvm" and you'll find hordes of people claiming they heard some guy say virtualization is the future of game cheating, therefore VMs are cheating tools and should be banned.

Seriously, if I could get a refund for every game that uses BattlEye, I would try.


This analysis is simplistic.

Correlated failures are common in drives. That could be a power surge taking out a whole rack, a firmware bug in the drives making them stop working in the year 2038, an errant software engineer reformatting the wrong thing, etc.

When calculating your chance of failure, you have to include that, or your result is bogus.

E.g., drive model A has a failure rate of 1% per year, but the failure mode is that the drive won't spin up from cold; if it's already spinning, it keeps working as normal.

3 years later, the datacenter goes down due to a grid power outage, and a dispute with diesel suppliers takes the generators down too. It's a controlled shutdown, so you believe no data is lost.

2 days later when grid power is back on, you boot everything back up, only to find out that 3% of drives have failed.

Not a problem. Our 17-out-of-20 redundancy can tolerate up to 15% failures!

However, each customer's data is split into files of around 8 MB, which are in turn split into the 20 redundancy chunks. Each customer stores, say, 1 TB with you. That means each customer has ~100k files.

The chance that a given file is left with only 16 good chunks is about C(20,4) * 0.97^16 * 0.03^4 ≈ 0.3%

Yet your customer has 100k files! The chance they can recover all their data is only (1-0.003)^100000... Which means every customer suffers data loss :-(
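That arithmetic can be checked in a few lines (a sketch using the comment's own numbers: 3% post-outage drive failure, 20 chunks per file, any 17 recover, ~100k files per customer):

```python
from math import comb

p_fail = 0.03                  # per-drive failure probability after the outage
n_chunks = 20                  # chunks per 8 MB file
n_needed = 17                  # any 17 of 20 chunks recover the file
files_per_customer = 100_000   # ~1 TB in 8 MB files

# Probability a given file has more than 3 failed chunks (unrecoverable):
# sum the binomial tail for k = 4 .. 20 failures.
p_file_lost = sum(
    comb(n_chunks, k) * p_fail**k * (1 - p_fail)**(n_chunks - k)
    for k in range(n_chunks - n_needed + 1, n_chunks + 1)
)

# Probability a customer recovers *all* of their files.
p_customer_ok = (1 - p_file_lost) ** files_per_customer

print(f"per-file loss:     {p_file_lost:.3%}")   # roughly 0.3%
print(f"customer survives: {p_customer_ok:.2e}") # effectively zero
```

The k = 4 term dominates the tail, which is why the single-term approximation in the comment lands on the same ~0.3%.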

I'm just finishing spinning up OpenStreetMap servers in EC2 as one of the last stages of our testing, and they look great. We almost went with Mapbox, which is probably a pretty good alternative. But as part of that move I decided to try OSM tiles and I had them up and running in an afternoon via a docker image, created an Ansible role for deploying virtual machines that evening, and had tiles for our region built the next night after tuning some CartoCSS for our needs.

We already have our own aerial tiles, built off the free NAIP imagery, and I just went through rebuilding the 2017 NAIP tiles last week to make them higher quality and fix some seam issues.

This combination is pretty good and lets us take control of our map tile destiny.

References I used in setting things up:

https://github.com/zavpyj/osm-tiles-docker https://switch2osm.org/ https://ircama.github.io/osm-carto-tutorials/tile-server-ubu...

While I've experienced the frustration of having a formerly free service develop paid tiers (and policies that put me in those tiers), of all the changes a software service can make, this is the one that frustrates me the least, or at any rate less than:

* shutting down the service entirely because the user base never grew into customers who actually valued the service

* changing your terms of service to forbid an activity that was previously allowed, because someone discovered a use that messed up the price points and the service owners would rather forbid that than offer a reasonable price point allowing it

* moving to opaquely metered service potentially with apparently arbitrary levels of financial exposure to the client

A big price bump with a few months notice is painful (and I'm glad it's bringing competition), but it tells me they're thinking seriously about how to sustain/develop the product and lets both them and me explore the real value of the service.

When it comes to Google, I'm more worried that they might just arbitrarily mothball something on a management roadmap whim.

Lol, I have a friend in SF who complains about high house prices and wants increased development. When I said he should move to NYC, he said it's too big and crowded and he doesn't want to live with a family in an apt. Didn't even blink.

As a consumer of APIs I vastly prefer REST APIs.

In my opinion, GraphQL moves too much of the burden to the user of the API. It makes the most sense if the data is highly dynamic, if you have a mobile app and every call is expensive, or (and this seems more common) if the backend and frontend teams don't like to talk to each other. As a user, I just want to GET /foo, with a good old API token I pasted from your dev docs, and move on to the next thing. I don't want to spend time figuring out your database schema. Or perhaps I've just yet to see a single good public GraphQL API. I recently had a look at the GitHub GraphQL API and its non-existent docs (no, I don't want to introspect your data models at runtime), noped the hell out of that, and got the REST integration for the bit I needed done in an hour.

Stripe cofounder here. It's a very fair question.

> I must admit to being somewhat uncomfortable that Stripe seems to be spreading themselves out into areas outside their core business

The vast majority of Stripe employees (and there are now more than 1,000) work on our core functionality today. But we see our core business as building tools and infrastructure that help grow the online economy. ("Increase the GDP of the internet.") When we think about that problem, we see that one of the main limits on Stripe's growth is the number of successful startups in the world. If we can cheaply help increase that number, it makes a lot of business sense for us to do so. (And, hopefully, doing so will create a ton of spillover value for others as well.)

As we grow, we have to get good at walking and chewing gum -- just as Google or Amazon have. However, while we go and tackle other problems, our aim is not only to continue to improve our core payments infrastructure, but to deliver improvements at an accelerating rate.

"no test, a single staging dev server (no local dev env), no doc, inconsistent function and variable naming" --> none of these things prevent code from shipping, per se. I know it's convenient, as a first time leader, to focus on the technical aspects, but from your description, it seems there's underlying non-technical issues you might want to get familiar with first.

On the technical side, I recommend _listening_ a lot before making any suggestions. _Maybe_ it's all backwards, and there's _always_ a better way to do it, but showing them you understand how they work _first_, then help them push something - even a small win - out of the door _first_ will get you a lot more "street cred" than trying to mandate things out of the gate.

If you're looking for some literature, "The Five Dysfunctions of a Team" and "The Culture Code" are good starting points on how to tackle cultural aspects (and how to be accepted as a newcomer "playing the boss", which will also be your case).

Other than that - leading teams is difficult, leading teams _well_ is an incredibly frustrating/counterintuitive exercise, but a huge growth position :-) Good luck!

It's been suggested that one way to find evidence of extrasolar life would be to find exoplanets whose years are integer multiples of their rotational periods, i.e. their inhabitants have done what you describe just to eliminate leap years and make their calendars easier.
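A minimal sketch of that detection criterion (the function name and tolerance are mine): flag a planet when its orbital period divides evenly by its rotational period.

```python
def looks_calendar_engineered(year_s: float, day_s: float,
                              tol: float = 1e-4) -> bool:
    """True if the orbital period is an integer multiple of the
    rotational period, to within a relative tolerance."""
    ratio = year_s / day_s
    return abs(ratio - round(ratio)) / ratio < tol

# Earth today: ~365.2422 days per year -- not engineered.
print(looks_calendar_engineered(365.2422 * 86_400, 86_400))  # False
# A hypothetical planet with exactly 400 days per year.
print(looks_calendar_engineered(400 * 86_400, 86_400))       # True
```

In practice the tolerance would have to account for measurement error in both periods, which is the hard part of the idea.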

> Insurers contend they use the information to spot health issues in their clients — and flag them so they get services they need.

Horse shit. And I say that as someone who writes claims management software for the healthcare industry.

I challenge ANY health insurer to provide examples of this. Of course, they won't, because they'll cite "patient confidentiality", but that just doesn't happen.

They've wanted to do this for years, though, and try to. Requests to be able to mine claim data for familial predispositions to diseases were ones that we fended off multiple times.

Fair context: I make cheats/utilities for the exact game being talked about in this article, so perhaps my opinion on the subject is biased or even invalid.

I partially disagree about the transparency of this article. While they do explain most of their approach to anti-cheat (and that is pretty cool of them to do), they seem to leave out any mention of anything that could be controversial.

I suppose it does make sense not to mention the implementation details of their anti-cheat, but I wish they would be a little more transparent about how/when/what they snoop around and send to their servers. The current Mac game client for League of Legends contains full debug symbols and doesn't have Packman (the packer described in this article), which makes it quite easy to look through the symbols. Inside you can find all of the anti-cheat-related network packets, specifically:

PKT_C2S_EnumDrivers
PKT_C2S_EnumProcesses
PKT_C2S_EnumDrives
PKT_C2S_EnumHandles
PKT_C2S_EnumRecentFiles
PKT_C2S_EnumModules
PKT_C2S_ProcessorData
PKT_C2S_SystemState
PKT_C2S_ModuleLoadNotification
PKT_S2C_SendModule
PKT_C2S_ModuleResponse

Now, I personally expect anti-cheat to snoop around my system when I'm doing something shady like scanning its memory. However, if I was a normal user of the game, I would be a bit concerned to know that it might be sending my recently used file names, drive names, system driver names, currently running processes, processor information, system state, and even entire binary files that it automatically deems as "suspicious", to their servers.

Why can't Google just use UTC timestamps? Or at least include them alongside their "US/Pacific" timestamps.

I don't want to have to remember whether "US/Pacific" is currently on daylight saving time or not.

It's a very strange decision, especially considering that GCP has numerous regions outside of "US/Pacific".
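For anyone doing the conversion by hand, Python's zoneinfo settles the DST question automatically (the timestamp below is a made-up example, not from any real GCP status update):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# A hypothetical "US/Pacific" timestamp like the ones in GCP status pages.
local = datetime(2018, 7, 17, 13, 30, tzinfo=ZoneInfo("US/Pacific"))

# zoneinfo knows DST was in effect on that date (PDT, UTC-7), so the
# conversion is unambiguous -- no need to remember PDT vs PST.
utc = local.astimezone(ZoneInfo("UTC"))
print(utc.isoformat())  # 2018-07-17T20:30:00+00:00
```

This requires Python 3.9+ (or the tzdata package on systems without a bundled zone database).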

