Uber's technology is reportedly 'hanging by a thread' (businessinsider.com)
47 points by tangled on Sept 30, 2015 | 59 comments



> The engineers in charge of these systems have been "at odds," which has created friction, according to The Information.

> Uber’s chief technology officer, Thuan Pham, later wrote to his staff that the mistake “reflects an amateurism with our overall engineering organization, its culture, its processes, and its operation.”

This makes it sound as if the two engineers in charge of the Node.js and Python systems are bickering over which technology stack is better and refuse to compromise. I get that there's worry about career progression if your backend option loses, the glory of being the engineer in charge of the entire backend of Uber, and the real technical differences between the stacks. But to hold the entire company hostage seems like it would be a career-ending move to me, regardless of sides. Then again, I am not in management.


To me, this sounds very much like two little pigs arguing about the merits of straw vs. twigs when they should be building in brick.


The news article probably isn't a technical piece focusing on the actual stack; http://stackshare.io/uber/uber shows that there are other layers in Uber's stack.


By brick, which language would you want to use?


At that scale my choice would be Java. Proven, reliable, plenty of devs, vendor support, good type system, and plenty of open source code to leverage. Plus it's a mature ecosystem.


Java the language or any language that compiles to the JVM and can leverage existing Java libraries?


I'd go with Java itself. It's the most mature and stable. Uber is not a startup. It needs an enterprise solution.


Scala, Go, Python, Haskell, Erlang.


Instagram seems to be doing OK with python.


Personally, for infrastructure-level stuff like that, and if I was starting from scratch, I would choose Haskell.

At the scale of Uber, and given that there would be a lot of legacy hanging around, I would probably choose to build on top of the kind of Scala libraries that Twitter has been putting out (which, judging from the other article about Uber's microservices currently on HN, it sounds like they are doing).

The serious statement behind my slightly tongue-in-cheek remark earlier was that I don't think either Python or Node is suitable for building infrastructure-type applications that will form the backbone of a constellation of SOA-type applications. Round the edges, it's slightly more defensible. However, for central infrastructure, I'd rather have more foot-bullet barriers than those offer.


Good luck hiring Haskell programmers.

If you're really serious, you go C or C++, not interpreted languages at all. But that said, plenty of big-scale stuff is running on PHP, Python and Ruby. The language matters less than the people using it and the architecture design.


Actually, that Haskell developers are hard to hire is one of the biggest myths going. Supply of Haskell developers massively outstrips demand at the moment.

In my experience Haskell also neatly addresses the other two points of people and architecture: the average [0] level of skill of developers in the Haskell community is higher, as is their average level of grit and determination (because of the learning curve). Haskell almost encourages constructing your applications as operations over streams of events (a la event sourcing, unified log etc), which for my money is the best way to think about designing most platform-type apps right now.

[0] Note: I'm not claiming that all Haskell developers are coding ubermensch, just that the average level of the pool of talent is higher.


Maybe it's the other way around. You have to be extremely serious when your goal is safe C/C++.


Perhaps part of the problem is that these interpreted languages have made programmers lazy.


Perhaps part of the problem is that a lot of people still don't understand that developer time is one of the biggest costs in developing and maintaining software.


Given that you have a lot of NodeJS and Python developers, I would look at Vert.x.

You could then let both sides continue to write the code in the language they are comfortable with but still work on a common platform. Plus being on the JVM it is fast and scalable.

It is not sensible to just stop what you are doing whilst you retrain and rehire your engineering team to learn a new language. Especially in a competitive hiring market like San Francisco.
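
To make the "common platform" idea concrete, here's a minimal sketch, assuming Vert.x 3's event-bus API; the verticle, the address, and the payloads are invented for illustration. In a polyglot setup the consumer could just as well be a JavaScript verticle (or another supported language), deployed alongside this one and talking over the same bus.

    import io.vertx.core.AbstractVerticle;
    import io.vertx.core.Vertx;

    public class DispatchExample {

        // Hypothetical "dispatcher" verticle that answers ride-match requests.
        public static class Dispatcher extends AbstractVerticle {
            @Override
            public void start() {
                vertx.eventBus().consumer("rides.request", msg ->
                    // Reply with a placeholder driver id; real logic would call
                    // out to whatever matching service sits behind this.
                    msg.reply("driver-42"));
            }
        }

        public static void main(String[] args) {
            Vertx vertx = Vertx.vertx();
            vertx.deployVerticle(new Dispatcher(), deployed ->
                // Some other component (in any supported language) asks for a
                // match over the same event bus once the verticle is deployed.
                vertx.eventBus().send("rides.request", "pickup:somewhere",
                    reply -> System.out.println("matched: " + reply.result().body())));
        }
    }

The event bus is the shared contract; as long as both camps agree on addresses and message shapes, the language behind each verticle stops mattering.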


The best one for the job; I am assuming that is what Pham has been brought in to identify.


> But to hold the entire company hostage seems like it would be a career-ending move to me, regardless of sides.

It would be, if you lose. If you win, you are rich beyond the dreams of avarice. It would also be a career-ending move to admit that your system is less than the other system, and everyone will remember you as that loser whose system was done better by the current VP of engineering.


Easy... choose Node and put the Python engineer in charge or choose Python and put the Node guy in charge.

Or redo the whole thing in Go.


There is definitely an executive decision their management could make, but it would shred one of the two leads' careers. Making this kind of hard call is why you're the boss, so it says something about someone when they don't make it (although we don't know the full story here, this situation happens often enough in general).

The post I was replying to was asking why an individual lead engineer in this scenario would do this. They have reasons to fight and no reason to concede. Their bosses have every reason to, at minimum, get both sides to "hug it out" so why haven't they?


Or move everything to a microservices architecture and use the best-suited language for each subsystem. It would force both camps to engineer for composability and open service APIs. My tuppence.


I assure you that Uber is built using "micro services". That doesn't mean there isn't value in having some language uniformity in the org and it doesn't solve the language preference between people problem.


Looks like Uber has moved to a microservices architecture. Check their engineering blog: (https://eng.uber.com/soa/)


Wasn't aware of their move. A microservices architecture then makes the technical conversation about the service, its latency, and its performance rather than implementation details.

Is it such a bad thing to have many different languages in play as long as the SLAs are met?


I have a friend in SF who works at Uber and writes Go there. He said they're using it for a lot of things internally.


If you are choosing a whole new language, why not one of the JVM ones?

It is faster and more scalable than Go, plus it has a usable debugger.


Decent engineers aren't tied to one platform, so this is not really a valid argument, IMO.

Career progression goes faster if you aren't limited by these sorts of choices. Being able to learn something new fast and apply the same principles all over the shop is something that engineers do...

You work with what you are given.


On the level of senior engineer, having chosen the Wrong platform would give the other senior engineer the promotion, at best, and at worst, get you fired. Regardless of your capability to build on a different platform.


Just because it's wrong now doesn't mean it was the wrong decision when the decision was made.

Choosing X over Y could have meant they got out the door quicker than a competitor and got them to the point they're at now, even if Y makes more sense now at the scale they're at.


Some serious link bait in that article title - sounds like they've got issues but they are working through them. The article ends by pointing out that New Year's Eve went off without a hitch.


A more accurate title would have been "Uber's new CTO is doing pretty okay," but that doesn't entice people to click nearly as much


"The company's engineering staff has grown to 1,200 — a quarter of Uber's workforce — from just 400 people."

1200? I'd be very interested to know what they're all doing. Not saying that what the company is doing is easy on that scale, but it's hard to see how throwing a thousand people at the problem can be an effective solution. Unless many of them are working on new products? But Uber seems pretty young to be investing that much in R&D.


Driverless Cars. They've basically hired the entire engineering department at Carnegie Mellon in Pittsburgh, where their R&D offices are located.


What benefits would they see/get from using their own co-located servers instead of using VMs on a cloud provider?

For a fast-growing business, it would seem a huge win to not have to worry about physically scaling your infrastructure. And I can't imagine that their infrastructure size is so large (they're not, for example, indexing the internet) that they would get a huge cost savings from using their own hardware.

But certainly, I must be missing something?


What benefit would they get from using cloud? A lot more money for much worse performance. It's like buying a monthly bus pass when you could buy a Honda Civic for less.

Despite what cloud providers want you to think, scaling (for most of us) is largely an architecture, design and software issue, and it tends to be rather specific to your system (until you get to the point where you have to build your own). "Auto-scaling" doesn't help you make sure you aren't opening too many forking connections to your database, and it doesn't eliminate coarse locks, un-optimized system calls, or the need for cache locality, sharding, denormalization, or really anything that requires effort.

The game changes a bit with the SaaS stuff that PaaS vendors are selling (DynamoDB, for example), but then you pay a lock-in price.


"reflected an amateurism with our overall engineering organization, its culture, its processes, and its operation.”

That is not going to get the engineers on your side to solve the problem. Instead it will create even more friction between managers and engineers.


One nice thing about friction is that if it's done correctly, it can rub away the problems. Forcing people to deal with the reality of the situation often forces positive change.


Indeed.

"Uber Raided By Dutch Authorities, Seen As 'Criminal Organization'" http://yro.slashdot.org/story/15/09/29/2328232/uber-raided-b...


Unless the engineers agree. Even if they don't, if the assessment is true and they can't accept it then those engineers are probably a part of the problem.


"setting up servers in a new data center on Halloween"

That kind of problem seems remarkably common - the end of months and years can be peak times for businesses but it's also the kind of date that people pick for their project milestones ("we'll have the servers in by the end of the year").


or "lets have new capacity in place a month before busy Halloween" ... progress slips... the project manager pushes and everyone forgets the reason for the original date.


Yeah, this article is in "Business Insider" rather than "Tech Insider." Ya can tell by reading it. The real WTF is having that many developers without a solid ops team to match. At least they can charge extra for peak loads, which many of us cannot do.


Uber can't legally do it either. They just don't give a shit and do so anyway.


Every industry will have dynamic pricing in the near future; transportation has only just begun to fully embrace it.


What laws ban surge pricing?


The anti-capitalist ones.


The laws that regulate taxis, which Uber TOTALLY ISNT GUYS I PROMISE


I wonder why the original article wasn't posted rather than the businessinsider article.

edit:

Paywall. I get it.


They need EEs like Twitter has.


From my perspective, Uber's backend can be sharded trivially. No driver in LA is going to get matched with a passenger in NYC.


I generally find it wise to avoid calling engineering problems faced by other organisations trivial. There are often a lot of complicating factors that aren't obvious from an outside perspective.


Sure, but it's fun to talk about how you'd build it given some assumptions. I agree with your parent that it seems like a shardable dataset. To boot, I imagine that you could easily store all cars for a city/region in memory (type, x, y, driver id, status, ....).

Next I'd grid the area (maybe 250m2?) into blocks. When a user does a search, figure out his or her block, and start looking for cars in the current block, expanding outwards. You read-lock the blocks as you examine them. You only write lock 2 blocks as a car moves from one block to another (the blocks themselves would have a list of cars, maybe an array, maybe an rtree, maybe another grid).

It all falls apart when a single server can't handle all the load for a region. But then you could sub-divide the region, connect the servers with a queue (this car is now your responsibility), and let clients (not necessarily devices, but the api servers consuming these data servers) join the data.
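
For fun, here's a toy, single-city sketch of that grid in Java. Everything about it is assumed (cell size, locking scheme, class names); it's just the idea above made concrete, not anything Uber actually runs.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    // Cars live in ~250m cells; a search starts at the rider's cell and expands
    // outward ring by ring, read-locking each cell it inspects. Moving a car
    // write-locks at most the two cells involved.
    public class CityGrid {
        private static final double CELL_METERS = 250.0;

        public static final class Car {
            final String driverId;
            volatile double x, y;
            Car(String driverId, double x, double y) { this.driverId = driverId; this.x = x; this.y = y; }
        }

        private static final class Cell {
            final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
            final List<Car> cars = new ArrayList<>();
        }

        private final Map<Long, Cell> cells = new ConcurrentHashMap<>();

        private static long coord(double v) { return (long) Math.floor(v / CELL_METERS); }
        private static long key(long cx, long cy) { return (cx << 32) ^ (cy & 0xffffffffL); }

        public void addCar(Car car) {
            Cell cell = cells.computeIfAbsent(key(coord(car.x), coord(car.y)), k -> new Cell());
            cell.lock.writeLock().lock();
            try { cell.cars.add(car); } finally { cell.lock.writeLock().unlock(); }
        }

        public void moveCar(Car car, double newX, double newY) {
            long fromKey = key(coord(car.x), coord(car.y));
            long toKey = key(coord(newX), coord(newY));
            if (fromKey == toKey) {            // the common case: same cell, no locking needed
                car.x = newX; car.y = newY;
                return;
            }
            Cell from = cells.computeIfAbsent(fromKey, k -> new Cell());
            Cell to = cells.computeIfAbsent(toKey, k -> new Cell());
            // Take the two write locks in a fixed (key) order so concurrent moves can't deadlock.
            Cell first = fromKey < toKey ? from : to;
            Cell second = fromKey < toKey ? to : from;
            first.lock.writeLock().lock();
            second.lock.writeLock().lock();
            try {
                from.cars.remove(car);
                car.x = newX; car.y = newY;
                to.cars.add(car);
            } finally {
                second.lock.writeLock().unlock();
                first.lock.writeLock().unlock();
            }
        }

        // Search outward from the rider's cell, ring by ring, until something turns up.
        public List<Car> nearby(double x, double y, int maxRings) {
            long cx = coord(x), cy = coord(y);
            List<Car> found = new ArrayList<>();
            for (int ring = 0; ring <= maxRings && found.isEmpty(); ring++) {
                for (long dx = -ring; dx <= ring; dx++) {
                    for (long dy = -ring; dy <= ring; dy++) {
                        if (Math.max(Math.abs(dx), Math.abs(dy)) != ring) continue; // only the ring's border
                        Cell cell = cells.get(key(cx + dx, cy + dy));
                        if (cell == null) continue;
                        cell.lock.readLock().lock();
                        try { found.addAll(cell.cars); } finally { cell.lock.readLock().unlock(); }
                    }
                }
            }
            return found;
        }
    }

Most position updates hit the same-cell fast path, so write locks only come into play when a car crosses a 250m cell boundary.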


If you can have one server per city then I agree. Especially if those cities are small and you never travel between them. But places like LA totally blow that theory as you're going to have to have multiple servers or shards to cover everywhere.


Why LA: # of drivers, size of the city, or # of users? Quick googling says LA is 500 square miles, with Beijing being 6,500.

The only case where I see 1 server failing is # of requests. All their drivers in the world probably fit in a few GB of memory (if that). 99% of a car's movement requires no locking (they stay in the same block), so there's very little write locking...I dunno...give it a 24 core server (or more)...

Maybe the problem is Node and Python. I don't know the Python runtime well enough, but this kind of setup is a nightmare for Node. Sharing data across processes just isn't what it was meant to do (it would rather fork and keep a copy, but then your memory doubles per fork, and you have to keep the copies in sync). That's true for a lot of dynamic languages.


Just using the google answer for "how big is LA" doesn't really cut it as there are millions of people that live outside of LA proper but still in the region that most people would call "LA". If you go to Wikipedia for Greater Los Angeles it's 34,000 square miles. https://en.wikipedia.org/wiki/Greater_Los_Angeles_Area

sqrt(34000) is nearly 200 miles on a side. And that doesn't really even cover all the areas that one might Uber from or to. So you'll have a lot of people crossing shard boundaries.

Greater LA has some 18mm residents, and again that doesn't count some of the places very close but not really IN the area. Places you might drive to in 15 minutes from the edge, over a pass.


Thank you. I remember a year or so ago a similar article was written about Twitter and one of the HN armchair engineers argued that he could build Twitter on no more than 4 servers because it was so simple.


I dunno. I'd say they are dealing with a classically well-solved set of problems. It's fairly trivial stuff... although I'm sure they have made their own problems for themselves whilst approaching it.


Looking at Uber's APIs they're built to be sharded.

But sharding comes at the cost of complexity.

I work on an uber-like service and we use crude sharding between countries.

The problem is:

* sometimes your sharding is too coarse - e.g. two cities which have no cross traffic

* sometimes your sharding is too fine - e.g. cross-border traffic

But the real complexity is:

* managing configuration and logic differences between countries and cities

* hosting so many different sharded clusters, routing between them, and keeping them all updated.

And that's where all the work goes.
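
For illustration, here's roughly what the routing piece looks like in miniature; the cluster names, countries and the city-override scheme are all invented. The hard part in practice is everything around it: keeping these maps, plus the per-country and per-city configuration and logic differences, deployed and in sync across all the clusters.

    import java.util.HashMap;
    import java.util.Map;

    // Crude illustration of shard routing: country is the coarse shard, and a
    // few big cities get carved out into their own clusters when the country
    // shard becomes too coarse.
    public class ShardRouter {
        private final Map<String, String> countryCluster = new HashMap<>();
        private final Map<String, String> cityOverride = new HashMap<>();

        public ShardRouter() {
            countryCluster.put("NL", "eu-west-1");
            countryCluster.put("US", "us-east-1");
            // A city big enough to outgrow its country's cluster gets its own.
            cityOverride.put("US/Los Angeles", "us-west-la");
        }

        public String clusterFor(String country, String city) {
            String fine = cityOverride.get(country + "/" + city);
            if (fine != null) return fine;        // the "too coarse" fix: split out the city
            String coarse = countryCluster.get(country);
            if (coarse == null) throw new IllegalArgumentException("no cluster for " + country);
            return coarse;                        // cross-border trips still need stitching elsewhere
        }
    }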


One of Uber's architects talks about how they're handling that, over at InfoQ:

http://goo.gl/bAoFHi



