
In pseudocode I can. Or in Python: avg(list) :)

Anything more complex, I'm not sure. The last time I touched linked lists was in 2001. I never learned algorithms and in fact didn't even graduate from school. And guess what - everybody is okay with all of it, because this is not the value I am expected to deliver to the company.

edit: when people are interviewed for system architect positions, algo/coding interviews are not part of the process, because that's not the skill set the company is looking for.




For the record, with that interview, even saying “in python, I’d use the standard library avg(list), because it would be ridiculous to re-implement provided and tested ecosystem functionality” would have served him well in terms of indicating he wasn’t an utter fraud. That alone wouldn’t land the job, but it would be a strong positive.


Depends on the company. I was once (15 years ago) interviewing with a company and they essentially asked me to whiteboard some regex-like pattern-matching flow. I did it, and after completing it to their satisfaction I said that in the real world I wouldn't do it, as it would be a waste of time and I would just use lib* whatever. I didn't get the job :) A few years later, I discovered that a few guys on one of our teams were from this very company. What stood out about them was that they always had to reinvent any standard functionality/library, the peak of it being their own xml-rpc protocol that they came up with because xml-rpc wasn't good enough. I guess the ethos of this company, and of all the people they hired, was to reinvent wheels, bicycles, etc., and I didn't fit that ethos well.

That was also the only whiteboard coding interview of my life.


To be fair, xml-rpc truly is terrible. To start with, XML is extremely verbose, so you add a lot of network overhead, plus a lot of parsing overhead, which makes rpcs significantly more expensive than they should be. Use https://capnproto.org/ instead. (It's from the author of protobuf, which is an open-sourced version of what Google uses internally.) As far as I'm concerned, the only valid reason to use xml-rpc is that you're interfacing with someone else's system and they have chosen to use it.

Moving on, here is an excellent example of an important piece of architecture that almost everyone gets wrong. In any distributed system you should transparently support rpc calls carrying an optional tracing_id that causes them to be traced. That means you log the details of the call along with the tracing id, and make every rpc emitted from that one carry the same flag. You then have a small fraction of your incoming requests set that flag, and collect the logs afterwards in another system so that you can see, live, everything that happened in a traced rpc. To make this easy for the programmer, you build it into the rpc library so that programmers don't even have to think about it.

You then flag a small random fraction of rpcs at the source for tracing. This minimizes the overhead of the system. But now, when there is a production problem that affects only a small percentage of rpcs, you just look for a recent traced rpc that shows the issue, look at the trace, and problems 3 layers deep are instantly findable.
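
As a concrete illustration, here is a minimal sketch of that idea in Python. None of it is from a real library; the header name, the rpc() helper, and the 1% sample rate are my own assumptions, but it shows where the id gets created, where it gets propagated, and where the log line comes out.

  # Hypothetical names throughout; this is a sketch, not a real tracing library.
  import json
  import random
  import time
  import urllib.request

  TRACE_HEADER = "X-Tracing-Id"   # header that carries the trace id downstream
  SAMPLE_RATE = 0.01              # trace roughly 1% of requests entering the system

  def maybe_start_trace():
      """At the edge of the system, flag a small random fraction of requests."""
      if random.random() < SAMPLE_RATE:
          return "%016x" % random.getrandbits(64)
      return None

  def rpc(url, payload, tracing_id=None):
      """Send a JSON rpc; if this request is traced, log it and propagate the id."""
      headers = {"Content-Type": "application/json"}
      if tracing_id:
          headers[TRACE_HEADER] = tracing_id
      start = time.time()
      req = urllib.request.Request(url, json.dumps(payload).encode(), headers)
      try:
          resp = urllib.request.urlopen(req).read()
      finally:
          if tracing_id:
              # In a real system this goes to a log collector keyed by the id,
              # so every hop of a traced request can be stitched back together.
              print(json.dumps({"tracing_id": tracing_id, "url": url,
                                "ms": round((time.time() - start) * 1000, 2)}))
      return json.loads(resp)

  def handle(request_headers, backend_url):
      """Reuse the caller's id if present; otherwise we're the edge, so maybe start one."""
      tracing_id = request_headers.get(TRACE_HEADER) or maybe_start_trace()
      return rpc(backend_url, {"op": "avg", "args": [1, 2, 3]}, tracing_id)

The point is that handle() and rpc() do all the bookkeeping, so application code never touches the tracing id at all.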

Very few distributed systems do this. But those that do quickly discover that it is a critical piece of functionality. This is part of the secret sauce that lets Google debug production problems at scale. But basically nobody else has gotten the memo, and no standard library will do this.

Now I don't know why they reinvented xml-rpc for themselves. But if they built that specific feature into it, I am going to say that it wasn't a ridiculous thing to do. The reason becomes obvious the first time you try to debug an intermittent problem in your service that happens because, in some other service a few calls away, there is an issue that hits 1% of the time depending on some detail of the requests your service is making.


It happened 13 years ago and it was exactly the same but with different xml syntax :) It was way before microservices, etc., purely point to point.


> It was way before microservices, etc.

It was not way before microservices at Google. But it was before there was much general knowledge about them.

Sadly the internal lessons learned by Google have not seeped into the outside world. Here are examples.

1. What I just said about how to make requests traceable through the whole system without excess load.

2. Every service should keep statistics and expose them at a standard URL. Build a monitoring system that scrapes that operational data as one of its major inputs. (Rough sketch after this list.)

3. Your alerting system should support rules like, "Don't fire alert X if alert Y is already firing." That is, if you're failing to save data, don't wake up the SRE for every service that is failing because of the backend data problem. Send them an email for the morning, but don't page people who can't do anything useful anyway. (Also sketched after the list.)

4. Every service should be stood up in multiple data centers with transparent failover. At Google the rule was n+2/n+1, meaning that globally your service had to be in at least 2 more data centers than it needed for normal load, and in every region it had to be in at least one extra data center. The result is that if any one data center goes out, no service should be interrupted, and if any 2 data centers go out, the only externally visible consequence should be that requests might get slow.
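
Here is roughly what item 2 looks like in practice, as a Python sketch. The /statusz path and the counter names are made up, and a real service would export far more, but the shape is just: keep counters in memory, serve them as JSON at a well-known URL, and let the monitoring system scrape it.

  # Hypothetical sketch of a stats endpoint; the path and counter names are invented.
  import json
  import threading
  from http.server import BaseHTTPRequestHandler, HTTPServer

  STATS = {"requests": 0, "errors": 0}
  LOCK = threading.Lock()

  def bump(name, delta=1):
      """Call this from request-handling code to keep running counters."""
      with LOCK:
          STATS[name] = STATS.get(name, 0) + delta

  class StatsHandler(BaseHTTPRequestHandler):
      def do_GET(self):
          if self.path != "/statusz":
              self.send_error(404)
              return
          with LOCK:
              body = json.dumps(STATS).encode()
          self.send_response(200)
          self.send_header("Content-Type", "application/json")
          self.end_headers()
          self.wfile.write(body)

  if __name__ == "__main__":
      # The monitoring system scrapes http://host:8080/statusz on a schedule
      # and feeds the time series into dashboards and alerting.
      HTTPServer(("", 8080), StatsHandler).serve_forever()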
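
And a toy version of the item 3 rule, again with invented alert names, just to show that the dependency logic itself is tiny; the hard part is having the discipline to write the dependencies down.

  # If an alert's dependency is already firing, email instead of paging.
  DEPENDS_ON = {
      "frontend_errors": ["backend_save_failures"],  # saves failing => frontends fail too
  }

  def route(firing):
      """firing: set of alert names currently true. Returns (to_page, to_email)."""
      to_page, to_email = [], []
      for alert in sorted(firing):
          if any(dep in firing for dep in DEPENDS_ON.get(alert, [])):
              to_email.append(alert)   # the root cause is already paging someone
          else:
              to_page.append(alert)    # only page for the root cause
      return to_page, to_email

  print(route({"frontend_errors", "backend_save_failures"}))
  # -> (['backend_save_failures'], ['frontend_errors'])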

Now compare that to what people usually do with Docker and Kubernetes. I just have to shake my head. They're buzzword compliant, but they fail to do any of what they need to do to make a distributed system operationally tractable. And then people wonder why their "scalable system" regularly falls over and nobody can fix it.



