Hacker News

"As far as this outsider can tell, the systems and processes for writing code at Google must be among the best in the world, given both the scale of the company and how often people sing their praises."

Is this really the case? In high school I sure thought Google was a magical software heaven us mortals could but dream of working for. Now (and increasingly as of late) I'm strongly under the impression that 20 years ago they discovered unobtanium in the form of a very under-served search and ads market, plopped down camp on top of it, and turned it into a giant money firehose. To this day they are simply operating that ridiculously high-margin business while trying desperately to find another vein of unobtanium, because when you already have one, that's the only thing that even registers on your radar.

"Revenue from Google Network ads hit $7.5 billion (10.7%), and YouTube ad revenues registered at $6.7 billion (9.6%). In other words, at $54.5 billion, the total ad revenue from Google constituted 78.2% of the company's total revenue in the quarter." https://www.oberlo.com/statistics/how-does-google-make-money

78% between Google and YouTube. And given YouTube isn't nearly as high-margin as the search product (can you even imagine the cost of essentially storing the video history of the world?), I'm hesitant to call YouTube their second vein.
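Sanity-checking those quoted figures (the dollar amounts and percentages are all from the Oberlo link, not my own data) shows they imply a total quarterly revenue of about $70B:

```python
# Quoted quarterly figures from the Oberlo article above.
network_ads = 7.5e9   # stated as 10.7% of total revenue
youtube_ads = 6.7e9   # stated as 9.6% of total revenue
total_ads   = 54.5e9  # stated as 78.2% of total revenue

# Implied total quarterly revenue:
total_revenue = total_ads / 0.782
print(round(total_revenue / 1e9, 1))  # ~69.7 (billion USD)

# The individual percentages roughly check out against that total.
print(round(network_ads / total_revenue * 100, 1))  # ~10.8, close to the quoted 10.7
print(round(youtube_ads / total_revenue * 100, 1))  # ~9.6
```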

So yeah. Turns out when you have essentially a monopoly in a high-margin business, you can do basically anything and build an entire private software ecosystem without going bankrupt.

I interpret "how google does it" to simply mean "look at all this cool stuff you can do when your product is this high-margin".




The tooling is great in some ways, but possibly not in the way you'd expect.

There is a tool that will do almost anything, but the tools are often just a bit janky. Rather than thinking of the toolchain as some polished, tightly integrated, perfect system, think of it more like an internal open-source ecosystem: things that work together, but made by individuals with limited resources, and still featuring bugs.

This is not to downplay what we have; it's amazing in many ways. But most tools are exactly the same sort of thing you'd create at any other place; we just have more of them, covering almost every aspect of development. Much of the time I run into neat little systems and then discover they're a few Python scripts running under someone's personal quota. Sometimes these are 20% time projects, sometimes they're bits of tooling teams build for their own needs, and sometimes they're more concerted efforts.

There is a core, tightly integrated set of systems that work well and are more "productized", but most of those aren't that much more special than what you can buy at any other company. If you've used Datadog then you'd feel at home in much of the modern infra tooling. Spanner is great, but anyone can pay to use it. Our CDN is very good, but there are other CDNs and from a developer perspective there isn't a whole lot of difference.


I have yet to be impressed by internal software tools developed by one team and consumed by another. You're likely being friendly and understating the degree of jank which is present.


It's janky, and when you do choose to use a best-effort tool with no dedicated support, you absolutely have to plan for it being a pain point and the likelihood of eventual migration.

That's true at Google, just as it's true anywhere. But what makes Google a much better experience than most places is a willingness (and ability) to pay for internal dev tooling of every sort with dedicated headcount. And that results in a consistently better experience than pretty much anywhere (I understand Meta has a comparable culture and ability to fund it, and I wouldn't be surprised if some or many of their internal tools surpass Google's). It's also unfortunately not replicable as a strategy, because you need a giant money spigot to make it work.


Is this at Google, or elsewhere? If it's at Google I'd be happy to try to persuade you via chat :)

If elsewhere, I think there are two kinds of tool. The first kind is developed by individuals or small groups in informal ways; some of these are bad, but much like with open-source tools, the good ones survive and become popular. The second kind has more formal backing (dedicated dev/design/UX/PM resources), and those teams are good at surveying users to figure out the right problem to solve, so the tools are normally pretty good. And when they're not, they still accept fixes (I regularly contribute fixes to other teams' codebases when I find issues).


> If it's at Google I'd be happy to try to persuade you via chat

I would be curious how you view projects that are staffed entirely by 20% time.


Feel free to reach out internally! But in general, it seems to work OK for many popular tools. I haven't attempted to do it myself; maybe it sucks as the maintainer?


>most tools are exactly the same sort of thing that you'd create in any other place, we just have more of them

Why? Why reinvent the wheel 10 times?


Google has 180,000 employees. There's an org that is responsible for internal tooling, but they are an org with priorities and challenges like any other. So sometimes Search needs a thing and Core won't do it so some people in Search build a tool. Maybe later somebody in Ads needs a similar tool but has never heard of the thing that Search built. So there is an organic force to this sort of complexity.

There are systems to resist this and people whose whole job is to focus on unifying systems, but they aren't perfect. In general, I'd say that Google is way way way more uniform than most other companies because of the monorepo, centralized tooling, and (mostly) uniform development process across the company.


Should be easy nowadays to have a chat-based AI which, given some provided requirements, suggests similar tools that have been built by other SBUs. Even if only to look at the code.


Maybe.

But this only solves a visibility issue (and codesearch/moma is already pretty good at this). It doesn't help you when the tool supports a different language or the team that owns it isn't willing to commit to the SLO that you need or whatever.


Most engineers create tools, Google is not special in that way.

But even when there's a similar open-source tool, it may be difficult to integrate with internal systems. There's a lot of benefit to a consistent set of technologies (data formats, storage systems, etc) so that's often a reason to write internally.

There is a lot of open source software in use at Google so I wouldn't say there's a lot of reinventing the wheel. And when there is an internal clone, the requirements are often necessarily different enough to warrant writing an internal version.


I did not interpret their statement as meaning they have many variants of the same tool, but that they have tools that cover a wider variety of situations.

I found the same thing moving from a very large company to a smallish company. Both had good tooling, the latter just had significantly less and there were gaps in coverage. Whenever I encountered one of those areas I often found myself wishing I had access to some niche tool I had gotten used to at my prior company.


In many cases there was no wheel available at the time


> I'm strongly under the impression that 20 years ago they discovered unobtanium in the form of a very under-served search and ads market, they plopped down camp on top of it and turned it into a giant money firehose

In 2010 when I interned there, one of my mentors said something to me that stuck with me: "Google found a hose that money pours out of, and its name is 'online advertising'. All we do is optimize that hose, and search for another one."


Yes, and just like all tech companies, Google steals: from smaller companies down to the dreamers they inspired to invent and create.

Sonos: https://www.theverge.com/2023/5/26/23739273/google-sonos-sma...

An MIT student: https://news.ycombinator.com/item?id=18566929

I met with them in 2013 (they were horribly unprofessional), around the same time Sonos did, to discuss my audio syncing tech. Better to steal tech than bother investing in R&D; it's the quickest way to make more money: steal, spend less, and make more.


I've been told by dang that I shouldn't talk about this on Hacker News anymore.

I haven't mentioned my experience in years because it's old news, but Sonos's court victory is new, relevant news for my experience meeting Google R&D that same year.

Though what I can say for sure is that they were extremely and laughably unprofessional to me, while the other tech companies I met with at the time were very professional and polite.


I can't think of a single C++ library coming out of Google that I've had to deal with where I didn't have massive problems integrating it into my own C++ projects, because of all the Google specialties baked into those projects (their own weird build tools, which don't seem to have changed much since the early 2000s, and the use of other Google dependencies like Abseil).

Maybe it works well inside Google, but definitely not in the real world.


Off the top of my head I used FlatBuffers in quite a few projects many moons ago and that was pretty seamless.

Arguably it was a bit thin on documentation but it just worked.


Google search was 1998. Google Maps was 2005. Gmail was 2004. Android was 2008. Youtube was 2005. Chrome was 2008. Docs was 2006. Translate was 2006.

Yet in the last decade, they really haven't had many successes (perhaps with the exception of Google Photos - 2015)

One would imagine that with nearly 200,000 employees at least one of them would have a good enough idea for a new product people like. But management and culture, with 'more wood behind fewer arrows', is no longer conducive to the good ideas getting launched.


This is a common meme, but it depends how you define "product". There are companies with millions in VC backing that would just be a feature on a Google (or other big tech company) product as most people define them.

I would argue that taking a product from 100m to 1bn users is a whole new level of success, and that has been done multiple times in the last decade.


Why should scaling matter? I thought Google tried to solve problems regardless of scale? Or maybe you're commending Google for its ability to more effectively commodify datacenter labour in the past decade?


I don't think we can expect businesses to solve problems without any business upside. I don't hold that against any company. That means needing some amount of scale in order to justify the work and opportunity cost.


'Free' ad-supported products usually have engineering costs greater than compute costs, even up to billions of users.

Compute scales with the userbase. Engineering scales with the product featureset. That means a small userbase in general cannot support much engineering and therefore cannot have much complexity/features if it wants to be profitable.

That in turn puts big companies at a massive advantage. And in fact we see that - the vast majority of my time using the web is spent on big products (google, youtube, reddit, twitter), and only a small fraction on small sites (perhaps HN being the one exception here). Those big products won the battle for my attention.


You have the wrong impression of Google Translate. Translate started in 2006 and it sucked: 60% accuracy at best, janky incoherent sentences. Then in 2016 Translate revolutionized the entire research field of language translation. Seriously. It went to 94% accuracy when Jeff Dean joined the project and spurred them to train against all languages at once. Translate is now better than most high school students after several years of study. I consider Translate one of the top five accomplishments Google has ever had:

https://translatepress.com/is-google-translate-correct/

Another breakthrough is picture recognition. Did you know you can type in "motorcycles" and do an image search, and there is ZERO text associated with many of those pictures of motorcycles? They really are mapreducing those pictures, running image recognition against a 17M-image open-source library to assess the "motorcycle-ness" of all those pictures! They started talking about picture recognition in the early 2000s, but it really began happening in the late 2010s after they hired Fei-Fei Li, the mother of image recognition ...

https://profiles.stanford.edu/fei-fei-li


Translate did not suck in 2006, it was a big leap over the competitors at the time. It was really the first time a company had launched productized statistical machine translation at all. Before that there was the Altavista Babelfish which was rules based.

Going neural definitely made a big difference but Translate was called "statml" by the infrastructure internally for years, maybe still is, because just using statistics and having training at all was a big deal back then.


If you compare the Android from 2008 and the Android from 2023 then you'd soon agree that within these 15 years, the whole thing was re-invented multiple times.

And I'm not talking about the UX, which has already changed substantially; I mean the underpinnings.


Sure. But as far as the user is concerned, it is much the same: touchscreen apps, app store, camera, wifi and phone abilities, app switcher and back button. The user doesn't care that the wheel has been reinvented 3 times along the way... There haven't been any revolutionary new features in Android since shortly after launch, really.

There have been revolutions in apps. For example, pre 2017 I couldn't just tap a button and get an Uber. Or tap a button and have food delivered. Or do a video call with all my friends.

But the OS can't really claim credit for most of that innovation.


> There haven't been any revolutionary new features in Android since shortly after launch, really.

Not visibly. But the tech stack has changed substantially.

I'd find it quite an achievement if the UX has largely stayed the same while things under the hood got modernized over and over again and went with the times, subtly bringing innovations to the UX as well without people realizing. It's a feature that things don't look substantially different every two years.


Android Auto was 2015.


Hasn't really seen widespread acceptance. I bet in a sample of 100 cars on the freeway in the USA, perhaps only 25% would have CarPlay or Android Auto connected, and outside the USA adoption is far lower.


Are you sure?

https://appleinsider.com/articles/23/05/23/carplay-android-a...

> A report from Straits Research found that 98% of newly produced vehicles were compatible with either CarPlay or Android Auto. Meanwhile, 80% of prospective car buyers strongly preferred having these smartphone-based infotainment systems in their new vehicles.

The same research also shows Asia-Pacific to be the biggest market for these products, though North America is the fastest-growing.


"connected" vs "compatible with"

Tbf, 25% connected is huge. If you just jump into your car to quickly do groceries or pick up your kids, you may not be interested in connecting your phone to your car, even if you really like that feature. So there could be a natural ceiling for the "connected" number, and 25% actually feels close to that ceiling.


My older car requires a hardwire connection for Android Auto, but my newer car will automatically connect over wifi so I no longer need to take my phone out of my pocket unless I want to charge.


If 80% of buyers express a strong preference for having Carplay/Android Auto I don't think it's reasonable to say that they haven't seen "widespread acceptance."


I read that as "strong preference for having CarPlay/Android Auto over the car manufacturer's own UI".

And everyone is just frustrated with laggy UIs in cars. But in reality they'll probably still just use Waze with a 10-buck phone holder suction-cupped onto the windscreen.


Why on earth would you do that instead of using Waze on the larger, built-in screen? Especially if you are a buyer who "strongly prefers" having that in the first place? Nobody is using that kind of holder on a car with Android Auto or Carplay.


Security-conscious people like to keep the complexity of the things involved in their day-to-day activities low. Having your car mingle with your phone is kinda the opposite of this.


I don't think this describes any significant portion of users.


Maybe, maybe not. But it describes an answer to your "why on earth" question.

People may have different preferences from yours.


The proposition in question is whether Android Auto/CarPlay is “widely accepted,” so implicitly what I’m being asked to believe is that there is a large portion of car buyers who strongly prefer a car that supports these features but then don’t actually want to use them because of security concerns. I think the idea is absurd on its face.


Not sure where you're being asked to believe that beyond a single thread contribution. I've argued your point earlier in this thread and agree that they are "widely accepted", so you may be barking up the wrong tree.

Nevertheless, some people prefer not to accept them while not being entirely anti-tech either. You may not be agreeing with choices other people make, but it goes a little far to call them "absurd on its face", don't you think?


This sounds like a result of how often cars are replaced. What if you stopped a sample of 100 cars released in the last five years? That proportion would be far higher. Pretty much every car review I watch mentions support for either or both.


Uber was founded in 2009 and was pretty broadly available by 2014. Grubhub was founded in 2004; I was ordering from there all through college (2007-2011). Google actually put out Hangouts in 2013, and Skype could do group calls in 2010.

Not trying to downplay what happened after this: it was incremental change that made the products scale and be more usable to more people. I would say the things you describe as revolutions were actually evolutions of earlier products.


> Google search was 1998. Google Maps was 2005. Gmail was 2004. Android was 2008. Youtube was 2005. Chrome was 2008. Docs was 2006.

> Yet in the last decade, they really haven't had many successes

I don't think this kind of analysis is really giving a good answer. I mean, by these standards, look what a disaster of a dinosaur legacy vendor Apple Computer is! Haven't had a successful product launch since the iPad 13 years ago! (Perhaps with the exception of Apple Watch - 2015).


> perhaps with the exception of Google Photos

Which is somewhat offset by Google dropping Picasa.


Also Maps, Android and YouTube were started by other companies which were acquired.


K8s, TPU, Colab, TensorFlow... hmmm.


Oh yes - Google still has great technology - but those aren't consumer products.


Apple has undergone a similar ossification.

I think it's just something that happens to companies that get a certain size. The Process becomes more important than the Product.

When I was younger, I wanted so badly to work at Apple.

A few years ago, they actually approached me, and I found out that the culture seems to have drastically changed.

They ended up not wanting me anyway, so maybe it's just sour grapes, on my part.


Last I talked to people about Apple's internal tooling, it was still pretty fragmented between different organizations.

I was told about e.g. multiple internal CI/build systems because one org didn't want to rely on the other, so they both built their own little software stacks.

That's a bit different than what I've heard from meta/google/amazon who seem to be more open to running centralized internal services


Amazon famously mandated such internal services at one point:

https://gist.github.com/chitchcock/1281611

This was rather a long time ago now so I wouldn't be surprised if things have changed since then, though having never worked at Amazon I wouldn't know one way or the other.


The main thing that changed is that now the service approach also generally enforces a particular framework (Smithy/Coral)


Has it really? The transition to Apple Silicon was pretty impressive, and something that Apple could have simply decided not to do at all while still remaining hugely profitable. I don't know anything about the company's internal culture, but it's not exactly resting on its laurels just yet.


The hardware people certainly aren't: I think they're reveling in their newfound freedom to make the best, not just the thinnest, computers.

But the software side? There seems to be a massive variance in both ability, and giving-a-shit across the software orgs, and no leadership filter on the output of these disparate groups.

Perhaps they have the opposite of the resting-on-laurels problem: a smattering of very senior people who are talented and care about the product/users, and a wide base of careerists who aren't very good and just want the money and the CV, users be damned.


The Apple Silicon transition also involves software (Rosetta), which works absurdly well given the complexity of what it's doing.


Exactly my point: the same company wrote Rosetta and the new Settings. Clearly not the same level of talent and commitment in those two teams.


I think it would be difficult to imagine a company the size of Apple not having variance in talent and commitment between different teams.


When you have the golden goose, you don't actually know how it makes the gold. You think you know, and you aren't completely clueless, but you don't really know, because it's complicated.

The obvious thing to do then is to not change too many things around it. Or at least document the changes.

So then you get a bunch of processes that are designed to not rock the boat too much, in case the magic dies.


> I found out that the culture seems to have drastically changed.

Can you explain in detail?


I'm not going to go into detail, as I don't think that it's helpful.

But I don't think that someone like me would be very welcome there.

That is both good and bad. I am certainly not God's Gift to Programming, so they may well be better off without folks like me. They certainly seem to be making a lot of green.


> I'm not going to go into detail, as I don't think that it's helpful.

Wouldn't it be helpful to other people who are interested in working there?


I don’t think so.

This is a professional venue, and I don’t feel that slagging companies is a particularly professional thing to do. I like Apple, and wish them the best. I have no interest in throwing shade on them.

When I do say less-than-positive things, I find it best to stay vague, so people can easily write off what I say. I think that folks here are perfectly capable of making their own decisions, and that I'm best off letting them do so, unencumbered by my opinions.


Steve isn't there, for one.


Everyone knows that Steve Jobs died. That's not an answer to my question.


> The Process becomes more important than the Product.

I don't know if the process is more important, but at a certain size codified process becomes a requirement.


> Apple has undergone a similar ossification.

> I think it's just something that happens to companies that get a certain size. The Process becomes more important than the Product.

Why change it if it's working well? Keep doing what you are good at, keep being the best in the world/market at this, and the customers will reward you.

I seriously don't understand the common theme here in these threads saying that Google is basically stagnating. Why do they need to churn out a major new product every other year to stay relevant? They continuously re-vamp their entire stack, their data centers, their networks. Nothing ever stays as it is. As a user I actually find that things change too much. (But I'm not what counts, it's ad revenue that counts..)


I don't work there, but I'm curious: can you share what you saw?


Google software engineering may not be perfect, but that's probably because we all have high standards. I've worked at both FAANG and non-FAANG companies, and while the sample size is small and not representative of ALL non-FAANG companies, I will say that in my experience at FAANG companies the engineers do get to dream big and come up with ingenious engineering solutions that make engineering work at non-FAANG look like college senior undergrad projects. I think we should celebrate what Google has done for software engineering instead of nitpicking every little detail that they got wrong.


> I will say that my experience at FAANG companies is the engineers there do get to dream big and come up with ingenious engineering solutions that make engineering work at non-FAANG look like college senior undergrad projects

My own experience at FAANG and non-FAANG is that __some engineers__ at FAANG get to dream big and demonstrate ingenuity, whereas at non-FAANG (especially startups I've been at) everyone can do this, though not all take the opportunity.

At my last FAANG where I was a manager, my opinion is that performance management and the focus on bullet points, metrics, and peer feedback for review time is a huge limiter for everyone else. I spent a lot of time creating cover for people on my team to actually try something new and innovative. More senior management constantly wanted to know why person X on my team was working on a new thing instead of trying to move some incremental metric somewhere else. Based on my experience during performance reviews for senior ICs, risk taking wasn't particularly encouraged at L6 (or even L7) and below as the cost of failure was substantial.


I used to work there and feel about the same. Google’s internal tech is really good where it needs to be, and barely good enough where it doesn’t. Borg, Blaze, d, google3, codesearch, etc. are all very good. Almost everything else falls by the wayside.

Generally, programmers at Google are better than the programmers I've worked with outside the company, but not by leaps and bounds. And in my opinion it matters little how good a programmer you are when your pace of development is so much slower.


You're not wrong. Google gets to sidestep a lot of problems other lower-margin companies have because they've spent their whole history being near market-leader on search and ads. Things like "deadlines" were foreign to the company for a long time; when you're already the pack leader, you release new features when you feel like it, not to play catch-up with competitors. You can see them struggle in spaces where that's not true (Cloud, for example).

There is one underlying technology special sauce they refined to an art form, which is building a reliable system on incredibly unreliable components. This comes from their inception when the two guys building out the company knew how to write code but not build custom machines; their initial attempts at racks (hard drives and motherboards mounted to flexible plywood) are hilarious-looking by modern standards and broke down all the time, so they had to learn how to write software that was fault-tolerant at every link in the chain. That resulted in building an infrastructure that was extremely supportive of experimentation (if the system can survive half of it disappearing, it can survive a software bug crashing half the machines), and the rest of the shape of the company kind of flows from that.

It is worth repeating a lot that most companies do not operate at the scale or constraints Google does, and their approach to engineering is emphatically not one-size-fits-all.


I think people forget that the standard back then in the industry was very high margin servers with redundant everything (psu, network/nic, disks, etc).

Going to (eventually) mostly dependable simple servers with no redundancy was a big leap.


There were a couple of places in Google's internal infrastructure where they had picked up traditional "big iron" architectures (mostly via acquisitions), and it was funny to observe the sharp divide between their regular best practices and those systems (when they hadn't yet been replaced).

One high-ticket item relied on a huge Oracle database. Every other aspect of that product was patch-on-the-fly, silent-release new versions, forward-and-backwards compatibility of software... But the whole product had regular scheduled outages on the weekends because the Oracle part could not be patched on the fly, had a rigid schema baked into its relational architecture, and had to be brought offline for updates and migrations.

It was like everyone was zipping around in racecars and then they all had to pause for Grandpa to cross the street.


I don't understand your argument - why is the one trick pony revenue generation incompatible with high quality systems and processes for writing code?

Both can be true at the same time, and my feeling is that they are, having worked there. The business is not very diversified, and the tools are great.


> I don't understand your argument - why does the lack of a diversified business imply a lack of quality in systems and processes for writing code?

That wasn't the argument. The argument was simply that having a profitable monopoly doesn't imply the presence of quality.


How did they achieve monopoly if it weren't for the quality?


The quality of what exactly?

Google's massive internal processes and systems are the result of having already been huge for many years. Google needed the time and resources to build them up. They're not the cause of Google's becoming huge in the first place.

Of course the quality of Google's external search results helped Google achieve a monopoly. Ironically, many people say that Google's search results have been getting worse, and I'd have to agree with that sentiment.

See also the list of mergers and acquisitions by Alphabet: https://en.wikipedia.org/wiki/List_of_mergers_and_acquisitio...


I think you hit on a key part of the debate in this thread. There clearly is some sort of functional quality floor, in the sense that poor enough quality means something doesn't actually work to do its job. But beyond that, what is "quality"? Some would consider it to mean elegance of algorithmic and architectural design, or readability of code, or some other more abstract measure. Some consider it suitability to purpose, with low-bug count.

I'll give one example I observed commonly early in my career: the unexplained memory leak. Some process is running and its memory usage continues to grow. Eventually it will use the entire memory of its machine and die. You have a few options: 1) debug the issue and address the root cause, 2) debug the issue and work around it in some way, 3) give up and rewrite the code using some other kind of tooling, 4) wake people up when the process dies and have them restart it, or 5) write a cron job that restarts the process periodically.

What is the right answer? The best from a QA perspective is probably to identify the root cause and fix the underlying issue. Tools like Valgrind have made this much easier in recent years, but it can still be a challenge. Pragmatically, my own answer (speaking generally; there are different cases for different contexts) would be to time-box an investigation and fix, and if that wasn't achievable in reasonable time, just write the cron job and move on to the next problem. You can imagine very successful operations filled with kludges like that. Is that low quality?
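Option 5 is less exotic than it sounds. A minimal sketch of such a watchdog in Python (assuming Linux's /proc interface; the command, memory limit, and polling interval are all made-up illustrations, not anyone's actual tooling):

```python
import subprocess
import time

LIMIT_BYTES = 2 * 1024 ** 3  # illustrative 2 GiB ceiling


def parse_vmrss(status_text: str) -> int:
    """Extract resident set size in bytes from /proc/<pid>/status text."""
    for line in status_text.splitlines():
        if line.startswith("VmRSS:"):
            return int(line.split()[1]) * 1024  # the VmRSS field is in kB
    return 0


def rss_bytes(pid: int) -> int:
    with open(f"/proc/{pid}/status") as f:
        return parse_vmrss(f.read())


def supervise(cmd: list[str], poll_seconds: float = 60.0) -> None:
    """Restart cmd whenever it dies or its memory crosses the limit."""
    proc = subprocess.Popen(cmd)
    while True:
        time.sleep(poll_seconds)
        if proc.poll() is not None or rss_bytes(proc.pid) > LIMIT_BYTES:
            proc.kill()
            proc.wait()
            proc = subprocess.Popen(cmd)  # the kludge: just restart it
```

In practice you'd more likely reach for an actual cron entry, or systemd's `MemoryMax=` plus `Restart=always`, but the kludge is the same shape either way.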


It's not incompatible, but making money hand over fist can hide many problems.


If Google's code writing process really was superior, you'd expect them to consistently produce killer products in other fields.

But I can't think of anything like that in the last 15 years.


I don't think this is true.

It means that you wouldn't expect them to have products that fail because of unresolvable tech problems. You can see plenty of cases like Stadia where the tech was solid but the product strategy and leadership follow through was garbage.


Unfortunately good code does not mean a good product, and Google is anything but a good product company.


If anything, these are orthogonal.

I had a front-row seat to some really revolutionary ideas in Google getting to the prototype stage before being squashed in the gears of "We're chasing a market and that's not how users see this product working." Stuff where, if it caught on, it'd be a paradigm shift... But it turns out users don't want every paradigm shift that comes down the lane.

Because Google has (traditionally; this has changed in recent years) a real push-pull in authority between management and engineering leadership, the company can't commit fully to building a quality implementation of the status quo. Nor can it commit fully to chasing entirely new ways of doing things that could shake up an established market. In general, this... Actually kind of works out fine for them, more fine than critics often realize, because neither of those answers is always correct. Sometimes you get Gmail. Sometimes you get Google Drive. And sometimes you get iGoogle or Wave. And sometimes you get the stuff in between, like Reader or App Engine (really popular among the users, but the users don't have the money to make it profitable to commit to it).


> If Google's code writing process really was superior, you'd expect them to consistently produce killer products in other fields.

This is a very simplistic notion of what enables the creation of killer products. That has much more to do with understanding users' needs and identifying market opportunities. Good code writing processes are about code maintenance and scaling engineering effort, not dreaming up the next killer app.


Well, you're of course right about that.

I'll retreat to saying that claims about the superiority of Google's code writing process remain unsubstantiated.

During my 3 years at Google, I observed little brilliance. Not that my little corner of the giant org is a statistically valid sample.

I did see a fair amount of "Google is The Best!" sentiments and rejection of anything invented outside the company.


I was there for well over a decade, and my read on it is that the tools are great for building solutions for all sorts of problems, including web applications and big data analysis systems. The biggest issues with the ability to launch a killer app are IMO:

- The risk aversion to anything that threatens a big existing successful product.

- The product/feature approval processes that implement the above.

- The concern about launching something experimental or half-baked under the brand (vs a startup which does that by default).

Personally, I saw a lot of product and engineering creativity, but it was often stifled or watered down by the above.


Don't mistake publicly visible products with internal products. There is a lot of amazing infrastructure internally that has no public presence at all.


I’d be interested in the perspective of a high schooler/fresh college grad on this too. I was in college when Google appeared on the scene, and my impression was very much that it was an engineering paradise (20% time had a lot to do with that!). In the many years since, most of the interactions I’ve had with Google have changed that perception a ton. But I’m not sure what it looks like to someone just starting out.


I used to work there. The tooling is pretty good.


I’ve not worked there, but I’ve worked with many ex-Googlers. They tend to think Google * is the best in the world, that they created it before anyone else and everyone else with similar capabilities copied them, and that Googlers are smarter. However, every company of their size and technical complexity has built almost exactly the same tooling and processes, but doesn’t carry the same attitude around. At a certain point I’ve learned to smile and nod as they explain and take credit for things I’ve developed myself at other FAANG and adjacent highly technical megacorps. The final thing of note is they seem to be unable to replicate their magic outside of Google, and blame the org they’re now in for being deficient in some way: “At Google we had X and didn’t have to worry about Y; without Y we can’t do X,” where Y is some nebulous qualitative feature of Google culture. After I finish smiling and nodding I just go build X.

N.b., I’m not saying nosefrog is this person, or even everyone at google or ex-google is like this. This was just my experience working with senior principal or other very senior engineering staff.


Great comment. Google and others blazed a trail, but by 2010 you could find similar, often specialized tooling and practices emerging at most Fortune 500 tech companies.


Today you can almost universally get a better off-the-shelf solution with a SaaS product than what Googlers are able to access internally.


I think this is broadly correct, but I do think their cloud services business holds some promise. Although a distant third at best behind AWS/Azure (and maybe fourth behind OCI), I have seen some pretty large shops with a ton of volume move all of their compute to GCP. It does seem like the immense infrastructure Google had to build to power this firehose of money has the skeleton of a great product to power the cloud workloads of many other businesses, in ways that can compete with AWS on their home turf. But this is conjecture from my end; I haven't used GCP in the heat of production.


Probably all of the above is true.

From an outsider's perspective, it looks like all of FAANG has scaled to the point where delivery, perf, and managing the business have become quite cumbersome. This, I suspect, is why they've all used the economic climate to cut back, as they largely are unable to see the benefit of the additional hiring.

I have no doubt most of the talent is top tier, but being a strong engineer doesn't deliver profits or cost savings on its own.


Hiring is always profitable as it lowers the average cost of your engineers. Layoffs are a way to accelerate that cost averaging.


It's possible that such techniques are unreasonable in low margin companies, and it could also be said that perhaps these engineering techniques have been a factor in allowing Google to keep its margins high as it scaled up by many orders of magnitude. Google in 2003 was making less than $1bn/year in revenue... 200x growth in 20 years
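For scale, the 200x-in-20-years figure above works out to a compound annual growth rate of roughly 30%, which a quick back-of-the-envelope check confirms:

```python
# Back-of-the-envelope check on the 200x-in-20-years revenue figure.
cagr = 200 ** (1 / 20) - 1
print(f"{cagr:.1%}")  # roughly 30.3% per year, compounded
```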



