Yes, it shouldn't be surprising that a website started by people like Paul Graham and Sam Altman would be full of hyped-up VC nonsense. Their entire business model is to hype up these ideas and the startups working on them, then cash out!
I recently left a job at a very different large company after a similar timeframe (a little under ten years). Pretty much everything this author describes matches my experience.
There is nothing all that special about Google. Maybe there was twenty years ago, but that ship has long since sailed. It’s just another large US tech company. Like Microsoft and IBM before it.
For a long time Google had cachet as the most engineering-friendly big tech firm (which was mostly deserved) and also the place with the highest level of talent (which is more team-dependent but also somewhat deserved). You might end up working on ads or some other inane thing, but at least your engineer coworkers would be really good. They're still riding that wave to some degree because they haven't scared away all their top talent yet.
> It’s just another large US tech company. Like Microsoft and IBM before it.
This is just a hyperbolic statement that should not be taken seriously at all.
Look, Google isn't the fantasy land some people once made it out to be, and it isn't unique in terms of pay or talent, but it is certainly in the top echelon.
I did an interview loop for a high-level IC role at both Azure and GCP simultaneously, and the difference in talent level (as well as pay) was astounding.
IBM has never been a company where engineers could rise to the same ranks as directors and higher on a solid IC track.
Is Google special compared to Apple/Netflix/Meta? No. Is it special compared to Microsoft, IBM, and any non-FAANG company that isn't a top decacorn? Yes.
Microsoft and IBM used to have similarly talented teams. IBM ran research centers full of the world's top Ph.D.s; the innovation that happened at those places easily rivals Google's.
It's a similar trajectory, is what people are saying. When Google was small and everyone wanted to work there, they could take their pick of the top talent. When you run a huge company, you inevitably end up with something around the average. That is, all those huge companies that pay similar wages and do similar work basically end up having similar talent, give or take, and within that there are likely some great people and some less-than-great people.
Yes! It's sad how ignorant of IBM and US technology industry history some of these comments are. Then again, I suppose every generation does a lot of its own "this time we're different" myth-making. Not everyone has the wisdom to see the broader context.
Indeed. I think it's because the younger generation couldn't physically have experienced it, while for the older generations it's complicated to get into a disruptive startup.
Obviously people could read about the past, but sometimes that's asking too much; they are busy creating "the future".
>I personally know people who moved up the ranks there to director and above,
I didn't mean that engineers can't become directors; I meant that IBM didn't have a track for top ICs to get paid more than directors while still not being on a manager track.
> ...both Azure and GCP simultaneously, and the difference in talent level (as well as pay) was astounding.
This is maybe the third time I've heard this mentioned here on HN, so now I'm curious: What specific kinds of differences?
I imagine there might be a certain kind of prejudice against Microsoft and its employees, especially for "using Windows" or whatever, which I've often found unfairly colors the opinions of people from Silicon Valley who are used to Linux.
If you don't mind sharing, what specific differences did you notice that gave you a bad impression of the Microsoft team and such a good impression of the Google team?
Overall talent level. Almost everyone I've interviewed with at Google impressed me, and also came across as thoughtful and kind.
I did interviews with many teams at Microsoft (9 technical interviews total), and the only person who impressed me is now at OpenAI.
Every single interview question I got at Microsoft was straight out of intro to CS / classic LeetCode.
They would straight up ask "find all anagrams", "string distance", "LCA of a tree".
Google instead disguises many classic CS questions, so it takes a lot more thinking. Microsoft seemed to just verify that you can quickly regurgitate classic algorithms in code.
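To make "straight out of intro to CS" concrete: one classic flavor of the anagram question is grouping words that are anagrams of each other, and the textbook answer is to key each word by its sorted letters. A rough sketch in Perl (to match the other threads here; the word list is made up):

    use strict;
    use warnings;

    # Classic "group anagrams": two words are anagrams iff their sorted
    # letters are identical, so the sorted letters work as a hash key.
    my @words = qw(listen silent enlist stone notes google);
    my %groups;
    for my $w (@words) {
        my $key = join '', sort split //, $w;   # canonical form
        push @{ $groups{$key} }, $w;
    }
    print "@$_\n" for values %groups;
    # prints each group on a line, e.g.: listen silent enlist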
I'm sure there are some great teams at Microsoft, but because each division/org is much more siloed, I think it's more likely a team has a lower overall bar.
Google makes everyone pass through a hiring committee, and you're interviewed by people who have nothing to do with the team you might end up on. Meta is similar. Amazon has the team interview you, but they also have bar raisers who come from other teams.
Microsoft seems to be the outlier here, in that someone can get onto a team after interviewing only with people on that team.
There's a bit of a contradiction in the article. The main objection is the author's feeling of uneasiness in open spaces: "liminal spaces" created by, for example, large parking lots. The author then complains that these wide open spaces are not "walkable". What? They are certainly walkable by their very design! What they are not, and this seems to be the real objection, is cozy spaces lined with taquerias and coffee shops.
Walkable in the sense that I can meet most of my needs via walking.
I live 2 blocks away from a grocery store, for example. There is a 24 hour pharmacy roughly the same distance, and a couple coffee shops + a gas station + McDonalds not too much further away.
There is an expansive set of tennis courts and a beach volleyball area within walking distance, and next to it is a great park & playground. A bit further in the other direction is an elementary school with playgrounds, and beyond that, at the edge of what I'd consider walkable, is a splash park.
Get on a bike and the offerings double.
Meanwhile, my parents are 20 minutes away from anything outside of a single gas station. Plenty of nice houses, and at least one school and fire dept., but they basically have to drive into town -- even though they're surrounded by houses -- just to snag a simple coffee or quick grocery store run.
The article states this person is walking in a commercial area. That is what is creating the liminal space. She's not strolling through the woods; she's basically just bothered that her neighborhood isn't gentrified enough yet.
This article predates the resignation of SawyerX (due to a lot of the abuse and misery heaped on him), who was the Perl release manager (aka "Pumpking") that was in charge of Perl 7.
The short version of it was that there was a bit of a power play by people who felt ignored and wanted a bigger part of what was felt to be an important development.
I'd say you're being a bit overly cynical. There's plenty of good news in Perl too. Specifically, since you mentioned smartmatch, that's been pretty well fixed as of a couple of weeks ago with Switch::Right: https://metacpan.org/dist/Switch-Right
Switch modules have been around for years. There's Moo and Try::Tiny too. Some of this stuff should really be part of the language by now. Same with exporting functions.
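For those who haven't written Perl, here's what the standard exporting boilerplate looks like; a minimal sketch, with the module and function names made up for illustration:

    package My::Utils;
    use strict;
    use warnings;
    use Exporter 'import';         # borrow Exporter's import() method
    our @EXPORT_OK = qw(double);   # callers must opt in per symbol

    sub double { return $_[0] * 2 }

    1;

    # Caller:
    #   use My::Utils qw(double);
    #   print double(21);   # 42

That something this basic rides on a module, even a core one, is exactly the kind of thing that feels like it should be first-class syntax by now.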
Sure. I've been using the language for more than 10 years, but this is dumb: a different module for every trivial feature that should be a language feature instead. Smartmatch is a perfect example. Smooths over nothing. I'll be off using Ruby, thank you.
You manage feature differences one way or another. If you like choosing rbenv vs rvm vs asdf and then using them to manage your Ruby versions and gem dependencies rather than having in-band switches in a single interpreter, great, you're welcome to it. I could even see someone making a case that it fits neatly within an org where systems/devops folks take more of the environments/dependencies division of labor.
If what you really like, though, is the charge you get out of just saying "this is dumb" while indulging the privilege of not even noticing that you did a repeat performance of unsupported shoulds vs. worthwhile tradeoffs, well, maybe you should examine that.
I use the system Ruby and don't have to worry too much about rbenv and rvm. 2.7 and even 3.0 are well supported. That's what I also did with Perl, except when I used macOS, which was a pain because of modules that used C libraries, like LibXML. On Linux we can also use containers without worrying about a speed penalty. There are sufficient solutions and okay tooling. Ruby's also got not one but two JIT compilers right now.
* The module system literally just runs a script, so it can do -anything-. As a result there are 3 or 4 competing install systems, all with their own cruft, some defined entirely using Perl as config, some using YAML. You need to have all of them installed.
* Of these, Module::Build is a common one written by someone who completely overengineered it, and it installs hundreds of dependencies, even though all it really does is just copy some files.
* Install scripts can do stuff like ask you interactive questions that prevent automated installation, which is a constant hindrance when packaging Perl modules e.g. to RPM
* Perl leaves literally everything up to external modules, including exporting functions or even defining module dependencies (e.g. 'use base', 'use autoclean', 'use Exporter' ...), and often the module config is written entirely in Perl rather than a YAML or JSON file (see the sketch after this list), so trying to do anything clever (like adding IDE/language server support) is an absolute nightmare.
* The client to install new modules initially asks about 20 questions and does a slow mirror test, making it difficult to use in automated settings. Luckily someone wrote cpanminus, which literally does exactly what you want: installs the damn package.
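To make the "config is just Perl" complaint concrete, here's a minimal sketch of a Makefile.PL in the ExtUtils::MakeMaker style (names are placeholders). Since it's an arbitrary program rather than a declarative file, nothing stops an author from dropping a prompt() call or any other code in here, which is exactly what breaks automated packaging and tooling:

    # Makefile.PL -- install metadata is itself an executable Perl script
    use ExtUtils::MakeMaker;

    WriteMakefile(
        NAME         => 'My::Module',            # placeholder name
        VERSION_FROM => 'lib/My/Module.pm',      # version scraped from source
        PREREQ_PM    => { 'JSON::PP' => '2.0' }, # runtime dependencies
    );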
I have been unimpressed with ChatGPT-4's ability to generate code for problems of even medium complexity. Even when given partial or complete code from another language and told to translate it!
If we're seeing a heavy drop in Stack Overflow usage, then my guess is that Stack Overflow was getting most of its traffic from some very basic queries, and ChatGPT is eating that base out from under them. Better for Stack Overflow to partner with OpenAI and focus on serving the higher end that they have left.
There is basically no information out there on how to write error-tolerant parsers for language servers. My entire knowledge before starting work on this was a three-sentence explanation someone gave me on the F# Discord, describing an approach that uses an intermediary AST built by a tolerant first pass.
The key to handling tasks of medium and large complexity with an LLM is to break them up into less complex tasks.
First, I showed an example of a very simple parser, parsing floats between brackets, and asked for a response that parsed just strings between brackets. Then I asked:
I'm working on a language server for a custom parser. What I really want to do is make sure that only floats are between brackets, but since I want to return multiple error messages at a time for the developer I figure I need to loosely check for a string first, build the AST, and then parse each node for a float. Does this seem correct?
I get a response with some of the code, and then specifically ask for this case:
can you show the complete code, where I can give "[1.23][notAFloat]" as input and get back the validatedAST?
There's an error in the parser, so I paste in the logged message. It corrects the error, so I then ask:
now, instead of just "Error", can we also get the line and char numbers where the error occured?
There's some more back and forth, but in just a few iterations I've got what amounts to a tutorial on using FParsec to create error-tolerant parsers with line and column reporting, ready for integration with the language server protocol.
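The original is F#/FParsec, but the two-pass idea itself is language-agnostic. A rough Perl sketch of the same shape (Perl to keep one language across this thread; all names are made up): pass one tolerantly captures anything between brackets along with its offset, and pass two validates each captured node as a float, so a bad node gets reported without aborting the rest:

    use strict;
    use warnings;

    my $input = "[1.23][notAFloat][4.5]";
    my (@ast, @errors);

    # Pass 1 (tolerant): capture every [...] chunk plus its offset,
    # without caring yet whether the contents are valid.
    while ($input =~ /\[([^\]]*)\]/g) {
        push @ast, { text => $1, pos => $-[1] };
    }

    # Pass 2 (strict): validate each node, collecting ALL the errors
    # instead of dying on the first one.
    for my $node (@ast) {
        next if $node->{text} =~ /\A-?\d+(?:\.\d+)?\z/;
        # single-line input here, so line is 1; real code would count
        # newlines before $node->{pos} to recover line/col
        push @errors,
            "not a float '$node->{text}' at line 1, col " . ($node->{pos} + 1);
    }

    print "$_\n" for @errors;   # flags notAFloat, still keeps 1.23 and 4.5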
If anyone would like to point me in the direction of such a tutorial that already exists I would very much appreciate it!
"However, we must not forget that AI needs to learn as well from vast sources."
Well, that's actually the problem. This current wave of AI is not really "learning" anything. An AI with any sort of generalizable reasoning ability would just need basic sources on programming syntax and semantics and could figure the rest out on its own. Here, instead, we see the need to effectively memorize variations of the same thing, say, answers to related programming questions, so that they can be part of an intelligent-sounding response.
I was dubious about the value of GenAI as a search tool at first, but now see that it's actually well suited for the role. These massive models are largely storing information in a compressed form and are great at retrieving it and doing basic rewrites. The next evolution of Expert Systems, I suppose, although lacking strong reasoning.
Exactly, imagine thinking you can learn how to program Java or C just from being handed the language specification, or even learn how to play chess just by being told the rules of the game.
Humans don't learn anything of substance just from being told the strict rules; we also learn from a wealth of examples expressed through a variety of means, some of which are formal, some poetic, some even comedic.
Heck, we wouldn't even need Stack Overflow to begin with if we could learn things just from basic sources.
> imagine thinking you can learn how to program Java or C just from being handed the language specification
Throw in the standard library documentation, and that's exactly how many of us learned how to program before projects like Stack Overflow, or even the web, existed for the public. We took those rules, explored the limits of them using the compiler, and learned.
Stack Overflow is, IMO, a shortcut through portions of the exploration and learning phase. Not a bad thing, but importantly, it's not required either.
Since humans are the only general intelligences we know of so far, I'd say that's support for the assertion that an AI with general reasoning abilities wouldn't need SO or other examples to figure out how to do specific tasks.