
> a quote of himself

Curious, did you seriously not recognize that this is a famous quote from the final scene in Blade Runner?

I read that more as a quirky call-out to a famous film, not as him claiming this was his own view of the world.


I did not recognize the quote. I haven’t seen Blade Runner.

The quote is attributed to “Matt,” and it’s at the top of Matt’s resume. The speaker in Blade Runner was Roy Batty. If Matt was trying to include a famous quote on his resume, why did he attribute the quote to himself?

Regardless of the answer, I don’t think a technical resume is the right place for quirky call-outs to films. Particularly if you are substituting your own name as the speaker of the quote. Maybe I’m being too harsh, but in the context of this resume, IMO, it’s just another red flag.


Well, given his experience and assumed compensation, people will perceive him (not that he necessarily is one, but companies will think so) as some 200+ IQ genius who can get stuff done. It'll come off as snarky to the non-elite, but I guess that's what you do to get top dollar here.

Or any industry, really.


> Asking me to solve an extremely esoteric problem that has zero relevance to my day-to-day

I'm always surprised by how useless something seems when I don't know it, and how, once I do know it, I suddenly solve lots of problems with it!

I've heard programmers grumble about how useless calculus is; before I learned calc I used to grumble about that too. After I learned it, there were countless problems I unlocked solutions for by applying the thinking I learned in calculus.

I've heard programmers say that you'll never need to implement your own sort for mundane tasks, but it turns out that after really grokking topological sort I used it countless times for fairly mundane problems like creating plots.

I've heard programmers say that learning the lambda calculus is a waste of time and that nobody uses functional programming. Yet it was people who understood these things that transformed JavaScript from a useless browser oddity into one of the most widely used languages. It was seeing that JavaScript was essentially a Scheme that unlocked its true potential.

Over my career it's remarkable how many "esoteric problems" have led to me solving hard tasks or even shipping entirely new products. If you're focused only on what your day job requires today, you're going to be at best a mediocre engineer.


> after really grokking topological sort I used it countless times for fairly mundane problems like creating plots.

I'm interested in learning more - in what scenario was topological sorting essential for generating plots, and what specific problem did it solve?


Essentially a funnel report where you want to know the total percentage of the population that has reached a given node in the funnel, but you only know the outgoing probabilities of each step (node). This is a fairly common situation.

As a simple example: you know that after signup 20% of customers purchase and 80% don't, but what you want to trivially add in is the fact that of the users in a marketing campaign, 10% of them signed up, which means that for the full marketing funnel 2% purchase. Now consider that you have 20 or more such events in your funnel and you want to combine them all without doing all the math by hand. Likewise you want to be able to add a newly discovered step in the funnel at will.

Using a topological sort you can take an arbitrary collection of nodes where each node only knows the probabilities of the nodes that follow it, sort them, and then compute the conditional probability of any user reaching a specific node fairly trivially, given the assumption that your funnel really does form a DAG.

If you don't perform the topological sort, you can't know that you have already calculated all the conditional probabilities for the upstream nodes, which makes the computation much more complicated. Topological sort is very useful any time you have an implied DAG and you don't want to have to worry about manually connecting the nodes in that DAG.
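
For anyone curious, here's a minimal sketch of the idea in Python using the stdlib's graphlib; the node names and probabilities are invented, just mirroring the 2% example above:

    from graphlib import TopologicalSorter  # stdlib since Python 3.9

    # Each edge (a, b, p) reads: "of the users who reach a, a fraction
    # p moves on to b". The funnel must form a DAG for this to work.
    edges = [
        ("campaign", "signup", 0.10),   # 10% of campaign users sign up
        ("signup", "purchase", 0.20),   # 20% of signups purchase
        ("signup", "churn", 0.80),
    ]

    # Map each node to the set of its predecessors for the sorter.
    preds = {}
    for a, b, p in edges:
        preds.setdefault(a, set())
        preds.setdefault(b, set()).add(a)
    step = {(a, b): p for a, b, p in edges}

    # Probability of any user reaching each node, seeded at the entry.
    reach = {"campaign": 1.0}
    for node in TopologicalSorter(preds).static_order():
        if node in reach:
            continue  # the entry node is already seeded
        # Predecessors are processed first, so their reach values are
        # final by the time we get here.
        reach[node] = sum(reach[a] * step[(a, node)] for a in preds[node])

    print(reach)  # reach["purchase"] == 0.02, the 2% from the example

The invariant the sort buys you is that by the time a node is visited, every upstream probability is already final, so each node's reach is a one-line sum.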


That section was less complaining about the nature of the problem and more about the harshness of judging the solution. The irrelevance to day-to-day work merely emphasizes the unfairness of the judgement.

If I'm put on the spot, under time pressure, to solve a problem I've never seen before and will likely never see again on the job, AND you reject me because my solution was slightly incorrect or naive, well it's obvious what the nature of the job is at that point. You're filtering for candidates that can and will devote dozens of hours to Cracking the Coding Interview and LeetCode. Sorry, I have a full-time engineering job and two young kids, and you clearly don't value my capabilities or experience or time, you value my willingness to spend my extremely limited free time studying to ace your half-baked engineering IQ test, for the honor of possibly working for you.

I once had a company cancel a scheduled interview when I informed them I had received an offer from another company, but was more interested in them and was wondering if I could step up the interview schedule. They told me unless I was willing to reject the existing offer and submit to their multi-week interview process we couldn't move forward. Esoteric, irrelevant algorithm questions with strict judgements are just a different version of that same arrogance.


> It really sucks right now

For me, it's been the opposite: the last 2 years have been the best time I've had working in tech since the early 2010s.

Around 2019 I was seriously considering leaving the field (if it didn't pay so much) as the entire industry had turned into a bunch of leet-code-grinding, TC-chasing, mediocre drones. It was incredibly hard to find people working on actual problems, let alone challenging/interesting ones. Nobody I worked with for years cared one bit about programming or computer science. Nobody learned anything for fun, nobody hacked on personal projects during the weekend, and if they were interested in their field it was only so they could add a few more bullet points to their resume.

But the last two years I've worked with several teams doing really cool work, found teams that are entirely made up of scrappy, smart people, and started building projects using a range of new tricks and techniques (mostly around AI).

Right now there are so many small teams working on hard problems getting funding. So many interesting, talented and downright weird programmers are being sought after again. People who like to create things and solve problems are the ones getting work again (my experience was that these people were just labeled as troublemakers before).

I'm probably getting paid, inflation-adjusted, the least that I have in a long time, but finally work is enjoyable again. I get to hack on things with other people who are obsessed with hacking on things.


Where did you find your team, or all these small teams? I don't suppose it's through regular job boards; is it through more intimate channels like IRL events and connections?


I agree. Despite high compensation and a hiring boom, or perhaps because of it, 2020-2022 was the worst time to work in tech. I knew interns in 2012 who could code circles around those bootcampers turned “staff engineers” in 2021. Everyone at my series B employer turned into a “manager” or “leader” overnight. Being a shitty B2B SaaS meant that sales ran the show and our product was absolute dogshit.

2023 was awful too because everyone stayed put — we somehow avoided layoffs — even though they were absolutely miserable.

Now in 2024, I’ve just started a job search and things seem much better. There’s actual innovation now and I feel a sense of optimism about the future of tech that I haven’t in 10 years.


> I knew interns in 2012 who could code circles around those bootcampers turned “staff engineers” in 2021. Everyone at my series B employer turned into a “manager” or “leader” overnight.

Thought it was just me seeing this. The title inflation is out of control. "Senior" titles lacking basic fundamental "table stakes" skills.


I agree but for different reasons.

The whole AI thing is renewing interest in self-hosted infra, which happens to be a specialty of mine. Cutting out the "cloud" means having people who actually understand how things work, which means better colleagues, bosses who appreciate what I know and can do, and less dealing with bullshit vendor garbage.

I don't know how long this AI fad will last or if the cloud providers will find a way to make their offerings affordable vs self-hosting going forward but for now I'm just enjoying renewed relevancy of one of my more enjoyable skill sets.


Yeah, fair. I could see why it's good for those of us who have a decent chunk of experience. Kinda makes sense from a managerial perspective as well: lay off a bunch of under-performers/juniors, hire back other seniors from other companies that got laid off, and save 25-30% while delivering about the same results. I'm over-simplifying it, but we're going through an over-correction phase, in my opinion.


This can be solved by just running "restart and re-run all" as a matter of habit whenever you feel like you are in a good state. If you're looking at someone else's notebook, this should also be your first step.

If you're using Python, Jupyter still offers a major advantage for sharing prototyping/research work in that it is the only real way to provide literate programming. There are cases where code inside of a notebook should be pulled into a library, but pure code will always be inferior when most of your work is expository and exploratory.

Literate Haskell and R Markdown are superior forms of literate programming, but Jupyter is not terrible if you just take some basic precautions before sharing.

Notebooks work great as... notebooks. Which is primarily how they're used, in my experience. When the bulk of your work is prototyping and exploring an idea, they're still an excellent choice.


Quarto is like R Markdown, but it supports Jupyter kernels, so Python support is essentially the same as in Jupyter notebooks. But the good practices are built in, instead of you having to remember to run "restart and re-run all". So it's a valid option for literate programming in Python (and several other languages).
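
For anyone who hasn't seen one, a .qmd file is just flat text with fenced code cells; here's a minimal sketch (the contents are invented for illustration):

    ---
    title: "Exploration"
    jupyter: python3
    ---

    Some prose explaining the analysis.

    ```{python}
    import numpy as np

    x = np.linspace(0, 1, 100)
    print(x.mean())
    ```

Running `quarto render doc.qmd` executes the cells top to bottom in a fresh kernel each time, so the "restart and re-run all" discipline is enforced by default.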

When I say "confusing", I don't mean I don't understand it. I mean that while using it, there is a higher than zero chance that you get confused at some point and mess up.

I love the feature that you can interactively develop/prototype research. I just think better/less error-prone tools are available (which also track nicer in git, btw, since Quarto docs are flat text files).


Many people misunderstand what this case was about.

> Homeless need shelter and help. It is inhumane that we let them rot on the streets.

The issue being argued was not that the homeless should be able to camp wherever and whenever they want. The issue was that if you don't provide shelter, then you cannot kick them out of public spaces.

If that were the law, it would mean you need to build shelters if you don't want the homeless in the streets.

What this new verdict means is:

- You can forcibly remove homeless people who have nowhere else to go

- Thus homelessness can be effectively illegal

- The only realistic solutions now are:

    - Put the homeless in prisons

    - Move them to cities and towns that don't have the resources to remove them.


I genuinely expect mass arrests and quasi-deportations of the homeless (along the lines of "we won't prosecute you if you take this bus ticket to San Francisco") over the next few months in red states and in red cities in blue states.


Why not blue cities in blue states? California's governor and San Francisco's mayor praised the decision.[1]

[1] https://www.nbcbayarea.com/news/local/san-francisco/sf-supre...


Because it's inhumane to treat people as a punishment?


"All people to the right of my position are inhumane"

lel


If you have a response that follows HN guidelines, I'd love to hear it.


> Rather than educating founders on more fundamental topics such as how to get from 0-1, how to hire, fostering positive culture etc.

My experience is that none of these things actually matter anymore. I wish they did, but my experience has been that getting any of these things right in today's environment is in the best case inconsequential to success and in the worst case an actual detriment.

I've worked at horrifically toxic startup cultures, but it never seemed to hurt them in getting funding from outside teams that didn't care in the slightest how healthy the org was or whether the product was even remotely good for customers.

I've worked at companies that literally did not know how they were ever going to make a profit, IPO'd anyway, then continued to fail to figure this out and changed their strategy to shrug-your-shoulders-and-wait-for-the-return-of-ZIRP, with no observable consequences on their stock price or investor support for years. Maybe this will eventually catch up to them, but so far they've been quite successful burning a dollar to make $0.70.

I've worked at companies that hired hundreds of completely useless data scientists, filled management with toxic management consultants, and drove out all the serious talent they had... with, again, no real consequences.

And all of the companies I've known that checked all of your bullet points? Well, they've mostly stayed small, seen revenue decline, and eventually started falling apart. All of the best companies I've worked for ended up having to get rid of all the things that made them good in order to appease investors and keep growing.

I wish that we lived in a world where your advice was correct, but I haven't seen any evidence that that is the world we actually inhabit.


> no revenue and no profit.

Revenue has gone up, but fewer publicly traded companies are making a profit than ever before [0].

I could never understand why so many people just talk about revenue. Revenue without profits is meaningless. There's the old logic of "get enough revenue, then figure out profits, and you're highly profitable", but it's very clear that flipping the "profit switch" is not so easy in practice.

Investors are still basically waiting for the Fed to drop rates, which means that people have abandoned rationally thinking about businesses and are just holding until the free money starts pouring in again.

I honestly don't think the AI bubble is anything like the dotcom bubble. There's something much stranger happening here since the entire market is basically hallucinating and AI is just one manifestation of that.

0. https://finimize.com/content/beware-the-rise-of-unprofitable...


>and are just holding until the free money starts pouring in again.

What guarantee is there that "free money" will come back again?

Wasn't the last free-money printer run something like a first in history, and supposed to be only a temporary measure? It went on for far too long, leading to inflation and asset prices spiraling out of control and creating various speculative bubbles: crypto, the GameStop fiasco, housing, and dozens to hundreds of crappy, overhyped "start-ups" and food delivery apps that were never able to be very profitable on their own but stayed afloat and grew like crazy thanks to that free money and gullible investors, leading to an artificial over-demand for SW devs, which also crashed with them.

Seeing all it led to, do we even want/need it to come back again? And "but this time will be different" doesn't scan for me as a believable answer, since we all know it'll definitely be the same.


The free money printer was running for a solid 20 years or so.


Wasn't that post-2009?


started in 2002


"Now, [the unprofitable companies] might not be the big publicly traded kahunas – collectively they hold just a 10% slice of the market’s total revenue pie"

It's detailed in the article, but that graph is way misleading because it isn't weighted by revenue size and the unprofitable part is dominated by tiny companies.


Yes, it’s interesting but mostly a shift of concentration of earnings. Market weighted PE multiples are slightly elevated historically but not insane; forward PE even less so (taken with a grain of salt of course).


> Revenue has gone up, but fewer publicly traded companies are making a profit than ever before

That chart is deceiving. (from your link)

If you look carefully, you'll see that "very profitable" companies over the decades is unchanged.

What changed is the balance between "barely positive" and "negative".


I guess we'll see when we are well past the yield curve inversion. If we get past far enough without a collapse I would say we are in a paradigm shift.


> compensation packages that truly gifted AI researchers can make now

I guess it depends on your definition of "truly gifted" but, working in this space, I've found that there is very little correlation between comp and quality of AI research. There's absolutely some brilliant people working for big names and making serious money, there's also plenty of really talented people working for smaller startups doing incredible work but getting paid less, academics making very little, and even the occasional "hobbyist" making nothing and churning out great work while hiding behind an anime girl avatar.

OpenAI clearly has some talented people, but there's also a bunch of the typical "TC optimization" crowd in there these days. The fact that so many were willing to resign with sama if necessary appears to be largely because they were more concerned with losing their nice compensation packages than with any obsession with doing top-tier research.


Two people I knew recently left Google to join OpenAI. They were solid L5 engineers on the verge of being promoted to L6, and their TC is now $900k. And they are not even doing AI research, just general backend infra. You don't need to be gifted, just good. And of course I can't really fault them for joining a company for the purpose of optimizing TC.


> their TC is now $900k

As a community we should stop throwing numbers around like this when more than half of this number is speculative. You shouldn't be able to count it as "total compensation" unless you are compensated.


Word around town is OpenAI folks are heavily selling shares in secondaries, in the hundreds of millions.

The number is as real as what someone else is willing to pay for the shares. Plenty of VCs are willing to pay it.


Word around town is [1] OpenAI "plans" to let employees sell "some" equity through a "tender process" which ex-employees are excluded from; and also that OpenAI can "claw back" vested equity, and has used the threat of doing so in the past to pressure people into signing sketchy legal documents.

[1] https://www.cnbc.com/2024/06/11/openai-insider-stock-sales-a...


I would definitely discount OpenAI equity compared to even other private AI labs (i.e. Anthropic) given the shenanigans, but they have in fact held 3 tender offers and former employees were not, as far as we know, excluded (though they may have been limited to selling $2m worth of equity, rather than $10m).


> Word around town is OpenAI folks are heavily selling shares in secondaries, in the hundreds of millions

OpenAI heavily restricts the selling of its "shares," which tends to come with management picking the winners and losers among its ESOs. Heavily, heavily discount an asset you cannot liquidate without someone's position, particularly if that person is your employer.


Did you mean permission, not position? I’m not sure I understand what someone’s position could mean.


Yes.


You have no idea what you're talking about.


don’t comment if you don’t know what you’re talking about, they have tender offers


Google itself is now filled with TC-optimizing folks, just one level lower than the ones at OpenAI.


> their TC is now $900k.

Everyone knows that openai TC is heavily weighted by ~~RSUs~~ options that themselves are heavily weighted by hopes and dreams.


When I looked into it and talked to some hiring managers, the big names were offering cash comp similar to total comp for big tech, with stock (sometimes complicated arrangements that were not options or RSUs) on top of that. I’m talking $400k cash for a senior engineer with equity on top.


> big names

Big names where? Inside of openai? What does that even mean?

The only place you can get 400k cash base for senior is quantfi


> The only place you can get 400k cash base for senior is quantfi

confident yet wrong

not only can you get that much at AI companies, netflix will also pay that much all cash - and that’s fully public info


> not only can you get that much at AI companies

Please show not tell

> netflix will also pay that much all cash

Okay that's true


Netflix is just cash, no stock. That’s different from 400k stock + cash.


> The only place you can get 400k cash base for senior is quantfi

That statement is false for the reasons I said. I’m not sure why your point matters to what I’m saying


Because OP's usage of "base" implies base + stock. Including a place where base = total comp is really misleading and is just being unnecessarily pedantic about terminology.

OP is correct that a base cash of 400k is truly rare if you’re talking about typical total comp packages where 50% is base and 50% is stock.


quantfi doesn’t pay stock at all usually so i disagree that it implies cash + stock


I don’t know what point you’re trying to make other than being super pedantic. This was a discussion about how OpenAI’s base of 400k is unique within the context of a TCO in the 800-900k range. It is. That quantfi and Netflix offer similar base because that’s also their TCO is a silly argument to make.


> This was a discussion about how OpenAI’s base of 400k is unique within the context of a TCO in the 800-900k range.

That's not how I interpret the conversation.

I see a claim that 900k is a BS number, a counterargument that many big AI companies will give you 400k of that in cash so the offers are in fact very hot, then a claim that only finance offers 400k cash, and a claim that netflix offers 400k cash.

I don't see anything that limits these comparisons to companies with specific TCOs.

Even if the use of the word "base" is intended to imply that there's some stock, it doesn't imply any particular amount of stock. But my reading is that the word "base" is there to say that stock can be added on top.

You're the one being pedantic when you insist that 400k cash is not a valid example of 400k cash base.

Notice how the person being replied to looked at the Netflix example and said "Okay that's true". They know what they meant a lot better than you do.


ok so the conversation starts out with 900k TCO with 400k in cash, a claim that that’s BS and then morphs into a discussion about a TCO of 400k all cash being an example of equivalent compensation to OpenAI packages?


Nobody said it was equivalent. The subdiscussion was about whether you can even get that much cash anywhere else, once TCO got pulled apart into cash and stock to be compared in more detail.

Again, the person that made the original claim about where you can get "400k cash base" accepted the Netflix example. Are you saying they're wrong about what they meant?


Amazon pays 400k cash lol. Many companies do


Up until 2 years ago you were extremely wrong and you're still wrong: https://www.bloomberg.com/news/articles/2022-02-07/amazon-is...

It's amazing to me how many people are willing to just say the first thing that comes to their head while knowing they can be fact-checked in a heartbeat.


Everything OpenAI does is about weights.


bro does their ceo even lift?


You mean PPUs or smoke and mirrors compensation. RSUs are actually worth something.


why are PPUs “smoke and mirrors” and RSUs “worth something”?

i suspect people commenting this don’t have a clue how PPU compensation actually works


> Note at offer time candidates do not know how many PPUs they will be receiving or how many exist in total. This is important because it’s not clear to candidates if they are receiving 1% or 0.001% of profits for instance. Even when giving options, some startups are often unclear or simply do not share the total number of outstanding shares. That said, this is generally considered bad practice and unfavorable for employees. Additionally, tender offers are not guaranteed to happen and the cadence may also not be known.

> PPUs also are restricted by a 2-year lock, meaning that if there’s a liquidation event, a new hire can’t sell their units within their first 2 years. Another key difference is that the growth is currently capped at 10x. Similar to their overall company structure, the PPUs are capped at a growth of 10 times the original value. So in the offer example above, the candidate received $2M worth of PPUs, which means that their capped amount they could sell them for would be $20M

> The most recent liquidation event we’re aware of happened during a tender offer earlier this year. It was during this event that some early employees were able to sell their profit participation units. It’s difficult to know how often these events happen and who is allowed to sell, though, as it’s on company discretion.

https://www.levels.fyi/blog/openai-compensation.html

Edit:

I’m realizing we had the exact same conversation a month ago. It sounds like you have more insider information.


Seems like you need to have been working at a place like Google too


the thing about mentioning compensation numbers on HN is you will get tons of pissy/ressentiment-y replies


I don't care about these. I care about the readers who might not have done job searching recently and might not know their worth in the market.


"...even the occasional "hobbyist" making nothing and churning out great work while hiding behind an anime girl avatar."

the people i often have the most respect for.


Half the advancements around Stable Diffusion (Controlnet etc.) came from internet randoms wanting better anime waifus


advancements around parameter efficient fine tuning came from internet randoms because big cos don’t care about PEFT


... Sort of?

HF is sort of big now. Stanford is well funded and they did PyReft.


HF is not very big, Stanford doesn’t have lots of compute.

Neither of these are even remotely big labs like what I’m discussing


HF has raised more than $400m. If that doesn't qualify them as "big", I don't know what does.


TC optimization being tail call optimization?


You don't get to that level by thinking about code...


Could be sarcasm, but I'll engage in good faith: Total Compensation


Nope, that's a misnomer; it's tail-call elimination. You can't call it an optimization if it's essential for the proper functioning of the program.

(they mean total compensation)
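
To illustrate the "essential" part, here's a quick sketch in Python, which pointedly does not eliminate tail calls, so deep recursion dies even when the calls are in tail position:

    def countdown(n):
        # Tail call: the recursive call is the very last thing done,
        # so the frame *could* be reused -- but CPython keeps them all.
        if n == 0:
            return "done"
        return countdown(n - 1)

    # countdown(10**6)  # RecursionError in CPython

    # With tail-call elimination the runtime effectively rewrites the
    # recursion into this loop, which runs in constant stack space:
    def countdown_loop(n):
        while n != 0:
            n -= 1
        return "done"

    print(countdown_loop(10**6))  # fine

In Scheme, where idiomatic code uses recursion for iteration, the spec requires this rewrite, which is why "elimination" fits better than "optimization".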


Curie temperature


Definitely true of even normal software engineering. My experience has been the opposite of expectations: TC-creep has infected the industry to an irreparable degree, and the most talented people I've ever worked around or with are in boring, medium-sized enterprises in the midwestern US or Australia. You'll probably never hear of them, and every big tech company would absolutely love to hire them but just can't figure out an interview process that tells them apart from the TC grifters.

TC is actually totally uncorrelated with the quality of talent you can hire, beyond some low number that pretty much any funded startup could pay. Businesses hate to hear this, because money is easy to turn the dial up on; but most have no idea how to turn the dial up on what really matters to high talent individuals. Fortunately, I doubt Ilya will have any problem with that.


I find this hard to believe having worked in multiple enterprises and in the FAANG world.

In my anecdotal experience, I can only think of one or two examples of someone from the enterprise world who I would consider outstanding.

The overall quality of engineers is much higher at the FAANG companies.


I have also worked in multiple different sized companies, including FAANG, and multiple countries. My assessment is that FAANGs tend to select for generally intelligent people who can learn quickly and adapt to new situations easily but who nowadays tend to be passionless and indifferent to anything but money and prestige. Personally I think passion is the differentiator here, rather than talent, when it comes to doing a good job. Passion means caring about your work and its impact beyond what it means for your own career advancement. It means caring about building the best possible products where “best” is defined as delivering the most value for your users rather than the most value for the company. The question is whether big tech is unable to select for passion or whether there are simply not enough passionate people to hire when operating at FAANG scale. Most likely it’s the latter.

So I guess I agree with both you and the parent comment somewhat in that in general the bar is higher at FAANGs but at the same time I have multiple former colleagues from smaller companies who I consider to be excellent, passionate engineers but who cannot be lured to big tech by any amount of money or prestige (I’ve tried). While many passionless “arbitrary metric optimizers” happily join FAANGs and do whatever needs to be done to climb the ladder without a second thought.


I sort of agree and disagree. I wouldn't agree with the idea that most FAANG engineers are not passionate by nature about their work.

What I would say is that the bureaucracy and bullshit one has to deal with makes it hard to maintain that passion and that many end up as TC optimizers in the sense that they stay instead of working someplace better for less TC.

That said, I am not sure how many would make different choices. Many who join a FAANG company don't have the slightest inkling of what it will be like, and once they realize that they are a tiny cog in a giant machine it's hard to leave the TC and perks behind.


perfect sort of thing to say to get lots of upvotes, but absolutely false in my experience at both enterprise and bigtech


> things like chain-of-thought/reasoning perform way worse in structured responses

That is fairly well established to be untrue.


> Why do I want to read an intro CS book? I've read those before.

"We shall not cease from exploration, and the end of all our exploring will be to arrive where we started and know the place for the first time." - TS Eliot.

You aren't a curious person who studies things for their own sake and finds wonder in exploring ideas. That's fine; this book is clearly not for you, so why concern yourself with it? Most of my friends who are obsessed with their field count many intro books among their favorites and frequent re-reads.

> condescending promises of enlightenment.

It sounds like you're more upset that other people enjoy and therefore recommend this book. You're the one asking for proof that it's worth your time; it's clearly not. People who are curious and like to explore ideas in computing recommend this book frequently to other like-minded people.

If you don't like sushi, why question somebody's recommendation for their favorite omakase place?

