I have tried and failed to understand what is meant by "layoff due to overhiring", in the context of Software Engineering. When an engineer is employed on a successful project, they are incredibly profitable. A handful of engineers can write and maintain products worth millions of dollars. Big companies, which are the ones we're discussing when talking about "overhiring", should be able to find or create projects where an engineer will be at least modestly profitable. And having an employee who creates a small amount of profit, and who might eventually be moved to something better, is certainly more profitable than firing them and paying severance.
Note that here I'm talking about firing good performers because you have "too many" of them. Firing bad performers because the estimated cost of training them is not justified, I can understand.
The only explanation I found satisfying is that investors heuristically care about profit per capita (PPC) as well as total profit, and employees who don't produce _enough_ profit drag down PPC, and thus investment, to the point where the opportunity cost of firing them works out: you'll make investors happy, which will raise the valuation more than what you are losing in lost profit. But this is not "rational" in a full-information economic sense. It's essentially the company virtue signaling that they are capable of firing people if they have to, even at the cost of actual dollars.
Yeah. I had to get a second last name when I was granted Spanish citizenship, which leads to my full name not matching my Argentinian ID. This generates (small) problems when flying between Spain and Argentina, also partially due to my full name now being too long to print on a boarding pass.
I love paperwork, so I always handle passport applications and stuff for us, and whenever I have to fill out my wife's stuff, there's that part about other last names, and I get super paranoid trying to remember which ones she's officially used (she doesn't even remember, or even have records sometimes due to the nature of immigrating), because between Latin America and the US pre- and post-citizenship plus getting married, it's kind of a nightmare to remember when there was a de something, an y something, or just one surname, or two surnames.
And then her parents are from another country with different surname rules, throwing a crazy wrench in things when she has to deal with her other citizenship documents, which adhere to that other country's rules.
> trying to remember which ones she's officially used
I had an uncle who was very proud of the fact that his birth certificate, passport, and the spelling he actually used for his first name all disagreed.
Slightly related story, but I, too, found out my documents were out of whack when I applied for financial aid in college. I originally had two middle names at birth, but then it was switched to only (the first) one a year or two later. My birth certificate had both names, my driver's license the first middle name, and my Social Security record the second. It was a huge pain to get fixed; I ended up just changing the two easiest to change to match the third.
Actually a somewhat decently known way of avoiding tax on real estate transfers within the family in Japan is to move abroad, gift it, and then move back. The other way is gift it as a wedding gift, as wedding gifts (and others made out of customary social obligations) are not taxable over there.
Yup. Quite common with kids with one parent from a country using, say, Roman characters and another from an Asian country (like, say, a French/Japanese mixed kid). If the (French) father goes to the French embassy or to France to declare the kid under one name and then the (Japanese) mom goes to declare the kid with a Japanese name, the kid literally has two identities. Not just two passports (which is highly common) but two identities.
In a less common case it can happen with just the given name being different in two countries: I know a dude who has a Portuguese given name on his Portuguese passport and the French version of that name on his French passport. They're considered by the authorities to be two different people, and he already got into trouble (administrative stuff), so now he's careful.
Also note that it's a documented fact that there have been people caught fraudulently declaring a kid that wasn't theirs: kid born at the hospital, quickly "rent" the kid to friends from the community, declare the kid as if he was born at home (by having a doctor come). Profit from welfare money (in the EU) due to the fact that you now "have" one more kid. One such case was uncovered when the doctor who gave birth to the kid was then sent later in the day to witness a "born at home" kid.
More specifically, for the verb “give birth to”, the mother is the subject and the newborn child is the object. The verb “deliver” can have the doctor or midwife or so on as the subject.
I am bringing this up because I had to read your comment several times before I realized it was a comment about language use rather than about the role of doctors in England.
Finally, to be completely pedantic, doctors can give birth to other people's kids. My wife, a doctor, gave birth to my sons; there was another doctor there who delivered them.
A history book is written by someone who knows the topic, and then reviewed by more people who also know the topic, and then it's out there where people can read it and criticize it if it's wrong about the topic.
A question asked to an AI is not reviewed by anyone, and it's ephemeral. The AI can answer "yes" today, and "no" tomorrow, so it's not possible to build a consensus on whether it answers specific questions correctly.
A pop sci-fi book can be written by someone who knows the topic and reviewed by people who know the topic, and a history book can also fail to be.
LLM generated answers are more comparable to ad-hoc human expert's answers and not to written books. But it's much simpler to statistically evaluate and correct them. That is how we can know that, on average, LLMs are improving and are outperforming human experts on an increasing number of tasks and topics.
In my experience LLM generated answers are more comparable to an ad-hoc answer by a human with no special expertise, moderate google skills, but good bullshitting skills spending a few minutes searching the web, reading what they find and synthesizing it, waiting long enough for the details to get kind of hazy, and then writing up an answer off the top of their head based on that, filling in any missing material by just making something up. They can do this significantly faster than a human undergraduate student might be able to, so if you need someone to do this task very quickly / prolifically this can be beneficial (e.g. this could be effective for generating banter for video game non-player characters, for astroturfing social media, or for cheating on student essays read by an overworked grader). It's not a good way to get expert answers about anything though.
More specifically: I've never gotten an answer from an LLM to a tricky or obscure question about a subject I already know anything about that seemed remotely competent. The answers to basic and obvious questions are sometimes okay, but also sometimes completely wrong (but confidently stated). When asked follow-up questions the LLM will repeatedly directly contradict itself with additional answers each as wrong as the first, all just as confidently stated.
More like "have already skimmed half of the entire Internet in the past", but yeah. That's exactly the mental model IMO one should have with LLMs.
Of course don't forget that "writing up an answer off the top of their head based on that, filling in any missing material by just making something up" is what everyone does all the time, and in particular it's what experts do in their areas of expertise. How often those snap answers and hasty extrapolations turn out correct is, literally, how you measure understanding.
EDIT:
There's some deep irony here, because with LLMs being "all system 1, no system 2", we're trying to give them the same crutches we use on the road to understanding, but have them move the opposite direction. Take "chain of thought" - saying "let's think step by step" and then explicitly going through your reasoning is not understanding - it's the direct opposite of it. Think of a student that solves a math problem step by step - they're not demonstrating understanding or mastery of the subject. On the contrary, they're just demonstrating they can emulate understanding by more mechanistic, procedural means.
Okay, but if you read written work by an expert (e.g. a book published by a reputable academic press or a journal article in a peer-reviewed journal), you get a result whose details were all checked out, and can be relied on to some extent. By looking up in the citation graph you can track down their sources, cross-check claims against other scholars', look up survey sources putting the work in context, think critically about each author's biases, etc., and it's possible to come to some kind of careful analysis of the work's credibility and assess the truth value of claims made. By doing careful search and study it's possible to get to some sense of the scholarly consensus about a topic and some idea of the level of controversy about various details or interpretations.
If instead you are reading the expert's blog post or hastily composed email or chatting with them on an airplane you get a different level of polish and care, but again you can use context to evaluate the source and claims made. Often the result is still "oh yeah this seems pretty insightful" but sometimes "wow, this person shouldn't be speculating outside of their area of expertise because they have no clue about this".
With LLM output, the appropriate assessment (at least in any that I have tried, which is far from exhaustive) is basically always "this is vaguely topical bullshit; you shouldn't trust this at all".
I am just curious about this. You used the word "never", and I think your claim can be tested: perhaps you could post a list of five obscure questions for an LLM to answer, and then someone could ask them to a good LLM for you, or to an expert in that field, to assess the value of the answers.
Edited: I just submitted an ASK HN post about this.
> I've never gotten an answer from an LLM to a tricky or obscure question about a subject I already know anything about that seemed remotely competent.
Certainly not my experience with the current SOTA. Without being more specific, it's hard to discuss. Feel free to name something that can be looked at.
> A question asked to an AI is not reviewed by anyone, and it's ephemeral. The AI can answer "yes" today, and "no" tomorrow, so it's not possible to build a consensus on whether it answers specific questions correctly.
It's even more so with humans! Most of our conversations are, and always have been, ephemeral and unverifiable (and there are plenty of people who want to undo the little permanence and verifiability we still have on the Internet...). Along the dimension of permanence and verifiability, asking an LLM is actually much better than asking a human - there's always a log of the conversation you had with the AI, produced and stored somewhere for at least a while (even if only until you clear your temp folder), and if you can get ahold of that log, you can not just verify the answers, you can actually debug the AI. You can rerun the conversation with different parameters, different prompting, perhaps even inspect the inference process itself. You can do that ten times, a hundred times, a million times, and you won't be asked to come to The Hague and explain yourself. Now try that with a human :).
The context of my comment was what is the difference between an AI and a history book. Or going back to the top comment, between an AI and an expert.
If you want to compare AI with ephemeral unverifiable conversations with uninformed people, go ahead. But that doesn't make them sound very valuable. I believe they are more valuable than that for sure, but how much, I'm not sure.
"Vote with your wallet" works for voting for things, but it has an abysmal track record when it comes to voting against things. It only works in the case organized boycotts, and only for a vanishing minority of those.
It's not often that we get to see whataboutism start with an actual "What about...".
Less snarkily: even if we agreed that paying taxes makes one complicit in evil acts done with that money, it doesn't follow that that person shouldn't avoid being complicit in other evil actions where they can.
Yeah, the company can tell the candidate the salary they will pay for someone in the role. If that doesn't match the candidate's needs then the process stops. If the candidate's performance during the interviews shows they can't function effectively in the role, the process stops.
Honestly I think the right to remix and create derivative works would benefit everyone a lot more. Never going to happen, but I think society is losing a lot because of how hard it is to profit from adding your own work to someone else's.
I would find this slightly useful, I live in Madrid. For most of my trips I take the metro; I have two lines nearby, and they are frequent enough that I just show up. But there are a few parts of town that are just not well connected to my place by metro. If I want to go to Atocha, Cibeles or Piramides, the bus is better. But I have three buses that take me to each of those places, and they show up every 25 minutes. If I'm going there, I want to know which stop will have a bus soonest.
I also live in Madrid. Many years ago, I wrote a bash script that downloaded the real-time data for the bus stop nearest to home from the EMT website and read out loud the minutes until the next bus with festival.
We had a keyboard next to the couch, many keys were shortcuts to execute commands like that one. So it was a matter of pushing a key and listening.
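Conceptually the script was something like the sketch below. The URL, stop id and JSON shape here are placeholders, not the real EMT API (which has its own format and nowadays requires registration and an API key), so treat it as an illustration of the idea rather than working code against the actual service.

    #!/usr/bin/env bash
    # Fetch arrival estimates for one stop and speak them out loud.
    # Endpoint and response shape below are made up for illustration.
    STOP_ID="1234"  # hypothetical id of the stop nearest to home

    minutes=$(curl -s "https://example.com/emt/arrivals?stop=${STOP_ID}" \
      | jq -r '.arrivals[0].minutes')

    echo "Next bus in ${minutes} minutes" | festival --tts

Bind that to one of the shortcut keys and "push a key and listen" is all that's left.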
I'm mostly unhappy with the current usage of LLMs. Not sure if I qualify as a "never-AI guy".
My main pet peeve is that AI is a blurry term, almost to the point that it has no definition. It's basically a marketing term. AI has been applied, historically, to many trendy and novel facets of computer science, until they are no longer novel, at which point they stop being AI. Expert systems, voice generation, image processing, genetic algorithms...
My take on the article: this is cool. I don't think I would call it AI though, if I could avoid it.