Yegge was an early employee at Amazon and has been writing influential blog posts and developing massive software projects since before this guy was born. But sure, in his retirement he's pivoted to pump and dump schemes.
You’re confused about what an “appeal to authority” actually is.
An appeal to authority is saying “X is true because this person said so.” That’s not what’s happening here. What’s happening is people treating expert opinion as evidence, not a verdict.
You say you don’t want appeals to authority, then you immediately offer your own opinion and expect people to take it seriously. Why? On what basis? Because it’s your judgment?
That’s the funny part. The moment you state an opinion, you’re asking others to weigh your credibility against someone else’s. You don’t escape authority, you just replace it with yourself.
Yegge’s opinion has weight because of his track record. It can still be wrong. Mine can be wrong. Yours can be wrong. That’s why people compare opinions instead of pretending they live in a vacuum.
Ignoring expert opinion entirely isn’t “independent thinking.” It’s just choosing to be uninformed and calling it a virtue.
I would’ve taken this response more seriously if it weren’t written with LLM assistance.
Regardless, it’s a lot of words to again say “they are famous, so consider them more seriously” despite the obvious scam being perpetrated via crypto. The appeal to authority is you stating their credentials first, rather than any of the deductions you claim one should draw from merit.
It wasn’t LLM assisted. That accusation is just a way to avoid dealing with the point.
You keep restating a position no one is taking. No one said “they’re famous, therefore right.” That’s something you invented so you don’t have to argue against what was actually said.
Credentials don’t make an argument true. They explain why an opinion isn’t noise. Pretending otherwise doesn’t make you principled, it just makes you incurious.
If there’s an obvious scam, spell it out. If the reasoning is flawed, point to the step where it fails. You haven’t done either.
So far all you’ve contributed is tone policing, motive guessing, and now AI paranoia.
Your position was much worse: you’re devaluing the original post author’s opinion because they don’t have the credentials of the person you’re comparing them to.
All your replies have a severe, clear AI slop smell, and you’re not giving me any reason to assume otherwise tbh. It’s more about whether you respect my replies enough to formulate your own answer, but given your appeal to authority, clearly you have no qualms letting others (senior engineers, AI/LLMs) determine your answers for you!
You’re doing that thing where you confuse confidence with thinking.
You haven’t engaged with the argument once. You’ve complained about credentials, then tone, then AI, then “respect.” That’s four pivots and zero substance.
I didn’t say “trust this person instead of thinking.” I said experience adds context. You keep pretending those are the same thing because otherwise you’d have to actually respond.
The AI accusation is just embarrassing. It’s the online version of “I can’t refute this, so I’ll imply you cheated.” That might feel clever, but it mostly signals panic.
If you’re as sharp as you seem to think you are, this shouldn’t be hard. Pick a claim. Explain why it’s wrong. Everything else is just noise you’re making to avoid that moment.
Weirdly enough, your older comments don’t all have AI slop smell - so I have to assume this is some kind of performance art, or a recent shift.
I also wasn’t aware that I’m speaking to someone who actually persistently appeals to authority and maintains a list of figures to bow down to: https://news.ycombinator.com/item?id=46783280
As such, this is where I get off this thread train. Seeya!
“Maintains a list” is a funny invention. I asked a tool to help recall names. You turned that into a personality trait.
What actually happened is pretty obvious. You ran out of things to argue, so you switched to archeology. Scroll history, squint hard, invent a story, declare victory, announce departure.
That move isn’t rare. People do it when they don’t want to admit they’ve hit the end of their reasoning but still want to feel like they left on their own terms.
Confident people don’t need a backstory, a diagnosis, and an exit speech. They make the point and keep talking.
You're just quietly running away and hoping no one noticed.
You’re still not responding to what I actually said.
No one claimed “X said it’s good, therefore it’s good.” The point was that ignoring what experienced people say entirely is just as dumb as following them blindly.
You told me to “think for myself.” Great. Thinking for yourself doesn’t mean pretending expert opinion doesn’t exist. It means weighing it against your own understanding. That’s literally how learning works.
Calling it a “ChatGPT list” is just you dodging the question. If those people are wrong, explain why. If some are right for bad reasons, name them. Laughing and changing the subject isn’t an argument.
You’re shadowboxing a strawman and congratulating yourself for winning.
If “crypto = scam” is the full extent of your analysis, then you’re not being skeptical or careful. You’re just running a reflex and pretending it’s reasoning.
People who actually understand things can explain how they’re wrong. People who don’t just announce they’ve already reached a conclusion and declare further thought unnecessary.
If that’s your bar, then yes, no other reasoning is required, because none was applied in the first place.
You keep saying “signal” like you’ve discovered some deep insight, but all you’re doing is reacting to a word and shutting your brain off.
“Crypto = scam” isn’t a filter, it’s a shortcut for people who don’t want to explain themselves. You didn’t analyze anything. You flinched and stopped.
What’s telling is how confident you are while saying nothing. No description of how the scam works. No incentives. No mechanics. Just “trust me, I’ve seen this before.”
If this is what you consider a clear negative signal, then yeah, everything must look very simple from where you’re standing. Simple is doing a lot of work for you here.
The irony is thick here. You’re mocking people for “following,” while repeating one of the oldest, laziest superiority tropes on the internet and congratulating yourself for it.
Treating any appeal to authority as invalid isn’t critical thinking. It’s simplistic. Authority isn’t a command to believe something, it’s evidence. One piece of the puzzle.
And here’s the part you keep skipping: your own reasoning is also just an opinion. It’s fallible, incomplete, and shaped by your experience. The difference is that, on HN, most people don’t have much of a track record for others to weigh.
When someone has decades of relevant experience, that doesn’t make them right. It does mean their opinion carries more information than that of an anonymous commenter with no reputation beyond a username and a tone.
Plenty of pump and dumpers are already wealthy; what's your point? He's either doing it or he isn't, and his past employment doesn't dictate it one way or the other. His ability to "influence" through writing is salient to the discussion at hand, not some mitigating factor.
Internet -> Obvious value, more efficient communication, knowledge sharing, transactions etc.
AI -> Value is very obvious to me as a developer.
Blockchain -> ? What is the actual value? Something about decentralized finance and not having to trust anyone? And the tradeoff is every transaction costs $10 or more. It was always a dubious proposition with its "value" driven by speculative investment which fueled the hype machine.
Yeah there are parallels in that in all cases people got really excited about something tech and poured a bunch of money in, but the outcomes and actual amount of value derived can be wildly different.
You see that you're assessing AI from the depth of the AI bubble and coming to the same conclusion about AI as people who assess crypto from the depths of a crypto bubble came to, right?
The dotcom bubble was due to all the useless, speculative stuff people were doing with the internet, not the useful bits you referred to that are still around which we use today.
The AI bubble is coming from all the useless, speculative stuff people were doing with AI, not the useful bits you referred to that are still around which we use today.
... You see where I'm going with this, right?
Crypto use cases that are still around and get used today are not hard to find for anyone sincerely willing to accept that they exist. I've already listed a few in other posts. That's not my point though.
My point is there's a speculative bubble around AI, and that's got a lot of parallels to the speculative bubbles around crypto and dotcom. Everything you've said supports the idea that you're unaware that you're talking from inside a bubble.
> It was always a dubious proposition with its "value" driven by speculative investment which fueled the hype machine.
Explain to me - without speculation or hype - why we still need trillions more datacentres, power, water, money and everything else for AI, if we're already using it and it's already here and we're already getting the most out of it?
I said explain without speculation. You've just given 2 points that both reduce down to "potential but unquantifiable future benefits", or "speculation".
> There is no reason to believe this will slow down any time soon
What a joke this guy is. I can sit down and crank out a real, complex feature in a couple hours that would have previously taken days and ship it to the users of our AI platform who can then respond to RFQs in minutes where they would have previously spent hours matching descriptions to part numbers manually.
...and yet we still see these articles claiming LLMs are dying/overhyped/major issues/whatever.
Cool man, I'll just be over here building my AI based business with AI and solving real problems in the very real manufacturing sector.
1Password can be your 2FA and autofill those fields. It has a built-in scanner which will look at your screen and read the QR code (no separate device needed).
Two Factor doesn't mean 2 devices.
Two factor generally has been thought of as "something you know, and something you have."
Let's do a quick threat model on putting both passwords and MFA tokens in a 1password vault.
1Password employs a recovery key + password login by default. Logging into a vault on a new device requires either a device that already holds the encrypted vault plus your password, or knowledge of your password plus your recovery key (normally kept in a file, which makes it something you have). That's essentially traditional 2FA for enrolling a new device.
If someone steals your phone with 1Password installed, they need your 1Password password to access the credentials on the device. That means they'd already need both your factors, your phone (have) and your password (know), so the vault is still protected by 2FA.
If someone manages to fully root your computer, they can wait until you unlock your vault and then extract your credentials. But if you use traditional 2FA on a separate device, they can just as well wait until you log into the target app, then ride your session and get the same level of access to the target. There may be a small difference in effort or time required, but the same access level is possible, and either way the attacker needs very privileged access to your operating system. Rooting the device you use to log into services effectively grants single-factor access to those services whenever you use them.
There are some subtle differences between these, but outside of situations with very high security requirements, where you should be using YubiKeys or standalone MFA devices anyway, using 1Password for both OTP and password is very comparable to using a separate device for MFA.
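On the "two factor doesn't mean 2 devices" point: a TOTP code is derived purely from a shared secret plus the clock, so any holder of the secret, whether a phone app or a password-manager vault, produces identical codes. A minimal RFC 6238 sketch (the secret below is the RFC's published test vector, nothing 1Password-specific):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890", T=59s
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
assert totp(SECRET, for_time=59, digits=8) == "94287082"
```

Wherever that secret lives is your "something you have"; the arithmetic is the same whether it runs on a separate device or inside the vault.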
I'm a previous red teamer and currently a blue teamer.
yeah this was super impressive. If this is at the point where you can put an arbitrary object in front of it and ask it to move it somewhere, that's going to be huge for industrial automation type stuff I'd imagine.
I do wonder how much of that demo was pre-baked/trained though. Could they repeat the same thing with a banana? What if the table was more cluttered? What if there were two people in the frame?
Knowing as many people in the robotics space as I do, I suspect the demo may not be completely "pre-baked", but it is almost certainly highly selected. Often they'll try the demo many, many times until they get a clean run-through without mistakes. The circumstances are also likely pretty idealized: they pick objects and settings that they know it performs well in.
Not true. Boston Dynamics has been demoing long and complicated single take videos involving walking, running, jumping and manipulation, in less than ideal conditions.
You might be interested in "Text Embeddings Reveal (Almost) As Much As Text":
> We train our model to decode text embeddings from two state-of-the-art embedding models, and also show that our model can recover important personal information (full names) from a dataset of clinical notes.
"Quite easily" isn't true in most cases, but embeddings are sometimes reversible. We know this because programs like Stable Diffusion sometimes output near-perfect copies of training data when given the correct prompt, and generation of that image is based on word and image embeddings alone.
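As a toy illustration of why inversion is possible at all: for discrete inputs, an embedding is just a point you can search for. This is nothing like the paper's trained decoder, just nearest-neighbor lookup over a made-up table (the vocabulary and dimensions here are invented for the sketch):

```python
import math
import random

random.seed(0)
vocab = ["alice", "bob", "carol", "dave"]   # hypothetical vocabulary
# Toy 16-dimensional embedding table, one random vector per word.
E = {w: [random.gauss(0, 1) for _ in range(16)] for w in vocab}

def embed(word):
    return E[word]

def invert(vec):
    # Nearest-neighbor "inversion": return the word whose embedding is closest.
    return min(vocab, key=lambda w: math.dist(E[w], vec))

assert all(invert(embed(w)) == w for w in vocab)
```

Real inversion attacks have to cope with unknown inputs and approximate matches, which is what the trained decoder in the paper is for, but the underlying leak is the same: the vector pins down its input.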
Well... sure. But OpenAI and MSFT have gone to a lot of trouble to build up the mystique around GPT-4 by being secretive about its architecture and publishing papers with tantalizing phrases like "sparks of AGI" and so on. I think this type of thing provides a useful counterbalance.
>But OpenAI and MSFT have gone to a lot of trouble to build up the mystique around GPT-4 by being secretive about its architecture and publishing papers with tantalizing phrases like "sparks of AGI" and so on.
An LLM will never be AGI itself. They are word calculators. However, a word calculator is precisely the tool we were missing to be able to create AGI. I believe OpenAI will be left in the dust with this stuff, as federated agents built on open models connect and induce the singularity.
> However, a word calculator is precisely the tool we were missing to be able to create AGI
This seems like the kind of step that the person above you was complaining about.
The "emergent" features of LLMs, or LLMs even being a step in the direction of AGI, are entirely unproven so far. They are, however, powerful enough to spark the imagination and hypotheses of tons of amateur futurists (and many financial backers of such projects)
>The "emergent" features of LLMs, or LLMs even being a step in the direction of AGI, are entirely unproven so far. They are, however, powerful enough to spark the imagination and hypotheses of tons of amateur futurists (and many financial backers of such projects)
That's exactly what I was saying, that it's a mistake to ever think of LLMs as AI. They are the prefrontal cortex. The I/O mechanism. But we still need the spark of agency. The soul if you will. Point being that we can actually work on that for real now, since the language part has been handled.
This was my thought when we first wrote this back in... 2018 or whatever. The papers referenced sort of derive this technique in a way that feels rather roundabout. For the actual implementation we took the more direct approach... though I think we did switch from a two's complement to a sign/magnitude representation at one point, which allows us to dynamically vary the bit depth used and can save some space and computation time.
As far as performance goes, in this system we represent almost everything with compressed bitmaps, so there's some advantage to using them for integers and range queries as well: the output of a range query is naturally a bitmap, which can easily be combined with more typical categorical bitmaps when evaluating more complex queries.
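To make the "output of a range query is naturally a bitmap" point concrete, here's a toy bit-sliced evaluation. Plain Python ints stand in for the compressed row-id bitmaps, and it uses plain unsigned binary rather than the sign/magnitude representation mentioned above; a sketch of the general technique, not our implementation:

```python
def build_slices(values, width):
    """slices[j] is a bitmap: row-bit i is set iff bit j of values[i] is 1."""
    slices = [0] * width
    for i, v in enumerate(values):
        for j in range(width):
            if (v >> j) & 1:
                slices[j] |= 1 << i
    return slices

def rows_ge(slices, n_rows, c):
    """Bitmap of rows whose value is >= c, using only bitwise ops on slices."""
    gt = 0                       # rows already known strictly greater than c
    eq = (1 << n_rows) - 1       # rows whose bits match c's so far (MSB down)
    for j in reversed(range(len(slices))):
        if (c >> j) & 1:
            eq &= slices[j]          # a 0 bit here drops the row below c
        else:
            gt |= eq & slices[j]     # a 1 bit here pushes the row above c
            eq &= ~slices[j]
    return gt | eq

values = [3, 7, 1, 5, 6]
slices = build_slices(values, 3)
result = rows_ge(slices, len(values), 5)             # rows holding 7, 5, 6
assert result == sum(1 << i for i, v in enumerate(values) if v >= 5)
```

The returned int is itself a row bitmap, so it can be AND-ed directly with categorical bitmaps from the rest of the query, which is the advantage described above.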
Depends what you want out of life and what career you're moving to... you can become pretty competent in a career in tech in just a few years and it will likely become lucrative pretty quickly. I feel that there will likely be increasing demand for programmers/engineers for decades to come. Check out Steve Yegge's youtube show for thoughts on this... https://www.youtube.com/watch?v=C8332hz8c2s&list=PLZfuUWMTtM...