windowshopping's comments

It's amazing to me how nobody seems to know about the short story "The Great Automatic Grammatizator" by Roald Dahl. Nobody got closer than him. I feel like I should be reading about it all the time, and no one seems to have ever heard of it.

“There are many other little refinements too, Mr Bohlen. You’ll see them all when you study the plans carefully. For example, there’s a trick that nearly every writer uses, of inserting at least one long, obscure word into each story. This makes the reader think that the man is very wise and clever. So I have the machine do the same thing. There’ll be a whole stack of long words stored away just for this purpose.”

“Where?”

“In the ‘word-memory’ section,” he said, epexegetically.

https://gwern.net/doc/fiction/science-fiction/1953-dahl-theg...


Roald

Autocorrect error

Grammatizator error

Roald Dahl also wrote a story about two dudes who wanted to try each other's wives without getting consent from said wives so they swapped places in the middle of darkness and then the next morning, one of the dude's wives said to her husband, "Holy shit, whatever you did last night was amazing. I never liked doing the hot dog dance before but if you can keep doing what you did last night, I'll always be down!"

Why would you out of nowhere spoil a book like that?

Because it's ancient and there's no social contract preventing spoilers after 8 weeks.

Because the horror of Dahl's adult stories is just as pervasive even when you know the ending. I've reread them many times and still get the same sense of impending doom, of fates barbarically twisted in the mind: what if it were true?

Yeah sounds like a real winner of a short story at the top of the priority list.

what was this called?

The Great Switcheroo

Might check out the site I built at cardtavern.com.


Interesting, this one did pop up on my radar a while back; I always keep a pulse on the playtester space :) Cardtavern strays further from the free-form play that simulates a kitchen-table game of Magic (piles, structured zones, etc.), much like untap.in does (as I mentioned, users want fast structured play and QOL mechanics built in). The OP's vision seems to be a table with some hide/reveal and shuffle mechanics; do what you want after that. Which I appreciate, it allows a bit of fun.


I don't quite understand how to use this, but if people are looking for a way to play magic digitally I might recommend the site I built at cardtavern.com.


You’ve now linked to your own thing three times in three comments in three minutes.

That’s too much.


Wait, I had no idea dhh was on the outs now. This is the first I've heard of this. I have to go look for more information about this. What did he do?


Not sure he's "on the outs"; he's on Shopify's board.

Sidekiq's solo dev (Mike Perham) has for many years made a generous donation to Ruby Central. He informed them that he didn't want his money to be spent platforming dhh at their conference, they ignored his request, he stopped his annual donations.

If you want to read about dhh's colorful blog posts and tweets: https://jakelazaroff.com/words/dhh-is-way-worse-than-i-thoug...


Colorful is an odd way to spell "vocally bigoted".


I get downvoted here when I call him a racist.


Me too, and because of that I feel it's even more important to use language like racist, white nationalist, and fascist when describing him and his ilk, because that's what they are. Softening the language only leads to those beliefs becoming more normalized than they already are.


He came out as a white nationalist [1]. And he's always been contentious.

[1] https://jakelazaroff.com/words/dhh-is-way-worse-than-i-thoug...


If

MINASWAN...

WTFIDHHSAFA???


I would recommend as a starting point this beautiful piece from November: https://okayfail.com/2025/in-praise-of-dhh.html


If you’d like to read, in his own words, his “coming out” as an ultra right wing racist piece of shit, feel free to look on his blog for the post titled “As I Remember London.”


You may enjoy the book Latin Alive; for me it was a revelation on how the Romance languages diverged and took their present forms.


Thank you so much for the recommendation! I think that's right up my alley and I'm gonna grab it right now.


I built a daily puzzles site at https://dailybaffle.com, and I'm working on promoting it and releasing the mobile app for it this month. Turns out it's a lot of work to promote things!


I've been able to get something like 25 interviews in 2 months despite having long gaps on my resume and nothing especially impressive to my name. So I suspect you might be going about this wrong. I haven't gotten an offer yet, that's another story, but getting the interviews hasn't been hard. Applying in NYC/SF, senior-only.


I think another big change is the offer rate. I've had plenty of interviews in recent years but almost no offers.


So what do you attribute your success in getting interviews to? What are you doing right, or better than other people?


I honestly have no idea. The last place I worked is pretty well-known. Not big tech, but a recognizable name to most people. I send out a lot of applications: those 25 interviews are the result of 150 applications in the last two months or so. And then I have my LinkedIn set to be discoverable and looking for a job. Basically just fiddle with the options under Visibility and Data Privacy in the LinkedIn settings and a bunch of people start reaching out to you immediately. I also think I have a nicely formatted resume, really readable.


So are the majority of these applications the result of recruiters finding you via LinkedIn, or have you been applying direct as well? What application path have most of the interviews come from?


Both equally. I apply to a lot of stuff on BuiltIn, mostly.


Thanks!


In my experience anyone who went to Ivy+ or worked at Big N never has any problem getting a job.


I did not go to an Ivy, but I did go to a recognizable engineering school in the northeast (not MIT).


So, NYU?


Location has always been a huge factor in these discussions. There are usually significantly fewer opportunities outside of hubs. It's a cart/horse problem: companies go to those hubs to hire because of the talent pool.


Senior level is doing the heavy lifting here.


Isn’t the person with 14 years experience at least senior? Or are you saying senior is low level enough to get interviews?


Yes, I’m not speaking about the person with 14 years of experience.

I’m saying that looking for senior level roles is much easier than roles that aren’t quite senior yet.


The part that eludes me is how you get from this to the capability to debug arbitrary coding problems. How does statistical inference become reasoning?

For a long time, it seemed the answer was it doesn't. But now, using Claude code daily, it seems it does.


IMO your question is the largest unknown in the ML research field (neural net interpretability is a related area), but the most basic explanation is "if we can always accurately guess the next 'correct' word, then we will always answer questions correctly".

An enormous amount of research+eng work (most of the work of frontier labs) is being poured into making that 'correct' modifier happen, rather than just predicting the next token from 'the internet' (naive original training corpus). This work takes the form of improved training data (e.g. expert annotations), human-feedback finetuning (e.g. RLHF), and most recently reinforcement learning (e.g. RLVR, meaning RL with verifiable rewards), where the model is trained to find the correct answer to a problem without 'token-level guidance'. RL for LLMs is a very hot research area and very tricky to solve correctly.
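To make the 'verifiable rewards' idea concrete, here is a toy sketch. The function name and the arithmetic task are my own illustration, not from any real RL framework; the point is only that a program, rather than a human rater, decides whether the model earns reward.

```python
# Toy sketch of RL with verifiable rewards (RLVR): a checker program
# verifies the model's final answer, and reward is granted only when
# it is correct. There is no token-level guidance along the way.

def verifiable_reward(model_answer: str, expected: int) -> float:
    """Return 1.0 if the answer parses to the known-correct value, else 0.0."""
    try:
        return 1.0 if int(model_answer.strip()) == expected else 0.0
    except ValueError:
        return 0.0  # unparseable answers get no reward

# e.g. training on "What is 7 * 6?" with verified answer 42:
print(verifiable_reward("42", 42))   # correct answer: rewarded
print(verifiable_reward("41", 42))   # wrong answer: no reward
print(verifiable_reward("idk", 42))  # unparseable: no reward
```

The trickiness the comment mentions shows up in practice because most interesting tasks don't have a checker this clean.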


Because it's not statistical inference on words or characters but rather stacked layers of statistical inference on ~arbitrarily complex semantic concepts which is then performed recursively.


This answer makes sense if you know that LLMs have layers; if you don't, it's not super informative.

If I were to describe this to a nontechnical person, I would say:

LLMs are big stacks of layers of "understanders" that each teach the next guy something.

Imagine you are making a large language model that has 4 layers. Each layer will talk to its immediate neighbor.

The first layer will get the bare minimum; in the LLMs of today, that's groups of letters that commonly appear together, called "tokens". This layer will try to derive a bit of meaning to pass to the next layer, such as grouping letters into words.

The next layer may be a little bit more semantic, for example interpreting that the word "hot" immediately followed by the word "dog" maps to a phrase "hot dog".

The layer after that, being a bit more intelligent given that its predecessors have already had their chances at smaller interpretations, may now try to group words into bigger blobs, such as treating "I want a hot dog" as one combined phrase rather than a set of separate concepts.

The final layer may do something even more intelligent afterward, like realize that this is a quote in a book.

The point is that each layer tries to add a little meaning for the next layer.

I want to stress this: the layers do not actually correspond to specific concepts the way I just expressed, the point is that each layer adds a bit more "semantic meaning" for the next layer.
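The analogy above can be sketched in code. This is a cartoon of the layering idea only (the layer names and rules are invented for illustration; real transformer layers learn fuzzy numeric representations, not hand-written rules like these):

```python
# Cartoon of stacked "understanders": each layer takes the previous
# layer's output and adds a little more meaning for the next one.

def layer1_tokens(text):
    """Raw text -> word-like tokens."""
    return text.lower().split()

def layer2_phrases(tokens):
    """Adjacent tokens -> phrases, e.g. 'hot' + 'dog' -> 'hot dog'."""
    out, i = [], 0
    while i < len(tokens):
        if tokens[i:i + 2] == ["hot", "dog"]:
            out.append("hot dog")
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

def layer3_intent(phrases):
    """Phrases -> a coarse reading of the whole sentence."""
    if "hot dog" in phrases and "want" in phrases:
        return "request: hot dog"
    return "statement"

# Each layer feeds the next, as the comment describes:
meaning = layer3_intent(layer2_phrases(layer1_tokens("I want a hot dog")))
print(meaning)  # request: hot dog
```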


DNNs aren't really "statistical" inference in the way most people would understand the term statistics. The underlying maths owes much more to calculus than statistics. The model isn't just encoding statistics about the text it was trained on, it's attempting to optimize a solution to the problem of picking the next token with all the complexity that goes into that.
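A minimal sketch of what "owes more to calculus" means: training nudges parameters downhill along the derivative of a loss, rather than tabulating frequencies. This one-parameter toy is obviously not a language model, but the update rule is the same one that drives DNN training at scale:

```python
# Toy gradient descent: minimize loss(w) = (w - 3)**2.
# The update w -= lr * dloss/dw is pure calculus; nothing here
# is a lookup table of statistics about the training data.

def grad(w):
    return 2 * (w - 3)  # derivative of (w - 3)**2

w = 0.0
for _ in range(200):
    w -= 0.1 * grad(w)  # step downhill along the gradient

print(round(w, 4))  # converges to 3.0, the loss minimum
```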


One problem is that "statistical inference" is overly reductive. Sure, there's a statistical aspect to the computations in a neural network, but there's more to it than that. As there is in the human brain.


This literally says nothing - are we supposed to infer that they are putting the product into maintenance mode and will no longer be developing new features for it? This is a masterpiece of corporate nullspeech.


> with an emphasis on maintaining quality and operational excellence rather than introducing new features

it sounds pretty clear that it's in maintenance mode


It’s clear enough but they aren’t going out of their way to make it obvious. It’s definitely fluffed up / corporately sanitized.


Damage control to limit the rush for the exits?


I love the idea of this site but have always been disappointed by the fact that it's more of a slideshow than actual animations. You have to do a fair bit of interpolation if you aren't experienced.

