Don't Worry about LLMs (vickiboykis.com)
33 points by sebg on May 31, 2024 | 19 comments


Warning: this page loads 234 MB of data! Images are up to 7 MB each.

https://pagespeed.web.dev/analysis/https-vickiboykis-com-202...


"Does your page design improve when you replace every image with William Howard Taft?

If so, then, maybe all those images aren’t adding a lot to your article. At the very least, leave Taft there! You just admitted it looks better."

https://idlewords.com/talks/website_obesity.htm


> Please don't complain about tangential annoyances

https://news.ycombinator.com/newsguidelines.html


I found the images distracting, and in particular the tweet-sized chunks of text separated by a picture or two made the text hard to follow.

So I think it's a legitimate criticism, as this is a site for technical folks. Sure, it's not the topic of the post being commented on, but it's a good example to discuss.


Looks like it's a presentation transcription.


I find it useful. I'm on a pretty limited mobile data plan, and I imagine others might be too. Though I wish there were a way my browser could warn me.


Please let Dang do the moderating. I think he gets paid by the flag.


Thanks for the reminder, but anyone visiting this site on a restricted data plan may be in for a surprise. A website that downloads a quarter gigabyte of data without warning is more than an annoyance.

How do you suggest I warn others to avoid this site? I don't think it's suitable for HN in its current state.


This is not a tangential annoyance; it's akin to an NSFW warning.


Author here. That's definitely my bad and not an intended user experience. The text was initially meant as a transcript accompanying presentation slides. I've compressed the images so it should be at least slightly better now.


It's 2024. So what :P

The article itself is a confusing jumble of fiction, religion and tech though. I don't grok it.


Similar to the 1000 interns question from the article, I've heard:

"if AGI was available via an API tomorrow, what would you build?"

And it's difficult to come up with a great answer. Certainly the summarization, classification, translation, and named-entity-recognition type tasks are a great start (in fact, they're the main thing I use LLMs for). But surely there's a more impactful direction you could steer the tech, even if the resource were just 1,000 slightly-above-average interns.
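
For what it's worth, the classification case is small enough to sketch. This assumes the OpenAI Python client; any chat-completions-style API would do, and the model name and label set here are placeholders:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def classify(text: str) -> str:
        # Constrain the model to a fixed label set so the output is machine-usable.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Classify the text as one of: bug, feature, question. "
                            "Reply with the label only."},
                {"role": "user", "content": text},
            ],
        )
        return resp.choices[0].message.content.strip()

    print(classify("The app crashes when I rotate my phone."))  # expect: bug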


If AGI were available as an API tomorrow, I would have it come up with 1000 startup ideas, code them all, market them all, and hope at least 1 reaches PMF without leaving my couch.


Regime change as a service. Use AGI agents to apply a wide range of cyber attacks, propaganda, astroturfing, and psychological warfare in untraceable, deniable ways to cause the downfall of a foreign government. Like, let's say, Russia.


> "if AGI was available via an API tomorrow, what would you build?"

I guess I'd probably start with a bomb shelter, but it would take a while to work up the motivation. Maybe I'd just invite family over and look through photo albums together.


My (easy?) answer: a blockchain-based, owned-by-no-one news/discussion platform à la Reddit/HN, moderated by an AGI instructed to be fair, to fact-check, and to resist private-interest propaganda and deceit.

Can’t imagine any way to make any money on that, though.


Basically what X wants to be in the (far) future?


Unless I missed something, X has owners who can be coerced by a government, and its moderation instructions are not publicly open.


> There were too many open-ended options, a lot of people who were loud online. So, the developers decided to rent an Airbnb outside the city for a week so they could really focus, isolate, and ship some code. When they got together around a whiteboard, they frantically started researching what tools to build an LLM with ...

This ostensibly post-modern article illustrates a few key things:

* Gradient descent is indeed a key actor in the play, but it might be easier to explain e.g. classic TF-IDF search or Penn Treebank parsing and compare it with LLMs in order to grasp the ROI of modern techniques (see the TF-IDF sketch after this list). The hype has the crowd confused about relative improvement versus magic.

* Mozilla is definitely not going to build and launch an LLM-based search product that could rival Google. Extra fodder for the argument that Google has an unfair monopoly.

* ML deployment in the past decade has failed to formalize things like the "reprex" (reproducible input/output pairs) and the "vibe check" (ignoring the test/validation accuracy and throwing subjective inputs through the inference path); a sketch of the former follows below. The industry around LLMs is getting so "big" that these aren't just basic day-to-day skills but whole teams and maybe even dedicated start-ups. But will this hold in 3-5 years, after LLMs have evolved?
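
For the first point, here is a minimal sketch of the classic TF-IDF baseline using scikit-learn; the toy corpus and query are made up for illustration:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = [
        "LLMs generate text one token at a time",
        "TF-IDF weights terms by in-document frequency and corpus rarity",
        "Gradient descent minimizes a loss function step by step",
    ]
    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(docs)  # one sparse row per document

    query_vec = vectorizer.transform(["how does tf-idf rank documents"])
    scores = cosine_similarity(query_vec, doc_matrix).ravel()
    print(docs[scores.argmax()])  # prints the best-matching document

No gradient descent anywhere, yet it covers a surprising share of search use cases; that gap is the baseline any LLM-based retrieval has to beat.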
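
And for the "reprex" point, a sketch of what pinning reproducible input/output pairs might look like; the generate helper and the example pairs are hypothetical stand-ins for whatever wraps a real model's inference path:

    # Hypothetical: generate(prompt) -> str wraps the model's inference path.
    REPREXES = [
        ("Translate 'bonjour' to English.", "hello"),
        ("What is 2 + 2?", "4"),
    ]

    def check_reprexes(generate):
        # Re-run the pinned prompts and flag any drift from the expected output.
        for prompt, expected in REPREXES:
            output = generate(prompt)
            status = "ok" if expected.lower() in output.lower() else "DRIFT"
            print(f"[{status}] {prompt!r} -> {output!r}")

Run it after every model, prompt, or sampling change; it's a cheap regression net for an LLM deployment.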



