jetrink's comments

This is such a wonderfully unlikely story. The museum, known for its dinosaur displays, was drilling a borehole in its parking lot as part of a building improvement project. The 5cm borehole happened to go straight through the spine of a small dinosaur.

The Sandra and Woo webcomic had a dinosaur in the parking lot. O:-) https://www.sandraandwoo.com/2012/08/02/0399-shiver/

The Bitter Lesson is specifically about AI. Restated, the lesson is that over the long run, methods that leverage general computation (brute-force search and learning) consistently outperform systems built on extensive human-crafted knowledge. Examples: chess, Go, speech recognition, computer vision, machine translation, and on and on.


This is correct, though I'd add that it's not just about "AI" colloquially - it's a statement about any two optimization systems competing as they scale.

So any system that attacks the optimization with a general solver can scale better than heuristic or constrained-space solvers.

Until recently there were no general solvers at that scale.


I think it oversimplifies, though, and I think it's shortsighted to underfund the (harder) crafted systems on the basis of this observation, because when you're limited by scaling, that other research will save you.


It turns out that Woodland v. Hill is not about landscape photography.


I jumped over to the Wikipedia page of early blogger Justin Hall to see what he's up to. He has another distinction that he can probably claim: The longest recorded gap between registering a domain and finally using it to start a business.

"In September 2017, Hall began work as co-founder & Chief Technology Officer for bud.com, a California benefit corporation delivering recreational cannabis, built on a domain name he registered in 1994."


That reminded me of Orkut, a social networking product named after its creator, Orkut Büyükkökten.

So he just reused his personal domain name for the product! https://orkut.com/

https://en.wikipedia.org/wiki/Orkut


This reminds me of the joke about the guy who couldn't afford a vanity number plate for his car, so he changed his name to CK-16450.


Later, he held onto hello.com for years with a "coming soon! the next network from orkut!" page. Supposedly you could get an invite, but I don't know anybody who ever actually used it.


I think domains were even free in 1994. I think the owner of rob.com told me he just had to send in a form or something back then.


They were. I think it was 1995 when they started charging? I had dozens of domains. There was a simple text file form you had to fill out. Then they started charging $200/2yr for .com/.net/.org, and a lot of us let go of domains that ended up being worth tens of millions a few years later during the boom.

(The story at the time was that what killed the "free" era was Unilever mailing in 19,000 forms, one for each of their registered trademarks.)


It is an accurate depiction of how Chicago police operated, unfortunately[1]. In fact, one Chicago detective who tortured suspects went on to work as an interrogator at Guantanamo Bay[2]. It's terrible that the series would glamorize that behavior.

1. https://chicagoreader.com/news/the-police-torture-scandals-a...

2. https://en.wikipedia.org/wiki/Richard_Zuley


That sounds like a reasonable prediction to me if the LLM makers do nothing in response. However, I'll bet coding is the easiest area for which to generate synthetic training data. You could have an LLM generate 100k solutions to 10k programming problems in the target language and throw away the results that don't pass automated tests. Have humans grade the results that do pass the tests and use the best answers for future training. Repeat until you have a corpus of high quality code.
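A minimal sketch of that generate-filter-grade loop, assuming hypothetical generate_solution and human_grade callables and problems represented as dicts carrying their own test code:

    import os
    import subprocess
    import sys
    import tempfile

    def passes_tests(code: str, test_code: str) -> bool:
        """Run a candidate solution against its automated tests in a subprocess."""
        with tempfile.TemporaryDirectory() as tmp:
            path = os.path.join(tmp, "candidate.py")
            with open(path, "w") as f:
                f.write(code + "\n\n" + test_code)
            try:
                result = subprocess.run([sys.executable, path],
                                        capture_output=True, timeout=30)
            except subprocess.TimeoutExpired:
                return False
            return result.returncode == 0

    def build_corpus(problems, generate_solution, human_grade, attempts=10):
        """Keep only solutions that pass their tests, then let humans rank the survivors."""
        corpus = []
        for problem in problems:
            candidates = [generate_solution(problem["prompt"]) for _ in range(attempts)]
            passing = [c for c in candidates if passes_tests(c, problem["tests"])]
            corpus.extend(human_grade(problem, passing))  # keep only the best answers
        return corpus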


I just looked it up. The process involves modifying a fertilized egg or embryo to create a founder organism. The gene drive is designed to be inherited by more than 50% of the organism’s offspring and propagate through the population over successive generations. Because humans have fewer offspring and longer generation times compared to insects, it would take many generations and potentially hundreds of years for a gene drive introduced in a single human to spread widely through the population.
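A rough back-of-the-envelope model makes the timescale concrete. This sketch assumes deterministic random mating and ignores drift and selection; the population size, transmission bias, and generation time are illustrative numbers, not measurements:

    def drive_frequency(q0, transmission, generations):
        """Allele-frequency recursion for a gene drive under random mating.

        transmission is the probability a heterozygote passes on the drive
        allele; 0.5 would be ordinary Mendelian inheritance.
        """
        q, history = q0, [q0]
        for _ in range(generations):
            # DD parents always transmit the drive, Dd parents transmit it
            # with the biased probability, dd parents never do.
            q = q * q + 2 * q * (1 - q) * transmission
            history.append(q)
        return history

    # One heterozygous founder in a population of 100,000 breeding humans
    # contributes 1 drive allele out of 200,000.
    history = drive_frequency(q0=1 / 200_000, transmission=0.95, generations=60)
    gens = next(i for i, q in enumerate(history) if q >= 0.5)
    print(f"~{gens} generations to reach 50% allele frequency")
    print(f"at 25-30 years per generation: roughly {gens * 25}-{gens * 30} years")

Even with a strongly biased 95% transmission rate, this toy model needs on the order of twenty human generations, i.e. centuries, while an insect population can cycle through that many generations in a year or two.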


I apologize for my other reply - which was flagged - perhaps I was rude. You are wrong because there are ways to introduce a gene drive through pathogens - e.g. you introduce a self-propagating CRISPR payload via a pathogen vector, which makes it possible to completely saturate gen 0. Source: I worked in a leading biology lab, with biologists actually performing this research.


If you can get a pathogen vector to infect all of humanity you already have just about everything you need to cause massive damage; the gene drive doesn't make this situation appreciably worse.

(Speaking for myself, not SecureBio)


We are comparing guns and bullets. The fact that you can genetically engineer a pathogen vector to deliver a gene drive is not well known to the public.


We just recently had COVID, which was likely bioengineered (the lab leak is now "officially" considered a plausible explanation).


How could they make sure that the CRISPR payload survives replication?


Which replication? Of the virus or of gen 0?


Jevons paradox: https://en.wikipedia.org/wiki/Jevons_paradox

"In 1865, the English economist William Stanley Jevons observed that technological improvements that increased the efficiency of coal use led to the increased consumption of coal in a wide range of industries. He argued that, contrary to common intuition, technological progress could not be relied upon to reduce fuel consumption."


It's the same reason widening the road doesn't lead to less traffic.


> With web search, Claude has access to the latest events and information, boosting its accuracy on tasks that benefit from the most recent data.

I'm surprised that they only expect performance to improve for tasks involving recent information. I thought it was widely accepted that using an LLM to extract information from a document is much more reliable than asking it to recall information it was trained on. In particular, it is supposed to lead to fewer instances of inventing facts out of thin air. Is my understanding out of date?


I have found that for RAG use cases, where the source can be a document or web data, hallucinations can still occur. This is largely driven by the prompt and how well it aligns with the data available for processing and re-ranking.
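As one illustration of how much the prompt matters, here is a minimal sketch of a grounding prompt; llm_complete stands in for whatever completion API is in use, and the wording is an illustrative assumption rather than a known-good recipe:

    def grounded_answer(question, passages, llm_complete):
        """Ask the model to answer strictly from the retrieved passages."""
        context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
        prompt = (
            "Answer the question using ONLY the numbered passages below. "
            "Cite passage numbers for every claim, and reply 'not found in "
            "sources' if the passages do not contain the answer.\n\n"
            f"Passages:\n{context}\n\nQuestion: {question}\nAnswer:"
        )
        return llm_complete(prompt)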


> nor makes it easier to search for online.

Searching for 'pebble core 2 duo' already turns up a page of results related only to the watch[1] (including this very comment thread, ironically[2]). Search engines are very good these days.

1. https://imgur.com/TE3aaGY

2. https://imgur.com/l4aBszK

