This is such a wonderfully unlikely story. The museum, known for its dinosaur displays, was drilling a borehole in its parking lot as part of a building improvement project. The 5cm borehole happened to go straight through the spine of a small dinosaur.
The Bitter Lesson is specifically about AI. Restated, the lesson is that over the long run, methods that leverage general computation (brute-force search and learning) consistently outperform systems built with extensive human-crafted knowledge. Examples: Chess, Go, speech recognition, computer vision, machine translation, and on and on.
I think it oversimplifies, though, and I think it's shortsighted to underfund the (harder) crafted systems on the basis of this observation, because when you're limited by scaling, the other research will save you.
I jumped over to the Wikipedia page of early blogger Justin Hall to see what he's up to. He has another distinction that he can probably claim: The longest recorded gap between registering a domain and finally using it to start a business.
"In September 2017, Hall began work as co-founder & Chief Technology Officer for bud.com, a California benefit corporation delivering recreational cannabis, built on a domain name he registered in 1994."
Later, he held onto hello.com for years with a "coming soon! the next network from orkut!" page. Supposedly you could get an invite, but I don't know anybody who ever actually used it.
They were. I think it was 1995 when they started charging? I had dozens of domains. There was a simple text-file form you had to type over. Then they started charging $200/2yr for .com/.net/.org, and a lot of us let go of domains that ended up being worth tens of millions a few years later during the boom.
(The story at the time about what killed the "free" was that Unilever mailed in 19,000 forms, one for each of their registered trademarks.)
It is an accurate depiction of how Chicago police operated, unfortunately. In fact, one Chicago detective who tortured suspects went on to work as an interrogator at Guantanamo Bay[2]. It's terrible that the series would glamorize that behavior.
That sounds like a reasonable prediction to me if the LLM makers do nothing in response. However, I'll bet coding is the easiest area for which to generate synthetic training data. You could have an LLM generate 100k solutions to 10k programming problems in the target language and throw away the results that don't pass automated tests. Have humans grade the results that do pass the tests and use the best answers for future training. Repeat until you have a corpus of high quality code.
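For concreteness, here is a rough sketch of that loop in Python. The callables (generate, run_tests, human_grade) are placeholders for whatever model-sampling, test-harness, and review tooling you'd actually plug in; this is just the shape of the pipeline, not anyone's real API.

    def build_synthetic_corpus(problems, generate, run_tests, human_grade,
                               samples_per_problem=10, keep_per_problem=1):
        """Return (problem, solution) pairs suitable for further training."""
        corpus = []
        for problem in problems:
            # Sample many candidate solutions from the current model.
            candidates = generate(problem, n=samples_per_problem)

            # Keep only candidates that pass the automated test suite.
            passing = [c for c in candidates if run_tests(problem, c)]
            if not passing:
                continue

            # Humans rank whatever survives the tests; keep the best answer(s).
            ranked = human_grade(problem, passing)
            corpus.extend((problem, c) for c in ranked[:keep_per_problem])
        return corpus

Each round's corpus feeds the next round of training, so the filter (tests plus human grading) is doing the real quality-control work.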
I just looked it up. The process involves modifying a fertilized egg or embryo to create a founder organism. The gene drive is designed to be inherited by more than 50% of the organism’s offspring and propagate through the population over successive generations. Because humans have fewer offspring and longer generation times compared to insects, it would take many generations and potentially hundreds of years for a gene drive introduced in a single human to spread widely through the population.
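To get a feel for the timescale, here is a toy deterministic model (ignoring drift and fitness costs) of how a homing drive's allele frequency grows under random mating. The parameter values are illustrative assumptions, not data: starting from a single founder, even a 90%-efficient drive needs on the order of fifteen-plus generations to become common, which at human generation times is several centuries.

    # Toy model of a homing gene drive spreading under random mating with no
    # fitness cost. "homing" is the chance a heterozygote converts its
    # wild-type allele, so carriers pass the drive to more than 50% of
    # offspring. Values below are purely illustrative.

    def drive_frequency_over_time(q0, homing, generations):
        """Return the drive-allele frequency for each generation."""
        q = q0
        history = [q]
        for _ in range(generations):
            p = 1.0 - q
            # Drive/drive parents always transmit the drive; heterozygotes
            # transmit it with probability (1 + homing) / 2.
            q = q * q + p * q * (1.0 + homing)
            history.append(q)
        return history

    # One heterozygous founder among ~10,000 breeding individuals, 90% homing.
    for gen, q in enumerate(drive_frequency_over_time(1 / 20000, 0.9, 25)):
        print(f"generation {gen:2d}: drive allele frequency {q:.4f}")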
I apologize for my other reply, which was flagged; perhaps I was rude. You are wrong because there are ways to introduce a gene drive through pathogens: e.g., you introduce a self-generating CRISPR payload through a pathogen vector for which it is possible to completely saturate gen 0. Source: I worked in a leading biology lab, with biologists actually performing this research.
If you can get a pathogen vector to infect all of humanity you already have just about everything you need to cause massive damage; the gene drive doesn't make this situation appreciably worse.
We are comparing guns and bullets. The fact that you can genetically engineer a pathogen vector to deliver a gene drive is not well known to the public.
"In 1865, the English economist William Stanley Jevons observed that technological improvements that increased the efficiency of coal use led to the increased consumption of coal in a wide range of industries. He argued that, contrary to common intuition, technological progress could not be relied upon to reduce fuel consumption."
> With web search, Claude has access to the latest events and information, boosting its accuracy on tasks that benefit from the most recent data.
I'm surprised that they only expect performance to improve for tasks involving recent information. I thought it was widely accepted that using an LLM to extract information from a document is much more reliable than asking it to recall information it was trained on. In particular, it is supposed to lead to fewer instances of inventing facts out of thin air. Is my understanding out of date?
I have found that for RAG use cases, whether the source is document or web data, hallucinations can still occur. In my experience this is largely driven by how the prompt is written and how well it aligns with the data that actually gets retrieved and re-ranked.
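One mitigation that has helped me is making the grounding explicit in the prompt and passing only the top re-ranked chunks. A rough sketch below; the instruction wording and helper names are my own assumptions, not any particular framework's API.

    GROUNDED_PROMPT = """Answer the question using ONLY the sources below.
    If the sources do not contain the answer, reply "not found in sources".
    Cite the source number for each claim.

    Sources:
    {sources}

    Question: {question}
    """

    def build_prompt(question, ranked_chunks, top_k=5):
        # Pass only the top re-ranked chunks so the model isn't asked to
        # reconcile marginally relevant text with the question.
        sources = "\n".join(
            f"[{i + 1}] {chunk}" for i, chunk in enumerate(ranked_chunks[:top_k])
        )
        return GROUNDED_PROMPT.format(sources=sources, question=question)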
Searching for 'pebble core 2 duo' already comes up with a page of results related only to the watch[1] (including this very comment thread, ironically[2]). Search engines are very good these days.