Hacker News | willj's comments

Can you say more about the approach you take for summarization? Are the papers short enough that you just put the whole thing in the context window of the model you’re using, or do you do anything fancy? I’ve tried out various summarization approaches (hierarchical, aspect-based, incremental refinement), and am curious what you found works best for your use case.
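For reference, a minimal sketch of the hierarchical approach mentioned above: summarize fixed-size chunks, then recursively summarize the joined summaries. The `summarize` callable is a hypothetical stand-in for an LLM call, and the chunk size is arbitrary.

```python
def hierarchical_summary(text, summarize, chunk_chars=4000):
    """Summarize chunks, then recursively summarize the joined summaries.

    Assumes summarize() returns something shorter than its input;
    otherwise the recursion may not terminate.
    """
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    if len(chunks) <= 1:
        return summarize(text)
    partials = [summarize(chunk) for chunk in chunks]
    return hierarchical_summary(" ".join(partials), summarize, chunk_chars)
```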

This is something I built over the holidays to support people having a hard time with the short days and early sunsets: https://sunshineoptimist.com.

For the past several years I would look up the day lengths and sunset times for my location and identify milestones like “first 5pm sunset”, “1 hour of daylight gained since the winter solstice”, etc. But that manual process also meant I was limited to sharing updates on just my location, and my friends only benefitted when I made a post. I wanted to make a site anyone could come to at any time to get an optimistic message and a milestone to look forward to.

Some features this has:

- Calculation of several possible optimistic headlines. No LLMs used here.

- Comparisons to the earliest sunset and shortest day of the year

- Careful consideration of optimistic messaging at all times of year, including after the summer solstice when daylight is being lost

- Static-only site, no ads or tracking. All calculations happen in the browser.
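As a sketch of the kind of calculation involved, day length can be derived from solar declination via the standard sunrise equation. This is my own approximation, not the site's actual code: it ignores atmospheric refraction and assumes a sea-level horizon.

```python
import math

def day_length_hours(latitude_deg, day_of_year):
    """Approximate hours of daylight using the sunrise equation."""
    # Solar declination in degrees (simple cosine approximation).
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    lat = math.radians(latitude_deg)
    dec = math.radians(decl)
    # Cosine of the hour angle at sunrise/sunset; clamp to handle
    # polar day (24 h) and polar night (0 h).
    cos_h = -math.tan(lat) * math.tan(dec)
    cos_h = max(-1.0, min(1.0, cos_h))
    hour_angle_deg = math.degrees(math.acos(cos_h))
    return 2.0 * hour_angle_deg / 15.0  # 15 degrees of hour angle per hour
```

Milestones like "1 hour of daylight gained since the winter solstice" then fall out of comparing `day_length_hours` across dates.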


Nice! Love the domain. Did you use Claude to design the landing page?

Thanks! I got some initial ideas from Nano Banana, actually, but then spent a while iterating on different layouts myself.

I think the models are so big that keeping many old versions around would take GPUs away from serving the latest models and reduce overall throughput, so providers phase older models out over time. However, the major providers usually offer a dated snapshot of each model and keep the latest 2-3 available.

This reminds me a bit of using LLM frameworks like LangChain, Haystack, etc., especially if you're only using them for the chat completions or responses APIs and not doing anything fancy.


DOOMscroll[1] for sure! I still play it since hearing about it on HN.

[1] https://ironicsans.ghost.io/doomscrolling-the-game/


Fun!


I think this ignores that the monopolies have the power to buy up any new competitors, or to drive them out of business using monopoly power. Regulatory hurdles are only one tool that (can) benefit monopolies.


I think that’s different. AlphaGo is using reinforcement learning in a context with a clear evaluation function: did a strategy lead to a win or a loss?


I said it's not as simple, but there are ways. E.g., you can generate more of your best-quality data, you can try to model the direction of quality or an objective, you can bring in minor human input at some points to judge which direction performs better, or you can objectively verify some of your input to use as a partial objective (code, math, logic, etc.).


Relatedly, the OCR component relies on PyMuPDF, which has a license that requires releasing source code, which isn’t possible for most commercial applications. Is there any plan to move away from PyMuPDF, or is there a way to use an alternative?


FWIW PyMuPDF doesn't do OCR. It extracts embedded text from a PDF, which in some cases is non-existent or was produced by poor-quality OCR (like some random implementation from whatever the document was scanned with).

This implementation bolts on Tesseract which IME is typically not the best available.


Author here. I'm very open to alternatives to PyMuPDF / Tesseract, because I agree the OCR results are suboptimal and PyMuPDF has a restrictive license. I tried the basic alternatives and found the results to be poor.


This article compares multiple solutions and recommends docTR (Apache License 2.0): https://source.opennews.org/articles/our-search-best-ocr-too...


I’d argue Bitcoin is Obscene Energy Demand.


If "obscene" means "I personally dislike demand for X", then sure. But if you use the actually useful definition, "demand for X will unavoidably collapse society", then you need to actually read the second link.


At least ChatGPT is writing essays.

Whereas Ethereum has clearly shown that you can have a safe chain without this craziness.

But from reddit to Bitcoin talk to Bitcoin core development, there's a handful of people that control everything and want the status quo preserved.


It's plagiarizing other people's essays.


It's plagiarizing essays in the same way that I'm plagiarizing dinosaur breath by using the same air molecules.


Where are the copyright lawsuits, then?


You've missed that The New York Times recently sued OpenAI and Microsoft, joined by several others?


IANAL and I haven't dug deeply into the lawsuits, but my understanding is that, using deliberate prompt engineering, they were able to get it to reproduce copyrighted material verbatim from its training set. That is obviously a problem, just as it would be if a human read the article and then reproduced part of it verbatim without attribution, but it's a very different argument from a blanket "everything AI generates is plagiarism."


Precisely. These headline repetitions as arguments do get tiring after a while.


Humans can have original ideas, since they possess reasoning ability, whereas large language models cannot, since they have none.


It doesn't? That's great! Why didn't you say so? And, of course, you have proof of that?


I'd suggest Chomsky for this:

https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chat...

of course there are others who disagree with him but they're wrong of course


Bitcoin is especially obscene because it counteracts efficiency gains. If a new Bitcoin miner comes out that is 10x more energy-efficient per hash, Bitcoin won't use 10x less energy; instead, the difficulty will adjust upward so the network keeps using a similar amount of energy.

With AI, on the other hand, efficiency gains help. If someone comes up with AI hardware that does inference or training 10x more efficiently, people will adopt it, and the energy used for the same amount of work will decrease.
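The difficulty-adjustment point can be illustrated with a toy model (my own simplification; real Bitcoin retargets every 2016 blocks, and the numbers here are made up). With a fixed power budget, a 10x efficiency gain means 10x the hash rate, so the difficulty retargets 10x higher and power draw stays flat:

```python
def expected_block_secs(difficulty, hashrate):
    # On average, difficulty * 2**32 hashes are needed per block.
    return difficulty * 2**32 / hashrate

def retarget(difficulty, hashrate, target_secs=600):
    # Scale difficulty so the expected block time returns to the target.
    return difficulty * target_secs / expected_block_secs(difficulty, hashrate)

power_watts = 1e9                         # miners' fixed power budget
for joules_per_hash in (1e-9, 1e-10):     # 10x efficiency improvement
    hashrate = power_watts / joules_per_hash
    difficulty = retarget(1.0, hashrate)
    # Block time is back on target each time, but power draw never changed.
```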


You are fundamentally misunderstanding the purpose of mining difficulty in Bitcoin. It is to keep the currency scarce and difficult to obtain. If it didn’t scale with improvements in technology, all the Bitcoin would quickly be mined and you’d have money printing like the fiat money we are trying to escape.


Yeah if I had to stack rank 'obscene' energy consumption technologies, Bitcoin would be number 1 and AI would be pretty low on the list.


Where are the happy offices these days? Which companies are the new “Google” who people are very eager to work for?


It's a depressing era with post-covid sugar crash, new wars, getting over inflation, job market being in the trash, etc. etc.

I'd argue there are no "happy offices" as a whole, and won't be for another ~10 years until everything settles and we're on an upwards trajectory (and sugar rush) again.


Why we as a society put up with this yo-yoing economic model will never make sense to me. Nobody likes recessions or layoffs, yet we're all complicit in them.


Economic shocks are simply unavoidable. Supposedly the goal is to dampen the yo-yo effect of them, e.g., control the rate of inflation and unemployment so it doesn't feel so chaotic. The COVID shutdown and the money printing to deal with it kind of worked on that front, considering just how disruptive it all was and how people are still going about their lives, contributing to GDP.


The covid money injection caused this bubble, though: these companies hired like crazy during that time and have now done layoffs back to a more normal level. It might have been a necessary stimulus for other sectors, but the tech sector really didn't need it, so there it just inflated a bubble.


I'm aware, but the money injection didn't come from a desire to inflate the bubble, more a desire to avoid the calamity from shutting down the world economy for 3 months.

I do think there's an issue where the Fed is absolutely terrified of asset prices dropping and contagion spreading on all fronts, now that they have found this intervention works, but that's kind of tangential to my point, that the goal actually is to avoid the yo-yo, with the chief metrics of unemployment and inflation to guide them.


Everyone likes bubbles and hiring sprees though. If people didn't apply for jobs during hiring sprees these bubbles wouldn't happen either.

