Hacker News | ashellunts's comments

What is the app shown in the screenshot in the article that displays a flight price?


Are you saying his giveaways are fake?


I mean, it was obvious to me that they couldn't have afforded to give away $10k per video at the start. Maybe now, with 400M subscribers, that would work just fine.


Very similar to my experience. I also still use it because the benefits for me as a developer are much more important, especially the ~30% Docker performance win over WSL on the same hardware.


Fully agree. Mine broke recently and I needed to buy a new phone. The S25 I have now is not much bigger, but even that small difference matters a lot. The 120Hz screen is nice, though.


I'm doing AoC for the first time and using it to try a functional LISP-like language (Clojure).

It's just the beginning, though, and maybe it's a matter of habit, but I have to say that reading a LISP-family language is really hard for my brain.


It takes a while. But if you continue to practice, it will eventually click.


That's exactly what I see in the Android app.


I see chain-of-thought responses in the ChatGPT Android app.


I tested the cipher example, and it got it right. But the "thinking logs" I see in the app look like a summary of the actual chain-of-thought messages, which are not visible.


The o1 models might try multiple approaches to come up with an answer, and only one of them might be correct; that's what they show in ChatGPT. So it just summarises the CoT and does not include the whole reasoning behind it.


From Telegram's privacy policy:

>If Telegram receives a court order that confirms you're a terror suspect, we may disclose your IP address and phone number to the relevant authorities. So far, this has never happened. When it does, we will include it in a semiannual transparency report published at: https://t.me/transparency.


What about a "child trafficking suspect", "arms dealer suspect", or "drug dealer suspect"?


The problem here is that both authoritarian and Western governments might request data about opposition activists under the pretext of their being "drug dealer suspects". For example, what if the US requests data on Snowden or Assange?


Some government officials also label environmental activists "ecoterrorists", which makes them fall into the "terror" category.


Are you sure it is not hallucinating? Most likely these models don't have access to the Internet.

edit: typo


Yes, I got way too excited and trigger-happy with my comment. It does not appear to browse the web and was just hallucinating. The hallucinations were surprisingly convincing for a couple of the pages I tested, but on examining the network requests, no fetches were made to the pages. Llama 3 was just a lot better at hallucinating convincing results than Tiny Llama.


Split it into smaller chunks and summarize them. Then summarize the summaries.
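Sketched in Python, with a stub in place of the real LLM call (the stub just truncates, so the example runs without any API; in practice `summarize` would hit a summarization model):

```python
def summarize(text: str, max_len: int = 60) -> str:
    """Placeholder for an LLM summarization call (stub: truncate)."""
    return text[:max_len]

def split_into_chunks(text: str, chunk_size: int) -> list[str]:
    """Cut the text into fixed-size pieces."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def hierarchical_summary(text: str, chunk_size: int = 200) -> str:
    # Base case: the text already fits in one chunk.
    if len(text) <= chunk_size:
        return summarize(text)
    # Map: summarize each chunk independently.
    chunk_summaries = [summarize(c) for c in split_into_chunks(text, chunk_size)]
    # Reduce: recurse on the joined summaries until everything fits.
    return hierarchical_summary(" ".join(chunk_summaries), chunk_size)
```

Each round shrinks the input, so the recursion terminates once the combined summaries fit in a single chunk.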


You might want to overlap the chunks in the first pass, since something could get lost at the chunk boundaries. I'm not any sort of expert on this, it just seems like an obvious pitfall of limited context length.
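A minimal sketch of what overlapped chunking could look like (the sizes here are arbitrary, just to show the boundary handling):

```python
def overlapping_chunks(text: str, chunk_size: int, overlap: int) -> list[str]:
    """Split text so each chunk repeats the tail of the previous one."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = overlapping_chunks("abcdefghij", chunk_size=4, overlap=2)
# Every boundary now appears inside at least one chunk, so a sentence
# cut off at the end of one chunk is seen whole at the start of the next.
```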


I really like this idea. It's basically applying the same principle used in image-based nets, i.e. sliding-window convolutional kernels, to text.


I built summarize.tech

Yes, it's a great idea, and I have a version that is basically a convolution over the transcript. It works much better than the current version (it can automatically create cohesive chapters and summaries of those chapters), but it consumes an order of magnitude more ChatGPT API calls, making it uneconomical (for now!).


I'm inspired that this is a side project, given everything you run. Kudos.


Thanks for the kind words. I built it on a few cross-country plane rides and now I mostly just leave it alone. The infrastructure and tooling we have these days is so incredible.


Can you please ELI5 the difference between the old and the new versions?


Sure. The old one just splits the transcript into 5-minute chunks and summarizes those. The reason this sucks is that each 5-minute chunk could contain multiple topics, or the same topic could be repeated across multiple chunks.

This dumb technique is actually pretty useful for a lot of people though, and has the advantages of being super easy to parallelize and requiring only 1 pass through the data.

The more advanced technique does a pass through large chunks of the transcript to create a list of chapters for each chunk. Then it combines them into a single canonical chapter list with timestamps (it usually takes a few tries for the model to get it right). Then it does a second pass through the transcript, summarizing the content of each chapter.

The end result is a lot more useful, but is way slower and more expensive.
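For what it's worth, the two passes could be sketched roughly like this in Python. The LLM calls are stubbed out, and the function names and data shapes are my guesses, not the actual summarize.tech implementation:

```python
def llm_propose_chapters(chunk: str, offset: int) -> list[tuple[int, str]]:
    """Stub for pass 1: propose (timestamp, title) pairs for one large chunk."""
    return [(offset, chunk.split(".")[0][:40])]

def llm_summarize(text: str) -> str:
    """Stub for pass 2: summarize the transcript span for one chapter."""
    return text[:80]

def summarize_with_chapters(transcript: str, chunk_size: int = 500):
    # Pass 1: propose chapters per large chunk, then merge into one
    # canonical list ordered by timestamp.
    chapters = []
    for i in range(0, len(transcript), chunk_size):
        chapters.extend(llm_propose_chapters(transcript[i:i + chunk_size], i))
    chapters.sort()
    # Pass 2: summarize the transcript span belonging to each chapter.
    result = []
    for n, (start, title) in enumerate(chapters):
        end = chapters[n + 1][0] if n + 1 < len(chapters) else len(transcript)
        result.append((title, llm_summarize(transcript[start:end])))
    return result

demo = summarize_with_chapters("First topic. " * 50)
```

In the real service each `llm_*` stub would be a ChatGPT prompt, which is where the extra cost and latency of the second pass come from.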


This is already standard practice.

