Hacker News | whoami_nr's comments

Quick calculator to visualize/simulate voice latency and compare API costs across voice model providers


Author here. Yeah, totally agreed. The more rigorous way to do this would be to use a fixed seed and temperature in a local model setting, then sample the logprobs and analyse that data.

I had an hour to kill and did this experiment.
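
For what it's worth, a minimal sketch of what that more rigorous setup could look like, assuming a local GPT-2 via Hugging Face transformers (illustrative only, not the setup from the post):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    torch.manual_seed(42)  # fixed seed for reproducibility
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "Pick a random number between 1 and 10. The number is"
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]   # logits for the next token

    temperature = 1.0
    probs = torch.softmax(logits / temperature, dim=-1)

    # Probability mass the model assigns to each candidate number token.
    for n in range(1, 11):
        tid = tok(" " + str(n), add_special_tokens=False)["input_ids"][0]
        print(n, round(probs[tid].item(), 4))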


Author here. I know it’s silly. I understand to some extent how they work. I was just doing this for fun. The whole thing took about an hour, and it all started when a friend asked me whether we could use them for a coin toss.


Sorry, I did not mean to downtalk the blog post :) I did not mean silly as in stupid. It's rather the title that I think is misleading. Can an LLM do randomness? Well, PRNGs are part of it, so the question boils down to whether PRNGs can do randomness. As mentioned here before, setting the temperature of, say, GPT-2 to zero makes the output deterministic. I was 99% sure that you as the author knew about this :)
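
To illustrate the temperature-zero point, a minimal sketch with GPT-2 and greedy decoding (purely illustrative):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    ids = tok("Pick a number between 1 and 10:", return_tensors="pt").input_ids

    # do_sample=False means greedy decoding (effectively temperature 0),
    # so every run produces exactly the same continuation.
    outs = [model.generate(ids, max_new_tokens=3, do_sample=False) for _ in range(3)]
    print([tok.decode(o[0, -3:]) for o in outs])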


Veritasium did a video on this. Most people guess 37 when asked to pick a number between 1 and 100.


100/e rounded is 37

Pretty good.
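
A quick check of the arithmetic:

    import math
    print(round(100 / math.e))  # 100 / 2.71828... ≈ 36.79, which rounds to 37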


Author here. I know 0-10 includes one extra even number. I also just did this for fun, so don't take the statistical significance aspect of it very seriously. To do this more rigorously, you would also need to run it multiple times with multiple temperature and top_p values.
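
Spelling out the parity imbalance, 0-10 has six even numbers against five odd ones:

    evens = [n for n in range(0, 11) if n % 2 == 0]  # [0, 2, 4, 6, 8, 10] -> six evens
    odds  = [n for n in range(0, 11) if n % 2 == 1]  # [1, 3, 5, 7, 9]     -> five odds
    print(len(evens), len(odds))                     # 6 5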


Cool experiment! My intuition suggests you would get a better result if you let the LLM generate tokens for a while before giving you an answer. It could be another experiment to see what kinds of instructions lead to better randomness. (And, to extend this, whether those instructions help humans generate better random numbers too.)
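
If anyone tries that follow-up, the two prompt variants could look something like this (hypothetical wording, untested):

    # Baseline: ask for the number directly.
    direct = "Pick a random number between 1 and 10. Reply with the number only."

    # "Think first": let the model generate some tokens before committing.
    think_first = (
        "Pick a random number between 1 and 10. First write a short paragraph "
        "of free-form text, then give your number on the last line."
    )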


In the summary at the top it says you used 0-10, but then for the prompt it says 1-10. I had assumed the summary was incorrect, but I guess it's the prompt that's wrong?


Small difference: it's called llms.txt.

https://llmstxt.org/


Author here. I just messed up while posting.


Yes, I am not American and I had no clue about the connotations.


The LLaDA paper (https://ml-gsai.github.io/LLaDA-demo/) implied strong bidirectional reasoning capabilities and improved performance on reversal tasks (where the model needs to reason backwards).

I made a logical leap from there.


Yeah, but you can backtrack your thinking. You also have an inner voice to plan out the next couple of words, reflect, and self-correct before uttering them.

