> You can ask them to reverse numbers or re-arrange words and they'll faceplant in the same way as soon as the input gets beyond a small threshold. Here surely there wouldn't be an issue with tokenization.
My guess is that the training data contains many short pairs of forward and backward sequences, but none beyond a certain length, because the number of possible sequences grows exponentially with length. If so, there's no actual reversing going on; the LLM is using the training data as a lookup table.
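For scale, here is a rough sketch of the combinatorics behind that guess (the alphabet size and lengths are illustrative assumptions, not measurements of any training set). There are 10^n distinct digit strings of length n, so a web-scale corpus can plausibly contain every short forward/backward pair but only a vanishing fraction of long ones — even though reversal itself is a trivial O(n) operation:

```python
def distinct_sequences(alphabet_size: int, length: int) -> int:
    """Count the distinct strings of a given length over the alphabet."""
    return alphabet_size ** length

# Short digit strings are easily covered by a large corpus;
# long ones cannot all appear even once.
for n in (3, 6, 20):
    print(f"length {n}: {distinct_sequences(10, n):,} possible strings")

# The operation itself is trivial when done algorithmically:
print("8613027"[::-1])  # prints "7203168"
```

At length 20 there are 10^20 possible digit strings, far more than any corpus could enumerate, which is consistent with the failure threshold described above.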
This should work fantastically in theory, since differing vocabulary (not grammar) is the main factor that determines the difficulty of a new language. Deferring this primary obstacle so one can ease into it sounds genius to me. It also agrees with the method hyped by Steve Kaufman, where one should read and speak level-appropriate material.
No, it is because the probability of arriving at a correct answer increases with the number of members in the group, but only when each individual's probability of reaching the correct conclusion is higher than 50%. A group of smart people is smarter than an individual. The opposite is true too: if the individual probability is less than 50%, then the group is dumber than the individual.
The question must also be within all of their domains of expertise for the 50% to have any meaning. You can't just assemble a "room full of smart people": you'll arrive at suboptimal outcomes, because consensus relies on the lowest common denominator of understanding, which between experts in differing fields can be pretty low.
In the context of Condorcet's jury theorem, the percentages refer to the chance of voting for the correct outcome. Think of a legal trial: there, the meaning of "50%" is unambiguous.
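The threshold effect can be computed directly under the theorem's assumptions (independent voters, simple majority vote; the particular p values and jury sizes below are illustrative choices, not from the thread):

```python
from math import comb

def majority_correct(p: float, n: int) -> float:
    """Probability that a simple majority of n independent voters,
    each correct with probability p, votes for the correct outcome.
    Uses an odd n so there are no ties."""
    assert n % 2 == 1, "use an odd number of voters to avoid ties"
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Individually better than chance: the group beats the individual.
print(majority_correct(0.6, 1), majority_correct(0.6, 101))

# Individually worse than chance: the group is worse than the individual.
print(majority_correct(0.4, 1), majority_correct(0.4, 101))
```

With p = 0.6 the majority of 101 voters is correct far more often than any single voter; with p = 0.4 the same majority is correct far less often, which is exactly the "smarter group / dumber group" symmetry described above.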
Hello HN — This is a desktop app that accepts typed text and synthesizes a handwritten version of it, using recurrent neural networks in real time. The network is implemented as a library (included in the Releases section), written in 100% Rust. The GUI is based on the free Sciter.JS library, which supports SVG and fancy controls. All of this is an unofficial port of https://calligrapher.ai
Clicking the download button at the top left saves the output as an SVG.
Hey HN — This is a simple web app that allows you to draw an alphabet, type some text, and then have the text written in your own handwriting. I’m happy to hear feedback or answer questions about how it works!
P.S. You don't need to draw the entire alphabet, only the letters your text uses.