
This very same site also supports Chinese, along with plenty of other languages! It's linked in the footer.

https://youglish.com/chinese


This makes me wonder what would happen if neural networks contained manually programmed components. It seems like trivial components, such as detecting DNA sequences, could be programmed in by manually setting the weights. The same could be done, for example, to give neural networks a maths component. Would the network, during training, discover and make use of these predefined components, or would it ignore them and make up its own ways of detecting DNA sequences?
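For instance, here's a minimal sketch (PyTorch assumed; the motif and sizes are invented for illustration) of what "manually setting the weights" could look like: one convolution filter whose weights are written by hand so it fires exactly where a fixed DNA motif occurs.

    import torch
    import torch.nn as nn

    BASES = "ACGT"
    MOTIF = "TATA"  # made-up motif, purely illustrative

    def one_hot(seq):
        # (4, len) one-hot encoding of a DNA string
        x = torch.zeros(4, len(seq))
        for i, base in enumerate(seq):
            x[BASES.index(base), i] = 1.0
        return x

    # One filter, kernel width = motif length, over the four base channels
    detector = nn.Conv1d(in_channels=4, out_channels=1, kernel_size=len(MOTIF))
    with torch.no_grad():
        detector.weight.zero_()
        for i, base in enumerate(MOTIF):
            detector.weight[0, BASES.index(base), i] = 1.0  # +1 for the expected base
        detector.bias.fill_(-(len(MOTIF) - 0.5))            # only a full match goes positive

    scores = detector(one_hot("GGTATACC").unsqueeze(0))
    print((scores > 0).squeeze())  # True exactly where "TATA" starts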


This is called feature engineering, if you want to look up the history and uses of this idea.

Edit: tokenising is a form of this; you're pre-transforming the data to save the model from having to learn patterns you already know are important.
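A toy sketch of that pre-transformation, with made-up features for a DNA string (GC content, presence of a TATA box, length) computed up front rather than left for the network to rediscover:

    def engineer_features(seq: str) -> list[float]:
        gc_content = (seq.count("G") + seq.count("C")) / len(seq)
        has_tata_box = 1.0 if "TATA" in seq else 0.0
        return [gc_content, has_tata_box, float(len(seq))]

    print(engineer_features("GGTATACC"))  # [0.5, 1.0, 8.0]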


You can manually program transformers:

https://srush.github.io/raspy/

I don't know if you can integrate them into a model. I think you might run out of space, since these aren't polysemantic and so would take up a lot more "room" than learned neurons.
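To give a feel for what "manually programming" a head means (this is only the idea behind RASP-style programs, not the RASPy library's actual API), here's a NumPy toy where the attention scores are written by hand so each position copies the token before it:

    import numpy as np

    tokens = np.array([10.0, 20.0, 30.0, 40.0])  # toy one-dimensional token "embeddings"
    n = len(tokens)

    scores = np.full((n, n), -1e9)   # hand-written attention scores
    scores[0, 0] = 0.0               # first position attends to itself
    for i in range(1, n):
        scores[i, i - 1] = 0.0       # every other position attends only to position i-1

    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    print(weights @ tokens)          # [10. 10. 20. 30.] -- each position copies the previous token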


In a way, this could be considered adding a speculative transformation of the input as part of the input to some layer, and the network deciding whether or not to use that transformation. It would be akin to a convolution layer in a CNN, albeit far more domain-specific. But I’m not sure how much research has been done on weird layers like this!
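Something like this rough sketch (PyTorch assumed; the extra "handcrafted" channel is just a stand-in for whatever domain-specific transform you had in mind), where the learned weights are free to use the extra channel or drive its contribution to zero:

    import torch
    import torch.nn as nn

    class LayerWithHandcraftedChannel(nn.Module):
        def __init__(self, dim: int, hidden: int):
            super().__init__()
            # +1 input feature for the hand-crafted channel
            self.linear = nn.Linear(dim + 1, hidden)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            handcrafted = x.abs().mean(dim=-1, keepdim=True)   # stand-in "domain" transform
            return torch.relu(self.linear(torch.cat([x, handcrafted], dim=-1)))

    layer = LayerWithHandcraftedChannel(dim=16, hidden=32)
    print(layer(torch.randn(4, 16)).shape)   # torch.Size([4, 32])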


This is indeed interesting. In certain use cases where precision is paramount, we might opt for manually crafted code for the computations. This allows us to be confident in the efficiency of our manual method, rather than relying on an LLM for such a specific task. However, it remains unclear whether this would be directly integrated into the network or simply be a tool at the LLM's disposal. Interestingly, this situation seems to parallel the choice between enhancing the human brain with something like Neuralink and simply equipping it with a calculator.
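The "tool at the LLM's disposal" half of that could be as simple as the sketch below; the tool-call format here is invented for illustration and not any particular provider's API.

    from decimal import Decimal

    def run_tool_call(call: dict) -> str:
        if call["tool"] == "calculator":
            a, op, b = Decimal(call["a"]), call["op"], Decimal(call["b"])
            result = {"add": a + b, "sub": a - b, "mul": a * b}[op]
            return str(result)
        raise ValueError(f"unknown tool {call['tool']!r}")

    # Pretend the model produced this instead of attempting the arithmetic itself:
    model_output = {"tool": "calculator", "op": "mul", "a": "1234.5", "b": "6789.1"}
    print(run_tool_call(model_output))   # 8381143.95 -- exact, no LLM arithmetic involved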


I wonder what the limitations are. Do LLMs have Turing completeness?


From the Readme, their eventual plan for the project is to serve the client through the web browser, which would mean that almost all tablets would be supported.


Didn't Apple drop support for old iOS devices so they can't access the web anymore? I believe Safari is useless even for local sites.


If you're referring to the outdated certificates, I installed Let's Encrypt's ISRG Root X1 Certificate onto my old iPad 4 and that seems to have taken care of it. Local sites served over HTTP never had any issues.


The way Windows handles it seems to be that hardware rendering of the pointer is turned off while you drag windows around. It was very obvious when I was using f.lux: the bright white pointer would turn yellow like the rest of the screen when dragging.

Maybe you could try turning the actual cursor invisible when dragging and instead render a custom drag cursor parented to the object being dragged.
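A quick tkinter sketch of that suggestion (assuming Tk accepts the "none" cursor name on the platform): hide the real cursor on press and move a canvas item that stands in for it, alongside the dragged object.

    import tkinter as tk

    root = tk.Tk()
    canvas = tk.Canvas(root, width=400, height=300, bg="white")
    canvas.pack()
    box = canvas.create_rectangle(50, 50, 120, 120, fill="steelblue")
    drag_cursor = canvas.create_text(0, 0, text="+", state="hidden")
    drag = {"active": False, "x": 0, "y": 0}

    def on_press(event):
        drag.update(active=True, x=event.x, y=event.y)
        canvas.config(cursor="none")                        # hide the hardware cursor
        canvas.itemconfigure(drag_cursor, state="normal")   # show the software one

    def on_drag(event):
        if not drag["active"]:
            return
        canvas.move(box, event.x - drag["x"], event.y - drag["y"])  # object follows pointer
        canvas.coords(drag_cursor, event.x, event.y)                # custom cursor follows too
        drag.update(x=event.x, y=event.y)

    def on_release(event):
        drag["active"] = False
        canvas.config(cursor="")                             # restore the real cursor
        canvas.itemconfigure(drag_cursor, state="hidden")

    canvas.tag_bind(box, "<ButtonPress-1>", on_press)
    canvas.bind("<B1-Motion>", on_drag)
    canvas.bind("<ButtonRelease-1>", on_release)
    root.mainloop()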

