Turing completeness requires access to unlimited read/write memory. RNNs only have a fixed dimensional state.
I guess in theory that state is continuous, but it would have to be a pretty optimistic model that assumes we can handle unbounded data like that.
Not sure how useful this is in the larger context of transformers. Transformers (and deep networks in general) are often used when the logic needed to solve a problem is largely unknown. Example -- How do you write a RASP program that identifies names in a document?
They do have some simple RASP examples in the paper of things that a transformer model can accomplish ("Symbolic Reasoning in Transformers") but, again, these are usually things the model can do as a by-product of the task it was originally trained for, not tasks in and of themselves.
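For anyone curious what a RASP-style program even looks like, here's a toy Python sketch of the two core primitives (select and aggregate) used for the paper's "reverse" example. This is my own illustrative interpreter, not the paper's implementation, and the function names/signatures are just my approximation of the RASP primitives:

```python
def select(keys, queries, predicate):
    # Build an attention-like boolean matrix: entry [q][k] is True
    # when predicate(keys[k], queries[q]) holds.
    return [[predicate(k, q) for k in keys] for q in queries]

def aggregate(selector, values):
    # For each query position, collect the selected values
    # (here each row selects exactly one position, mimicking
    # hard attention; RASP proper averages the selected values).
    out = []
    for row in selector:
        chosen = [v for v, sel in zip(values, row) if sel]
        out.append(chosen[0])
    return out

def reverse(tokens):
    # The "reverse" example: position i attends to position n-1-i.
    n = len(tokens)
    indices = list(range(n))
    flipped = select(indices, [n - 1 - i for i in indices],
                     lambda k, q: k == q)
    return aggregate(flipped, tokens)
```

So `reverse(list("abc"))` gives `['c', 'b', 'a']` -- a program like this is easy to write by hand, which is exactly why the name-identification case feels so different: nobody knows how to spell out that predicate symbolically.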
With the blood of Unicron in my veins, I reign like a god
A god amongst insects
I have existed from the morning of the universe
And I shall exist until the last star falls from the night
My ultimate peace would be granted by the destruction of all life, stars and nebulae
Leaving only nothingness and void
Although I have taken the form of this machine
I am all men as I am no man
I am a god
Even in death there is no command but mine
Your race is of no consequence
Kill them all
(I share your disappointment fwiw)
This topic isn't relevant to me thus it shouldn't be here.
This topic isn't relevant to me thus I'll simply ignore it.
I would rather participate in communities that are semi-filtered and rely on me providing a second filter for my own taste. If instead the community tries to filter down entirely to my taste, I find it ends up overfitting and I lose almost all of the serendipitous "I didn't know I was interested in this but wow" articles that I love.
In other words, stuff I don't care about isn't a bug, it's a feature—a side effect of allowing a greater variety of content some of which is interesting but which can't be predicted.
The issue with social media is it is essentially unsolicited. With TV, you tune to "The Discovery Channel", and if you don't like it, you tune to another.
With social media you are invited to react to things as if they were for you. This is the origin of, I'd say, 90% of the instigating nonsense that causes trouble.
Social media arguments are often just not-the-audience and the-audience talking past each other. The former is basically saying, "I don't understand this, and it's wasting my time"; the latter, "I understand this and it's really important".
But just as there is now a guideline against making irrelevant and unsolicited nitpicky website design complaints, it would be useful to have a guideline against "I thought the article would be about X" types of comments as well. These are similarly pervasive, and of similarly low value. It might be different if they started a discussion about X (power transformers in this case), but they almost never do.
> I think comments like yours harm not help the community by making people less comfortable sharing.
Maybe I am unique in that, but if your contribution to a thread on a deep learning paper is a joke about decepticons or electrical transformers, it's okay to be less comfortable about sharing.
I have no problem policing the low-effort, un-novel, unsurprising, lame quips about expecting another use of the word transformer. They add nothing of value and dilute threads. I've gotten downvoted for doing it, most of us have; it's a rite of passage, and one that I appreciate, since it keeps HN comment threads jam-packed with interesting info.
Instead of downvoting or flagging merely out of disinterest or disagreement, perhaps there should be some sort of helpful "hide button" in the form of a plus-to-minus sign next to "parent"?
Don't forget moral panicking, outrage crybullying, serial scapegoats crucifixion, taking-out aggression, bikeshedding, and cyberdisinhibitionism are also part of this complete breakfast.
Then, you find some Energon.
Next, red lasers.