The human brain has roughly 86 billion neurons, each connected to others by many thousands of synapses.
If enough action potentials from presynaptic neurons arrive within a short enough window of time, the postsynaptic neuron will _probably_ also fire.
We do not call this process "thinking".
So for instance:
[list of 10 items] -> Some Number
[list of 500 items] -> Some Number
Can anyone point me in the right direction?
A more detailed answer: LSTMs have three main modes of use: sequence-to-sequence (immediate), sequence-to-single-output, and sequence-to-sequence (delayed).
Sequence-to-sequence (immediate) is generally referred to as a "seq2seq model", if you want to google it. This is used, for instance, in Deep Speech. Essentially the network takes in a sequence and immediately generates a new sequence from it.
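For concreteness, here's a minimal Keras sketch of that mode, where the LSTM emits one output per input timestep. The 16-dimensional items, 64 units, and 10 output classes are placeholders I've picked, not anything from Deep Speech itself:

```python
import tensorflow as tf

# Sequence in, sequence out, one prediction per input timestep (return_sequences=True).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(None, 16)),                    # variable-length sequence of 16-dim items
    tf.keras.layers.LSTM(64, return_sequences=True),     # one hidden state per timestep
    tf.keras.layers.TimeDistributed(
        tf.keras.layers.Dense(10, activation="softmax")  # one prediction per timestep
    ),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
```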
Sequence-to-single-output is called "sequence classification", and is used in text sentiment analysis. The network takes in a sequence of items and comes up with some number.
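This is the shape that matches the `[list of N items] -> Some Number` mapping in the question. A minimal Keras sketch, assuming each item is a 16-dimensional vector (all sizes here are placeholders):

```python
import tensorflow as tf

# Sequence in, single number out: keep only the LSTM's final state.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(None, 16)),   # [list of N items], N can vary
    tf.keras.layers.LSTM(64),           # return_sequences=False: final state only
    tf.keras.layers.Dense(1),           # -> Some Number
])
model.compile(optimizer="adam", loss="mse")
```

Whether you feed in 10 items or 500, the input is a tensor of shape (batch, N, 16) and the output is one number per sequence.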
Sequence-to-sequence (delayed) is called sequence generation. An example use would be translation. The network takes in a sequence, thinks about it a bit, and then outputs a new sequence.
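The usual way to build the delayed variant is an encoder-decoder pair: the encoder reads the whole input, and the decoder then generates the output sequence starting from the encoder's final state. A rough Keras sketch, with made-up sizes and a teacher-forced decoder input (both are my assumptions, not part of the description above):

```python
import tensorflow as tf

num_classes = 1000  # placeholder output vocabulary size

# Encoder: read the whole input sequence, keep only the final states ("think about it a bit").
enc_in = tf.keras.Input(shape=(None, 16))
_, state_h, state_c = tf.keras.layers.LSTM(64, return_state=True)(enc_in)

# Decoder: generate the output sequence, conditioned on the encoder's final states.
dec_in = tf.keras.Input(shape=(None, 16))   # e.g. the target sequence shifted by one (teacher forcing)
dec_seq = tf.keras.layers.LSTM(64, return_sequences=True)(dec_in, initial_state=[state_h, state_c])
dec_out = tf.keras.layers.Dense(num_classes, activation="softmax")(dec_seq)

model = tf.keras.Model([enc_in, dec_in], dec_out)
model.compile(optimizer="adam", loss="categorical_crossentropy")
```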
There are other approaches that may work well. For instance, I've found "windowed convolution" over a sequence to work well for sentiment analysis, even better than LSTMs (and it's certainly easier and quicker to train). You essentially slide a "window" over the items and have it output "bits" (1 = true, 0 = false). For every window position you generate these bits, and then you max-pool (or just add) them over the entire length of the sequence. Of course this will never detect any pattern longer than the window.
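Here's roughly what I mean, sketched with a 1-D convolution in Keras; the window size of 5, the 64 filters, and the sigmoid output are arbitrary choices on my part:

```python
import tensorflow as tf

# Slide a window of 5 items over the sequence, produce 64 "bits" (features) per
# window position, then max-pool them over the whole sequence length.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(None, 16)),
    tf.keras.layers.Conv1D(filters=64, kernel_size=5, activation="relu"),  # one feature vector per window position
    tf.keras.layers.GlobalMaxPooling1D(),                                  # max-pool over the entire sequence
    tf.keras.layers.Dense(1, activation="sigmoid"),                        # e.g. a sentiment score
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```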