As far as we know there is no legal precedent; nevertheless, any IT expert witness should be able to testify in court that, as things stand right now, it is impossible to forge the blockchain timestamp.
It exposes jurors to the risk of harassment, intimidation and violence, both during and after the case, from people who don't like their verdict. And there will always be one side that doesn't like the verdict.
It puts them in the position where they have to consider their personal safety when they make a verdict, rather than just the evidence presented. A juror shouldn't be thinking "which verdict is less likely to get me killed?" when they're making a decision.
Violence is rightly against the law; we're all OK with that I think.
However, I do want a juror to think, "which verdict is likely to subject me to social ostracization?" I don't see anything wrong with that. Otherwise, the powerful can always force a mistrial with no consequences.
> However, I do want a juror to think, "which verdict is likely to subject me to social ostracization?" I don't see anything wrong with that.
The fact that it tends to increase the incentive for juries to act like lynch mobs is one major potential problem with that.
(Of course, the actual issue in the source article is about grand juries, not trial juries, so "verdict" isn't properly within the domain of concern -- but the same general principle is at play. Do we want juries acting based on the facts properly before them, or do we want them acting as a proxy for public trial-by-media? Because the latter is what wanting a juror to think "which verdict is likely to subject me to social ostracization" is asking for.)
This is not scientific fact since it is conceivable that real neurons are much more powerful than artificial neurons, but real neurons may also turn out to be much less powerful than artificial neurons. In any event, the above is certainly a plausible hypothesis.
Biological neurons are indeed more powerful than (most) artificial neural network models because these models discard important characteristics of biological neurons:
* Spiking: Most artificial models are 'rate-based', where they gloss over the spiking behaviour of neurons by only modelling the firing rate. This discards all the various kinds of spiking behaviours (intrinsically spiking, resonators, bursting, etc.) as well as the relative timing of spikes. The relative timing is the basis for spike-timing dependent plasticity (STDP), which enables Hebbian learning and long-term potentiation -- two of the ways that networks learn to wire themselves together and process information.
* Conduction delays: Biological neural networks have a delay between when a spike is generated at the axon hillock and when it arrives at the postsynaptic neuron's dendritic arbour. This delay acts like the delay-line memory in early computers: information can be 'stored' in transit for short periods of time (in the ballpark of 0.5-40ms). And because different axons have different delays, information can be integrated over time by having one axon with a short delay and one with a long delay both terminate on the same postsynaptic neuron.
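To make the delay-line idea concrete, here's a toy sketch (plain Python, not a biophysical model; all names and numbers are mine) of how two axons with different conduction delays let a downstream neuron detect that two events happened a fixed interval apart:

```python
# Toy illustration of conduction delays as short-term storage: a spike
# emitted earlier on a slow axon can arrive at the postsynaptic neuron
# at the same moment as a later spike on a fast axon.

def deliver(spike_times_ms, delay_ms):
    """Arrival times at the postsynaptic neuron for one axon."""
    return [t + delay_ms for t in spike_times_ms]

def coincidences(arrivals_a, arrivals_b, window_ms=1.0):
    """Times at which spikes from both axons arrive within a short window."""
    return [ta for ta in arrivals_a
            for tb in arrivals_b
            if abs(ta - tb) <= window_ms]

# Axon A: long delay (10 ms); axon B: short delay (5 ms).
# A fires at t=0 and B fires at t=5, so both spikes land at t=10.
hits = coincidences(deliver([0.0], 10.0), deliver([5.0], 5.0))
print(hits)  # [10.0]
```

The downstream neuron effectively 'sees' the 5 ms interval between the two source events, even though it only ever observes simultaneous arrivals.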
But on a computational power level, does that actually make them more powerful?
What I mean is that finite state machines are less powerful computationally than context-free grammars: an FSM cannot compute certain things that a CFG can. Further, a CFG can't compute certain things that a Turing machine can. But we do know that neural networks like the ones being used for deep learning can compute anything a Turing machine can, and vice versa. They're equivalent.
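To make the hierarchy concrete with a toy example of my own (not from the parent comment): recognising balanced parentheses to arbitrary depth is beyond any fixed finite state machine, because it needs unbounded memory, yet a single counter (a degenerate stack) suffices:

```python
def balanced(s: str) -> bool:
    """Recognise the context-free language of balanced parentheses.

    The counter can grow without bound, which is exactly the memory a
    finite state machine (fixed number of states) lacks.
    """
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:  # closing paren with nothing open
                return False
    return depth == 0

print(balanced('(()())'))  # True
print(balanced('())('))    # False
```

An FSM with N states must confuse two nesting depths once the input nests deeper than N, so some balanced string will be misclassified; the counter never runs out.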
So the real question is this: do those features (spiking, conduction delays) actually make biological neural networks capable of computing something that Turing Machines and Artificial Neural Networks cannot?
I hypothesize the answer is "no". A Turing machine could simulate any of the features you've mentioned, and therefore an ANN could also simulate them. (But I would love to be wrong about this; it would be amazing if human minds could do something that no machine will ever be capable of!)
I was speaking before on a neuron-for-neuron basis. In computation-theoretic terms, I presume a spiking neural network model such as Izhikevich's (2003) is as powerful as a rate-based model such as Graves et al.'s (2014), in that both are equivalent to a Turing machine.
(At least I assume Izhikevich's model can be, though I'm not aware of any proofs or demonstrations of it performing arbitrary computations.)
Also, keep in mind that saying something is 'Turing complete' only says that the machine can compute anything any other universal Turing machine can. It says nothing about how efficiently it can do so. For example, a conventional computer and a quantum computer can both do integer factorization, but a quantum computer can do it much more efficiently -- in polynomial time. A quantum computer is therefore more powerful, even if the two are equivalent machines in computation-theoretic terms.
Neural networks are universal function approximators, not Turing machines. They can theoretically learn any series of "if...then..." functions with enough neurons. But there are a lot of functions they can't represent very efficiently or without absurdly large numbers of neurons and training data.
Yes, but computation can be performed with a function approximator where each iteration is a function `f :: (state, input) -> (state', output)`. This is the basis of an architecture called the 'Neural Turing Machine' (http://arxiv.org/abs/1410.5401), and it is, indeed, Turing complete and can be trained through standard neural network training algorithms to perform arbitrary computations.
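To illustrate the `f :: (state, input) -> (state', output)` shape in miniature (ordinary Python standing in for a trained network; an actual Neural Turing Machine additionally *learns* the step function and reads/writes an external memory):

```python
# Computing by iterating a pure step function. This toy 'machine'
# emits the running parity of the bits it has seen so far.

def step(state, bit):
    new_state = state ^ bit      # flip parity on every 1 bit
    return new_state, new_state  # output the current parity

def run(f, state, inputs):
    """Thread the state through f once per input, collecting outputs."""
    outputs = []
    for x in inputs:
        state, out = f(state, x)
        outputs.append(out)
    return outputs

print(run(step, 0, [1, 0, 1, 1]))  # [1, 1, 0, 1]
```

Any transducer fits this skeleton; the NTM's trick is that `f` is a differentiable network, so the whole loop can be trained end to end.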
> So the real question is this: do those features (spiking, conduction delays) actually make biological neural networks capable of computing something that Turing Machines and Artificial Neural Networks cannot?
Are Turing machines proven to be able to compute anything computable, so isn't this known absolutely to be a "no"?
Well, ask this: does "computable" mean the upper limit of all things that can be computed ever, or simply the upper limit of what can be computed in the ways we know to compute things?
Maybe there is indeed a way to solve the halting problem using a type of computation we can't quite imagine yet. And maybe the human brain is capable of that kind of computation. I don't even know if we know the answers to those questions. I suspect not.
This is where computer science starts to get all philosophical, which is pretty awesome.
Thanks, a nice summary of something I'd intuitively pondered about this but never articulated. I've had arguments with other engineers on this topic but could never muster more than "but the models don't match the biology!"
Thanks. I want to be clear here that I'm not disparaging work in rate-based artificial neural network models or deep learning. I just wanted to dispel the idea that the so-called 'large deep neural networks' that this article talks about are as powerful neuron-for-neuron as biological networks. They're not.
But that's not to say they're not powerful. They're an interesting area of research and you can certainly do some interesting work with them.
Java can be okay for soft real-time applications, like games, as long as you're very careful about the lifetime of your objects.
The most recent versions of HotSpot, the most common JVM, have two memory pools for (non-permanent) objects: young and tenured. Objects start off 'young'; when they survive a few collections they become 'tenured'. Young objects are reclaimed by a minor collection, which is quick; tenured objects are reclaimed by a major collection, which stops the world for much longer. If you're writing a game, then minor collections are okay, but you want to avoid major collections at all costs.
This means that it's okay to produce temporary objects that have very limited scopes; e.g., they're allocated while processing a frame/game step and are discarded immediately. It's also okay to produce objects that survive forever, because they won't become garbage. The problem comes in the middle, if you make objects that last a while (significant fractions of a second or longer) but eventually become garbage. They have a chance of becoming tenured, and will build up until they trigger a major collection. At that point your game will stall for a while.
The other thing you'd want to do is tell the GC to optimize for a maximum pause time with `-XX:MaxGCPauseMillis=<nnn>` (by default it optimizes for throughput). For a game server, a maximum pause of something like 500ms would probably be unnoticeable to players.
Simulated annealing is gradient based. It's an extension of gradient descent and in the degenerate case (zero temperature) they're the same: it generates random neighbouring states, and if the fitness of that state is better than the current one then it jumps there. That is, it seeks the local minimum.
The key part that simulated annealing adds is that when the temperature is non-zero there's also a chance of jumping to worse states. The probability depends on how much worse the new state is and what the temperature is. It's unlikely to make the jump if it's much worse or if the temperature is too cold. The idea here is to jump out of local minima to (hopefully) find the global minimum.
The temperature is a knob that trades off between exploration (high temperature) and exploitation (low temperature). You start off with a high temperature and jump around a lot, hopefully seeking the general vicinity of the global minimum. As you do so, you gradually decrease the temperature and settle on a locally-optimal solution.
Let's say the fitness landscape looks like this:
\     B
 \   / \
  \A/   \
         \   /\
          \C/  \
                \   /
                 \D/
If you are at A and you're seeking the lowest point, then there's a probability that you will jump to B even though it is worse. The hope is that you'll then discover C and finally D, the global minimum. There's no downhill-only route from A to D, so gradient descent won't find it. But simulated annealing might.
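The accept/reject rule described above can be sketched in a few lines. This is a minimal illustration, assuming the standard Metropolis acceptance rule and a geometric cooling schedule; the function names, the test landscape, and all parameters are my own:

```python
import math
import random

def anneal(f, x0, neighbour, t0=1.0, cooling=0.995, steps=20000, seed=0):
    """Minimise f by simulated annealing with geometric cooling."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbour(x, rng)
        fy = f(y)
        # Always accept improvements; accept worse states with probability
        # exp(-(fy - fx) / t), which shrinks as t cools or as fy worsens.
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

def f(x):
    # Double well: shallow local minimum near x = +1, deeper global
    # minimum near x = -1 (the 0.3*x term tilts the landscape).
    return (x * x - 1) ** 2 + 0.3 * x

def neighbour(x, rng):
    return x + rng.gauss(0.0, 0.5)

best, fbest = anneal(f, 1.0, neighbour, t0=2.0)
```

Started in the shallow basin at x = 1, the non-zero temperature lets the search hop over the barrier at x = 0 and settle in the deeper basin near x = -1; a downhill-only search from the same start would stay put.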
The local minimization step does not have to be gradient based. You could use something like Nelder-Mead for the local minimization step, and still use an annealing schedule to decide whether to accept or reject that particular minimum.
Sure. I've used Dvorak about 99% of the time since about 2001-2002. Except on my phone; I use QWERTY there, because it's a completely separate skill and the different layouts don't make any difference when you're typing one-fingered (Android's Gesture typing). I actually just tried switching and typing a few sentences, and I keep defaulting to QWERTY gestures.
I can still type QWERTY if I need to, even though I have barely used it in the past 13 years or so, but it takes me a few minutes to adjust and get up to speed. This happens if I need to use someone else's machine, which doesn't happen a lot. The machine's owner wonders why I'm suddenly typing like an orangutan. But it goes away after maybe 10 minutes, and then I'm up to maybe 70% of my Dvorak speed, which is good enough for most short-term purposes. It still feels awkward to type QWERTY, so I try not to do it for more than a couple of minutes.
(Interestingly, most of the trouble when switching comes from punctuation; the muscle memory for punctuation seems to be stored separately in the brain. I noticed this when I was first learning Dvorak: I'd type 'v' instead of periods all the time, even once I was reasonably fluent with the letters. Now the same thing happens in reverse, where I'll type 'e' or 'w' instead of periods or commas if I'm typing QWERTY.)
Overall, the fact that I'm using a different keyboard layout isn't something that I think about much. I'd be surprised if I'd even thought about it more than twice in the last six months.
Also, Switzerland being in the Schengen area means they can recruit from any other Schengen country -- more-or-less the EU minus Britain and Ireland, plus Iceland, Switzerland and Norway -- without needing to get work visas or any other right-to-work documentation.
Schengen is about border controls for travel; it doesn't in itself confer the right to work. Rather, it's because Switzerland is a member of EFTA. The UK isn't one of the Schengen countries, for example, but its citizens can still work in Switzerland.
The 'compilation state' was wrapped up in a State monad which held a list of emitted instruction blocks, free registers, and so on. I didn't go as far as to wrap each of the instructions in their own functions to get an assembly-like DSL, but you can imagine defining functions like `add = emit $ ADD` for each instruction to get the same effect.
Believe it or not, this topic has come up on Hacker News before. To paraphrase my comment from then (https://news.ycombinator.com/item?id=7324897): while Moldovans speak a language that's very similar to Romanian, in a 2004 census 60% self-identified their primary language as 'Moldovan', as compared to 16.5% who say they speak 'Romanian'.
As linguists say, a language is a dialect with an army and a navy. Whether 'Moldovan' is its own language or simply a regional dialect of 'Romanian' is entirely a question of politics.
Sounds like Italy Italian versus Switzerland Italian - I can go to Switzerland and understand what they say, but some turns of phrase are weird (imagine calling a sale an "action", or saying "I mailed myself to Geneva" when you mean taking the bus).
> Believe it or not, this topic has come up on Hacker News before. To paraphrase my comment from then (https://news.ycombinator.com/item?id=7324897): while Moldovans speak a language that's very similar to Romanian, in a 2004 census 60% self-identified their primary language as 'Moldovan', as compared to 16.5% who say they speak 'Romanian'.
You've been refuted in the very comment thread you've linked to, lol.
For a very long time Romania existed as 4 smaller states, inhabited mainly by a people descended from the native Dacians[1,2] and the colonising Romans. Moldova, the geographical region, was one of these states.
Romania's steps towards unification:
* in 1600 Michael the Brave manages to unify all Romanian principalities, albeit very briefly
* in 1859 Wallachia and Moldavia unify by electing the same ruler
* Transylvania is added to the mix after WWI
* because of our initial German allegiance in WWII, we lose half of Moldova (now the Republic of Moldova) to the Soviet Union
The point of my short, not entirely accurate history lesson is that Romanians and Moldavians are the same friggin' people, who have had the same language for many, many centuries. Even if the language in the Republic of Moldova had diverged significantly since WWII (which it certainly has not), it would still be merely a dialect as opposed to a different language. Romanian and Moldavian are less different today than, e.g., German and its Austrian dialect are (hence why Moldavian is called a subdialect).
> As linguists say, a language is a dialect with an army and a navy. Whether 'Moldovan' is its own language or simply a regional dialect of 'Romanian' is entirely a question of politics.
Don't believe most elections/referendums from Moldova. There's an acute Russian influence in the Republic of Moldova, manifested as both systemic brainwashing of older/poorer people and control over politics and the economy, and that's corroborated by systemic electoral fraud (also present in Romania today [4,5] -- sigh).
(Moreover, rumour has it that Russia has armed forces conveniently stationed not too far from the border with Moldova.)
So, to reintegrate this datum into the ancestor post...
* The newspaper would advertise job openings for reporters with "5 years of experience writing in Moldovan" as a requirement. Reporters that list experience in "Romanian" are arbitrarily culled from consideration. Despite the requirement, 100% of the newspaper is written in English.
* The newspaper had a correspondent in Moldova about 3 years ago, who also wrote in English; that section of the new job description was blindly copied from the advertisement for his replacement, a position which was cancelled after 4 weeks for "business reasons". The newspaper loudly complains about a shortage of qualified reporters. They don't mention that the candidate they interviewed rejected their offer for its ridiculously low base pay.
* The reporter job is eventually outsourced to Moldova. Readers wonder why the local police blotter has so many Andreis, Tanyas, and Nicolais in it, and begin to worry about the Russian troops quartered east of the river.
Nice example of a bug caused by weak isolation. FWIW, Postgres has an interesting implementation of "serializable" which takes far fewer locks than MySQL, so may give you better performance while retaining the same isolation level.