
Neural Networks and Primes - amasad
https://repl.it/site/blog/guest-op-nn
======
tchitra
This seems extremely contrived and at best misleading, if it is meant to be
more than an exercise to teach someone to code. If there were a function that
could produce subsets of primes in a sequence via a C-infinity function (as
you are doing via your trained neural net), then it is extremely, extremely
unlikely that all of the known pseudo-randomness properties of prime numbers
and their k-point functions are true. In some ways, this would be a sort of
converse to the Green-Tao theorem --- one can 'easily' produce subsets of
primes contained in arithmetic or geometric sequences. Can you provide a
better justification for this project? I tend to find 'approximating primes
via neural networks' to be a silly combination of buzzwords that ignores all of
the known facts about primes. At best, this project appears to perform a
numerical experiment on the claim,

"For all epsilon > 0, there exists an easy to compute C-infinity function
(that can depend on epsilon) that can take an
arithmetic/geometric/arithmetico-geometric sequence and produce, with high
probability, a subset of primes of this sequence whose volume relative to all
primes in this sequence is at least 1 - epsilon"
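The quoted claim can be put in notation (my paraphrase; the smooth selector f_ε and the probability space are not pinned down in the original):

```latex
\forall \varepsilon > 0 \;\; \exists f_\varepsilon \in C^\infty
\text{ (easy to compute) such that, for a sequence } (a_k),
\qquad
S_\varepsilon := f_\varepsilon\big((a_k)\big) \subseteq \{a_k\} \cap \mathbb{P},
\quad \text{and} \quad
\Pr\!\left[\, \frac{|S_\varepsilon|}{|\{a_k\} \cap \mathbb{P}|} \ge 1 - \varepsilon \,\right]
\text{ is high},
```

where ℙ denotes the primes.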

Why should one believe that this is true?

~~~
Someone
I agree this approach is unlikely to be fruitful. I think the author doesn’t
realize how frequent primes are (for example, primes are a lot denser than
square numbers).

If you do the minimal to avoid obvious non-primes, skipping numbers divisible
by 2 or 5, you can expect to find an N-digit prime after checking about N
random N-digit numbers (1), so finding a 4,000-digit one after experimenting
for a while doesn’t indicate an ability to find primes.

(1) the density of primes around 10ⁿ is about _1/ln(10ⁿ)_, so you expect to
find a prime after _ln(10ⁿ)_ random samples, and

    ln(10ⁿ) = n * ln(10) ≈ 2.3 * n

Avoiding even numbers and multiples of five gives you back a factor of 2.5,
more than offsetting that factor of _ln(10)_.
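The footnote's estimate is easy to check empirically. A minimal sketch (the digit count, trial count, and Miller-Rabin witness set are my choices, not from the comment):

```python
import random

# These witnesses make Miller-Rabin deterministic for all n < 3.3 * 10**24,
# which comfortably covers 20-digit numbers.
WITNESSES = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)

def is_prime(n: int) -> bool:
    """Deterministic Miller-Rabin primality test for n < 3.3 * 10**24."""
    if n < 2:
        return False
    for p in WITNESSES:
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for a in WITNESSES:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def draws_until_prime(digits: int, rng: random.Random) -> int:
    """Draw random N-digit numbers, skipping multiples of 2 and 5 (the
    cheap sieve), until one is prime; return how many survived the sieve."""
    draws = 0
    while True:
        n = rng.randrange(10 ** (digits - 1), 10 ** digits)
        if n % 2 == 0 or n % 5 == 0:
            continue  # filtered for free; not counted as a draw
        draws += 1
        if is_prime(n):
            return draws

rng = random.Random(42)
digits, trials = 20, 200
avg = sum(draws_until_prime(digits, rng) for _ in range(trials)) / trials
# Footnote's prediction: about n * ln(10) / 2.5 ≈ 0.92 * n ≈ 18 draws
print(f"average draws to hit a {digits}-digit prime: {avg:.1f}")
```

With the seed fixed, the measured average lands close to the predicted ~18, i.e. roughly N sieved draws for an N-digit prime.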

~~~
gerdesj
The author is 16 and does not ask for quarter. This post is not going to set
the world alight, but it is a damn good effort.

"If you do the minimal to avoid obvious non-primes" \- I take it that is the
prompt for "I'm having a laugh" (rofl, lol etc)

------
tchened
Hi all, Repl.it employee here. It's worth noting that the author of the guest
blog post took on this project as part of his EPQ, or Extended Project
Qualification, an exploratory project undertaken by some high schoolers
(equivalent) in the UK. Repl.it took no part in the origins or the directions
of his work; Ollie just chose Repl.it as his tool of choice. He learned the
concepts, tools, and code on his own.

Regardless of the outcome of his research, the fact that a sixteen year old
felt empowered to tackle such an advanced topic and was able to teach himself
the tools to dive into it is pretty damn cool.

~~~
gerdesj
The internet is a nasty place and you lot should recognise it as such. I did
note this:

"As a self taught programmer of age 16, I knew from the start that taking on
the complex topic of neural networks and then trying to combine it with the
famously difficult field of prime numbers would be a challenge."

... and I read on and I am suitably impressed - good skills.

~~~
amvalo
This is like the mathematical version of trying to build a perpetual motion
machine for a science fair project. At 16, I may not have known enough to make
the GP's argument, but I would have had the intellectual humility to pick a
project I could reasonably understand.

------
amasad
Repl.it cofounder here. Part of what makes this project and others like it
really cool is that kids on Repl.it are really interested in doing things the
hard way and learning from first principles. Here is another project by a
13-year-old that implements neural nets from scratch with zero external
dependencies:
[https://repl.it/@pyelias/netlib](https://repl.it/@pyelias/netlib)

------
olfactory
Since primes are themselves a deep structure of the integers, it does not seem
possible for a neural network to offer any advantage. Framing prime "finding"
as a search through the larger space of positive integers is misleading.
Primes are themselves the factors of larger, non-prime numbers.

In chess, the search space is a tree of moves, and the algorithm eliminates
some subtrees as being irrelevant to playing the sorts of strategies that beat
humans in conventional time-bound contests upon which other algorithms have
been trained.

But with primes, the next largest prime is another atom that has the requisite
structural property of not being the product of any smaller primes. If there
were any other informational aspect to what makes a number prime, then it
would be derivable from the definition of primality.
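That last point can be made concrete: finding the next prime needs nothing beyond the definition itself. A toy sketch by trial division (the function name is mine):

```python
def next_prime(n: int) -> int:
    """Smallest prime greater than n: a number is prime exactly when no
    candidate in [2, sqrt] divides it -- the definition is the algorithm."""
    candidate = max(n + 1, 2)
    while any(candidate % d == 0 for d in range(2, int(candidate ** 0.5) + 1)):
        candidate += 1
    return candidate

print(next_prime(13))   # -> 17
print(next_prime(100))  # -> 101
```

There is no side channel here for a model to exploit; any extra "informational aspect" would have to be derivable from this same divisibility check.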

~~~
gerdesj
It's lovely to see you engaging a 16-year-old in a discussion about primes.
Crack on.

------
Someone
_”to search for prime number sequences, sequences of numbers with many prime
numbers along them, the classic example being the Mersenne primes, of the form
2^n - 1.”_

Many? The 34th Mersenne prime has n > 1,000,000, the 47th n > 40,000,000
([https://www.mersenne.org/primes/](https://www.mersenne.org/primes/)); we
know of only 50, and that might even be all of them (we don’t even know
whether infinitely many exist).
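The scarcity is visible even at small scale: the Lucas-Lehmer test decides whether 2^p - 1 is prime, and only a dozen prime exponents up to 130 survive it. A quick sketch (exponent bound is my choice):

```python
def lucas_lehmer(p: int) -> bool:
    """True iff the Mersenne number 2^p - 1 is prime, for prime p.
    Iterate s_0 = 4, s_{k+1} = s_k^2 - 2 (mod 2^p - 1); M_p is prime
    iff s_{p-2} == 0. The p == 2 case (M_2 = 3) is handled directly."""
    if p == 2:
        return True
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

def primes_up_to(limit: int) -> list[int]:
    """Sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, ok in enumerate(sieve) if ok]

exponents = [p for p in primes_up_to(130) if lucas_lehmer(p)]
print(exponents)  # -> [2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127]
```

Of the 31 primes up to 130, only 12 yield Mersenne primes; already p = 11 fails (2047 = 23 × 89).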

~~~
gerdesj
The OP is from a 16 year old. I trust you were worrying about MPs when you
were 16 ...

------
wjn0
An interesting idea. A paper from last year showed that neural networks and
rational functions efficiently approximate one another [1].

[1] [https://arxiv.org/abs/1706.03301](https://arxiv.org/abs/1706.03301)

~~~
gerdesj
"A paper from last year showed that neural networks and rational functions
efficiently approximate one another"

I recently came up with a hand waving session demonstrating that sinks and
toilets are able to drain water, provided that they are not blocked by a
neural network. However I could only get it to work for the degenerative
neural network and not the pretty silly type and certainly not a rational one.

~~~
wjn0
But did you write it up and put it on arXiv?

~~~
gerdesj
I'm an engin ... err technic ... OK a muckerabouterer. Not a scientist.

