Hacker News

That'd be me (fellow HNer and Belgian programmer)... Very happy to see this upvoted to HN's frontpage! Any questions welcome, but I cannot post the solution until the time-capsule opening ceremony on the 15th of May : )

EDIT: WIRED article just in. Haven't read it yet. Link if anyone is interested: https://www.wired.com/story/a-programmer-solved-a-20-year-ol...

EDIT2: I forgot to mention that the mining pool Antpool posted a message in the coinbase data of Bitcoin's block 573138 saying "Congrats Bernard Fabrot for solving LCS35!". My brother tried to time this with the press release, which was supposed to come out on Friday, but the press release got pushed back to today/Monday. So yeah, the coinbase data of block 573138 is kinda very cool (it requires the cooperation of a mining pool because it's not just in a transaction but in the block itself). TYVM to everybody at Antpool! Big thanks for that : )




>Although Rivest initially released the puzzle’s code in Java, Fabrot realized it could be solved faster if he used the GNU Multiple Precision Arithmetic Library, free software written in C for doing “precise arithmetic.” So Fabrot dedicated one of the CPU cores on his home desktop computer to running squaring operations in an attempt to solve the puzzle. He says his computer was running the operation 24/7, except when he would have to leave on vacation or there was a power outage.

It seems as if your 2014 comment hinted at this project. Applause for keeping this tight-lipped all those years.

2014 comment: >Java is quite fast compared to most languages but for some things it is still terribly slow compared to C / C++. I had to do some big-integer crunching lately and tried: Java, Go, Go with GMP, C (GMP) and Haskell (using GMP under the hood). Eventually I went for good old C with the latest GMP version for it was the fastest for my use case (with Haskell and Go with GMP coming close but still not as fast as C). Java was, for the computation I needed to do... Eight times slower than C! I know I was hitting an edge case for a very specific kind of computation but still. [0]

[0]: https://news.ycombinator.com/item?id=7979006


Just want to add that GMP is not "C". The critical parts are written in assembly. It's not really possible to do fast multiple-precision arithmetic in plain C: you need direct access to add-with-carry instructions and the like.


Yes, access to hardware intrinsics is what can really accelerate arithmetic - C makes this pretty easy though.

As an aside, .NET Core 3.0 will deliver access to intrinsics - not sure if Java or any other managed platform already has this?


Your nick sounds familiar: "richtea"... I'd say I remember it, but I have no idea from where.

Yeah, I'm sorry that my comment from late 2014 gave the false impression that I said GMP was written in C. What I meant was that, when I did the testing, it was slightly faster to call GMP from C than from Haskell, Clojure, or Java. I remember I tried all of these back then, and calling GMP from C was a bit more efficient. I now know there are ways to call libs like this from Java without paying the old foreign-function-call penalty, but I didn't know about that in 2014. So I called GMP from C because it was faster. From memory I'd have said GMP was C with inline assembly, but I may be wrong ; )


Most of it is just straight up assembly. I don't think there's much inline assembly because that's hard to maintain for multiple platforms.

Rich Tea is a British biscuit. Maybe that's why you remember it?


> Bernard Fabrot spent the last three and a half years computing the solution to a puzzle

TacticalCoder's last HN comment is from Dec 5, 2015. Checks out.

Kudos to you, well done!


I actually thought this might get posted to HN, so I went through a lot of pain to find a way to get my HN login back (I basically had no access to the email address with which I opened the account here: I had to dive deep into a Google Suite tied to a domain name to find a way into the user account and eventually reset my HN password). I'm glad I did it in time and didn't miss this thread!


Could have used the world's best 2fa. "I can prove my identity because I have the answer to LCS35".

Pretty sure @dang would have accepted that!


Well, you sure seem like the kind of guy who will go through a lot of pain to get quick results.

Was this a side project? Did it require much attention after the initial setup?


> Was this a side project? Did it require much attention after the initial setup?

Totally a side project.

It hardly required any attention: just making sure the computation was running and backing up the intermediate results once in a while (I had backups of the intermediate results everywhere: remote servers, USB keys, DVDs, other computers at home, email, etc.).

I only needed to do a few tiny steps once I found the secret message (which contains a number that allows computing one of the two prime factors).


Did you know for certain that your program would solve the puzzle at some point? Did you have an estimate of the date?


> Did you know for certain that your program would solve the puzzle at some point?

The description of the problem itself explains a way to verify that intermediate results are correct.
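The idea from the puzzle description can be sketched in a few lines of Python (this is only an illustration of the trick, not the solver's C/GMP code; the prime `c` and the check interval are my own choices):

```python
# Run the squarings modulo c*n for a small extra prime c. The running
# value reduced mod c can then be verified cheaply, because
# 2^(2^k) mod c == 2^(2^k mod (c-1)) mod c by Fermat's little theorem.

def checked_squarings(n, t, c=1_000_003, check_every=1_000):
    """Compute 2^(2^t) mod n, spot-checking intermediate results mod c."""
    cn = c * n
    w = 2  # w == 2^(2^k) mod c*n after k squarings
    for k in range(1, t + 1):
        w = w * w % cn
        if k % check_every == 0:
            expected = pow(2, pow(2, k, c - 1), c)
            assert w % c == expected, f"corrupted state at step {k}"
    return w % n
```

A corrupted checkpoint would fail the cheap mod-c check long before the end of a multi-year run.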

> Did you have an estimate of the date?

Yes, and that one is easy: I wrote a little piece of code (in Clojure; whatever, it's unrelated to finding the solution) which would check which intermediate result the computation had reached and then compute an "ETA".

Fun story: I'd regularly send my brother something like this...

  {:time-left-in-years 1.239341704718417
   :eta "1 year 2 month(s) 26 day(s)"
   :percentage-done 0.6284244211583109}

(that's an actual estimate I found in my old emails)

: )
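A minimal sketch of such an ETA calculator (the solver's tool was in Clojure; the Python below and its rate parameter are purely illustrative):

```python
# Total number of squarings in LCS35, from the puzzle description.
TOTAL_ITERATIONS = 79_685_186_856_218

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def eta(iterations_done, iterations_per_second):
    """Estimate remaining time from the latest checkpoint."""
    remaining = TOTAL_ITERATIONS - iterations_done
    years_left = remaining / iterations_per_second / SECONDS_PER_YEAR
    return {
        "time-left-in-years": years_left,
        "percentage-done": iterations_done / TOTAL_ITERATIONS,
    }
```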


> TacticalCoder's last HN comment is from Dec 5, 2015. Checks out.

I didn't realize... Your comment is actually extremely funny: I love it : )

Yeah, it's more or less when I started computing that. This is hilarious.


I just want to be a part of this thread. Congrats :)


Whoa, are you the Bernard Fabrot that wrote intro to Linux books in the late '90s early '00s (https://www.amazon.com/Bernard-Fabrot/e/B004N6XBQC/ref=ntt_d...)? If yes, I have to say thank you very much — I owe you my career!


And fun fact: that (French) book was actually typeset in LaTeX, but I had to fight big time with my publisher/editor so that they'd accept LaTeX: they wanted me to typeset it using QuarkXPress. I knew how to use QuarkXPress, but I wanted that book to be typeset from Linux, with LaTeX. And eventually they accepted ^ ^


That's interesting. The fun fact is that nobody uses QuarkXPress anymore (almost everyone in the design world switched to InDesign) while LaTeX... is eternal. So, wise choice.


That'd be me too!

I'd love to hear more about it: you can email me bernard @ fabrot dot com


Hi Bernard. Can you shed some light on why Rivest's prediction that this puzzle would take 35 years of computation was so wildly off? In the paper he predicts that clock speeds would reach 10 GHz by 2012, and he predicts another five-fold speed improvement by 2034. I am not sure what the actual technological improvements have been since 1999, but it seems our processors are way below his predictions of where we would be in 2012, let alone 2034. He states:

> "We estimate that the puzzle will require 35 years of continuous computation to solve, with the computer being replaced every year by the next fastest model available. Most of the work will really be done in the last few years, however."

So how were you able to perform a calculation that should have taken 35 years in just 4? Where did Rivest get his assumptions wrong?


You still need the same number of iterations (79 685 186 856 218)[1] to solve the problem, but it's now faster because:

SIMD instructions[2] lowered the number of CPU cycles needed per operation.

Novel algorithms[3] lowered the number of operations needed per iteration.

[1] https://people.csail.mit.edu/rivest/lcs35-puzzle-description... (short description at the bottom).

[2] https://en.wikipedia.org/wiki/SIMD

[3] https://gmplib.org/manual/Algorithms.html


The problem appears to be sequential, so how would SIMD help? Also, what advances in squaring algorithms have there been specifically? None of the links you have provided appear relevant.


The numbers involved are quite large. They don't fit into a single computer word.

The problem is designed so that iterations have to be sequential, but a single iteration can use parallelism.


I see this problem as simultaneously a cryptographic challenge and a DevOps one.

So mine isn't a cryptographic question, but I'd love to know what steps you took to ensure the integrity and uptime of your consumer i7 hardware, and to capture each partial step of the solution, while you ran your algorithm.

EDIT: also, congratulations! (How did I leave that out!?)


I'm no cryptographer so it's good news you don't have any cryptographic question ; )

I basically used my everyday workstation. There were 79 trillion operations to be made, and I'd save the intermediate result every 1 billion iterations (so approximately every 22 minutes I'd save a result).

The computation cannot be parallelized, and the i7 I was using has four cores. So basically, instead of turning it off every night, I'd simply leave it running 24/7.

I'd simply back up the files with the intermediate results once in a while. I rebooted the computer about 50 times in 4 years, and for about 3 and a half of those years it was computing towards the solution.

But I coded (totally unrelated things), compiled things from source, browsed the web, did my email, etc. from the very workstation which computed the solution.

The computation is easy to relaunch from any intermediate step.
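A checkpoint/resume loop along these lines could look like the following (a hypothetical Python sketch, not the actual solver; the file format, name, and interval are invented for illustration):

```python
import os

def run(n, t, path, checkpoint_every=1_000_000_000):
    """Compute 2^(2^t) mod n, resuming from `path` if it exists.

    The whole state is just the pair (k, 2^(2^k) mod n).
    """
    k, w = 0, 2
    if os.path.exists(path):
        with open(path) as f:
            k, w = (int(x) for x in f.read().split())
    while k < t:
        w = w * w % n
        k += 1
        if k % checkpoint_every == 0:
            tmp = path + ".tmp"  # write-then-rename so a crash never truncates
            with open(tmp, "w") as f:
                f.write(f"{k} {w}")
            os.replace(tmp, path)
    return w
```

Because a checkpoint is a tiny pair of integers, scattering copies across USB keys, DVDs, and remote servers, as described above, costs essentially nothing.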

Thanks for the congrats!


Wow, this is awesome! Great job computing it.

This reminds me of a time during college when a professor gave us an extra-credit assignment to essentially find the prime factors of large numbers (a classic problem). I had come up with a pretty bad solution, but thought it might just finish in time for the deadline if I left it running all week. Of course, a transformer blew up the day before the deadline, and I never found out... chances are my program would have never terminated :P


That's amazing dedication. Congrats. Did your power bill increase noticeably?


No, it really didn't. I think I mentioned elsewhere in a comment that I use an Intel Core i7-6700 "non-K": the version that is not meant to be overclocked and which Intel rates at a max TDP of 65 W. And that would be with all four cores running. I don't know the TDP / power consumption when it's turbo-boosting at 3.9 or 4.0 GHz (when one or two cores are at full speed on the 6700, it goes from 3.4 to 3.9 or 4.0 GHz on those cores).

Basically I like my workstation to be quiet. Really very quiet: as in I can only hear it if I put my ear on the tower. So I never have ultra power-hungry CPUs. I don't game, so no GPU besides the integrated one. An M.2 SSD which consumes next to nothing.

Power bill probably did increase a bit but honestly it was lost in the noise: I definitely didn't notice anything : )


> The computation cannot be parallelized

Is there a proof of this somewhere? Operations that seem vaguely similar, like doubling repeatedly, can be easy to "parallelize".


The puzzle is based on a paper from 1996 by Rivest, Shamir and Wagner: https://people.csail.mit.edu/rivest/pubs/RSW96.pdf

Each step depends on the result of the previous step. I don't know of any claim that the crypto/maths behind the paper do not hold.

You can use an FPGA or build an ASIC, but as far as I know, as long as the crypto in this paper holds, it's not parallelizable.


> Each step depends on the result of the previous step.

But this doesn't mean anything. If I said to take a number and add 1 eighty trillion times, each step would depend on the result of the previous step, but obviously it would be unnecessary to actually compute each step.


The reason this isn't the same is that addition is associative, so you can parallelize it like this: `((1+1)+1)+1 = (1+1)+(1+1)`.

A fast way to calculate something like `g^x` for some large `x` is to iteratively square:

  g^1
  g^2
  g^4
  ...
  g^(2^i)
and the puzzle is designed so that `i` is large (`79_685_186_856_218`)

Of course, this doesn't prove that computing `g^x` is sequential, but it does show why it is different than computing `1+1+...+1`.
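In miniature, the sequential chain looks like this (toy Python; the real puzzle uses a 2048-bit `n` and `t = 79_685_186_856_218`):

```python
def lock(n, t):
    """t sequential modular squarings: returns 2^(2^t) mod n."""
    w = 2
    for _ in range(t):
        w = w * w % n  # each step consumes the previous step's output
    return w
```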


Doesn't this suggest that computing 2^i is just as hard?

You can compute g^(2^i) by just multiplying g's in the same way that you can compute 2^i by adding 1s, and multiplication is as associative as addition is. Specifying that you want g^(2^i) rather than g^k for some arbitrary k looks like a way of ensuring that a partial product computed in parallel (say, g^3) will be of nearly no value, because every partial product that helps compute the total will already be produced by the single agent doing squares.

So why compute g^(2^i) rather than just 2^i? Is 2^i in fact hard to compute? (In binary representation, it certainly isn't, but maybe 3^i is?)


Using the doubling trick you can compute `2^i` in `log i` time, so you can compute `2^(2^i)` in `log 2^i = i` time. Let `i` be big enough, and you've got a "difficult" problem.
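The claim is easy to make concrete by counting multiplications in square-and-multiply (`powmod_counted` is my own illustrative helper, not from the paper):

```python
def powmod_counted(base, exp, mod):
    """Binary exponentiation; returns (result, number of modular multiplies)."""
    result, ops = 1, 0
    while exp:
        if exp & 1:
            result = result * base % mod
            ops += 1
        base = base * base % mod
        ops += 1
        exp >>= 1
    return result, ops
```

Computing `2^i` takes O(log i) multiplies, but computing `2^(2^i)` this way takes about `i` squarings, because the exponent `2^i` has `i + 1` bits.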


Read the paper. It tells you why they think it's better than that. And I think that has stood so far.


> Read the paper. It tells you why they think it's better than that.

OK, I read the paper.

This is the entire discussion of the inherent difficulty of repeated squaring:

> More importantly, repeated squaring seems to be an "intrinsically sequential" process. We know of no obvious way to parallelize it to any large degree. (A small amount of parallelization may be possible within each squaring.) Having many computers is no better than having one. (But having one fast computer is better than one slow one.) The degree of variation in how long it might take to solve the puzzle depends on the variation in the speed of single computers, and not on one's total budget.

I see two problems here:

(1) This is a purely conclusory argument, suggesting that a better response would have been a simple "no, there's no proof".

(2) This does not even attempt to address the question I posed, which was whether it is necessary to compute the intermediate results in order to compute the final result.

In fact, we know that the intermediate results are not necessary to compute the final result, because that is how the message gets enciphered in the first place. It can be done by factoring the modulus, and obviously there is no proof that that is difficult to do.

But beyond that, it also isn't immediately obvious that it must necessarily be impossible to compute the final result directly by means that don't require factoring the modulus. It is possible that somewhere in the literature on Blum Blum Shub this question has been analyzed, but that goes a bit beyond your suggestion to "read the paper".

So, I'm stumped. Tell me what you saw in the paper that you thought would answer my question?


> In fact, we know that the intermediate results are not necessary to compute the final result, because that is how the message gets enciphered in the first place. It can be done by factoring the modulus, and obviously there is no proof that that is difficult to do.

Well, there's no mathematical proof. But physicists and judges would accept that there's lots of money and fame to be made breaking e.g. RSA, so countless people have tried and keep trying, but haven't come up with anything good so far.


I hate it when people think they know better than the experts. Do you really think that if they could have made it run in parallel they wouldn't have? Get off your high horse and solve it yourself if you're so smart.


Seems to me like a good-faith effort to understand. And respect for the experts isn't the same as unquestioningly accepting all of their assumptions.


The answer to your question is, "No, there is no proof."

This is similar to the answer to, "Is there any proof that RSA is uncrackable?"

The security is based upon the assumed hardness of certain mathematical problems.


As for the uptime, it's a pretty reliable machine: I'm running the i7-6700 (non-K, so not overclocked) version. Intel lists it at 65 W max TDP, but that'd be with all the cores at 100%, I guess. It's a 3.4 GHz CPU, turbo-boosting to 4.0 GHz on one core when only that core is at 100%.

OS is Debian stretch: it's rock solid in my experience.

It was rebooted on average once a month (I switched countries, there were blackouts, I upgraded the OS, etc.) but there were stretches where it was up for several months in a row, happily cranking!


Hi @TacticalCoder, One question!?

From Rivest-Shamir-Wagner's paper[1]:

"Solving the puzzle should be like having a baby: two women can't have a baby in 4.5 months."

[1] https://people.csail.mit.edu/rivest/pubs/RSW96.pdf


Hi MrXOR,

first thanks a lot for posting this to HN!

Then: is there a question somewhere in your post which I missed? : )


Congratulations!


This is very cool. Congratulations!

How did you beat Peffers' team? Did you simply start the computation sooner, or did you reach some insight that eluded them?


No, Simon Peffers' team did something more impressive: they were using an FPGA. I simply started way earlier, using consumer-grade hardware (a Core i7-6700 CPU).


But you made the correct decision on whether to start the computation immediately or to start by building a "more impressive" system. Maybe it was luck that you beat them, or maybe it wasn't, but either way several years out you made the correct engineering decision on how to solve a problem with finite resources. That's impressive.


For those who didn't read the Wired article:

"In mid-March, the group began to run an algorithm" ... "FPGA was about 10 times faster than a high-end commercial CPU running non-optimized software. Based on the chip’s computing efficiency" they "calculated that they would have the correct solution" "on the evening of May 10, just two months after they started the calculation. Yet when they reached out to MIT to let them know a solution was imminent, Rivest informed them that Fabrot had beaten them to the punch."

We read here that he started at the end of 2015! Rivest believed, when he constructed the puzzle almost 20 years ago, that he had constructed a puzzle that would keep the vault locked until 2034, based on his expectations then of the increase in computing speed. Obviously, predicting 35 years of developments is not so easy, and single-CPU computer speeds did improve faster. Anyway...

Congratulations!


It's actually the algorithmic improvements that were faster, if I read everything here and in the article right. Not the raw single-CPU computer speed per se.


> It's actually the algorithmic improvements that were faster

I’d really like to know about those! Any links?


Disagree. I'd choose to work with you first on a project, but until facing such a nice problem congrats!

No disrespect intended to that team. It's an impressive accomplishment, and they're the ones who can publish in a journal for all due academic credit.

However, it also depends on what you value. Sometimes, for some problems, crossing the finish line first is winning; and a simple approach can let the world's technical advances compound faster than more novel approaches whose future utility is unknown.

During all those years one person stopped to think about the goal in the most optimal way and ask, what's really the simplest solution here? Is custom hardware the only practical way? Everyone was allowed to think about implementation costs.

In other words, the most impressive part is what was not done. Great work.


A real-life Tortoise and the Hare story! (In the sense that slow and steady progress won the day. I do not wish to imply that Peffers team has any of the negative qualities of the hare.)


Kinda: a friend told me exactly that the other day. But we then discussed it: technically the hare started first and then rested, which is not what Simon and his team did : )


Fellow HNer and fellow Belgian here. Congratulations! If you're ever around Leuven I'd love to buy you a beer and have a chat ;-)


Quite a few of us in here! I spend a lot of time in other countries so I'm not that often in Belgium but I'll make sure to contact you when I come around. We should exchange emails. You can email me at bernard at fabrot dot com !


Mail sent! (meeusdylan at protonmail dot com).

Yeah, always nice to see who you can run into on hackernews! I'm sure there's plenty of us Belgians lurking on here :)


Hi, and belated congratulations!

Are you by any chance planning to release your source after the ceremony? It'd be really interesting to read some peak-efficiency GMP code as that's a library I've always been interested in.

Additionally, do you happen to know how long it would take to factor the modulus n and then compute 2^(2^t) using the method described in the paper? i.e. instead of repeated squaring, compute

  2^(2^t mod phi(n)) mod n
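That shortcut, sketched with toy primes in Python (for the real puzzle you'd need the factorization of the 2048-bit n, which is exactly what's assumed to be hard):

```python
def fast_unlock(p, q, t):
    """With n = p*q known, reduce the tower exponent mod phi(n) first."""
    n, phi = p * q, (p - 1) * (q - 1)
    e = pow(2, t, phi)      # 2^t mod phi(n): only ~log t squarings
    return pow(2, e, n)     # valid by Euler's theorem since gcd(2, n) == 1

def slow_unlock(n, t):
    """The t sequential squarings everyone without the factors must do."""
    w = 2
    for _ in range(t):
        w = w * w % n
    return w
```

This is precisely why knowing p and q lets the puzzle's author encrypt quickly while solvers must grind through all t squarings.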


There is also a thread on reddit's r/technology https://www.reddit.com/r/technology/comments/bitha2/belgian_...


Well played! If only I had made my attempt earlier!

    2019-03-18T20:23:06+01:00
    AMD Phenom(tm) II X4 B50 Processor
    56.674783% (0x2912f5a00000 / 0x48792741a51a) ETA: 1 year 138 days (2020-09-18)


How much work was involved in the 3.5 years? What tips would you say worked for you? What do you think you do in your life that might have given you a leg up in cracking the puzzle? (besides being intellectually inclined and adept)


> How much work was involved in the 3.5 years?

Already answered somewhere but lost in this thread...

Hardly any. Relaunching the computation after a blackout, an OS reinstall, or when I moved from one country to another : )

Also, once in a while, going ballistic with backups so as not to lose x months / years of computation.

Then at the end, doing what the puzzle says to do: XOR to get the secret message. Get b from the secret message, compute 5^b mod 2^1024, then find the first prime above the resulting number. Divide n by that prime to get the second prime. This took no time.
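Those endgame steps can be sketched with toy numbers in Python (the function names and the trial-division primality test are mine; the real puzzle works with 1024-bit values, and the secret message is the XOR of a published value z with 2^(2^t) mod n):

```python
def is_prime(m):
    """Trial division: fine for toy sizes, not for 1024-bit numbers."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def next_prime(m):
    """Smallest prime >= m."""
    while not is_prime(m):
        m += 1
    return m

def recover_factors(n, b, bits=16):
    """p is the first prime at or above 5^b mod 2^bits; q = n / p."""
    p = next_prime(pow(5, b, 1 << bits))
    assert n % p == 0, "inconsistent b / n"
    return p, n // p
```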

> What do you think you do in your life that might have given you a leg up in cracking the puzzle?

I don't know. It required finding it (mostly chance there), reading it and understanding how the parameters were set up, realizing that a solution could probably be found in some "reasonable" amount of time, and then it required some dedication (and maybe a tiny bit of craziness too) to run it for so long.

I clearly always liked optimization and learning new languages and trying libraries and whatnot: and as the solver was simple to implement, it was easy to quickly try different languages / libraries. I kinda knew upon reading the puzzle that there was a good chance it could be solved faster than expected.

But really it's like 10 or 12 lines of code. I spent some time doing performance tests but besides that it was just being patient?

I was clearly motivated by the idea that I could be the first one to find the solution to this puzzle. That one carrot worked wonderfully on me : )


> computed the solution using the GNU Multiple Precision Arithmetic Library (GMP).

Great job. Kudos.


Hey, congrats ! I like your display shown in the Wired article.. it looks very ergonomic ! Could you tell me what brand/model it is ? Thanks :-)

EDIT: got it, it's LG 38UC99-W.. thanks


Thanks, I was looking for the display specs as well :)


It completely makes my day that the solver is on HN.


Nice work man


Congratulations! Any ideas for the next challenge?


What was the ascii sentence result and what were the factorization instructions (presumably just a large prime factor of the result)?


It was decided that the solution would be kept secret until the 15th of May, when the time-capsule shall be opened.

As the problem statement says: there's a number that allows factorizing n. There's also a little message.


A dedicated ASIC based on MOS current-mode logic (MCML) in a normal 14nm process should get at least 10x higher speed than that FPGA. You'd probably be thermally limited to, say, 100 adders working in parallel on (full-width) partial products, clocked at 10~20 GHz (power is basically linear in frequency, and a 1 GHz adder consumes ~40 mW). That gives you a base multiplication speed of 0.5~1 GHz and 40~80 W power consumption. AFAIK their FPGA runs at ~65 MHz.


Congratulations! I’m curious if a quantum algorithm could help with this, as the problem deals with factoring large numbers?


What are your thoughts on projecteuler.net?


I obviously love all kinds of algorithmic puzzles / challenges / competitions.

I haven't done many Project Euler problems, but I did participate a lot in the "TopCoder" competitions back when TopCoder started (that'd be many, many years ago).

I think it's a great way to learn about algorithms or to learn a new programming language.


Congratulations!


Congratulations!


Congratulations!! This may cost me some karma but it is definitely worth guessing!!

Is the answer.... 42? :)

In all seriousness, I hope that on May 16th you will post your efforts/solutions/other miscellanea somewhere, and will come back to HN and make a "Show" post for us all to admire and celebrate with you.


The celebration is at MIT on the 15th of May, and it is public.

I don't have a blog (yet) but I do have a few things I'd like to write about (not just this puzzle) and share with other enthusiasts so I think I'll simply start a blog. But it won't be on the 16th: I'll need some time to come back home etc.


If you have time, you could write it up in advance and publish it on the 15th. I say this because I think your post is more likely to be read by more people if you publish it to coincide with the event. That way you can also share the link with anyone you meet IRL at the ceremony who asks for more details, and in turn it's more likely that some of those people will share your post on social media, leading to even more people reading it.


What was the question?


Hey, I know it's kind of off-topic, but I couldn't help noticing your OS. Are you using awesomewm? What's your setup? Also, what monitor is that?


Hi, no problem for me with the off-topicness: I said I'd gladly answer questions : )

As dsincl12 wrote it's indeed a 38": the LG 38UC99-W.

And you're correct, good catch: it's Awesome WM running on Linux (Debian Stretch). For the geeky details: it's Awesome WM with a patch I modified to do "useless but not useless" gaps. Basically it adds gaps between the top bar and between the windows, but it doesn't add gaps at the edge of the screen (which to me makes the most sense, but YMMV).

So yeah: a Core i7-6700 / 16 GB of RAM / NVMe M.2 PCIe 3.0 x4 SSD, which is my daily workstation (as I wrote elsewhere in this thread: I solved this on my everyday workstation).

I've got 12 virtual workspaces set up and my "main" one is in a three columns layout (with Emacs in the middle ;)


+1 for what monitor is that?


Looks like the one I have, an LG 38UC99-W.



