Hacker News | procparam's comments

Wow. The idea of getting to 100% on PE is almost incomprehensible to me. I've solved basically none outside the first couple pages.

What was your strategy like? How much math background do you have?


I've got a bachelor's in math, but that's 40+ years ago. I had intended to go on for a PhD in math, but fell into computers instead - programming was easier and way more lucrative, even in the early 80s. Once I was retired and found my way to Project Euler, it became an obsession, tickling that desire to go deeper into math that I had in my college days.

I attacked roughly the first 250 problems in order. The early problems build on each other to introduce new topics. I also got good at figuring out the right search term to find some random paper in number theory, combinatorics, probability, whatever.

Later problems introduced new, more niche areas, like chromatic polynomials and impartial & partisan game theory. But by then, I found it much easier to figure out what part of math a problem was based on and how to find relevant literature.

It helps to be really really stubborn, and to have the patience to let a problem stew in my brain, sometimes for weeks at a time. That seems to help lead to that Eureka moment.


One other thing - can't believe I forgot to mention this: once you solve a problem, read the solvers' thread in the forum! I learned so much by doing that, which fed into success on later problems. The link will be at the bottom of the problem page once you've solved it.

There are some much later problems where some obscure technique gets mentioned, even though the problem is doable without that technique. But then later on, there are other problems where that technique is practically required. I can think of multiple 100% difficulty problems which were actually much easier than that for me, because I had already seen and tried out the techniques that enable a fast solution.

And sorry, not going to mention any of those techniques. A lot of the fun I have in solving PE problems is that incremental increase in knowledge as time goes on.


I feel a lot better knowing that searching the literature is supposed to be a normal part of Project Euler.


Do you have a favorite?


As a class of problems, I'd say the combinatorial game theory ones are my favorites. There are a lot of impartial game theory problems - look for problems mentioning Nim or stone games. They build on each other nicely, from the mid 300s on. The site has been getting into partisan game theory problems in the past year, which finally got me to buy "Winning Ways For Your Mathematical Plays", vol 1, and "Lessons In Play". I find pretty much any problem with John Conway's influence fun to do.

As for a single problem, I'm fond of PE589, "Poohsticks Marathon". That was my 501st solution, two years after first attempting it (solved 5 years ago, yikes). I like it because it's a problem with a 95% difficulty rating, so very tough, but the development team slotted it in as an easy problem (problems normally get scheduled in batches of 6 with a cadence of medium/easy/medium/easy/medium/hard). Once I solved it, I agreed that it was relatively easy, in that it uses techniques introduced by early PE problems, but something about it makes using those techniques unexpectedly difficult.


I have been working on PE problems for the better part of 10 years. One thing I would sort of like to do is make a library of some of the more common functions (mostly I have been using Python). Do you have anything like that?


I do, but there's nothing too obscure in it. Efficient prime number sieve, prime factorization using trial division, generating a list of divisors from the prime factorization, modular inverse via the extended Euclidean algorithm, Chinese Remainder Theorem.
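
For anyone curious, a minimal sketch of the modular inverse piece might look like this in Python (illustrative only, not lifted from my library):

    def extended_gcd(a, b):
        # Returns (g, x, y) such that a*x + b*y == g == gcd(a, b).
        if b == 0:
            return a, 1, 0
        g, x, y = extended_gcd(b, a % b)
        return g, y, x - (a // b) * y

    def modinv(a, m):
        # Inverse of a modulo m; requires gcd(a, m) == 1.
        g, x, _ = extended_gcd(a % m, m)
        if g != 1:
            raise ValueError("a has no inverse modulo m")
        return x % m

    print(modinv(3, 11))  # 4, since 3 * 4 = 12 = 1 (mod 11)

(On Python 3.8+ you can also just call pow(a, -1, m) and skip the helper entirely.)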

I also have some stand-alone modules: one to solve generalized Pell equations, another to find a polynomial given a sequence via its finite differences (e.g. for 2, 5, 10, 17 the first differences are 3, 5, 7 and the second differences are 2, 2, which is enough to recover n^2+1), and another to find the closed form of a sequence as a linear recurrence.
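
The differences trick in that n^2+1 example is just Newton's forward-difference formula; a toy version (again a sketch, not my actual module) could look like:

    from math import comb

    def fit_from_differences(values):
        # values are f(1), f(2), ... for some polynomial f.
        # Newton's forward-difference formula:
        #   f(n) = sum over k of  D^k f(1) * C(n-1, k),
        # where D^k f(1) is the leading entry of the k-th difference row.
        leading = []
        row = list(values)
        while row:
            leading.append(row[0])
            row = [b - a for a, b in zip(row, row[1:])]
        return lambda n: sum(d * comb(n - 1, k) for k, d in enumerate(leading))

    f = fit_from_differences([2, 5, 10, 17])
    print([f(n) for n in range(1, 7)])  # [2, 5, 10, 17, 26, 37], i.e. n^2 + 1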

Some solvers have much more extensive libraries, but I tend to grab bits of code from old solutions to reuse on the fly.


I've always wanted something like this, but for i3 workspaces. Something like "share workspace 2." Anyone know how to accomplish this?


That only works if the workspace is on an actual screen, because otherwise it's not rendered. Maybe the virtual monitor solution via xrandr would work for that, but I haven't tried it yet.


My friends and I have played so much already that the list of elements on the sidebar is unwieldy. You can paste this little js snippet into the console to add a basic search feature:

  // Grab all element entries currently in the sidebar list
  items = () => [...document.querySelectorAll('.items div.item')]
  show = (elt) => elt.style.display = ''
  hide = (elt) => elt.style.display = 'none'
  // Show everything, then hide entries whose text doesn't match the query (case-insensitive)
  search = (text) => (items().forEach(show), items().filter(e => !e.innerText.toLowerCase().includes(text.toLowerCase())).forEach(hide))
  // Add a text box at the top of the sidebar and filter as you type
  inputElt = document.createElement('input'); inputElt.type = 'text';
  document.querySelector('.sidebar').prepend(inputElt)
  inputElt.addEventListener('input', (e) => search(e.target.value))


This is really cool, thanks. I was just using ctrl+f to find things. I've got like 1200+ words right now so I totally understand the unwieldiness.


Thanks for this! I ended up hitting refresh because of how long my list eventually got, I wish I'd seen this comment 10 minutes sooner!

Oh well, I guess now I'm forced to sink in another half-hour this evening! ;)


It's not you Gentoo, it's me. I'm just not good enough.

I've been trying to like Gentoo for over four years, ever since I installed it on my primary laptop. I love the idea of having the fine-tuned control that Gentoo offers, and I love the idea that everything I install is tailored to my specific system.

But despite my best intentions I have not mastered it - in fact I barely feel like I understand it. When I remember to update it a couple times a week everything is pretty smooth. Go much longer than that and all bets are off. If I go on a long vacation I dread coming back to a Gentoo update. I just don't have the discipline to keep up.

Lately portage has been scolding me because some random packages are trying to install different versions of openssl. I've been ignoring it for weeks while I muster the courage to solve it.

And in my experience this kind of problem is super common. Either I'm pulling multiple versions of a package into the same slot, or I'm trying to merge a masked package, or some package is trying to put files in a place it doesn't own, or who knows...

Every time I try to solve one of these issues it's like I'm starting from scratch. I read manpages, online docs, bug trackers. I try various incantations of emerge flags. And eventually I show up in the IRC channel with my tail between my legs.

Fortunately, the IRC community is extremely knowledgeable and can usually fix my problem in no time. But I hate having to ask for help, and I never seem to get closer to solving them myself.

The other major issue I have is convenience: it sometimes takes so long to install a new package. I get it - that's what you expect with a source-based distro. But man I did not understand just how much compiling time I'd need. Krita releases a new patch version? There's a couple hours with my laptop fans blasting. My laptop has easily spent 10x or 50x the time compiling Krita vs actually running it! And that's just one random program; god help you if you want Firefox on the same machine.

Anyway, on my next computer I'll probably install Arch.


"But despite my best intentions I have not mastered it - in fact I barely feel like I understand it. When I remember to update it a couple times a week everything is pretty smooth. Go much longer than that and all bets are off. ... I just don't have the discipline to keep up."

You are so close to enlightenment! When you can repair a broken Gentoo box (which is its default state) whilst still providing service, you are a man, my son (pronoun assumption - soz!)

I do run Arch on all my personal stuff these days (Debian n Ubuntu at work, and others as required) but Gentoo was my first real love and it taught me to never fear a completely broken Linux box. Provided the broken bit is not screwed hardware, Gentoo can survive nearly anything.

I have a box - a VM running on ESXi in my attic - that got a bit behind. By a bit, I mean 2013ish - I've just checked out /etc/kernels to get an idea. You can use git to make /usr/portage go back in time and then gradually update your box to now. It's not something that I recommend for the impatient, but you can do it. The worst bit was dealing with things like Let's Encrypt CA changes and finding old packages. I often had to download them manually and slap them in the right place.

You complain about compilation times but back in the day I had a laptop that I left running on a glass table for over a week (worried about heat) cranking through what would eventually become @world. From memory, it was for the GCC 3 -> 4 upgrade and the advice at the time was compile everything until your eyes bleed and then do it again. Nowadays an ABI change is handled rather better and with better advice.

The Gentoo wiki is a very decent repository of info (I wrote some of it, and probably ought to revisit and update my offerings).

The contrast with other approaches to software is striking when it comes to docs n that. I've recently "solved" an MS Outlook MAPI to Exchange snag that might have been easier to diagnose if I had access to the source or logs that weren't solely designed for people with access to source code.

Do stick with it and do ask for help. You are very close ...


Before Gentoo I would sometimes wipe the OS and start over, but no more. Now I can fix anything.


Arch is also rolling release, and has a non-trivial chance of breaking when upgrading - I learned this painfully a few years ago. I switched to Ubuntu, which is boring but stable.


I've used Arch btw for like the last 3 years with no broken upgrades, fwiw! (I also update once a week. And use the "lts" kernel.)


The mat at the top of the kombucha is not the SCOBY. It is a waste product of the SCOBY - just a big slab of cellulose. This is a misunderstanding that I've seen people on r/kombucha get pretty upset about.

The actual SCOBY is in the kombucha itself, not the cellulose. It's pretty easy to prove; when I made kombucha I would throw out the mat after every batch and save some of the liquid to use as starter for the next.


You also don't need the mat, also known as the pellicle, to start brewing kombucha. It is entirely redundant. Just get a couple of bottles of unpasteurised, unflavoured kombucha and use that as your starter, along with the fresh tea and sugar. Those kombucha starter kits with a pellicle are just a big scam.


You can just use whatever store bought Kombucha to get going. Of course if you find any scoby on Craigslist it's guaranteed to be an interesting encounter


Do you have a reference? I don't think that's strictly true. I agree that the big slab is mostly protection, but if you look at its underside you'll see threads, like small seaweed, "growing" from it. My understanding is that kombucha is a symbiotic mix between those threads (which are actually yeast) and the bacteria in the liquid. The liquid does contain some of the yeast, of course, so you can throw out the entire slab and a new one will grow eventually, but you're throwing away a lot of yeast and its protection. So your kombucha will take much longer to brew, and the final result won't have the same proportions of components as "properly" brewed kombucha.


> My understanding is that kombucha is a symbiotic mix between those threads (which are actually yeast) and the bacteria in the liquid. The liquid does contain some of the yeast, of course, so you can throw out the entire slab and a new one will grow eventually, but you're throwing away a lot of yeast and its protection.

At the other extreme: if one has an old enough continuous ferment, a lot of yeast will settle at the bottom, potentially affecting the end result.


I didn't know that, thanks for mentioning it.


Some great contenders for new uppestcase letters on this page.

http://tom7.org/lowercase/


The main thing we learned from this is that this is a terrible problem to use machine learning for. These are far better, and actually readable. The algos couldn't even figure out that letters are a bunch of straight lines and neat curves.


I had a similar thought. It's amazing what you can do with machine learning, but this example shows that you can't just throw machine learning at a problem and expect good results.

And it shows that, at least for now, machine learning isn't going to make humans obsolete.


It showed that machine learning is useless if the input format doesn't match the problem. The result would probably be better if the problem were approached at a vector-graphics level rather than throwing a bunch of pixel clusters at the program and expecting it to figure out the underlying shapes by itself.

I mean, children in school don't learn to write by copying printed letters out of a book either; instead they're shown the individual strokes and their direction, step by step.


His first attempt was vector-based; the blobs of pixels / signed distance fields were his second attempt, which worked better.


Incidentally, to update my comment there, I asked someone who had a font GAN and he mentioned he'd already found uppercase/lowercase latent directions which can automatically "more-uppercase" letters. It's not that this is a 'terrible problem' - it's borderline trivial. (I've seen highschoolers do much more impressive GAN things.) He just didn't do it even remotely right, is all.


> He just didn't do it even remotely right, is all.

To be fair, this was for April Fools'; he probably did not intend to get something actually usable (as far as upperercase letters can be usable).


He put in way more work than would've been necessary to get StyleGAN or CLIP to do a vastly better job, is my point, because he used totally the wrong tools and approach. (And he could put in vastly more work, and it still wouldn't work that way either; the level of effort one would or would not spend on a SIGBOVIK paper is just irrelevant when his approach is doomed from the beginning.)


Do you happen to have a link with some examples? After seeing tom7's video I'm now terribly curious what a 'more uppercase' font would look like.


He didn't provide any samples, sorry.


Yes but on the other hand, he could play chess with them, so it's impossible to say if it's bad or not


Almost as if it's emulating the wrong kind of stroke ...


The tom7 link was posted here a few weeks back, if anyone is interested in that discussion:

https://news.ycombinator.com/item?id=26667852 (167 comments)


When I saw this thread I went looking for the link, couldn't find it. Felt sad. Came to read comments. Saw others have posted it and now I feel content again.


If all research papers were written in this style, I'd read a lot more of them.

> I trained the network using a home-grown (why??) GPU-based package that I wrote for Red i removal with artificial retina networks [16]—an example of “lowercase i artificial intelligence”—and have improved as I repurposed it for other projects, such as Color- and piece-blind chess [17]. It is “tried and true” in the sense that “every time I tried using it, I truly wanted to throw my computer out the window, and retire to a hermitage in the glade whenceforth I shall nevermore be haunted by a model which has overnight become a sea of infs and NaNs.”


I’d love to see what his trained lower function makes of these.


Can you say more about how you use eval() during prototyping?


It can be nice when you're running code in a REPL and don't want to maintain state/code between REPL sessions. If you just print the data to stdout, you can eval it later and not have to deal with writing one-off caching. It's similar to the use case of pdb, where you're forcing yourself into the middle of ten different functions. But print statements are easily added, easily removed, and so is eval.
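
Concretely, the round trip looks something like this (an illustrative sketch with made-up data):

    # Session 1: compute something, then dump it to stdout as a Python literal.
    results = {"alpha": [1, 2, 3], "beta": [4.0, 5.0]}
    print(repr(results))  # -> {'alpha': [1, 2, 3], 'beta': [4.0, 5.0]}

    # Session 2 (later): paste the printed text back in and eval it, instead
    # of writing one-off pickle/json caching code.
    restored = eval("{'alpha': [1, 2, 3], 'beta': [4.0, 5.0]}")
    print(restored["beta"])  # [4.0, 5.0]

(If the data is all literals, ast.literal_eval is the safer variant of the same trick.)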


A separate language? How do you mean?

This is one of the big advantages of Lisp macros over C macros. In Lisp you write macros using Lisp itself, including normal user-defined functions. In C, on the other hand, you write macros in the C preprocessor, which really is a separate language.


Yes, of course, I get that it has the same syntax, which is not the case with C macros and which makes macros way better. Thing is, there is one Lisp that executes at compile time, which expands the macros, and then there is the resulting code, which executes at runtime. It's not self-modifying code. It's just code that operates on some other code, written in the same syntax.


That's not true, though.

When running in the REPL (the most common way to interface with Lisp) the compile time and runtime Lisps are the same.

For example, here's a copy/paste from a REPL session that defines a macro that defines a function. The macro (at "compile time") prints information about the function (just its argument count) to stdout before using defun (itself a macro) to actually define the function.

Next I call the new function and print out a disassembly, just to show the function is in fact compiled.

"CL-USER> " is the prompt in the REPL I use:

    CL-USER> (defmacro my-defun (name arguments &body body)
               (format t "Defining ~a, taking ~a arguments~%" name (length arguments))
               `(defun ,name ,arguments ,@body))
    MY-DEFUN
    CL-USER> (my-defun omg-2 (value) (* value value))
    Defining OMG-2, taking 1 arguments
    OMG-2
    CL-USER> (omg-2 34)
    1156
    CL-USER> (disassemble #'omg-2)
    ; disassembly for OMG-2
    ; Size: 33 bytes. Origin: #x52ED2714                          ; OMG-2
    ; 14:       498B5D10         MOV RBX, [R13+16]                ; thread.binding-stack-pointer
    ; 18:       48895DF8         MOV [RBP-8], RBX
    ; 1C:       488BD6           MOV RDX, RSI
    ; 1F:       488BFE           MOV RDI, RSI
    ; 22:       FF1425C0001052   CALL QWORD PTR [#x521000C0]      ; GENERIC-*
    ; 29:       488B75F0         MOV RSI, [RBP-16]
    ; 2D:       488BE5           MOV RSP, RBP
    ; 30:       F8               CLC
    ; 31:       5D               POP RBP
    ; 32:       C3               RET
    ; 33:       CC10             INT3 16                          ; Invalid argument count trap
    NIL
    CL-USER>

It is also possible to compile a Lisp program to an executable (or byte code or whatever) and run it, and not use the dynamic aspect of it.


The code gets compiled (and macro-expanded) before it is run. That's a definition of compile time in Lisp.

In an interpreted Lisp, the interpreter may expand the macros at runtime.


Whatever artificial division you believe exists between "compile time" and "run time" in Lisp is almost certainly your misunderstanding, and does not reflect how the language actually works.


Perhaps I could use some clarity on this matter too, and hopefully with a bit of gentleness. As I understand it, some Lisps do have an explicit compile phase and there is discussion among the language users over the runtime cost of macro abstractions.

https://docs.racket-lang.org/guide/stx-phases.html

I understand there may be some philosophical or theoretical interest in framing Lisp the way you do, but does that invalidate the close-to-the-app thinking about the costs of macros and where those costs are distributed?


I'm not so familiar with Scheme, but I can perhaps give a useful answer regarding how this works in SBCL (probably the most well-known Common Lisp implementation).

In SBCL, we would commonly say that code is "evaluated" rather than compiled or interpreted. This is because "compiler" refers specifically to when assembly is emitted, and "interpreter" refers to when code is being used to call already compiled code. In the case of SBCL, it is doing both of these things simultaneously when it reads a ".lisp" file. For example, if this is the contents of my .lisp file:

    (defun boop () (print "hello"))

    (boop)                              ;; prints hello

    (defun scoop () (print "goodbye"))

The boop function is evaluated (and therefore compiled), and then called on the next line. In traditional ALGOL-style languages like C and Java, with a separate compilation phase, it would not be possible to call boop before scoop has been turned into machine code.

So you are asking how the costs of evaluating lisp are distributed?

For one, the machine code generated whilst evaluating the .lisp file can be cached in a .fasl file, so the file does not need to be evaluated again to be loaded into other projects.

Another is that macros can be expanded once, during function definition, e.g.:

    (defmacro moob () `(print "kellog"))

    (defun foo () (moob))                ; expands to (defun foo () (print "kellog"))

    (defmacro moob () `(print "fish"))   ; redefining moob

    (foo)                                ; still prints "kellog", because foo was expanded before moob was redefined

In this case, the cost of expanding moob is trivial because the expansion only occurs once, during the definition of foo. It will slow down the initial load of your program, but not subsequent calls to foo.


> This is because "compiler" is referring specifically to when assembly is emitted

SBCL has both an in-core compiler (compiles to machine code in memory) and a file compiler (compiles to machine code in a file).

SBCL also has an interpreter (relatively new and which usually is not used), which executes Lisp source as data.


I didn't know SBCL had an interpreter! (For those who didn't know either, see [1].) When would you use it?

[1] http://www.sbcl.org/manual/#Interpreter


In other implementations, interpreters are used for tools like source steppers or custom tracers...


I wouldn't say they are a separate language, but they definitely feel like a separate layer. They're not first-class objects like functions (so you can't pass them to functions), and you can't evaluate the arguments (since macros are only directly present during compilation/interpretation); it's purely a mapping from source code to source code. For example, you can't rewrite a list based on its values or form. You can rewrite the list into a program that does the transformation when run, but the separation feels very tangible to me. Something like FEXPRs would be closer to being "just the same as normal code", but there's good reason why macros are the way they are.


You can absolutely pass a macro to a function. It's a list! What you can't do is use funcall or apply on it, because it isn't a function.

What you do is, you use macroexpand-1, like so:

https://stackoverflow.com/questions/50754347/macro-with-a-li...


It's not "helpful" in the sense that it's the smart decision for anyone.

The point is, that's actually how real people see the lottery. A big payout feels more likely than a $100k/yr job when you've been under the poverty line long enough.

Remember, the goal of this game isn't to see how efficiently you get to the end; it's to help you put yourself in the shoes of people actually in this position.


One huge distraction-enabler is the fingerprint sensor. It makes things too easy for me - I put my hand in my pocket, put my finger on the sensor, and by the time it's in front of my face I can be mindlessly browsing reddit.

So I would add the following tip: disable the fingerprint sensor if you have one (also face recognition, etc), and replace it with an inconveniently long password. This way you really need to think about whether it's worth the time to open your phone.


You can always register fingerprints from your other hand instead. This forces you to switch hands, which adds a bit of friction.


Interesting. I'm gonna try this and see.


Reddit, the serial time-killer. Removing it from my phone was a tangible life improvement. And on desktop, I've moved everything into multis so that my reddit homepage is empty like a desert and I have to purposefully choose what to see from the sidebar. Thus I don't compulsively go to reddit and get lured by a sea of interesting links. This has cut the time I spend on reddit to, well, IDK, no more than about an hour or two a week.


I have been using protopage.com, which is basically what iGoogle was back in the day. I just drop in the RSS feeds for the top posts from subs I want to check out and have it limit the number of posts. I do that with HN and a few other sites as well. Although it would be nice to be able to set a minimum refresh interval so newer stuff wouldn't pop up on every refresh.


Delete the reddit app and logout. Have a really long password so you're too lazy to type it in. That's my trick. Used to spend hours and hours on reddit... now I'm just stuck on HN :(


That's what I did a while ago and haven't wanted to look for it again - the app was always terrible anyway. On my desktop/laptop, I now log out more as well to not have the orange notification show up if I 'accidentally' browse to the front page.

For Twitter (which I think is more 'addictive' due to the greater pool of talent about compared to hivemind posts), I need to open up my password manager and it's quite a process to log in again compared to a saved browser password (especially with two factor authentication) - quite often that hurdle is enough to have me reconsider idling away.

By the way, did you know the Twitter app requires about 5 taps or strokes before you can log out? They really try to hide that one from users.


I have a small velvet bag that I use to keep my cell phone safe from scratches in my pocket. You can keep the fingerprint sensor on, but drop it in a similar bag with the sensor at the bottom; that way, you keep the fingerprint security, but there's an extra step involved before you can use it.


I installed space recently and I use it just for Twitter. I don't have any other distractions. I think it has been useful so far. http://youjustneedspace.com/


I have one on 5" phone. I lately think that I would gladly go back to my old Moto G 1st gen that actually works better now on LineageOS [0] than before on original ROMs. It's smaller - it fits my pocket. It is a bit less convenient to stare at the screen, but now I think that's good. But sadly it has broken SIM slot. Maybe I will sell my phone and just buy something smaller, used with good LineageOS support. Or maybe something even more inconvenient - at most 10" 3g laptop.

[0] But last April Fools' "joke" was really testing my nerves and I'm still not settled on this.


this is a cool tip, especially considering the security implications! afaik, police can coerce you to unlock with fingerprint, but not to give up your password

