sold's comments

> this can lead to unexpected conflicts and can be relatively counter-intuitive to undo.

git merge --abort. Perhaps not intuitive, but it's a single command.

-----


See other threads e.g. https://news.ycombinator.com/item?id=9053552

-----


What is the difference between x ?: y and x || y?

-----


|| gives 1 or 0 only?
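
A minimal sketch to check (x ?: y as a binary operator is a GNU C extension, so this assumes GCC or Clang):

    #include <stdio.h>

    int main(void) {
        int x = 5, y = 7;

        /* GNU extension: "x ?: y" is shorthand for "x ? x : y",
           except that x is evaluated only once; it yields x's own value. */
        printf("%d\n", x ?: y);   /* prints 5 */

        /* "||" only reports whether either operand is nonzero. */
        printf("%d\n", x || y);   /* prints 1 */

        return 0;
    }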

-----


I see, thanks.

-----


"The C Companion" gives logical identities like

    (A && B) || (A && !B) == A,
but what he means by A on the RHS, I think, is A's truth value: the left-hand side evaluates to 0 when A is zero and to 1 otherwise, so you can't really write this identity and give a constant on the right, and A is the next best thing he could write.

Well, the real next best thing would be !(!A).
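
A small C sketch of that point, with values picked purely for illustration:

    #include <assert.h>

    int main(void) {
        int A = 7, B = 0;   /* A is "true" but not equal to 1 */

        /* The left-hand side collapses to 0 or 1, so it matches !(!A),
           not A itself. */
        assert(((A && B) || (A && !B)) == !(!A));   /* both sides are 1 */
        assert(((A && B) || (A && !B)) != A);       /* 1 != 7 */

        return 0;
    }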

-----


Unary representation is useless when speaking about complexity in number theory.

If you take a number N given in unary, convert it to binary and do trial division up to the square root, it will take O(N log N) time for conversion to binary and O(sqrt(N) * log(N)^2) for trial division (depending on your computational model, it could be O(N) and O(sqrt(N)) - I am counting bit complexity). In total, it's O(N log N). The runtime is dominated by reading the input! The complexity of trial division and the brilliant AKS algorithm is the same from this viewpoint.
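
For concreteness, the trial division step might look like this (an illustrative C sketch, not tuned code; is_prime is just a made-up name):

    #include <stdbool.h>
    #include <stdint.h>

    /* Roughly sqrt(N) trial divisions, each on log(N)-bit operands,
       which is where the sqrt(N) * log(N)^2 bit-complexity bound comes from. */
    static bool is_prime(uint64_t n) {
        if (n < 2) return false;
        for (uint64_t d = 2; d <= n / d; d++) {   /* d <= n/d avoids d*d overflowing */
            if (n % d == 0) return false;
        }
        return true;
    }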

Even if you had an algorithm that did not have to convert to binary and could tell, in time linear in the unary representation, whether a number is prime, it would be interesting trivia but nothing worthy of a Nobel prize. In practice, numbers are given in binary (or some other base>1 number system). To use your algorithm, you would have to convert to unary, which already means trial division would be faster.

-----


Ah, I see, thanks for the explanation.

-----


Flux https://justgetflux.com/ or redshift http://jonls.dk/redshift/.

Try setting it to the lowest temperature possible for 30 minutes and then turning it off to see the difference. I have it on all the time (though people who work with color might not be able to use it).

-----


Note it's not a pull request; it's an issue in the tracker. Had it been a prepared patch, I would have more sympathy towards the reporter.

-----


If you ask Mathematica whether x^n + y^n = z^n has solutions for n>2 and x,y,z>0, the system will simplify it to "False", displaying knowledge of Fermat's last theorem. This is taken from the documentation (last example in http://reference.wolfram.com/language/ref/FullSimplify.html).

I asked about x^n + 1 = z^n, which is a simpler special case with y=1. The system no longer recognized it to be False. So the theorem was programmed as thoughtless pattern matching. I think one day computers might become authentic "creative" tools for mathematicians (as opposed to "computational" tools), but Mathematica's philosophy seems to be a dead end in this regard.

-----


To be fair... I would imagine you could catch more than a few humans with the same trick. Specifically, folks who might recognize the textbook statement of Fermat's theorem and miss the special case.

(Sadly, I can offer no data point, as I probably wouldn't have recognized either case...)

-----


y=1 isn't just a special case; it is an absolutely trivial case that is easily shown to have only a degenerate solution.

-----


Apologies, I did not mean special as in edge or hard. I meant simply that it was a specific case of the same thing.

Regardless, I don't think that changes my point.

-----


A harder quiz: http://helloworldquiz.com/

-----


I like this one much better. There are many more options, and the choices are much more closely related.

-----


Agree with you. HelloWorldQuiz is more fun and more challenging.

-----


That's exactly the point of monads - see http://homepages.inf.ed.ac.uk/wadler/papers/marktoberdorf/ba.... Such a system is still pure, btw.

-----


When I write a program, I don't want it to be just correct (true); I want it to be _provably_ correct; I want to be able to be convinced that it is correct, at least in principle, given enough time and the full specification of the system. Programs which are correct, but not provably so, should not pass code review and might as well be lumped together with those which are wrong. It doesn't matter if you are using a full-blown theorem prover or thinking about the code in your head; Gödel's theorems are not really relevant to programming, even when the code uses deep mathematics.

I'm not sure I agree with "theorems that must be verified at compile time can never account for data that are provided only after compilation". At compile time, you prove the assertion "for every x, the program outputs the correct answer for x". Now, you don't know that the user will enter, say, x=5 at runtime, but since you proved a theorem about _every_ x, this includes x=5. You cannot predict the future (the path the program will take), but once you prepare for all possible futures, you're safe.
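
As a toy illustration of that second point (a sketch in Lean 4, assuming a toolchain where the omega tactic is available; double and double_correct are made-up names):

    -- The theorem is checked once, at compile time, but because it is
    -- quantified over every x it also covers whatever value (say x = 5)
    -- the user happens to type in at runtime.
    def double (x : Nat) : Nat := x + x

    theorem double_correct (x : Nat) : double x = 2 * x := by
      unfold double
      omega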

-----


The problem with theorem provers like Agda and Coq is that they are brittle under refactoring, which makes it hard to change an app. Something seemingly small can ripple through the whole system.

I hear Idris is more reasonable in this regard and has better general purpose programming properties.

-----


> The problem with theorem provers like Agda and Coq is that they are brittle under refactoring, which makes it hard to change an app. Something seemingly small can ripple through the whole system. I hear Idris is more reasonable in this regard and has better general purpose programming properties.

You can say the same about any strongly typed language, but in reality it's the opposite: if your program has a good design, it's easy to refactor.

-----


source: http://www.johndcook.com/blog/2014/02/10/real-world-haskell/...

-----
