Writing a toy ray tracer is something that I'd recommend to everyone who's interested in becoming a better programmer. Once you get deep enough into it you'll encounter so many interesting problems (data structures, software architecture, some linear algebra, etc.) to hone your skills on. It's almost up there with "write your own compiler" as a learning experience in my opinion, and you get the added benefit of creating some pretty pictures in the process.
I do. I've got the server counterpart set up on a Linode and sync my task lists across all devices. That's quite nifty because I usually think of things I should be doing while I'm on the go. At the moment I need this kind of organization to get anything done, and out of all the solutions I've tried this one is by far the best. The feature I use the most is probably scheduling: after dinner I'll sit down for a moment, consider which of my open tasks I'll do the next day, and schedule them accordingly. The next morning I run a `scheduled:today` query and basically have half the day laid out in front of me already. The best part, though, is that by running my own task server (at no additional cost, since the same box doubles as my mail server), all the data belongs to me instead of to a third party.
The beauty of Linux (and of a lot of well-maintained open source software) is that security flaws are usually patched rather quickly -- with the possible exception of the disaster that is X, but Wayland is coming along nicely.
I don't know why you are getting downvoted - ATMs are never connected to the Internet directly; they connect to the bank either over a built-in modem (yes, the dial-up variety) or over their own VPN systems. So it's extremely unlikely, if not outright impossible, for them to be downloading their own software updates unless those updates are published by the bank for the machines to fetch.
There have been a few bugs that were patched but turned out to have been introduced quite a while before they were reported and patched. That doesn't mean they weren't found long before they were reported/fixed, though.
I agree that free software seems to be very responsive to disclosed issues, though, and while many vendors (perhaps especially Microsoft) have gotten a lot better lately, the image remains (rightly or not) that closed source companies move (too) slowly when it comes to patching security issues.
(Part of that is quality control; patching one bug tends to introduce/expose others.)
The strategy is to get the largest tile in a lower corner, with the next biggest tile next to it, and then the next, across the bottom. Once you have the bottom row filled, concentrate on putting tiles into the next row which will match the ones on the bottom row. Initially, when the tiles on the bottom row aren't that big, you should concentrate on matching the biggest tile. But as that becomes too large, concentrate on matching the next biggest, then the next, and so on. Of course as soon as you do match a tile on the bottom, merge with it so the bottom row always contains the biggest possible value. This strategy ought to get you to 2048 within 3 tries if you don't make too many mistakes.
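Out of curiosity I tried to write the idea down as an evaluation heuristic. Here's a rough Python sketch, purely illustrative -- the 4x4 list-of-lists board and the weight matrix are my own assumptions, not anything the game or an AI actually uses:

```python
# Snake-shaped weights: biggest weight in the lower-left corner, decreasing along
# the bottom row, then winding back up row by row. Empty cells are 0.
WEIGHTS = [
    [2**0,  2**1,  2**2,  2**3],
    [2**7,  2**6,  2**5,  2**4],
    [2**8,  2**9,  2**10, 2**11],
    [2**15, 2**14, 2**13, 2**12],  # bottom row, largest tile belongs in the corner
]

def evaluate(board):
    """Score a 4x4 position: high when the big tiles sit along the bottom row in order."""
    return sum(board[r][c] * WEIGHTS[r][c]
               for r in range(4) for c in range(4))
```

Following the strategy is effectively preferring moves that keep this score high: the biggest values along the bottom row, the largest one pinned in the corner.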
I remember watching this AI play and wondering whether I could write my own strategy into an AI. The AI's strategy suffers from the issues that my strategy tries to combat. Perhaps someone can tell from my screenshot what my strategy is.
To be honest, screenshots of this game are no longer proof of anything. I'm not calling you dishonest, just saying that I could run the AI version, change the address at the top when it wins, and use any browser's inspect feature to remove the two buttons above the game field. It's ridiculously easy to fake winning screenshots of 2048 now.
Your strategy? Up and to the left, a.k.a. keep all the empty spaces next to the lowest-numbered tiles.
The way I beat the AI was to give it isolated 2s on opposite sides of the board and then flood the board with 4s. This prevented the AI from keeping all the low numbers together and blocked some tiles from use (the ones with the isolated 2s).
I'm running dwb, version 2013.08.03 (either the Fedora maintainer is slightly lazy or there hasn't been a new release in a while), on Fedora 20. It reports Safari 538 as the browser and OS X as the operating system. I've also got cookies disabled and it reports them as enabled, which is strange.
This might come across as quite harsh, but the article could have been much shorter.
NP is the class of decision problems that can be solved by a non-deterministic Turing machine in polynomial time. A problem is called NP-hard when there is a polynomial-time reduction of 3-SAT to said problem. A problem is called NP-complete if it is both NP-hard and in NP. Since TSP is not a decision problem, it is not in NP, therefore it cannot be NP-complete. However, it is NP-hard.
One can call this a minor nitpick. I still point it out every now and then when I hear people talking about such things. However, the really important thing is that we haven't found a way to solve NP-hard (and hence of course NP-complete) problems efficiently on a deterministic Turing machine (and hence on the computers we can actually build). An informal way of saying the same thing is "TSP is really really hard to solve".
Either way, I'm always surprised at how many of the problems we encounter in our everyday lives are actually NP-hard. Every decent puzzle game for instance is usually NP-hard. Goes to show that we don't really enjoy solving easy problems.
EDIT: Of course there's also a decision version of TSP, as others have pointed out; that one is NP-complete.
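To make the distinction concrete, here's a minimal sketch of that decision version (Python, brute force, with a made-up dict-of-dicts distance representation): given the pairwise distances and a budget k, it answers a plain yes or no. A "yes" comes with a tour that can be checked in polynomial time, which is exactly why this version is in NP.

```python
from itertools import permutations

def tsp_decision(dist, k):
    """Is there a tour visiting every city exactly once with total length <= k?
    dist is a dict of dicts of pairwise distances; the answer is just True/False."""
    cities = list(dist)
    start, rest = cities[0], cities[1:]
    for order in permutations(rest):
        tour = (start,) + order + (start,)
        if sum(dist[a][b] for a, b in zip(tour, tour[1:])) <= k:
            return True
    return False
```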
>Every decent puzzle game for instance is usually NP-hard.
An interesting question is how many of these puzzles are NP-hard in practice. For example, Minesweeper is NP-hard. However, it is possible to solve many games of Minesweeper using simple deductive reasoning in polynomial time. This leads to the question of what the average difficulty of a random game is. My best attempt at formalizing this is to imagine reducing every game with a polynomial-time algorithm, then looking at the complexity of the remaining games. The complexity of an original game of size n could be the average after you reduce all games of size n with your polynomial-time algorithm. If this new complexity is sub-exponential, then it feels as if there is some sense in which the game is not often NP-hard.
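As a sketch of what I mean by "simple deductive reasoning" (Python; the board representation and the names are made up for the illustration), repeatedly applying the two obvious single-cell rules already solves many boards without any search:

```python
def single_point_pass(clues, hidden, flagged, neighbours):
    """One pass of the two basic Minesweeper deductions.
    clues: {revealed cell: its number}; hidden, flagged: sets of cells;
    neighbours(cell): iterable of adjacent cells.
    Returns (cells that are provably safe, cells that are provably mines)."""
    safe, mines = set(), set()
    for cell, n in clues.items():
        hid = [c for c in neighbours(cell) if c in hidden]
        flg = [c for c in neighbours(cell) if c in flagged]
        if not hid:
            continue
        if len(flg) == n:               # all mines around this clue are flagged
            safe.update(hid)            # -> every other hidden neighbour is safe
        elif len(flg) + len(hid) == n:  # only enough hidden cells left to be mines
            mines.update(hid)
    return safe, mines
```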
A more concrete example that comes to mind is Haskell's type inference. In general this type inference is NP-hard. However, there are algorithms that can solve it in polynomial time assuming a limit on nesting depth. Because no such limit actually exists, type inference is still exponential. But because we never see arbitrarily deep nesting (except in contrived examples), this doesn't matter in practice.
Many NP-hard or NP-complete problems have (infinitely many) instances that are "easy" to solve. It is quite straightforward to write a procedure that produces infinitely many instances of 3-SAT that can be solved in polynomial time by the DPLL algorithm. But it is important to note that a problem is actually a set of instances.
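To illustrate the 3-SAT point, here is one such procedure (Python, with a DIMACS-style integer encoding I'm assuming for the example): every instance it produces is an implication chain padded to three literals by repetition (most formulations allow repeated literals; if not, pad with fresh dummy variables instead). DPLL satisfies each one by unit propagation alone, without ever branching, and since n is arbitrary there are infinitely many of them.

```python
def easy_3sat_instance(n):
    """A 3-SAT instance (list of 3-literal clauses; integer k = x_k, -k = not x_k)
    that unit propagation alone satisfies: x1, and x_i -> x_(i+1) for all i."""
    clauses = [(1, 1, 1)]                                  # forces x1 = True
    clauses += [(-i, i + 1, i + 1) for i in range(1, n)]   # x_i -> x_(i+1)
    return clauses
```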
Your observation basically boils down to "how rare is the worst case?". There are algorithms with exponential worst-case complexity that are used all over the place because - on average - they perform really well. The simplex algorithm for linear programming is one such example: every commonly used pivoting rule has a known worst-case family of problems on which the algorithm takes exponential time, yet on average it is quite fast and can handle millions of variables and corresponding constraints on modern hardware. Still, that doesn't change the fact that the simplex method is - from a theoretical point of view - an exponential-time algorithm in the worst case (linear programming itself can be solved in polynomial time by other methods, such as interior-point algorithms).

As is the case with DPLL, good heuristics can speed up an algorithm a lot without changing its worst-case complexity. Other examples are all over the place in AI, which is also where the concept of pruning is probably encountered most often: the practice of discarding whole subtrees of the search tree by some sort of heuristic or deductive reasoning.
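For readers who haven't met pruning before, here's a minimal alpha-beta sketch (Python; `children` and `leaf_value` are placeholder functions I'm assuming for the example). The point is the early `break`: once the bounds prove a subtree cannot affect the result, it is never explored.

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, leaf_value):
    """Minimax with alpha-beta pruning; children(node) and leaf_value(node)
    are supplied by the caller (placeholder names for this sketch)."""
    kids = children(node)
    if depth == 0 or not kids:
        return leaf_value(node)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, leaf_value))
            alpha = max(alpha, value)
            if alpha >= beta:   # the rest of this subtree cannot matter: prune
                break
        return value
    value = float("inf")
    for child in kids:
        value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                     True, children, leaf_value))
        beta = min(beta, value)
        if alpha >= beta:       # prune
            break
    return value
```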
Those things aside, I think what we should be looking at from a theoretical point of view is when it makes sense to consider the average-case complexity over the worst-case complexity. This brings us back to "how rare is the worst case". Quicksort, for instance, is said to be an O(n log n) algorithm but it has O(n^2) worst-case complexity (trivially so). However, a good sorting implementation can detect when it is heading for this case - for example by tracking recursion depth - and switch to a different algorithm (something similar happens in the std::sort function implemented in the STL). This is definitely a case where it makes sense to "forget" worst-case complexity.
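As far as I know, what the common std::sort implementations actually do is introsort: run quicksort, but keep an eye on the recursion depth and bail out to heapsort once it gets suspiciously deep, so the O(n^2) case can never materialize. A rough Python sketch of the idea (not the real implementation; `sorted` and `heapq` stand in for the insertion-sort and heapsort phases):

```python
import heapq
from math import log2

def introsort(a):
    """Sort list a in place, quicksort-style, with a depth-limit fallback."""
    _introsort(a, 0, len(a), 2 * int(log2(len(a) or 1)) + 1)

def _introsort(a, lo, hi, depth):
    if hi - lo <= 16:
        a[lo:hi] = sorted(a[lo:hi])                    # small range: finish directly
        return
    if depth == 0:                                     # recursion too deep:
        a[lo:hi] = heapq.nsmallest(hi - lo, a[lo:hi])  # heap-based fallback
        return
    p = _partition(a, lo, hi)
    _introsort(a, lo, p, depth - 1)
    _introsort(a, p + 1, hi, depth - 1)

def _partition(a, lo, hi):
    """Lomuto partition around the last element; returns the pivot's final index."""
    pivot, i = a[hi - 1], lo
    for j in range(lo, hi - 1):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi - 1] = a[hi - 1], a[i]
    return i
```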
For SAT, researchers have actually worked up statistics on how hard certain instances are in terms of the number of clauses and the number of symbols. It turns out that there is a range of the ratio between the two where problems are the hardest (roughly where clauses/symbols is between 4 and 6). I don't have a source right now, but googling "satisfiability threshold conjecture" might turn up some results.
As such, yes, not all instances of NP-hard problems are actually hard to solve, but there are infinitely many that are. When encountering such problems in practice, one can usually make use of the special properties of the instance (or class of instances) one actually encounters and devise an algorithm that performs rather well on average.
I thought the hash was in fact used for ensuring data integrity. That's pretty much what Linus stated when he said you have a guarantee that the data you put into your repository is exactly the data you get out of it.
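As a quick illustration (Python; git still uses SHA-1 by default here), a blob's object name is just the hash of a short header plus the contents, so any change to the data changes the name:

```python
import hashlib

def git_blob_sha1(content: bytes) -> str:
    """Compute the object name git gives a blob with these contents
    (the same value `git hash-object` prints for the file)."""
    header = b"blob %d\0" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# Should match: echo 'hello world' | git hash-object --stdin
print(git_blob_sha1(b"hello world\n"))
```

The commit in turn stores the hash of its tree, and the tree stores the hashes of its blobs, so the whole history hangs off those names -- that's where the guarantee comes from.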
I'm no expert, but as far as I understand it, the signature would be valid as long as the hash stays the same. If the commit has been tampered with in such a way that the hash does not change, the signature would still appear valid.