Hacker News
Why SAT Is Hard (matklad.github.io)
96 points by yarapavan on Feb 26, 2023 | 103 comments



> SAT is harder than any NP problem

This isn’t really true; as any undergrad will have been told, it’s at least as hard as any NP problem, where A is at least as hard as B when there is a polynomial-time computable function f such that we may determine whether an arbitrary b ∈ B by determining whether f(b) ∈ A. (I think I’ve got that the right way round.)

There are lots of these!⁰

It’s not obvious to me that there’s a robust, meaningful and useful notion of hardness on which SAT is the hardest NP problem.

0: https://en.wikipedia.org/wiki/List_of_NP-complete_problems


To form a counterexample, could you make a marginally more difficult version of SAT? Or would it always be reducible to SAT?


If it’s NP it’s polynomially reducible to SAT. There are problems that do not admit a polynomial reduction to SAT, but by Cook-Levin they’re not in NP.

It is of course possible to construct a problem that will require strictly more steps than the most efficient algorithm for SAT; for example, we could give a language SAT-PRIME = {<φ,n>: φ is satisfiable and n is prime}, which will take longer to decide than SAT. (Primality can be checked by the Agrawal–Kayal–Saxena test in polynomial time.) But note that SAT-PRIME is polynomially reducible to SAT, so on the notion of hardness as given by polynomial reducibility, it is in fact just as hard.
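For concreteness, the polynomial reduction from SAT-PRIME back to SAT can be sketched in a few lines (my own illustration, representing formulas as CNF clause lists and using naive trial division in place of AKS):

```python
# Sketch of the reduction from SAT-PRIME to SAT (my own illustration,
# not from the comment). Formulas are CNF clause lists over integer
# variables, where a negative literal means negation.

def is_prime(n: int) -> bool:
    # Trial division; polynomial in the *value* of n, which suffices for
    # illustration (AKS is polynomial in the number of digits of n).
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# A fixed unsatisfiable CNF: (x1) AND (NOT x1)
UNSAT = [[1], [-1]]

def reduce_sat_prime_to_sat(phi, n):
    """Map <phi, n> to a formula satisfiable iff <phi, n> is in SAT-PRIME."""
    return phi if is_prime(n) else UNSAT

# phi = (x1 OR x2) is satisfiable, so <phi, 7> is in SAT-PRIME and maps
# to phi itself, while <phi, 8> maps to the unsatisfiable stub.
print(reduce_sat_prime_to_sat([[1, 2]], 7))  # [[1, 2]]
print(reduce_sat_prime_to_sat([[1, 2]], 8))  # [[1], [-1]]
```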

Of course, there are other notions of hardness, but the question is whether those notions are actually useful. Not really, in this case. P is closed under many things; in particular, it is self-low—i.e., L ∈ P just in case L ∈ P^P, where A^B = {L: L is decidable in A with access to an oracle for B, which decides instances of B in one step}. Now being a bit more fine-grained is useful for problems we know to be in P—for example, the recent result on max flow.⁰ But when we’re dealing with exponential-time best-known worst-case complexities (which is the case for all NP-hard problems, since we have no proof yet that P = NP), plonking on a polynomial doesn’t seem so bad.

0: https://arxiv.org/abs/2203.00671


> It is of course possible to construct a problem that will require strictly more steps than the most efficient algorithm for SAT; for example, we could give a language SAT-PRIME = {<φ,n>: φ is satisfiable and n is prime}, which will take longer to decide than SAT.

A minor point, admittedly, but the second part of this statement is not true. Running time is w.r.t. the length of the input, and <φ,n> is longer than just φ. Deciding whether n is prime is easy, so per bit of input the algorithm is faster.


Oh, you’re right! Well, hm, how about SAT-PRIME' = {φ: φ is satisfiable and its Gödel number (under some scheme) has [some property]}.


I would guess that this can work, but it seems impossible to prove. I do not have any good candidates for an NP problem that is provably harder than SAT for a deterministic Turing machine, though I would not be surprised if some tricky diagonalisation argument works.


This post sketches a proof of the NP-hardness of the SATisfiability of boolean formulas [1].

[1] https://en.wikipedia.org/wiki/Cook%E2%80%93Levin_theorem


There is an interesting bit of nuance here.

Yes, SAT is hard.

But, the vast majority of SAT problems can be solved really fast.

The second one is one of the little-known advancements in computer science that happened this century. Fast SAT solvers power a lot of the interesting software out there.
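To make "SAT solver" concrete, here is a toy DPLL-style solver sketch (my own simplification; real solvers like MiniSat add conflict-driven clause learning, watched literals, and restart heuristics):

```python
# Minimal DPLL-style SAT solver sketch (a toy; not how production
# solvers are built). CNF formulas are clause lists; literals are
# nonzero ints, where -v means NOT v.

def dpll(clauses, assignment=None):
    assignment = dict(assignment or {})
    # Unit propagation: repeatedly assign literals that are forced.
    changed = True
    while changed:
        changed = False
        simplified = []
        for clause in clauses:
            lits = []
            satisfied = False
            for lit in clause:
                v, want = abs(lit), lit > 0
                if v in assignment:
                    if assignment[v] == want:
                        satisfied = True
                        break
                else:
                    lits.append(lit)
            if satisfied:
                continue
            if not lits:
                return None  # empty clause: conflict
            if len(lits) == 1:
                assignment[abs(lits[0])] = lits[0] > 0
                changed = True
            simplified.append(lits)
        clauses = simplified
    if not clauses:
        return assignment  # every clause satisfied
    # Branch on the first unassigned variable.
    v = abs(clauses[0][0])
    for choice in (True, False):
        result = dpll(clauses, {**assignment, v: choice})
        if result is not None:
            return result
    return None

# (x1 OR x2) AND (NOT x1 OR x2) AND (NOT x2 OR x3)
print(dpll([[1, 2], [-1, 2], [-2, 3]]))  # {1: True, 2: True, 3: True}
```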


>> Fast SAT solvers power a lot of the interesting software out there.

Can you give examples? I can't think of anything I use regularly that would use that, with possible exceptions of Make and maybe a compiler (for optimisation)


Package manager, Static Analysis, Model Checking (like TLA+), probably scheduling software.

I'd concur that these are interesting but maybe seldom used pieces of software, except for package management.


They play a big role in hardware design and verification as well. All in all, SAT solvers became very good, so the solution to many NP-hard problems is to encode them in SAT and solve them this way.


For model checking Alloy might be a better example, it directly builds SAT instances (through the Kodkod constraint solver) that are then solved with a SAT solver.

I don't think the standard TLA+ checker (TLC) uses SAT explicitly, though there's an alternate checker (Apalache) that uses a Satisfiability Modulo Theories solver like Z3. (Caveat that I've been wrong about stuff like this before, never dug into the implementation of TLA+ tools, but that's what I pick up from https://lamport.azurewebsites.net/tla/tools.html )


Yup, TLC is a brute force model checker and Apalache uses SMT.


Most package managers don't actually use SAT, because it's hard to get data about the version conflicts when it's unsatisfiable.


Think of it in terms of weird graph problems and algorithms.

It is now easier and faster to encode most graph problems and have a SAT solver go at it, compared to carefully designing and tuning your custom graph algorithm for performance in the kinds of graphs you care about.

As for examples, there isn't much retail software that really needs SAT. But you might have indirectly used firewall/configuration management, home automation, flight/factory scheduling, route planning, etc.
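A sketch of what "encode a graph problem and hand it to a SAT solver" looks like, using graph 3-coloring as the example (my own toy encoding; real pipelines would emit DIMACS CNF for a solver like MiniSat):

```python
# Encode graph 3-coloring as CNF (illustrative toy encoding).
# Variable var(v, c) is true iff vertex v receives color c.

def color_to_cnf(n_vertices, edges, n_colors=3):
    def var(v, c):
        # 1-based, DIMACS-style variable number for "vertex v has color c"
        return v * n_colors + c + 1

    clauses = []
    for v in range(n_vertices):
        # Each vertex gets at least one color...
        clauses.append([var(v, c) for c in range(n_colors)])
        # ...and at most one color.
        for c1 in range(n_colors):
            for c2 in range(c1 + 1, n_colors):
                clauses.append([-var(v, c1), -var(v, c2)])
    # Adjacent vertices never share a color.
    for (u, w) in edges:
        for c in range(n_colors):
            clauses.append([-var(u, c), -var(w, c)])
    return clauses

# Triangle graph: 3 vertices, 3 edges.
cnf = color_to_cnf(3, [(0, 1), (1, 2), (0, 2)])
print(len(cnf))  # 3 at-least-one + 9 at-most-one + 9 edge clauses = 21
```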


Print shops use SAT solvers to lay out pages on a large sheet of paper in the most efficient way, for example.


He's stretching the definition of "interesting" I think. The main application of modern SAT solvers (and SMT solvers) is formal verification, which I do think is interesting, but you're very unlikely to be using it.

There are some other uses, e.g. for programming languages with advanced type systems, but I doubt most people interact with SAT solvers basically ever.


Have you used, interacted with, or benefited from AWS? ;)

You are right that most people do not have to interact with SAT solvers. But everyone has, almost surely, interacted with software which uses SAT solvers, or has used development tools built on them.

If not software, your chips were definitely designed using tools which are nowadays based on SAT.


> If not software, your chips were definitely designed using tools which are nowadays based on SAT.

To some degree sure (assuming a generous definition of "designed"). I mentioned formal verification, and I expect synthesis uses it a bit. Still a pretty minor use IMO. You definitely could still design chips without SAT/SMT.


Wait, does this mean "hard" refers merely to worst-case complexity, not to average-case complexity?

Which would seem strange, since, intuitively, a hard worst-case means just that the problem is "possibly hard".


Yes, "hard" in this context refers to worst-case complexity. In practice, lots of NP-hard problems can be solved, or approximately solved, in a reasonable amount of time. This is why traveling salesmen can exist, for instance.
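As a sketch of "approximately solved": the nearest-neighbor heuristic for the traveling salesman problem is fast and often decent, even though it has no constant-factor guarantee (Christofides gives a 1.5x guarantee for metric TSP). A toy version:

```python
# Nearest-neighbor heuristic for Euclidean TSP (toy illustration of an
# approximate solution to an NP-hard problem): from each city, greedily
# visit the closest unvisited city. O(n^2) time, no optimality guarantee.
import math

def nearest_neighbor_tour(points):
    unvisited = set(range(1, len(points)))
    tour = [0]  # start at city 0
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(points, tour):
    # Total length of the closed tour (returning to the start).
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

pts = [(0, 0), (0, 1), (1, 1), (1, 0)]  # corners of the unit square
t = nearest_neighbor_tour(pts)
print(t, tour_length(pts, t))  # walks the square's perimeter: length 4.0
```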


Then it seems there should exist a more interesting category "really hard" which refers to average-case complexity. Yet I've never heard of such a complexity class?


Average-case complexity is studied quite a bit. There is some discussion on Wikipedia [1]. Also see this paper, which discusses the average-case complexity of NP problems [2].

The reason why it's not studied as much by theoretical computer scientists as it is by algorithm implementers is that "average" is not uniquely defined. For instance, you can talk about the uniform distribution for a lot of problems, but not always. Suppose you want to investigate the complexity of solving certain types of equations involving matrices as unknowns. How do you take the uniform distribution over matrices over the (infinite) real numbers? In principle, there are technical definitions you can make [3], and [2] has some results about this, but at the end of the day, there is no objectively superior way of doing it.

What this all leads to are two questions. (1) Would studying average-over-some-sampling-distribution-case complexity yield insights into what computation is? I don't know.

(2) Is it practically useful to classify problems in this way? Not really. Because in the real world, your distribution will rarely be uniform, so it is far better to optimize your algorithms for the distribution you do get.

[1] https://en.wikipedia.org/wiki/Average-case_complexity

[2] https://arxiv.org/pdf/cs/0606037.pdf

[3] https://mathoverflow.net/questions/76295/intuition-for-haar-...


Well, thanks for the Wikipedia link. It says average complexity is very important for sorting algorithms and cryptographic one-way functions. I found especially this part remarkable:

> Note that it is not desirable for the candidate function to be NP-complete since this would only guarantee that there is likely no efficient algorithm for solving the problem in the worst case; what we actually want is a guarantee that no efficient algorithm can solve the problem over random inputs (i.e. the average case). In fact, both the integer factorization and discrete log problems are in NP ∩ coNP, and are therefore not believed to be NP-complete.

Regarding this question:

> How do you take the uniform distribution over matrices over the (infinite) real numbers?

Well, naively, how about screw the real numbers and just take the uniform distribution over the, say, 64 bit floats, which are conveniently finite? At least if we use those in practice? But I guess this goes in the direction of "Because in the real world, your distribution will rarely be uniform", since the 64 bit floats are a subset of the real numbers, and any uniform distribution over the former is equivalent to a non-uniform distribution over the latter (probability 0 to any reals not in the 64 bit floats).


64-bit floats are a finite set. The solution of problems where the real numbers are replaced by the finite set is sometimes easier, but often much more difficult. The reason is that you can't do calculus over finite fields. For instance, taking the inverse of the natural log is efficient, but the inverse of the discrete log is a hard problem, and hence is used in cryptographic algorithms.
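The asymmetry can be seen directly: modular exponentiation is near-instant, while inverting it (the discrete log) is, absent any known fast algorithm, a brute-force search. A sketch with small illustrative numbers, nowhere near cryptographic sizes:

```python
# Forward direction (modular exponentiation) vs. inverse direction
# (discrete log). Illustrative parameters only: 1000003 is prime, and
# real cryptography uses moduli thousands of bits long.

p, g = 1_000_003, 5

def discrete_log_bruteforce(h, g, p):
    """Find some x with g^x = h (mod p) by trying exponents: O(p) time."""
    acc = 1  # g^0
    for x in range(p):
        if acc == h:
            return x
        acc = (acc * g) % p
    return None

x = 123_456
h = pow(g, x, p)                   # forward: fast built-in square-and-multiply
found = discrete_log_bruteforce(h, g, p)  # inverse: loops up to p times
print(pow(g, found, p) == h)       # True: we recovered a valid exponent
```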

Complexity theory is a very subtle science.


Nonetheless, in practice most algorithms have a limited range of inputs, in which case there should be no problem with infinity?


Beyond average case complexity, which is discussed below, there is also a rich research line on approximation algorithms. Interestingly, the effectiveness of approximation algorithms is not uniform across NP-Complete problems. There are problems where efficient approximation algorithms can be arbitrarily good and there are problems where if P!=NP then approximation algorithms have hard caps on their effectiveness. This is relevant for a lot of optimization tasks where you don't necessarily care about getting the best possible solution but would be happy getting pretty close.


The issue with defining such terms properly is that it's unclear what makes an 'average' problem instance.

For SAT, you can sort of correlate difficulty with the number of irreducible clauses.


That seems very reasonable for SAT.


As a nit, I think the vast majority cannot be solved fast? A large number of interesting ones can, though. Right?


Maybe (I can't remember, and might never have known), but not necessarily. As an example of the idea, consider the simplex algorithm as it applies to linear programming. It has exponential worst-case behavior, but people noticed that in practice the vast majority of interesting problems could be solved quickly. Later, smoothed analysis showed that the non-polynomial inputs do actually lie in some diminutive subspace: perturb any input slightly at random and the expected running time becomes polynomial.


"Vast majority" and "vast majority of interesting ones" may be completely different. The vast majority are just random; the vast majority of interesting ones will have some kind of structure, given by the real-world problem they represent.


As I understand it, a randomly generated SAT problem will typically not be solvable quickly; however, SAT problems in the real world, even when they're large, tend to be quick for SAT solvers (most people solve the "Wedding Seating" problem for their own weddings, after all!). Why ordinary SAT problems tend to be easy to solve was, last I checked, an open problem.


The "vast majority" of SAT instances would be something like instances drawn uniformly at random from some kind of distribution over all possible formulas. Making this idea rigorous is what random k-SAT is all about.

It turns out that most SAT instances generated this way will lack certain features that tend to make industrial instances "hard," and in fact there is research on generating random SAT instances which do have those features and thus are more difficult to solve. For instance: https://dl.acm.org/doi/fullHtml/10.1145/3385651
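A sketch of the uniform random k-SAT model being described, with a brute-force check standing in for a real solver (my own toy code; for random 3-SAT, the hardest instances cluster near clause-to-variable ratio m/n ≈ 4.27):

```python
# Uniform random k-SAT: fix n variables, draw m clauses of k distinct
# variables each, negating each literal with probability 1/2.
import random

def random_ksat(n_vars, n_clauses, k=3, seed=0):
    rng = random.Random(seed)
    clauses = []
    for _ in range(n_clauses):
        chosen = rng.sample(range(1, n_vars + 1), k)
        clauses.append([v if rng.random() < 0.5 else -v for v in chosen])
    return clauses

def brute_force_sat(clauses, n_vars):
    # Exponential check, fine for tiny n; real solvers scale far further.
    for bits in range(2 ** n_vars):
        assign = [(bits >> i) & 1 == 1 for i in range(n_vars)]
        if all(any(assign[abs(l) - 1] == (l > 0) for l in cl)
               for cl in clauses):
            return True
    return False

# Under-constrained (ratio 2) vs over-constrained (ratio 6) instances:
# the former is almost surely satisfiable, the latter almost surely not.
easy = random_ksat(12, 24, seed=1)
hard = random_ksat(12, 72, seed=1)
print(brute_force_sat(easy, 12), brute_force_sat(hard, 12))
```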


Those dependency chains in apt, brew, port/pkg... that's one of the problem spaces SAT solvers address.

I hate apt/deb bundling. I prefer brew/port/pkg atomicity, with explicitly defined version-dependent declarations and distinct metaports.

That might be how most people interact with SAT.

Makefiles too, maybe; I'm unsure if make does its dependency checks via SAT.


You don't need any complex algorithms such as SAT solvers for makefiles, because make does not need to choose from alternative ways to build an out-of-date makefile dependency. Either the dependency file already has a date newer than all its dependencies, or the file will be rebuilt by following deterministic rules.

The reason package dependencies can get complex in Linux distributions is that you can choose different versions of a package, and each version has dependencies on specific version ranges of other packages. Of course, it would be a lot easier if you could install multiple versions of the same package in parallel and didn't care about minimising the number of parallel versions.
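As a sketch of why version solving is SAT-shaped, here is a hypothetical three-package universe (my own illustration, not any real package manager's encoding):

```python
# One boolean variable per package version; clauses say "installed
# implies dependencies satisfied" and "versions are mutually exclusive."
# Hypothetical packages: app-1.0 needs lib-2.0; lib-1.0 and lib-2.0
# conflict. Variables: 1 = app-1.0, 2 = lib-1.0, 3 = lib-2.0.
clauses = [
    [1],         # we want app-1.0 installed
    [-1, 3],     # app-1.0 installed -> lib-2.0 installed
    [-2, -3],    # at most one version of lib
]

def satisfiable(clauses, n_vars=3):
    # Tiny brute-force check standing in for a real SAT solver.
    for bits in range(2 ** n_vars):
        assign = [(bits >> i) & 1 == 1 for i in range(n_vars)]
        if all(any(assign[abs(l) - 1] == (l > 0) for l in cl)
               for cl in clauses):
            return assign
    return None

# -> [True, False, True]: install app-1.0 and lib-2.0, skip lib-1.0.
print(satisfiable(clauses))
```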


For those confused, this is not "SAT" as in the Scholastic Assessment Test, but rather SAT for "boolean SATisfiability"[1].

[1] https://en.wikipedia.org/wiki/Boolean_satisfiability_problem


If it were the former, the title would have to be “why the SAT is hard.”


Interesting informal summary of SAT. However, there is no such thing as "the hardest" NP-complete problem. By definition of NP-completeness, if you solve ANY NP-complete problem in polynomial time, you can solve every other NP-complete problem in polynomial time, thus they have equivalent difficulty (although some problems may seem intuitively easier than others).


That's only true if you focus purely on the cost as being polynomial or not. The constant factors, the exponents, and the ability to represent difficult problems compactly, can of course differ pretty substantially. The ability to find approximate solutions, and the difficulty of the average case, can also differ substantially from problem to problem. An example of an application where these kinds of details are crucial is cryptography.


I think that was a slightly odd way of mentioning https://en.m.wikipedia.org/wiki/NP-hardness


That is not entirely true. If you compare two implementations of bubble sort, they will both be O(n^2). Yet one can still be much more efficient than the other.

x^4 and x^10 are both polynomials.


> That is not entirely true

what are you referring to?

Obviously x^4 != x^10, but no one has ever proven that "if SAT is in P, then it would have the lowest (or highest) polynomial complexity among the other problems", or anything similar


I was referring to this:

> [All NP-hard problems] have equivalent difficulty.

I get what OP meant, but there can still be some NP-hard problems that are more difficult than others.

As I said, x^4 and x^10 are both polynomials. I wouldn't call them "equivalent" though.


Last time I looked at NP complete problems people were telling me to use SAT-family algorithms to solve them, so I’m quite confused at the moment.


Let me first try to briefly summarize NP-completeness:

A problem A is NP-complete if:

- you can verify its solution in polynomial time

- you can reduce it to any other NP-complete problem in polynomial time

Since polynomial reduction is transitive, the second point is equivalent to saying that "you can reduce it to SAT in polynomial time".

The reduction is simply a function that compiles the starting problem to SAT and preserves its solvability (in a bijective way). In other words, to prove that a problem is NP-complete, you must write a compilation function to SAT and also prove that every solution to the starting problem is also a solution to the destination problem in SAT. Furthermore, you must also prove that any solution in the SAT version can be "decompiled" to a proper solution in the starting problem, so this is where the equivalence comes from, and it's the reason that SAT is not really any harder than other NP-complete problems.

The thing is, SAT has been heavily researched, so that's why you have extremely efficient SAT solvers. So if you want to solve an NP-complete problem, you can simply compile it to SAT, solve it with well-known SAT solvers, and "decompile" the solution back to your starting problem.
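That compile/solve/decompile pipeline can be sketched end to end on a toy problem (my own illustration, using graph 2-coloring and a brute-force stand-in for a real SAT solver):

```python
# Round-trip sketch: compile a problem to SAT, solve, decompile the
# model back into a solution of the original problem. Toy problem:
# 2-coloring a graph. Variable v+1 true means vertex v is "red".

def compile_2coloring(edges):
    # Endpoints of each edge must differ: (u XOR w) as two CNF clauses.
    clauses = []
    for (u, w) in edges:
        clauses.append([u + 1, w + 1])        # not both blue
        clauses.append([-(u + 1), -(w + 1)])  # not both red
    return clauses

def solve(clauses, n_vars):
    # Brute force standing in for MiniSat / CaDiCaL and friends.
    for bits in range(2 ** n_vars):
        assign = [(bits >> i) & 1 == 1 for i in range(n_vars)]
        if all(any(assign[abs(l) - 1] == (l > 0) for l in cl)
               for cl in clauses):
            return assign
    return None

def decompile(assign):
    # Turn the SAT model back into a coloring of the vertices.
    return ["red" if a else "blue" for a in assign]

# Path graph 0 - 1 - 2: compile, solve, decode back to a coloring.
model = solve(compile_2coloring([(0, 1), (1, 2)]), 3)
print(decompile(model))  # -> ['blue', 'red', 'blue']
```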


The other way around: to prove that X is NP-complete you need to reduce SAT (or any other known NP-complete problem) to X.

The intuition here is that if you have a magic algorithm which solves X efficiently, you should be able to use this algorithm to solve any other NP problem, like SAT.


Good post.

Tangentially related: I have thought recently that we really need to refresh the way that we explain these ideas to beginners in this new world of AI/ML doing everything.

For instance, here is an excerpt from Scott Aaronson's excellent paper on the topic:

"To illustrate, suppose we wanted to program a computer to create new Mozart-quality symphonies and Shakespeare-quality plays. If P = NP via a practical algorithm, then these feats would reduce to the seemingly easier problem of writing a computer program to recognize great works of art. And interestingly, P = NP might also help with the recognition problem: for example, by letting us train a neural network that reverse-engineered the expressed artistic preferences of hundreds of human experts. But how well that neural network would perform is an empirical question outside the scope of mathematics."

This has been kind of the standard way to explain this stuff to beginners for the last 20 years or so. It is an excellent paper and I highly recommend reading. Aaronson also goes on to explain how these are imperfect metaphors, that they are just for building intuition, etc, and gives better technical details. But it's still a good intuition-builder regardless.

Or at least it was. How do we explain all of this stuff now that we have StableDiffusion and ChatGPT? From here on, people are going to grow up in a world in which these things are commonplace, rather than it being questionable how physically possible they will ever be. The "empirical question" Aaronson talks about has largely been proven in the affirmative, so given that we are here, how do we explain this stuff now?


A rule of thumb: the first time you use an abbreviation in a text you explain it.


If you write a post for experienced Linux users about using Linux, do you generally write "first, install package xyz with Advanced Package Tool (APT)", or "switch to a Terminal TYpewriter (TTY)" or "Run the following Bourne Again SHell (BASH) script"? "Extract this Tape ARchive (TAR) ball"?

Some abbreviations are so well known in certain fields that spelling them out in a text written for people experienced with that field isn't that necessary. For people who have any familiarity with this field at all, SAT (and NP in NP-complete, for that matter) are such abbreviations. I'm not against the idea of explaining what SAT stands for either, but your rule clearly doesn't hold up.

EDIT: one point I will agree with though is that the Hacker News title should have been written for a more general audience; something like "Why SAT (boolean satisfiability) is hard". But that's a complaint about the submitter, not the author.


>If you write a post for experienced Linux users about using Linux, do you generally write "first, install package xyz with Advanced Package Tool (APT)", or "switch to a Terminal TYpewriter (TTY)" or "Run the following Bourne Again SHell (BASH) script"? "Extract this Tape ARchive (TAR) ball"?

I agree that GP's rule doesn't apply in all cases (and they did say "rule of thumb," which implies imprecision), but I think it applies to this blog post. The author explains Big-O notation and Turing Machines, but uses the abbreviation SAT without explaining what it means.

The inconsistency is a symptom of the author forgetting to think through their intended audience and what they can be assumed to know already. I see this often on technical blog posts, and it's unfortunate because it's easily fixable.

If the author is trying to reach the type of reader who doesn't immediately recognize terms like "Big-O" or "Turing Machine," then they need to explain SAT more.


Agree, I could follow everything in the article, but was like wtf is SAT, especially since it's in the title of the article. I imagine a big cross-section of folks had no clue, hence the comments section.

As for the previous commenter, all of their examples are pretty much irrelevant to what you are trying to do. A random Linux tutorial doesn't need to spell out CLI acronyms, but if you write an article "Hacking Bash for fun and profit" it's expected that you put in one background paragraph on bash and explain the acronym.


The thing is that SAT has many possible meanings as an acronym and this isn’t the most common one.

I’ve been out of academics for a while, and at first I thought this was talking about the American college exam.


This sounds like a complaint about the hacker news title, which should have been written for a general HN audience, not a complaint about the blog post, where it is perfectly clear (for its target audience) that SAT means the boolean satisfiability problem.


The title in the blog post is exactly the same.


Which is the problem if the title of the blog post is intended for a mathematical audience and the title of the HN post is intended for the general HN audience.


As others have pointed out, if this is a blog post for a mathematical audience that understands basic concepts in complexity theory, why bother explaining "Big-O notation is a standard instrument for describing performance of algorithms, as it erases small differences which depend on a particular implementation of the algorithm."

Feels like the audience that this post is directed to is essentially "an earlier version of the author", which, to be fair, is a common mistake. I anticipate the average HN reader should be able to understand this blog post - I know what the boolean satisfiability problem is but have never seen it abbreviated as just "SAT".

Don't mean to be overly harsh toward the author, but a first step in good writing is to define who your audience is.


FWIW I graduated over a decade ago, and even non-capitalized, I interpreted the title correctly immediately. The fact that it was on HN and said "hard" let me take a damn good guess that it would be the NP SAT problem.


This article explains what SAT is in basic enough terms that I could understand the idea. I'm guessing non-math, non-CS people are part of the intended audience. I had no idea till just now.


Fun fact, also "hard" in this title is a formally defined term.


Exactly what clued me in. It's an NP-Hard problem so "Sat" + "hard" are inextricably linked in my brain.


That's what I meant.


Agreed on principle. Always amusing to click a pandas post. I think I mainly do it to roll the dice and see what it will be. :)

I'm curious what else you think of when you see SAT, though?


How satisfactory something is perceived

My bodily arrangement when I ate breakfast

The sixth day of the week according to crontab

Etc etc


Not everyone here is an experienced prover/mathematician, and SAT can mean different things, and I believe many people will view it as the SAT test, especially when you pair it with "difficult" in the title without any other context.


I don't know how that disagrees with anything I said. I agreed that the HN title should've clarified what SAT means, since HN's readers aren't all in the target audience.


First, did I say my previous comment disagrees with anything you said? Second, if it's not due to "disagreement", why did you add the last "EDIT"?


I got the impression from your comment that you disagreed with my overall position. If that was incorrect, I apologize.


Totally agree with your comment. Just came in to leave this for others: I believe "TTY" stands for "teletype(writer)". Learned fun facts I didn't know when I hit Google to double-check: the original teletypes are descendants of a morse-code printing device called the "teleprinter" (from the 1830s!). One of the early popular devices for encoding and printing typographic characters was patented by Emile Baudot (for whom "baud rate" is named) in 1879! This means that the commercial mechanical typewriter barely predates the teletype.


> If you write a post for experienced Linux users about using Linux, do you generally write "first, install package xyz with Advanced Package Tool (APT)", or "switch to a Terminal TYpewriter (TTY)" or "Run the following Bourne Again SHell (BASH) script"? "Extract this Tape ARchive (TAR) ball"?

I would avoid abbreviations if possible: "install xyz with your package manager", "switch to terminal", "run the script", "extract the archive". In your last example, I would ask myself if I need to specify it's a TAR. Does it matter that it's TAR? Are there any other archives in this project?

I agree that you should write for your audience and if you know they are experts like you, you don't have to explain everything. Personally, I err on the side of assuming less and explaining more.

Funny thing is, if you search for SAT on Hacker News [1] it has results for both satisfiability problems and admission tests. Even funnier is that in the second context, SAT nowadays means just SAT.

1 - https://hn.algolia.com/?q=SAT


Here's an analogy: you write a blog post for a tech/UNIX audience, titled: "Why GNU tools are bloated" or something (stupid example). At some point, someone else posts a link to your blog post to a mathematics forum, changing the title to "Why Gnu Tools Are Bloated".

In those comments, people complain that they don't understand what kind of gnu you're talking about, that they thought you were talking about wildebeest herding, and they say you obviously should have expanded the GNU abbreviation in the title.

That's what I feel is going on here.


Sure, but then you should have "Linux" somewhere in the title of the post, in the header of the website (M's Linux Blog), or start the text with some context.

And not just dive in with something like this: "Ever wondered what happens if you open a TTY and APT something?", but more like: "Have you ever wondered what happens if you install Linux packages with APT on the console?"


Dude, the very beginning of the very first paragraph talks about complexity theory and NP-completeness. That's starting the text with the relevant context.


That would be expanding the abbreviation, not explaining it. If context calls for it, one could write "extract this tar ball (a file format for packaging many files into a single one)"


> If you write a post for experienced Linux users about using Linux, do you generally write "first, install package xyz with Advanced Package Tool (APT)"

Yes, it is generally a good practice.


TTY is teletypewriter.



Yeah.. Sat Is hard because of the Hangover from Fri?


Site Acceptance Test? SATellite communication? We don't know, but we now know it's hard.


The post does explain which SAT it's talking about, though it's a bit later than you might have hoped for.


Scholarly Aptitude Test


[flagged]


There are many, many people on here who do not have the slightest clue what complexity theory even is.

Let's try to educate more and condescend less.


Then maybe go read one of the many introductions on the topic? Not every blog post has to start at the beginning to be interesting here. There is nothing condescending about not starting every piece of technical writing from grade school arithmetic and the author makes it very clear who the audience for the post is.


The first sentence is literally, “An introductory post about complexity theory” but then goes on to assume we know about complexity theory.

The author has the right to write whatever they want, however they want, and they owe us nothing.

But your outburst when someone simply asks wtf, is nonsensical.


I interpreted "introductory post" to mean that it is introductory within the blog and possibly a lead in to further articles as the author hasn't written on the topic before. Not that it is necessarily an introduction to the topic.

People on this site are so full of themselves thinking that random technical bloggers on the internet have to satisfy some unwritten Hacker News style manual.


Really we are just being curious. I like articles that talk about computational complexity, I wanted to understand it. But there was literally no hook for me to hold onto. I didn’t even know what to search for. SAT is such a common acronym.

The blogger owes us nothing. You’re the one telling us that we should already have known. Maybe instead of criticising us, you could have helped us?


> But there was literally no hook for me to hold onto

Ahm, from the text: "Boolean satisfiability, or SAT is a decision problem where an input is a boolean formula like [...]"


Yeah but it appears ½ way down. I didn’t make it that far because I had no context.


> Maybe instead of criticising us, you could have helped us.

I mean, I did? While the reply was brusque I did name what the topic was about: boolean satisfiability, "It's pretty damn clear from the context of complexity theory that this is about boolean satisfiability."

The comment I replied to on the other hand was not constructive at all.

This is a link aggregator site and not an editorial site, the content linked to is varied and is not going to meet a single standard, or this site will become monotonous very quickly. I don't even click or read the majority of links here because they don't sound that interesting to me, which is fine. Not everything has to be aimed at the beginner or a particular audience.

When slightly less accessible content or just personally uninteresting content is posted I don't see how a content free snarky comment is at all constructive.


It's pretty damn clear if you have any idea what SAT and complexity theory are (which I don't). They DO eventually get around to mentioning in passing that SAT (the word in the headline, mind you) is short for satisfiability, but look where..

I can't comment on the contents of the article itself, I know nothing about the subject. I only mentioned my frustration that, as someone who dares explore links on the Internet, even if they can't extrapolate the subject from the headline (the audacity, I know), it's impossible from the first paragraphs of that text to even find out what kind of subject we're talking about here.. And the SAT word, well, for me, that's what tripped me up..

Look at this random article for comparison http://www.cs.toronto.edu/~lalla/373s16/notes/CSAT2SAT.pdf

They explain it in the third sentence, and they told what SAT meant _BEFORE_ they used it for the first time, not half a dense page further down in passing.

Sure, this is putting my ignorance on display, but I'm not going to apologize for not knowing about every subject in the corpus of human knowledge..


A link/title should be informative and useful with no prerequisites.

Personally, I thought this was something to do with satellites.


I'm sorry, but on a tech site full of people working in IT, I genuinely think this is obviously about Boolean Satisfiability. There has to be a minimum standard of expected understanding - e.g. "Rust" on this site is unlikely to be about corrosion.


And as a person working in tech for the past 10 years, I had never heard of Boolean Satisfiability let alone SAT until I read through the comments on the post and the linked Wiki page. I would suggest that this topic is not as universal as you think it is.


Pretty sure if they’d used the term “Boolean satisfiability” even once, we would have been able to work it out.

But SAT could mean a million things and if you’re not using this stuff on a daily basis then it’s simply non obvious.

It’s not the authors fault, they can write what they want. what’s pissing me off is the utterly unhelpful attitude of people responding to questions about wtf the guy is talking about.


>Pretty sure if they’d used the term “Boolean satisfiability”

It is obviously used in the text, just not in the title.

This random digression doesn't add anything to the topic. If the parent of this thread had posted "What is SAT?" instead of the rubbish snark, someone would have replied "Boolean Satisfiability". End of story. When did expressing one's ignorance become cool? The OP (and others) could have grepped the article, or even just smashed a few keys into Google, such as "sat np", and would have got the answer.


Nothing is more ignorant than being willfully ignorant of the fact you aren't making sense to your audience.

"You" in reference to the people posting these things here at HN, not the article author. We aren't the article author's audience, so he has every right to ignore our needs. Whoever posts these things here, though? Their audience is us, and they need to try and make sense of themselves.

Just add "(Boolean SATisfiability)" after "SAT" and everything would have been fine. It's simply good practice to always explain the first use of an acronym anyway.


Articles need to be aimed properly at the audience. Is it beginners or experts?

If it is not for beginners, then why start by explaining Big-O notation and SAT with some pseudocode?

In my opinion, the article should explain what sat stands for because it is aimed at relative beginners.

On a side note: my take on the community here is that in general nobody here wants to be spoon-fed, or to be insulted if they ask what an abbreviation stands for. What they want is properly written articles so that they can investigate further, and a discussion with the community.

There are so many specialists reading HN, and I am amazed at the level of knowledge and experience in the replies sometimes. They don't snobbishly tell users to go read textbooks and not bother them because they are not part of some select inner circle. Instead, they help out and educate.

That is what makes HN great.


This is a perfect use case for hyperlinks.


“hard” is not a word invented by the Optimization community. Lots of difficult topics get posted here that have nothing to do with NP completeness.


You don't know that the article is about complexity theory until you click through it, so the OP's point about the ambiguity stands. The title could be better.


Can we agree title alone is not so clear and maybe (I repeat, maybe) a bit clickbait?


Still, a title of "why boolean satisfiability is hard" would eliminate the confusion without any downsides.


SAT, what? Standardized Aptitude Tests?



