For some reason you're willing to drop the context.
He's talking about people coming through the border without being vetted, who are dangerous to the country.
Do you think that people should be able to come into the country freely without any vetting at all?
He was pointing out a specific problem; maybe your imagination turned that into the much more general "all immigrants are bad" when he's never actually said or suggested anything of the kind?
> If you say "they're eating the dogs, they're eating the cats, we have to clean up our country" then... yeah you're getting pulled into HR.
Trump was just repeating the same thing that many other people heard: a resident at a town council meeting in Springfield, Ohio claimed that a group of migrants there had eaten someone's pets.
Was it true? Who knows? People tend to repeat a lot of things they hear but haven't verified themselves. For example in that same debate, David Muir falsely repeated the lie that crime was down nationally. Do we take David Muir to HR?
That kind of obvious logical inconsistency, coupled with the assumption of moral authority (which for whatever reason seems to be the subtext for so many of these conversations), rubs a lot of people the wrong way.
So then Trump didn't do his own research and will pretty much just believe anything? Wow, it sounds like you think Trump is an idiot.
I don't understand this line of thinking from conservatives. When Trump is criticized for things he himself has said, people will argue we shouldn't listen to Trump because he is stupid and he lies. So he's stupid and he lies... and yet you support him?
Either he is a strong leader, in which case we should take what he says at face value, or he is a fraudster, in which case we should expect misinformation. You can't have both. Talking down about the person you support reflects very poorly on you.
> David Muir falsely repeated the lie that crime was down nationally
Crime IS down nationally.
And you understand this isn't the type of thing that would get you fired from a job?
While people on the left may often be wrong, they aren't wrong in a way that can be interpreted as racism, sexism, or homophobia. This won't get you sent to HR - and that's the difference.
Conservatives have a really hard time supporting their positions without using tools which are unacceptable in a professional environment. If you don't support immigration, then great! Now defend that without racist rhetoric. Trump can't do it, so if that's your role model then you'd better find someone else.
Eh, maybe he shouldn't have been impeached for that call. President Biden's son had a strangely lucrative position, which he appeared not to be qualified for. And Biden was very involved in pressuring the Ukraine government to fire a prosecutor who was investigating that same energy company. There's a lot of public corruption in Ukraine, which was one of the factors leading to the election of their current president (according to what I've heard anyway).
This doesn't mean that Biden is definitely corrupt, but it does look very suspicious, and seems worthwhile to investigate. Our country is sending a lot of money to Ukraine. We deserve to know everything here.
> A joint investigation by two Republican Senate committees released in September 2020 found no evidence of wrongdoing by Joe Biden. A sweeping Republican House committee investigation of the Biden family has found no wrongdoing by December 2023.
And sure, it’s weird Hunter was involved, but it’s also weird that the guy who brags about being rich still won’t show us his tax returns, despite it being something every other president has done. He’s gone so far as to make up stories about the IRS while at the same time saying he’ll release them when he can. Joe released his tax returns; it’s all on the table.
Joe appears to be held to a much higher standard than Trump. Like… if Joe asked Kamala to overturn the election results like Trump did to Pence, the Republicans would be outraged and try to bar him from ever being president again. Not that he would, because he’s a good person at heart, and for that he’ll get no praise, because it’s obvious not to do that and we don’t give out brownie points for abiding by common sense.
It’s only bad when the other guy does it, which is also why they latched onto Joe’s “garbage” comment even though Trump has called his opponents trash, vermin, sick people, and the enemy within, has encouraged his supporters to call them satan worshippers, and gets a free pass. Oh, snowflake dems hate being called enemies of the state. He doesn’t mean it, he’s just trolling with the abuse of presidential powers like any reputable statesman would.
But we’re so inundated by the constant flood of news that one scandal replaces another, and it’s hard to remember all the ones that came before. We’ve grown numb to it. At this point I’ll just be happy if we make it to the next election in one piece.
> And sure, it’s weird Hunter was involved but it’s also weird the guy who brags about being rich still won’t show us his tax returns […]
You’re so quick to drop the question of the Biden family’s involvement in Ukraine and pivot to Trump’s tax returns, but the billions of our tax dollars and the lives lost in Ukraine now make that a MUCH more important issue than Trump’s tax returns.
Congressional investigations blah blah blah, obviously they can’t get anywhere. Trump was impeached for trying to get information right from the source, and it was very stupid. We should get that information, hopefully he takes another shot at it. We’ll see what happens.
I made something like this for Morgan Stanley some years ago: a structurally typed, eager variant of Haskell with static elimination of type class constraints (so no runtime penalty for overloading), using LLVM for the back end:
http://github.com/morgan-stanley/hobbes/
We used it for processing and analyzing billions of events per day. Using structural algebraic types let us encode market data (obviously) but also complex data structures inside trading systems that we could correlate with market data.
As you say, Haskell-ish expressions are much more compact and capable than SQL, which was one of the reasons I wanted to build it.
It also had a somewhat novel compression method (“types as probabilities” if you like the old Curry-Howard view), which was very important to manage billions of daily events.
> if you want to claim "forall x in X, P(x) is true" then you need to exhibit a particular element of x for which P holds
I don’t mean to be pedantic (although it’s in keeping with constructivism), but in the case you describe, you don’t have to provide a particular x; rather, you have to provide a function mapping each x in X to a proof of P(x). It may very well be that X is uninhabited, but this is still a valid constructive proof (anything follows from nothing, after all).
If instead of “for all” you’d said “there exists”, then yes constructivism requires that you deliver the goods you’ve promised.
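In Lean 4 terms this is a one-liner (a minimal sketch, using `Empty` as the uninhabited domain):

```lean
-- A constructive proof of "∀ x ∈ X, P x" is a function from X to proofs of P x.
-- If X is uninhabited, the function is vacuous but still a perfectly valid proof.
example (P : Empty → Prop) : ∀ x : Empty, P x :=
  fun x => nomatch x

-- "∃" is the case where constructivism demands the goods: a concrete witness.
example : ∃ n : Nat, n > 2 :=
  ⟨3, by decide⟩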
I’ve always wondered why SQL doesn’t support variant types, and in this case especially.
If you’re going to store a sequence of values whose type can change over time, a variant is the obvious encoding choice. As new schemas are introduced, you add constructors to the variant.
On top of that, you can apply some basic structural reasoning to say, e.g., that a field can be indexed (possibly by different paths in each case) iff it exists in every constructor, or optionally, with the awkward SQL NULLs, when a field is not in every constructor (as they describe in the paper).
But then you could also use general variant deconstruction to decide, on a case-by-case basis, how to handle each version or subset of versions of a schema. And having that capability to represent variants in user data, independent of “evolving schemas,” could have significant general use as well.
If you feel like having tuples and variants is too complicated, use dependent sigma types, so they’re all tuples, and then your schema language is much more expressive still.
It’s weird how much database systems have stagnated on this front.
Adding variant cases effectively changes the type of a column, breaking existing clients. So you might as well migrate the existing data to a consistent representation.
Independently of schema evolution, a sum type can be implemented by using different nullable columns, and adding a multi-column CHECK constraint that (one and) only one of those columns must be set. One might want more syntactic sugar in SQL for that, but it would also complicate the language and data model, without really adding any new functionality.
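As a concrete sketch of that encoding (table and column names invented for illustration, exercised here through SQLite):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# A two-case sum type (numeric payload | text payload) encoded as two
# nullable columns, with a table-level CHECK that exactly one is set.
# "IS NULL" evaluates to 0/1, so the sum counts the NULL columns.
con.execute("""
    CREATE TABLE event (
        id          INTEGER PRIMARY KEY,
        num_payload INTEGER,
        txt_payload TEXT,
        CHECK ((num_payload IS NULL) + (txt_payload IS NULL) = 1)
    )
""")
con.execute("INSERT INTO event (num_payload) VALUES (42)")       # ok
con.execute("INSERT INTO event (txt_payload) VALUES ('hello')")  # ok
try:
    # setting both cases at once violates the sum-type invariant
    con.execute("INSERT INTO event (num_payload, txt_payload) VALUES (1, 'x')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

The same constraint also rejects the all-NULL row, so every row carries exactly one case, which is the sum-type invariant.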
I hear different opinions about whether type checking is "an" instance of abstract interpretation (AI) or "the" instance (which IMHO is a useful question since we might consolidate our efforts on one method if possible).
This article does a very good job of laying out typical examples motivating AI as a concept.
But the process of attributing points in the AI lattice to expressions does look a lot like type inference (and the AI lattice itself does look a lot like a type hierarchy).
Does this point to a slightly different approach we should take to type systems? For example, suppose the type of '1' was not 'int' but rather '1' (a type runtime-equivalent to unit, but carrying its information at compile time), and that 'add' is overloaded on input types so that it can either directly compute its result (also then statically knowable) or emit instructions to compute its result dynamically if its arguments are only dynamically known.
Is that the meta lesson here? Should we be using more detailed/nuanced type systems?
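A toy sketch of that overloaded 'add' (names made up, not a real compiler): the abstract value of a literal is the literal itself, and addition folds whenever both arguments are statically known:

```python
# Toy constant-propagation domain: an abstract value is either a known
# constant (the literal itself, like the singleton type '1' above) or
# UNKNOWN, the lattice element for "some int, known only at runtime".
UNKNOWN = object()

def abs_add(a, b):
    if a is UNKNOWN or b is UNKNOWN:
        return UNKNOWN   # a real compiler would emit an add instruction here
    return a + b         # both statically known: fold at compile time

print(abs_add(1, 2))                   # folds statically
print(abs_add(1, UNKNOWN) is UNKNOWN)  # left for runtime
```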
If you're talking about type systems, you typically have one per language: a language has a certain type system. A different language might have a different one. If different implementations of the same language have different type systems, they tend to be called different dialects, if not different languages altogether.
But as noted in a sibling comment by 'chc4, one might want different AI domains for the same language, either all at the same time, or in different compiler passes, or only if the optimiser is enabled, etc. While AI might look like a type system, and perhaps the algorithms are sort-of similar, at the very least the two are used differently.
EDIT: There is prior art on doing type systems and further analyses simultaneously; these are sometimes called "type and effect systems": https://en.wikipedia.org/wiki/Effect_system
Contrary to what wikipedia seems to claim, such systems need not actually talk about side-effects only; one can also formulate provenance analysis in a programming language (e.g. which lambda can end up in this function-typed variable?) as a type-and-effect system.
FWIW, the Patrick Cousot* perspective on abstract interpretation is that literally everything is an application of abstract interpretation. It's kind of a meme in the PL community at this point, but he's really all-in on it and even begins his course with a self-deprecating joke about this viewpoint.
*Patrick Cousot and his late wife, Radhia, are credited with inventing abstract interpretation.
my last job was working on [pytype](https://github.com/google/pytype), which uses abstract interpretation to do static type inference and checking for python. we used the cpython compiler to convert a program to bytecode, and then ran the bytecode through a VM where the abstract values were types. it worked extremely well, and could even typecheck completely unannotated code, though of course with less precision than when the user supplied some types.
extending that to runtime JIT compilation is an interesting idea; i'm not sure if any of the current JIT systems do that, but i don't see any reason it wouldn't be a useful technique, so they likely do.
What? There is no "the" instance of abstract interpretation, because different uses of it will have different abstract domains they are interpreting over and different transfer functions. There's no reason to choose types as a more privileged instantiation than odd/even, and choosing odd/even as a "case" of a type doesn't make more sense than choosing types as a "case" of odd/even.
no shade at you (i like you and your work and blog) but
> type, known bits, known range
these are all the same thing.
i was thinking about this question in the shower (before responding) and i'm pretty sure type inference (for whatever purpose, like deducing the type, refining the type, etc) is the only use-case of abstract interpretation. if you look at cousot's webpage (i haven't read the original papers) you see lots of cute pics of program trajectories, but no one actually uses abstract interpretation for control-flow tracking - the lattice is only ever updated at a type boundary.
prove me wrong (like i could be wrong) but i'm reflecting on implementations in llvm/mlir/pytype ie mature/real use-cases (i have not poked around in pypy's impl).
They're related, but they're not the same thing: E.g. when analysing a JS-like language and you see "x = a + b" with raw memory content a = {0x1,0x2}, b = {0x3} you get x = {0x4,0x5,0x13,0x23} - because you can't tell if that + is a string concatenation or numerical addition.
Of course when analysing program code, you might want to carry type information with you. The math behind AI allows combining basically any domains into a single domain without losing the nice properties of the Galois connection. But usually you get away with tracking multiple domains simultaneously, since that's also a lot easier from an engineering perspective (someone has to understand and maintain that code for a few decades).
Edit: I recommend following funcDropShadow's advice.
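The `x = a + b` example above can be sketched concretely: interpreting over sets of possible values, the overloaded `+` forces the transfer function to include both the numeric and the concatenation outcome (a toy sketch, treating each value as a hex literal):

```python
def abstract_add(xs, ys):
    """Transfer function for an overloaded '+': lacking type information,
    each pair of values may combine numerically or as concatenated literals."""
    out = set()
    for x in xs:
        for y in ys:
            out.add(x + y)                   # numeric addition
            out.add(int(f"{x:x}{y:x}", 16))  # hex-literal concatenation
    return out

print(abstract_add({0x1, 0x2}, {0x3}))  # {0x4, 0x5, 0x13, 0x23}
```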
I'm not sure what you mean that they're the same thing. They're different domains with different transfer functions. There's even a recent ish paper exploring how they can be used to inform one another. Can you say more?
>I'm not sure what you mean that they're the same thing
they're all components of the type; i have no idea what pypy does with "known bits", but deducing the "range" of a value is firmly/unanimously known as "type refinement" (see https://en.wikipedia.org/wiki/Refinement_type), i.e. still a part/aspect/dimension of the type, not of the value.
that's not what the other question/poster/person is implying - everything can already be modeled as a type system, because type systems (at least some of them) are turing complete. the other question is asking whether there's any application of abstract interpretation other than type checking/type inference. to which i say no, or at least none that i've ever seen, i.e. no one ever does anything with it concerning e.g. control flow. it's always only about types.
edit: i was about to say "for example no one ever uses abstract interpretation for escape analysis" and then i wanted to double check and lo and behold https://dl.acm.org/doi/abs/10.1145/945885.945886. so i guess i'm wrong (and the other original question can be answered in the negative) but i would still argue that it's true in practice. the reason is obvious - the space/lattice of possible types of a program is huge, the space of states is .......... busy beaver scale.
There’s forward and backward propagation. Backward propagation is generally less about the type of a variable, but it’s still a form of abstract interpretation. For example, liveness (or deadness), usedness, and reachability analyses are not really type information, as they tend to depend on the control flow after a point.
There are other analyses that aren’t type-like as such, for example available expressions.
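A minimal sketch of one such backward analysis, liveness, over a straight-line program (the (defs, uses) representation is invented for illustration):

```python
def liveness(stmts):
    """Backward liveness: stmts is a list of (defs, uses) sets in program
    order; returns the set of variables live before each statement."""
    live = set()                       # nothing is live after the exit
    live_before = [None] * len(stmts)
    for i in range(len(stmts) - 1, -1, -1):
        defs, uses = stmts[i]
        live = (live - defs) | uses    # kill definitions, then add uses
        live_before[i] = set(live)
    return live_before

# x = 1; y = x + 1; print(y)
prog = [({"x"}, set()), ({"y"}, {"x"}), (set(), {"y"})]
print(liveness(prog))  # [set(), {'x'}, {'y'}]
```

Note the result depends only on what happens *after* each point, which is why this doesn't read naturally as a type of the variable.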
You might as well express escape analysis as inferring an "escaping reference" or "non-escaping reference" type for each allocation site. Then you're back to "abstract interpretation is type inference". But wanting to make this statement true also makes it meaningless. Tautological-by-definition statements aren't interesting.
Abstract interpretation can do constant folding and branch elision. It changes the program instance. Type checking is about invariants over a specific program, not changing the program itself.
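For instance (a toy sketch with made-up names): once a condition's abstract value is a known constant, the dead arm can be dropped, which rewrites the program rather than just checking an invariant about it:

```python
UNKNOWN = object()   # condition not statically known

def elide_branch(cond_abs, then_stmts, else_stmts):
    """If the condition's abstract value is a known constant, keep only
    the reachable arm; otherwise keep the whole conditional."""
    if cond_abs is UNKNOWN:
        return [("if", then_stmts, else_stmts)]
    return then_stmts if cond_abs else else_stmts

print(elide_branch(True, ["a"], ["b"]))     # only the then-arm survives
print(elide_branch(UNKNOWN, ["a"], ["b"]))  # conditional kept intact
```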
i suppose you can probably do anything with dependent types, but i'm not sure this is a useful perspective. i commend you to read my comments on the red website https://lobste.rs/s/xkcrvn/
(i do think it is a valid question whether abstract interpretation is a good idea)