My hunch was motivated by the fact that Frege had already made the very insightful distinction between functional and non-functional types in the 1890s, and that type theory had already been around for a fairly long time (at least since Russell and Whitehead's Principia). In other words, just about everything had already been formalized; someone just needed to come in and say "computer programs are better behaved when data has a type". In the case of Plankalkül, the types of variables are stored as single bits.
Okay … but what's the confusion, exactly?
Wikipedia defines type theory as,
> In type theory, every "term" has a "type" and operations are restricted to terms of a certain type.
That sounds very much like how programming languages use it: terms in expressions in statically typed languages do have a type, and in some languages functions themselves, much like Wikipedia's addOne example, have a type.
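For concreteness, here's a minimal sketch in Rust (the thread doesn't fix a language, and `add_one` is just an illustrative name standing in for Wikipedia's addOne) of a function that itself has a type:

```rust
// The function `add_one` has the type `fn(i32) -> i32`,
// just as its argument and result each have the type `i32`.
fn add_one(n: i32) -> i32 {
    n + 1
}

fn main() {
    // The function is itself a typed value:
    let f: fn(i32) -> i32 = add_one;
    assert_eq!(f(41), 42);
}
```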
Whereas the Wikipedia article for type system says,
> Assigning a data type, termed typing, gives meaning to a sequence of bits such as a value in memory or some object such as a variable.
This is, I feel, a better definition. Or at least, it gets at what programmers are concerned about: the meaning of a sequence of bits, and from that meaning, the set of valid possible values and operations, and, more importantly, the fact that not all values (of differing types) are valid in all operations. I would argue that the bits themselves aren't really involved, since there are multiple ways to store the same set of values; Unicode strings, for example, can be stored in a number of different bit forms.
Bit patterns don't have to matter. There are languages like Haskell, which has a type-theoretic type system and specifies no bit patterns at all. There are also languages like C, which has a non-type-theoretic type system and plenty to say about bits, and languages like ANS FORTH, which has no type system at all and can freely reinterpret bit patterns.
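As a sketch of what "reinterpreting bit patterns" looks like, here's Rust, which (unlike Haskell) does pin down its representations: the same 32 bits can be read as a float or as an integer, and it's the type at which they're read, not the bits themselves, that supplies the meaning:

```rust
fn main() {
    let x: f32 = 1.0;
    // Reinterpret the float's bit pattern as an unsigned integer.
    let bits: u32 = x.to_bits();
    assert_eq!(bits, 0x3F80_0000); // the IEEE 754 encoding of 1.0
    // The same bits round-trip back to the float. The bits alone don't
    // determine the meaning; the type they're read at does.
    assert_eq!(f32::from_bits(bits), 1.0);
}
```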
There are also languages with features generally referred to as type systems, which fit Pierce's definition, but which are not only non-type-theoretic: they don't resemble any kind of system that discriminates values based on membership in a set. For example, Pony and Rust both have syntactic forms for attaching memory semantics to values, and their type-checking routines check those memory semantics obliviously to the bits being stored. Pony can additionally check certain causality properties. Monte has auditors: syntactically-specified objects which enforce semantic properties of other objects by examining their ASTs.
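A rough sketch of the Rust half of that claim: the borrow checker accepts or rejects code based on aliasing and lifetimes, never on which bits are stored.

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    let first = &v[0];      // shared borrow of `v`
    // v.push(4);           // rejected at compile time: `v` can't be mutated
    //                      // while `first` is live; no bit pattern is consulted
    assert_eq!(*first, 1);  // last use of `first`; the shared borrow ends here
    v.push(4);              // now the mutation type-checks fine
    assert_eq!(v, vec![1, 2, 3, 4]);
}
```

The check is purely about *when* a value may be aliased or mutated, which is exactly the kind of property that has nothing to do with set membership of values.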
In summary, "type system" means what Pierce says it means: there is syntax, the syntax describes abstract properties of the program, the properties can be checked by syntax-guided computation, and programs can be rejected as invalid if the check fails.
What do you mean by "type theoretic"? Once you have a formal system where your objects are divided into sorts and your syntactic expressions are constrained by those sorts such that expressions may be ill-formed when the sorts are mismatched (by some definition), then you have a type theory on your hands.
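A minimal example of "expressions may be ill-formed when the sorts are mismatched", sketched in Rust:

```rust
fn main() {
    let n: i32 = 2;
    let s: &str = "two";
    // let bad = n + s; // ill-formed: `+` isn't defined between the sorts
    //                  // i32 and &str, so the expression is rejected
    //                  // before it is ever given a meaning
    let ok = n + 3;      // well-formed: both operands have the same sort
    assert_eq!(ok, 5);
    assert_eq!(s.len(), 3);
}
```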
> the properties can be checked by syntax-guided computation
Not all type systems can be checked by computation (some are undecidable).
The reason for the infamous undefinedness of signed integer overflow in C is precisely that the standard tries to stay agnostic about the encoding of signed integers (which mostly comes down to two's complement vs. sign-and-magnitude).
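For contrast, here's a sketch in Rust, which fixes the encoding to two's complement and can therefore give overflow a defined meaning instead of leaving it undefined:

```rust
fn main() {
    // Rust mandates two's-complement integers, so it can specify what
    // overflow does rather than making it undefined as C does:
    assert_eq!(i32::MAX.wrapping_add(1), i32::MIN); // explicit two's-complement wrap
    assert_eq!(i32::MAX.checked_add(1), None);      // or explicit overflow detection
}
```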
Btw, this reminds me of a fascinating paper on the subject which "rather than interpreting type systems as categories, [it] describe them as functors from a category of typing derivations to a category of underlying terms. Then, turning this around, [it] explain how in fact any functor gives rise to a generalized type system, with an abstract notion of typing judgment, typing derivations and typing rules."
The most interesting part is that both usages involve the same kinds of types. That's probably the source of the confusion.