"one of the myths concerning LISP that people think up or invent for themselves becomes apparent, and that is that LISP is somehow a realization of the lambda calculus, or that was the intention. The truth is that I didn't understand the lambda calculus, really" - John McCarthy
So there are two issues here: 1) whether or not it was McCarthy's intention to realize the Lambda Calculus in LISP, and 2) whether or not LISP is such a realization, or at least some kind of close realization.
The answer to 1 is clearly no. This doesn't imply an answer to 2 one way or another.
If 2 isn't true, what explains the widespread belief? Is it really just that he, McCarthy, borrowed some notation?
My understanding is that lexical scope was first implemented in Algol and Pascal, and was first combined with true garbage collection in Scheme. (Hence the restriction in Algol and Pascal that closures existed, but they could only be passed into functions, never returned from them. That way the variables being closed over could live on the stack.)
But I'd love to learn that I'm wrong and that these things came before that.
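To make the stack restriction concrete, here is the "upward funarg" case that Algol and Pascal disallowed, sketched in Python (where closed-over variables live on the heap, so returning the closure just works):

```python
def make_counter(start):
    # 'count' must outlive this call frame, so it cannot live on a
    # stack; Python allocates the closed-over cell on the heap.
    count = start

    def counter():
        nonlocal count
        count += 1
        return count

    return counter  # returning the closure: the "upward funarg"

c = make_counter(10)
```

If `count` lived in `make_counter`'s stack frame, it would be gone by the time `c` is called; that is exactly what the pass-in-only restriction avoided.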
> The concept of closures was developed in the 1960s for the mechanical evaluation of expressions in the λ-calculus and was first fully implemented in 1970 as a language feature in the PAL programming language to support lexically scoped first-class functions.
> PAL, the Pedagogic Algorithmic Language, is a programming language developed at the Massachusetts Institute of Technology in around 1967 to help teach programming language semantics and design. It is a "direct descendant" of ISWIM and owes much of its philosophy to Christopher Strachey.
Looking at the "Universal LISP function" on page 13 in , the case for apply/LAMBDA just extends the current environment a with the arguments of the lambda, but it doesn't unpack a closure to get the environment the lambda function was defined in, so it implements the dynamic version. (Unlike, e.g., the interpreter in SICP .)
Googling a bit, it seems that McCarthy mentions this in "History of Lisp", as a change from LISP 1 to LISP 1.5:
> In modern terminology, lexical scoping was wanted, and dynamic scoping was obtained. I must confess that I regarded this difficulty as just a bug and expressed confidence that Steve Russell would soon fix it. He did fix it but by inventing the so-called FUNARG device that took the lexical environment along with the functional argument. Similar difficulties later showed up in Algol 60, and Russell's turned out to be one of the more comprehensive solutions to the problem. While it worked well in the interpreter, comprehensiveness and speed seem to be opposed in compiled code, and this led to a succession of compromises. Unfortunately, time did not permit writing an appendix giving the history of the problem, and the interested reader is referred to (Moses 1970) as a place to start. (David Park tells me that Patrick Fischer also had a hand in developing the FUNARG device).
So I guess closures had been invented at least by 1962 (when the LISP 1.5 manual was published), but were not widely used because of performance considerations?
Common LISP is lexically scoped, though it does still have opt-in dynamic scoping ("special variables").
Since Emacs 24.1 (had to look up the version), it's been possible to enable lexical scope for a file or buffer. The default is still dynamic scope, and dynamic scope can still be obtained under certain conditions even after enabling lexical scope.
"With lexical scope, a name always refers to its (more or less) local lexical environment. This is a property of the program text..."
So since the local lexical environment is a property of the program text, lexical scope is explicit in the language's syntax.
This would fit in with Graham's suggestion that McCarthy more "discovered" Lisp than "invented" it.
(a) In classic lambda calculus, everything is a lambda term. McCarthy's Lisp has primitives like lists and numbers. However, it is known that lambda calculus is powerful enough to encode these things as lambda terms (for example,
null = (lambda (n c) (n))
(cons a b) = (lambda (n c) (c a b))
car = (lambda (lst)
        (lst (lambda () (error "car: not cons"))
             (lambda (a b) a)))
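The same encoding can be sketched in Python to check that it behaves like lists (the names `null`, `cons`, `car`, `cdr` and the helper `_err` follow the text; this is a Scott-style encoding, not a literal lambda-calculus term):

```python
def _err(msg):
    raise ValueError(msg)

# A list is a function of two handlers: one for the nil case,
# one for the cons case.
null = lambda n, c: n()
cons = lambda a, b: (lambda n, c: c(a, b))

# car/cdr dispatch on the list by calling it with both handlers.
car = lambda lst: lst(lambda: _err("car: not cons"), lambda a, b: a)
cdr = lambda lst: lst(lambda: _err("cdr: not cons"), lambda a, b: b)

xs = cons(1, cons(2, null))
```

Here `car(xs)` is `1` and `car(cdr(xs))` is `2`, while `car(null)` invokes the error handler.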
(b) Lambda calculus does not have any evaluation rules. Rather, it is like algebra where you can try to normalize an expression if you wish, but the point is that some lambda terms are equivalent to others based on some simple rules that model abstract properties of function compositions. Lambda-calculus-based programming languages choose some evaluation rule, but there is no guarantee of convergence: there might be two programs that lambda calculus says are formally equivalent, but one might terminate while the other might not. Depending on how you're feeling, you might say that no PL for a computer can ever realize the lambda calculus, but more pragmatically we can say most languages use lambda calculus with a strict evaluation strategy.
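A tiny Python illustration of the convergence point: `K 42 Ω` is formally equal to `42` in the calculus, yet a strict evaluator diverges on it while call-by-name does not (the modelling here, passing the function unevaluated as a thunk, is my own):

```python
# Ω = (λx. x x)(λx. x x) reduces to itself forever; modelled here as a
# self-calling Python function that a strict evaluator would descend
# into without ever producing a value.
def omega():
    return omega()

# K = λx.λy.x discards its second argument.
K = lambda x: lambda y: x

# Strict evaluation of K(42)(omega()) would diverge, because omega()
# is evaluated first. Passing omega itself, unforced (call-by-name),
# terminates, since K never uses its second argument.
result = K(42)(omega)
```

So the two terms are "equivalent" by the rules of the calculus, but only one evaluation strategy reaches the answer.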
(c) The lambda terms in lambda calculus are not inspectable objects, but more just a sequence of symbols. Perhaps one of the innovations of McCarthy is that lambda terms can be represented using lists, and the evaluator can be written as a list processor (much better than Godel numbering!). In any case, the fact that terms have the ability to evaluate representations of terms within the context of the eval makes things a little different. It's also not too hard to construct a lambda evaluator in the lambda calculus, but you don't have the "level collapse" of Lisp.
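McCarthy's move can be sketched directly: terms are nested lists, and the evaluator is an ordinary list processor. A minimal Python version (the representation and names are my own, and it implements the lexical variant):

```python
# Terms as nested lists: a variable is a string, an abstraction is
# ["lam", param, body], and an application is ["app", f, a].
def evaluate(term, env):
    if isinstance(term, str):                # variable: look it up
        return env[term]
    tag = term[0]
    if tag == "lam":                         # abstraction: build a closure
        _, param, body = term
        return ("closure", param, body, env)
    if tag == "app":                         # application: eval both, extend env
        _, fun, arg = term
        _, param, body, cenv = evaluate(fun, env)
        return evaluate(body, {**cenv, param: evaluate(arg, env)})
    raise ValueError(f"bad term: {term!r}")

# ((λx.λy.x) a) b, with a=1 and b=2 in the initial environment
term = ["app", ["app", ["lam", "x", ["lam", "y", "x"]], "a"], "b"]
value = evaluate(term, {"a": 1, "b": 2})
```

Because terms are plain lists, a program can build, inspect, and evaluate representations of terms; that is the "level collapse" the comment describes.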
(d) In lambda calculus, one way to model function application is that you immediately substitute in arguments wherever that parameter is used in the function body. Shadowing is dealt with using a convention in PL known as lexical scoping, and an efficient implementation uses a linked list of environments. In the original Lisp, there was a stack of variable bindings instead, leading to something that is now known as dynamic scoping, which gives different results from the immediate substitution model. Pretty much everything fun you can do with the lambda calculus depends on having lexical scoping.
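The difference is easy to see side by side. A Python sketch (Python itself is lexically scoped; the dynamic variant is simulated with an explicit binding stack, as in early Lisp):

```python
# Lexical: a function's free variables refer to the environment
# where the function was *defined*.
def make_lexical():
    x = "definition site"
    return lambda: x

f = make_lexical()
x = "call site"
lexical_result = f()          # still sees the definition-site x

# Dynamic (simulated): free variables refer to whatever binding is
# live at the *call*, newest binding first.
bindings = [("x", "definition site")]

def dyn_lookup(name):
    for n, v in reversed(bindings):
        if n == name:
            return v
    raise NameError(name)

g = lambda: dyn_lookup("x")
bindings.append(("x", "call site"))   # the caller rebinds x
dynamic_result = g()                  # sees the call-site x
```

Only the lexical version agrees with the substitution model, which is why the fun lambda-calculus tricks depend on it.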
All this said, the widespread belief about Lisp being the lambda calculus probably comes from Scheme, which was intentionally lambda calculus with a strict evaluation model. Steele and Sussman were learning about actors for AI research, and I think it was Sussman (a former logician) who suggested that their planning language Schemer (truncated to Scheme) ought to have real lambdas. At some point, they realized actors and lambdas (with mutable environments) had the exact same implementation. This led to "Scheme: An Interpreter for Extended Lambda Calculus" (1975) and the "Lambda the ultimate something" papers. Later, many of these ideas were backported to Lisp during the standardization of Common Lisp.
See the following for the current state of the art including the latest Actor approach to Eval, which is more modular and concurrent than the Eval in Lisp and Scheme:
The above article explains exactly how Actors are much more powerful than lambdas with mutable environments.
While a lot of people are trying to defend the lambda calculus as a basis, I think this actually undersells the significance of LISP. Apart from Lisp the language family and its implementations, there is Lisp, (arguably) the first practically realizable mathematical model of computation. That is, it stands on its own as a model for computation†, continuing along a long line of which I think Grassmann's 1861 work on arithmetic and induction is a good starting point.
Turing Machines are intuitive and the lambda calculus is subtle and expressive, but Lisp's contribution was to place partial recursive functions on a more intuitive/realizable basis, in terms of simple building blocks: partial functions, predicates, conditional expressions, and symbolic expressions (ordered pairs/lists of atomic symbols). Lambdas come in as a notation for functions, with a modification to facilitate recursive definitions.
†Making Greenspun's Tenth Rule trivially true.
This is probably the most informative comment that I've read on HN in the last couple of months.
John McCarthy said that he never had the intention to realize the lambda calculus, but he followed that statement with the corollary that had someone "started out with that intention, he might have ended with something like LISP." Peter Landin was a pioneer in that regard. See "The Mechanical Evaluation of Expressions", published in 1964, and the SECD virtual machine. Machine interpreters like SECD and CEK may come close to a "realization" of the lambda calculus. Their design is directly inspired by its semantics. You don't necessarily end up with something like LISP, but you can, see Lispkit and LispMe.
Moreover, the lambda calculus is confluent, so if you find a terminating reduction sequence, you're guaranteed that all other terminating sequences end up with the same result.
So as long as your PL uses normal-order evaluation or lazy evaluation, you can fully realize any equivalences of the lambda calculus.
OO says everything is an object. Even though Java has non-object primitives, we're still gonna classify Java as OO.
> Lambda calculus does not have any evaluation rules.
> The lambda terms in lambda calculus are not inspectable objects, but more just a sequence of symbols.
It's not clear to me why this makes Lisp not in the family of Lambda implementations.
> In the original Lisp, there was a stack of variable bindings instead, leading to something that is now known as dynamic scoping.
That's true. Every modern Lisp (Scheme, Clojure, Racket) has lexical scoping. And Common LISP uses lexical scoping by default.
> Later, many of these ideas were backported to Lisp during the standardization of Common Lisp.
Again this contributes to the notion that LISP/Scheme/Lambda Calculus were "discovered", not that Lambda calculus has an explicit pedigree.
> It's not clear to me why this makes Lisp not in the family of Lambda implementations.
To be clear, I started my comment by writing "if it is a realization, then it is one with [the following differences]." Lambda calculus was such a good idea that pretty much anything with function abstractions can be described by some variation of it. It's the dynamic scoping that causes the main issues here, though, and suggests lambda calculus was not a significant motivation in the definition of McCarthy's Lisp. Yet, he was still aware of it enough to call the abstraction operator "lambda."
>> Later, many of these ideas were backported to Lisp during the standardization of Common Lisp.
> Again this contributes to the notion that LISP/Scheme/Lambda Calculus were "discovered", not that Lambda calculus has an explicit pedigree.
I don't see how that follows. Sussman was a math undergrad and PhD and was well aware of developments in logic, and he influenced Steele, who created the quite-influential Scheme and went on to be one of the main people on the standardization committee for Common Lisp. This isn't even mentioning all the work people have done in PL research with typed lambda calculi (going back to corrections to Church's attempt to use lambda calculus as a foundation for mathematics), which has influenced the designs of many type systems in modern programming languages.
That notion is wrong (at least with a very high likelihood), and it's usually stated by people who fetishize the lambda calculus but know little of its long evolution. It's just your ordinary case (of hubris) where people aesthetically drawn to something describe it as inevitable or even a law of nature. And I know it's wrong in part because of the following quote:
> We do not attach any character of uniqueness or absolute truth to any particular system of logic. The entities of formal logic are abstractions, invented because of their use in describing and systematizing facts of experience or observation, and their properties, determined in rough outline by this intended use, depend for their exact character on the arbitrary choice of the inventor.
This quote is by the American logician Alonzo Church (1903-1995) in his 1932 paper, A Set of Postulates for the Foundation of Logic, and it appears as an introduction to the invention Church first described in that paper: the (untyped) lambda calculus.
The simpler explanation, which has the added benefit of also being true, or at least supported by plentiful evidence, is that the lambda calculus was invented as a step in a long line of research, tradition and aesthetics, and so others exposed to it could have (and did) invent similar things.
If you're interested in the real history of the evolution of formal logic and computation (and algebra) you can find the above quote, and many others, in a 300-page anthology of (mostly) primary sources that I composed about a year and a half ago. They describe the meticulous, intentional invention of various formalisms over the centuries, as well as aesthetic concerns that have led some to prefer one formalism over another.
Actually, in that paper, what would become the lambda calculus is presented as the proof calculus for a logic that was later proven unsound. The calculus itself was then extracted and used in Church's more famous 1936 paper, An Unsolvable Problem of Elementary Number Theory, in an almost-successful attempt to describe the essence of computation. That feat was finally achieved by Turing a few months later.
Lambda calculus originated from research in formal logic, which is about manipulating symbols according to precise rules that would capture reasoning. It is a compelling way to combine variable binding, equality and substitution into a model of "function calls" - even if the purpose was to formalize arithmetic computation and reasoning.
At some level, reasoning is what programming is about as well! The notation and rules may change, but ultimately we want to make the machines do things and at some level, we need abstraction mechanisms. Recursive procedures are such a mechanism and it can be expressed as a lambda term that involves a fixed-point combinator, or machine code.
It is easy to model and understand many things using lambda calculus or functional programming techniques, depending on whether the interest is theoretical/formal or practical.
To quote Peter Landin (heavily influenced by McCarthy and LISP and author of 'The next 700 programming languages'):
> A possible first step in the research program is 1700 doctoral theses called "A Correspondence between x and Church's λ-notation."
Maybe people think this was different in the late 1950s?
Let's read McCarthy's paper 'Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I', where he explicitly cites Church and introduces lambda notation.
I would consider that paper part of the phenomenon that is LISP, would that not settle the question? Lambda calculus gives little guidance in terms of implementation, but I think it does not diminish LISP in any way that it should be "based" on lambda calculus.
And I do not find the linked article adds any value, but I am very glad to read the HN discussion to find gems like the above (even if I should rather have slept for the past few hours).
Turing's thesis talks about some system transforming an input to an output. Clearly, a TM could simulate the actor itself in your proof. If it is not able to simulate the entire actor-collaborator system, that's only because you may have given the collaborator (whatever it is that generates the messages) super-Turing powers. You assumed that there could be something that could issue a `stop` after an arbitrary number of `go`'s, but you haven't established that such a mechanism could actually exist, and that's where the super-Turing computation actually hides: in a collaborator whose existence you have not established. As you have not established the existence of the collaborator, you have not established the existence of your actor-collaborator system. I claim that a TM cannot simulate it simply because it cannot exist (not as you describe it, at least).
So here's another "proof": The actor machine takes two messages, Q and A(Bool), and it gets them alternately, always Q followed by A. Every time it gets a Q, it increments a counter (initialized to zero) by 1 to the value N, and emits a string corresponding to the Nth Turing machine. It then gets a message A containing a value telling it whether the Nth TM terminates on an empty tape, and in response it emits A's argument back. And here you have an actor machine that decides halting!
As you now describe it, it cannot be implemented (physically realized) at all. Or, conversely, any physically implementable refinement of it (which will not exhibit the entire range of behaviors) will be simulatable by a TM.
There are many abstract machines that cannot be implemented by TMs -- e.g. various oracle TMs. There is nothing special, surprising or new about that.
There are many formalisms that are more or less convenient abstractions for various kinds of systems. There is nothing special, surprising or new about that, either. In fact, some formalisms that can describe non-computable behaviors are commonly used to specify either software or hardware as they're convenient (like Lamport's TLA).
But you're making a claim about the Church-Turing thesis, which, as commonly interpreted today (as the physical Church-Turing thesis), is the claim that any mechanism that can be implemented by a physical mechanism can be simulated by a TM. Unless you show how to build a physical system that cannot be simulated by a TM, your claim is no refutation of the thesis; it has nothing to do with it. Your claim that arbiters in digital circuits cannot be simulated has not been established and is not recognized by scientific consensus.
> However, just as there can be an arbitrarily long amount of time between two steps of a computation, there can be a arbitrarily long amount of time for a message to be delivered.
This is a completely different use of "arbitrary". In TMs, the fact that an arbitrary amount of time can pass between steps means that any device, with any finite amount of time between steps, can produce the full range of TM behaviors. In your actor case, to get non-computable behavior, you need to show that the device can delay the message by every amount of time. You need to show that such a physical device can exist.
Put simply, it's one thing to propose a non-computable abstraction that's convenient to model some systems, and another thing altogether to claim that there are realizable physical systems that cannot be simulated by a Turing machine. The former is useful but mundane (in fact, all of classical analysis falls in this category); the latter has not been achieved to date.
In the above models of computation, arbitrary means absolutely arbitrary, i.e., there is no a priori bound on the amount of time that it can take.
Your trouble may be with Plotkin's proof, which shows that state machine models of nondeterministic computation are inadequate.
Every physical object does have an a priori bound on the amount of time it can take to do something, unless that time could possibly be infinite. The reason is that it needs some sort of a counter, so it needs some state, and there's only so much state storable in the universe to store a counter.
This is a common problem when we appeal to continuous natural phenomena, as their common description is usually a convenient, but imprecise, abstraction. Goldreich addressed this in On the philosophical basis of computational theories: "A computational model cannot be justified by merely asserting that it is consistent with some theory of natural phenomena ... The source of trouble is the implicit postulate that asserts that whatever is not forbidden explicitly by the relevant electrical theories, can actually be implemented"
He adds "at no cost" because his focus is complexity, not computability.
Goldreich demonstrates the problem by showing that our abstractions of electrical circuits don't in themselves preclude circuits that violate computational complexity results or even decide halting, but that alone is insufficient to show that such circuits can actually be built.
There are certain formalisms that make this kind of error harder to spot: actor formalisms make it easy to hide impossible "computation" in a collaborator; some typed formalisms could hide computation in the syntax (which requires a lot or even infinite computation to decide).
Ron Pressler - Finite of Sense and Infinite of Thought: A History of Computation, Logic and Algebra
See the following:
(\xy.x)M ==> \y.M
((lambda (x y) x) M) ==> undefined
\xy.x is just an abbreviation for \x.\y.x, so the LISP counterpart would really be
((lambda (x) (lambda (y) x)) M) ==> (lambda (y) M)
Also, LC is not compatible with functions with a variable number of arguments, which is common in LISP. For instance,
(+ 1) ==> 1
PLUS 1 ==> \nfx.f(nfx) == SUCC
In most LISP dialects, you can pass any number of arguments to a variable-argument function like +. So what does the syntax (F X) denote in general? The application of a unary function to one argument or the partial application of a binary function? Or a ternary one...?
In LC it does not matter, because multi-variable functions and higher-order functions are the same.
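A quick Python sketch of why currying removes the ambiguity (the function names here are mine):

```python
# Curried style: every function takes exactly one argument, so (F X)
# always means "apply a unary function to X"; n-ary calls are just
# iterated application.
def curry2(f):
    return lambda a: lambda b: f(a, b)

add = curry2(lambda a, b: a + b)

three = add(1)(2)   # full application, one argument at a time
succ = add(1)       # partial application is an ordinary value

# By contrast, a Lisp-style variadic (+ 1) must be defined by fiat
# to mean 1; there is no partial-application reading to fall back on.
```

So `add(1)` is both a legitimate call and a new function, with no special-casing needed.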
I have developed a LISPy language that uses currying instead of functions of multiple arguments in the book Compiling Lambda Calculus (https://www.t3x.org/clc/index.html).
You can download the code here: https://www.t3x.org/clc/lc1.html.
What I'm extrapolating from this is that McCarthy's ideas are similar in implication to Lambda Calculus where you can define computation with just function abstraction and application, and use Peano numbers to represent data. Both approaches end up creating a purely functional way to write programs.
Would that be correct? I also wonder whether there is anything we can take away from this knowledge that is applicable to programming or how we look at it?
"They are interesting because with just three initial functions (successor, constant and projection functions) closed under composition and primitive recursion, one can produce most computable functions studied in number theory (addition, division, factorial, exponential, etc.)."
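The quoted construction is small enough to sketch in Python (the helper names `succ`, `const`, `proj`, and `prim_rec` are mine):

```python
# The three initial functions.
def succ(n):
    return n + 1

def const(k):
    return lambda *args: k

def proj(i):
    return lambda *args: args[i]

# Primitive recursion: given base g and step h, define
#   f(0, xs)   = g(xs)
#   f(n+1, xs) = h(n, f(n, xs), xs)
def prim_rec(g, h):
    def f(n, *xs):
        acc = g(*xs)
        for i in range(n):
            acc = h(i, acc, *xs)
        return acc
    return f

# Addition: add(0, y) = y; add(n+1, y) = succ(add(n, y))
add = prim_rec(proj(0), lambda i, acc, y: succ(acc))

# Multiplication: mul(0, y) = 0; mul(n+1, y) = add(y, mul(n, y))
mul = prim_rec(const(0), lambda i, acc, y: add(y, acc))
```

From successor, constants, and projections alone, closure under composition and primitive recursion already yields the familiar arithmetic functions.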
Lazy evaluation is just one possible operational semantics for a lambda calculus. Eager evaluation is another. In fact, all of the versions of lambda calculi presented in Benjamin Pierce's widely-read textbook "Types and Programming Languages" feature eager evaluation rather than lazy evaluation.
So the claim that the reason that Lisps aren't based on the lambda calculus is due to lack of lazy evaluation is incorrect. There are other reasons that Lisps diverge from lambda calculi but the evaluation strategy isn't one of them.
I think the 'typed' bit is key. You can't implement Y in plain old Haskell because it would need to recurse infinitely during type-checking.
Some type systems do support equi-recursive types without the type constructor, e.g. Whiley (http://whiley.org/2013/04/21/iso-recursive-versus-equi-recur...). Maybe there you could implement Y without a type signature and have the recursive type inferred.
The main problem is speed. Using the Y combinator is going to mess up whatever code flow analysis the compiler has, unless it's using some cutting edge optimization research that I haven't been able to find.
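For reference, in an untyped, strict setting the fixed point is definable directly. A Python sketch using the Z combinator (the eta-expanded Y that works under strict evaluation):

```python
# Z = λf. (λx. f (λv. x x v)) (λx. f (λv. x x v))
# The inner eta-expansion delays the self-application so a strict
# evaluator doesn't force it immediately.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial with no named recursion anywhere:
fact = Z(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))
```

This type-checks nowhere near plain Haskell's type system, which is exactly the parent's point; in an untyped language it is unproblematic (if slow).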
Some questions to ponder:
Is Lisp a term re-writing system?
Is lambda calculus a term rewriting system?
Is the Mathematica language a term-rewriting system?
And to round it all up: Is Lisp an evaluation system and Lambda calculus an evaluation system?
[I'll leave this one to the reader]
I think, actually, the main reason it became so popular is exactly that it was implemented in Lisp/Scheme...
See the following:
As a loose analogy, think of how specific instances of the general idea of systems are named: the court system, the cooling system, the moderation system, etc. Some uses are a bit archaic though, e.g. people now usually refer to integral calculus as a standalone name, without the definite article. I think we're somewhere in between with (the) lambda calculus; you can find papers that use "the" and others that don't.
A) Pure Lambda Calculus is defined by
1. the terms (metavariables t and u): λx.t | x | (t u)
2. the reduction rule, β
3. the two conversion rules, α and η
B) Another of Church's lambda calculi, defined by
1. the same terms
2. the reduction rule, β
3. the single conversion rule, α.
You might go .. wait, these are the same! They are not. Without the η rule you cannot establish extensional equivalence, so you lose properties like referential transparency.
There are a number of shortcomings with the pure lambda calculus, of course, so over the years people have added things, creating variations of lambda calculi.
A particularly famous one is System F. System F extends the Simply Typed Lambda Calculus (which itself is an extension of the pure lambda calculus) with type generators and eliminators. The trick is to make the language of types into terms of the lambda calculus itself.
System F begets SML, SML/NJ, OCaml, Haskell and F#.
Then you have the other extensions along Barendregt's Lambda Cube. You get things like Coq and Agda and Idris, which are extensions of System F in the same directions but different ways of constructing them.
On the other side of things, people are trying to make sense of the pure lambda calculus itself. There is a whole subindustry of people explaining substitution (lambda calculi with explicit substitutions), and people explaining self-interpretation, which you can do in Lisp but not really in the pure lambda calculus (the first clean attempt was in 1995!). There are also people developing or splitting the pure lambda calculus into subsets, for example those who work with the sequent calculus.
All these systems are lambda calculi. When people say _the_ lambda calculus then they usually mean the pure lambda calculus.
E.g. If we meet aliens someday their Lambda Calculus will be the same as ours. Just like their integers will be the same as ours.
"The one, the two, the three, the four,...."
Unintended metaphor and application are things.
Smacks of a cognitive bias known as functional fixedness: https://en.m.wikipedia.org/wiki/Functional_fixedness
A screwdriver can also be a pry bar :-)
Imo this is why looser IP laws are important. Humanity needs to be able to rethink and find new application of its epistemological ideas to find new ideas of interest.
Too often we’re held to thinking about IP only the way the author intended. It’s almost pushing into thought policing.
Carl Hewitt developed the Actor model based on Smalltalk in the 1970s.
Joe Armstrong created Erlang in the 1980s, and he didn't know the Actor model at all at that time. Erlang doesn't even have an explicit concept of an Actor; it accidentally implemented the Actor model through the elegant design of its processes.
But when it comes to the Actor model nowadays, Erlang is basically a must-mention language, even though Actors were never the intention.
It's true that J. McCarthy had only a passing familiarity with LC. M-expression LISP, as it was originally conceived, was all about first-order recursion schemes over S-expressions. But due to a very simple error in the base case of an inductive definition, LISP 1.0 "featured" or "supported" higher-order functions, a la LC.
map(f, map(g, list(1, 2, 3)))
map(f.g, list(1, 2, 3))
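The two expressions above denote the same list; a Python check (`compose` stands in for the `.` composition operator):

```python
# Composing maps vs mapping a composition: map fusion.
compose = lambda f, g: lambda x: f(g(x))

f = lambda x: x + 1
g = lambda x: x * 2

nested = list(map(f, map(g, [1, 2, 3])))        # map(f, map(g, ...))
fused = list(map(compose(f, g), [1, 2, 3]))     # map(f.g, ...)
```

Both produce `[3, 5, 7]`, which is exactly the equivalence the higher-order `map` makes expressible.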
Does anyone know of a symbolic manipulation software for haskell, or similar syntax (prefer to avoid lisp syntax if poss, but ok if nothing else) which will allow me to do this easily and get a feel for it?
the paper: https://ndmitchell.com/downloads/paper-uniform_boilerplate_a...
small tutorial: https://www.cs.york.ac.uk/fp/darcs/uniplate/uniplate.htm
If I'm missing something, please say.
However, I was using my example exactly as an example; I was after a symbolic manipulation app.
The issue of the cost of naively compiled functional languages matters to me as bad performance will kill a good thing.
(Regarding the sibling comment: Landin’s paper also predates Backus’ paper by about 10 years)
From Landin’s paper:
6. Relationship to LISP
ISWIM can be looked on as an attempt to deliver LISP from its eponymous commitment to lists, its reputation for hand-to-mouth storage allocation, the hardware dependent flavor of its pedagogy, its heavy bracketing, and its compromises with tradition.
https://amturing.acm.org/award_winners/backus_0703524.cfm It's not obvious but the "ACM Turing Award Lecture" link is the PDF.
The focus of Curry's work was attempts to show that combinatory logic could provide a foundation for mathematics. (edit: accidentally stripped the part here mentioning that this was in 1933, i.e., very pre-computer) [...]. The paradox, developed by Rosser and Stephen Kleene, had proved the inconsistency of a number of related formal systems, including one proposed by Alonzo Church (a system which had the lambda calculus as a consistent subsystem) and Curry's own system. [...]
By working in the area of Combinatory Logic for his entire career, Curry essentially became the founder and biggest name in the field. Combinatory logic is the foundation for one style of functional programming language. The power and scope of combinatory logic are quite similar to that of the lambda calculus of Church, and the latter formalism has tended to predominate in recent decades.
And I think there's more but it's hardly my field. Prolog is grown out of predicate calculus which has its roots in propositional calculus, which goes back to the ancient greeks.
The mathematical foundations of things can be surprisingly old. I saw a 3D wireframe of a goblet with perspective, and that was from the 1500's. It could have been done on a 1980's home computer by appearance.
"Computer Science could be called the post-Turing decline in the study of formal systems."
(I dunno who said it. Maybe Dijkstra.)
Come to think of it, still isn't so here's one in return
Q. What's the difference between a mathematician and a large pizza?
A. A large pizza can feed a family of four.
Also, realistically speaking, no programming language is based on the Lambda Calculus as is, even those that try to be.
Lisp has mutable variables. Lambda calculus doesn't.
This article repeats this claim: "TL;DR: Lisp is not based on the Lambda Calculus"
But that's not actually what McCarthy said. McCarthy said:
> one of the myths concerning LISP that people think up or invent for themselves becomes apparent, and that is that LISP is somehow a realization of the lambda calculus
"Based on" and "realization of" are two different things. This kind of exaggerated or hyperbolic pedantry strikes me as clickbait. Which is unfortunate, because the article does contain some good content.
If you read the LISP I manual, you will see that concepts beyond the obvious lambda notation are used directly from The Calculi of Lambda Conversion. Notably, the distinction between forms and functions.
Clearly, we're splitting some very fine hairs here.
I think he is commenting on the subtleties of it.
I think many reading here will say they understand it or have studied it in a course but I am not so sure everyone gets the subtle points. Myself I have always puzzled over the difference between what programmers call LC and what seems to be discussed by Church.
Lambda calculus can be modeled in lisp. But there are millions of things you can build with Lisp that McCarthy might not know or care about.
Marking you down for using an expression used by all native English speakers is bonkers.
example #1, from a clearly educated British person:
'i was going to go' (yes, no negation, but still..)
agree, this is not formal language, but quite common.