bvssvni's comments (Hacker News)

I tried to get Jonathan Blow engaged in the Rust RFC process to improve productivity for gamedevs. However, he thought it was a better idea to start working on his own language (Jai).

When I did some research for the Piston project, I learned that there was a productivity technique called "meta parsing", used in the late 60s to develop the first modern computer. This was before C. The language was Tree-Meta. Viewpoints Research Institute later upgraded it to OMeta.

I thought OMeta was too complex, so I developed Piston-Meta, an alternative for Rust built around a simple data structure: start node, end node, text, bool and f64.
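
The rough shape of that data structure, sketched in Rust (hypothetical names, not necessarily the crate's exact types), is something like this: the parser emits a flat stream of small events, and a consumer folds them into whatever structure it needs.

    // Rough sketch of the "meta data" idea (hypothetical names, not
    // necessarily Piston-Meta's exact types): the parser emits a flat
    // stream of small events instead of a full syntax tree.
    enum MetaData {
        StartNode(String),   // enter a named node
        EndNode(String),     // leave a named node
        Str(String, String), // (name, text)
        Bool(String, bool),  // (name, bool)
        F64(String, f64),    // (name, number)
    }

    // A consumer folds the event stream into whatever it needs,
    // e.g. counting how many nodes were parsed.
    fn count_nodes(data: &[MetaData]) -> usize {
        data.iter()
            .filter(|d| matches!(d, MetaData::StartNode(_)))
            .count()
    }

    fn main() {
        let data = vec![
            MetaData::StartNode("point".into()),
            MetaData::F64("x".into(), 1.0),
            MetaData::F64("y".into(), 2.0),
            MetaData::EndNode("point".into()),
        ];
        println!("nodes: {}", count_nodes(&data));
    }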


In mathematics, the roof holds up the building, not the foundation. Since humans use mathematics a lot, we design foundations to fit our specific needs. It is not the building we are worried about; we just want better foundations to create better tools.

Not only are we going to treat mathematics as subjective, but we will also have formal theories that reason about different notions of subjectivity. https://crates.io/crates/joker_calculus

> Could our conception of paradox be itself primal, and perhaps, in some plane, could it be something ranking higher, of first-class?

Yes! Paradoxes are statements of the form `false^a` in exponential propositions. https://crates.io/crates/hooo

> Also, I’ve been thinking, recently, on the role of time in structures. There can’t possibly be any structure whatsoever without time, or, more concretely, at least the memory of events, recollecting distinctive and contrasting entropic signatures. So, mathematics manifesting as, of, and for structure, wouldn’t it require, first and foremost, a treatment from physics? Regular or meta?

Path semantical quality models this relation: you have different "moments" in time, each of which is a space for normal logical reasoning. Between these moments, there are ways to propagate quality, which is a partial equivalence. https://github.com/advancedresearch/path_semantics


I think the most exciting work in mathematics today is in the formal foundations. However, I can also understand mathematicians who are thinking like this:

1. I only need normal congruence

2. I only need perfect information games

For problems that are solvable under these two assumptions, there is little benefit in tying proofs back to an axiomatic basis. Once you drop either of these assumptions, proofs get much harder and a solid foundation becomes more important.


One thing I would like to point out about Gödel's incompleteness theorems is that there are different notions of provability. Gödel uses the notion of "provability" you get from Provability Logic, which is a modal logic into which you introduce terms from, e.g., Sequent Calculus.

Recently I found another notion of "provability" in which Löb's axiom, required in Provability Logic, is absurd. It turns out that this notion fits better with Intuitionistic Propositional Logic than Provability Logic does. This allows integrating the meta-language into the object-language. This is pretty recent, so I think we still have much to learn about the foundations of mathematics.


This makes it kind of sound as if Löb's axiom is used in the proof of Gödel's incompleteness theorems, but Gödel was working in (I think) PA, not in Provability Logic?

I guess you just meant "the notion of provability is the same as the one that would later be described in Provability logic" ?

I viewed the page you linked, but I don't see anywhere where you describe the alternate notion of "provability" you have in mind.

"Provability" in the sense of Gödel's incompleteness theorems, is just, "There exists a proof of it in [the chosen system of axioms + rules of inference]", yes? I don't see how any other notion would be fitting of the name "provability". "provable" , "possible to prove", "a proof exists", all the same thing?

Oh, I guess, if you want to use "there exists" in a specifically constructive/intuitionistic way? (so that to prove "[]P" you would have to show that you can produce a proof of P?)

I think I've seen other people try to do away with the need for a meta-language / make it the same as the object-language, and, I think this generally ends up being inconsistent in the attempts I've seen?

edit: I see you have written 'Self quality a ~~ a is equivalent to ~a, which is called a "qubit".'

I don't know quite what you mean by this, but, _please_ do not call this a qubit, unless you literally mean something whose value is a vector in a 2d Hilbert space.


> I guess you just meant "the notion of provability is the same as the one that would later be described in Provability logic" ?

yes

> I viewed the page you linked, but I don't see anywhere where you describe the alternate notion of "provability" you have in mind.

See section "HOOO EP"

> Oh, I guess, if you want to use "there exists" in a specifically constructive/intuitionistic way? (so that to prove "[]P" you would have to show that you can produce a proof of P?)

This would be `|- p` implies `□p`, which is the N axiom in modal logic (used by Provability Logic).

In Provability Logic, you can't prove `□false => false`. If you can prove this, then Löb's axiom implies `false`, hence absurdity. `□p => p` for all `p` is the T axiom in modal logic. In IPL, if you have `true |- false`, then naturally you can prove `false`. So, you can only prove `□false => false` if you already have an inconsistent theory.
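
For reference, here are the standard modal axioms and rules being contrasted (just the textbook definitions, stated in the notation used above):

    N (necessitation): from |- p, infer |- □p
    K: □(p => q) => (□p => □q)
    T: □p => p
    Löb: □(□p => p) => □p

Provability Logic (GL) is K plus Löb plus the necessitation rule; it notably does not contain T, since adding T on top of Löb (with K and necessitation) collapses into inconsistency.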

Provability Logic is biased toward languages that assume the theory is consistent. However, if you want to treat provability from the perspective of any possible theory, then favoring consistency at the meta-level is subjective.

> I think I've seen other people try to do away with the need for a meta-language / make it the same as the object-language, and, I think this generally ends up being inconsistent in the attempts I've seen?

I have given this some thought before, and I think it is based on fixed-point results for predicates of one argument in Provability Logic. For example, in Gödel's proof, he needs to encode a predicate without arguments in order to create the Gödel sentence. In a language without such fixed points, this might not be a problem.

> I don't know quite what you mean by this, but, _please_ do not call this a qubit, unless you literally mean something whose value is a vector in a 2d Hilbert space.

The name "qubit" comes from the classical model, where you generate a random truth table using the input bit vector as seed. So, the proposition is in super-position of all propositions and hence behaves like a "qubit" in a classical approximation of a quantum circuit.


Something being random and/or undetermined is not sufficient for it to be like a qubit. You need the linear algebra aspect for the name to be appropriate, IMO.

> See section "HOOO EP"

I have looked through that section, and afaict, nowhere in it do you define an alternative notion of “provability”?

> In Provability Logic, you can't prove `□false => false`. If you can prove this, then Löb's axiom implies `false`, hence absurd.

Why do you expect to be able to prove `□false => false` ? I.e. why do you expect to be able to prove `not □false`, i.e. prove the consistency of the system you are working in.

> Provability Logic is biased toward languages that assume that the theory is consistent. However, if you want to treat provability from a perspective of any possible theory, then favoring consistency at meta-level is subjective.

I’m not really sure what you mean by “favoring [...] is subjective”. Also, if you want to do reasoning from within an inconsistent theory, then I’d hope it is at least paraconsistent, as otherwise you aren’t going to get much of value? Or at least, nothing with any guarantee. If you aren’t aiming at a paraconsistent logic, I don’t follow your point about not wanting to favor the assumption that the system is consistent.


> Something being random and/or undetermined is not sufficient for it to be like a qubit. You need the linear algebra aspect for the name to be appropriate, IMO.

Naming things is hard. Given how constrained the propositional language is, I do not think there is much risk of misinterpreting it. I needed something to associate with "superposition" but also fit with "quality". Both "qubit" and "quality" start with "qu", so I liked the name.

It does not bother me if people find another better name for it.

> I have looked through that section, and afaict, nowhere in it do you define an alternative notion of “provability”?

I do not want to create controversy around Provability Logic by making claims that are too strong for some people's taste. What I meant is that this section explains HOOO EP, and my interest is in communicating what it is for its own sake, without needing to compare it to Provability Logic all the time. However, since HOOO EP is so similar to Provability Logic, it requires some clarification. I hope you found the section useful even though you were not able to see from its definition how it is an alternative notion of provability.

> Why do you expect to be able to prove `□false => false` ? I.e. why do you expect to be able to prove `not □false`, i.e. prove the consistency of the system you are working in.

I think this is trying to think about logic in a peculiar way. HOOO EP was not developed to reason about consistency. It has its own motivation that makes sense. However, once you have HOOO EP, you can start discussing how it relates to the consistency of theories.

It makes sense, with respect to consistency, from the perspective where an inconsistent theory is absurd. Absurd theories can prove anything, so there is no distinction between true and false statements. Now, if you interpret `□false` as an assumption that one can prove false, then of course one can prove false. `□false => false` is the same as `!□false`. Does this mean that the theory proves its own consistency? No, because you made the assumption `□false`. You have only talked about what you can prove in the context of `□false`. From this perspective, `□false => false` is trivially true.

Provability Logic does not allow you to think of `□false` as meaning "I can prove `false`". Instead, it is interpreted as "this theory is inconsistent", but without assigning this statement a particular meaning. This means there is a gap between "I can prove `false`" and "this theory is inconsistent". Now, if you ignore the gap, then you are just making an error of interpretation. You have to respect the formal sense, in which Provability Logic can have two different notions of absurdity, while naturally you would think of them as one. However, if you want to have one instead of two, then you need HOOO EP.

> Also, if you want to do reasoning from within an inconsistent theory, then I’d hope it is at least paraconsistent, as otherwise you aren’t going to get much of value?

It sounds like you are assuming HOOO EP is inconsistent? Why are you speculating about my motivations in such a context?


> Provability Logic does not allow you to think of `□false` as meaning "I can prove `false`". Instead, it is interpreted as "this theory is inconsistent" but without assigning this statement a particular meaning.

“It is provable in system X that False” and “system X is inconsistent” are the same statement. That’s what it means for a system to be inconsistent: there is a proof in the system of the statement False.

So, no, you’re wrong: in provability logic, []False can be interpreted as “it is provable (in this system) that False”.

> It sounds like you are assuming HOOO EP is inconsistent?

While I do suspect it to be, that isn’t why I said that. I was saying that because you were seemingly saying that provability logic has a bias favoring reasoning from within a consistent system. And I was saying, “why would you want to be reasoning within an inconsistent system? Shouldn’t something be suited for reasoning within a consistent system, seeing as reasoning in an inconsistent system gives you nonsense?”

> I hope you found the section useful even though you were not able to see how it is an alternative notion of provability from its definition.

It didn’t provide me with the answer I was seeking, so I will instead ask you directly again: what alternative notion of provability do you have in mind?


Logic by default does not have a bias toward consistency. The bias is added by people who design and use mathematical languages using logic. It does not mean that the theory you are using is inconsistent.

Asking "why do you want to be reasoning within an inconsistent system?" is like facing a dead end, because you are supposing a bias that was never there in the first place. As if, logic cares about what you want. You only get out what you put in. Bias in => bias out.

I am speculating about the following: If we don't bias ourselves in favor of consistency at the meta-level, then the correct notion of provability is HOOO EP. If we are biased, then the correct notion is Provability Logic.

In order to see HOOO EP as a provability notion, you have to interpret the axioms as a theory about provability. This requires mathematical intuition, for example, that you are able to distinguish a formal theory from its interpretation. Now, I can only suggest a formal theory, but the interpretation is up to users of that theory.


> In order to see HOOO EP as a provability notion, you have to interpret the axioms as a theory about provability.

A notion can generally be prior to a particular formalization. If you have an alternative notion of provability in mind, you should be able to express it.

> Now, I can only suggest a formal theory, but the interpretation is up to users of that theory.

Ok, well, it has no users other than yourself, so if you want to communicate how it could be useful, I recommend you find a way of expressing/communicating an interpretation of it.

Also, I think your idea of a “bias towards consistency” is unclearly described at best.


I'm attempting to understand you, so:

Are you saying that Löb's axiom, which states that the provability of "the provability of p implies p" implies the provability of p, necessarily prejudices some implicit assumption of consistency to the meta-language?

How so, and/or, what are the axioms, or derived properties, of this new notion of provability you have uncovered?


> Are you saying that Löb's axiom, which states that the provability of "the provability of p implies p" implies the provability of p, necessarily prejudices some implicit assumption of consistency to the meta-language?

Yes. One way to put it: Provability Logic is balancing on a knife-edge. It works, but just barely. However, you can turn it around and say the new notion is balancing on a knife-edge by requiring a DAG (Directed Acyclic Graph) at the meta-level. The way I see it, both approaches have implicit assumptions and you have to trade one for the other.

I am working on an implementation of the new notion of provability (https://crates.io/crates/hooo), after finding the axioms last year (it took several months):

    pow_lift : a^b -> (a^b)^c

    tauto_hooo_imply : (a => b)^c -> (a^c => b^c)^true

    tauto_hooo_or : (a | b)^c -> (a^c | b^c)^true

As a modal logic, the difference is surprisingly small: Löb's axiom is swapped for T. `tauto_hooo_imply` is slightly stronger than K.

The major difference is that `|- p` equals `p^true` instead of merely implying it, if you treat `|-` as internal. If you treat it as external, then you can think of it as N + T.

I needed this theory to handle reasoning about tautological congruent operators.

However, once you have this, you can perfectly reason about various modal logical theories by keeping separate modality operators, including Provability Logic, e.g. `modal_n_to : N' -> all(a^true => □a)`.

So, it is not a tradeoff that loses Provability Logic. Instead, you get a "finalized" IPL for exponential propositions. This is why I think of it as a natural way of extending IPL with some notion of provability.


Thanks for your response. I'm not very familiar with reasoning in modal logic so I'm finding it hard to get an intuition for your `^` operator. Yes, it does seem like some kind of generalized provability. Does `a^b` mean `a` is provable in all worlds in which `b` is valid, i.e. taken as an axiom in the underlying proof theory, or something like that?

This is very interesting, but unfortunately I have more mundane things to get to. Hopefully I'll find some time to play with your axioms and relearn whatever modal logic I once knew.

One more thing, what are you calling `N`?


> Does `a^b` mean `a` is provable in all worlds in which `b` is valid, i.e. taken as an axiom in the underlying proof theory, or something like that?

Yes. You can also think of it as a function pointer `b -> a`. Unlike lambdas/closures, a function pointer can not capture variables from the environment. So, if you have a function pointer of type `b -> a`, then you can produce an element of type `a` from an element of type `b`.

This sounds almost like the same thing as with lambdas/closures. The difference is that a lambda can capture variables from the environment, such that `b => a` is possible "at run-time" in a way that does not hold for every possible world. So, the distinction between function pointers and lambdas/closures can be thought of as different notions of provability at static compile-time and dynamic run-time.
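
A small Rust illustration of that function-pointer/closure distinction (plain Rust, nothing from the hooo crate):

    // A function pointer cannot capture the environment: it must work
    // in "every possible world", i.e. for any input of its argument type.
    fn double(x: i32) -> i32 { x * 2 }

    fn main() {
        let ptr: fn(i32) -> i32 = double; // no captured context

        let offset = 10;
        // A closure may capture `offset`, so it only "works" relative
        // to the environment it was created in.
        let cls = move |x: i32| x + offset;

        println!("{} {}", ptr(3), cls(3));
    }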

> One more thing, what are you calling `N`?

N is the name of the axiom in Modal Logic. It is called the "Necessitation Rule". See https://en.wikipedia.org/wiki/Modal_logic#Axiomatic_systems


This reminds me of Heisenberg's uncertainty principle but at a much deeper and more generalized level.


> Recently I found another notion of "provability"

Any pointers to that alternative notion?


I'm implementing it in this project: https://crates.io/crates/hooo


Is your goal to have as few axioms as possible, or as few syntactic constructions as possible?


Both really. Just as few givens as possible whether they are baked in or added on as axioms.


The foundations of mathematics are all about language design.

To answer this question, one must say something about which language a foundation of mathematics is using. For example, Set Theory is formalized in First Order Logic. However, First Order Logic uses Propositional Logic, so to build an alternative foundation to Set Theory, you might consider starting with Propositional Logic and extending it another way.

First Order Logic extends Propositional Logic with predicates. This might seem like a good design at first, until you try to reason about uniqueness. In Set Theory, one requires an equality operator in addition to set membership in order to be able to reason about uniqueness at all. This equality operator is ugly, because you have to rebuild objects that are isomorphic but use different encodings.

Predicates cause problems because they are unconstrained. To make it easier to formalize advanced theories, I suggested Avatar Logic, which replaces predicates with binary relations, avatars and roles. You can try it here: https://crates.io/crates/avalog

Also, most theories assume congruence for all predicates, which is bad for, e.g., the foundations of randomness.

The next crisis in "the foundations of mathematics" will be "tautological congruence". Luckily, this is already being worked on, by extending Intuitionistic Propositional Logic with exponential propositions. This theory is known as "HOOO EP" and is demonstrated here: https://crates.io/crates/hooo


Equality works out cleanly if you distinguish between definitional equality and equivalence. Definitional equality sometimes gets called syntactic equality, as in: the definitions are the same symbols in the same arrangement.

  (lambda (x) (+ x x)) != (lambda (x) (* 2 x))

In particular, that's decidable for any finite expressions. Walk the expressions like an AST, comparing the nodes on each side.
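
For instance, a minimal Rust sketch (illustrative only) of deciding syntactic equality by walking two expression trees:

    // Tiny expression AST; syntactic equality is just structural
    // comparison of the trees, which always terminates.
    #[derive(PartialEq)]
    enum Expr {
        Var(String),
        Num(i64),
        Add(Box<Expr>, Box<Expr>),
        Mul(Box<Expr>, Box<Expr>),
    }

    fn main() {
        // (+ x x) vs (* 2 x): different definitions, so not
        // syntactically equal, even though they are equivalent.
        let a = Expr::Add(Box::new(Expr::Var("x".into())), Box::new(Expr::Var("x".into())));
        let b = Expr::Mul(Box::new(Expr::Num(2)), Box::new(Expr::Var("x".into())));
        println!("{}", a == b); // false
    }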

Equivalence / observational equality covers whether two expressions compute the same thing. As in: find a counterexample to show they're not equivalent, or find a proof that both compute the same thing, or that each can be losslessly translated into the other.
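
On a small finite domain you can even brute-force the counterexample search; a toy Rust sketch (illustrative only, and obviously not a general decision procedure):

    // Checking observational equivalence by searching for a
    // counterexample; only feasible here because the domain is tiny.
    fn f(x: u8) -> u8 { x.wrapping_add(x) }
    fn g(x: u8) -> u8 { x.wrapping_mul(2) }

    fn main() {
        let counterexample = (0..=u8::MAX).find(|&x| f(x) != g(x));
        println!("{:?}", counterexample); // None: no counterexample found
    }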

Requiring that your language can decide observational equivalence of the computation of any two terms seems obviously a non-starter to me, but it also looks like the same thing as requiring that a programming language's type system be decidable.


I want to reason hypothetically, which is why I don't use syntactic equality. I only use syntactic inequality in a very limited sense, e.g. two symbols `foo'` and `bar'` are symbolically distinct, so one can introduce `sd(foo', bar')`.

The reason is that if I have a proof `((x + x) == (2 * x))^true`, then I can treat objects as if they were definitionally equal.

When definitional equality and syntactic equality are the same, one is assuming `(!(sd(a, b)) == (a == b))^true`, which yields a proof of `!sd(a, a)` for all `a`. This destroys the property of reasoning hypothetically about exponential propositions in Path Semantics. For example, I might want to reason about what happens if I had `sd(a, a)`: does this imply `a ~~ a` by `q_lift : sd(a, b) & (a == b) -> (a ~~ b)`? Yes!

However, if I already have `!sd(a, a)` for all `a`, then the above reasoning would be the same as reasoning from an absurdity.

I can always assume this in order to reason about it, but I don't have to make this assumption a built-in axiom of my theory.

When assuming tautological congruence, e.g. `(a == b)^true` in a context, one is not enforcing observational equality. So, it is not the same as requiring the type system to be decidable. I can also make up new notions of equivalence and guard them, e.g. by not making them tautologically congruent.


I don't follow. At a guess, my "definitional equality" is your "syntactic equality", but I can't infer what you mean by definitional. Perhaps our terminology is too different to communicate successfully.

(+ X X) is not definitionally equal to (* 2 X), for they are different definitions. Different pictures on the page. A proof that they are equivalent is not a proof that they are equal. A proof that they are equal is evidence that your proof system is unsound.

If you assume a symbol is not definitionally equal to itself, you can indeed prove false, but that doesn't seem particularly important since you cannot prove that a symbol has a different definition to itself.


Yeah, this is too imprecise. I tried to translate to your terminology, but failed.

My system uses "tautological equality", and this allows me to treat them the same way for all tautologically congruent operators. Of course, you can't treat them the same if an operator is neither normal nor tautologically congruent.

Even if you have such an operator `foo'(x)`, you can prove `foo'(x)`, `(foo'(x) == foo'(x))^true`, or `foo'(x) == foo'(x)`, etc. If this is what you are talking about, then this system supports it.


Yes. Although my take on this is that it's about VM design, or the ISA for a VM upon which mathematics runs.


Modern mathematics deals with ISA design from the perspective of application:

A CPU, an FPGA, and a GPU are all Turing complete substrates, yet they’re useful for wildly different things.

Category theory, type theory, and set theory all can embed arbitrary mathematics — but the encodings you get lend themselves to different applications.

E.g., category theory is very useful for abstracting structures to check whether they’re “the same”.


This update changes the PSI implementation (path semantical logic) to use a safe model of path semantical quality. The problem previously was how to handle reflexivity without symbolic distinction (this is beyond IPL, i.e. constructive logic). Now we have a safe subset of path semantics that works with IPL and has also already proved some useful results for further research on Seshatism.

If you have questions, please open up an issue on the Prop project (https://github.com/advancedresearch/prop/issues). You can also join us on the Discord server (https://discord.gg/JkrhJJRBR2).



This is a scripting language I've been working on since 2016. Originally, I did not plan to make a language, but I had a couple of weeks available for some project while waiting for Gfx upgrades. It turned out to be so much fun to work on that I've continued working on it.

It uses a lifetime checker (like Rust) for safety instead of a garbage collector. There are no borrowing semantics, so the language is simpler than Rust. The language is parsed using Piston's meta-parser, whose output is fed in parallel to the lifetime/type checker and the AST constructor. This makes Dyon a bit faster at loading scripts.
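
The general shape of that pipeline, sketched in Rust (a generic fan-out pattern, not Dyon's actual code):

    // Generic sketch: fan parser events out to two consumers, each
    // running on its own thread, so checking and AST construction can
    // proceed while parsing continues.
    use std::sync::mpsc;
    use std::thread;

    fn main() {
        let (to_checker, checker_rx) = mpsc::channel::<String>();
        let (to_ast, ast_rx) = mpsc::channel::<String>();

        let checker = thread::spawn(move || {
            for event in checker_rx { /* run type/lifetime checks */ let _ = event; }
        });
        let ast_builder = thread::spawn(move || {
            for event in ast_rx { /* build AST nodes */ let _ = event; }
        });

        // Stand-in for the parser's event stream.
        for event in ["start:fn", "text:main", "end:fn"] {
            to_checker.send(event.to_string()).unwrap();
            to_ast.send(event.to_string()).unwrap();
        }
        drop(to_checker);
        drop(to_ast);
        checker.join().unwrap();
        ast_builder.join().unwrap();
    }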

Modules are loaded dynamically without polluting the environment of the loader. You must call a function using `call` or `call_ret` to make dependent modules part of the environment.

Dyon supports 4D vectors, HTML color literals, Go-like coroutines and a special link structure with its own template-like loop which makes text generation fast and ergonomic.


This is not just about applying universal basic income, negative tax, etc. The algorithm fine-tunes the whole economy using a single parameter. People can vote on the inequality level they think is healthy for the economy, and the solver guarantees that this happens near-perfectly.

