The syntax will be broken 99% of the time, and making a parser that recovers gracefully under any circumstance is very hard.

You could make an editor that only allows valid programs, but that opens up a lot of UI problems; it has been tried many times and never took off in practice.

-----


Yet Eclipse and IntelliJ IDEA (and most likely a lot of other IDEs) successfully use ASTs in their code editors. The syntax will only be broken locally, so it's not that hard to recover.

-----


I tried Eclipse many times over the last decade. I always used it for a few weeks before code highlighting and completion drove me insane due to multiple crashes, hangups and slowdowns.

Trouble with elaborate 'correct' solutions starts when you need another thing. I don't think support in Eclipse for .cjsx or PureScript or whatever is coming soon. I won't wait 10 years till they get it 'right'. I'll use Atom to get 95% of it right in a few months.

-----


I said it's very hard, not that it's impossible. If you think it's easy, write a parser with recovery for every language Atom supports.

-----


It doesn't need a full AST. It needs a tokenizer and a nesting level, or potentially a stack of different classes of token nesting.

This parser would not be difficult to write; the only "recovery" is in choosing how to react to mismatched token pairs. The simplified problem means heuristics could potentially be used.
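
A minimal sketch of that idea, in Go, with the token classes and the recovery heuristic chosen arbitrarily for illustration: no AST, just a stack of open brackets, and "recovery" reduced to one policy decision on mismatched pairs.

    package main

    import "fmt"

    // open maps each closing bracket to its matching opener.
    var open = map[rune]rune{')': '(', ']': '[', '}': '{'}

    // nesting returns the nesting depth at every rune of src.
    func nesting(src string) []int {
        var stack []rune
        depths := make([]int, 0, len(src))
        for _, r := range src {
            switch r {
            case '(', '[', '{':
                stack = append(stack, r)
            case ')', ']', '}':
                if n := len(stack); n > 0 && stack[n-1] == open[r] {
                    stack = stack[:n-1] // well-matched pair
                } else if n > 0 {
                    // Recovery heuristic: on a mismatch, close the
                    // innermost open bracket anyway. A stray closer
                    // at depth zero is simply ignored.
                    stack = stack[:n-1]
                }
            }
            depths = append(depths, len(stack))
        }
        return depths
    }

    func main() {
        fmt.Println(nesting("f(a[i)]")) // [0 1 1 2 2 1 0]
    }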

-----


Not just broken, but broken and changing. Efficiently updating an AST as the user types, especially when their changes could trigger dramatic alterations to the structure of the tree (e.g., typing /*), is a very hard problem. A responsive editor is pretty much always going to have to take some shortcuts.

-----


    But XML has XPath, XMLSchema and XSLT and JSON doesn't
XML has those things because its data model is hostile to every programming language in existence, and you need tools designed specifically for it to manipulate it concisely.

JSON fits well with the list/map/primitive data model that is common to most scripting languages, so those tools aren't needed: you can just use JavaScript or Python, something you use every day anyway and that doesn't look alien.
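
For instance, a sketch in Go for concreteness (the point holds even more directly in JavaScript or Python):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // JSON decodes straight into the generic data model:
        // objects become maps, arrays become slices, everything
        // else is a primitive.
        var doc map[string]interface{}
        data := []byte(`{"name": "example", "tags": ["a", "b"]}`)
        if err := json.Unmarshal(data, &doc); err != nil {
            panic(err)
        }
        fmt.Println(doc["name"], doc["tags"]) // example [a b]
    }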

    A similar document would look like this in XML
... plus encoding and schema declarations, and all the garbage involved in dealing with namespaces. Or, at least, don't sing the praises of XMLSchema and namespaces if you aren't going to put them in your example.

PS. XML is a markup language: if your data is a text document with markup then XML is a good choice; otherwise leave it alone.

-----


Wikipedia says that the taco is at least 500 years old. I'm not sure they were hard-shell tacos, but the description in Unicode just says 'taco'.

-----


> What will happen in Wayland case?

Nothing at all. Very few modern applications use server-side font rendering anyway; the ones that don't either already obey fontconfig rules independently or will keep ignoring them.

It's no different on Windows and Mac OS X: some programs do their own font rendering that differs from the one provided by the system.

-----


> I don't have to ask Ubuntu and Debian and RedHat and god knows who else, "mother may I" before I write a program.

You can just offer your own repository and have the package you distribute install it. I think you are confusing Linux package managers with app stores.

-----


Why would a "first time user" use a program that has a user interface designed in 1976? Why would a "first time user" expect this program to obey interface guidelines that did not exist until 11 years after that user interface was designed?

-----


> There is not a single person alive today who knows what technology will look like in 10 years

Agreed, nobody knows what's going to happen in the future, however...

> Computing is still in its infancy. A computer from the future - if they are even called that anymore - may be directly comparable to what we have today. Or the technology may change so much that the two devices are not even recognizable as being related.

... you should remember that 'nobody' also includes you. For computers to keep getting faster we'd need something faster than a MOSFET to build our computers on.

The fact that it hasn't been discovered in 90 years doesn't mean it's never going to be discovered, but it also doesn't mean it's bound to be discovered soon because of the unstoppable Kurzweilian march of progress.

Since we can't predict future discoveries (because of the law of total expectation: what we expect to believe tomorrow is already what we believe today), the best we can do is try to describe what the future looks like from here.

-----


Hexdumped the HTML of the webpage; there are only two codepoints here.

-----


> 12 Tender Japaneses

The only thing I can find by googling this is a World War 2 aircraft carrier. Can I have the original title in romaji? I want to see if it has been dubbed or subtitled.

-----


Looks like it's called Juninin no yasashii nihonjin: http://imdb.com/title/tt0104330/

-----


Yes, that's it. Thanks. Like 12 Angry Men, the original is a stage play.

-----


> nothing reflects entanglement better than a formal semantics

A formal semantics is just a way to translate from one formalism to another.

It's rather obvious that choosing the target formalism determines how simple the language will appear; when you talk about a "formal semantics" you should specify which one: operational? denotational? axiomatic?

Strictly speaking a compiler or an interpreter represents a formal semantics for a language: operational semantics rules are often very similar to the code of an AST interpreter, for example.
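
For instance, here is a minimal sketch (the Go types are invented just for illustration) where each case of the interpreter reads almost like a big-step operational semantics rule:

    package main

    import "fmt"

    type Expr interface{}

    type Num struct{ Value int }        // a literal
    type Add struct{ Left, Right Expr } // e1 + e2

    // Eval mirrors the big-step rules: a literal evaluates to itself;
    // if e1 evaluates to v1 and e2 to v2, then e1+e2 evaluates to v1+v2.
    func Eval(e Expr) int {
        switch e := e.(type) {
        case Num:
            return e.Value
        case Add:
            return Eval(e.Left) + Eval(e.Right)
        }
        panic("unknown expression")
    }

    func main() {
        fmt.Println(Eval(Add{Num{1}, Num{2}})) // 3
    }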

One could interpret your statement to mean that the smaller the compiler, the simpler the language, which means that assembly language was the simplest language all along!

For example, in your reddit post you claim that := is problematic, and indeed its semantics is tricky and often trips up beginner (and even experienced!) programmers. However, the semantics of := is not actually that complicated ("define every variable on the left that isn't already defined in the current scope, otherwise assign to it"), and the errors stem from the fact that people assume the scope lookup for := is recursive, which would arguably result in a more complicated formal semantics.
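
A minimal sketch of that trip-up (the lookup function is hypothetical, just for illustration):

    package main

    import (
        "errors"
        "fmt"
    )

    func lookup() (int, error) { return 0, errors.New("not found") }

    func main() {
        var err error
        if v, err := lookup(); err == nil {
            fmt.Println(v)
        }
        // := consulted only the if statement's own scope, so it
        // declared a new err that shadowed the outer one, which is
        // therefore still nil here.
        fmt.Println(err) // <nil>
    }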

-----


> For example, in your reddit post you claim that := is problematic, and indeed its semantics is tricky and often trips up beginner (and even experienced!) programmers. However, the semantics of := is not actually that complicated ("define every variable on the left that isn't already defined in the current scope, otherwise assign to it"), and the errors stem from the fact that people assume the scope lookup for := is recursive, which would arguably result in a more complicated formal semantics.

Clearer examples of unnecessary complexity in Go would be the function-scoped nature of "defer" (implicit mutable state is much more complicated than block scoping) and the inconsistent behavior of "nil" with the built-in collections (reading from a nil map returns zero values, but reading from a nil slice panics).
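
Both are easy to demonstrate (a minimal sketch; note that the last line panics):

    package main

    import "fmt"

    func main() {
        // defer is function-scoped: all three calls run when main
        // returns (or panics), not at the end of each iteration.
        for i := 0; i < 3; i++ {
            defer fmt.Println("deferred", i)
        }

        var m map[string]int      // nil map
        fmt.Println(m["missing"]) // reading a nil map is fine: prints 0

        var s []int       // nil slice
        fmt.Println(s[0]) // indexing a nil slice panics: out of range
    }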

-----


> A formal semantics is just a way to translate from one formalism to another.

Of course, we need to reach a gentleman's agreement regarding which formalism is a good “foundation” for defining everything else. My personal preference would be to define all other formal systems in terms of rules of inference.

> It's rather obvious that choosing the target formalism determines how simple the language will appear; when you talk about a "formal semantics" you should specify which one: operational? denotational? axiomatic?

I am fine with any, as long as the same choice is made for all languages being compared. What ultimately interests me is proving a type safety theorem, that is, a precise sense in which “well typed programs don't go wrong”, so perhaps this makes a structural operational semantics more appropriate than the other choices.

> Strictly speaking a compiler or an interpreter represents a formal semantics for a language: operational semantics rules are often very similar to the code of an AST interpreter, for example.

> One could interpret your statement to mean that the smaller the compiler, the simpler the language, which means that assembly language was the simplest language all along!

Sure, but the target languages used by most compilers are often themselves very complex, which means a realistic compiler or interpreter most likely won't be a good benchmark for semantic simplicity.

-----


> Of course, we need to reach a gentleman's agreement regarding which formalism is a good “foundation” for defining everything else. My personal preference would be to define all other formal systems in terms of rules of inference.

If you are interested in defining "low cognitive load", that's a poor choice, in my opinion.

> I am fine with any, as long as the same choice is made for all languages being compared. What ultimately interests me is proving a type safety theorem, that is, a precise sense in which “well typed programs don't go wrong”, so perhaps this makes a structural operational semantics more appropriate than the other choices.

I'm not aware of any such thing: the kinds of formal semantics that academics prefer deal very poorly with the realities of finite execution speed and memory, and the kinds that practitioners use (which usually aren't referred to as "formal semantics" but rather "what does this compile to") deal very poorly with output correctness.

However, this has little to do with cognitive load: even if such a formal semantics existed, it wouldn't necessarily be easy for a human mind.

> Sure, but the target languages used by most compilers are often themselves very complex, which means a realistic compiler or interpreter most likely won't be a good benchmark for semantic simplicity.

If you agree that a formal semantics is just a translation from one formalism to another, you can't claim that formalism A is semantically more complex than formalism B without picking a formalism C as a reference point.

-----


> If you are interested in defining "low cognitive load", that's a poor choice, in my opinion.

I'm interested in “low cognitive load without sacrificing technical precision.” It's a much harder goal to achieve than “low cognitive load if we hand-wave the tricky details.”

> However, this has little to do with cognitive load: even if such a formal semantics existed, it wouldn't necessarily be easy for a human mind.

Which is exactly my point. I only consider a language simple if its formal description is simple.

> If you agree that a formal semantics is just a translation from one formalism to another, you can't claim that formalism A is semantically more complex than formalism B without picking a formalism C as a reference point.

No disagreement here. I even stated my personal choice of C.

-----


> I'm interested in “low cognitive load without sacrificing technical precision.”

You don't seem to be interested in low cognitive load at all, otherwise:

> No disagreement here. I even stated my personal choice of C.

you would have attempted to motivate your choice of reference point in terms of cognitive load. Even if induction-based mathematics were the way the human mind worked (which it isn't), it's very different from CPUs, and there is a cognitive load (and a semantic distance) in going from mathematics to CPUs.

-----


> Even if induction-based mathematics were the way the human mind worked (which it isn't)

Even if it isn't how the human mind works, it's how computing itself works. Would you take seriously a physicist who denies gravity? I wouldn't take seriously a computer scientist who denies structural induction.

-----


> it's how computing itself works

but it's not the whole story when it comes to computers.

-----
