Quark: A secure Web Browser with a Formally Verified Kernel (ucsd.edu)
296 points by hershel on Sept 8, 2013 | 78 comments


I'm wondering if we should be redesigning all software around formally proven shims.

I've been thinking a lot about what a system would look like if we redesigned it from the ground up. Nothing as extreme as throwing out the von Neumann architecture, but starting with our current hardware as the basis and reconceptualizing the OS around security and stability.

Certainly it would be more distributed, more sandboxed, and more interface-agnostic. We've been running so much legacy code for so long that the abstractions are constraining our thinking about what's possible and desirable.

Clean-sheet redesign may be both impossible and a little extreme, but it's definitely time to fundamentally re-imagine the "personal computer."


But I think the cool thing is that they did NOT do a clean sheet redesign, right? They ARE running legacy code (they didn't write a million lines of code -- just the kernel).

They are using OS processes and sandboxing to enforce the fact that the legacy code can ONLY communicate via message passing. (Is that actually a good assumption? You can break out of sandboxes.) And then you can prove that the kernel managing the IPC does not do things like leak certain types of messages between processes.
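
To make the shape of that concrete, here's a toy model in Haskell (names invented by me; nothing to do with Quark's actual Coq code) of the kind of routing property being proven about the kernel:

    newtype TabId = TabId Int deriving (Eq, Show)

    -- A network response tagged with the tab that requested it.
    data Msg = Fetched TabId String

    -- The kernel delivers to a tab only the messages addressed to it.
    -- The non-interference claim, roughly: deliverTo t depends only on
    -- tab t's own messages, never on any other tab's traffic.
    deliverTo :: TabId -> [Msg] -> [String]
    deliverTo t msgs = [payload | Fetched owner payload <- msgs, owner == t]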

I'm a big fan of running legacy code in restricted processes and using that to enforce information flow and security properties. That is a LOT more realistic than a clean-sheet redesign where you try to write everything in a different programming language and/or prove properties about millions of lines of new code.

Although I'm wondering if they are proving stuff about the "easy" part. Isn't it implementation bugs like sandboxing Flash and Java that lead to holes in Chrome?

I need to look at the details more, but I will bet that if their browser had all the features Chrome had, then its architecture with restricted processes + proven kernel would NOT have prevented most Chrome security issues, e.g.:

http://blog.chromium.org/2012/05/tale-of-two-pwnies-part-1.h...

That is, these attacks aren't about message flow or corruption in the browser kernel.


I vote yes. Formal verification has great benefits for software security and it's the "final form" (if you will pardon a redditism) of unit testing. It's one of the top skills I would like to learn.

You might be interested in L4; there's a variant that has been formally verified (but I think it's closed source?).


Formal verification allows you to prove that an implementation faithfully implements its specification. What does the formal specification for a web browser look like? I'm guessing we're talking about thousands of pages of extremely dense math and code. How do you know that that specification is correct, or reasonable, or what the user wants?

Now, maybe we don't want to do formal verification of correctness because it is too hard. So we settle for formally verifying that a program has some property of interest, like memory safety. But if you want memory safety, it is a lot cheaper to just use a language that gives you memory safety for free (Haskell/Scala/Go/Python) rather than trying to formally verify that your C++ program is memory safe.


> You might be interested in L4; there's a variant that has been formally verified (but I think it's closed source?).

You are probably thinking of seL4[1] unless there's another I don't know about. I believe you are correct that it's closed-source. That always bugged me though, since it seems to defeat the whole purpose of verifying such a critical part of a system's TCB. In my mind, the whole point is that I as a user don't need to just take anyone's word for it.

Unless I'm mistaken (and I'd love to find out that I am!), what we have with seL4 is a commercial vendor handing out an opaque binary blob and saying "we proved it's correct!" but providing no way for the user to verify that they really did prove anything. Frankly, these days I just don't trust any organization's assertions on such things no matter how thoroughly they claim they have proved it _to themselves_. It's certainly better than "we hit it with a hammer and it didn't break, most of the time!", but it still leaves open the possibility that the person asserting it's proved is lying, or that they did prove it's correct _except for the backdoor the NSA secretly coerced them into adding to the version they released_.

I wish I had the time to tackle something similar for the open-source world, but with 2 small kids around I barely have time for the small open-source projects I do manage. It's very cool to see this article's work being released in source form, and I really hope some open-source devs pick it up and run with it. I for one hope to contribute in what time I do have.

[1] http://www.ertos.nicta.com.au/research/sel4/


Is there a public repository for "proven code"? Like a GitHub specialized in that field?


Isabelle has the "Archive of Formal Proofs"

http://afp.sourceforge.net/


As a systems and PL researcher/hobbyist, there would be no greater gift you could give my field than to do this.



I think you'll have a hard time persuading existing corporations and developers that this is worth doing; formal verification currently seems to be confined to academia and safety-critical uses (say, software engineers who have been accredited by a local regulator and are working on shutdown routines for a nuclear power plant).

But perhaps if you built a formally proven shim around key pieces of software, you might be able to get someone else to adopt them and/or disrupt (usurp) an industry incumbent.


I think that's a matter of tool quality. If they're high quality tools/libraries/documentation, then it could just become another secondary skillset like writing good tests and using version control.

For motivation, talk to the folks worried about the costs of substantial failures. This is really analogous to insurance: an unpleasant but tolerable amount of overhead that gives you an upper bound on how bad a failure can be.


> but starting with our current hardware as the basis, and reconceptualizing the OS around security and stability.

NB: Do NOT visit the code repository - it's governed by a copy-left proprietary license.

This has been done by Microsoft [1], although it was pure research (there were hints of them using it for something "real" eventually, though). Essentially it was an OS written in a variant of C# with Eiffel-like contracts; static analysis was done by the OS when an application was installed.

Worth a read, as I said - make sure you keep away from the code.

[1]: http://en.wikipedia.org/wiki/Singularity_(OS)


Has anyone actually got this running yet? I'm using an Ubuntu 12 VM. I've spent about an hour trying to build this thing. The authors make it pretty clear in the INSTALL file that you're in for a fun time if things don't go according to plan. Maybe the process is seamless on Ubuntu 11.

> Quark has been tested on Ubuntu 11.04. A basic installation process is automated in ./install_module.sh file, and you can execute the installation script to install most of the required packages and compile Quark itself. If any of the required jobs fails because of some conflicts in your system, you have to open the installation script, and track down what went wrong manually. As future work, we have a plan to implement a fully functional installation script.

The installation script involves creating "tab" users (tab0-tab9), but it doesn't check whether these users exist before attempting to create them. So if the installation fails after the user-creation section, re-running the script will error out when it attempts to create users that already exist.

Here's what I did, to save you about 2 minutes of brainpower:

    # Create the per-tab sandbox users only if they don't already exist,
    # so re-running the install script is safe. (execcomm is the command
    # runner the install script itself defines.)
    for i in {0..9}
    do
        if id -u tab$i >/dev/null 2>&1
        then
            echo "user 'tab$i' already exists"
        else
            echo "creating user tab$i"
            execcomm "sudo useradd tab$i"
        fi
    done

    # Same idempotent check for the dedicated 'output' user.
    if id -u output >/dev/null 2>&1
    then
        echo "user 'output' already exists"
    else
        execcomm "sudo useradd output"
    fi
As for completing the rest of the install, you're on your own, as I was unable to get things working. The install script attempts to cd into some python-browser-8 directory, which is supposed to have a Makefile, but I never saw it created or even an attempt to create it.


There is a Makefile in python-browser-8/.


I did not see that directory at all. I tried to grep to see if some process creates the directory, but I couldn't find it.


How long do you think it's going to take before formal proof of programs becomes the new quality standard for commercial applications?

Today the focus is on unit testing and test code coverage, but formal proof seems even better.

I've only heard about Coq, which I think implies the code has to be written in OCaml, but I suppose the same kind of tool exists / could be built for Haskell or Scala. What about less strict and more widely used languages like C or Java?


How do you formally prove that a GUI is "usable"? How do you formally prove that a game is "fun"? How do you even know what you should be formally proving?

Formal proofs (and type systems too) can only prove that your programs and formal specifications are internally consistent. By far the largest set of failures in commercial software development is in requirements elicitation!

"Mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true." - Bertrand Russell

There is a very high initial and maintenance cost to proofs, and it would be a mistake to insist that people pay that cost when better cost/benefit trade-offs exist for most needs.


I think the OP is asking about when what you mentioned in your second paragraph is going to become the industry standard. Nobody's going to be using formal verification to prove that something is usable or fun.

Automated proof checking and derivation systems have been under development and research for a long time now. It's almost purely an engineering problem to introduce them in a way that commercial companies can use.

The aerospace industry (e.g. Boeing) has been doing this for a long time, and they make it work quite well for them.


> Boeing) has been doing this for a long time

Sure, and Microsoft has been using formal methods to eliminate all manner of security holes since at least the XP SP2 days. I didn't say that formal methods were always a bad trade-off. I said that they are very expensive, and implied that they are a bad default trade-off for general-purpose tools to make.


Could you point to documents explaining proof checking in the aerospace industry?

I am especially interested in the history leading to software verification in avionics, and in the involvement of insurance companies and government regulation.

thx


I actually cannot, offhand. I speak from meeting folks in the industry who actually do the proof checking for aeronautical software. Unfortunately, some dude's word on the Internet probably isn't very convincing. I'm sure Google would pull something up, though.


I think the idea here is to guard against crashes and security violations, not whether software is able to accomplish some goal or not.


I think it's going to be a long time coming, since the negative effects of mostly-working-but-not-quite software tend to be borne by the purchaser and not the creator, which means the extra expense of formal verification is not commercially viable when compared to just having a rigorous (or not-so-rigorous) testing regime.

If we were able to get rid of the 'THIS SOFTWARE IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED' thing that graces virtually every software license agreement, so that software publishers were actually liable to some extent for the software they produced, that would help to improve things. I don't see that happening any time soon though.


OK, but it looks more like a chicken-and-egg problem. Think about this: let's say I create a Java/Scala/whatever web framework that is even just "30% formally proven" (I know that's pretty vague, but bear with me). Don't you think it would be such a competitive advantage in the corporate world that it would instantly be a commercial hit?

Then once one popular library or framework starts doing it, we all know the rest will follow and the trend will be set.


Three points:

1. This is one of those things where you want it to be all-or-nothing. "30% formally proved" is completely useless; it just means that the inevitable bug or security hole is in the other 70%.

2. You're limited to writing constructs that can be formally verified. (My understanding is that this means that it's a bit more difficult than "let's formally verify Ruby and then run all the Ruby programs on our verified Ruby"; some programs will need to be rewritten and others will simply not be verifiable.)

3. We still have to solve the politics problems that already are pervasive in some of these environments (e.g. people who intentionally leave easy bugs in their code and then fix them later so that they look like they made more changes as measured by a KLOC count).


I want to qualify your (1) a bit: there's actually enormous value in "partially proved" systems in a sense. At the very least, that 30% is something you won't have to think about ever again, so long as the spec it meets works for what you need.

But what I really wanted to talk about is the exciting stuff going on in languages like Idris, where programming is paramount and proving is just there for support. In a situation like this, your goal is less to arrive at a 100% formally proven system than to steal 90% of the power of a proof assistant to build more maintainable, powerful code. In this case, you're quite likely to prove only 30% of your code, the skeleton perhaps, and leave the rest. Often, however, the result is that the remaining 70% of the code is so restricted by your design that there are only a handful of ways to continue, and they're likely all the right behavior.

A great example of this is Ur/Web, which uses just a little bit of dependent typing to ensure that the names in your forms, domain, and databases all match up. It means you design your entire system around those names (your domain) and prove it to be formally consistent. Then the remaining design space is tiny and it's easy to write a good program.

... Or even to augment that skeleton with a security policy and have static flow control analysis "for free".

(That said, Ur/Web is a prototype system with minimal documentation, so I won't recommend it as an actual "system of the future"... just a glimpse.)


Coq is implemented in OCaml and resembles it, but actually forms its own completely independent language. Others in the same vein are Agda, Idris, Epigram, and Isabelle. Each one shares more similarities with the others than with its "host" language (if one exists), of which they're usually syntactic cousins.

That said, each of them is far more similar to OCaml/Haskell than say, C.

They are all based, to some degree or another, on Per Martin-Löf's Intuitionistic Type Theory, which is a wonderful unification of mathematical logic and straightforward functional programming in the same vein as the Curry-Howard Isomorphism.

Generally, much of the forward research is aimed at making the proofs more automatic and easier to use. The problem is wildly intractable by brute force, so much cleverer tricks must be employed. Some of these are quite wonderful and suggest new ways of programming as a dialogue between a human and an intelligent compiler that I really hope become more common in the future, though the industry will be unlikely to drive it. See the literature on Epigram for some talk about this.

There's also an important distinction between proof systems and "dependently typed languages": proof systems restrict their power greatly so as to ensure that the things proved in them are mathematically consistent, while dependently typed languages, which are more "practical", seek only to provide proof-like heuristic support for programming. In the former, you'll likely see a complex way of describing group theory in the language of program types, where the program itself is irrelevant because its mere existence is sufficient for logical purposes. In the latter, you'll see a strong synthesis of a program and its internal logic as expressed by its types, which ensure that invalid programs are wildly difficult to write: you'd likely have to abuse some crazy flaw in intuitionistic logic to do so.

The most canonical example is the fixed-length vector, where you can write things like "append takes two vectors and results in one whose length is the sum of their lengths" and thus statically rule out any out-of-bounds errors in your program. It's also incredibly useful for getting nice properties like "this vector-of-vectors representation of a matrix is not ragged" confirmed at compile time... something that, even in an advanced non-dependently typed language like Haskell or OCaml, is (mostly) strictly in the domain of runtime checking.
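
Here's a minimal sketch of that canonical example, approximated in Haskell with GHC's DataKinds/GADTs extensions (a real dependently typed language expresses this more directly):

    {-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies #-}

    -- Peano naturals, promoted to the type level by DataKinds.
    data Nat = Z | S Nat

    -- Type-level addition, so lengths can be summed in types.
    type family Add (n :: Nat) (m :: Nat) :: Nat where
      Add 'Z     m = m
      Add ('S n) m = 'S (Add n m)

    -- A list whose length is part of its type.
    data Vec (n :: Nat) a where
      Nil  :: Vec 'Z a
      Cons :: a -> Vec n a -> Vec ('S n) a

    -- The type is the theorem: the result's length is the sum of the inputs'.
    append :: Vec n a -> Vec m a -> Vec (Add n m) a
    append Nil         ys = ys
    append (Cons x xs) ys = Cons x (append xs ys)

    -- head is total here: it cannot even be applied to an empty vector.
    safeHead :: Vec ('S n) a -> a
    safeHead (Cons x _) = x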


Neither Coq nor Isabelle is based on Martin-Löf Type Theory (in a narrow sense). Coq is based on a variant of the calculus of constructions (with (co)inductive definitions), and Isabelle is an LCF-style prover framework that you can instantiate with various different logics. The most widely used variant is Isabelle/HOL, which implements a variant of Church's theory of types. To implement other logics, Isabelle uses intuitionistic second-order logic based on lambda calculus, without numbers or recursive data types. Idris has Type:Type, so it is unsound as a logic. I'm not sure exactly what Epigram uses, but I assume it's somehow based on Luo's ECC. Only Agda is based on Martin-Löf Type Theory.


Yeah... This is correct as far as I know though a bit out of my complete technical grasp. To my understanding MLTT was a fairly central topic and it helps to distinguish MLTT from just type theory since plain-old-TT might be harder to distinguish as something interesting above `(int *)`.

But, to solidify: mafribe is much more correct than my comment, and there is, as far as I'm aware, a much larger menagerie of other type theories. The recent Homotopy Type Theory publication (which was on HN as notable due to its publication process) was yet another step in this direction.


Per Martin-Löf was instrumental in extending type theory to dependent types, although the first version of his dependent type theory was impredicative with Type:Type and was shown to be inconsistent by J.-Y. Girard. Subsequent versions of Martin-Löf type theory have been predicative, but other dependent type theories (such as the calculus of constructions) stuck with (more benign forms of) impredicativity.


Thanks for the theoretical background. I just had a look at the Curry-Howard Isomorphism article on Wikipedia, and I finally understood what my OCaml teacher muttered once in the classroom 10 years ago :))

The part I didn't get in the equivalence between program and proof is that a function can be seen as a proof that, provided the input parameters are of the given types, the output is going to be of the stated type. Quite simple in fact, but it only clicked just now :)
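
To check my understanding in Haskell terms (a toy illustration, reading types as propositions and total functions as proofs of them):

    -- "A and B implies A": projecting from a pair is conjunction elimination.
    fstProof :: (a, b) -> a
    fstProof (x, _) = x

    -- "If A implies B, and A holds, then B holds": application is modus ponens.
    modusPonens :: (a -> b) -> a -> b
    modusPonens f x = f x

    -- "A implies (B implies A)": the K axiom of propositional logic.
    kAxiom :: a -> b -> a
    kAxiom x _ = x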


I feel many people here miss the point: it's not

  edit edit edit ./src/Browser
  maybe annotate it with something
  ./verify some-property ./src/Browser.ml && echo 'seems OK, ship it!'
It is

  coq proof.coq > ./src/Browser.ml
The source code is just a side effect of proving some theorem.

> How long do you think it's going to take before formal proof of programs becomes the new quality standard for commercial applications?

Long enough


Thanks, I didn't realize that.


You should probably ask the people who've been promoting these ideas since at least the 70s :) "No time soon" is my best bet.

As I understand it, creating a proof gets disproportionately harder as program size grows. You'll notice that in this project, the proof covers only a few hundred lines of code. That's not because their program just happened to be a few hundred lines long, but because they designed it to make a proof practical.


The theory that was promoted in the 70s (whatever it was) was certainly vastly different from what's used nowadays. Thierry Coquand (who conceived the original theory behind Coq) was only 9 years old in 1970.


There are various tools in the research world, but little real-world use.

Isabelle/HOL - a competitor to Coq, much as GCC is to LLVM (http://www.cl.cam.ac.uk/research/hvg/Isabelle/)

ATS - a C-like language with an integrated proof assistant (http://www.ats-lang.org/)

KeY - Java with verification annotations (http://www.key-project.org/)


Interestingly, HOL Light is being used to prove that the density of a packing of congruent spheres in three dimensions is never greater than pi/sqrt(18) (the Kepler conjecture, a 400-year-old problem):

https://code.google.com/p/flyspeck/wiki/FlyspeckFactSheet

300,000 lines of code, still unfinished.


> What about less strict and more used languages like C or Java?

There already exists a system for verifying C: Astrée (http://www.absint.com/astree/index.htm). Airbus, Volkswagen, and similar companies use Astrée (they are, as I understand it, required to by EU law) to verify safety-critical software in their airplanes and cars. Astrée does place some large limitations on the C code it can verify, though; for example, no dynamically allocated memory and no recursion.

Investigating why it has these limitations is a good way to begin understanding the current limits of formal reasoning about computer programs, and why C is so terrible to assure...


Dynamically allocated memory is difficult to verify automatically. There are some tools based on separation logic (e.g. Smallfoot, HIP/SLEEK, jStar) that are able to verify C programs (or C-like programs). Current developments in this area are focusing on automatically inferring specifications (also known as abductive reasoning). Moreover, automatically verifying concurrent programs is also difficult and a hot research topic at the moment.

If I am not mistaken, Smallfoot is the basis for a professional product being developed by Monoidics (http://www.monoidics.com). Interestingly, the company was recently acquired by Facebook :)

Update: Here's a link with a list of related tools: http://www0.cs.ucl.ac.uk/staff/p.ohearn/Invader/Invader/Frie...


See also Frama-C (http://frama-c.com/), which can use multiple automated and interactive theorem provers. It's really a proof obligation management IDE that you can use with multiple provers / proof assistants.


For imperative languages, the most popular approach to formal proof is to add partial correctness annotations in an axiomatic semantics such as Hoare logic. I've heard that Microsoft does this to verify many properties of the Windows kernel. (I think the most popular tools are 3rd-party and proprietary at the moment.) Of course, finding the right theorems to write down is not easy either.
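
For a flavor of the notation (a textbook example, not whatever Microsoft's tools actually look like): a Hoare triple wraps a command in a precondition and a postcondition, and the proof obligation is that any execution started in a state satisfying the first ends in a state satisfying the second:

    { x = n  ∧  n ≥ 0 }
        y := x * x
    { y = n² }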

There's currently an attempt to formalize a subset of C, with an axiomatic semantics, all the way from specification down to machine language: http://vst.cs.princeton.edu/


Formal proofs predate how we think about unit testing (check out IBM's 'Cleanroom' process). The concept comes from an age when most programming was a lot more like applied math, and it still has a lot of traction in industries with very high reliability/correctness requirements. Unit testing took over because it's easier and cheaper to do, and the results are good enough for most environments.


It's not feasible for large programs in any generally used language, especially imperative ones. This is because most languages have undefined constructs and constructs that are so flexible that proving useful things about them is hard.

Even if languages designed to abet formal proofs, like SPARK, were commonly used, proving anything but the most trivial properties requires a completely different mindset and significant additional training and experience. This puts formal proofs outside the abilities of the average engineer (including myself).

Finally, useful properties like termination (lack of infinite loops/recursion), let alone termination within a time bound, or correctness with respect to non-trivial requirements, are extremely difficult to prove even for those with extensive experience in the area.

In addition to the difficulty of creating proofs, it is amazingly difficult to state non-trivial requirements formally and to verify that the stated requirements are the right ones: often the sources of the requirements (customers/users, supervisors, marketing) have neither the motivation nor the interest to state them formally or to review formally stated requirements.


I think never. Between the "ship early" ethos and SaaS becoming increasingly popular, software quality has been going downhill for quite a while. Most parties seem to be OK with it, so I expect it to get even worse.


> Most parties seem to be OK with it, so I expect it to get even worse.

Are they OK with it, or do they just not see that they have a viable alternative at present?

I can think of numerous cases where from a personal and/or professional point of view I would have happily spent real money on upgraded/alternative software to what I've got if it fixed bugs that waste my time or make the results I get worse than they should be. Obviously some people will just rip off software whatever you do, but for paying customers, I'd be very surprised if quality alone couldn't drive a significant movement in a market, other things being equal.

I think it's all the unrelated things that aren't equal that are holding back that kind of competition. An interesting question is therefore at what point the willingness to use good software to develop more good software could cost less than putting up with poor-quality incumbents, given that the real cost of both strategies is high. Even as a glass-half-full kind of guy, believing that a relatively small part of the industry could begin to pull that off without requiring the entire mainstream to shift, it would still have to involve far, far more people than are involved at the moment if we're going to create a sufficiently comprehensive foundation of development tools and essential libraries to bootstrap a whole quality-first ecosystem. The good news is that once you establish that foundation, everything you do afterwards is easier; the quality-first world has that advantage over the quick-and-dirty status quo, so momentum is on your side.


These two concerns aren't separate. You can publish agile software frameworks like Rails with assurance guarantees based on proofs. If programmers play within the bounds, they retain various proof-grade guarantees about the software they write.

Just make better tools for Chrissakes.


I view "ship early" (and iterate!) as a superior verification strategy than formal methods for most typical software. Most errors occur much earlier in the pipeline than the authoring of the first line of code.


> Between the "ship early" ethos and SaaS becoming increasingly popular, software quality has been going downhill for quite a while

Do you have a citation for this? Having spent plenty of time working around bugs in the older era of software development which took years to fix, I think your statement reflects nostalgia or inexperience. Those long, deliberative development cycles weren't some bygone golden age – and any codebase from that era has the thicket of "#ifdef BROKEN_SUNOS_FEATURE" blocks to prove it.


I think we're more likely to see formally verified software development frameworks... something like a "verified Rails"... before we see programmers using proof checking in the general case. Basically, it'll be model checking built into a framework, with a tool that tells you whether your code plays well / is safe according to the model or not.

Having programmers use proof checkers to write proofs about their own software and then prove them is still very far away, if it'll ever happen at all. One can always dream happy dreams about the Curry-Howard Isomorphism, though.


You're thinking something like Ur/Web?


Oh cool! I didn't know about this. Thanks for sharing. I'm going to read into it now :).


Ur/Web has nothing to do with formal verification, does it?


Ur/Web uses dependent types (Ur) to build statically checked web libraries that prevent common bugs such as malformed or stale links. The exciting part (IMO) is that the 'row types' allow encoding your database schema in the type system, so it prevents bugs common to frameworks with heavy code generation, such as the ORM in Rails or template libraries in C++.

The connection to formal verification is that dependent types (depending on how you do them) let you stuff custom computations into the type checking phase. The two coolest uses of dependent types (IMO) are to verify your code and, surprisingly, synthesize boilerplate.
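
As a rough analogue of those row types (sketched here in Haskell with invented names; Ur's actual row types are considerably richer), the trick is to carry the schema in a phantom type so mismatches become compile-time errors:

    {-# LANGUAGE DataKinds, KindSignatures, TypeOperators #-}

    import GHC.TypeLits (Symbol)

    -- A table and a query each carry, as a phantom type, the column
    -- names they assume.
    newtype Table (cols :: [Symbol]) = Table String
    newtype Query (cols :: [Symbol]) = Query String

    users :: Table '["id", "name"]
    users = Table "users"

    selectName :: Query '["id", "name"]
    selectName = Query "SELECT id, name FROM users"

    -- The schemas must unify, or the program doesn't compile.
    runOn :: Table cols -> Query cols -> IO ()
    runOn (Table t) (Query q) = putStrLn (q ++ "   -- against " ++ t)

So `runOn users selectName` type-checks, while a query written against a different schema is rejected before the program ever runs.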

(Btw, the guy making Ur/Web is quite important in the Coq community and Ur is one of his research vehicles for these ideas.)


Not an expert, but:

Currently formal verification is complicated and expensive, and can only be used to verify small amounts of code. And if you look at the places where it's highly valuable, i.e. security, hardware design, and critical/medical systems, I don't see wide deployment there either. So, not good signs.

And I've heard of formal verification tools for C#, Ada, and C/C++.


Look at aerospace and aeronautics. It's becoming par for the course there.


I don't know if formal code verification is much like hardware verification, but modern hardware like microprocessors and larger FPGA designs does undergo verification... and it is so computationally expensive that the key exercise becomes selecting the most important 0.1% of coverage.

So, if they are similar pursuits, I can't help but imagine complete formal verification of large modern codebases would also be essentially computationally impossible.


I have been told that this is only the case in America (and I guess Asia). In Europe, there are laws that require the use of safety-validation tools and worst-case-execution-time tools on code that will be used in safety-critical applications, for example flight control, process control, etc.


Safety verification tools - like static analysis / timing analysis - are different from full formal verification. Those tools run on their own, don't need a complex definition of what to verify, and catch only a small class of bugs.


A long time, because it's been around "forever" and yet has gained only very little traction, because it is hard and for most things the lack of verification is "good enough".

What you may see more of is the approach taken in this project: Formally prove a tiny kernel that mediates system access, and sandbox everything else.

Though arguably, we're on our way to making that much easier through increasing support for varying levels of virtualization and finer-grained authorisation (capabilities etc.), which means this approach will be able to yield benefits even for software without a formal proof of the part that mediates system access.


I don't really like that a "secure" browser runs 'xhost +' (which disables X11 access control entirely) on my computer.


The seL4 operating system microkernel also aims to be formally correct. http://www.ertos.nicta.com.au/research/l4.verified/


How does this secure approach compare with Bromium's micro-hypervisor one?


Don't know. Bromium is interesting. Seems pretty similar to Qubes OS.

BTW, Qubes OS is freely available, and I think it is partly open source.


I initially thought that you were trolling, because Bromium sounds insincere, but it appears to be another startup funded by Andreessen Horowitz (http://www.bromium.com/), and it's not a browser you've linked to; it's a hardware-software-policy solution for enterprises. Still interesting to know about.


Trolling? No. I researched this market for my company and found Bromium's virtualization approach "unique", and every new product following an innovative approach is interesting.

Currently there are many different approaches to virtualization, and I am bullish on application-oriented approaches like App-V and ThinApp.


How does this work, if the kernel is supposed to be Turing-complete?


1) The halting problem only states that you cannot create a program that decides, for every program, whether it will finish. It is possible for special cases, though.

2) They didn't automatically verify the kernel. Coq is a proof assistant.


The halting problem is also decidable for finite-memory machines, like our real ones, so ...
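
A sketch of why (toy Haskell, not a practical tool): simulate a deterministic, input-free machine and watch for a repeated state; over a finite state space, every non-halting run must eventually repeat one.

    import qualified Data.Set as Set

    -- step returns Nothing when the machine halts, Just the next state
    -- otherwise. A deterministic run that revisits a state loops forever.
    halts :: Ord s => (s -> Maybe s) -> s -> Bool
    halts step = go Set.empty
      where
        go seen s
          | s `Set.member` seen = False            -- loop detected
          | otherwise = case step s of
              Nothing -> True                      -- halted
              Just s' -> go (Set.insert s seen) s'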


Yes, but our memory is so big and our machines can have so many states that the problem is still intractable in the general case.


But still very much solvable in the specific domains of real world applications.


You know what, I think I agree with you. I would actually go as far as to posit that most common applications don't require a Turing-complete environment to run. If you could strip away the trouble spots and limit yourself to what you really need, you could get a lot back in terms of software checking and assurance.



That's awesome. Thanks for sharing. It's nice to know that people a lot smarter than me are thinking about this already :).


I may be wrong, but AFAICT, the kernel in question is basically an API layer between the real browser and the OS.

The point of the exercise is to prove that something like a file read cannot be invoked with bad files, cannot cause buffer overflows, etc.

That doesn't require Turing completeness.
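
Something in the spirit of this toy sketch (invented names, far simpler than Quark's real kernel), where the property to prove is just that the guard can never be bypassed:

    -- The kernel is the only process with filesystem access, and it
    -- honors read requests only for an explicit whitelist, so a
    -- compromised tab can't ask it to read, say, your SSH keys.
    allowedPaths :: [FilePath]
    allowedPaths = ["/opt/quark/cookies.db", "/opt/quark/cache.db"]

    kernelRead :: FilePath -> IO (Maybe String)
    kernelRead path
      | path `elem` allowedPaths = Just <$> readFile path
      | otherwise                = return Nothing    -- request refused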


There is a typo in the paper: both 'Response Integrity' and 'Tab Non-Interference' are called the 'second property'.


Does it support all the latest CSS3?


Yes.

Happy now?


Is this fully headless, like PhantomJS?



