P: A programming language for asynchrony, fault-tolerance and uncertainty (microsoft.com)
266 points by JulienRbrt 212 days ago | 40 comments

It's worth noting that this isn't just a research language -- it's used in practice for writing drivers:

P got its start in Microsoft software development when it was used to ship the USB 3.0 drivers in Windows 8.1 and Windows Phone. These drivers handle one of the most important peripherals in the Windows ecosystem and run on hundreds of millions of devices today. P enabled the detection and debugging of hundreds of race conditions and Heisenbugs early on in the design of the drivers, and is now extensively used for driver development in Windows.

This looks SO cool! Kind of like Erlang but at a different level, so you can model both inter-node systems (like Erlang) but also intra-node systems (like USB3 drivers)!

As someone who has been thinking a lot recently about SPARK/Ada and "free" formal verification, can anyone tell me how this language compares on that front? If I write a P program, do I get the ability to "prove" its correctness?

The docs are sparse... but it looks like the answer is that it depends on what you mean by "correct." It doesn't look like it's a general purpose theorem prover, but it does appear to be model/spec-driven and the linked article says that the system can prove safety and liveness. I'm guessing they mean "type safety" but they may be using the term in the broader distributed system sense. In any case, it does look like it gives you certain "proofs for free" if your definition of correct is "eventually converges on a consistent state." I don't think you can prove arbitrary properties like you can with Coq or with a dependent type system. Still, very cool.

You can if you want to go that 'far'. It's pretty easy to add full SMT support if you want (via Z3). Out of the box, it's not required. You get existential/universal quantification, conjunction and disjunction as your dyads, and invariance properties out of the box.

I'm guessing Lamport chose not to go the 'fully dependent' Coq/Agda-style route (based on his presentation, at least) because, as he said in the intro lecture, engineers don't really want formal verification at that level. They still want strict invariants of their system to be ensured, s.t. you can be guaranteed your program will never enter an ill-defined state and proceed to get undefined behavior (or 'implementation-specific' behavior, ugh) as a result.

What really drove it home was reading the paper Lamport referenced, "How Amazon Web Services Uses Formal Methods" (Newcombe et al., 2015), along with his anecdote of a NASA RTOS shrinking its KLOC by 10x while getting stronger guarantees in the process (see "Formal Verification of a Network-Centric RTOS", Verhulst et al., 2011).

It's also got more than a few similarities with "Fortress". Guy Steele really pushed this programming model while at Sun, but sadly DARPA killed the funding for it. That being said, Amazon has used this on more than 14 of their AWS programs, so it seems to be accessible and useful enough to the average engineer rather than to 3 post-docs locked up in their academic ivory tower, leaving twice a year only to present at ICFP and POPL.

At the 'formal level', I think the closest analogue to the strength of your guarantee is either Ada/SPARK or QuickCheck. Certainly way stronger than your standard "oh hey, I wrote a unit test in an untyped language, I'm good to go!" (i.e., declare your invariants and it'll make sure your system never reaches a state you don't want).

Worth checking out if only because, hey, Lamport, Paxos. Have a look at Newcombe 2015. It's only 8 pages; throw it on the Kindle or iPad, grab a coffee and have a go. What've you got to lose, right?

Lamport isn't behind the P language, and it isn't as expressive as TLA+ so it isn't a replacement for his work in that area.

"Safety" in a verification context generally refers to safety properties, i.e., properties that can be shown to not hold with a finite counter example (a test that hits a bug). Liveness properties in comparison are those that need an infinite counter example, i.e., a way to make the bad thing happen infinitely often.
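To make the finite-counterexample point concrete, here's a toy sketch (mine, not P's actual machinery) of checking a safety property: the violation, if any, is always witnessed by a finite prefix of the trace.

```python
# Toy illustration of safety checking over finite traces.
# All names here are invented for the example, not from P.

def violates_safety(trace, invariant):
    """A safety violation is witnessed by a finite prefix:
    the first state where the invariant fails refutes the property."""
    for i, state in enumerate(trace):
        if not invariant(state):
            return trace[:i + 1]  # the finite counterexample
    return None

# Example safety property: "the counter never exceeds 3".
trace = [0, 1, 2, 3, 4, 3]
print(violates_safety(trace, lambda s: s <= 3))  # [0, 1, 2, 3, 4]
```

A liveness property ("the counter eventually resets") can't be refuted this way: no finite prefix proves the good thing never happens, which is why liveness needs the infinite (usually lasso-shaped) counterexamples described above.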

Honest question: what is so complicated about writing device drivers, whose sole purpose typically is to transport data from input to output, and vice versa?

Also, shouldn't drivers be sandboxed to prevent stability problems from affecting the rest of the system?

>what is so complicated about writing device drivers, whose sole purpose typically is to transport data from input to output, and vice versa

I don't think it's the device driver part that makes it complicated. It's implementing something that is a collection of interdependent state machines in an asynchronous fashion, and doing that in a way that performs well and is reliable and safe.

In their specific example of a USB driver, here's what's in the paper:

"It receives a large number of un-coordinated events sent from different sources such as OS, hardware and other drivers, in tricky situations when the system is suspending or powering down. It can receive unexpected events from disabled or stopped devices, noncompliant hardware and buggy drivers. The hub driver can fail requests from incorrect hardware or buggy function drivers. However, it is important that the USB hub itself handles all events and does not crash or hang itself."

But it doesn't have to be a device driver. They give the example of elevator control in the paper as another collection of state machines that interact.
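The "collection of interacting state machines" idea can be sketched in a few lines. This is loosely inspired by the paper's elevator example; the event names and structure are my own invention, not P syntax.

```python
# Minimal sketch of communicating state machines with mailboxes.
# Names ("Door", "Elevator", event strings) are made up for illustration.
from collections import deque

class Machine:
    def __init__(self):
        self.inbox = deque()
        self.state = "Init"
    def send(self, event):
        self.inbox.append(event)

class Door(Machine):
    def step(self, elevator):
        if not self.inbox:
            return
        ev = self.inbox.popleft()
        if ev == "open" and self.state != "Open":
            self.state = "Open"
            elevator.send("door_opened")
        elif ev == "close" and self.state == "Open":
            self.state = "Closed"
            elevator.send("door_closed")

class Elevator(Machine):
    def step(self, door):
        if not self.inbox:
            return
        ev = self.inbox.popleft()
        if ev == "arrived":
            door.send("open")
        elif ev == "door_opened":
            self.state = "Boarding"

door, elevator = Door(), Elevator()
elevator.send("arrived")
# Run until all mailboxes drain. A real system interleaves these steps
# nondeterministically, which is exactly what makes it hard to test by hand
# and why P model-checks the interleavings instead.
while elevator.inbox or door.inbox:
    elevator.step(door)
    door.step(elevator)
print(elevator.state, door.state)  # Boarding Open
```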

>shouldn't drivers be sandboxed to prevent stability problems

That would be a microkernel, which has its own set of problems. There are ways to implement user-mode drivers for Windows, Linux, etc., but they are currently limited in functionality and performance.

>what is so complicated about writing device drivers, whose sole purpose typically is to transport data from input to output, and vice versa?

What you ask is valid, and very true, but for cable construction, not driver writing.

There are tons of protocols and issues to handle when designing a device driver. Negotiate device capabilities. Handle hot plugging and unplugging. Handle intermittent state changes in the device. Handle buffering. Handle different versions of the protocol (or multiple protocols from the same bus). Handle throttling. Initialize and configure the device correctly (and infinite variations of devices from different vendors with perhaps slight or big inconsistencies in their implementation).

And of course device drivers are not just about I/O.

Add discovery, error handling, and system resource allocation (DMA, memory, interrupts, pins, clocks and timers).

>>> Honest question: what is so complicated about writing device drivers, whose sole purpose typically is to transport data from input to output, and vice versa?

Transporting billions of pieces of data per second from multiple inputs and to multiple outputs, on the fly in real time, within the processing and performance limits of all the different pieces of hardware involved, on a shared system.

>>> Also, shouldn't drivers be sandboxed to prevent stability problems from affecting the rest of the system?

You cannot sandbox a driver; the driver sandboxes you.

If it helps, think of the drivers as a sandbox around your app. They sandbox your app's access to the display, hard disk, keyboard...

In my day job, I work in a company that is into generation of device drivers from a high level specification - so I can talk forever on this topic! But other comments here are already spot on - and I don't have much to add.

But the question assumes(?) that the P language is meant for writing device drivers - and IMHO that assumption is not entirely true, for the language doesn't have any constructs for capturing the software's (the "physical device driver's" [1]) interaction with hardware.

For instance, I don't see a way to capture:

1. Software-programmable registers (MMIO, or over PCIe, I2C, etc.)

2. The software's contribution to DMA by setting up descriptors and buffers correctly and efficiently

3. Interrupts that the hardware can raise and how software should service them

4. Sequences to negotiate device capabilities and configuration, reset, perform transmit/receive, etc.
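For a feel of what "software programmable registers" means in practice, here is a small sketch of the mask-and-shift bookkeeping a driver does on a control register. The register layout and field names below are invented for illustration, not from any real device.

```python
# Hypothetical control register layout (invented for this example):
#   bit 0     : ENABLE
#   bit 1     : RESET
#   bits 5:4  : SPEED (2-bit field)
CTRL_ENABLE      = 1 << 0
CTRL_RESET       = 1 << 1
CTRL_SPEED_SHIFT = 4
CTRL_SPEED_MASK  = 0b11 << CTRL_SPEED_SHIFT

def set_speed(ctrl_value, speed):
    """Read-modify-write of the SPEED field, leaving other bits alone."""
    ctrl_value &= ~CTRL_SPEED_MASK
    return ctrl_value | ((speed << CTRL_SPEED_SHIFT) & CTRL_SPEED_MASK)

print(bin(set_speed(CTRL_ENABLE, 0b10)))  # 0b100001
```

None of this field-level detail has a home in P's machine/event model, which is the commenter's point: P captures the protocol between machines, not the bit layout of the hardware interface.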

Many other comments here touch upon these and other aspects as being characteristic of "device drivers"; they are missing altogether from the P language (I read the manual, didn't check the source, but I'm quite sure).

There is a "higher" driver: a logical device driver (LDD, [1]). An implementation of an LDD is not specific to a particular vendor's hardware; it remains the same across all devices in a certain category. So an LDD is usually supplied/maintained by the OS vendor. For those who are aware of the WinCE architecture, the LDD is called an MDD ([2]).

What the P language does seem to enable is specification of an LDD, as long as it lends itself to the "message passing machines" paradigm (and in that respect, an LDD is no more special to the P language than any other software that can be modeled using "message passing machines").


[1] https://en.wikipedia.org/wiki/Device_driver - distinction between PDD and LDD

[2] https://msdn.microsoft.com/en-us/library/aa447512.aspx

Short answer: shared resource synchronization.

Long answer:

You have many devices connected to many different buses, some of those buses having shared resources allowing bus mastering and Direct Memory Access, and those devices can fail in the middle of a long operation, and the OS has to handle/recover from those situations.

E.g., Ethernet network hardware is very simple; however, depending on the bus being used for communicating with the CPU, the driver architecture will be radically different.

So, until I/O buses get unified, always working in packet mode, drivers will continue being a big mess.
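The "shared resource synchronization" problem in the short answer above comes down to non-atomic read-modify-write. Here's a deterministic simulation (no real hardware or threads; the interleaving point is forced with a generator) of two contexts losing an update to one shared "device register":

```python
# Sketch of a lost update on a shared register: two contexts each do a
# read-modify-write, and an interruption between the read and the write
# loses one of the updates. Everything here is simulated for illustration.
register = 0b0000

def set_bit(bit):
    """Non-atomic read-modify-write, as on a real MMIO register."""
    global register
    value = register               # read
    yield                          # interrupt / other bus master runs here
    register = value | (1 << bit)  # write back a stale value

# Interleave two writers at the worst possible point:
a, b = set_bit(0), set_bit(1)
next(a); next(b)                   # both read register == 0
for g in (a, b):
    try:
        next(g)                    # each writes back its stale copy
    except StopIteration:
        pass
print(bin(register))               # 0b10 -- bit 0's update was lost
```

On real hardware the interleaving comes from interrupts, DMA, and other bus masters rather than a `yield`, and it happens rarely enough to be a classic Heisenbug, which is exactly the class of bug the article says P was built to flush out.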

Imagine that you had an oracle that you could feed code, and it would hand you back all possible bugs and the probability of each bug manifesting in the real world. A "bug" here is some arbitrarily-long execution trace of events that results in an undesirable outcome, probably a crash. The "probability" here is drawn from the distribution of "real world events", which, along with magically finding all bugs, is another way the oracle is magic, since you don't normally have access to that distribution.

First, of course, you turn yourself into a multi-billionaire. But once you're bored with that, let's feed it the code to a device driver. Assuming that this is a driver that has had some decent work put into it and that you aren't getting something degenerate like a 100% chance of hitting a certain bug, you probably get back a list of bugs that almost certainly follows a power law distribution. If it's a good driver, the most likely bug may only be one in a trillion, but after that first bug it probably trails off for quite some time. For instance, you may find a bug that only applies once the 2^64-1th byte comes in AND you get unlucky on a scheduling operation or something. There's often a lot of these things even in incredibly well-tested code, such as: http://envisage-project.eu/proving-android-java-and-python-s...

Now suppose we characterize the challenge of writing a bit of code as A: the difficulty of pushing the probability of the bugs down, B: the impact of failure, and C: how many rolls of the dice the real world takes at the code. Easy-to-write code is code where it's easy to push the probability of failure down below what I care about on the real-world instances that I care about, and the impact of failure is not that big a deal. So the easiest code to write by this metric would be something like a one-off string-massaging script written in bash, where I'm going to examine the results by eye and fix them up anyhow. It's easy to get it right enough, failure just means I'm going to manually fix it up a bit, and the world only takes one shot at my code anyhow.

Why are device drivers hard? They are hard to write correctly enough to push the probability of bugs down, because they are running in a very constrained, yet highly-privileged environment, and often in an environment where it's very hard to test at all, let alone exhaustively test the state space of all possible states the hardware may be in and all transitions it might like to make next. Device drivers can, in the worst case, potentially brick your device if they fail, but can certainly crash it or a major subsystem and cause massive data loss on the local system. And the real kicker is that a device driver is going to be exposed to the real world on millions upon millions of systems, potentially millions of times per second, for years and years on end. It's a situation where a sensible, sane programmer who might otherwise eschew all "that academic bullshit" might just find themselves reaching for formal verification tools.

Another example of something that is harder than you may think is "cloud code". Not code that uses the cloud, but the code that implements the cloud, like S3 or something. In principle a key-value store isn't that hard, but by the time you're done specifying that you want 99.9999999%+ reliability (not availability, reliability, i.e., even if the service sometimes goes down the data is still there when it comes back up), realizing that customers are putting crown jewels into your cloud server, and accounting for the trillions upon trillions of times that this service will be hit, the sheer expanse of complicated machinery executing it and the number of failure modes in the entire system from one bad bit of RAM up to correlated total datacenter failures, suddenly something that looks hardly more complicated than putting a Python dictionary up on the web becomes a massive, massive engineering challenge.

P is a really cool language, and I've been keeping an eye on it. Unfortunately, the documentation has been pretty perpetually out of date, and the language is still a moving target. So you can't just "get started" in P; the example code won't compile.

I don't know what their plans are or if they ever intend for it to be consumed outside of MS. If they do, some focus on docs would be nice.

Pony is a similar language - but better documented. If you're interested in P, I suggest checking out Pony.

Pony also generates much tighter, better, faster code. Code which I would use in a driver, not managed C#. I know of no other language which generates faster code. I mean faster than C++ with OpenMP, while being memory and concurrency safe.

P has fantastic proof- and test-generating libraries and IDEs, though. In Pony you'll have to write perfect code to pass the type checker; P does much better handholding to get there.

At last, the long-awaited successor to C, at least in the naming convention that assumes P comes after C because of BCPL. (Apologies to Walter Bright: no inference should be drawn that D was not also a good name for that sequence.)

L must be lisp, brought here from the future by the space aliens.

Or Logo.

It's a language for expressing protocols. So, P for protocols most likely. It also focuses on safety and ease of analysis. Can't possibly be connected to the C lineage...

I believe it does compile to C, interestingly, so not really connected, but a little bit nonetheless

Here is a demo for using P to program a drone, shows a little more of the language itself: https://www.youtube.com/watch?v=R8ztpfMPs5c

The graph visualization towards the end of the video in MS VS is amazing. I can see this being really useful for my robotics projects. I'll have to figure out how to get started. I was learning Erlang, and looked briefly at Pony, but the syntax and the demo appeal to me.

Thanks for sharing, this is really insightful.

Can we stop naming programming languages with letters and symbols and give them proper names that, when googled (or binged), make the programming language appear on top?

Searching for "<Letter> Programming Language" works wonders. Give it a try sometime.

Second this. It might seem irrelevant for us folks who are aware of different places to seek help, e.g. Stack Overflow, but when introducing someone new and they encounter a million questions about programming in a language, searching just becomes a nightmare with these single-letter names.

So no B, C, or D?

What about common words like Java, Ruby, Python, Go?

I suppose this makes Scala and Perl the easiest languages to find documentation for.


Someone uses Bing?

And enough people to justify a new verb? binged is already taken and I don't think Microsoft wants to associate itself with such indulgence.

If anyone is interested in syntax and more tangible bits, here is the manual from the GitHub page linked to in the article:


That would have been really helpful for things like telnet and option negotiation. Or more recently, perhaps something like QUIC.

P is also being used to build safe robotics systems. https://drona-org.github.io/Drona/

I'm still holding out for NP

It'll be wayyy more complex

(Citation needed)

The high-level syntax of the language looks amazing for writing and specifying complex protocols.

Love it. What if all computing were communicating state machines?

Microsoft's "Node.js" equivalent for asynchronous stuff?
