P got its start in Microsoft software development when it was used to ship the USB 3.0 drivers in Windows 8.1 and Windows Phone. These drivers handle one of the most important peripherals in the Windows ecosystem and run on hundreds of millions of devices today. P enabled the detection and debugging of hundreds of race conditions and Heisenbugs early on in the design of the drivers, and is now extensively used for driver development in Windows.
As someone who has been thinking a lot recently about SPARK/Ada and "free" formal verification, can anyone tell me how this language compares on that front? If I write a P program, do I get the ability to "prove" its correctness?
I'm guessing Lamport chose not to go the 'fully dependent' route, Coq/Agda style (based on his presentation at least), because, as he said in the intro lecture, engineers don't really want formal verification at that level. They do still want strict invariants of their system enforced, so that the program is guaranteed never to enter an ill-defined state and then proceed into undefined (or 'implementation-specific', ugh) behavior.
What really drove it home was reading the paper Lamport referenced, "How Amazon Web Services Uses Formal Methods" (Newcombe et al., 2015), along with his RTOS anecdote of cutting their NASA RTOS's KLOC by 10x while getting stronger guarantees in the process (see "Formal Development of a Network-Centric RTOS", Verhulst et al., 2011).
It's also got more than a few similarities with "Fortress". Guy Steele really pushed this programming model while at Sun, but sadly DARPA killed the funding for it. That said, Amazon has this in use in more than 14 of their AWS services, so it seems accessible and useful enough for the average engineer, rather than for three post-docs locked up in their academic ivory tower, leaving twice a year only to present at ICFP and POPL.
At the 'formal level', I think the closest analogue to the strength of the guarantee is either Ada/SPARK or QuickCheck. Certainly way stronger than your standard "oh hey, I wrote a unit test in an untyped language, I'm good to go!" (I.e., declare your invariants and I'll make sure your system never reaches a state you don't want.)
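That "declare your invariants" idea can be illustrated with a toy explicit-state model checker. This is a hedged Python sketch of the general technique, not P's actual machinery: enumerate every reachable interleaving of a tiny two-process protocol and assert the invariant in each state. The protocol (an atomic test-and-set lock) and all names are made up for illustration.

```python
from collections import deque

# State: (pc0, pc1, lock); pc: 0 = idle, 1 = in critical section.
# Transitions model an atomic test-and-set lock (toy protocol).
def successors(state):
    pc, lock = list(state[:2]), state[2]
    for i in (0, 1):
        if pc[i] == 0 and not lock:          # acquire (atomic test-and-set)
            nxt = pc.copy(); nxt[i] = 1
            yield (nxt[0], nxt[1], True)
        elif pc[i] == 1:                     # release
            nxt = pc.copy(); nxt[i] = 0
            yield (nxt[0], nxt[1], False)

def check(invariant, init=(0, 0, False)):
    """Breadth-first exploration of every reachable state."""
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        assert invariant(s), f"invariant violated in {s}"
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return len(seen)

# Invariant: both processes are never in the critical section at once.
states = check(lambda s: not (s[0] == 1 and s[1] == 1))
print(states)  # 3 distinct reachable states, none violating mutual exclusion
```

If you break the lock (e.g. let a process acquire while `lock` is held), the checker finds the bad interleaving immediately instead of you waiting for it in production.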
Worth checking out if only because, hey, Lamport, Paxos. Have a look at Newcombe2015. It's only 8 pages, throw it on the Kindle or iPad, grab a coffee and have a go. What've you got to lose, right?
Also, shouldn't drivers be sandboxed to prevent stability problems from affecting the rest of the system?
I don't think it's the device driver part that makes it complicated. It's implementing something that is a collection of interdependent state machines in an asynchronous fashion, and doing that in a way that performs well and is reliable and safe.
In their specific example of a USB driver, here's what's in the paper:
"It receives a large number of un-coordinated events sent from different sources such as OS, hardware and other drivers, in tricky situations when the system is suspending or powering down. It can receive unexpected events from disabled or stopped devices, noncompliant hardware and buggy drivers. The hub driver can fail requests from incorrect hardware or buggy function drivers. However, it is important that the USB hub itself handles all events and does not crash or hang itself."
But it doesn't have to be a device driver. They give the example of elevator control in the paper as another collection of state machines that interact.
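The elevator example can be sketched as event-driven machines that interact only through message queues, which is the style P models. This is a hypothetical Python sketch of that style (not actual P code; the machine names and events are invented):

```python
from collections import deque

# Two toy machines, a controller and a door, communicating only via
# message queues -- no shared state, just asynchronous events.
class Machine:
    def __init__(self):
        self.inbox = deque()
        self.state = "idle"

    def send(self, other, event):
        other.inbox.append(event)

class Door(Machine):
    def step(self, controller):
        if not self.inbox:
            return
        ev = self.inbox.popleft()
        if ev == "open" and self.state == "closed":
            self.state = "open"
            self.send(controller, "door_opened")
        elif ev == "close" and self.state == "open":
            self.state = "closed"
            self.send(controller, "door_closed")

class Controller(Machine):
    def step(self, door):
        if self.state == "idle":
            self.state = "waiting_open"
            self.send(door, "open")
        elif self.inbox:
            ev = self.inbox.popleft()
            if ev == "door_opened":
                self.state = "waiting_close"
                self.send(door, "close")
            elif ev == "door_closed":
                self.state = "done"

door, ctrl = Door(), Controller()
door.state = "closed"
for _ in range(10):       # run until quiescent
    ctrl.step(door)
    door.step(ctrl)
print(ctrl.state, door.state)  # done closed
```

The interesting bugs live in the orderings this simple loop never exercises (events arriving while a machine is mid-transition, unexpected events in the wrong state), which is exactly the space P's systematic testing explores.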
>shouldn't drivers be sandboxed to prevent stability problems
That would be a microkernel, which has its own set of problems. There are ways to implement user-mode drivers for Windows, Linux, etc., but they are currently limited in functionality and performance.
What you ask is valid, and very true, but for cable construction, not driver writing.
There are tons of protocols and issues to handle when designing a device driver. Negotiate device capabilities. Handle hot plugging and unplugging. Handle intermittent state changes in the device. Handle buffering. Handle different versions of the protocol (or multiple protocols from the same bus). Handle throttling. Initialize and configure the device correctly (and infinite variations of devices from different vendors with perhaps slight or big inconsistencies in their implementation).
And of course device drivers are not just about I/O.
They're about transporting billions of pieces of data per second, from multiple inputs and to multiple outputs, on the fly in real time, within the processing and performance limits of all the different pieces of hardware involved, on a shared system.
>>> Also, shouldn't drivers be sandboxed to prevent stability problems from affecting the rest of the system?
You cannot sandbox a driver, the driver sandboxes you.
If it helps, think of the driver as a sandbox around your app: it sandboxes your app's access to the display, hard disk, keyboard...
But the question assumes(?) that the P language is meant for writing device drivers, and IMHO that assumption is not entirely true, for the language doesn't have any constructs for capturing the software's (the "physical device driver's") interaction with hardware.
For instance, I don't see a way to capture:
1. Software programmable registers (MMIO or over PCIe, I2C etc)
2. The software's role in DMA: setting up descriptors and buffers correctly and efficiently
3. Interrupts that the hardware can raise and how software should service them
4. Sequences to negotiate device capabilities and configuration, reset, perform transmit/receive, etc.
Many other comments here touch upon these and other aspects as characteristic of "device drivers"; they are missing altogether from the P language (I read the manual, didn't check the source, but I'm quite sure).
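For contrast, here's roughly what item 1 (register-level interaction) looks like. This is a Python simulation with invented register offsets and bit masks; a real driver would be doing volatile loads and stores at a memory-mapped physical address in C:

```python
# Toy simulation of MMIO register access. Offsets, masks, and the
# device's reset behavior are all made up for illustration.
CTRL, STATUS = 0x00, 0x04                  # register offsets
CTRL_RESET, STATUS_READY = 1 << 0, 1 << 1  # bit masks

class FakeDevice:
    """Stands in for memory-mapped hardware registers."""
    def __init__(self):
        self.regs = {CTRL: 0, STATUS: 0}

    def write32(self, off, val):
        self.regs[off] = val & 0xFFFFFFFF
        if off == CTRL and val & CTRL_RESET:
            # Hardware side effect: reset completes, READY latches.
            self.regs[CTRL] &= ~CTRL_RESET
            self.regs[STATUS] |= STATUS_READY

    def read32(self, off):
        return self.regs[off]

def reset_and_wait(dev, tries=100):
    """Kick off a device reset, then poll the status register."""
    dev.write32(CTRL, CTRL_RESET)
    for _ in range(tries):
        if dev.read32(STATUS) & STATUS_READY:
            return True
    return False

print(reset_and_wait(FakeDevice()))  # True
```

None of this (register writes, side effects, polling, timeouts) has a first-class construct in P; you'd have to model the hardware as yet another message-passing machine.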
There is a "higher" driver - a logical device driver (LDD). An implementation of an LDD is not specific to a device's hardware (from some vendor); it remains the same across all devices in a certain category. So an LDD is usually supplied/maintained by the OS vendor. For those aware of the WinCE architecture, the LDD is called an MDD.
What the P language does seem to enable is specification of the LDD - as long as it lends itself to the "message passing machines" paradigm (and in that respect, an LDD is no more special to the P language than any other software that can be modeled using "message passing machines").
https://en.wikipedia.org/wiki/Device_driver - distinction between PDD and LDD
You have many devices connected to many different buses, some of those buses having shared resources allowing bus mastering and Direct Memory Access, and those devices can fail in the middle of a long operation, which the OS has to handle and recover from.
E.g., Ethernet network hardware is very simple; however, depending on the bus used for communicating with the CPU, the driver architecture can be radically different.
So, until I/O buses get unified and always work in packet mode, drivers will continue to be a big mess.
First, of course, you turn yourself into a multi-billionaire. But once you're bored with that, let's feed it the code to a device driver. Assuming that this is a driver that has had some decent work put into it and that you aren't getting something degenerate like a 100% chance of hitting a certain bug, you probably get back a list of bugs that almost certainly follows a power law distribution. If it's a good driver, the most likely bug may only be one in a trillion, but after that first bug it probably trails off for quite some time. For instance, you may find a bug that only applies once the 2^64-1th byte comes in AND you get unlucky on a scheduling operation or something. There's often a lot of these things even in incredibly well-tested code, such as: http://envisage-project.eu/proving-android-java-and-python-s...
Now suppose we characterize the challenge of writing a bit of code as A: the difficulty of pushing the probability of the bugs down, B: the impact of failure, and C: how many rolls of the dice the real world takes at the code. Easy-to-write code is code where it's easy to push the probability of failure below what I care about on the real-world instances I care about, and where the impact of failure is not that big a deal. So the easiest code by this metric would be something like a one-off string-massaging script written in bash, where I'm going to examine the results by eye and fix them up anyhow. It's easy to get it right enough, failure just means I'm going to manually fix it up a bit, and the world only takes one shot at my code anyhow.
Why are device drivers hard? They are hard to write correctly enough to push the probability of bugs down, because they are running in a very constrained, yet highly-privileged environment, and often in an environment where it's very hard to test at all, let alone exhaustively test the state space of all possible states the hardware may be in and all transitions it might like to make next. Device drivers can, in the worst case, potentially brick your device if they fail, but can certainly crash it or a major subsystem and cause massive data loss on the local system. And the real kicker is that a device driver is going to be exposed to the real world on millions upon millions of systems, potentially millions of times per second, for years and years on end. It's a situation where a sensible, sane programmer who might otherwise eschew all "that academic bullshit" might just find themselves reaching for formal verification tools.
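To put numbers on "millions of systems, millions of times per second": even a one-in-a-trillion bug gets hit constantly at that scale. The figures below are illustrative, not from the thread:

```python
p_bug = 1e-12        # probability a single operation hits the bug
systems = 1e6        # deployed systems (illustrative)
ops_per_sec = 1e6    # operations per second per system (illustrative)

expected_hits_per_sec = p_bug * systems * ops_per_sec
print(expected_hits_per_sec)  # ~1: about one crash somewhere every second
```

So a failure probability that's unmeasurable on your test bench still shows up, somewhere in the fleet, roughly every second.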
Another example of something that is harder than you may think is "cloud code". Not code that uses the cloud, but the code that implements the cloud, like S3 or something. In principle a key-value store isn't that hard, but by the time you're done specifying that you want 99.9999999%+ reliability (not availability, reliability, i.e., even if the service sometimes goes down the data is still there when it comes back up), realizing that customers are putting crown jewels into your cloud server, and accounting for the trillions upon trillions of times that this service will be hit, the sheer expanse of complicated machinery executing it and the number of failure modes in the entire system from one bad bit of RAM up to correlated total datacenter failures, and suddenly something that looks hardly more complicated that putting a Python dictionary up on the web becomes a massive, massive engineering challenge.
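The same scale arithmetic applies to the reliability claim. "Nine nines" per object sounds bulletproof until you multiply by the object count (illustrative numbers, not any provider's actual figures):

```python
durability = 0.999999999     # "nine nines" annual reliability per object
objects = 1_000_000_000      # a billion stored objects (illustrative)

expected_losses = objects * (1 - durability)
print(expected_losses)  # ~1: expect about one object lost per year
```

Which is why the engineering has to reason about correlated failures (racks, datacenters, bad firmware batches) rather than treating each object's fate as an independent coin flip.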
I don't know what their plans are or if they ever intend for it to be consumed outside of MS. If they do, some focus on docs would be nice.
Pony is a similar language - but better documented. If you're interested in P, I suggest checking out Pony.
P has fantastic proof- and test-generating libraries and IDEs, though. In Pony you'll have to write perfect code to pass the type checker; P does much better handholding to get there.
What about common words like Java, Ruby, Python, Go?
I suppose this makes Scala and Perl the easiest languages to find documentation for.
And enough people to justify a new verb? binged is already taken and I don't think Microsoft wants to associate itself with such indulgence.
It'll be wayyy more complex