
Performance of Persistent Memory: 300 nanoseconds - pbalcer
https://pmem.io/2019/12/19/performance.html
======
Rafuino
Part 2 is here:
[https://pmem.io/2020/03/26/performance-2.html](https://pmem.io/2020/03/26/performance-2.html)

------
danharaj
I think it would be fun to futz around with persistent memory but AFAIK it
would mean working specifically with Intel tech. Will there be a point in time
when I will be able to program against something that's not Optane as well
with the same interface?

~~~
pbalcer
Disclaimer: I work at Intel on Persistent Memory

If you are asking just about the OS interfaces - you can already emulate
persistent memory with DRAM [1], which allows you to play around with the
programming model. There are also NVDIMM-N devices available on the market
today that expose the same interface. IBM also has a product called Virtual
Persistent Memory [2].

I expect that once CXL [3] becomes adopted, there will be more diversity in
terms of Persistent Memory hardware vendors.

[1] - [https://docs.pmem.io/persistent-memory/getting-started-guide/creating-development-environments](https://docs.pmem.io/persistent-memory/getting-started-guide/creating-development-environments)

[2] -
[https://www.ibm.com/downloads/cas/09ERZEVQ](https://www.ibm.com/downloads/cas/09ERZEVQ)

[3] - [https://pirl.nvsl.io/PIRL2019-content/PIRL-2019-Stephen-Bates.pdf](https://pirl.nvsl.io/PIRL2019-content/PIRL-2019-Stephen-Bates.pdf)

~~~
trentnelson
Can you share anything public about when the consumer-level gear is going to
hit the market?

~~~
pbalcer
I don't believe anything was announced publicly - sorry.

------
kragen
I don't know about this persistent memory thing, but I sure want one of the
tape drives this guy uses that can seek to anywhere on the tape in ≈100ms. (It
says ~100ms, but I feel sure that that's a typo — ~100 is -101, after all, and
it's even more improbable that his tape drive is a time-travel device.)

~~~
Arnavion
>It says ~100ms, but I feel sure that that's a typo — ~100 is -101

Using tilde to mean "approximately" is way more common than using it to mean
"bitwise negation in two's complement", not least because the latter meaning
is specific to programming, and even more specific to certain programming
languages.

~~~
cogman10
Not even programming language but in fact hardware architectures. While
everything now-a-days does twos compliment there were a few early
architectures that did ones compliment and a few that did just signed bits
(Heck, IEEE 754 floats use signed bits, not two's complement. But that's
mostly because there are no advantages AFAIK to a two's complement float)

~~~
kragen
I would be surprised to learn that there were implementations of C on sign-
magnitude or one's-complement architectures that implemented "~" as anything
other than bit inversion interpreted in two's-complement, and yet were
sufficiently compatible that they could run large C programs developed on more
traditional architectures. Are you familiar with any?

C does not permit the application of "~" to floating-point numbers. JS does,
but it converts them to integers first, and interprets it as bit inversion in
two's-complement.

~~~
AnimalMuppet
If I understand correctly, ~ means raw "invert the bits". Two's complement
doesn't enter into the picture - it relates to how the bits are interpreted,
which is a layer above what ~ does. One's complement, two's complement,
unsigned... we don't care, _just invert the bits_.

~~~
kragen
You're reasoning from a sort of Platonic perspective. But when people are
writing compilers, among the tradeoffs they face is whether to try to make
their compiler compile an existing popular language, with its known flaws, or
a better language with those flaws fixed, one more to their taste and better
suited to the machine it's being run on. The advantage of compiling
an existing popular language is that you can run software written in that
language. (If your compiler targets a new machine, as most machines were at
the time one's-complement was still seen as a reasonable design choice for
integers, this may be the only way you can run that software on that machine.)

Unix was written on the PDP-7 in 1969, rewritten for the PDP-11 in I think
1972, and ported to the Interdata 7/32 at Wollongong and the Interdata 8/32 in
New Jersey in 1977, the IBM 370 in 1977, the VAX in 1979 (32V), and the 68000
in 1982 (4.2BSD). Around that time Unix derivatives like XENIX (1984) started
to spread the Gospel to the microcomputer world: the Z8000 and the 8086. Every
single one of these processors was two's-complement. (I checked.)

It seems unlikely to me that a C compiler written with the objective of
running the existing body of Unix software, which is what was written in C at
the time, would be able to get away with "we don't care, _just invert the
bits_ ", because in practice most of the software you wanted to run would be
written with the assumption that it was running on a two's-complement machine.
There were some early "C" compilers that weren't compatible enough to compile
Unix software — for example, BDS C ( _not_ BSD), now public domain, and the
Tandem C compiler — although those both ran on machines that happened to be
two's-complement.

But was there ever a C compiler that _was_ sufficiently compatible to compile
existing C software, but that implemented "~" as you suggest? Maybe, but I
would be surprised.

~~~
AnimalMuppet
The first source I found online said that ~ was _defined_ as the
one's-complement operator and therefore had the effect of flipping bits. It's
defined that way on all architectures, whether one's-complement,
two's-complement, or something else.

If you think about it, all of the bitwise operators had better be defined just
in terms of the bits. Bitwise and, for example, had better be defined as the
AND of the bits. No exceptions for the sign bits (or for anything else).

~~~
kragen
I read the Interdata 8/32 and Z8000 manuals and explained the historical
developments that motivated the development of C compilers to you, and you
come back with "the first source I found online said"? C'mon, gimme a specific
source talking about how a specific one's-complement (or sign-magnitude)
compiler handled "~", or go home.

~~~
AnimalMuppet
[http://www.cplusplus.com/doc/tutorial/operators/](http://www.cplusplus.com/doc/tutorial/operators/)

[https://www.geeksforgeeks.org/bitwise-operators-in-c-cpp/](https://www.geeksforgeeks.org/bitwise-operators-in-c-cpp/)

[https://www.tutorialspoint.com/cprogramming/c_operators.htm](https://www.tutorialspoint.com/cprogramming/c_operators.htm)

All say the same thing.

(But, BTW, I _am_ home.)

~~~
kragen
Only the first of those is a reliable source, and it doesn't mention one's
complement machines at all, or for that matter C. It's a summary of the C++
standard, which agrees with the C standard on the relevant points, but both
are considerably more liberal than the practical requirements on actual
implementations of the languages.

It seems that you have no experience of or reliable knowledge about
programming in C on one's-complement or sign-magnitude machines, and
furthermore you seem to be unclear on the difference between C and C++. So
probably you shouldn't be so confident in your opinions!

~~~
AnimalMuppet
The first of those literally says "Unary complement (bit inversion)". That's
what the standard says. "Bit inversion" tells you _exactly_ what's supposed to
happen, regardless of the architecture of the machine.

Or is your claim that the website, despite being based on the C++ standard, is
simplifying it by assuming that all machines are two's-complement? All right
then, what is the actual text of the C++ standard? (You're the one making the
claim that on one's-complement machines, the standard doesn't require what
everyone thinks it does, so you get the burden of proof. Also, I don't have
access to the text of the standard.)

So far as I know, the C standard does not differ from the C++ standard on what
that operator does. If you claim it does, I'll ask you to document it.

Instead of throwing insults at others' level of knowledge, I suspect you
should be a bit more cautious about your own.

