("Yes", with the proviso that the proposed silicon atom definition is a more obviously direct link than the watt balance.)
The Avogadro number isn't a fundamental physics constant; it's just a unit conversion constant, and thus didn't get to define any basic unit.
That's a pretty arbitrary assertion. Who defines what "fundamental" is? The number of periods of <something> of Caesium atoms used to define the duration of one second doesn't sound very fundamental either.
Avogadro's number is used to define the atomic units, which makes a lot more sense than pushing it into metric calculations.
The Wikipedia page is great:
What you want, is to go down to the smallest unit possible, which is apparently Planck's Constant.
Essentially, that's what they were doing with Le Grand K. It's unstable because molecular properties are unstable in aggregate.
You may be thinking "Oh, you fool--E=mc^2 is for fission and fusion, and certainly doesn't apply to things like the energy in the heat of an object". But weirdly, it does. https://physics.stackexchange.com/questions/87259/does-decre...
This typically only matters in a relativistic context, and doesn't impact day to day life. The difference in passage of time can be measured by an atomic clock if you put it on a rocket into space, but is otherwise insignificant. It's unlikely any scale could measure your weight-gain from climbing stairs, since the amount of energy you gain is insignificant when converted into mass. You can compute the mass gain by solving for "m" in the formula E=m*c^2, so m=E/(c^2).
The speed of light c is a really big number, and so to compute the mass you gain, you're dividing the energy by c^2, which is a much bigger number. Thus a gain to kinetic or potential energy does not noticeably affect your mass in day to day situations. Conversely, if you can convert any meaningful part of your mass into energy, then it's an absolutely tremendous amount of energy: atomic weapons.
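To put numbers on it, here's a minimal sketch; the 70 kg person and 10 m of stairs are made-up illustrative values:

```python
# Mass equivalent of an energy via m = E / c^2.
C = 299_792_458.0  # speed of light, m/s (exact by definition)

def mass_equivalent(energy_joules: float) -> float:
    """Mass in kg equivalent to the given energy, from E = m c^2."""
    return energy_joules / C**2

# Potential energy gained by a 70 kg person climbing 10 m of stairs: m*g*h.
energy = 70.0 * 9.81 * 10.0     # ~6.9 kJ
print(mass_equivalent(energy))  # ~7.6e-14 kg -- far below what any scale resolves
```

Dividing a few kilojoules by c squared leaves you around fourteen orders of magnitude below a gram, which is why nobody notices.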
I've never heard this before. Can you link to some further explanation? Intuitively, if anything, you'd lose mass, because you're in a place now where space is less curved than where you were before.
And yes, there's also some change due to changes in gravity. I'd expect that to be much smaller.
But IANAP, and not that good with relativity.
There is a very good reason for this approach. In the simplest, most classic way of deriving special relativity from first principles, the 4-velocity is introduced before the 4-momentum. I.e., γ and m come from different places and are not connected in any way whatsoever.
watt balance: https://www.youtube.com/watch?v=VlJSwb4i_uQ
"They never found the cause for the disagreement, but in late 2014 the NIST team achieved a match with the other two"
at a time when this story, also from Nature, is also on the front page: https://news.ycombinator.com/item?id=10383984
Having worked with some folks who do these high-precision measurements, it's concerning that they never found the cause for the disagreement. Pinning down systematic error is really, really, really hard.
As Feynman pointed out in reference to the oil drop experiment (https://en.wikipedia.org/wiki/Oil_drop_experiment):
"We have learned a lot from experience about how to handle some of the ways we fool ourselves. One example: Millikan measured the charge on an electron by an experiment with falling oil drops, and got an answer which we now know not to be quite right. It's a little bit off because he had the incorrect value for the viscosity of air. It's interesting to look at the history of measurements of the charge of an electron, after Millikan. If you plot them as a function of time, you find that one is a little bit bigger than Millikan's, and the next one's a little bit bigger than that, and the next one's a little bit bigger than that, until finally they settle down to a number which is higher.
Why didn't they discover the new number was higher right away? It's a thing that scientists are ashamed of—this history—because it's apparent that people did things like this: When they got a number that was too high above Millikan's, they thought something must be wrong—and they would look for and find a reason why something might be wrong. When they got a number close to Millikan's value they didn't look so hard. And so they eliminated the numbers that were too far off, and did other things like that..."
 - http://fivethirtyeight.com/features/heres-proof-some-pollste...
What is problematic when it comes to humans is how our social structures are organized for doing science. They are hierarchical, resources are controlled centrally, and scientists are forced to compete with each other instead of cooperating with each other. Science is a career. We injure science and scientists by tying up their economic prosperity with their ability to convince the rest of the world that their work is worth anything. This creates a huge incentive to push forward and a huge disincentive to reevaluate past results: Your reputation can be damaged because you might be undermining the legacy of a high status scientist, and if you confirm the past result then you haven't done anything new and that reflects poorly on your 'performance'.
Science succeeds in spite of status, institutional monopolies, and hierarchical social organization. It would flourish in a more egalitarian society.
Is it intrinsic to humans to be hierarchical and status based? I want to say no. I don't think so. It is in the interests of the prevailing powers of the world to convince people that it is the case though, because they'd rather we not imagine a world where there isn't power to accumulate and hold on to.
I've come to the opinion that scientists are ideologues, but the ideology is based in the current understanding of physics rather than the results of experiments and the scientific method. A famous example is the Arago spot (aka the Poisson spot), where Fresnel was ridiculed for his wave-based theory of light by Poisson, despite the latter not even bothering to do an experiment.
Sounds like Anchoring
and the conundrum was that they still needed to have a precise enough measurement of that constant because it's an experimental measurement.
Of these, mole and kilogram are dependent of the kilogram.
edit: this is a good, if maybe a bit misleading, illustration: http://www.nist.gov/pml/wmd/metric/upload/SI_Diagram_Color_A...
Did you make a typo? While it's technically true that the kilogram is defined by the definition of the kilogram, I just wanted to make sure.
That system is mostly irrelevant for cultural and legal purposes. Even in the specific philosophical framework where getting rid of the kilogram is 'absolutely essential', the current importance of the kilogram would simply migrate to the importance of the specific conversion factor to traditional units.
On a related note, the tonne (i.e. the metric tonne) is widely used, though not generally in the scientific arena. This is partly for historical reasons, but I think there's also an element of it being a little awkward to apply prefixes to the kilogram, as one must replace the "kilo". Nonetheless, we should exploit the full power of SI prefixes and use "megagram" instead. With two short syllables rather than one long syllable it takes about the same amount of time to say, and is far less ambiguous.
This works well elsewhere; Fat Man wasn't 21 kilotonnes of TNT, it was 21 gigagrams.
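The prefix arithmetic is easy to verify; a trivial sketch:

```python
# 1 tonne = 1000 kg = 1 megagram, so 1 kilotonne = 1 gigagram.
KG_PER_TONNE = 1_000
fat_man_kg = 21 * 1_000 * KG_PER_TONNE  # 21 kilotonnes expressed in kg
print(fat_man_kg)  # 21000000 kg, i.e. 21 Gg
```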
There are three major reasons why Tarsnap pricing is defined in terms of picodollars per byte rather than dollars per gigabyte:
Tarsnap's author is a geek. Applying SI prefixes to non-SI units is a geeky thing to do.
If prices were listed in dollars per GB instead of picodollars per byte, it would be harder to avoid the what-is-a-GB confusion (a GB is 10^9 bytes, but some people don't understand SI prefixes). Picodollars are perfectly clear — nobody is going to think that a picodollar is 2^(-40) dollars.
Specifying prices in picodollars reinforces the point that if you have very small backups, you can pay very small amounts. Unlike some people, I don't believe in rounding up to $0.01 — the Tarsnap accounting code keeps track of everything in attodollars and when it internally converts storage prices from picodollars per month to attodollars per day it rounds the prices down.
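For the curious, the conversion is straightforward; the $0.25/GB figure below is illustrative, not a quote of Tarsnap's actual price list:

```python
# Convert a price in dollars per (SI) gigabyte to picodollars per byte.
PICO = 1e-12
GB = 10**9  # an SI gigabyte is exactly 10^9 bytes, not 2^30

def dollars_per_gb_to_picodollars_per_byte(price_usd_per_gb: float) -> float:
    return price_usd_per_gb / GB / PICO

print(dollars_per_gb_to_picodollars_per_byte(0.25))  # ~250 picodollars per byte
```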
20 ppb translates to an uncertainty of about 10,000 trillion atoms. The defined value will be set to a round number, because any value within trillions of atoms of the measured one is compatible with the legacy standard.
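A rough order-of-magnitude check of that figure, taking 20 parts per billion of Avogadro's number:

```python
# 20 ppb of Avogadro's number; order of magnitude only.
AVOGADRO = 6.022e23            # atoms per mole
uncertainty = 20e-9 * AVOGADRO
print(uncertainty)             # ~1.2e16, i.e. on the order of 10,000 trillion atoms
```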
Playing around with the chemistry more, it might be possible to use a gas that oxidizes more readily than silicon, so that if any oxygen does get in it will be neutralized before it gets anywhere near the silicon. I'm not sure if that's possible to do with a gas, though.
I think you meant 'one litre'
Now I'm disturbed that the people working in these laboratories were exposed to mercury themselves. Nasty stuff.
People use lbs <-> kg interchangeably, but the actual equivalence is lbs (strictly, pounds-force) <-> newtons (N). The difference doesn't matter in the average person's life, since down here gravity is homogeneous enough for most applications and people.
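A minimal sketch of the distinction (the lunar g value is approximate):

```python
# A scale measures force W = m*g; kilograms are mass, pounds-force are force.
G0 = 9.80665                  # standard gravity, m/s^2 (conventional value)
N_PER_LBF = 4.4482216152605   # newtons per pound-force (exact by definition)

def weight_newtons(mass_kg: float, g: float = G0) -> float:
    """Force a scale actually reads for a given mass."""
    return mass_kg * g

print(weight_newtons(70.0))              # ~686.5 N on Earth
print(weight_newtons(70.0) / N_PER_LBF)  # ~154.3 lbf, the familiar "154 lbs"
print(weight_newtons(70.0, g=1.62))      # ~113.4 N on the Moon: same mass, less weight
```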
Now, both experiments have agreed to a precision high enough that current metrology best practices will not have to change.
My understanding is that up until now they've been using a reference kilogram (Si sphere or, for the watt balance, a reference object) to measure Planck's Constant.
Now that they are getting repeatable, converging results for Planck's constant they can turn the process around; define Planck's constant and then use the apparatus to generate a reference kilogram.
And so the beauty of it is that any sufficiently motivated team could assemble the equipment (either apparatus!) and manufacture a reference kilogram that would be just as good as anyone else's.
Unlike the present state of affairs, where no matter how hard anyone wants to try, they simply can't manufacture a reference kilogram better than the one in Paris.
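A hedged sketch of the underlying relation, with purely illustrative numbers rather than real lab values: the watt balance equates mechanical power m·g·v to electrical power U·I, where U and I are measured via the Josephson and quantum Hall effects and thereby trace back to Planck's constant.

```python
# Watt-balance relation: m * g * v = U * I, so m = U * I / (g * v).
def mass_from_watt_balance(voltage: float, current: float,
                           g: float, velocity: float) -> float:
    """Mass realized from electrical and kinematic measurements (illustrative)."""
    return (voltage * current) / (g * velocity)

# Numbers chosen purely so the result lands at 1 kg:
print(mass_from_watt_balance(voltage=1.0, current=0.0196133,
                             g=9.80665, velocity=0.002))  # ~1.0 kg
```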
However, as the methods are based on very well-established physics, there is good reason to expect that, as more measurements are made and any discrepancies are investigated, the results will continue to converge.
Even US laws explicitly refer to SI, for example the Metric Conversion Act of 1975: http://www.gpo.gov/fdsys/pkg/STATUTE-89/pdf/STATUTE-89-Pg100...
GGP's question was whether some countries use other metric systems which are not SI. I don't think such a country exists.
At about the same time as the Imperial unit reform, Britain introduced the "florin", a 2-shilling coin (10 to the pound), another vague attempt at decimalization.
>In 1981, the USMB reported to Congress that it lacked the clear Congressional mandate necessary to bring about national conversion. Because of this ineffectiveness and an effort of the Reagan administration — particularly from Lyn Nofziger's efforts (http://www.washingtonpost.com/wp-dyn/content/article/2006/03... ) as a White House advisor to the Reagan administration, to reduce federal spending — the USMB was disbanded in the autumn of 1982.
>The metrification assessment board existed from 1975 to 1982, ending when President Ronald Reagan abolished it, largely on the recommendation of Frank Mankiewicz and Lyn Nofziger. Overall, it made little impact on implementing the metric system in the United States.
>According to Mankiewicz, he prompted Lyn Nofziger's efforts to halt the 1970s U.S. metrication effort, who convinced President Ronald Reagan to shut down the United States Metric Board
> So, during that first year of Reagan's presidency, I sent Lyn another copy of a column I had written a few years before, attacking and satirizing the attempt by some organized do-gooders to inflict the metric system on Americans, a view of mine Lyn had enthusiastically endorsed. So, in 1981, when I reminded him that a commission actually existed to further the adoption of the metric system and the damage we both felt this could wreak on our country, Lyn went to work with material provided by each of us. He was able, he told me, to prevail on the president to dissolve the commission and make sure that, at least in the Reagan presidency, there would be no further effort to sell metric.
>It was a signal victory, but one which we recognized would have to be shared only between the two of us, lest public opinion once again began to head toward metrification.
And Fahrenheit's higher unit precision is a distinct advantage in taking and interpreting temperatures of people, particularly children.
Fahrenheit's 0-100 range is basically a close match for the extremes in weather temperature experienced by typical human beings. Celsius is not even close. Additionally, Fahrenheit's scale has roughly twice the resolution of Celsius, and humans are good at distinguishing this precision. Unit precision is also helpful for body temperature.
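The range and resolution claims are easy to check with a quick sketch:

```python
# Fahrenheit to Celsius; one degree F spans 5/9 of a degree C, so integer
# Fahrenheit readings have 9/5 = 1.8x the resolution of integer Celsius readings.
def f_to_c(f: float) -> float:
    return (f - 32.0) * 5.0 / 9.0

print(f_to_c(0.0), f_to_c(100.0))  # ~-17.8 C to ~37.8 C: roughly the span of typical weather
```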
Regarding the distant secondary use: obviously Celsius's range matches freezing and boiling: but because water's phase change is consistent and obvious at these temperatures (modulo altitude), cooking rarely involves measuring temperatures in this range except for unusual cases like candy. Celsius has no advantage at all for the primary use of temperature in cooking: baking.