nothrabannosir's comments | Hacker News

> And how does it help to say "hey can you pick up 3.8L of milk?" If packaging sizes don't change then we'll still call it a gallon and we won't have "converted" at all.

I'm with you on everything but this. The imperial system allows retailers (and/or consumer goods manufacturers) to take consumers for a giant ride. I have lived in both the USA and the EU, and in the USA I just give up entirely on comparing goods in a supermarket. With the metric system there's nowhere to hide, and I can compare all products, whether they use ml or l, mg or g or kg. In the USA, different manufacturers will use any odd denominator they can come up with, and after about two weeks of normalizing fractions every time I went shopping, I gave up.

Even the little tags supermarkets add to try and help you aren't enough. Many shops use a different denominator, and even a single shop will vary internally. Something as simple as comparing the price of bacon becomes a middle school math problem.

I hate corporate greed, I am partial to pointless mental exercises like math, and I am very stubborn. I don't want to speak for other people, but something tells me I'm not the only one who has given up on this battle. Retail customers have more power in the metric system.

For everything else though yes I agree who cares. Except °F which is actually better. :)


> The imperial system allows retailers (and/or consumer good manufacturers) to take consumers for a giant ride.

That's a really interesting point. However, ultimately I actually don't think it has anything to do with imperial vs. metric, but just consumer culture.

In Europe, when you order a drink the menu tells you how many centiliters it is. In the US, it's just small-medium-large-XL, which every location defines however they want. And in the US, the difficulty in comparison doesn't have anything to do with imperial units -- it's that one package of tomatoes is defined by volume while another is by weight, and the loose bell peppers are priced per pepper while the packaged ones are priced per weight, and so forth.

Switching to metric wouldn't change any of that.

That's a problem that can seemingly only be addressed by legislation -- e.g. that strawberries and tomatoes must be sold by weight not volume, or that selling produce by the item must also accurately list the average item weight.


Your post reminds me of the additional problem of "The Serving" which is a unit of measurement entirely conjured up by the food manufacturer to serve as the denominator when listing required nutritional information.

A normal 50g bowl of your sugary breakfast cereal too unhealthy? Just define a "serving" as 20g and every bad number on the label shrinks to 2/5 of what a real bowl delivers! Problem solved! Is your bag of chips full of salt? Just invent a "Serving Size" of three chips and you don't have to draw attention to yourself on the nutrition label.

Letting companies define their own units of measurement seems to be a totally preventable regulatory mistake.


Indeed, it's something the EU prevented. There are regulations on what the standard serving size is, and other regulations specifying how the item must be priced -- so all the milk says "per litre" under the price tag in the supermarket, even the fancy one in a tiny bottle.

There were also preferred size regulations, which were meant to make it even easier. Bread could only be sold in multiples of 400g. I think this was relaxed, but it's still present for some things. A standard bottle of wine is always 75cl, for example.


Just to clarify that we're talking about the same thing in case I misunderstood something: autossh (style) scripts do these things:

1. fake data to keep a connection "fresh" for shitty middleware

2. detect connections which are stuck (state = open, but no data can actually round trip) and kill them

3. restart ssh when that happens

Is that what we're talking about here? I think people are saying that points 1 and 2, but not 3, are covered by SSH's ServerAlive* options. And that's also how OpenSSH advertises and documents those options, and apparently even how autossh talks about it in its own readme.

You're saying that those options don't actually solve points 1 and 2, while (your/their/etc) autossh does properly detect it.

Correct so far?

If so that seems like a bug in OpenSSH (or whatever implementation) which should get appropriate attention upstream. Has anyone reported this upstream? Is there a ticket to follow?

PS: I think we're all in agreement that option 3 is out of scope for stock OpenSSH (regardless of what other tools do)
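For concreteness, points 1 and 2 as stock OpenSSH options in ~/.ssh/config (host name and values here are illustrative, not from the thread):

```
Host flaky-tunnel
    # 1. send an encrypted no-op through the channel every 15s,
    #    keeping shitty middleware from expiring the connection
    ServerAliveInterval 15
    # 2. if 3 probes in a row get no reply, declare the connection
    #    dead and exit ssh
    ServerAliveCountMax 3
```

Point 3 then falls to whatever supervises ssh: autossh, a shell loop, or a systemd unit with Restart=always.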


I haven’t revisited this issue in years but on a project for thousands of similar devices we found autossh much more reliable.

I believe the issue is that the connections often fail or get wedged in other network layers; the only way to be sure that your ssh tunnel isn’t a) lossy enough to “keep alive” but too lossy to send data, or b) just always waiting on TCP retry backoff, or c) etc., is to use the tunnel to transmit actual data at the application level.


> is to use the tunnel to transmit actual data at the application level.

Isn't that exactly what ServerAliveInterval does? The man page says: "ssh(1) will send a message through the encrypted channel". A plain TCP keepalive wouldn't count as being "through the encrypted channel".


Honestly at this point I'm out of date, but autossh also takes care of bugs or connection issues within the ssh link itself.


So does ssh now.

So much smoke & obfuscation. Autossh itself mentions ServerAliveInterval. It's worked flawlessly on all kinds of dodgy connections for me.

If anyone has any damned bug reports, link them.


I don't know if I would call it smoke and obfuscation. At the time, systemd was not widely deployed and the ssh functionality was not as developed, so it made sense to use autossh. Now it sounds like it doesn't make sense anymore. It happens.

You summarized things well. #2 is the primary reason that ssh in a loop doesn't work as well or as reliably as autossh (the program discussed here; it's just coincidental that my own automatic ssh script is also called autossh).


“Fix lint” commits also taint git blame.

You could perhaps add some kind of filter to git blame to automatically skip commits whose message is “fix lint”, but at some point the cure is worse than the disease.
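Git does in fact ship that filter natively (since 2.23): --ignore-revs-file. A minimal sketch using a throwaway demo repo (file names and messages are made up for illustration):

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name demo

printf 'important logic\n' > app.c
git add app.c && git commit -qm 'real change'

printf 'important  logic\n' > app.c   # whitespace-only "fix lint" commit
git commit -qam 'fix lint'

# List the lint-only commit hashes once...
git rev-parse HEAD > .git-blame-ignore-revs

# ...and blame skips them, attributing the line to the real change:
git blame --ignore-revs-file .git-blame-ignore-revs app.c

# Optionally make that the default for the repo:
git config blame.ignoreRevsFile .git-blame-ignore-revs
```

It doesn't cure the disease (the noise commits are still in the history), but it keeps blame usable.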

I also see people argue that merge commits make git bisect harder than squashing in the first place, but there is a third option: rebase and fast-forward. If every commit in your branch is clean and stand-alone, that's viable. Linter fix commits break that paradigm.


Which one? I was looking forward to some awkwardness but all I can find are tutorials on teeth hygiene.


Dunno that it's awkwardness so much as the discomfort of watching someone with pretty messed-up teeth forcing too-large interdental brushes in between them


Thank you all very much for the feedback, it gives me a new perspective on things. We wanted to show a real case, rather than animations, because we thought it would be clearer for our patients; but we are probably desensitized to watching stuff like this. Would you prefer to see a 3D animation instead, or something else?


I actually really prefer the videos of real people doing the thing! I've literally never seen a video of how to floss - even at the dentist they show you how on a little model.

Thanks for sharing these!


I'm glad they were useful to you.


I watched all the patient videos and found them helpful. There's no substitute for seeing examples with a real mouth.

The interdental brush video is a bit more "intense" than the rest. Can't be helped: you need to show someone with teeth gaps. Perhaps move that one down in the list so newcomers start with a more gentle video?


Thank you for the feedback, I'm glad you found them helpful!

I wanted the interdental cleaning part to come first, because it usually gets neglected and it's just as important as tooth brushing.

But I like your suggestion to change the order, as that would indeed give a gentler introduction.


Another perspective: I don't mind the real videos. They are helpful. It might be easier for some to watch if the subjects had fairly nice teeth. I think animations would be less helpful.


We want clean, healthy, attractive teeth and mouths to stare at, rather than e.g. the interdental mouth that triggers disgust even if it's realistic. Ideally, use attractive models with healthy teeth in the videos.


Thank you for the feedback, that's a point that several commenters have brought up.

The problem with the interdental brushing video specifically is that we can't show how to use larger brushes on young healthy patients, as they don't have the spaces for it. But I will think about how we can improve that video (the comment above suggested moving it down in the page, to start with the 'gentler' videos).


This exactly. I don't think the average person is as comfortable as a medical professional at staring at videos/images of messed-up teeth, injuries, disease, etc. It's not exactly what we want to stare at when learning.


Also, while I'm at it, I'd suggest maybe putting an hour or two of research into how to make content… exciting? I know you're a dentist and a software engineer, not a YouTuber, but it's worth looking up a bit about what YouTubers and entertainers know about how to hold an audience's attention. Just a few small changes can probably result in a 1.5-3x improvement in the number of people who make it to the end of a video.


Another perspective: I don't feel like these informational videos need to be exciting. For this, I feel like 'just the facts' is a breath of fresh air.


Maybe exciting is the wrong word, but compelling is a better one.

For example, just the order of how you present information matters. Compare these two approaches:

1. "If you don't floss enough, then <BadThing> may happen. Here's tips on how to floss: A, B, C."

2. "Here's tips on how to to floss: A, B, C. Btw, this can help prevent <BadThing>."

The first is better. "Boring" information ceases to be boring and instead becomes compelling when you have a strong reason to want to know the information. Thus, it's important to hook people by giving them that motivational reason to watch/listen before you jump right into a video or article. Otherwise, you will likely only retain viewers who already arrive with their own personal motivations.


The very first video, pinned to the top, is titled "Why is oral hygiene important?" and lists both <BadThing> and <GoodThing>.

The site follows approach 1 as you suggest (at least it does today).


Apple settled on Ireland precisely because of that tax scheme. Had Ireland levied taxes at a normal rate, they wouldn't have gotten any dollars. The choice for Ireland was between jobs and nothing.

Apple (et al.) played countries off against each other, and had Ireland not done it, another one would have. It's a tragedy of the commons, and as always, that can only be solved through collective action (cue TFA).


Competition is a good thing. We all lose when powerful players band together and form a cartel.

If companies did that, it would be illegal. If government politicians do it, their populism brings them votes.


Correct: a market where sellers compete is good for buyers.

Unfortunately in this market the buyers are corporations and the sellers are democratic governments (us).

That’s why this is not good for people.


We aren't democratic governments. We are subjects to governments, who we must pay taxes to, and customers of corporations, who we pay if they can produce stuff we like for cheap enough.


This moves the conversation to questioning the notion of representation of a people by its government. It is true that the entire conversation about whether or not it's good if Apple can play governments off against each other in order not to pay any taxes rests in part on that assumption. That's a fine conversation to have. But in TFA and in here so far, it is assumed.

Note btw that even in your narrower definition of what government is to us, you still mention taxes, and that is precisely what is in question here. So even by your formulation everything holds, and it is still good for us if corporations can't play governments off against each other to lower their tax bill, because that's directly us footing that bill. You'd have to find some definition of government that doesn't cover that, or argue that if Apple doesn't pay taxes, all those gains are passed on to us, the people, in a better way than if they do. Through a stronger tech market leading to better tech products, or something?

Anyway I think the original assumption is fair and the discussion holds. "Cheating on your taxes = stealing from the people" is such a well-established fundamental axiom that challenging it basically changes the conversation entirely.


> "Cheating on your taxes = stealing from the people" is a such a well established fundamental axiom

Excuse me?! Cheating on your taxes is illegal. Minimizing your taxes is what every one of us does - it is perfectly normal and justified behavior.

And the discussion was not about that - companies are paying their taxes just fine. The discussion was about governments colluding to form a cartel to uniformly raise taxes. That is not OK.

Even if the stated purpose is somehow justifiable, government collusion is not good for the people. By definition governments are natural monopolies; they don't have internal competition. The only competition keeping them in check is external, and it comes in two forms: destructive (wars) and constructive (free trade). Without competition, democracy alone cannot keep governments in check - just witness the decay towards populism and autocracy, together with the rise of left- and right-wing extremism, in Western governments during the last few years. We need alternatives. We, the people, need to be able to pack our bags and go to a place with values and laws better aligned to ours. Otherwise we will end up prisoners behind barbed wire on the borders, like Eastern Europe during the Cold War, or facing fines and exit taxes like certain countries already impose on their citizens today.

In a world of bigger and bigger governments, with larger and larger budgets and deficits but smaller and crappier results, international competition is the only recourse we have left. For example, the EU would be supremely satisfied with itself right now if the USA's economic performance didn't point out that the Emperor is naked.

> argue that if Apple doesn't pay taxes, all those gains are passed on to us

Yes, smaller costs for Apple directly translate into cheaper products for us or larger profits for its shareholders - which is also us. On the other hand, that money going to the tax man will fund millions of fat bureaucrat jobs and countless wasteful government programs, out of which an extremely tiny part will actually benefit us.

> democratic governments (us)

Even if you think democratic governments represent us (a debatable idea at best), then logically you should want competition for them. Because, like us, without competition they go lazy, wasteful and abusive.


> This is also the reason why undefined behavior can affect code executing prior to the occurrence of the undefined condition, because logical deduction as performed by the compiler is not restricted to the forward direction of control flow (and also because compilers reorder code as a consequence of their analysis).

According to Martin Uecker, of the C standard committee, that is not true:

> In C, undefined behavior can not time travel. This was never supported by the wording and we clarified this in C23.

https://news.ycombinator.com/item?id=40790203


It is really hard to prevent this in an optimizing compiler. I don’t think it’s realistic. For example, loop invariants can be affected by undefined behavior in the loop body, and that in turn can affect the code that is generated for a loop condition at the start of the loop, whose execution precedes the loop body. This is a general consequence of static code analysis. Even more so with whole-program optimization.


It's also completely necessary to have any sort of reasonable language semantics. The goal is to have programmers be able to write code that does what they intend. With the C23 addition, time travelling UB doesn't exist, so programmers can write code that does what they intend up to the point of invoking UB. Good enough.

Let's say that's too difficult for compiler writers, so we bring back time travelling UB. That implies UB on a future execution path means the entire execution path has no semantics. We now have to ensure there is no UB on any future execution path to meet our goal. There are basically 4 options:

1. Rely on programmers to never write UB. This has not worked out historically.

2. Compilers must detect and/or prevent all UB statically. This is obviously impossible.

3. Runtimes must exhaustively detect and/or prevent all UB. This is both infeasible and expensive.

4. Give up on semantics for essentially all nontrivial programs. This is the situation today, but if we're going to make this the official position why should we even have a standard?


Maybe I don't understand something, but to me it seems pretty easy. What needs to be done:

1. Make a list of all UB

2. Define the sensible compiler behavior in each case (for example, let INT_MAX + 1 wrap around to INT_MIN on x86_64, just because `add` on x86_64 does that)

3. Treat this as a part of a standard, when compiling the code.

This approach allows for different compiler behavior on different architectures, better suited to each architecture. Maybe on some architectures `add` on signed numbers will generate a CPU exception on overflow, so define that as the way to behave and go with it.


The requirement for “sensible” (i.e. repeatable) behavior breaks many simple, critical optimizations like maintaining the referent of a nominally un-aliased pointer in a register.

What if there’s UB & it is aliased? Some other pointer of a different type in scope also references the same value. The “sensible” thing to do when the value is updated through the alias is…?


That works for a lot of behavior but not everything. For example:

  int f(int x) {
    static int y[] = {42, 43};
    return y[x];
  }
What behavior should `f(-1)` or `f(100)` have? What is sensible?


Desugar to pointer arithmetic, try to do a dereference like

    *(y-1)
and more than likely segfault, or return the value at that address if it's somehow valid.


I'm not seeing how this is a change. C99 also said "for which".

C99: "behavior, upon use of a nonportable or erroneous program construct or of erroneous data, for which this International Standard imposes no requirements."

Martin Uecker said that something was fixed in the C23 draft, and when asked about it, pointed to the "for which" as pertaining to just that construct and not the entire program.

I'm afraid that this fellow showed himself unreliable in that thread, in matters of interpreting the C standard. In any case, a random forum remark by a committee member is not the same thing as a committee response to a request for clarification. It has to be backed by citations and precise reasoning, like anyone else's remark.

Suppose we have a sequence of statements S1; S2; S3 where S3 contains the expression i + 1, i being of type int, and nothing in these statements alters the value of i (it is live on entry into S1 and there is no other entry). It is valid for S1 to be translated according to the supposition that i is less than INT_MAX. Because if that is not the case, then S3 invokes undefined behavior, and S3 is unconditionally reachable via S1.

The whole idea that we can have an __notreached expression which does nothing but invoke UB is predicated on time travel (time travel at program analysis time: being able to reason about the program in any direction). Since __notreached invokes UB, the implementation may assume that the control flow does not reach it and behave accordingly. Any statement which serves as a gateway to __notreached itself invokes undefined behavior, and is therefore assumed unreachable and may be deleted. This reasoning propagates backwards, and so the optimizer can simply delete a whole swath of statements.

Backwards reasoning has been essential in the implementation of compilers for decades. Basic algorithms like liveness analysis involve scanning basic blocks of instructions in reverse order! How you know that a variable is dead at a given point (so its register can be reused for another value) is due to having scanned backwards: the next-use information is a peek into the future (what will be the future when that instruction is running).

And, about the question of whether undefined behavior can make the whole program undefined, the answer is maybe. If there is no way to execute the program such that undefined behavior is avoided, then the whole program is undefined. If the situation can be deduced while it is being translated, then the translator can stop with a diagnostic message.

E.g.:

  #include <stdio.h>

  int main() // in C23, () now means the same as (void)
  {
     printf("hello, world\n");
     return 0/0;
  }
This program does not have a visible behavior of printing the hello message. The 0/0 division is undefined, and amounts to a declaration that the printf statement is unreachable. The implementation is free to delete that statement, or to issue a diagnostic and not translate the program.

Uecker is right in that there are limits on this. If a program issues some output (visible effect) and then performs input, from which it obtains a value, and that value is then embroiled in a calculation that causes undefined behavior, that previous visible effect stands. The whole program is not undefined. It's something like: the program's execution becomes undefined at the point where it becomes inevitable that UB shall occur. That could be where the value is prepared that will inevitably cause the erroneous calculation. So, as far back as that point of no return, the implementation could insert a termination, with or without a diagnostic.

Undefined behavior is not a visible behavior; it doesn't have to be ordered with regard to visible behaviors.


Funny, I did quite a deep dive on this same issue about two years back, and came to exactly the opposite conclusion: keep ASDF, and lean on Nix to do pinning. I now maintain my own Common Lisp package repository, testing daily against nightly SBCL etc, and I've learned that a lot of CL stability is just because test suites are never run. Nix goes a long way to shield me from the haphazard breakages that other commenters mentioned, but of course it comes with a huge downside: you need to use Nix :)

(just kidding)

Code at https://github.com/hraban/cl-nix-lite if someone is interested.

Obviously since this was written by Fukamachi you know it will be good. Much respect to the man.


Really unfortunate that Guix never took off. Guile is so much nicer than Nix slop.


At the risk of derailing the conversation (although Guix is a lisp so maybe not): I agree 100% but also maybe Nix's pragmatism is why it's more popular? "Pragmatism" being a programming language euphemism for "untyped hacky mess".


>Guix never took off

Why is this in past tense? Guix development is active. It's just Nix is more popular and it's understandable since it's the original idea and it's older.


Guix even has a very active, high-quality blog where maintainers detail major technical accomplishments, long-term goals, etc.: https://guix.gnu.org/en/blog/

From here it seems like they're growing and advancing well. I wish I could find ready historical data on the numbers of packages and services from, say, 4 years ago vs. today, though. I could have sworn Repology used to show year over year stuff but I can't find it now.


The biggest issue for me is how slow guix is when doing guix pull.

Also, the amount of space taken is unacceptable. Several times I had my HD filled with guix garbage.


The nixpkgs upstream CL infrastructure is also recently updated and IMHO excellent: https://nixos.org/manual/nixpkgs/stable/#lisp


If you want a basic version of this you can just use the built-in:

  system.autoUpgrade = {
    enable = true;
    flake = "github:you/dotfiles";
  };


I became an actor and moved to USA (NYC).

Can't recommend it for the pay but it's nice being around extroverts :) or maybe that's just Americans in general.

Have you thought about contracting? I used to be a mercenary and switching every 6 months kept it somewhat fresh. It was a bit more exciting than a permanent position.


I actually wrote a script which creates a trampoline launcher for this. It has its flaws but it solves the spotlight issue, and supports pinning to Dock across updates.

Available as a plug-and-play module for nix-darwin and home-manager: https://github.com/hraban/mac-app-util


This is great! I run a script with mkalias[1] which works fine but all icons have this ugly arrow. Yours works perfectly - only thing missing is the icon in the spotlight search.

EDIT: Hm seems to not affect every app.

[1] https://github.com/reckenrode/mkalias

