phyrex's comments | Hacker News

That's been resolved


“Resolved” is a great whitewash. It wasn’t a bug but a “feature”: it was intentional.


Please be aware that handling thermal paper is super unhealthy: https://pmc.ncbi.nlm.nih.gov/articles/PMC5453537/


Not necessarily, if you choose a friendly alternative. In Germany we have https://www.oekobon.de/ ; I guess there are similar offers for other markets. As always, there are downsides. In this case, the eco version comes with a blue base color.


My daily supermarket uses these, and I keep old receipts for personal finance evaluation. They definitely do not hold up as well as the website advertises: as soon as they get a few crinkles, they darken and become really hard to read.

Ideally, we’d all get to online-only receipts and stop the paper madness already. That said, it’s still miles ahead of ordinary thermal paper.


Wow, this is great! Thanks!


There's a reason why a lot of the Costco receipt checkers wear nitrile gloves now.


It's a monorepo spanning a dozen languages (good luck with ctags) that tens of thousands of developers commit to every day. Even if you spent the hours indexing it locally, it would be out of date right away.


You're already expected to learn a number of exotic (Hack) or old (C++) languages at Meta, so I'm pretty sure that's not the reason.

To quote from another comment I made:

> I don't have any numbers, but we know that the Meta family of apps has ~3B users, and that most of them are on mobile. Let's assume half of them are on Android, and you're easily looking at ~1B users on Android. If you have a NullPointerException in a core framework that somehow made it through testing, and it takes 5-6 hours to push an emergency update, then Meta stands to lose millions of dollars in ad revenue. Arguably even one of these makes it worth moving to a null-safe language!


An NPE means an incomplete feature was pushed to production in the first place. It would still be incomplete or incorrect in Kotlin, and would still need a fix pushed to production.

It's even worse with Kotlin: without the NPE to warn that something is wrong, the bug could persist in prod much longer, potentially impacting the lives of a billion users for far longer than it would have if the code had remained in the sane Java world.


How would a bug persist in production if you get a compile-time error that prevents you from running the application? You don't seem to know what you're talking about.

Even if I'm charitable with my interpretation, I'm not sure I get your point. If you refuse to handle the case where something is nullable and you convert it to non-null via .unwrap() (a Rust perspective; I haven't used Kotlin), then you will get your NullPointerException at that location, so Kotlin is just as capable of producing NPEs as Java. But here is the thing: the locations where you can get NPEs are limited to the places where you have called .unwrap(), which is much easier to search through than the entire codebase, which is what you'd have to do in Java, where every single line could produce an NPE. So if you do push incomplete code to production, you at least have strong markers in the code indicating that it is unfinished.
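For a Java-flavored version of the same argument: java.util.Optional plays the role of the nullable wrapper, and orElseThrow() is the analogue of .unwrap(). A minimal sketch (the function and names are illustrative, not from the thread):

```java
import java.util.NoSuchElementException;
import java.util.Optional;

public class UnwrapDemo {
    // A lookup that may legitimately find nothing.
    static Optional<String> findUser(int id) {
        return id == 1 ? Optional.of("alice") : Optional.empty();
    }

    public static void main(String[] args) {
        // The type forces the caller to acknowledge absence:
        System.out.println(findUser(1).orElse("unknown")); // alice

        // The only place an exception can originate is the explicit
        // orElseThrow() call -- easy to grep for, unlike a bare field
        // access that might NPE anywhere in a plain Java codebase.
        try {
            findUser(2).orElseThrow();
        } catch (NoSuchElementException e) {
            System.out.println("failure surfaced at the unwrap site");
        }
    }
}
```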


"The" reason is not what I'm speculating on, because I don't think a singular reason is likely to exist.

There is likely a mix of reasons -- of which NPE avoidance is almost certainly one. And hiring/talent management is almost always another, when making technology choices. Particularly when choices are coupled with a blog post on the company's tech blog.


I don't have any numbers, but we know that the Meta family of apps has ~3B users, and that most of them are on mobile. Let's assume half of them are on Android, and you're easily looking at ~1B users on Android. If you have a NullPointerException in a core framework that somehow made it through testing, and it takes 5-6 hours to push an emergency update, then Meta stands to lose millions of dollars in ad revenue. Arguably even one of these makes it worth moving to a null-safe language! I know your point is that you need that sort of crazy scale to make it worth it, and that's true. I'm just annoyed at the comments suggesting that the move to Kotlin is just to pad resumes, or because Meta let a bunch of bored devs run amok.


As a consumer you're only ever seeing the tip of the iceberg of Meta apps. There are at least 3 major user groups: The consumers, the producers, and the advertisers, and each of them is at least as complex as the others. Then you have to consider the global audiences, and that e.g. ads are handled very differently in the EU than in North America, and that needs to be accounted for.


From the article:

> The short answer is that any remaining Java code can be an agent of nullability chaos, especially if it’s not null safe and even more so if it’s central to the dependency graph. (For a more detailed explanation, see the section below on null safety.)


One of my biggest gripes with an otherwise strictly typed language like Java is the decision to allow nulls everywhere. It is particularly annoying since implementing something like nullable types would have been quite trivial in Java.
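For illustration, here is roughly what such a wrapper could look like. Note it leans on generics, which only arrived in Java 5, and java.util.Optional eventually shipped essentially this shape in Java 8; the class and method names below are hypothetical:

```java
// A hypothetical minimal nullable wrapper -- not a real Java API.
final class Nullable<T> {
    private final T value; // null means "absent"

    private Nullable(T value) { this.value = value; }

    static <T> Nullable<T> of(T value) {
        if (value == null) throw new IllegalArgumentException("use absent()");
        return new Nullable<>(value);
    }

    static <T> Nullable<T> absent() { return new Nullable<>(null); }

    boolean isPresent() { return value != null; }

    // Absence must be handled explicitly at the call site.
    T orElse(T fallback) { return value != null ? value : fallback; }
}

public class NullableDemo {
    public static void main(String[] args) {
        Nullable<String> missing = Nullable.absent();
        System.out.println(missing.orElse("default"));     // default
        System.out.println(Nullable.of("x").isPresent());  // true
    }
}
```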


Would it have been trivial and obvious for Java (and would Java still have been "not scary") back in the 90s when it came out?


It wouldn't have been particularly hard from a language, standard library, and virtual machine perspective. It would have made converting legacy C++ programmers harder (scarier). Back then, the average developer had a higher tolerance for defects because the consequences seemed less severe, and it was common to intentionally use null variables to indicate failures or other special meanings. It seemed like a good idea at the time.


> It would have made converting legacy C++ programmers harder (scarier).

And that, right there, is all the reason they needed back then. Sun wanted C++ developers (and C developers, to some extent) to switch to Java.


It would have been trivial for record types to be non-nullable by default.

Record types are 3 years old, and they are already obsolete with regard to compile-time null checking. This is a big problem in Java: a lot of new features have become legacy code and are now preventing future features from being included out of the box.
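To make the complaint concrete: nothing about record syntax rejects null components, and getting a non-null guarantee still takes a hand-written compact constructor. A sketch (Java 16+; the record names are made up):

```java
import java.util.Objects;

public class RecordNullDemo {
    record Point(String label) {}

    // Non-null only because we wrote the check ourselves.
    record CheckedPoint(String label) {
        CheckedPoint { Objects.requireNonNull(label, "label"); }
    }

    public static void main(String[] args) {
        Point p = new Point(null);             // compiles and runs fine
        System.out.println(p.label() == null); // true

        try {
            new CheckedPoint(null);
        } catch (NullPointerException e) {
            System.out.println("rejected, but only by our manual check");
        }
    }
}
```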

This is why the incremental approach to language updates doesn't work: you can't change the foundation, and the foundation grows with every release.

I am awaiting the day Oracle releases class2 and record2 keywords for Java with sane defaults.


Tony Hoare (the guy who originally introduced the concept of null for pointers in ALGOL W) gave a talk on it being his "billion dollar mistake" in 2009: https://www.infoq.com/presentations/Null-References-The-Bill...

Now, this wasn't something that just dropped out of the blue - the problems were known for some time before. However, it was considered manageable, treated similarly to other cases where some operations are invalid on valid values, such as division by zero triggering a runtime error.

The other reason why there was some resistance to dropping nulls is because it makes a bunch of other PL design a lot easier. Consider this simple case: in Java, you can create an array of object references like so:

   Foo[] a = new Foo[n];  // n is a variable so we don't know size in advance
The elements are all initialized to their default values, which for object references is null. If Foo isn't implicitly nullable, what should the elements be in this case? Modern PLs generally provide some kind of factory function or equivalent syntax that lets you write initialization code for each element based on index; e.g. in Kotlin, arrays have a constructor that takes an element initializer lambda:

   a = Array(n) { i -> Foo(...) }
But this requires lambdas, which were not a common feature in mainstream PLs back in the 90s. Speaking more generally, it makes initialization more complicated to reason about, so when you're trying to keep the language semantics simple, this is a can of worms that makes it that much harder.
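Java did eventually grow an equivalent of that initializer-lambda pattern once lambdas arrived in Java 8. A sketch using streams (the record here needs Java 16+ and stands in for the Foo above):

```java
import java.util.Arrays;
import java.util.stream.IntStream;

public class ArrayInitDemo {
    record Foo(int index) {} // stand-in for the Foo in the example above

    public static void main(String[] args) {
        int n = 3; // size only known at runtime

        // Each slot is produced by the lambda, so none is ever null:
        Foo[] a = IntStream.range(0, n)
                           .mapToObj(Foo::new)
                           .toArray(Foo[]::new);

        System.out.println(Arrays.toString(a));
        // [Foo[index=0], Foo[index=1], Foo[index=2]]
    }
}
```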

Note that this isn't specific to arrays, either. For objects themselves, the same question arises wrt not-yet-initialized fields, e.g. supposing:

   class Foo {
      Foo other;   
      Foo() { ... }
   }
What value does `this.other` have inside the constructor, before it gets a chance to assign anything there? In this simple case the compiler can look at control flow and forbid accessing `other` before it's assigned, but what if instead the constructor does a method call on `this` that is dynamically dispatched to some unknown method in a derived class that might or might not access `other`? (Coincidentally, this is exactly why in C++, classes during initialization "change" their type as their constructors run, so that virtual calls always dispatch to the implementation that will only see the initialized base class subobject, even in cases like using dynamic_cast to try to get a derived class pointer.)
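Java took the opposite tradeoff from C++ here: virtual calls in a constructor dispatch to the derived override, which can then observe fields that haven't been initialized yet. A small demonstration (class names are illustrative):

```java
public class CtorDispatchDemo {
    static class Base {
        Base() {
            // Dispatches to Derived.describe() while the Derived part
            // of the object is still uninitialized.
            System.out.println(describe());
        }
        String describe() { return "base"; }
    }

    static class Derived extends Base {
        String name = "derived"; // assigned only after Base() returns

        @Override String describe() { return "name = " + name; }
    }

    public static void main(String[] args) {
        Derived d = new Derived();        // prints "name = null"
        System.out.println(d.describe()); // prints "name = derived"
    }
}
```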

Again, you can ultimately resolve this with a bunch of restrictions and checks and additional syntax to work around some of that, but, again, it complicates the language significantly, and back then this amount of complexity was deemed rather extreme for a mainstream PL, and so hard to justify for nulls.

So we had to learn that lesson from experience first. And, arguably, we still haven't fully done that, when you consider that e.g. Go today makes this same exact tradeoff that Java did, and largely for the same reasons.


Progressive typing of an untyped code base. Types that are too complex to represent in that type system.


Yep, adopting strict after the fact is a different conversation, but one that has been talked about a bunch and there is even tooling to support progressive adoption.

Types that are too complex... hmmmm - I'm sure this exists in domains other than the bullshit CRUD apps I write. So yeah, I guess I don't know what I don't know here. I've written some pretty crazy types though, not sure what TypeScript is unable to represent.


Progressive code QA in general is IMO an underexplored space. Thankfully linters have now largely given way to opinionated formatters (xfmt, black, clang-format), but in the olden days I wished there were a way to check in a parallel exemptions file that could be periodically revised downward but would otherwise function as a line in the sand, to at least prevent new violations from passing the check.

I'd be interested in similar capabilities for higher-level tools like static analyzers and so on. The point is not to carry violations long term, but to be able to burn down the violations over time in parallel to new development work.
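The ratchet described above fits in a few lines. A toy sketch (the baseline number would normally live in a checked-in file; the numbers here are made up):

```java
public class LintRatchet {
    // Pass while violations don't exceed the recorded baseline;
    // when they drop, invite the team to ratchet the baseline down.
    static boolean check(int current, int baseline) {
        if (current > baseline) {
            System.out.println("FAIL: " + current + " violations > baseline " + baseline);
            return false;
        }
        if (current < baseline) {
            System.out.println("OK: baseline can be lowered to " + current);
        }
        return true;
    }

    public static void main(String[] args) {
        int baseline = 120; // hypothetical checked-in value
        System.out.println(check(118, baseline)); // burned down: passes
        System.out.println(check(125, baseline)); // new violations: blocks
    }
}
```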


This is how we introduced and work with clang-tidy. We enabled everything we eventually want to have, then individually disabled currently failing checks. Every once in a while, someone fixes an item and removes it from the exclusion list. The list is currently at about half the length we started out with.
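For reference, such a setup typically lives in a `.clang-tidy` file along these lines (the specific excluded checks below are made up for illustration):

```yaml
# .clang-tidy: enable the categories we eventually want wholesale,
# then list the currently failing checks individually. Fixing one
# means deleting its line from this exclusion list.
Checks: >
  modernize-*,
  readability-*,
  -modernize-use-trailing-return-type,
  -readability-magic-numbers
WarningsAsErrors: '*'
```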


> not sure what TypeScript is unable to represent.

I want a type that represents a string of negative prime numbers alternating with palindromes, separated by commas.


Oh yeah, you have to get into branded types for this I think, which means a parsing step. Fair point.


Important note: TS doesn't let you enable strict mode on a per-file basis. Flow allowed that.


* Common functions, such as parsing functions in languages that don't support function overloading

* "equals" and other global functions


TypeScript has solutions for both of those problems: conditional types and generics.


At my workplace, coding automatically silences your message notifications!


Sounds interesting. How does it work?


I don't know the specifics, but we already collect the time people spend coding for productivity telemetry, so we already have that signal. I assume the team that maintains our chat system either hooks into an editor notification, or into the Kafka-like event queue that feeds into the data warehouse. It likely only works if you're using the officially blessed IDE, but very nearly everyone does.


That does sound Kafkaesque, but maybe it's a weird example where intrusive surveillance isn't completely bad.


The actual non-anonymized data is closely guarded to make sure that nobody gets any funny ideas about mistaking it for actual performance measurement!


You don't get access to randomness or time functions or anything else that could change the output of a function with the same input.

