
And if ties within that second were resolved randomly, what perverse optimization would you predict?


Well, they did a Big Damn Rewrite to use it for Wallet/Pay, and, in the process, dropped a bunch of features and made it infuriatingly tied to a phone number, because the rewrite was really for the Indian market.


My understanding is that only Pay in the US and India is the Flutter version; Wallet is not using Flutter. This also shook the Flutter boat on Reddit a while back.


They actually wrote the newest Wallet app without Flutter, so it's a step backwards there.


NumPy or Matlab. And it's possible the "original image" is multispectral (more than 3 channels), so you need to choose an arbitrary 3-channel projection.
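
For instance, a minimal NumPy sketch (the band indices and the random mixing matrix here are arbitrary placeholders, not anything canonical):

     import numpy as np

     # Hypothetical multispectral cube: height x width x C bands, C > 3.
     img = np.random.rand(64, 64, 8)

     # One arbitrary projection: just pick three bands to serve as R, G, B.
     rgb = img[:, :, [4, 2, 1]]

     # Or mix all bands down through a random 3-column projection matrix,
     # then rescale to [0, 1] for display.
     P = np.random.rand(img.shape[-1], 3)
     rgb = img @ P
     rgb = (rgb - rgb.min()) / (rgb.max() - rgb.min())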


After a few more leading attempts, I managed to give this prompt at https://beta.openai.com/playground:

     What problem does the following pseudocode have?

     def some_worker
             if disabled_via_feature_flag
                  logger.info("skipping some_worker")
             some_potentially_hazardous_method_call()
And received this response:

     The problem with this pseudocode is that there is no "end" keyword to close off the "if" statement. This means that the code after " some_potentially_hazardous_method_call()" will always be executed, even if the "disabled_via_feature_flag" condition is true.
And that's with GPT-3 without any special fine-tuning. Of course, the name `some_potentially_hazardous_method_call` is pretty leading in itself. I rewrote the prompt slightly more realistically, as:

     What problem does the following code have?

     def optionally_do_work():
             if disabling_flag:
                  logger.info("skipping the work due to flag")
             do_work()
and received:

     The problem is that the code will still try to do the work, even if the flag is set.
This does seem like a pretty trivial easier-than-fizzbuzz question to be asking, though, since it's so encapsulated.
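
For what it's worth, the fix in both cases is presumably an early return; a minimal corrected sketch (with `disabling_flag`, `logger`, and `do_work` standing in for real definitions):

     def optionally_do_work():
         if disabling_flag:
             logger.info("skipping the work due to flag")
             return  # without this, do_work() runs regardless of the flag
         do_work()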


I've also never seen ads in the Gmail app. Maybe it's because of G Suite.


For example, the scourge of "people also ask" at the top of search results, which pops in where the top result was a second ago, and has a randomly-generated container ID to prevent easy blocking. Not an ad, but, equivalently, content that I didn't ask for but that Google, for some reason, clearly really wants me to click on.


See the Borel–Cantelli lemma.
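
Roughly, for a sequence of events A_1, A_2, ...:

     \sum_n \Pr(A_n) < \infty \;\implies\; \Pr\big(\limsup_n A_n\big) = 0,

     \text{and, if the } A_n \text{ are independent,}\quad \sum_n \Pr(A_n) = \infty \;\implies\; \Pr\big(\limsup_n A_n\big) = 1.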


I looked this up but I’m not sure I grasp your point.

Are you saying that:

- given a long string, we might ask “can this string be found in pi?”

- the probability of finding a long string in pi is infinitely small

- the number of possible strings in pi is infinitely large

- it’s not possible to decide if the answer is yes or no?


You have a citation on that for me to read?


Another reason scientists often don't release code is that code is not considered the valuable artifact produced by research--in more methods-focused fields in particular, it's instead the mathematical model, as embodied in some LaTeX in the published paper. "Reproducing" the results then doesn't consist of trivially rerunning the same code, but of reimplementing it, possibly in a different language. This could maybe be seen as valuable in that the reproduction will surface both implementation errors and logic/modeling errors (though, really, having the source facilitates both).

In fact, more applications-focused researchers (those who combine real data with established models, for instance) tend to write higher-level scripts stringing together more-engineered tools. In this case, open sourcing the scripts would be both easier and more pointless, since they will be almost the same as what is stated in English, tables, and plots in the paper. Epidemiology is usually this way, in my experience, though the linked repository seems to have some of both flavors.

The underlying misunderstanding in that issue thread seems to me to be a disagreement about what the main valuable byproducts of scientific programming are. Professional programmers will naturally think it's the code, but, traditionally, the code has been seen in academia as only a temporary vehicle for the model and its parameters, which are the real product. (Also, the "program" might really need to include the lab, its papers, and its institutional knowledge, which is harder to package and open-source.)

Right or wrong, the assumption is that any competent grad student could reproduce the result in a week or so of (admittedly unnecessary) greenfield coding. But this is clearly not ideal, and newer work does strive for more professionalism in coding, open-source-by-default, and therefore faster replication. The project in question clearly predates this trend.

(Of course, a third reason academics don't open source is that some secrecy is required before publication in competitive fields. On a months-long project, you might not want to be committing publicly before publication. But this isn't much of an excuse.)


Using the OS to invert the whole screen can be put within easy reach via a keystroke or notification-area button on Android, iOS, Windows, Mac, and Ubuntu. It doesn't require installing any new software (it's usually somewhere in "accessibility"), and, most importantly, it's consistent. When you have a "dark mode" for one site or app, it fights with the surrounding browser chrome being light, and switching to other apps is jarring. And when you try to enable a dark UI theme in the OS, inevitably there will be elements that missed the restyling, and so retain black-on-white, or, even worse, are now black-on-black.

I get that a "dark mode", done properly, requires more than just inverting colors, but this consistency is really nice.

