
Actually, having gotten a PhD and attended many journal clubs, I'd say "seen his papers roundly critiqued on the merits at too many journal clubs to count" is as strong a point as anybody can make.

Journal clubs are vicious. Everybody's got their brains and knives out …

Actually, having read what you wrote, saying that “journal clubs are vicious … everybody’s got their … knives out” undermines your argument by implying that even good papers will be “roundly critiqued” in that setting, making this about as weak a point as anybody can make.

On further reflection, what I do find to be a strong point, though one in favor of Loeb’s argument, is that it triggered the scientists who wrote this comment and its grandparent to underscore the point of the article by more or less heaping scorn on the author, listing their own credentials, and proceeding to make — forgive me — non sequitur arguments from authority instead of substantively refuting the claims.

If this is what happens to a former department chair at Harvard when they question orthodoxy, it indeed does not augur well for less-credentialed researchers, regardless of the merit of their work.


The only good papers are the ones that survive multiple rounds of critique from a wide range of experts. Even great papers have problems; the point of journal clubs is to argue out all the varying reasonable lines (not the fringe ones) along which a paper could be reaching a false conclusion (typically due to bad experimental technique or mistaken data analysis).


How would that have worked out in the example you have given, prior to a successful delayed-choice quantum eraser experiment? Would Bell's theorem have been consigned to the scrap heap? Would delayed-choice quantum eraser experiments then have been performed when they were?

By these standards, Darwin should have been rejected on the grounds of his faulty model of biological inheritance.

There needs to be some slack, because sometimes critics are more convincing than they are right.


No, the point of the delayed-choice quantum eraser is that it's the first really simple experimental setup that non-physicists can understand, and see how it violates the simple assumptions of classical physics. It's sort of the Hershey-Chase experiment, but for QM. If DCQE hadn't been done, Bell would have been fine, as the last of the loophole-free experiments are being run now.


To compress further, journal clubs [war-game a paper's methods, results, and conclusions], so to speak.


This may or may not change your mind on the topic.

https://www.cjr.org/analysis/capital-b-black-styleguide.php

From the article:

To capitalize Black, in her view, is to acknowledge that slavery “deliberately stripped” people forcibly shipped overseas “of all other ethnic/national ties.” She added, “African American is not wrong, and some prefer it, but if we are going to capitalize Asian and South Asian and Indigenous, for example, groups that include myriad ethnic identities united by shared race and geography and, to some degree, culture, then we also have to capitalize Black.”


I never saw the capitalization as being about ethnic or racial identity, but rather as following from the fact that e.g. countries and continents are capitalized.

I think it’s even harder to justify capitalizing “Indigenous”, no?

“Asian” and “South Asian” are broad enough that they don’t refer to any particular ethnicity, nationality, or culture.

This phenomenon exists everywhere.

(Disclaimer: English is my second language; in my native language we don’t capitalize any of these words, and even in English I would only do so when required by grammar, so it may strike me differently than it does others.)


Cultures without their own country are capitalized too.

Capitalizing "indigenous" is unusual, but capitalizing "Native American" is standard.


So by that logic Obama should be small capital black since his father was from the wrong side of Africa and not a slave. When will we have our first Black president who isn't just black?

And what of the whites in Africa? Do we capitalize only the Boers, who suffered a genocide at the hands of the British? Is Musk the only capital-W White we can talk about? That poor richest man with such historic baggage. Truly the best role model for all African Americans.

Oh the racialist hierarchies we build.


I really do think this smacks of a familiar problem: Americans imposing their lens of reality onto the rest of the English-speaking world.


So by that logic Obama should be small capital black since his father was from the wrong side of Africa and not a slave … Oh the racialist hierarchies we build.

Not sure why you’re injecting inflammatory bile into an otherwise civil thread. If you read past the first sentence, you’ll refute your own point. Think before trolling.


The people cheering this on will not be protected when the mob comes for you.

The facile sophistry of comments like yours is becoming obnoxious. An actual fascist mob already came for us last week and we were not protected, get it?

There can be interesting intellectual arguments about whether this was the right move but breathlessly calling ToS enforcement by private companies a “mob” while ignoring an actual mob with lead pipes, guns, and bombs ain’t it.


Yes, you should delete tests for everything that isn't a required external behavior or a bugfix, IMO.

For the edification of junior programmers who may end up reading this thread, I’m just going to come right out and say it: this is awful advice in general.

For situations where this appears to be good advice, it’s almost certainly indicative of poor testing infrastructure or poorly written tests. For instance, consider the following context from the parent comment:

Otherwise you're implicitly testing the implementation, which makes refactoring impossible.

A big smell here is if the large majority of your tests are mocked. This might mean you're testing at too fine-grained a level.

These two points are in conflict and help clarify why someone might just give up and delete their tests.

The argument for deleting tests appears to be that changing a unit’s implementation will cause you to have to rewrite a bunch of old unrelated tests anyway, making refactoring “impossible.” But indeed that’s (almost) the whole point of mocking! Mocking is one tool used for writing tests that do not vary with unrelated implementations and thus pose no problem when it comes time to refactor.
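
To make that concrete, here's a minimal sketch (hypothetical names, no particular test framework) of a unit test that mocks a collaborator behind an interface. The assertion is about observable behavior, so the unit's internals can be refactored freely without touching the test:

    // The unit under test depends on an abstraction, not a concrete service.
    interface PriceSource {
      priceOf(sku: string): number;
    }

    function totalPrice(skus: string[], prices: PriceSource): number {
      return skus.reduce((sum, sku) => sum + prices.priceOf(sku), 0);
    }

    // A tiny hand-rolled mock; the test never touches the real price service.
    const fakePrices: PriceSource = {
      priceOf: (sku) => (sku === "apple" ? 2 : 3),
    };

    // Asserts on the returned total, not on how totalPrice computes it.
    console.assert(totalPrice(["apple", "pear"], fakePrices) === 5);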

Now there is a kernel of truth about an inordinate amount of mocking being a code smell, but it’s not about unit tests that are too fine-grained but rather unit tests that aren’t fine-grained enough (trying to test across units) or just a badly designed API. I usually find that if testing my code is annoying, I should revisit how I’ve designed it.

Testing is a surprisingly subtle topic and it takes some time to develop good taste and intuition about how much mocking/stubbing is natural and how much is actually a code smell.

In conclusion, as je42 said below:

Make sure your tests run (very) fast and are stable. Then there is little cost to pay to keep them around.

The key, of course, is learning how to do that. :)


Did you ever actually refactor code with a significant test suite written under heavy mocking?

The mocking assumptions generally end up re-creating the behavior, which creates the ossification. Lots of tests simply mock 3 systems to test that the method calls the 3 mocked systems with the proper API -- in effect testing nothing, while baking lower-level assumptions into the tests for people refactoring what actually matters.

You might personally be a wizard at designing code to be beautifully mocked, but I've come across a lot of it and most has a higher cost (in hampering refactoring, reducing readability) than benefit.


Did you ever actually refactor code with a significant test suite written under heavy mocking?

I have. The assumptions you make in your code are there whether you test them or not. Better to make them explicit. This is why TDD can be useful as a design tool. Bad designs are incredibly annoying to test. :)

For example, if you have to mock 3 other things every time you test a unit, it may be a good sign that you should reconsider your design, not delete all your tests.


It sounds like your argument is “software that was designed to be testable is easy to test and refactor”.

I think a lot of the gripes in the thread are coming from folks who are in the situation where it’s too late to (practically) add that feature to the codebase.


Mocks allow you to test that a certain method was called, with certain parameters, and in a certain order.

That's extreme test implementation coupling.

Most don't use those features, but in my experience mocks indicate implementation coupling.
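
For a hypothetical sketch of that coupling (hand-rolled mock, no particular framework): a test that asserts on the exact methods, arguments, and order breaks under any refactor that merges or reorders those calls, even when behavior is unchanged:

    interface Logger {
      log(msg: string): void;
    }

    // An interaction mock that records every call it receives.
    function makeLoggerMock() {
      const calls: string[] = [];
      const logger: Logger = { log: (msg) => { calls.push(msg); } };
      return { logger, calls };
    }

    function processOrder(id: string, logger: Logger) {
      logger.log(`start ${id}`);
      logger.log(`done ${id}`);
    }

    // This pins down the exact call sequence: merging the two log calls
    // into one fails the test even if the order is still processed correctly.
    const { logger, calls } = makeLoggerMock();
    processOrder("42", logger);
    console.assert(JSON.stringify(calls) === JSON.stringify(["start 42", "done 42"]));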


You seem to think the rationale is test performance, but from the GP it seems that the rationale is avoiding tests that ossify implementation details against refactoring, rather than tests that protect external behavior to support refactoring.


I think you wrote this before I finished elaborating on my comment. :)


> Mocking is one tool used for writing tests that do not vary with unrelated implementations

What if I chose the wrong abstractions (coupling things that shouldn't be coupled and splitting things in the wrong places) and have to refactor the implementation to use different interfaces and different parts?

All the tests will be testing the old parts using the old interfaces and will all break.


"... can this guy even be considered black? He looks like a regular person ..."

This subtly dehumanizing idea that certain races are normal while others are irregular is the root of many problems.


[flagged]


So what? We should not be normalizing this kind of language.


This headline rewrite does a disservice.

It editorializes away the point of the post, which is that, according to the author, "Apple just killed offline web apps while purporting to protect your privacy [by forcing WebKit to delete all local storage after 7 days]."


I didn’t know the part after # in a URL doesn’t get sent to the server.

While that is technically true, please know that it is not true in a way that is meaningful for many threat models. The JavaScript running on the page can trivially detect, inspect, and log changes to the location hash.

More information here:

https://developer.mozilla.org/en-US/docs/Web/API/WindowEvent...
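
To make that concrete, here's a minimal sketch of how a page's own JavaScript can observe the fragment even though the browser never transmits it in the HTTP request:

    // Read the fragment (everything after #) directly at load time.
    console.log(window.location.hash);

    // Or watch every subsequent change to it; nothing stops a handler
    // like this from shipping the value off to a server.
    window.addEventListener("hashchange", () => {
      console.log("fragment changed to:", window.location.hash);
    });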


Isn’t this true of any client-side site? If the client-side JS has access to some information, it’s always possible for the server to inject custom JS that returns the data. Theoretically this setup provides little additional security, but it does allow (for example) people to use a client pinned to a version they’ve verified not to leak information and to collaborate without worrying about leakage. (Excalidraw developer here)


Thanks for the interesting post. This isn’t a critique or evaluation of your work; you did mention that the client-side JS can read the location hash.

I agree with your comment. I just don’t want anyone to think that a key stored in the location hash is somehow truly protected from ever getting back to the server, which was how the comment to which I responded sounded to me.


You're saying that "[you] hate the way the cryptocurrency world has sucked all the air out of the decentralized room" but also that "we already have tons [of decentralized apps] that do not [use cryptocurrency]." That seems contradictory.


It's not contradictory. When they say they hate that cryptocurrency has sucked all the air out of the room, they're saying that they hate that nobody is talking about the non-cryptocurrency alternatives, not that they don't exist.


”I've read all of the founders' blog posts now and I'm convinced they've packed every anti-pattern they could find into this startup .... they [list their values] ... yet there’s essentially nothing about what the product is.”

Imagine you and your friends excitedly announcing on your personal homepages that you are setting out to build your passion project into a going concern “not driven by power or greed, but by accomplishment and self-fulfillment” [1], and then being flooded with derision, in a community of supposed hackers, because your post wasn’t a good enough press release.

For some people, the actual human excitement of friends earnestly getting together to build something new primarily because they want to see it in the world is so foreign that they can only snark about it.

Hopefully that kind of response vindicates their values more than it demoralizes.

[1]: https://blog.jessfraz.com/post/new-golden-age-of-building-wi...


Vaporware has been looked down upon in our industry for decades:

https://en.wikipedia.org/wiki/Vaporware

Until you have a product, there's nothing to talk about. Plenty of teams that seem good end up producing nothing. Announcements like this are a huge red flag to people who have seen this pattern many times over.


Until you have a product, there's nothing to talk about.

Yeah ... if you’re a marketer. People who actually make stuff are allowed to be excited about what they‘ve been working on in their garages and get to tell people more about it on their own schedules. They don’t owe you a feature comparison table every time they talk about their project.

In fact, they don’t owe you anything at all.


[flagged]


As this announcement stands, I ... won't be using what they build.

Considering you literally haven’t been told — let alone allowed to use — what they’ve built, that sounds like a reasonable prior.


When I use products for business I expect some level of professionalism. It's way too easy to get hung out to dry with a critical piece of infrastructure and a dead company behind it. What they've done here with the vaporware and vanity posts is a classic misstep. It's not product or customer centered, the two things you absolutely must be to succeed in the enterprise world.


[flagged]


This is literally the company blog that we're commenting on, not their personal blog. There are also 3 posts on the HN front page about this same topic, and not one of them has any substance. So yes, I'm going to stick with my decades of experience and say they're producing red flags far faster than they're producing product.


I’m sure your indignation can be better used somewhere else than commenting at length on something you’re not interested in at all.


I'm very interested in how our industry operates, how purchasing decisions are made, and how investors vet opportunities. This intersects all of those areas. I think it's probably worth listening to what people are complaining about instead of just lashing out at them. This isn't just me complaining for no reason; there's a lot wrong with this announcement, and it doesn't lead to a healthy industry.


And it seems others were thinking the same way: 'After you won the Turing Award, a comment appeared online saying, “Why did she get this award? She didn’t do anything that we didn’t already know.”'

This is not only distasteful but false.

First, modules in Modula were primarily about scope not abstraction.

Liskov pioneered a core aspect of abstraction which, among other things, allows modules to rely on abstract interfaces rather than their implementations (or a type or class rather than its subtypes or subclasses).

Abstraction only works reliably if any provably true property of a type is also provably true about its subtype.

Want to learn more? You’ll find it referenced in the literature as the Liskov Substitution Principle.

https://dl.acm.org/citation.cfm?doid=197320.197383
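
As a hypothetical illustration (the classic textbook example, not Liskov's own): a Square that subclasses Rectangle breaks substitutability, because a property provable for every Rectangle (setting the width leaves the height unchanged) is false for the subtype:

    class Rectangle {
      constructor(protected width: number, protected height: number) {}
      setWidth(w: number) { this.width = w; }
      setHeight(h: number) { this.height = h; }
      area() { return this.width * this.height; }
    }

    // Square "is-a" Rectangle grammatically, but not behaviorally:
    // it must keep its sides equal, violating what clients rely on.
    class Square extends Rectangle {
      setWidth(w: number) { this.width = this.height = w; }
      setHeight(h: number) { this.width = this.height = h; }
    }

    // Provable for Rectangle, false for Square.
    function stretch(r: Rectangle): number {
      r.setHeight(2);
      r.setWidth(10);
      return r.area(); // 20 for a Rectangle, 100 for a Square
    }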


When did ML gain its module feature (which does enable the equivalent of data abstraction)? From a very casual search, the earliest references I can find to it are from the early 1980s, so well after CLU, but the very first versions of ML itself were being worked on around the same time.


That’s a good question, I don’t know the history well enough to comment on that.

Much more important than CLU itself — groundbreaking as it was — was Liskov’s formalization of substitutability as a rigorous principle one can use to reason about the quality or correctness of an arbitrary abstraction (or model thereof).


I'm pretty sure this was around 1983/1984. See the paper "Modules for Standard ML" ('84) by Dave MacQueen; MacQueen had previously written "Modules for Hope" ('81), which likely influenced it.

Unfortunately the SML Family site doesn't have links to these in its history section, AFAICT. But ML languages didn't get modules until the standardization effort that produced Standard ML.


If I recall correctly, ML first appeared as part of the Edinburgh LCF theorem prover in 1979 and it did not have modules at that time.

