munfred's comments | Hacker News

Indeed, a fantastically written article!


1) What are the differences between Scispot and Benchling?

2) What are the things you offer that nobody else does?

3) How long have you been developing the product?

4) If it is only three people, why would a lab choose you over established players?


Great questions! 1) What are the differences between Scispot and Benchling? Scispot is completely no-code. You can design your own sample manager, pick and choose the workflows you want to automate, and set up inventory automation (our inventory automation module keeps your inventory always up to date).

2) What are the things you offer that nobody else does? There is no other no-code workflow automation in the life science space yet.

3) How long have you been developing the product? One year.

4) If it is only three people, why would a lab choose you over established players? We only mentioned our founding team as part of the post. We have a team of highly talented engineers, scientists and advisors without whom Scispot wouldn't be the product it is today. Customers choose us because we are more agile than bigger companies. Our agility comes from the orchestration layer that you can use to design your own life science workflows. For instance, every customer has their own flavour of Scispot, with different workflows and metadata.

Happy to set up a call to continue the discussion.


Strongly agree on the archival concerns. The reason scientific publishing is nowadays built upon 2D PDFs is that those are the digital analogue of paper.

Distill used a new medium, with all the good and the bad that comes with it. In my view the biggest challenge is archiving, to ensure readability and accessibility in 20, 200, 2000 years. We can read things written on parchment 2000 years ago; we should aspire to properly view digital media 2000 years from now. Yet much of the digital media on the internet is already broken after just 2 years. The Internet Archive is humankind's savior in this regard, but we need to do more, better and faster (because so much digital content is being created and lost before we can save it...).

Regarding Distill specifically, short of a GitHub repo for each article archived in other mirrors, I don't see much else that is straightforward and flexible enough. Even the Distill arXiv idea mentioned would likely have to run on a combination of GitHub + mirrors...


> Another significant risk factor is having unachievable goals. We set extremely high standards for ourselves: with early articles, volunteer editors would often spend 50 or more hours improving articles that were submitted to Distill and bringing them up to the level of quality we aspired to. This invisible effort was comparable to the work of writing a short article of one’s own. It wasn’t sustainable, and this left us with a constant sense that we were falling short. A related issue is that we had trouble setting well-defined boundaries of what we felt we owed to authors who submitted to us.

As someone finishing a PhD, I think that doing LESS and doing it SLOWER is actually a very desirable thing for most of science. We are limited by how fast humans can wrap their heads around articles, and if we have fewer articles that are better written, that's a huge compound gain!

To the Distill team, if anyone is reading this: I don't think you should feel bad for being slow, or for doing "few" things at all. We humans tend to place a big emphasis on superficial large numbers in the heat of the moment, but only good things withstand the test of time. I've only read a few Distill articles, but they were all really good and I can see myself coming back to most of them 5-10 years from now. I don't think any other academic journal comes close in the ratio of (total goodness)/(total content). Good job, Distill team, for making a great thing and for summarizing the lessons learned so well in this goodbye article!

My only wish would be that you could find a way to continue doing such good work without relying entirely on unpaid volunteering. At the end of the day, volunteering only means some other institution bears the cost of supporting the volunteers.

For example: could you get a Distill editor endowment to pay editors using donations through a non-profit fiscal sponsorship partner? Could you partner with a university, or even a publisher, to support long-term writing?

GOOD work takes TIME and is SLOW, and we are bad at appreciating that. I hope the Distill team keeps taking their time to put out good work, whatever it is they go do next!

Cheers to Distill!


Thanks for the kind remark!

> I don't think you should feel bad for being slow, or for doing "few" things at all.

Unfortunately, I think it's tricky to do this in a journal format. If you accept submissions, you'll have a constant flow of articles -- which vary greatly in quality -- whose authors very reasonably want timely help and a publication decision. And so it's very hard to go slow and do less, even if that's what would be right for you.

> Could you get a Distill editor endowment to pay editors using donations throughout a non-profit fiscal sponsorship partner? ...

I don't think funding is the primary problem. I'm personally fortunate to have a good job, and I happily spend a couple thousand dollars a year out of pocket to cover Distill's operating expenses.

I think the key problem is that Distill's structure means we can't really control how much energy it takes from us, nor choose to focus our energy on the things about Distill that excite us.


Yeah, the real question shouldn't be whether something looks productive in the short term, but whether it has a good chance of helping us move forward. Sometimes we move a lot... without going anywhere; I don't think that's necessarily terrible, we can also learn a lot from it, but it definitely shouldn't be the only model. At some point you should be allowed to try to go on a long journey. Everyone needs to decide for themselves whether what they are making is relevant or not, and from there on we just need to trust people.


You can use the public Matrix server through the Element web client, which is maintained by Element the company (formerly New Vector): https://app.element.io/

If you do want to set up your own server, I wrote a guide when I learned how to do it on Google Cloud instances: https://munfred.com/matrix
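
In case it helps anyone following that route: once a homeserver is up (however you deploy it), a quick sanity check is to hit the unauthenticated versions endpoint of the Matrix client-server API. Here is a rough Python sketch; the base URL is a placeholder for your own deployment, and matrix.org is only used as a known-good public example:

    # Rough sanity check for a freshly deployed Matrix homeserver.
    # The base URL is a placeholder; swap in your own server,
    # e.g. https://matrix.example.com
    import requests

    base_url = "https://matrix.org"

    # /_matrix/client/versions is an unauthenticated endpoint defined by
    # the Matrix client-server spec, so any healthy homeserver should reply.
    resp = requests.get(f"{base_url}/_matrix/client/versions", timeout=10)
    resp.raise_for_status()

    print("Supported spec versions:", resp.json().get("versions", []))

If that prints a list of spec versions, the client API is reachable and you can point Element (or any other client) at the server.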


I have a suggestion to make in this regard, not related to body language.

At scientific conferences, typically at the end of the day (e.g. after dinner, with drinks) you have "poster presentations". These look pretty much like these images from Google [0]. Everyone goes to a big room, and the presenters stand in front of their posters, which are ~90x120 cm. People walk around looking at the poster titles and figures, and when one catches their interest they stop to listen to the presenter (if they are already talking to a group) or ask them about it.

The fact that you have these big posters at all ends up being just a cue so people can identify who is working on which kinds of subjects/projects, and go talk to them about it. My advisor had this really funny idea of bringing "microposters" to a conference: he just taped an A4 sheet with one figure, the project title and his name to his back, and that was all it took to start conversations. It worked great, it was exactly what was needed to "break the ice".

At these conferences people will typically have read your papers, but they don't know your face, or even which papers you were on (at most they will remember the first or last author, who is usually the professor). So a small cue saying "I'm Bob and I work on X" is already sufficient to break the ice.

My suggestion would be two things: 1) To mimic the poster presentation aspect of scientific conferences, allow people to upload an image which gets displayed on an area of the map. Then the presenter can stand there, and interested folks just gather around them to listen.

2) Allow some kind of mouseover view that shows additional information about the person, e.g. a bio, a description of their projects, or a picture/link to a description of their projects.

I haven't yet seen a good virtual scientific conference, and it is exactly this aspect of the social interactions that is missing. At the end of the day, the primary reason scientists move halfway around the globe to gather in a hotel for a few days is not so much to listen to the speakers as it is to gather in small groups at breakfast/lunch/dinner/happy hour/breaks, exchange ideas, and get to know each other. Academic conferences allow people to put a face to the names they see in papers, and that really helps make science feel more humane and prompts you to exchange more.

Thanks!

[0]https://www.google.com/search?q=scientific%20poster%20presen...


Yes. This, this, and more this. Basically, having a way to see someone's table or booth could work for scientific conferences, trade shows, and many other kinds of conferences, as well as for informal conversations. I would love this feature.


As another commenter mentioned, we support this already! It was one of the first features we built, because our first paid customers were academic conferences. You can check it out here: https://gather.town/app/p4B9DUqB8NAazd3t/DemoConference


#1 is already supported. Gather actually has a poster object for this purpose. You might want to check out the conference demos.


The author of this post raises valid points about some avenues for publishing scientific results. Journals have high barriers, rigid formats, paywalls, and sometimes exclusionary reviewers. Twitter and blogs are fringe because they are not formal, but they are increasingly where the conversation is happening. He advocates for a Substack-like model for scientific publishing, though I don't see how that differs from blogs.

However, to me the real problem is that he completely ignores the existence of preprint servers. This started with the arXiv in the 1990s for physics, math and computer science, and the model has finally begun to catch on in other fields. In biology the bioRxiv is well established, and I'd say more than half of the papers I see have a preprint deposited there. For medicine, which has a notoriously protective culture, the medRxiv was launched last year and got a gigantic boost due to the pandemic.

Other fields also have their own preferred preprint repositories. To me preprints solve the gatekeeping and cost barriers very well. I only wish they were a little more tied to comments from readers (in-depth comments of the kind you get during peer review, not the knee-jerk ones we're used to seeing on the internet).

I say this because there is an almost "adversarial" relationship between readers, who just want the bottom line in two sentences, and authors, who would like their work to be perceived as a big, important and novel contribution. Because of this, papers are often written with more flowery language than is actually necessary to make a point, so it is really hard to parse articles and tell good from bad. Often the "summaries" that peer reviewers write before making their comments are better than the official abstract of an article, but readers don't get to see and benefit from them. Each person who reads an article has to redo all that work of judging it for themselves, and with the deluge of articles we have it is simply impossible to keep up.

Initiatives such as openreview.net are already a big step in the right direction. I think this is where the bulk of scientific publishing is headed in the next 10-20 years: a mix of preprint servers and curated community reviews that make papers much easier to gauge, and that also incentivize authors to write better and not bullshit, or risk getting publicly wrecked in the reviews.


There is actually another bill that aims to legislate the assessment of the value of user data: S.1951 - the DASHBOARD Act, introduced 2019-06-25.

Full text: https://www.congress.gov/bill/116th-congress/senate-bill/195...

Official summary: To require the Securities and Exchange Commission to promulgate regulations relating to the disclosure of certain commercial data, and for other purposes.

It applies to “commercial data operators”, defined as entities offering consumer online services, or data brokers, with over 100M monthly users in the United States for most months over the last year. It says that they must routinely provide each user with an estimate of how much they think that user's data is worth to the operator, clearly describe the data collected and how it is used, and allow users to delete their data.

It also says that operators must disclose to the SEC every quarter the “material value” of the user data they hold, the contracts they have for the collection and use of data, and the value of anything else the SEC determines is necessary. It puts the SEC in charge of figuring out a valuation methodology for the data. It also says the SEC should amend the disclosure rules for public companies that classify as data operators to include information on the data they hold: how it is protected, liabilities, sources, revenue generated, and large contracts or acquisitions of data.

For public companies I believe this would significantly change how they must do their accounting, since they would have to treat data as an asset. It could have significant impacts on their market cap, which could be real (since more information would be available) or just a result of the new reporting rules affecting their listed assets.


Wow, that's very interesting. I wonder why they don't go a step further and define user-derived data as partially owned by the users.


Having professional standards bodies for interfaces has actually been proposed - in another bill! S.1084 - DETOUR Act, introduced 2019-04-09.

Full text (only 1800 words!): https://www.congress.gov/bill/116th-congress/senate-bill/108...

It was written to get rid of dark patterns and deceptive interfaces. The official summary says: "To prohibit the usage of exploitative and deceptive practices by large online operators and to promote consumer welfare in the use of behavioral research by such providers."

In my reading, the DETOUR Act says that large online operators (defined as services with more than 100M monthly active users anywhere, not just in the US) may not use misleading interfaces or unclear wording to mislead users. It also says that they can only conduct behavioral experiments (e.g. A/B testing) if they have an independent review board registered with the FTC, obtain informed consent from users, and routinely disclose to the public the experiments being done. Finally, it says that large online operators may form professional standards bodies, and that those bodies should develop, on a continuing basis, guidance and bright-line rules for building their technology products in a way that does not impair user autonomy or induce compulsive behavior in children.


I find this idea really interesting! Do you have any reference on studies that have looked into the idea or places that have implemented it?


No, all I've come up with by googling 'progressive tax for companies' are textbook explanations of why it is not 'fair' to do this. Presumably it wouldn't be fair to tax a company with revenue more heavily than a highly valued startup. Fairness is too ambiguous a term, I think, to determine policy. Besides that, there are other monetary features to apply a progressive tax to.

