"Which BPA-free plastics have effects similar to BPA?"
Answer: nearly all of them
That is, when legally forced to stop using a poisonous chemical, a company typically substitutes its most chemically similar (but still legal) cousin compound.
The differences are usually so slight as to make no difference in the harms. It's the designer-drug problem of manufacturing.
This applies to regulations targeting specific molecules (e.g. BPA), where a single atom substitution gives you a new molecule. So it's like saying you can't make a navy shirt with exactly 44 stripes on it - you just make one with 45. Or change the hue slightly. It makes no real difference.
Sulfur is an element, so it's a whole different category of discussion. It's like saying "you have to use less cotton." You might replace it with silk or rayon or just make fewer shirts, but it's a much more impactful restriction.
CFCs aren't a specific molecule but a class of molecules: organic compounds of usually 1-4 carbons that carry some number of chlorine and some number of fluorine atoms. Here you can't just swap one fluorine for one hydrogen and squeak past the regs, because you're still in the same class. It's like a regulation barring button-down shirts: one button more or less isn't going to let you avoid the rules, though you can still make shirts of some variety.
This is why regs targeting classes are far more effective than those targeting single molecules.
Sulfur in fuel is a contaminant. It isn't added, it was there when the crude was pumped out of the ground. Once you remove it you don't need to replace it with anything. The oil industry has claimed the sulfur was useful as a lubricant, but that was largely a red herring because they didn't want to pay to remove it.
And CFCs were used for their physical properties rather than their chemical properties, so it's possible to substitute compounds that are chemically different as long as they still have the right physical properties.
A good example on the other side is fire retardant chemicals used in furniture. They're pretty much all toxic, so every few years the most recently popular one gets banned and the manufacturers switch to something else which is just as bad but isn't prohibited yet.
• Many libraries have free online access to very large journal collections and are open to the public.
• Deepdyve.com. It's kind of like Spotify for journals. Unlimited online access to a very large collection for a fixed monthly or annual fee.
Most US public libraries don’t have access to ‘scientific’ journals. Only the libraries of educational and research institutions subscribe, and they provide access to their faculty, students, and staff. Most university libraries are not ‘officially’ open to the general public.
Public universities generally, and some private universities as well, have (usually non-free, but also usually inexpensive; e.g., UC Davis is $60/yr) provisions for public privileges.
But I would agree that this is not really a good solution for most people, as it requires several conditions which aren't always met:
-- close proximity to a university whose library has such policies
-- knowledge of the search systems to be able to find the document in the allotted public access time
-- available time during library public hours to do said research
In practice, most people are blocked from such research, with no means of access aside from very expensive fees.
Which libraries? I spent a little time looking for that service, and none provided it. IME: Public libraries don't have access to JSTOR or any decent substitute (if there is one). Academic libraries charge several hundred dollars per year for guest membership and provide guests with limited services; none provide offsite access to JSTOR or other journal collections - due to the consistency of this policy, I guessed that it was a contractual restriction imposed by JSTOR.
Source: I work there.
I'm not sure that's a good example.
Is that information actually in the peer-reviewed literature? I tried looking for those numbers for Tallahassee, FL but failed miserably.
I did find https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2854760/ which says "Predicted concentrations in drinking water were used instead of measured concentrations because few studies have measured estrogen concentrations in U.S. drinking water, and those that are available report primarily nondetected concentrations [see Supplemental Material, available online (doi:10.1289/ehp.0900654.S1 via http://dx.doi.org/); see also Hannah et al. 2009]."
The supplemental information says "The consistently large MOEs and MOSs strongly suggest that prescribed and total estrogens that may potentially be present in drinking water in the United States are not causing adverse effects in U.S. residents, including sensitive subpopulations."
So that's one answer. Why go further (assuming you trust it)?
Of course, that's for estrogens, not the wider category of "chemicals that act like estrogens", but 1) what do you mean by that phrase?, and 2) is anyone measuring these values for every water system?
Public water systems do measure some values. I believe they are required to publish a report every year. Here's the most recent one for Tallahassee: https://www.talgov.com/Uploads/Public/Documents/you/learn/li... . You'll note they don't measure the potentially large number of chemicals you mention. There are people who measure concentration levels of possible hormone disruptors, but as far as I can tell these are spot checks, not done at a system-wide level that someone could simply look up in a published paper.
In any case, that sort of reporting information doesn't go into the scientific literature, but rather into a scientific database. Here's one, for example: https://www.waterqualitydata.us .
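If you want to poke at that database yourself, something like the following might work. This is a hypothetical sketch: the endpoint shape follows the portal's REST search interface as I understand it, but the characteristic name and the Leon County FIPS codes are my assumptions, not verified against the live service.

```sh
# Hypothetical query against the Water Quality Portal's REST search
# interface: estrone results for Leon County, FL (FIPS 12/073) as CSV.
# Parameter names, the characteristic name, and the codes are assumptions.
curl -o estrone_leon_county.csv \
  "https://www.waterqualitydata.us/data/Result/search?statecode=US%3A12&countycode=US%3A12%3A073&characteristicName=Estrone&mimeType=csv&zip=no"
```

Even if that returns data, it's whatever spot checks happen to be in the database, not system-wide coverage.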
I can be wrong. Which $45 paper will tell me the answer to your question, for the public water system in Tallahassee?
But that's just technical details. The point is, without access to these articles a consumer cannot even make informed decisions on the most basic health choices.
I found a water treatment company trying to sell RO and other point-of-use systems which had pointers to reports about hormones, pharmaceuticals, and other contaminants in the water, but they don't offer estrogen testing.
FWIW, $45 at http://www.ackuritlabs.com/residential-testing/residential-a... gives me either an E. coli survey or a filter survey for iron, pH, tannins, hardness, and turbidity.
Perhaps you can do better than I did?
Regarding the testing, I was only looking at local Tallahassee water testing companies. This was an overly strict restriction of mine.
The OP has not yet clarified what "chemicals that act like estrogens" means, so it's possible that kit of yours may be too limited in what it tests for.
FWIW, your kit does not include the "Thermo Scientific Ultimate™ 3000/TSQ Vantage™ LC-MS/MS", with which it's designed to work. I am unable to find a price for that, but it looks expensive.
As I've no problem with the statement "The consistently large MOEs and MOSs strongly suggest that prescribed and total estrogens that may potentially be present in drinking water in the United States are not causing adverse effects in U.S. residents, including sensitive subpopulations.", I will not be starting a testing business, as the temptation to market to fear is too high.
(And that's not even counting the amount of law in the US that is from case law instead of statutes; most case law is behind some sort of paywall. Hence RECAP, which is sort of like Sci-Hub for law.)
Sounds evil and twisted to me.
One of the arguments made here is that this research is funded by taxpayer money. I can stand behind that. But if it is privately funded research, shouldn't the entity be entitled to compensation?
You don't work for free, do you?
The only way to solve the science publishing problem is from the top down: lean on scientific funding agencies to mandate that results be published in an open access journal. Take it a step further and say none of this "pay-us-super-extra-money-on-top-to-open-your-article-up-early" garbage either.
The hesitation many scientists have about open access is that many fully open access journals are not as prestigious, so your work loses some impact or credibility when published in them. But if everyone is forced to migrate to open access, that will go away--perhaps after a few years of turbulence.
Otherwise, the behemoths like RELX and Wiley will endlessly pursue any sort of effort to open up their copyrighted material (and rightly so within their legal rights), just like the RIAA and music sharing.
Who cares? They will burn their money in court in exchange for literally nothing. That is exactly the result the world should hope for.
Or you could think carefully through the incentive structure of the scientific publishing system to see if there are places where small tweaks could go a long way. This is a system that emerged over centuries, and using your top-down hammer may change it in the short-run, but it will invariably morph into something unintended and unexpected if the proper incentives are not in place to guarantee long-term success. And I don't think those incentives should be another dose of your hammer.
The landscape includes many parties: researchers who produce papers, journals that publish papers, consumers who read papers, institutions that pay for journals, institutions that fund research, and institutions that employ researchers. There are probably more. All of them have different costs, preferences, and incentives. Disentangling that web may yield some very good opportunities for improvement, either as policy, advocacy, or entrepreneurship.
Don't get me wrong for a second: I'm not defending the copyright lobby and its obscene partnership with the state.
It sounds to me like you're so slavishly devoted to Silicon Valley economics that you want to incentivize nails rather than use a hammer.
> This is a system that emerged over centuries, and using your top-down hammer may change it in the short-run, but it will invariably morph into something unintended and unexpected if the proper incentives are not in place to guarantee long-term success.
I.e. scientific publishing, like almost everything else that involves humans, is a dynamic system. Smash it, and it will reconstruct itself in some form or other. If you don't change underlying incentives, you'll end up in a similar state with which you started. See also: comments that refer to "regrettable substitution" in the comment page here.
It's not "Silicon Valley economics", it's just a basic application of reason.
I appreciate caution as much as the next person, but we've been operating with the natural alternatives for some time now without any issues. The incentives for researchers and universities when it comes to publishing articles haven't really changed, nor have the incentives for the public (academics or otherwise) who want to consume them. The only party in this system with an incentive to keep the old system is the publishers themselves, because they're no longer a required component. Their infrastructure and their pricing schemes are no longer beneficial and have been replaced, while the incentives for the researchers publishing and the audience consuming are provided and met by the replacement systems.
Removing the publishers isn't a hammer destroying the system from the top down; it's an appendectomy-like procedure.
Perhaps I don't know a perfect solution, but I know that top-down solutions often produce negative unintended consequences.
> rent seeking middle men who provide no value
I hate rent seeking and would love to see it eliminated from all sectors of the economy. But non-rent-seeking middle men often provide immense value to society, which is why they exist in the first place.
> slavishly devoted
Do you insult everyone you encounter online?
> Why is it unreasonable to have public funds go to publicly-available research?
I never said this was unreasonable, merely that it's complicated.
Of course, my "Silicon Valley economics" approach would be to start with a reduction in copyright regulations and privileges (along with patents for that matter).
We already have such an incentive system: patents. Sadly, they are pretty flawed. The deal with patents is that the government grants you a monopoly right over your idea if you describe and publish it. The problem is that they are expensive to obtain, and the language of a patent is designed for lawyers: it is so obtuse and difficult to read that patents are almost useless for practitioners. But the fundamental incentive to publish ideas and make them free to read is there.
> If the U.S. District Court Judge adopts this recommendation, it would mean that Internet providers such as Comcast could be ordered to block users from accessing Sci-Hub. That’s a big deal since pirate site blockades are not common in the United States.
This isn't just routine domain name forfeiture; this could escalate to an order requiring ISPs to block IP addresses, which is far more troubling.
Am I missing something?
> Source : https://ballotpedia.org/John_F._Anderson
I mean: I'm as sympathetic to Snowden as anybody. But it's pretty obvious that the law, as it currently is, allows for his prosecution. Judges are among the last remaining people who sometimes allow larger ideals to trump the immediate business at hand, and this "rule of law" they seem to cling to may come in handy some day.
Twice, it means they've found a very special person for this.
Why hasn't the maintainer of sci-hub backed up the existing content in a series of publicly-available torrents -- either by category, or if that's too difficult, even arbitrarily, with the torrents enumerated?
Especially now, when it's obvious it's only a matter of time before the service gets blocked. E.g.: "Sci-hub is temporarily down until you fools replicate all these things."
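Even the crude "enumerated chunks" version of that idea is only a small amount of scripting. As a hypothetical sketch (the paths, tracker URL, and chunking scheme are all made up; `mktorrent` is just one tool that could do it):

```sh
# Hypothetical: split an archive into pre-made chunk directories and
# build one enumerated torrent per chunk. Paths and the tracker URL
# are placeholders.
i=0
for chunk in /archive/chunks/*; do
  mktorrent -a "udp://tracker.example.org:1337/announce" \
            -o "scihub-chunk-$i.torrent" "$chunk"
  i=$((i+1))
done
```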
At least AFAIK. I'd love to know if that's not true, since it's one of the things keeping me from adopting IPFS for certain use cases.
1) you can only provide useful names (not simply hashes) for things that are in directories, so a large number of bare files (easiest to update in IPFS) absolutely requires an external index linking the hash to a friendly name. I haven't found a suitable tool for automating any of this.
2) you can't modify an existing IPFS directory in-place and just get a new hash—you need the whole thing outside of IPFS to perform a CRUD operation. Maybe you can perform some trick with links or something, IDK, but it's definitely a pain. This makes One Giant Root Directory to Rule Them All very cumbersome.
3) IPNS is kind-of a solution, but centralizes things, and you can only host one of them per node-daemon (!) and last I checked these were still prone to availability problems and general quirky behavior.
4) Most software that doesn't speak IPFS natively (so, nearly everything) expects stuff to be in ordinary directories. Nothing expects to receive as config a giant list of files that's changing all the time. They all expect one directory, or maybe a few directories, often with some reasonable structure. Translating between easiest-for-IPFS (a whole bunch of bare files, or smallish sets of closely related files each in their own root directory) and easiest-for-literally-all-other-software-that-uses-the-filesystem practically requires some kind of linking—I am not aware of any existing solution for this.
[EDIT] to be clear, (4) is no less a problem if you're using an IPFS FUSE mount. In fact you'd have to for linking to be viable at all, though I'm not 100% sure it works even then.
1) You can have individual files automatically be wrapped in a directory: `ipfs add -w dog.jpg` will result in `/ipfs/QmSomeHash/dog.jpg`
2) There are commands for modifying objects: e.g. `ipfs object patch add-link QmDirHash the-link-name QmLinkHash` (a combined sketch of this and point 3 follows the list)
3) You can host multiple IPNS names from the same node since 0.4.9 or so -- see `ipfs key` and `ipfs name publish -k`. You can also publish updates for the same IPNS name from multiple nodes, although admittedly that's a bit hacky right now, since it involves an additional tool: https://github.com/whyrusleeping/ipns-pub
4) What kind of directory structure are you looking for here? We've successfully and repeatedly used IPFS with large datasets of different kinds so I'm curious how we can improve this.
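For concreteness, here's a minimal shell sketch combining (2) and (3). The hashes and the key name "papers" are placeholders, and it assumes a go-ipfs 0.4.9+ daemon is running:

```sh
# Minimal sketch, assuming a go-ipfs 0.4.9+ daemon; the hashes and
# the key name "papers" are placeholders.

# (2) Add a link to an existing directory object without rebuilding
# it; the command prints the hash of the new, updated directory.
NEWDIR=$(ipfs object patch add-link QmDirHash the-link-name QmLinkHash)
echo "$NEWDIR"

# (3) Publish a second IPNS name from the same node, under its own
# key rather than the node's default identity.
ipfs key gen --type=rsa --size=2048 papers
ipfs name publish --key=papers "/ipfs/$NEWDIR"
```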
On (3), cool. I'd checked out a bit from following IPFS while waiting for private networks and filesystem-store to leave experimental status (here's where you tell me that's happened, too!), so I must have missed that. Good to know.
(4) was mostly me explaining why just having all the files individually in IPFS is really, really useless if you need to apply any tools to them that expect to operate on one or a small number of (possibly structured) directories, which describes an awful lot of tools that operate on large numbers of files. But that's moot given that collection-modifying commands are available.
Thanks for the response.
I'd say at this point private networks and the filestore are unlikely to change significantly, but we're not 100% satisfied with the respective test suites (and the docs, oh my) so it'll be a bit longer until they'll be elevated out of experimental status.
About the object patch command, you can also call it through the HTTP API if that makes it easier. The CLI is essentially a 1:1 mapping of the HTTP API to a console command.
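For instance, a minimal curl sketch of the same `add-link` patch, assuming the daemon's API is listening on the default port 5001 (hashes are placeholders again):

```sh
# Same operation as `ipfs object patch add-link`, via the HTTP API.
# Newer daemons may require POST for API calls, hence -X POST.
curl -X POST \
  "http://127.0.0.1:5001/api/v0/object/patch/add-link?arg=QmDirHash&arg=the-link-name&arg=QmLinkHash"
```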
Suppose a maintainer has a "legacy web" application consisting of two parts: 1) the app proper with a publicly accessible interface, and 2) regular backups of the app database that also have a publicly accessible interface. Let's assume for the moment that 3rd parties have automated the process of pulling from the backups into well-seeded torrents to prevent data loss.
The app proper functions to add entries to the database.
What does IPFS bring to the table for such a legacy web app as the one I've described?
For the legacy web, once the maintainer makes the snapshots available they get scooped up and replicated by the general public. So while IPFS' immutable graph might be a design improvement on whatever set of scripts our hypothetical maintainer is using, it's an incremental improvement and not a paradigm shift.
For the app proper -- what is the benefit of content-addressability over location-addressability? Let's assume both the identity and location of the maintainer are known.
I think I know a vague answer to that, but I'd like to read what a dev has to say about it.
We use the free market, which limits distribution to those who can pay the most in order to maximize profit - that's how the tool is intended to work. That's great for some things, such as laptops, but it works against the mission of scientific research, for which the mission is to advance knowledge, advance the world, and solve important problems.
Unfortunately, our society now dogmatically applies the free market hammer to almost every problem, nail, screw, wheel, or fragile crystal vase. If only Einstein had thought of it - how much could he have sold the theories of special and general relativity for? How much could Watson and Crick have cashed in for? Tim Berners-Lee?
The tool we use for academic publications is copyright, not the free market. The free market outcome is Sci-Hub. The monopoly-enforced-by-the-government outcome is Elsevier, the RIAA, the MPAA, rent-seeking by the aforementioned organizations, proprietary software, and DRM.
The fundamental problem is indeed that we use the wrong tool to solve the problem of academic publication. We should use the free market, instead of government-enforced regulation.
Heck, a vanilla WordPress install would have been sufficient for this, if they don't have resources for anything other than a free off-the-shelf solution.
Publishing became a big industry due to the invention of the printing press. And because publishing a book requires a printing shop that can be easily located by the police, it was practical to pass copyright laws that forbid counterfeiting.
With the invention of the internet, piracy became trivially easy and basically impossible to block, and so copyright laws are simply not enforceable. I think the publishing houses realize this, and are just trying to hold off their demise as long as possible.
People are more than willing to pay for legal content if it's in consumable form at a reasonable price point, as Netflix and Spotify have shown.
Good thing that almost all of that expensive effort is done by volunteer scientist editors and reviewers. I routinely see spelling errors and English errors (going against journal guidelines for British/US English) in the scientific literature. I'm convinced publishers do as little proofing as possible.
"The Darknet - nobody goes there except people interested in crime, porn, and... science."
Not to say you shouldn't contribute to them, or that your money won't be useful in working around this. But it's probably not going to be used for a traditional legal defense.
Also, American unis don't charge $100k/yr. Half that is enough for an Ivy.
If, however there's this shiny new repository where everyone can get the articles for free, that may encourage _some_ researchers to stay home instead of attending that dreadful budget meeting. Or at least not be quite as forceful in their demands.
So I'd say this works pretty much the same as it does for other digital content, only that publishers will not see their revenue decrease gradually, but rather, after a certain delay, drastically. The fact that subscription contracts usually cover multiple years probably adds to that as well.
Of course the large journals will see the effects last. It really makes for an awkward conversation when that Elsevier guy asks "So at the Max Planck Society, with 30,000 PhDs, nobody reads Nature?"
When do they come for the VPNs?
Which public interest?
I can easily see a public interest defence succeeding in this made up case, so where is the line?
To get to your question of "public interest", one argument for Sci-Hub is precisely that:
> In her defense Alexandra Elbakyan has cited Article 27 (1.) of the UN Declaration of Human Rights "to share in scientific advancement and its benefits",
It's very unlikely that it would work in the US. I think the issues related to copyright ownership of scientific publications were resolved 40 or so years ago. I vaguely recall it was coupled with changes to the Copyright Act of 1976.
At the very least, that's about the time you start seeing explicit copyright statements appearing in scientific publications.
hide it, inflate the value of applying the cure, etc
At some point the “prestige” afforded by research gatekeepers will reach a point of diminishing returns for professionals looking to publish.
Copyright is so ingrained into our assumptions and culture that we almost never think or talk about it, but it's the beating heart behind so much of what drives the biggest companies today.
Consider that massive logistical operations that deliver critical supplies to people across the country every day, like Exxon and Walmart, are rivaled by companies that just print out DVDs. At some point we need to look at this and ask ourselves why these people are allowed to use a strictly artificial, government-granted monopoly to capture returns so disjointed from the social utility they provide, and think about the bigger picture effects of awarding them that excess.
Copyright and the CFAA are a disastrous tag team for online entrepreneurship, and the mechanism by which Google, Facebook, et al enforce their effective monopolies. The CFAA makes it illegal to get content over a network without having permission first, and copyright makes it illegal to load those pages into RAM whether you're authorized to download them or not.
Of course, when Google et al were taken to task for ignoring these laws, after dozens if not hundreds of small sites doing very similar things had been mercilessly crushed by the legal system, the judges magically felt that Google's use was "fair" (see Perfect 10 v. Amazon).
IANAL but people grossly underestimate the draconian reach of copyright legislation. And that's not really an accident; media outlets don't want attention on this, because they know if people realized how bad these laws are, their gravy train wouldn't last much longer.
RAM Copy Doctrine is the digital equivalent of stating that a new infringing copy of the work is created every time someone looks at something without a license from the rightsholder, since it creates a "fixed" copy in the retina.
Effectively, this neuters Feist v. Rural Telephone for online purposes, because you can't use a computer to "look at" the source material in order to reference its non-copyrightable facts.
The computer formatting that encapsulates unoriginal, Feist-style data almost automatically meets the standard of originality necessary to constitute a copyrightable work (e.g., document structure like XML tags or JSON layout), so under the RAM Copy Doctrine, just loading that small shell into RAM for any purpose, even if it's just to read out non-copyrightable facts, may well constitute infringement.
This argument has worked against many scrapers. And of course, under the CFAA, speaking to their servers without explicit permission is of dubious legality, and continuing to speak to their servers after being told to stop is certainly illegal. In most cases, both of these arguments are brought and both arguments are considered persuasive.
Eric Goldman's blog usually does a decent job following cases that involve the CFAA and/or web scraping, IMO.
Legal threats along this axis are apparently so frequent that Goldman set up a form letter telling people that he can't answer their legal questions. 
I am not a lawyer.