For anyone doing HPC work who hasn't tried GFortran >= 10, I highly suggest giving it a go. We switched to it for the arm64 improvements, but surprisingly also found a 20% speedup on the x86-64 target. My best guess is that it's a combination of IPO and autovectorization enhancements.
Profiling and -fopt-info should tell you why. (If you care about speed, use them anyway!) I'd be surprised if it were down to vectorization improvements, short of a specific bug fix.
You can allow IAM roles in your account (which simply have the permissions defined, with no keys or other credentials attached) to be assumed by identities in another account. Vantage would then be responsible for securing the credentials of the assuming identity in their account, but there would be no transfer of keys whatsoever from one party to the other.
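As a rough sketch of what that looks like from their side (the role ARN, account ID, and session name here are made up, and I'm using the AWS JavaScript SDK purely for illustration):

    // Sketch only -- the ARN, account ID, and session name are hypothetical.
    import * as AWS from "aws-sdk";

    async function assumeCustomerRole() {
      const sts = new AWS.STS();
      const resp = await sts.assumeRole({
        // A role that lives in *your* account: its trust policy lists the
        // other party's AWS account as an allowed principal, and its
        // permission policies define exactly what it can touch.
        RoleArn: "arn:aws:iam::111122223333:role/VantageReadOnly",
        RoleSessionName: "vantage-billing-read",
      }).promise();
      // Temporary, auto-expiring credentials scoped to that role; no
      // long-lived keys ever leave your account.
      return resp.Credentials; // AccessKeyId, SecretAccessKey, SessionToken, Expiration
    }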
The more I think about it, the more I believe Benford's Law can be violated far too easily in more urban areas, especially when precinct sizes are artificially capped at a fixed size (500-1000 people) without geographically induced truncation. As long as voter turnout is consistently above 10-20%, with a two-party system you're simply not going to get vote counts that span the several orders of magnitude needed for Benford's Law to emerge. (The exceptions, of course, are smaller/independent candidates, and perhaps a major candidate who really sucked and kept getting 10% of the votes.)
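As a toy illustration (with completely made-up precinct sizes, turnout, and vote shares, not real election data), you can see how confined the counts stay:

    // Made-up numbers: precincts of 500-1000 voters, 40-80% turnout, and a
    // two-way split of 35-65%. The counts stay within roughly one order of
    // magnitude, so the leading-digit distribution can't look Benford-like.
    const counts: number[] = [];
    for (let i = 0; i < 10000; i++) {
      const precinctSize = 500 + Math.floor(Math.random() * 501); // 500..1000
      const turnout = 0.4 + Math.random() * 0.4;                  // 40%..80%
      const share = 0.35 + Math.random() * 0.3;                   // 35%..65%
      counts.push(Math.round(precinctSize * turnout * share));
    }
    const leadingDigit = (n: number): number => Number(String(n)[0]);
    const freq = new Array(10).fill(0);
    for (const c of counts) freq[leadingDigit(c)] += 1;
    // Benford predicts a first-digit frequency of log10(1 + 1/d):
    for (let d = 1; d <= 9; d++) {
      console.log(d, (freq[d] / counts.length).toFixed(3),
                  Math.log10(1 + 1 / d).toFixed(3));
    }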
> Led by Sherry Suyu, the H0LiCOW (H_0 Lenses in COSMOGRAIL’s Wellspring) collaboration uses gravitationally lensed quasars to independently measure H_0.
"Its earliest known appearance was in a tongue-in-cheek letter to the editor: "A lover of the cow writes to this column to protest against a certain variety of Hindu oath having to do with the vain use of the name of the milk producer. There is the profane exclamations, 'holy cow!' and, 'By the stomach of the eternal cow!'""
I really want to address the math support, because I work with a lot of mathematicians, and one of my main goals is to support most of the LaTeX AMS math features we're used to (as much as MathJax can support on the HTML output side, anyway), and of course that includes aligned equations. In fact, one of the first things I did was figure out how to support multiline equations.
Turns out, you just need to write consecutive expressions one after another. Put ampersands in all of these consecutive expressions and ScholarlyMarkdown will turn them into one giant align environment. If there are no ampersands, it generates a gather environment instead.
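Roughly speaking (with throwaway equations here, not the actual thesis example below), the generated LaTeX comes out along these lines:

    % Consecutive expressions containing ampersands become one align block:
    \begin{align}
      f(x) &= x^2 + 2x + 1 \\
           &= (x + 1)^2
    \end{align}
    % The same expressions without ampersands become a gather block instead:
    \begin{gather}
      f(x) = x^2 + 2x + 1 \\
      g(x) = x^2 - 1
    \end{gather}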
I apologize for this not being in the documentation. After all, I estimate that it's only about 30% complete. It really is in a dismal state. I originally didn't expect many people to see the site until later this year, when I plan to launch it. This will all be rectified eventually, promise!
Here is literally a large align-block example straight out of the thesis that I'm currently busy working on (instead of the documentation). You can inspect the Scholdoc-rendered LaTeX code to see that you can really do align blocks reliably with this syntax.
If you output to HTML, it will just turn this same align block into a MathJax-friendly format, and hand it off to MathJax for rendering. This is what it will look like:
Where the equation numbering is placed is entirely a non-issue. Why? It's not decided by ScholarlyMarkdown, but by MathJax. Go to the jsFiddle example above and change the line that reads
TagSide: "left",
to now read
TagSide: "right",
and you get the numbers on the right. Scholdoc simply puts that default setting there for convenience; you're not tied to it at all.
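For reference, that option is plain MathJax v2 configuration (nothing Scholdoc-specific); it lives under the TeX input processor settings:

    // Generic MathJax v2 config; "left" is just the default Scholdoc chooses.
    MathJax.Hub.Config({
      TeX: { TagSide: "left" }   // "right" puts the numbers back on the right
    });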
Why put the number on the left side by default? Think of what happens when you have a long equation and you're trying to read it on a narrow screen such as a phone. If the number were on the right, it may very well be cut off by the screen, forcing you to scroll around to find them. Sometimes layout decisions that made sense on paper for centuries don't make sense on screens.
I should point out that Scholdoc is only using MathJax on the HTML side for consistency; there is no technical reason why it can't use another renderer that understands LaTeX, but so far MathJax is the only one that even comes close to supporting all standard AMS features. Note that MathJax itself does HTML/CSS, MathML, SVG, and PNG (via a node server) output, and is entirely user-configurable.
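For example, switching the output renderer is just another configuration choice (again, this is generic MathJax v2 config, nothing specific to Scholdoc):

    // Pick the MathJax output jax:
    MathJax.Hub.Config({
      jax: ["input/TeX", "output/SVG"]   // or "output/HTML-CSS", "output/NativeMML"
    });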
I still think equation numbers on the left is a mistake. Ideally, for smaller screens it would split across multiple lines. Or, it could choose the positioning based on screen size.
For me, MathJax can't render my equations. I use some techniques to get a vector with an arrow on the bottom. Two commands I used throughout my thesis are:
> Ideally, for smaller screens it would split across multiple lines.
C'mon, nobody deserves to be subjected to automatic TeX expression breaking. Even careful hand-tuning with help from the breqn package can often look pretty bad.
> Two commands I used all throughout my thesis are:
At the risk of turning this into TeX StackExchange, I just want to point out that these kerning manipulations are entirely possible in MathJax. The problem is that, maddeningly, they chose to have non-math commands like \kern and \lower work exclusively in math mode, and not outside of it, like in an \hbox. Therefore, if you change your second command to something like this:
then it will work in MathJax. Of course, this would then break in regular TeX, because you're not supposed to use \kern and \lower in math mode. I keep myself sober by remaining optimistic about a potential overhaul of the MathJax LaTeX processing code.
In the meantime, I think we would benefit from a stopgap service that renders these things with an actual TeX engine running on a server, outputting high-res PNG (easy) or SVG (doubtful). There's been quite a resurgence of effort lately to turn LaTeX rendering into a scalable web service. I'd love to collaborate with someone from, say, ShareLaTeX on that, if possible.
That's awfully annoying of MathJax, because you then need two separate versions of the LaTeX source: one for MathJax and another for LaTeX output.
I know that there's dvisvgm for SVG output from DVI. I think there's also a tex2svg. Also, there's LatexRender-ng. So there are a number of tools for SVG output.
I've actually been wondering if this is possible from a theoretical standpoint. I'm thinking you could use Pandoc's LaTeX-to-MD conversion mode, save changes to a copy, use wdiff to get the total change set, then somehow convert that into a LaTeX patch. Whether you can guarantee that this won't break anything is going to be a really challenging problem; my head is already swimming with edge cases. I guess we'll know the day some genius comes up with automatically generated, non-format-breaking CriticMarkup diffs. At least the problem should be easier than with LaTeX: look how long it took for the latexdiff tool to become what it is today, and it still breaks often when you have anything slightly exotic.
I'm currently using a workflow in my thesis where I use ScholarlyMarkdown to write individual chapters for final inclusion into an existing LaTeX book document. I find that ScholarlyMarkdown works quite well this way, and it potentially accommodates collaborators, since the individual parts are isolated.
In any case, my point is plain and simple: I collaborate in almost everything I do, and my collaborators are my primary concern when I choose a tool. If they use notepad and Dropbox, I need to make sure my tools don't conflict with that. There's no way in hell I would ask a collaborator to learn to use git or a new markup language just to work with me.
I'd love a tool that works with this philosophy, and I feel certain anything like ScholarlyMarkdown won't catch on in my field (theoretical computer science) without such tools.
As a thought exercise, assuming for simplicity that you can isolate the parts you're authoring, so that your changes only involve additions to other people's work: what kind of additional metainfo would you need for this workflow? All defined labels and macros? Available bib entries? What else?
I think it depends on how I'm viewing the file. If I'm just editing in SM and building the tex as usual, then I don't need anything extra. If I'm trying to convert to all sorts of other documents (which I haven't ever really done) then labels, theorem environments, one-line macros, and bib entries cover almost every tex-specific thing I use.
Here is an excerpt from a typical paper's macro section [1]. As you can see, they're mostly one-liners that save you from repeatedly typing \textup and \mathbb, plus simple math-operator definitions and such.
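(To give the flavor without pasting the whole thing, they're along these lines; these are illustrative stand-ins, not the actual excerpt.)

    % Illustrative stand-ins, not the actual macros from [1]:
    \newcommand{\N}{\mathbb{N}}             % stop typing \mathbb{N} everywhere
    \newcommand{\problem}[1]{\textup{#1}}   % upright names for problems
    \DeclareMathOperator{\poly}{poly}       % a simple math operator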
I think a good way to build this into SM is to run an existing document through LaTeX and look at the .aux file. This approach would be faster and almost certainly more robust, similar to how most LaTeX build scripts look at the .fls recorder file for the list of external includes.
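The .aux file already records labels and citations in a very easy-to-parse form; the label and bib key names below are made up, but the shape is typical:

    \newlabel{eq:main-bound}{{3.2}{41}}   % label -> {equation number}{page}
    \citation{knuth1984}                  % a cited bibliography key
    \bibdata{refs}                        % the .bib database in use
    \bibcite{knuth1984}{12}               % bib key -> rendered citation label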
The only thing you can't get that way will be the user-defined macros (and of course bib entries that don't already exist), but there is already a consistent mechanism to define your own macros in SM via the "math_def" block. They "do the right thing" in the sense that if you render to LaTeX snippets, the output won't redundantly include these macros.
The goal is eventually to be able to, like what Authorea is trying to do now, just "slap on" various LaTeX style templates during final rendering (similar to how static blog engines format plain md posts using templates). It's going to be PAINFUL but I think it is eventually doable with enough elbow grease. Markdown is a nice starting point because the features are so spartan that you actually have some hope of finding the "greatest common denominator" document model that can pretty much map injectively to all the major journal-specific LaTeX document-classes.
I'm imagining some kind of online component like ShareLaTeX that will become a clearinghouse for a number of tested and proven conversion paths, and can handle compiling a ScholarlyMarkdown document to different formats. This project won't go anywhere if this can't at least be done for major houses like Elsevier, PNAS, Phys Rev, etc.
This was basically my original approach. However, after working out the math syntax I realized that some things, like the double-backtick inline math, just can't be accomplished without a pre-filter. At that point I decided to just start playing with the parser code.
I also became super-convinced that some level of AST change was necessary to keep things sane, and since I wasn't able to use the existing Math and Image types anyway (they're not attributed), I ultimately just started a new AST type package namespace called "Scholdoc". Everything just evolved from there.
Basically, this is a fork because I've been wanting to see how much change to the AST is needed to cover most of the academic-specific features. Since Pandoc's AST definition is a separate package from Pandoc itself, and could well be a dependency of other projects, I thought it would be best to figure most of it out first and end up with just one proposal. Scholdoc supports a much more limited set of input/output syntaxes, so it has much more flexibility when it comes to adding new document element types.
Consider this a self-motivated skunkworks project for Pandoc.