Here are some more of mine:
Unescaped @ symbols are invalid for URL paths.
What does it say about a company who can't even get URLs right?
Your comment would be just fine without that last sentence.
URI = scheme ":" hier-part [ "?" query ] [ "#" fragment ]
hier-part = "//" authority path-abempty
path-abempty = *( "/" segment )
segment = *pchar
pchar = unreserved / pct-encoded / sub-delims / ":" / "@"
> Within the <path> and <searchpart> components, "/", ";", "?" are reserved.
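Per the `pchar` rule quoted above, "@" is allowed unescaped in URL path segments. Here's a quick sanity check in Node (the example URL and the `encodeURIComponent` call are just illustrative):

```javascript
// "@" is a pchar per RFC 3986, so it may appear unescaped in a URL path.
const url = new URL("https://observablehq.com/@jrus/sinebar");
console.log(url.pathname); // "/@jrus/sinebar" -- the "@" survives as-is

// Percent-encoding it is permitted, but never required:
console.log(encodeURIComponent("@jrus")); // "%40jrus"
```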
Still, if anyone wants to make their research results as accessible as possible, then publishing an Observable notebook is great, because it’s (a) just a hyperlink, so trivial to access from any device with no security concerns and no need to purchase, install, or run specialized runtimes, (b) very friendly for readers to inspect/modify/reuse, and (c) supports rich interaction better than most alternative platforms.
I’m hoping that wasm someday soon becomes a reasonable target for optimized numerical programs written in Fortran, C, Rust, Julia, Halide, ...
I didn't say anything of the sort. That's your idea, not mine. I think Matlab and Fortran are great.
I would, however, expect new folks who don't know Fortran or Matlab to instead pick up something like Observable, if it continues to evolve in its present direction.
My point in the comment above is that, although I think Jupyter is fantastically designed, and I love the kernel abstraction, it began as and remains all about the notebook and graphical UI. So a thing like Observable, where they are similarly committed to the reactive interactive environment, but without being committed to legacy languages (and in fact are willing to head down toward a reactive language, a category far outnumbered by languages akin to Python/R), could be very interesting down the line. I am very bullish on hybrid visual/text programming environments in data science; they have been around for a long time, but are becoming particularly useful now that we have so many great data sources and tools to process that data.
That said, we’re explicitly designing for the reactivity of the notebook environment. Working on fundamentally reactive code in a traditional text editor is ... alright, I guess. You’re still stuck reloading the whole enchilada any time you make a change. It’s nowhere near as exciting as working on reactive code in an editor that understands and respects the reactivity, and can keep your program running and re-evaluating as you tweak it!
That said, if you're interested in something less notebook-y, check out what Rich Harris is doing with Svelte 3. It's totally a not-so-distant cousin.
My main concerns with observable are:
1. Unlike a Jupyter notebook, the code is non-linear. Very often a cell refers to pieces defined elsewhere in the notebook.
2. It takes some clicking to reveal the code. (Also, maybe a Jupyter habit, but I am used to results below code.)
4. I am not sure if it is easy (or possible) to backup / self-host the code. (Pretty much a requirement to make sure a project won't disappear in a few years.)
BTW: Once I gave Observable a try, https://observablehq.com/@stared/tree-of-reddit-sex-life (and it got really popular here: https://news.ycombinator.com/item?id=19640562).
1. The non-linearity of the code is really the whole point here. Reactivity means that you can work on any little piece of the code at any time, and you don't have to reload the page or re-evaluate the entire notebook. It closes the feedback loop, and speeds up the process of iterating on your ideas immensely.
2. We’ve added more keyboard shortcuts, so you can now press "Shift-A" to select every cell, and then press "P" to pin every cell open at once (and press it again to close them all).
4. Every notebook is available for download and self-hosting. You can download the code as an ES Module, or as a tarball that you can install with npm, or load and render on a web page: https://observablehq.com/@observablehq/downloading-and-embed...
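To make point 1 concrete, here's a toy model of reactive evaluation (just an illustrative sketch, not Observable's actual runtime): each cell declares its inputs, and redefining one cell re-runs only that cell and its downstream dependents, never the whole notebook.

```javascript
// Toy reactive runtime: cell name -> { fn, deps }.
const cells = new Map([
  ["radius", { fn: () => 2, deps: [] }],
  ["area", { fn: (radius) => Math.PI * radius ** 2, deps: ["radius"] }],
  ["label", { fn: (area) => `area = ${area.toFixed(2)}`, deps: ["area"] }],
]);
const values = new Map();

// Depth-first topological order over the dependency graph.
function topoOrder() {
  const order = [];
  const seen = new Set();
  const visit = (name) => {
    if (seen.has(name)) return;
    seen.add(name);
    for (const d of cells.get(name).deps) visit(d);
    order.push(name);
  };
  for (const name of cells.keys()) visit(name);
  return order;
}

// Evaluate one cell from its already-computed inputs.
function run(name) {
  const { fn, deps } = cells.get(name);
  values.set(name, fn(...deps.map((d) => values.get(d))));
}

// Redefine one cell and re-run only it and its dependents.
function edit(name, fn, deps) {
  cells.set(name, { fn, deps });
  const dirty = new Set([name]);
  for (const cell of topoOrder()) {
    if (dirty.has(cell) || cells.get(cell).deps.some((d) => dirty.has(d))) {
      dirty.add(cell);
      run(cell);
    }
  }
}

for (const name of topoOrder()) run(name); // initial full evaluation
console.log(values.get("label")); // "area = 12.57"

edit("radius", () => 3, []); // only radius, area, and label re-run
console.log(values.get("label")); // "area = 28.27"
```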
On Observable, each cell is a unit that can be evaluated independently, with two sides to it — you have the source code editor, and you have the rendered display (either a chunk of DOM or canvas that you’ve drawn, or an interactive inspector for JS values).
Now, the important thing is that the code half of the cell is often collapsed. You navigate around the notebook, toggling open and closed the code editors, and typing into them. If the code is on top, the entire cell jumps when it opens and closes, pushing the rendered display up and down. If the code is on the bottom, it emerges from the half of the cell that is always there, and the display is stable.
Ultimately, that’s why each cell has content before source code — the content is always there, and the code may or may not be visible, so working with the notebook as a whole feels much more stable and less jumpy with the rendered value as primary and the source code as secondary.
2. We are also huge believers in being able to own your code -- as such, every notebook has a download button that downloads a package with the entire dependency tree in a lock file so you can get exactly equivalent behavior. Again, this is only possible since we've started with the premise that RunKit notebooks just are Node modules.
3. Since it is just Node, every single module just works, including binary ones. In fact, every package on npm is built in -- no install, just require it.
for 1. epiphany is also non-linear, because I'd like to reveal the results and then explain pieces.
for 2. one click to the edit mode.
for 3. epiphany uses vanilla js with some apis to print results and create ui.
for 4. I try to use a plain text format. I also implemented pull requests and versioning like GitHub's. You can download all your data.
2. Clicking to reveal can be simply addressed with a global Expand-All toggle.
(1) Non-linear cell order (and in particular re-running all dependent cells anytime anything gets changed) is a huge improvement vs. Mathematica/Matlab/Jupyter notebooks. Once you get used to it you will feel crippled when you try to go back. Being non-linear allows the author to optimize for whatever flow best matches the narrative, rather than ordering things based on arbitrary implementation-specific requirements of the programming environment. It is also possible to e.g. put simplified or brute-force implementations inline in the prose, and then move tricky optimized versions to technical appendices at the end of the document.
Being non-linear also allows for showing some placeholder output right away and letting expensive-to-compute output (or slow-to-fetch data) execute after initial notebook load and progressively replace the placeholders. This lets readers get started reading the prose right away without needing to wait for slow operations.
(2a) Being able to pin the source of a cell to be shown / hidden at the author’s discretion makes for an improved reader experience in the typical case (assuming a conscientious author). Nobody wants to read e.g. big blobs of redundant LaTeX source in addition to their mathematical formula output. Needing to sometimes click to show code that an author left hidden in the initial view is easy after just a tiny bit of experience as a reader. As an author I tend to mostly leave prose markup pinned closed and code cells pinned open (unless they contain too much distracting or overly technical detail) when publishing notebooks.
(1 & 2a) These two features both change incentives for notebook authors: since code can be hidden in the default view authors are not afraid to use a better implementation which includes distracting technical details; in other notebook environments I have noticed authors intentionally use worse-performing but simpler implementations, and technical details often get moved to some external library, hiding the code much more deeply than just putting it in a cell with hidden source / moving the cell to the bottom of the document.
(2b) I also prefer output below code, but the reason for the order here is so that showing/hiding the source won’t make the output jump up and down the page excessively; with a bit of experience the other order is not really an obstacle in practice.
(4) It is fairly easy, but it’s a fair complaint that it takes some trust / involves some risk to build work on top of a closed-source platform, especially for folks with strong free software ideology; even if it is possible to mirror code outside observablehq.com, it is at least mildly inconvenient, and necessarily loses access to some great platform features. I personally have a lot of faith in the Observable team’s commitment to not breaking documents published on their platform or letting them disappear. But when business is involved, there’s no absolute guarantee.
It’s just not going to have all of the editor GUI, community features, etc. of the Observable platform.
> absolute guarantee that your documents won't disappear
Okay sure, but that is true of every digital document.
The documents that survive from <1000 BC are almost all either carved into stone or impressed into clay tablets. Pretty well everything that was written on a flexible organic surface from 2000+ years ago has been destroyed, but scattered bits of oral history retold by generations of storytellers and some written books repeatedly copied by generations of scribes have managed to survive. A tiny number of original documents written on papyrus, animal skins, etc. have stayed partially intact.
It's good that you're aware of the basic aspects of the archival problem, but you're mistaken about the role of digitality — being digital makes it possible to copy documents losslessly, thus preserving them even when the original substrate is lost. That's the reason that, for example, the older books of the Tanakh survive from that time in some form, even though they weren't carved into stone: the alphabet is a digital technology, not an analog one, enabling verbatim copying. So digitality is an advantage for archival, not a disadvantage. (You alluded to the survival of the Tanakh in your comment, so you know about it, but somehow you got its significance backwards.)
Media longevity and interpretability are, as you say, important problems. They require significant engineering effort to solve. Also, it is possible to engineer ways to sabotage media longevity and interpretability, thus greatly reducing the chance of archival success. Making your documents' interpretability conditional on the survival of a mere commercial interest is a good example of such archival sabotage, as many authors and historians have discovered, to their sorrow, in the Geocities era.
I have the sense that you're not seriously engaging with the problem, but rather looking for a quick dismissal, so you may not be interested in the following:
It's relatively straightforward to micro-engrave megabytes of programs onto copper, glass, silicon, nickel, and similarly stable materials, which will easily survive for a million years. Gigabytes are more expensive. Clay and stone are also pretty easy, but with the straightforward fabrication techniques, they have lower data capacity because of their grain structure. PET is commonly used as an archival material, and should easily last longer than paper, but it's less trustworthy — its combustibility in air means that a PET-air system is merely metastable, not stable, and the timespan of that metastability could easily be much less than that of glass.
But even old hard disks could easily have media longevity exceeding the mere 3000-year timespan we're talking about.
(Previously I was talking about e.g. the Rhind Papyrus, not the Hebrew bible, but anyway...)
As for Observable: you could relatively quickly and straightforwardly build yourself a mediocre version of the editor interface. It is quite likely that, if the platform becomes extremely popular, better archival solutions will be found. Being an early adopter of this kind of technology involves a bit more risk, especially when there is a business involved, but even when there is not. Plenty of open source file formats are now extremely inconvenient to use.
Micro-engraving is definitely feasible as a form of archival at the individual level; the engraved text will survive being lost or buried, though not melted down. Given the relative ease of micro-engraving many megabytes, though, it might make more sense as a collective project. 1200 DPI is two gigabits per square meter, and engraving a square meter of material is a lot less work than writing 300 megabytes of even prose, much less working code.
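Checking that density figure with a back-of-envelope calculation (assuming a 1200 DPI grid with one bit per engraved dot):

```javascript
// Back-of-envelope density check: 1200 dots per inch, one bit per dot.
const dotsPerInch = 1200;
const inchesPerMeter = 39.3701;
const dotsPerMeter = dotsPerInch * inchesPerMeter; // ≈ 47,244
const bitsPerSquareMeter = dotsPerMeter ** 2; // ≈ 2.23e9, i.e. ~2 gigabits
const megabytes = bitsPerSquareMeter / 8 / 1e6; // ≈ 279 MB, ~300 megabytes
console.log(bitsPerSquareMeter.toExponential(2), megabytes.toFixed(0));
```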
But making Lots Of Copies does indeed Keep Stuff Safe, which is why I was puzzled that you seemed to be dismissing that route in your original comment. But you can do both! Media longevity is entirely compatible with mass production. And later copying is necessary for long-term archival, though not for such short timespans as the ones we've been discussing. Media longevity is necessary to bridge the gaps between periods that are politically favorable for archival — the tragedy of the Maya codices and the khipu is due to a politically unfavorable period.
We don't have a lot of information about what the next 3000 years will be like, but with modern machinery it's relatively straightforward and affordable to produce sub-gigabyte-scale archives that would have easily survived short periods like the last 3000 years, even without copying. If Ahmes had had FIB etching, we wouldn't just have his algebraic and geometric algorithms; we'd know all the 12th-dynasty celebrity gossip about which court eunuchs had crushes on which priestesses. There aren't many people working on archival media because product-market fit is really elusive with a 3000-year-long feedback cycle, although Norsam at least has an available product, though at a price point that discourages mass production.
Thanks for the information about Observablehq! I'll definitely give it a try.
What would be fascinating to me is if these tools made it significantly easier to make "Explorable Explanations" (Nicky Case, Bret Victor).
I love the Raft visualization (https://raft.github.io/) and would be ecstatic to see more stuff like that surfacing around, making it easier to grok sophisticated distributed systems. Next logical step would be to actually control said systems from the visualization itself.
Here’s a sequence of sound wave primers by Dylan Freedman: https://observablehq.com/@freedmand/sounds
Mike wrote an exploration of the Lotka-Volterra equations for simulating the dynamics of populations of predators and prey: https://observablehq.com/@mbostock/predator-and-prey
Or something physical, like Jacob Rus figuring out how a chunk of metal from a machinist’s workshop can scribe out a perfect sine curve: https://observablehq.com/@jrus/sinebar
Or something fun, with Krist Wongsuphasawat’s tapioca pearl to tea level Boba Calculator: https://observablehq.com/@kristw/boba-science
Or my personal favorite (although I can't view it in Chrome), Toph Tucker’s poignant reverie on correlation, eugenics, and the morality of applying statistics to human affairs: https://observablehq.com/@tophtucker/inferring-chart-type-fr...
 - https://idyll-lang.org/
(b) This tool is useful in a wide variety of fields that are not statistics- or data-visualization-adjacent.
Eh. Not entirely right. Many journalists, specifically those who work with data analysis and visualization, use Observable notebooks in their reporting. Other groups also use the notebooks for documentation purposes.
1 - https://en.wikipedia.org/wiki/Reactive_programming
2 - https://rxjs.dev/guide/observable
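For anyone unfamiliar with the Rx sense of the word, here's a stripped-down sketch of that pattern (just an illustration; the real RxJS API adds unsubscription, operators, error handling, etc.):

```javascript
// Minimal Rx-style "observable": an object whose subscribe method
// pushes values to an observer, then signals completion.
function fromArray(items) {
  return {
    subscribe(observer) {
      for (const item of items) observer.next(item);
      observer.complete();
    },
  };
}

const seen = [];
fromArray([1, 2, 3]).subscribe({
  next: (x) => seen.push(x * 10),
  complete: () => seen.push("done"),
});
console.log(seen); // [ 10, 20, 30, 'done' ]
```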
ObservableHQ.com is in the top spot in Silicon Valley and NYC, and a few spots down the first page in Madrid, after Angular Observables and RXJS Observables. So it looks like we’re doing pretty alright so far.
Working in Emacs and Vim all day, I can't tell you how immensely annoying it is to switch between mouse and keyboard when editing code blocks.
I built a tool for Jupyter notebook code reviews that has a commenting feature for GitHub pull requests (regular text comments). It would be great to have actual code edits/suggestions as review comments.
Highly related xkcd: https://xkcd.com/1271/, and highly related alt-text: "And if clicking on any word pops up a site-search for articles about that word, I will close all windows in a panic and never come back."
I have really enjoyed Observable as a literate programming platform though. Being able to mix text, diagrams, data tables, multimedia output, interactive inputs, easily modified subroutine implementations, imports of external data and subroutine libraries, ... makes for a very expressive and reader-friendly (albeit with a bit of a learning curve for readers) platform for writing interactive documents and for doing research.
For example, here’s a recent relatively literate-programming style notebook of mine (this one was mostly done in an afternoon for fun as a way of procrastinating from other work, not as a research project), https://observablehq.com/@jrus/munsell-spin
It’s very low-friction to set up a new notebook and just start writing, or to open an existing draft notebook and work on it. I have found this to be extremely helpful in the past year or two trying to work during my 2-year-old’s nap time.
Pen and paper and Observable notebooks are the two most important tools of my recent research efforts.
I shouldn't be surprised, that man had a lot going on. Just TeX and Big-O notation alone would be enough to permanently enshrine him in the CS heap of fame, but the hits don't stop there!
Kinda funny how much just one guy could affect research.
It's a very gradual progression into that. Running Chrome Version 71.0.3578.98 (Official Build) (64-bit) on Ubuntu 18.04.
It would be great if the HN submissions form had a field you could optionally fill out with such a text. Because if you can't tell from my example above, I have no idea what this is.
Edit: with regard to ReactiveX...well yeah. But it’s nice that it’s implemented in so many languages! (Not just RxJS) http://reactivex.io/languages.html
"Assume good faith."
I, for one, am irritated by the name and hope they either change it or that the project fades. Things are confusing enough without this kind of nonsense. Yes, I get that each cell in a spreadsheet is observing its neighbors. Yes, it's nice that you can "observe" your code auto updating. Yes, it's nice that people are trying to improve upon spreadsheets as a programming model. Don't be purposely obtuse right out of the gate by naming your project after a widely used pattern, tool or technique. Call it Cells or Panopticon or something. Please.
In other news, check out my new web framework, called MVC.
The term “observable” used as a noun in a programming context seems to me very clunky and vague. YMMV.
I am much more familiar with the term “observable” in physics. https://en.wikipedia.org/wiki/Observable
> Other user interface toolkits that employ this [Observer] pattern are InterViews [LVC89], the Andrew Toolkit [P+88], and Unidraw [VL90]. InterViews defines Observer and Observable (for subjects) classes explicitly. Andrew calls them "view" and "data object," respectively. Unidraw splits graphical editor objects into View (for observers) and Subject parts.
I wish companies like these would just explain their product in simple language like a normal person.
This is what should be on the homepage!