This new product might be better, but Shimmer was publicly criticized as exploitative when it launched. I saw those criticisms at the time, gave the product a fair chance anyway, and unfortunately found them to be accurate.
I used Shimmer in 2022. The app had poor UX and frequent bugs, and the core offering (weekly Google Meet sessions with a “coach”) felt like generic self-help and not personalized coaching. The promised between-session support mostly consisted of DM'd article links, even after I raised that concern directly.
The sessions themselves often felt unprofessional, with background noise, unstable connections, and poor audio quality. The coach WFH'd from their couch during the call. Given the price (hundreds of dollars per month at the time), the gap between what was marketed and what was delivered was significant.
Hopefully the new product addresses these issues, but I’d encourage people evaluating it to look at Shimmer’s prior execution and customer feedback, not just the announcement.
I'm sorry you had a poor experience! 2022 was the year I got diagnosed, and the app definitely wasn't nearly as good as it is now; I'm sorry you went through those bugs. But more importantly, I'm sorry about the experience you had with your coach. We take these flags really seriously, and since then we've put in several rounds of guardrails: in the hiring process, in ongoing contract renewals with our coaches, and in general culture & training. Now we only accept <3.5% of qualified coaches and have far more processes in place to make sure these things don't happen, and to catch them if they slip through. Because we don't record or supervise the sessions for privacy reasons, there have definitely been a few situations like yours, and again, we do take them really seriously! We do have hundreds if not thousands of really impactful stories and progress (I've personally cried in multiple feedback calls), so if anyone is reading this, I highly recommend checking out this page too! https://testimonial.to/shimmer-care/all
Thanks for giving a pretty solid real answer here.
While I have you: any advice for how to suggest to someone else that they consider your app? I'm met with defensiveness at every suggestion I make; the app seems compelling enough to me as a neurotypical person though, and I'd love to see them try something like this.
I completely agree. Even if they completely tank, I can open my obsidian directory in a text editor or command line and still use it. I would still have access to features that are common in other apps, like full-text search or plain file sync. Attachments are just files in the filesystem that can be opened in any image viewer. Basically, if I can't use obsidian anymore, I can still use my notebook and take notes without implementing or finding new software.
Any reference for obsidian's specs? The closest thing I could find is this: https://docs.excalidraw.com/docs/codebase/json-schema but it seems to be really minimal and doesn't go into detail on what properties belong in elements
I think you're talking about the trade offs between supporting features like "DOM manipulation, state management, animation, etc." and "shipping updates" out of the box, versus only storing the data as simple files and leaving everything else to the implementation.
Sqlite as an application file format is great [1], but for a knowledge base / note taking app the benefits are not worth the tradeoffs in my opinion.
Sqlite is more performant and provides lots of built-in features. However, most note-taking users do not have enough notes or files to benefit from that performance. Sqlite will also lock the user into the application, whereas a "pack of files" can be used in the shell or a text editor. Using markdown files + an open JSON format has the benefit of being supported by multiple applications (e.g. sometimes I open my obsidian vault in vscode), while a sqlite database would need a proprietary schema coupled with a single application.
I prefer an open file format that isn't tied to a vendor. A "data bridge" might handle syncing and diffing more efficiently than plain files, but it is still tied to the vendor. For example, I prefer not to pay for Obsidian Sync, and I'm able to use a git plugin and store my files on Nextcloud to sync between my devices. This leverages existing tech without having to implement features from the ground up.
Except the markdown files are tied to a vendor for anything beyond trivial formatting, since markdown is a simplistic, underspecified format without extensions. So all these Obsidians specify their own extensions to add complexity, which your vscode does not support (and neither would git diff help you see data changes in a sea of formatting changes).
And this spec is for complicated layouts, not trivial notes you're comparing it to, so your intuition from simple notes doesn't translate to this use case
> I think you're talking about the trade offs between supporting features like "DOM manipulation, state management, animation, etc." and "shipping updates" out of the box, versus only storing the data as simple files and leaving everything else to the implementation.
I'm not sure I understand. Can you clarify?
> Sqlite is more performant more performant and provides lots of built-in features. However, most note taking users do not have enough notes or files to benefit from that performance.
For a graph with lots (thousands+) of nodes/edges, SQLite is probably capable of being more performant than a JSON file, depending on whatever specific kind of performance we're measuring. That said, to me, the most interesting thing that SQLite gives for applications when compared to flat files is data integrity, via transactions and schemas/constraints. Performance is nice but not even close to the most interesting thing about SQLite in most applications. Performance has never been the reason I've chosen SQLite over flat files for my applications.
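To make the data-integrity point concrete, here is a minimal sketch using Python's stdlib `sqlite3`. The table shape is hypothetical (loosely modeled on a canvas node), but it shows what a CHECK constraint buys you that a flat JSON file cannot: bad writes are rejected at the database boundary.

```python
import sqlite3

# Hypothetical node table; the schema itself enforces invariants.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE nodes (
        id   TEXT PRIMARY KEY,
        type TEXT NOT NULL CHECK (type IN ('text', 'file', 'link', 'group')),
        x    INTEGER NOT NULL,
        y    INTEGER NOT NULL
    )
""")

con.execute("INSERT INTO nodes VALUES ('a1', 'text', 0, 0)")  # fine
try:
    # 'blob' is not an allowed type, so the CHECK constraint rejects it.
    con.execute("INSERT INTO nodes VALUES ('a2', 'blob', 0, 0)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

With a JSON file, the equivalent invariant lives in application code (in every application that touches the file), not in the data itself.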
> Sqlite will also lock the user into the application, whereas a "pack of files" can be used in the shell as a text editor. Using markdown files + a open json format has the benefit of being supported by multiple applications (e.g. sometimes i open my obsidian vault in vscode), while a sqlite database would need a proprietary schema coupled with a single application
In a sense you're right about this! I'll grant it's easier to open a JSON file in vscode and edit it if you already know vscode and JSON. That said, SQLite is in the public domain with a well-defined, stable format and there are countless free and open source database editors/viewers out there.
SQLite is self-describing, also. You open `sqlite3` and type `.schema` and it shows you the database schema. You enter a query and get some results. It's all right there. So, while a database might have a schema that was designed for a particular application, that doesn't mean you as the end user can't tinker with it, and the number of people who know SQL is rather large.
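The same introspection works programmatically, not just from the `sqlite3` CLI: the schema is stored inside the database file itself, in the `sqlite_master` table. A tiny sketch (the `notes` table is a made-up example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE notes (id TEXT PRIMARY KEY, body TEXT)")

# Equivalent of the CLI's `.schema`: read the stored CREATE statements.
for (sql,) in con.execute("SELECT sql FROM sqlite_master WHERE type = 'table'"):
    print(sql)
```

So any tool that can open the file can recover the full table definitions without outside documentation.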
> SQLite is self-describing, also. You open `sqlite3` and type .schema
My feeling is that XML and JSON are always easier to parse than SQL. What I often see is that a large amount of application logic is hidden in how the tables are joined or selected. In SQL this is almost always the case; it's often the case in JSON as well to some extent, but usually a lot less.
In the end it comes down to that xml was not the end all of integration formats, neither are its replacements. Actually being able to read and understand what you parse without tooling is an immense help.
Sometimes you have nodes that overlap each other, so you want to control whether or not a node is in front of or behind another node.
Though yes, they could have explicitly defined a z-index or defined a convention on how the ordering should work (first nodes on top and last nodes on the bottom, or vice versa?). It's interesting to think about the trade-offs between explicitly defining these things vs. leaving the application to implicitly make the choice. JSON Canvas seems to be designed for use in tandem with markdown files in a note-taking app, so it makes sense why they opted for the more implicit design, to be similar to markdown.
Looking at their reference impl, it does seem that array index implies z-index, in which case fair enough, I suppose an array does make the most sense - they should probably document the z-indexing though!
> Nodes are placed in the array in ascending order by z-index. The first node in the array should be displayed below all other nodes, and the last node in the array should be displayed on top of all other nodes.
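Under that array-order convention, painting nodes in array order gives the correct stacking for free, and "bring to front" is just a list move. A small sketch (node shape is hypothetical beyond the `id` field):

```python
# First node is bottom of the stack, last node is top.
canvas = {"nodes": [{"id": "a"}, {"id": "b"}, {"id": "c"}]}

def bring_to_front(canvas: dict, node_id: str) -> None:
    """Move the node to the end of the array, i.e. the top of the z-order."""
    nodes = canvas["nodes"]
    i = next(i for i, n in enumerate(nodes) if n["id"] == node_id)
    nodes.append(nodes.pop(i))

bring_to_front(canvas, "a")
print([n["id"] for n in canvas["nodes"]])  # ['b', 'c', 'a']
```

The trade-off the earlier comments point at: move up/down/front/back are easy this way, but any tool that reorders the array for other reasons (say, sorting for a cleaner diff) silently changes the rendering.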
I agree with a lot of comments that it's minimal, but in my opinion that is a good thing. I'm a big fan of Obsidian, and one of the things I like about it is that the data source is all markdown files. Markdown is meant to be very lightweight and portable, and overcomplicating it will limit adoption and extensibility (imagine markdown vs pdf).
JSON Canvas seems to follow in that spirit by being very lightweight, so a lot of implementation details (i.e. how files are rendered, what file formats are supported, edit tags, etc.) are left open to the implementation.
Markdown and JSON are meant to be non-opaque file formats that prioritize portability and human readability over other features. An application format like Sqlite has a lot of benefits over markdown, but it loses the benefits of text-based formats, like being compatible with git, and is less portable.
What I would like to see is a convention for extending the node and edge definitions, similar to frontmatter in markdown files: something that is not required for basic rendering but is a nice-to-have for applications to consume. That way portability between apps of varying complexity can be maximized while still allowing for more complex features that some apps might implement. Markdown has the benefit of supporting extensions (for example, tables in GFM): apps that are not compatible can still render the unsupported markup. But there should be an explicit way to extend open JSON formats.
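One possible shape for such a convention, sketched in Python: treat any key outside the spec's core set as an extension, and preserve unknown keys on round-trip instead of dropping them. The core key set and the `x-rotation` key here are hypothetical illustrations, not part of the published spec.

```python
import json

# Hypothetical "core" keys a minimal renderer understands.
CORE_NODE_KEYS = {"id", "type", "x", "y", "width", "height"}

def split_extensions(node: dict) -> tuple[dict, dict]:
    """Separate spec-defined keys from extension keys, keeping both."""
    core = {k: v for k, v in node.items() if k in CORE_NODE_KEYS}
    ext = {k: v for k, v in node.items() if k not in CORE_NODE_KEYS}
    return core, ext

node = json.loads('{"id": "n1", "type": "text", "x": 0, "y": 0, '
                  '"width": 100, "height": 50, "x-rotation": 45}')
core, ext = split_extensions(node)
print(ext)  # {'x-rotation': 45}
```

A simple app would render from `core` and write `ext` back out untouched, which is roughly how unsupported markdown extensions degrade gracefully.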
Some feedback off the top of my head and from reading the comments:
1. *Specifying node ordering*. Obsidian seems to just send whatever is last touched to the top, but this makes a common feature in many apps (move up/down/back/front) more difficult to implement.
2. *More explicit definition for line shape*. Adding a way to "bend" a line in a specific way. Useful for creating more complex diagrams.
3. *Relations between nodes*. Group nodes contain child nodes, but the spec doesn't specify how the child nodes are defined. I would expect it to have a `children` property to nest nodes. Obsidian seems to implicitly link nodes to groups based on whether their bounds intersect. This makes it difficult to implement some common features:
a. nodes that exist outside of the bounds of its group, for example a node that "floats" just outside of the edge of the group's borders.
b. nodes that are not part of a group even though they exist within the bounds of that group.
There are many different ways for a canvas app to extend the spec to implement those features, but it seems like something that should be defined in the spec to maximize portability
4. *Extensibility.* Either explicitly support or provide a standard for defining more styles for nodes and edges, such as stroke width, stroke style, rotation, etc. It seems like "color" should be a part of this as well, rather than being an explicit property of a node.
5. *Embeds.* Supporting "embeds" as a node type. I even think the "file" node should be redefined as `embed` with a `uri` property to support different schemes (`file://`, `oembed://`, `https://`) and maybe a `mime-type` (`text/markdown`, `image/webp`). The file node's "subpath" property seems to be only relevant for markdown files, so I think that should be an extension rather than explicitly defined.
6. *YAML* :) (Should just seamlessly convert from JSON, but YAML is more readable than JSON.)
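The `embed` idea in item 5 is straightforward to dispatch on, since URI schemes are already machine-parseable. A sketch with the stdlib (the `embed`/`uri`/`mimeType` node shape is the proposal above, not the published spec):

```python
from urllib.parse import urlparse

def classify_embed(node: dict) -> str:
    """Dispatch a hypothetical embed node on its URI scheme."""
    scheme = urlparse(node["uri"]).scheme
    if scheme == "file":
        return "local file"
    if scheme in ("http", "https"):
        return "remote resource"
    return f"unsupported scheme: {scheme}"

node = {"type": "embed", "uri": "file:///vault/note.md",
        "mimeType": "text/markdown"}
print(classify_embed(node))  # local file
```

This is the portability argument in miniature: apps that don't understand a scheme can still detect it and degrade gracefully rather than failing to parse the node.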
Being able to design standards that evolve over time and making tough decisions about what to make explicit and what to leave implicit is a skill I want to improve on as a developer this year. Does anyone have any resource recommendations or best practices to recommend for me to research?
I haven’t had any issues with yaml in markdown frontmatter or openapi specs. What kind of issues do you see with list and maps that make you against yaml? I agree that for computers and consistency json is preferred. I already use a linter for my markdown files so I would do the same with yaml to keep lists and maps consistent
> You like C++ because you're only using 20% of it. And that's fine, everyone only uses 20% of C++, the problem is that everyone uses a different 20% :)
YAML isn't terrible if you only ever have to read what you wrote. Now consider that there are 63 different ways to write multi-line strings in YAML -- how many of those have you committed to memory? Yeah... now throw 10-100 developers into the mix, each with their own favorite alternative syntaxes -- good luck making sense of your YAML.
I used to feel that way, and in some sense I still do, but in practice it fits right in with my other linters so it's not any trouble.
Config language design seems to have a surprisingly "bumpy" design space, where optimizing for one thing (human readability, or human familiarity, or tooling support, or flat data, or nested data, or strong types, or DRY, etc...) necessitates tradeoffs in other areas.
In the past I had to craft yaml files. Sometimes I needed quotes for a string; sometimes I had to put a dash in front of a key, or just not. You basically needed to have the whole schema in your head.
There can only be so much nesting before you lose track of which items belong to which parent.
Copying some yaml structures over to another level requires care, as the result might look correct, but the white space parser thinks otherwise.
I have lost hours debugging yaml files when a dash was missing somewhere or when I needed one more leading space. The parser accepts it happily, but half of the typical javascript programs will only detect that things are wrong when they have already executed half of your spec. The other half will just run with input that wasn't intended that way.
I remember writing artillery.io test specs where all those problems pop up.
Now the good thing from JSON is JSON Schema. The latest spec allows you to specify quite advanced validations.
Yaml has no such thing.
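A minimal stdlib sketch of the kind of check a JSON Schema validator automates; in practice you'd point a real validator library at a full schema document, and the required node keys below are a hypothetical example:

```python
import json

REQUIRED = {"id": str, "type": str, "x": int, "y": int}

def check_node(node: dict) -> list[str]:
    """Hand-rolled stand-in for schema validation: report missing or
    wrongly-typed keys instead of silently accepting them."""
    errors = []
    for key, typ in REQUIRED.items():
        if key not in node:
            errors.append(f"missing required key: {key}")
        elif not isinstance(node[key], typ):
            errors.append(f"{key} has wrong type")
    return errors

print(check_node(json.loads('{"id": "n1", "type": "text", "x": 0}')))
# ['missing required key: y']
```

The point is the failure mode: with a schema, a malformed document is rejected up front, instead of the half-executed-before-failing behavior described above.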
As to your remark: yaml for front matter is defensible, as you don't have deeply nested structures. Though, as an obsidian user, you want to make sure your front matter conforms to your own schema. That would require writing a JSON Schema and then having your yaml internally converted to JSON before handing it over to the validator.
A spec is worthless if you cannot validate against it. Json and xml have a good story there.
I concede that yaml is more human-readable than json without an editor. Correctness is the holy grail though.
JSON5 has comments. This is the major thing. A configuration which does not allow comments is not a configuration for humans, it's a serialization for programs.
Agreed. Json seems to be designed with machine interpretation as the first concern. Having to wrap object keys in quotes eases parsing, I guess, but for humans it is a nuisance.
> Markdown and JSON are meant to be non-opaque file formats that prioritize portability and human readability over other features
I don't think human readability is a critical feature of JSON at this point. If that's your priority, you can use YAML. Readable JSON is nice because for small files you can read or edit small sections of it, and it's easy to debug when manipulating it with machine code. But there are plenty of cases where a huge JSON file is still useful even if it's barely human readable.
My heuristic has always been: use YAML if you expect humans to create the file (or maintain large chunks of it), otherwise use JSON. For example, Kubernetes config is YAML because humans create it from scratch, and it would suck to do that with JSON. Whereas package.json is JSON because machine code initializes it and humans only make minor edits to specific fields.
In the case of this canvas format, I wouldn't expect humans to create the file from scratch, so use JSON over YAML. Then the question is, will humans even care about reading the raw JSON? Probably not. So why not use something like SQLite or Protobuf? The most compelling reason would be that humans writing code to interface with the format can use parsing tools from their language's standard library.
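That "standard library" argument is concrete: reading the format needs no dependencies at all. A sketch (the canvas snippet is a made-up minimal example, not taken from the spec):

```python
import json

# Hypothetical minimal canvas document with one text node and no edges.
raw = ('{"nodes": [{"id": "n1", "type": "text", "text": "hello", '
       '"x": 0, "y": 0, "width": 100, "height": 50}], "edges": []}')

canvas = json.loads(raw)  # stdlib only; no schema, driver, or codegen step
print(len(canvas["nodes"]), "nodes,", len(canvas["edges"]), "edges")
```

Compare SQLite (needs a driver binding) or Protobuf (needs the schema plus generated code) for the same one-liner.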
> I don't think human readability is a critical feature of JSON at this point. If that's your priority, you can use YAML.
Wow you have kinda lost the plot on a few things.
JSON was designed to be human readable and writable. YAML was designed to be a human readable format for the automated interchange of data between automated systems. Human writability was neither a goal for YAML nor its intended use. Like everyone else on the frakking planet, you’ve misunderstood what YAML was intended and designed for. YAML was never intended for human-written configuration storage, which is what everyone used it for the instant after they became aware of it.
YAML can bite you very hard if you misunderstand it. JSON is simply invalid if you misunderstand it when writing it.
If you don’t need human readability, use a binary format. Binary formats are so freaking fast compared to literally any structured text format, past, present, or future. High speed and low latency matter and binary formats make both of those easier.
If you need to inspect the binary data, write a viewer using the code you use to read it. It’s a lot simpler than people believe it to be. I find Protobuf to be more of a hassle than writing the code myself, and protobuf is very easy to use, and I’m quite a moron. Binary stuff is not hard.
Yep, I think the compelling reason of humans writing code is key here. SQLite would make it less accessible for people to write external tooling to integrate with an obsidian vault. There are lots of existing and open tools that support diffing/parsing/syncing/manipulating JSON, while with sqlite you have to not only know SQL but also support another application's database schema, which third-party developers are less likely to do.
> I agree with a lot of comments that it's minimal, but in my opinion that is a good thing
The purpose of a spec is to specify, and if you don’t specify and leave things open to interpretation, then that completely defeats the purpose.
Anybody who’s worked with a poorly defined spec knows exactly how bad this can be. A good example would be the shambles that is the HL7 spec used in healthcare.
A former colleague had a phrase for this: “once you’ve seen one HL7 message… you’ve seen one HL7 message”. Which really highlights the issue of a standard that’s open to interpretation.
The issues raised (in the comments here) seem to hint at a lack of specificity. That is something that they should really look at improving.
I think overall any group that tries to come up with a standard that can unify a field should be lauded and supported. But perhaps calling this a 0.1 release, and taking the feedback on board, would be the best way forward.
> I agree with a lot of comments that it's minimal, but in my opinion that is a good thing. I'm a big fan of Obsidian, and of the things I like about it is the data source is all markdown files. Markdown is meant to be very lightweight and portable, and overcomplicating it will limit adoption and extensibility (imagine markdown vs pdf).
What Markdown got right was creating a nicely readable lightweight markup syntax.
And Markdown also demonstrated how to set a bad precedent for future consolidation by being so loosey-goosey and underspecified (and with a bad reference implementation). That there is a CommonMark at all is solely because of others picking up the slack and doing the thankless gruntwork of creating 100 if-then-else statements in a semi-formal prose format.
Looking at this with an open mind, I'm curious what benefits running SQLite in WebAssembly with a proxied web worker API layer gives compared to using localStorage or something similar.
* Using SQL has clear benefits for writing an application. You can use existing stable tools for performing migrations.
* Using SQLite in a filesystem offers many advantages w.r.t performance and reliability. Do these advantages translate over when using WebAssembly SQLite over OPFS?
* How does SQLite / OPFS performance compare to reading / writing to localstorage?
* From what I know about web workers, the browser thinks it is making http requests to communicate with subzero, while the web worker proxies these requests to a local subzero server. What is the overhead cost with doing this, and what benefits does this give over having the browser communicate directly with SQLite?
* I remember seeing a demo of using [SQLite over HTTP](https://hn.algolia.com/?q=sqlite+http) a while back. I wonder if that can be implemented with web workers as an even simpler interface between the web and SQLite and how that affects bundle size...
> Using SQLite in a filesystem offers many advantages w.r.t performance and reliability. Do these advantages translate over when using WebAssembly SQLite over OPFS?
I would say generally yes. SQLite is known for its performance, and with Wasm SQLite, performance is strongly related to how the file system operations are implemented. There have been some good advances in this area in the last couple of years. My co-founder wrote this blog post which talks about the current state of SQLite on the web and goes into performance optimizations:
* localStorage is small, volatile, OPFS is big/durable
* main thread <-> db vs main thread <-> worker <-> db:
- firstly, sqlite with OPFS has to run in a webworker
- even if it were possible to run it in the main thread, this approach allows for a code structure that is similar to a traditional architecture (frontend/backend split), and it's easy to route some requests to the web worker while letting other requests fall through to the backend server, without needing to worry about that in the "frontend code"
I somewhat agree, because as an end user all I had to do is disable the bundled apps (teams, etc) and switch the default browser to firefox. I agree with the criticism though, it really leaves a bad taste in my mouth to see all the extra prompts encouraging me to keep Edge, and the social media links (instagram, tiktok) in my start menu. But for someone who experienced bloatware, this stuff was pretty easy to remove / change. The main difference is that this "bloatware" came directly from Microsoft and not third parties.
Did your entire Google account get deactivated? My Google account is linked to Gmail, GCP/Firebase, Youtube, Drive, and a ton of SSO apps. Used it for the past decade, but I know now not to put all my eggs in one basket so I have backups of my most important data. Still, what happened to you is a big fear of mine.
> My Google account is linked to Gmail, GCP/Firebase, Youtube, Drive
Don't. Have separate accounts for dev stuff, entertainment, storage, social media each.
> and a ton of SSO apps
Never do so. Use a password manager (self-host if you want) and use good old username-password to sign up and log in. I once had all my passwords stored in Chrome leaked in a security breach.
A friend's Gmail account, which was connected to his bank, was suspended. He had to waste literally tens of hours in commutes, waiting, and meetings to change it.
>Have separate accounts for dev stuff, entertainment, storage, social media each.
FYI this isn't enough. There are thousands of reports of people having all their Google accounts terminated because they shared the same payment method/IP/cookie session data/etc. This is how bad it can get: https://www.reddit.com/r/tifu/comments/8kvias/tifu_by_gettin...
Agreed, I have a small 3 node cluster at home and I use all of those things you listed. I had to dive very deep in the details and learn a ton of new things to get it right, and I had all the time I wanted because it was just for fun and learning. It's almost like having my open source self-hosted AWS (in terms of abstraction from infra, not in reliability)
Would I host any of my critical side projects on my cluster? Probably not. Kubernetes was made with large organizations (google made it after all) in mind. As a solo developer, it's better for me to host my apps on a VM and move to AWS/Azure/GCP if I need to scale.