Greetings HN community! I'm excited to share the latest iteration of my reMarkable streaming tool, designed to enhance remote work productivity. In 2021, I developed a tool that enabled me to stream content from my reMarkable tablet to my laptop, making it an invaluable asset during virtual meetings and presentations.
My newly published article delves into the details of this revamped version, discussing its architecture, components, and the iterative journey of improving user experience. As a product manager, I gained unique insights into user perspectives, which drove me to simplify the tool's activation process.
This article is a deep dive into the technical aspects of the tool, exploring how I eliminated the need for a local service and optimized network consumption. If you're curious about DIY tech solutions, optimizing remote work setups, or simply exploring innovative projects, I invite you to explore the article.
EDIT: Ctrl-C and restart with ./goMarkableStream got it working more or less.
Saving the message below in case it helps anyone else.
EDIT2: Awesome when it works - but the service is quite spotty. Still frequent `waiting for reMarkable screen`. SSH shows everything as normal, except for the occasional:
EDIT3: Looks like the limitation is one browser (and one IP address) per stream - reasonable given the usage :). Still, sometimes no matter what address and browser I try, I get `waiting for reMarkable screen`
EDIT4: tried `nohup ./goMarkableStream &`, closed PuTTY and restarted client computer - all browsers giving `waiting for reMarkable screen`
Agreed that is completely reasonable. One stream is sufficient.
Unfortunately, per EDIT4, it looks like it is not possible to leave this service running on the reMarkable2 until the next session.
Any way to reset without going into SSH again?
EDIT: looks like enabling USB on the reMarkable2 got the stream working. Curious, though, why the web server worked at all before then (possibly because both devices shared the same Wi-Fi?)
FINAL_EDIT:
I give up, there is no sense to this madness. Suddenly the stream starts working on the same https://10.11.99.1:2001/ that had been showing the `waiting for reMarkable` canvas for the last 15 minutes.
Possibly there is some sort of timeout for the old stream?
There is a timeout of one hour (after an hour you need to refresh the tab).
Does adding a & at the end not help to keep it running until next time?
Do not hesitate to open an issue on GitHub.
It seems the timeout is about 15-20 minutes (that's when a new tab or new browser starts working).
Adding & did work: I was able to go to a different computer, attach the USB cable, and have https://10.11.99.1:2001/ work (without SSHing in and restarting goMarkableStream).
An alternative I am very happy with is the SuperNote[0].
You can do screen mirroring, and this is really nice for quickly drawing a diagram during a meeting.
The only inconvenience of the approach is that the SuperNote starts a small web server and you basically use Firefox to access it. It is very responsive, as one would expect, but this means that you need to have your laptop/computer on the same network as the SuperNote. In a home office setup, this is not an issue, but at work, your company policy may prevent this.
Anyway, be it RM2 or SuperNote, these tools are great for people who enjoy writing down ideas with pen and paper. The feeling is really different from doing it in an app or a plain text document. You can doodle in your notes :-)
Their whole handling of the situation suggests that the company doesn't care about any regulation and they're just as likely to lie or refuse to comply with other legal requirements for their products. When asked about source code they actually said, "well, the US is anti-China, so we don't think we should have to."
That just doesn't make me feel confident about their products. If something goes wrong is there going to be customer support, or are they going to decide one day that I don't get that because I live in the US? If I order an Onyx Boox can I trust it will even be the same hardware as someone else gets or are they going to treat labeling requirements as optional as well?
People get bent out of shape about TikTok, but TikTok has never said, "we don't think we should have to obey legal requirements because we're mad at you." And it's so transparently just an excuse for them to do what they want; their anger about anti-China sentiment in the US is not preventing them from selling to the US. Convenient that it only prevents them from obeying legal requirements. I don't see how I could trust a company with these kinds of business practices; they're advertising to me that the moment it's in their best interest they'll break the law and throw me under the bus.
You are right. But I bought one knowingly and it works well.
Microsoft has been a horrible, anti-competitive company built on unethical values, yet I chose their Office 365 for my company. Many here use their products and take a salary from such a company.
Google has left the “don’t be evil” in the dust, with anti-competitive measures and short-changing of employees. Hardly ethical, yet widely used in this crowd, and many here choose to draw a salary from them.
I hate the Castro brothers and what they do to the Cuban people, but I do smoke a cigar now and then.
I hate Nestle, but have a Nespresso.
I hate Big Oil, but I drive an ICE car as it is the only viable option.
And so on.
My point is that both parties are right. Companies can do unethical or illegal things; we can stay away from their products on principle, or cave in out of (a) having no principles or (b) being practical.
In reality I think we all cave in a bit (even Stallman and co), so virtue signaling for choosing the hard path sometimes feels hypocritical.
If your principles are such that you won’t buy a product due to a realistically immaterial instance of a GPL violation, sure. Given the number of GPL violations in the wild I refuse to believe that anyone but the most Stallman-esque among us are living to this standard.
It's not immaterial, it's literally the point of the GPL. If they don't want to release their modifications they shouldn't take advantage of the huge effort that went into the linux kernel. Maybe use a different OS and license it..
Fair enough. Everybody can try to create their own community with their own rules. You seem to have decided that you prefer a community that doesn't adhere to licensing agreements. Don't be surprised if other communities exclude you.
I can understand not being ideologically aligned with Stallman and Co.
I also agree that there must be lots of violations of the GPL out there. Software is often invisible.
That said, I don’t think there are many big companies out there openly doing it. If they get caught, they comply with the minimum effort possible, but they comply. I don’t see them being blatant or cavalier about it.
Two main reasons for me to avoid them:
1. The attitude makes them untrustworthy. If they are blatantly violating this, what else are they willing to ignore? They have obligations towards me as a consumer, for example. Will they respect those? Will they sell my data to others?
2. There’s no guarantee that they will continue being able to operate in my country. A judge could theoretically force them to close shop. So I’d rather not put my data on their product.
Imagine if we treated copyright material like this.
"You bought tickets to Avatar 2? Given the number of pirated copies online, I refuse to believe that anyone but the idiots among us are willing to pay for content."
That feels like the start of talking past each other here heh. GP is making a values statement. If a good product gets to be exempt from the values in someone’s value system, then any product is.
More likely, following the law in this instance isn’t part of your value system here, and neither is free/libre software being used on its authors’ terms, so you are (internally) free to decide to buy something even if it doesn’t adhere to the GPL. If I’m right about that, adherence to these things is a (maybe very-)nice-to-have rather than a core value, whereas the GP, I think, is coming from a place where one of those is a core value.
> Unfortunately Onyx Boox offers one of a kind products which its competitors don't really come close.
I wonder whether, absent the Linux kernel, they would be capable of offering a one of a kind product. It would be more accurate here to say that if they could keep their code proprietary, that would be nice for them, but unfortunately the Linux kernel offers them a one of a kind base to build a product on, and so they need to accept the compromise and release their modifications under the GPL if they want to be able to build a good product.
What makes the Boox tablets unique isn't their Linux kernel, it's their customizations to Android to make the UI more usable on a slow e-Ink display. They could release their kernel sources to comply with the GPL and still keep their Android skin to themselves, just like every other Android device manufacturer does.
The Remarkable 2 can't even run Android apps, which are the main selling point of the Onyx Boox. You can't read Kindle books, use Libby, or whatever service your local library uses, etc.
I've been searching for ereader/note-taking e-ink tablets, so thank you for the recommendation. I was going back and forth between the Remarkable 2 and Boox devices. What has your experience been like with regards to SuperNote's software updates? I'm wary of getting a device that won't be supported with feature updates, or at the very least security updates, for the next 3-5 years.
I did not face the “being on the same network” problem yet.
But I already know that implementing a native ngrok feature would be straightforward - a couple of minutes of work. It would allow streaming over the internet.
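Roughly, with the official ngrok Go SDK (golang.ngrok.com/ngrok), the glue would look something like this - an untested sketch, where `streamHandler` is a placeholder for the existing HTTP handler of the stream:

```go
package main

import (
	"context"
	"log"
	"net/http"

	"golang.ngrok.com/ngrok"
	"golang.ngrok.com/ngrok/config"
)

// serveOverNgrok exposes the existing stream handler through an ngrok
// tunnel. The auth token is read from the NGROK_AUTHTOKEN env variable.
func serveOverNgrok(ctx context.Context, streamHandler http.Handler) error {
	tun, err := ngrok.Listen(ctx,
		config.HTTPEndpoint(),
		ngrok.WithAuthtokenFromEnv(),
	)
	if err != nil {
		return err
	}
	log.Println("streaming at", tun.URL())
	// The tunnel satisfies net.Listener, so the regular HTTP server just works.
	return http.Serve(tun, streamHandler)
}
```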
I'm not the person you're replying to, and I've not compared them myself directly, but I've heard a few people describe it as the difference between writing with pen (Supernote) and writing with pencil (RM2). At least on the Supernote side, that matches with my experience - it's fairly smooth, but it does feel like you're writing on paper rather than, say, a glass screen.
There is almost no noticeable lag while writing, and even while doing more complicated things like scrolling through my Kindle list, or resizing a block of writing, the screen keeps up very well.
Purely from my perspective, I find the smaller screen size (roughly A5 dimensions) of my Supernote a lot nicer than the more A4-proportioned Remarkable, but that's just a preference thing - I like writing in smaller notebooks in general. You can also get the Supernote in a size closer to the Remarkable if that's more what you want though.
Having used both, the best description would be that the Supernote a5x is like writing with a rollerball pen, whereas the Remarkable is like writing with a pencil.
Both are good, fluid, responsive, but slightly different
Excellent write up. This is the kind of content that I love to see here.
Great to see how ChatGPT helped you along the way to learn and solve a problem you weren’t very well versed in.
I resonate with the comment that you were the developer and ChatGPT was the coder! Exactly how I felt with some of my projects.
Also true that simplicity is indeed complex.
I suppose the author chose JPEG because that's easy to turn into MJPEG, which is decoded for free (since you can just hand it off to something that supports it), but I suspect that contributes a lot to straining the reMarkable's CPU.
However, JPEG is better suited to photography, while the graphics displayed on the reMarkable are more illustration-like and, to top it off, monochrome. I think another common image format (such as PNG), or even a crude RLE compression, would be lighter on the CPU.
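For illustration, the PNG route needs nothing beyond the standard library - a rough sketch, assuming one 4-bit gray value per byte, which may not match the tablet's actual buffer layout:

```go
package main

import (
	"image"
	"image/png"
	"io"
)

// encodeFramePNG wraps a grayscale framebuffer in an image.Gray and
// PNG-encodes it. Assumes one 4-bit gray value per byte and
// len(fb) == width*height (illustrative assumptions only).
func encodeFramePNG(w io.Writer, fb []byte, width, height int) error {
	img := image.NewGray(image.Rect(0, 0, width, height))
	for i := range img.Pix {
		img.Pix[i] = fb[i] * 17 // scale 0..15 to 0..255
	}
	return png.Encode(w, img)
}
```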
Being nitpicky here, but the reMarkable is not monochrome, it's grayscale (with 16 gray levels iirc). It also has colour inks that will show in the companion app as blue or red for pens, yellow or green for highlighters. And the proprietary file format is stroke-based, not bitmap.
You are right, this is indeed why I chose JPEG in the first place.
But this is also why I chose a client/server approach: the encoding was done on the client (my laptop) instead of the tablet, therefore the encoding did not impact the tablet's CPU. I did some profiling; most of the CPU was used by the transfer of the data over the wire. This is the reason for the compression.
Oops, I misunderstood where the JPEG was being created and completely missed your own section on RLE. The resulting 200K per frame does still seem a bit high though; I'm sure that could be reduced further.
Take a look at how mosh transfers deltas of terminal viewports over the wire using what it calls “SSP”. That protocol might have some advantages here, especially since you can access the state of the pre-rasterization drawn objects, not just the pixels, on the screen.
Once you do that, you may obviate the need for any transcoding or conversion to MJPEG since you can just redraw the objects on the canvas.
Also, RM2 seems to have a built in Screen Share feature. Might be worth describing the differences (besides not needing their cloud subscription service).
I will try to answer both points:
In the first article, I described how I fetched the picture by reading the virtual framebuffer. I don’t have any knowledge of what’s being drawn. All I have from the beginning is a 2.5Mb byte array.
I don’t use any JPEG compression anymore in this version.
And my understanding is that the native Screen Share feature transmits the vector representation to the client, and the client redraws it with the same algorithm. That is only doable if you know which algorithm they use. I did a small test to decode their format, but it may change more often than the format of the picture.
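For the curious, grabbing that byte array boils down to a single read from the main process’s memory. A minimal sketch, where the pid and address are placeholders (locating them is the non-trivial part I’m glossing over here):

```go
package main

import (
	"fmt"
	"os"
)

// grabFrame reads one raw frame from the tablet's main process memory.
// pid and addr are placeholders: finding the process and the framebuffer
// address inside it is the hard part and is omitted here.
func grabFrame(pid int, addr int64, size int) ([]byte, error) {
	mem, err := os.Open(fmt.Sprintf("/proc/%d/mem", pid))
	if err != nil {
		return nil, err
	}
	defer mem.Close()

	buf := make([]byte, size) // ~2.5Mb for the full screen
	if _, err := mem.ReadAt(buf, addr); err != nil {
		return nil, err
	}
	return buf, nil
}
```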
Does that answer your question?
(thanks for the conversation)
I don't know how often your RLE is hitting the max run length of 16, but assuming it does so often, a further optimization could be to use one bit as a flag to signal that the following run is either a short sequence of 1-8 pixels or a long sequence of a multiple of 8 pixels (i.e. 1 = 8x1, 2 = 8x2, 3 = 8x3).
This lets you compress up to 64 pixels into the space of no more than 2.
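A sketch of the idea in Go - the token layout is made up for illustration and assumes one 4-bit gray value per pixel, which may not match your actual format:

```go
package main

// encodeRun emits RLE tokens for a run of identical 4-bit gray pixels.
// Token layout (one byte, illustrative only):
//   bit 7    : flag, 0 = short run, 1 = long run
//   bits 6-4 : n
//   bits 3-0 : gray value
// flag=0 encodes n+1 pixels (1-8); flag=1 encodes (n+1)*8 pixels (8-64).
func encodeRun(out []byte, value byte, count int) []byte {
	for count >= 8 {
		n := count / 8
		if n > 8 {
			n = 8
		}
		out = append(out, 0x80|byte(n-1)<<4|value&0x0F)
		count -= n * 8
	}
	if count > 0 {
		out = append(out, byte(count-1)<<4|value&0x0F)
	}
	return out
}

// encode run-length-encodes a buffer of 4-bit gray values (one per byte).
func encode(pixels []byte) []byte {
	var out []byte
	for i := 0; i < len(pixels); {
		j := i
		for j < len(pixels) && pixels[j] == pixels[i] {
			j++
		}
		out = encodeRun(out, pixels[i], j-i)
		i = j
	}
	return out
}
```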
The problem is that it requires some analysis on the device, and I really want the code to be as unintrusive as possible.
I will have a look at how to do it in a cheap way.
A naive approach that may still work well is to simply break up the image into fixed, predetermined regions. I don't believe this would be significantly more work for the server if it's already comparing pixel-by-pixel, and the average frame will probably contain updates only in one region. Even breaking it into 4 or 6 would, I think, be a significant payload reduction.
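A rough sketch of the idea in Go, assuming a row-major buffer with one byte per pixel (which may not be the actual layout); only the dirty tiles and their indices would then need to go over the wire:

```go
package main

import "bytes"

// dirtyTiles splits the frame into a rows x cols grid of fixed tiles and
// returns the indices of tiles that changed since the previous frame.
// Assumes a row-major buffer with one byte per pixel.
func dirtyTiles(prev, cur []byte, width, height, rows, cols int) []int {
	tileW, tileH := width/cols, height/rows
	var dirty []int
	for ty := 0; ty < rows; ty++ {
		for tx := 0; tx < cols; tx++ {
		scanTile:
			for y := ty * tileH; y < (ty+1)*tileH; y++ {
				off := y*width + tx*tileW
				if !bytes.Equal(prev[off:off+tileW], cur[off:off+tileW]) {
					dirty = append(dirty, ty*cols+tx)
					break scanTile
				}
			}
		}
	}
	return dirty
}
```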
I will have a look, but at first sight it looks like the repo you mentioned is for the first reMarkable, which uses a kernel-based implementation of the framebuffer.
It is a bit different for the reMarkable 2, as the framebuffer is managed by the main process.
For anyone who didn’t click on that link: this is about the device having the same physical security as the thing it wants to replace (paper). That is, if someone has physical access to it, they can read it.
It is not about the device having known software vulnerabilities in the usual sense we hear about with insecure network-connected devices.
> My initial approach was to compile the client into WASM. This seemed promising as it would let me leverage my expertise in Go development. However, I encountered several limitations that would have necessitated substantial modifications.
The main issue was with the gRPC library: its WASM support is still very limited for now.
Then, JPEG compression is slow in Go and CPU-intensive.
And finally, even if I could generate the MJPEG stream, how would I display it?
Then I thought about the “canvas” mechanism, but I could not address the backing store of the canvas without heavy copying between WASM and JS. And remember, the size was 2.5Mb.
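To give an idea, every frame would have needed a copy dance like this (illustrative sketch only; the canvas id and RGBA layout are assumptions, not my actual code):

```go
//go:build js && wasm

package wasmclient

import "syscall/js"

// drawFrame pushes one RGBA frame from Go/WASM memory into a 2D canvas.
// Every call copies the whole buffer out of the WASM heap into a JS typed
// array before the browser can display it.
func drawFrame(rgba []byte, width, height int) {
	canvas := js.Global().Get("document").Call("getElementById", "stream") // assumed element id
	ctx := canvas.Call("getContext", "2d")

	// The unavoidable copy: WASM linear memory -> JS Uint8ClampedArray.
	clamped := js.Global().Get("Uint8ClampedArray").New(len(rgba))
	js.CopyBytesToJS(clamped, rgba)

	img := js.Global().Get("ImageData").New(clamped, width, height)
	ctx.Call("putImageData", img, 0, 0)
}
```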
Anyway, I thought that relying on WASM would make me implement a lot of image primitives that are natively accessible in JS (for example, image rotation).
The main difference is that you don’t need any client installation now.
You simply type the address of the remarkable in the browser to get the content.
This is a nice technical achievement, but I’m not sure I understand the difference between this and using a virtual whiteboard tool, e.g. FreeForm, Lucida Spark, Miro, or Google Jamboard, and sharing it live in a meeting.
As a project, neat work! But otherwise, it seems a rather self-centred tool and I don't think I would relish someone using it in a meeting with me. Feels like someone is making me watch them play with their toys and I would question the value. If it's just you drawing then something has gone wrong e.g. prepare and share material before a meeting.
I think it has value for being really simple and like a sheet of paper on a call.
I used an iPad mini for a long while on Zoom calls until I discovered something much better received - just use PowerPoint/Slides to draw boxes and text as needed together; the interface is familiar and people can pick up on it and move forward together.
This was one of the reasons why I went with one of the Boox devices (Max Lumi) in my case. It is Android, so adding something like this is even easier than working around their Linux distro.
And as they don't try to leverage proprietary formats, Syncthing works for syncing books and notes. And NetGuard for good measure, so it doesn't call home.
Doing less is the core value prop of the device. It is intentionally not a full Android experience. If that's what you want, there are other competing devices.
Not the OP, but I have similar feelings. I don’t want a web browser or the ability to check email - I just want decent two-way sync with other cloud providers.
The device doesn’t feel open to me because it’s difficult to move data out of their walled garden.
The context is that reMarkable has a sharing service that costs $2.99/month. Subscription software is generally noxious, but a subscription that is more or less necessary to use a device is especially so.
Wow, this is a very nice tool! Although I'm a ReMarkable2 user, I'm also curious whether this tool can be extended to support the newly launched Kindle Scribe.
I don't understand: how does this server get around NAT? Is it really suitable for overseas remote work, or is it only local, so that you have to stream your screen itself via a proper WebRTC system?
This is a great article about the process even if you don't have an RM2 - a great hack to solve your own problem. I have been achieving something similar on my home PC setup using tldraw [1], which is a live multiplayer infinite-canvas board service that I also join from a Surface laptop with a stylus. I share that tab with others in a call from my PC and dump in screenshots etc. that I can mark up from the Surface tablet, or anyone can join the tldraw session when it makes sense. Anyway, it's a web service rather than your own, you don't have the RM2's OCR and filing system or e-ink, etc., and there are a lot of other infinite-canvas solutions to choose from, but it's working and turned out to be simple enough for me to use more than once lol.
Are your notes, memos, and calendars that sensitive? What are you writing - passwords or secrets? C'mon now... let's be realistic. Opsec is one thing, but let's not be absurd...