Writing to the Framebuffer (seenaburns.com)
180 points by __sb__ on Apr 4, 2018 | 47 comments



Keep in mind that /dev/fb, on a modern system, isn't an actual framebuffer on your GPU. It's a land of make-believe, mostly supported to get the kernel console ("fbcon") working. For at least the last 10 years, the kernel mode-setting API (KMS) has been what actually displays buffers, and there's compatibility code in the KMS subsystem that sets up a user-space buffer [0] which is swapped in when fbcon activates, through a long chain of strange events that is hard to describe. This explains why you can't write to /dev/fb during an active X11 session and have it show up: it's a fake buffer that is only displayed when KMS isn't otherwise in use.

[0] https://github.com/torvalds/linux/blob/master/drivers/gpu/dr...


That is correct. One exception, perhaps, is embedded devices with displays, including E Ink tablets.

For instance, one thing I have been working on is a library that allows writing to, and partially refreshing, an E Ink display with low latency, enabling this[0].

I am curious what would happen if I got Xorg working on this and used an additional application to call the `refresh` ioctl so that the changes get displayed on the screen.

[0] https://gfycat.com/CornyHugeIndianRhinoceros


That's very awesome! What tablet and project is that?


Thanks. It is the reMarkable tablet, but the API is entirely undocumented. It took a fair bit of analysis to get it to this state, but now it is actually ready for people to build their own applications on top of it.

The GitHub project: https://www.github.com/canselcik/libremarkable


This is cool, you should post it as a “Show HN”. And don’t get discouraged if it doesn’t immediately garner attention. Post it again later if you only get an insignificant number of upvotes. Your project is front-page worthy, and if it doesn’t get there the first time then it’s just bad timing / bad luck (or a poorly worded title, that’s a possibility as well of course).

Post it until it gets to the front page, waiting a couple of days or more between each time you post it.


Thanks, I am very glad to hear that. I will be posting it soon with a more detailed write-up covering the reversing process as well. That should make it an even more interesting read.


I agree, this is super cool and deserves attention.


Thank you. :)


Thank you very much! I will check out both very soon. I'm interested in a lightweight, portable SSH-terminal E Ink device, and this combination might just work for me.


Oh yes, I've been looking for the same thing. I would like something I could connect a keyboard to and attach a tmux session to.

It would be a great work machine outdoors during the summer.


This is super nice, I would love to have a tablet like this in the future. I have been experimenting a little bit with my Kobo Glo HD, which also has a browser, but it's nowhere close to this.


Wow!


The reason sudo wasn't working is that it runs the command with root privileges, but redirecting to a file with > is done by the shell, which is running under your normal user-level privileges. If you want to redirect to a file that only root can write to, you can use the tee command, which writes anything passed to it via stdin to the file given as an argument.

For example:

    echo "foo" | sudo tee /path/to/file

or any of the other ways listed here: https://stackoverflow.com/questions/82256/how-do-i-use-sudo-...


tee also prints to standard output, so I'd suggest redirecting its stdout to /dev/null if you're going to produce a lot of output.


My first thought was to use the sh -c method.


Or, just use 'su' to become root, then write to the device file.


su isn't an option that's available to "regular" users. In particular, in an environment where sudo has been set up correctly, it may be precisely because you want regular users to be able to perform "some" powerful functions without giving them the keys to the system.


Or you can add your user to the `video` group.

EDIT: it says so in TFA. Oops!


> one horizontal line at a time, with one byte for Blue,Green,Red,Alpha(?) per pixel (seems like 24-bit true color).

You can (and should, if you are writing software not only for yourself) ask the kernel about the framebuffer layout, using the FBIOGET_FSCREENINFO and FBIOGET_VSCREENINFO ioctls.

https://www.kernel.org/doc/Documentation/fb/api.txt
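For instance, a minimal sketch of querying the layout, assuming /dev/fb0 (error handling omitted for brevity):

    /* Ask the kernel for the framebuffer layout instead of
       assuming 32bpp BGRA. */
    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stdio.h>
    #include <sys/ioctl.h>

    int main(void) {
        int fd = open("/dev/fb0", O_RDONLY);
        struct fb_var_screeninfo var;
        struct fb_fix_screeninfo fix;

        ioctl(fd, FBIOGET_VSCREENINFO, &var); /* resolution, bpp, channel offsets */
        ioctl(fd, FBIOGET_FSCREENINFO, &fix); /* line length (stride) in bytes */

        printf("%ux%u, %u bpp, stride %u bytes\n",
               var.xres, var.yres, var.bits_per_pixel, fix.line_length);
        printf("offsets: red %u, green %u, blue %u, transp %u\n",
               var.red.offset, var.green.offset,
               var.blue.offset, var.transp.offset);
        return 0;
    }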


There may be more than one simultaneous framebuffer per display at the hardware level.

GPUs can actually scan out to a display from multiple overlapping framebuffers in different resolutions and color formats at the same time. Alpha blending and rotation are often supported as well.

A bit like a very large mouse cursor. In a way, mouse cursors are also framebuffers. Or like hardware video layers in the nineties.
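On Linux you can enumerate these hardware layers through the KMS plane API. A minimal sketch, assuming libdrm and /dev/dri/card0 (link with -ldrm):

    /* List the hardware planes (overlays) a KMS device exposes. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int main(void) {
        int fd = open("/dev/dri/card0", O_RDWR);
        if (fd < 0)
            return 1;

        /* Without this cap, primary and cursor planes stay hidden. */
        drmSetClientCap(fd, DRM_CLIENT_CAP_UNIVERSAL_PLANES, 1);

        drmModePlaneResPtr res = drmModeGetPlaneResources(fd);
        if (!res)
            return 1;

        for (unsigned i = 0; i < res->count_planes; i++) {
            drmModePlanePtr p = drmModeGetPlane(fd, res->planes[i]);
            if (!p)
                continue;
            printf("plane %u: %u pixel formats\n",
                   p->plane_id, p->count_formats);
            drmModeFreePlane(p);
        }
        drmModeFreePlaneResources(res);
        return 0;
    }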


> GPUs can actually scan out to a display from multiple framebuffers in different resolutions and color formats at the same time.

This is also how early SLI worked: each GPU has a framebuffer, one holding the "even" scanlines and one holding the "odd" scanlines. Each GPU does its own rendering work to its own framebuffer, each rendering a vertically-squashed scene, with one scene offset by one pixel vertically. The master GPU then interleaves the two framebuffers together (pulling data from the slave GPU's framebuffer over the SLI link) when outputting a frame.


To add to that, "early SLI" stands for Scan-Line Interleave, not Scalable Link Interface as it does now. CRTs were some crazy shit; I'm glad I never had to deal with them.

If you ever wondered why some old YouTube video looks like crap and has lines that don't match up, interlacing is why.


Usually the artifacts are not the result of using an interlaced source but of re-encoding done wrong. I remember reading the mencoder manual about "pulldown" and other re-encoding options.


Eh, it's a handy abstraction. There's also no uniform memory space (multiple RAM chips, virtual memory, swap), CPUs absolutely are not the "run one opcode after another" thing that programmers pretend, and by the time you stream HTML from a server to client it's been sliced into a thousand pieces, compressed, encrypted, reordered...

We like abstractions.


Windows used a separate buffer for the mouse cursor and the video overlay. That seems like a good idea that helps achieve smooth GUI performance even on a slow machine: you don't have to calculate what part of the video frame is covered by other windows, and you don't have to update pixels when moving the mouse cursor.


Yup, on Android this is handled by a service called SurfaceFlinger, which you can easily inspect from the shell (adb shell dumpsys SurfaceFlinger).

Not sure if it's still the case, but back in the ICS days your status bar, wallpaper, and launcher/app were all separate surfaces that were scanned out from different framebuffers.


Worth noting that the framebuffer works really well on the Raspberry Pi; you can switch between console and X, all while using the framebuffer. It's even happy switching between bit depths (using 'fbset') at the same time.

If you want a really fast framebuffer system, get something like an old Nvidia 7600GT and use 'nvidiafb'; while this won't work with X at the same time, it's blazingly fast and gives brilliant console text output (modern GPUs are super-slow by comparison).


Is /dev/fb writing directly to the framebuffer, or is there translation going on kernel-side? Is the card in VESA / BIOS mode when on a virtual console without X?


I think the kernel docs[0] indicate that it's a kernel-managed abstraction?

[0] https://www.kernel.org/doc/Documentation/fb/framebuffer.txt


VESA / EFI framebuffers can be used, but usually only as a last resort. On devices with native KMS drivers, there is a compatibility path for /dev/fb. This is why your kernel console can run at a resolution higher than 800x600.


I have a feeling that the framebuffer must be something very slow. When I used Windows or Linux without proprietary drivers for the video card (so they used some generic VGA or VESA driver), the GUI would be slower; for example, scrolling in a window was laggy, as was dragging a window around the screen. Why could that be? Do video card vendors intentionally slow down video memory access, or is VGA or VESA a poorly designed standard?

I was not using Compiz or anything like that, so it cannot be explained by a lack of 3D acceleration.

Also, with a generic VGA driver you cannot get a 100 Hz refresh rate at high resolutions.


In DOS, in real mode, the framebuffer was just mapped directly into memory. If you wanted to put something on the screen, you wrote to that memory address (often segment A000). For text modes, you would write ASCII characters straight into the buffer. For bitmap modes, your bytes would be interpreted as colors (or shades of gray, depending on the mode). It was very exciting when I figured out how to use extended memory to draw high-resolution (640x480) graphics.
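From memory, plotting a pixel looked something like this; a sketch assuming a 16-bit DOS compiler such as Turbo C (details varied by compiler):

    /* Set VGA mode 13h (320x200, 256 colors) and plot one pixel
       via the framebuffer at A000:0000. */
    #include <conio.h>
    #include <dos.h>

    int main(void) {
        unsigned char far *vga = (unsigned char far *)MK_FP(0xA000, 0);
        union REGS r;

        r.x.ax = 0x0013;          /* BIOS int 10h: set video mode 13h */
        int86(0x10, &r, &r);

        vga[100 * 320 + 160] = 4; /* pixel at (160,100), palette index 4 */

        getch();                  /* wait for a keypress */

        r.x.ax = 0x0003;          /* restore 80x25 text mode */
        int86(0x10, &r, &r);
        return 0;
    }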


Brings back memories. When I started coding graphical applications, writing directly to the graphics card's memory was the only way I knew. As I didn't know any better, I had to write things like line, circle, and sprite drawing routines on my own. It was super slow, but a very fun way to get into coding.

Maybe it's just me but I have the impression that drawing arbitrary things on screen has become harder since then.


I am trying to run cat /dev/graphics/fb0 as the root user on my Android (Oreo) phone, but it shows a 'No such device' error. Could anyone guide me on how to make it work, so that I can take screenshots and do some graphics work directly on my phone?


Note: I don't know Linux...

Is it possible to get the GPU to copy kernel / system memory to the framebuffer, then read that back with a user-space app? I.e., can we convince the GPU or framebuffer to give us the contents of protected system memory?


On the Nintendo Wii U we were able to do this. Check out "GX2 unchecked memory read/write": http://wiiubrew.org/wiki/Wii_U_System_Flaws. Same issue with "gspwn" on the Nintendo 3DS: https://www.3dbrew.org/wiki/3DS_System_Flaws


In the absence of an IOMMU configured to block GPU accesses to non-GPU memory regions, yes; however, drivers should block any such request.


Well, if you're root... and configure the IOMMU in such a way, etc.

Otherwise, it simply depends on the system's hardware architecture and drivers.

If the GPU is in a position to read and write physical memory (bus-master DMA), and there's no hardware-level protection like an IOMMU (or it's not properly configured), the operating system has very little say in the matter, be it Linux or anything else.


I have this C project based on CodingTrain videos that also ships an fbdev_runner: https://github.com/mar77i/cgbp


Given how Wayland works, and how similar it is to the KMS fbcon abstraction, it might be interesting to discuss that as well.


I wonder if there's a way to take over another VT so you don't have the console text over your images.


You can do this without taking over another VT by using ioctl(KDSETMODE, KD_GRAPHICS). This tells fbcon in the kernel to stop drawing console text to /dev/fb.
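A minimal sketch (the sleep stands in for your actual drawing):

    /* Put the controlling VT into graphics mode so fbcon stops drawing
       console text over /dev/fb, then restore text mode on exit. */
    #include <fcntl.h>
    #include <linux/kd.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void) {
        int tty = open("/dev/tty", O_RDWR);
        if (tty < 0)
            return 1;

        ioctl(tty, KDSETMODE, KD_GRAPHICS); /* suppress console text and cursor */

        sleep(5); /* draw to /dev/fb0 here instead */

        ioctl(tty, KDSETMODE, KD_TEXT);     /* give the console back */
        close(tty);
        return 0;
    }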


You need to read Unix and Linux Stack Exchange.

* https://unix.stackexchange.com/a/178807/5132


You can change the active VT with ioctl()s.

See how chvt does it.
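Roughly, a sketch of the same ioctls chvt uses (needs sufficient privileges on /dev/tty0; the target VT number is arbitrary here):

    /* Switch the active virtual terminal to VT 3, as chvt(1) does. */
    #include <fcntl.h>
    #include <linux/vt.h>
    #include <sys/ioctl.h>

    int main(void) {
        int fd = open("/dev/tty0", O_RDWR);
        if (fd < 0)
            return 1;

        ioctl(fd, VT_ACTIVATE, 3);   /* request the switch to VT 3 */
        ioctl(fd, VT_WAITACTIVE, 3); /* block until VT 3 is active */
        return 0;
    }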


I do it with tput civis and by redirecting the output to /dev/null.


[flagged]


Would you please read https://news.ycombinator.com/newsguidelines.html and follow the rules when commenting here?


No need to be bitter about this; it's not as if knowing how the graphics stack works is especially useful for most applications. For example, this was not in my CS curriculum at all.



