Hacker News
Programming the Linux Framebuffer (cmcenroe.me)
164 points by mmphosis 10 months ago | 24 comments



We had trouble getting Xorg to work reliably for all customers on a live CD. After noticing that the FreeBSD framebuffer always loaded and displayed info OK, and could even support different resolutions, we wrote a solution that lets the framebuffer serve as the Xorg output device: https://github.com/neosmart/xf86-video-scfb

Now all X-enabled windowing toolkits can use the rock-solid framebuffer without modifications.

Linux's Xorg distribution already ships with a framebuffer driver (or maybe even two different ones), but we didn't get the same "works everywhere" experience with it that we do with our custom solution (based on NetBSD's previous work), perhaps due to framebuffer bugs on Linux, or maybe something else entirely.

The drawback is that the higher the screen resolution, the slower refreshes to the screen get. (We weren't happy with 800x600 due to LCD scaling blur from horrible on-controller scalers, so instead we decode EDID data from the monitor/adapter to find the native resolution.) At high enough resolutions there is a clear 10-100ms delay in updates.


I'd say it's better to use DRM: it's more flexible, and you'll end up with a framebuffer in the end anyway. It allows you to use different pixmap formats, layers, double buffering, etc.

And even if you don't want a GUI framework and want to write your app differently, you can start with cairo and pangocairo or something similar, so that you don't have to care about low-level drawing primitives.


Doesn't DRM take over the whole display, requiring you then to manage delegating buffers to clients and essentially recreating Wayland?

Also, the whole clock is basically 100 lines of fairly straightforward code in all its glory. How much would a DRM/Pango/Cairo solution weigh?


No one wants window management in the kernel, so yeah, the assumption is that there is one master that decides what buffer(s) are scanned out. That said, DRM is the only way to use the hardware compositors found on many mobile chips, which allow you to rotate, resize and blend together multiple buffers on the fly to produce the final display.

A minimal DRM example isn't much more, by the way:

https://github.com/yuq/gfx/blob/master/drm/main.c

And this does a lot more of the stuff you want: which connectors (think HDMI, or LVDS internally) my hardware has, which of those have displays connected, what modes those displays support. It can also very easily be extended to do 3D rendering.

Beyond the HW planes, there is lots of stuff DRM enables that you simply can't do with /dev/fb but absolutely want, like true VSync synchronized rendering.


My roughly equivalent double-buffered display code using DRM is about 200 lines of C, but that includes some abstraction and a lot of error checking/logging.

I'm not sure why you'd need to replicate anything as complicated as Wayland. You can use the DRM API just like you'd use /dev/fb0; it just has more options/features.

As for cairo, obviously it's a heavier solution, but also more featureful, optimized, and output-neutral.


I guess the main question is can you simply overlay stuff on top of Linux console (presumably fbcon) with DRM? That is kinda the key thing here, and the examples I've seen have always just done full-screen stuff.


Show us the code. :)


As a non-English speaker, my obvious question is how does the PSF2 format encode fonts with more than 256 glyphs? How would one go about implementing Unicode text rendering? Combine glyphs from multiple 8-bit codepages to get the required characters?

Does the Linux system console support Unicode in the first place?

edit: the linked header file[1] explains that a single PSF2 file can actually encode any number of glyphs (not just 256), and additional meta-information after the bitmaps spells out which Unicode code points or combining character sequences each bitmap corresponds to.

[1]: https://github.com/legionus/kbd/blob/master/src/psf.h


The Linux console supports up to 512 glyphs and has only minimal Unicode support.

There are terminal emulators, such as fbterm or kmscon, that use the framebuffer and might have better Unicode support.


kmscon looks pretty neat.

Anyone have any experience with it?

Was it easy to set up? Does it make for a comfortable environment? Is it relatively easy to write your own applications that output on top of the framebuffer, e.g. if you are developing a game with OpenGL and want to both develop in this environment and test your game from there as well?


Not sure if this answers your question, but I was already running a Matrix screensaver in a framebuffer on Linux ~16 years ago. Per the linked post I believe it was cmatrix [1]. There's a screenshot on that page; if you look carefully you can spot some odd symbols.

One could also use MPlayer + libcaca + framebuffer.

[1] https://inconsolation.wordpress.com/2013/09/23/cmatrix-none-...


> Does the Linux system console support Unicode in the first place?

I have various files in Chinese and Japanese, and they display correctly for me in Linux, even under the console.


On Gentoo the font path is /usr/share/consolefonts/Lat2-Terminus16.psfu.gz

About the latency thing... having recently switched back from OS X to Gentoo as my laptop's native desktop, Linux is great. Latency on all operations, even in X11, is noticeably lower and more pleasant than typing in iTerm2 on a top-end MacBook Pro, and root on ZFS is awesome :)

Primary issues with the console are UTF-8 support and irritations around mode setting, font resizing, and video chip (re-)initialization when booting and switching to/from X11, since my laptop (Dell XPS 15 9560) has two graphics chips and they don't play perfectly nicely together under many Linux kernel configurations. Angband latency, though, is awesome :)


Yes, X11 is very quick when you don't have compositing and special-effects layers between the client window and the display. :) For best results use an old-school, 90s style WM.

Enjoy it while it lasts; The Powers That Be (mainly Red Hat) have decided that Wayland and compositing are the future.


Small typo:

    --- before      2018-01-31 17:12:13.626560688 -0800
    +++ after       2018-01-31 17:12:17.494580179 -0800
    @@ -1 +1 @@
    -uint32_t buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fb, 0);
    +uint32_t *buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fb, 0);


/dev/fb is a legacy interface that shouldn't be used.


What's the new interface to the framebuffer? As someone who's actually used /dev/fb (on Android devices), it's pretty damn easy, which makes it a great baseline for getting some graphics on the screen.


The first thing, certainly on Android, is to forget the idea that there is one framebuffer ;)

The current interface is DRM (drivers/gpu/drm) and there specifically, atomic modesetting:

https://lwn.net/Articles/653071/


Current GPU implementations often support multiple arbitrary "framebuffers" layered on top of each other, similar to hardware sprites from the 8/16-bit era, or to the more recent hardware "mouse cursor", except that they can cover the whole screen.

So you can have multiple framebuffers at the same time with arbitrary color formats (like 8-bit grayscale, 16/32-bit (A)RGB, YUV2, etc.). Bilinear scaling, alpha blending, and mirroring/90/180/270-degree rotation are often supported as well.

It's important to understand that these framebuffers are not composited anywhere else; they are actually scanned out to the display device in real time, on the fly.

The traditional framebuffer can of course be emulated by setting a single layer to cover the full screen without alpha blending.


Regarding Android, I am surprised it even worked, unless the OP had rooted their device or was using an old version.

Starting with Android N, Google has been locking down what is possible to do via the NDK and apps can no longer directly read outside their own sandboxed filesystem, so no /dev.


This was back in 2011. We modified the init script to stop the zygote process from launching, then used our own C code running as a normal Linux process to work with /dev/fb. So yeah it was rooted and it was an old version, but we were mostly outside of the Android ecosystem entirely.


I see, yep that way it surely worked. Thanks for jumping in.


Sometimes the legacy interfaces are the ones most likely to be present.

For a single-screen embedded application it might be more than enough, and you don't need to load a WM, etc.


nice article this. :) thanks! been wanting to start on this for a little while but didn't have time to research the basics; this sets things up just nicely for what i wanted to experiment with!



