I wrote a basic device driver for a platform I worked on. It's since been upstreamed and gotten more eyeballs, but it's still relatively simple. https://github.com/torvalds/linux/blob/master/drivers/gpu/dr...
For a few months I've been kicking around the idea of making a mario clone using this. You boot the machine and you just get the game.
EDIT: for 2D graphics, you can blit sprites in with the CPU. GPUs are mainly useful for 3D graphics, physics and raytracing. Also, I made a mario clone in Unity and Unity felt like overkill.
GPUs are quite usable as sprite engines. A Wayland compositor is essentially blitting client-controlled sprites as "surfaces", and the final render is pixel perfect (other than perhaps during animations). There's no reason whatsoever to use software rendering if hardware acceleration is available.
The really obvious one is that you might like to have a program that will work with or without hardware acceleration present.
I don't know why this works for OP, but my understanding is that the original assertion is true. Writing/Reading directly from fb0 does not work on my machine.
> sudo adduser seena video
$ sudo hexdump -C /dev/fb0
00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
0000e000 aa aa aa 00 aa aa aa 00 aa aa aa 00 aa aa aa 00 |................|
0000e020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
0000f000 aa aa aa 00 aa aa aa 00 aa aa aa 00 aa aa aa 00 |................|
0000f020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
Note: I was part of a small group that worked on a framebuffer utility library around 2002ish, but haven't used it much since apart from configuring embedded devices.
Makes me wonder why more people aren't abusing this. I can see the gold, silver and platinum level open source sponsors shoving some big logos down your throat by writing directly to the framebuffer during your next yarn add.
(I realize it is more nuanced than that, since it requires elevated privileges to write to the device ... but hey how many sudo curl bash scripts have you seen out there in the wild? time to pwn the framebuffer)
> writing to the framebuffer in an inefficient way
Fun fact: you can memory-map /dev/fb0 and make changes to it directly.
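Here's a rough sketch of that in C, assuming a 32bpp XRGB framebuffer and that you have permission on /dev/fb0 (real code should check var.bits_per_pixel and the actual pixel format before writing anything):

    /* Sketch: map /dev/fb0 and fill it with a solid grey. Assumes 32bpp. */
    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/fb0", O_RDWR);
        if (fd < 0) { perror("open /dev/fb0"); return 1; }

        struct fb_var_screeninfo var;
        struct fb_fix_screeninfo fix;
        ioctl(fd, FBIOGET_VSCREENINFO, &var);  /* resolution, bits per pixel */
        ioctl(fd, FBIOGET_FSCREENINFO, &fix);  /* line_length = stride in bytes */

        size_t len = (size_t)fix.line_length * var.yres;
        uint8_t *fb = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (fb == MAP_FAILED) { perror("mmap"); return 1; }

        for (uint32_t y = 0; y < var.yres; y++)
            for (uint32_t x = 0; x < var.xres; x++)
                *(uint32_t *)(fb + y * fix.line_length + x * 4) = 0x00aaaaaa;

        munmap(fb, len);
        close(fd);
        return 0;
    }

The FBIOGET_FSCREENINFO call matters because the stride (fix.line_length) is not always xres * 4.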
Decoding video frames is much more expensive than rendering them. The source of the slowdown could be software decoding of the video frames, where the X video player might have been using dedicated hardware for this, i.e. the GPU or Intel graphics, depending on the setup.
So it's mostly a fun gimmick/curiosity. I wrote a realtime 3D rasterizer that renders to fbdev a while back. It was a fun project, but useless.
You can't write to fbdev because that's not what X11 is using to display buffers to the screen; it uses the newer kernel mode-setting (KMS) API. It's only when you switch away to a VT that the "KMS master" drops its session, the kernel switches back to fbcon, and you can see the "raw fbdev" buffer again, which is usually a fake framebuffer. If you then write to the fbdev framebuffer, you'll see your changes, but fbcon will come in and overwrite them with the kernel console, unless you ioctl(KDSETMODE, KD_GRAPHICS); to stop fbcon from drawing.
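For reference, that ioctl is only a couple of lines. A minimal sketch, assuming you run it on the VT itself rather than from a terminal under X:

    /* Tell the kernel console to stop drawing so fbcon won't overwrite
     * whatever you put in the framebuffer. Assumes the controlling tty is a VT. */
    #include <fcntl.h>
    #include <linux/kd.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        int tty = open("/dev/tty", O_RDWR);
        if (tty < 0) { perror("open /dev/tty"); return 1; }

        if (ioctl(tty, KDSETMODE, KD_GRAPHICS) < 0) { perror("KDSETMODE"); return 1; }

        /* ... draw to /dev/fb0 here ... */
        sleep(5);

        ioctl(tty, KDSETMODE, KD_TEXT);  /* give the console back, or the VT stays blank */
        close(tty);
        return 0;
    }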
Download an old qemu.
Load it into jslinux.
Then execute the following commands in xterm:
gzip -dc qemu-0.9.1.tar.gz | tar xf - -C/
mkdir /dev/shm && mount -t tmpfs tmpfs /dev/shm
SDL_NOMOUSE=1 qemu -k en-us -m 4 pi.vfd
Necessary condition: Xorg must work with the fbdev driver.
This means when I get back to my host, I have to log in again and all my windows/apps are gone. There seems to be no way to save/restore the state of X11 (I know apps themselves wouldn't save state). Is there something fundamental about how display managers tie into the GPU that makes it impossible? Is it possible to configure X11 and display managers to use software rendering only?
There's a good explanation of this on StackOverflow:
I recommend using fbcat for taking screenshots: https://github.com/jwilk/fbcat
When you set up a swap chain for GUI rendering, for example, you're just doing a glorified malloc call. It's not coupled to a hardware port. It's not really even all that special. And then, like any other color buffer, you can use it in other ways, writing it into yet other color buffers.
All that said, the display output pipeline itself is still more sophisticated than this. Not necessarily by all that much, but it is. Particularly in that there's not necessarily "one" buffer that drives the monitor. There's this idea that composition can happen "on the fly" during scan-out by dedicated hardware, via multiple planes. How many planes can be used depends on the hardware, but for example most Android phones have 4-8 hardware planes. Desktop hardware tends to have fewer such overlay planes (the power efficiency gains don't tend to matter much when you have infinite power), but modern Intel integrated GPUs I believe have 3 planes. This is particularly useful as one of those planes is for video. This means that when you're watching a video, if things are working properly, then the GPU can be basically powered off entirely.
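If you're curious how many planes your own hardware exposes, libdrm can list them. A rough sketch, assuming the libdrm headers and a /dev/dri/card0 node (build with something like gcc planes.c $(pkg-config --cflags --libs libdrm)):

    /* List the KMS planes the hardware exposes for scan-out composition.
     * Assumes /dev/dri/card0 exists and is openable. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR);
        if (fd < 0) { perror("open /dev/dri/card0"); return 1; }

        /* Without this cap the kernel hides primary/cursor planes and only
         * reports overlay planes. */
        drmSetClientCap(fd, DRM_CLIENT_CAP_UNIVERSAL_PLANES, 1);

        drmModePlaneResPtr res = drmModeGetPlaneResources(fd);
        if (!res) { perror("drmModeGetPlaneResources"); return 1; }

        printf("%u planes\n", res->count_planes);
        for (uint32_t i = 0; i < res->count_planes; i++) {
            drmModePlanePtr p = drmModeGetPlane(fd, res->planes[i]);
            if (!p) continue;
            printf("plane %u: %u supported formats\n", p->plane_id, p->count_formats);
            drmModeFreePlane(p);
        }
        drmModeFreePlaneResources(res);
        return 0;
    }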
Also, I would personally say that I feel modern APIs are getting a bit too complex. There's a lot of surface area and platform-specific details that you wish were portable but aren't and you unfortunately have to worry about. The amount of boilerplate code you need just to do a Hello World on something like Vulkan seems daunting. It's kind of nice to be able to build your own framework that you can completely wrap your head around and tailor to your own needs in every way. Then, all you need to worry about in terms of interfacing is how to display pixels at the other end.
sudo cat /dev/urandom > /dev/fb0
This should work just fine (with a /dev/null redirection so your terminal doesn't get garbled):
cat /dev/urandom | sudo tee /dev/fb0 > /dev/null
echo "cat /dev/urandom > /dev/fb0" | sudo sh
sudo sh -c "cat /dev/urandom > /dev/fb0"
It even tells you that:
> -bash: /dev/fb0: Permission denied
From https://stackoverflow.com/a/82278/5208540 , here's how to "redirect with sudo":
Use "| sudo tee /dev/fb0"
I recommend instead using "|sudo dd of=/dev/fb0", or, if you have moreutils installed, "|sudo sponge /dev/fb0"
The "new standard" has existed since 1976, which had boldface, underline, italics, faint, underline, reverse, concealed, strikethrough, and blinking; which gained things like overline by the 1980s; and which has been widely adopted for over 40 years.
And the Linux built-in terminal emulator is not emulating a VT100. The idea that terminal emulators emulate VT100s is generally wrong. Almost always they are doing what VT100s never did or could do, and conversely often not doing what DEC VTs did (such as supporting Tektronix, VT52, printers, windowing, multiple display pages, sixel, locators, ...). The Linux built-in terminal emulator has several significant differences from a VT10x, not the least of which is that it is not monochrome. The Linux built-in terminal emulator's closest relative is, rather, the SCO Console.
(Interestingly, SCO's manual relates that the SCO Console actually did have underline et al., when used on MDA hardware.)
The real problem was that boldface and italics quadruple the kernel font memory requirements (normal, bold, italic, and bold italic variants of every glyph), in a system that is still trying quite hard to limit itself to 512 glyphs. It's surmountable, but in many respects pointless. The headlined article is quite apposite, as there is quite a range of terminal emulators that run in user space and render using the frame buffer.
In user space one can be a lot more free with memory, loading multiple fonts simultaneously for example, and handling the whole of Unicode. And of course one can implement a whole bunch of the ECMA-48:1976 attributes and colours, including some of the more recent extensions such as the kitty underlining variants, AIXterm colours, XTerm colours, and ITU T.416 colours (where "more recent" for the latter three translates to "from the 1990s").
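For anyone playing along at home, these are plain ECMA-48/ITU T.416 SGR sequences. A toy example; whether each attribute actually renders depends entirely on the terminal emulator at the other end:

    /* Print a few ECMA-48 SGR attributes plus an ITU T.416 direct-colour one. */
    #include <stdio.h>

    int main(void)
    {
        printf("\x1b[1mbold\x1b[0m ");
        printf("\x1b[3mitalic\x1b[0m ");
        printf("\x1b[4munderline\x1b[0m ");
        printf("\x1b[9mstrikethrough\x1b[0m ");
        /* ITU T.416 direct colour: SGR 38;2;<r>;<g>;<b> */
        printf("\x1b[38;2;255;128;0m24-bit orange\x1b[0m\n");
        return 0;
    }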
That'd be awesome, but it'd need to be green, flash bright when drawing and reverse-flashing the screen on erase. ;-)
> The real problem was that boldface and italics quadruple the kernel font memory requirements,
In ancient times we OR'ed a character with itself, shifted one pixel horizontally. Italics were usually done by shifting the top part of the cell to the right and the bottom to the left. Some terminals could shift things by half a pixel, which made for great screen fonts (I did that on Apple IIs).
In the same way, overlines, underlines and strikethrough are trivial to generate without adding any glyphs to the font.
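A little sketch of that shift-and-OR trick, using a made-up 8x8 glyph (the bitmap is just an example, not from any real font):

    /* "Bold" = glyph OR'ed with itself shifted one pixel right;
     * an underline is just a solid row appended below. No extra glyphs needed. */
    #include <stdint.h>
    #include <stdio.h>

    static void render_row(uint8_t row)
    {
        for (int bit = 7; bit >= 0; bit--)
            putchar((row >> bit) & 1 ? '#' : '.');
        putchar('\n');
    }

    int main(void)
    {
        uint8_t glyph[8] = { 0x18, 0x24, 0x42, 0x42, 0x7e, 0x42, 0x42, 0x00 };

        for (int y = 0; y < 8; y++)
            render_row(glyph[y] | (glyph[y] >> 1));  /* emboldened row */

        render_row(0xff);  /* underline row comes for free */
        return 0;
    }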
But nowadays people expect to provide actual font files for weight and slant changes, as boldface is not actually overprinting and italics are not oblique. This starts to matter at cell sizes bigger than 8 by 8, and it isn't an 8 by 8 world nowadays. You'll notice that my terminal emulator reads different font files for bold/faint/italic combinations if they are given. (On one test machine I am currently feeding it a combination of UbuntuMono-i and UbuntuMono-n, with unscii-16-mini and Unifont 7 as fallbacks.)
It is not alone. The FreeBSD kernel's newer built-in terminal emulator (vt) loads two fonts, one for normal (medium) weight and one for boldface.
I would prefer better, specifically created font styles (as the good terminals had), but I'd be totally happy with whatever gave me support for character styles. I swear that if I could figure out how to add them, I would.
That and smooth scrolling. ;-)
I'd need some guidance (I have no idea of where to start), but I'd totally do it.