(e.g. if you somehow managed to introduce a constant DC-offset of +0.05, with the shown step size of 0.2, these tests would probably never pick it up, modulo rounding.)
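To make that concrete, here's a minimal sketch (the 0.2 step and +0.05 offset come from the comment above; the quantizer function is an illustrative assumption, not the post's actual code):

```python
# Illustrative sketch: a +0.05 DC offset mostly disappears under a
# 0.2 quantization step; only samples near a bucket boundary move.
STEP = 0.2
OFFSET = 0.05

def quantize(x, step=STEP):
    """Snap a sample to the nearest multiple of `step`."""
    return round(x / step) * step

# A sample well inside a bucket: the offset is invisible after quantizing.
assert quantize(0.0) == quantize(0.0 + OFFSET)

# A sample near a bucket boundary: the offset pushes it into the next
# bucket -- this is the "modulo rounding" caveat.
assert quantize(0.28) != quantize(0.28 + OFFSET)
print("offset hidden in-bucket, visible only at boundaries")
```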
That said, these tests are great for asserting that specific functionality does broadly what it says on the tin, and making it easy to understand why not if they fail. We'll likely start using this technique at Fourier Audio (shameless plug) as a more observable functionality smoke test to augment finer-grained analytic tests that assert properties of the output waveform samples directly.
If the precise values of these floats are important in your domain (which they very well may be), a combination of approaches would probably be good!
Would love to hear how well this approach works for you guys. Keep me updated :)
Obviously, any time you're working with floating-point sample data, the precise values will almost never be bit-accurate against what your model predicts (sometimes even when that model is a previous run of the same system with the same inputs, as in this case); it's about defining an acceptable deviation. I guess what I'm saying is that for audio software, a peak-to-peak error of 0.1 equates to a signal at -20 dBFS (ref. full scale = 1.0), which is of course quite a large amount of error for an audio signal, so perhaps using higher-resolution graphs would be a good idea.
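For reference, that -20 dBFS figure falls straight out of the decibel formula; a minimal sketch (the function name is mine):

```python
import math

def amplitude_to_dbfs(a, ref=1.0):
    """Convert a linear amplitude to dBFS relative to full scale `ref`."""
    return 20 * math.log10(a / ref)

# An error level of 0.1 relative to full scale at 1.0:
print(amplitude_to_dbfs(0.1))  # -20 dB
```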
(Has anyone made a tool to diff sixels yet? /s)
For this, just thinking about sound, I wonder if you could invert the reference wave form and add it to the test to see how well it cancels? Then instead of just knowing there was a diff, you could get measurements of the degree of the diff.
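That cancellation idea is essentially a "null test". A minimal sketch, assuming two equal-length sample buffers (the test signal, fake error, and thresholds below are illustrative, not from the post):

```python
# Null test sketch: mix the inverted reference into the output under
# test; the residual's level quantifies how different they are.
import math

def null_test(reference, test):
    """Return (peak, RMS) of the residual (test - reference) in dBFS."""
    residual = [t - r for r, t in zip(reference, test)]
    peak = max(abs(x) for x in residual)
    rms = math.sqrt(sum(x * x for x in residual) / len(residual))

    def to_db(a):
        return 20 * math.log10(a) if a > 0 else float("-inf")

    return to_db(peak), to_db(rms)

# Illustrative 440 Hz reference and an output with a small fake error.
ref = [math.sin(2 * math.pi * 440 * n / 48000) for n in range(4800)]
out = [s + 0.001 * ((n % 7) - 3) for n, s in enumerate(ref)]

peak_db, rms_db = null_test(ref, out)
print(f"residual peak {peak_db:.1f} dBFS, RMS {rms_db:.1f} dBFS")
```

A perfect cancellation comes back as -inf dBFS; anything above your chosen noise floor is a measured degree of diff rather than a bare pass/fail.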
4.9600227 0.094451904297 -0.014831542969
4.9600454 0.089172363281 -0.0092468261719
4.960068 0.087493896484 -0.0065612792969
4.9600907 0.090179443359 -0.0028686523438
4.9601134 0.093963623047 0.0060729980469
4.9601361 0.095367431641 0.020538330078
4.9601587 0.094299316406 0.035186767578
4.9601814 0.09228515625 0.045013427734
4.9602041 0.089691162109 0.051422119141
4.9602268 0.086059570312 0.058929443359
This was generated using `sox somefile.wav somefile.dat`.
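For anyone wanting to consume that output programmatically: sox's .dat format is plain text, with `;`-prefixed header lines followed by a time column and one amplitude column per channel (as in the excerpt above). A minimal parser sketch:

```python
# Sketch of parsing sox's .dat text output into time and channel arrays.
def parse_sox_dat(lines):
    times, channels = [], []
    for line in lines:
        line = line.strip()
        if not line or line.startswith(";"):
            continue  # skip header/comment lines like "; Sample Rate 44100"
        fields = [float(f) for f in line.split()]
        times.append(fields[0])
        channels.append(fields[1:])
    return times, channels

sample = """\
; Sample Rate 44100
; Channels 2
4.9600227 0.094451904297 -0.014831542969
4.9600454 0.089172363281 -0.0092468261719
"""
times, chans = parse_sox_dat(sample.splitlines())
print(times[0], chans[0])
```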
- The quantization of the graphs is a feature to add some tolerance to the tests. I admit this is a mixed blessing.
- This is a lot more opaque to someone looking at a text file of the test output than what is described in the post.
I agree though that you probably want to augment this with some form of assertion about noise level to check the high frequency smaller components.
--pb, CTO of Fourier Audio Ltd.
But it's 2021, and not only is this not possible, there is not even a path forward to a world where this would be possible. It's just not an option. Nobody is working on this, nobody is trying to make this happen. We're just sitting here with our text terminals, and we can't even for a second imagine that there could be anything else.
It's sad, is what it is.
I can't comment on that directly, but I will say, it's pretty damn cool to see GnuPlot generating output right into one's terminal. lsix is also pretty handy as well.
But yeah, I agree, I'm not a fan of all the work that has gone into "terminal graphics" based on Unicode. It's a dead end, as was clear to DEC even back in '87 (and that's setting aside that the VT220 had its own drawing capabilities, though they were more limited). Maybe sixel isn't the best possible way of handling this, but it does have the benefit of 34 years of backwards compatibility, and with the right software, you can already use it _now_.
0 - https://en.wikipedia.org/wiki/Sixel
1 - https://saitoha.github.io/libsixel/
2 - https://github.com/csdvrx/sixel-tmux
3 - https://news.ycombinator.com/item?id=28756701
4 - https://github.com/csdvrx/sixel-tmux/blob/main/RANTS.md
5 - https://github.com/hackerb9/lsix
6 - https://en.wikipedia.org/wiki/VT220
If you have any doubt, look no further than this thread: the sixel format is attacked not for any technical reasons, but for its age, RIGHT HERE ON HN:
>> "That's a protocol that's a good forty years old, and even that is not supported. And I can see why, why on earth would you want to be adding support for that in 2021? What a ridiculous state of affairs."
What's ridiculous is, with so many examples and quotes, some people still think I must be "emotional" (I had a long discussion here... https://news.ycombinator.com/item?id=28761043 ) or that a few million colors is not sufficient for the terminal (!)
There is none so blind as those who will not see...
When implementing a program that outputs sixels, you are better off looking elsewhere. SDL1.2-SIXEL is a good choice in general, if you are writing C or don’t mind using the C bindings for your preferred language.
I'm not aware of any text editors supporting sixels, which could make preparing the tests a challenge. You could certainly imagine a text editor supporting them, but I personally don't know of one that does.
I will concede that for your specific use case, an off the shelf ASCII plotting library probably involves less custom tooling.
Sixels will work: they are fast enough to allow YouTube video playback!
The problem is NOT THE FORMAT, the problem is the lack of tooling: links and w3m are among the rare text browsers that can display images in the console.
It's just a matter of the browser sending the image to the terminal in some format it can understand, but if that hasn't been thought about as a possibility, it's going to be far more complicated than just adding a new format: you will have to solve the text-reflow issues (e.g. how do you size the placeholder when it's expressed in characters?) on top of the picture-display issues.
Said differently, it would be easier to get a console IDE that supports graphics if any format whatsoever (sixel, kitty...) were broadly supported; we could then argue about the ideal format.
Arguing about the ideal format BEFORE letting the ecosystem grow using whatever solution there is only results in a negative loop.
It's as if a startup argued about the ideal technology stack before even trying to find product-market fit!
Personally, I do not care much about sixels, kitty or iterm format - all I want is to see some kind of support for a format that's popular enough for tools using it to emerge.
Yes, it would be better if that supported format was the option that had the greatest chance of succeeding, but right now, that is a very remote concern: first we need tools, then if in the worst case they are for a "bad" format, we can write transcoders to whatever format people prefer!
Right now, there is rarely any "input" to transcode (how many console tools support say iTerm format?), so we have a much bigger bootstrapping problem.
> an off the shelf ASCII plotting library probably involves less custom tooling
With a terminal like msys2 or xterm, no custom tooling is required: just use the regular gnuplot after doing the export for the desired resolution, font, and font size.
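For example (a sketch; the `sixelgd` terminal name depends on how your gnuplot was built — check `set terminal` for the available list — and the column numbers assume a sox-style time/left/right .dat file):

```gnuplot
set terminal sixelgd size 800,300
set xlabel "time (s)"
set ylabel "amplitude"
plot "somefile.dat" using 1:2 with lines title "left", \
     "somefile.dat" using 1:3 with lines title "right"
```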
gnuplot is far more standard than plotting libraries that often require special Unicode fonts on top of requiring you to use their specific format.
34 years old, actually. I guess we can go ahead and deprecate the x86 instruction set, tcp/ip, ASCII, C, tar, and many other tools and standards that are old.
> and even that is not supported.
xterm supports VT340 emulation. I use this semi-regularly. I believe mintty also supports sixels, plus a handful of others. The libsixel website has a full list.
> And I can see why, why on earth would you want to be adding support for that in 2021?
You might want to read your own post ( https://news.ycombinator.com/item?id=28856005 ).
What’s your great idea as opposed to sixels?
With graphics being everywhere in 2021, I wouldn't call this situation "sad"; I'd think a lot more critically about why it is the way it is.
To start with, fixed-width text is significantly easier to work with than graphics.
Nothing's stopping anyone from writing a CI tool that outputs to HTML with embedded images. The bigger question is why it's uncommon.
And so we have a lot of text editors, diff tools, efficient compression, tools like sort and uniq: the whole unix ecosystem.
So if you transform sound to text, you can then use text tools to compare the output to catch differences. A simple serialization of numerical sample values would have caught the bug, but I agree that having a way of visualizing the output is nice.
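As a sketch of that serialization idea (the fixed-point formatting sets the comparison tolerance; the sample values are made up):

```python
# Sketch: serialize sample values as text so standard diff tools can
# compare two runs. Rounding to a fixed precision sets the tolerance.
import difflib

def serialize(samples, precision=4):
    return [f"{s:.{precision}f}" for s in samples]

run_a = [0.0, 0.5012, 0.8662, 1.0001]
run_b = [0.0, 0.5012, 0.8661, 1.0001]  # one sample differs

diff = list(difflib.unified_diff(serialize(run_a), serialize(run_b),
                                 lineterm=""))
for line in diff:
    print(line)
```

On disk, the same files would work with plain `diff`, `sort`, `uniq`, and the rest of the Unix toolbox.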
Command line input, programming, etc. is also still mostly done with text, because it's easy to transform. Of course, you can imagine working at a higher level with objects (like powershell does IIRC), mimetypes, etc.
Why are we not focusing our energy on making something that is actually up to date?
There is a reason the Unix way of bytestream-based shell and pipes is still useful and present these days to the point that That Other OS is now embedding Linux in it.
Also, these ancient terminals often had some interesting typography options that are encoded in the ANSI standard that most modern terminals don't bother (line attributes that generate wider and taller cells are one such example).
These formats may be more desirable than more modern and complete ones such as PostScript for other reasons. I wouldn't advise implementing a terminal capable of rendering PostScript graphics because it's one more way to infiltrate malware in your computer by rendering untrusted inputs (There are a lot of RCE opportunities in exploiting vulnerable decoders).
Nobody tries to make actually interesting new operating systems anymore. OS research today is just "let's implement unix with $security_feature", nobody is actually trying to make computers more powerful or fun to use, or design a system based off of a first-principles understanding of what a computer should be.
God I wish I was born in the lisp machine timeline
An OS facilitates communication between programs running on a computer. Unix lets those programs communicate by sending characters of text to each other. You could just as easily imagine an OS that lets them communicate by sending images, audio/video, 3d models, etc. An OS can be way more than what you think it is. To detox your brain from this unix worldview, spend some time in a VM and play around with AmigaOS or Open Genera. Those were actual coherent OSes with an actual view of what a computer should be and how it should behave. Unix isn't.
> reimagining of the whole standard application package which seems a much larger project than "merely" an OS.
By OS, I don't mean kernel. I mean the base set of software that lets you interact with your computer and do interesting stuff with it.
This blurry line is present in other environments as well. In the Apple Lisa, installing a program resulted in new templates in the Stationery folder. In Smalltalk, installing a program adds its class definitions to the system as independent entities you could use in your own programs.
Not all operating systems are the children of Unix and VMS.
`True := False`
Should crash any vintage ST/80 workstation.
That's iterm's own implementation. There's also sixel, as pointed out by another comment.
We have. They are called "browsers". You might be even using one right now!
Instead of diffing ASCII-rendered waveforms, save the arrays and diff the arrays (and then use any kind of numerical metric on the residual). Scientist programmers have all sorts of techniques for testing and debugging software that processes sampled signals.
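A minimal sketch of that array-diffing approach (pure Python; the metric names and sample values are mine). Note how the mean of the residual directly exposes the kind of constant DC offset mentioned elsewhere in the thread:

```python
# Sketch: compare two sample arrays numerically instead of diffing
# rendered plots, then assert on metrics of the residual.
def residual_metrics(expected, actual):
    residual = [a - e for e, a in zip(expected, actual)]
    n = len(residual)
    return {
        "peak_abs": max(abs(r) for r in residual),
        "rms": (sum(r * r for r in residual) / n) ** 0.5,
        "mean": sum(residual) / n,  # catches a constant DC offset
    }

expected = [0.1, -0.2, 0.3, -0.4]
actual = [0.15, -0.15, 0.35, -0.35]  # expected shifted by a +0.05 DC offset

m = residual_metrics(expected, actual)
assert abs(m["mean"] - 0.05) < 1e-9  # the DC offset shows up in the mean
print(m)
```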
Or you can use Braille to get 2x4 mosaics, but they usually look terrible.
The same group wants to include a couple others, from different platforms, but the Unicode Script Ad Hoc Group is concerned the new batch may not be as meaningful as the first one.
This is both tedious and makes it very hard to debug test failures (especially with cases like crossfades, pan laws, and looping). I love the idea of having a visual representation that lets me see what's going wrong in the test output, and I'm definitely going to try to implement some similar tests.
I'm also curious what the state of the art is for these sorts of tests. Does anyone have insight into what, e.g., Ableton's test suite looks like?
I don't know, but if I were to make an educated guess, maybe rendering stuff to actual audio files is a common approach? That way when something goes wrong, they can inspect it in a standard waveform editor?
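A sketch of that approach using only the Python standard library (16-bit mono output and the file name are my assumptions):

```python
# Sketch: dump a float sample buffer to a .wav file so a failing test's
# output can be opened in any standard waveform editor.
import math
import struct
import wave

def write_wav(path, samples, rate=48000):
    with wave.open(path, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 16-bit PCM
        w.setframerate(rate)
        frames = b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples
        )
        w.writeframes(frames)

# Illustrative: a tenth of a second of a 440 Hz tone at half amplitude.
tone = [0.5 * math.sin(2 * math.pi * 440 * n / 48000) for n in range(4800)]
write_wav("test_output.wav", tone)
```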
I think the most useful thing for me is I can call it from lldb and immediately dump buffers to my terminal while debugging.
Sure, maybe you don't need that much resolution for what the use case is. But it's the equivalent of looking at a graph and squinting your eyes to blur it.
edit: Yes. I miscalculated the dot density.
/me slaps forehead
Yes. I miscalculated the dot density. :-(