ChatGPT?
Seriously though, this is such a weird reply and doesn't fit at all with the account's previous comments. It also has that not-quite-right feel that a lot of AI-generated content has.
Works on the web, and the colorized ASCII output can be edited. It can be a bit fiddly though: you need to manually turn off Wobble and adjust the pixel sampling size to get a good result.
That's really fun. I love it! One recommendation: it might be nice to add a mode where it doesn't wobble, but retains the cuteness that results from thickening and rounding corners.
Great job! Reminds me of Telnet Matrix but with colour :) (Edit: also just found this https://ascii.theater/ )
I wrote something similar a few years ago for a retro computer image converter using a fixed font from that computer. It also did dithering prior to character conversion, since that resulted in finer gradients for photos.
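For anyone curious, the dither-then-convert idea can be sketched roughly like this: diffuse the quantization error before picking a character for each pixel. A toy example assuming a grayscale image, Floyd-Steinberg error diffusion, and a 1-bit target (the original converter's kernel and palette may well have differed):

```python
# Floyd-Steinberg dithering on a grayscale image (values 0..255),
# applied before mapping each pixel to a character.
# Hypothetical sketch, not the original converter's code.

def dither(img):
    """Error-diffuse in place; img is a list of rows of floats 0..255."""
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 255.0 if old >= 128 else 0.0
            img[y][x] = new
            err = old - new
            # Push the quantization error onto unvisited neighbours.
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1][x + 1] += err * 1 / 16
    return img

def to_ascii(img):
    """'#' marks pixels quantized to white, ' ' those quantized to black."""
    return "\n".join("".join("#" if p else " " for p in row) for row in img)

# A smooth horizontal gradient becomes a dithered transition pattern
# instead of one hard edge:
grad = [[x * 255 / 15 for x in range(16)] for _ in range(4)]
print(to_ascii(dither(grad)))
```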
'What has been, it is what will be, And what has been done, it is what will be done. So there is nothing new under the sun.'
Nicely done. We used to print birthday banners and pictures in a very similar fashion, using EBCDIC and ASCII on continuous-paper band printers and later dot-matrix printers, choosing characters by their 'density' once printed.
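That density trick is easy to sketch: order a set of characters by ink coverage and index into it by brightness. The ramp below is a common modern choice, not necessarily what those banner programs used:

```python
# Map pixel brightness to characters ordered by printed "density".
# The ramp is illustrative; the actual banner programs used whatever
# their printer's character set offered.

RAMP = " .:-=+*#%@"  # lightest (least ink) to darkest (most ink)

def pixel_to_char(brightness):
    """brightness in 0..255; darker pixels get denser characters."""
    idx = (255 - brightness) * (len(RAMP) - 1) // 255
    return RAMP[idx]

print(pixel_to_char(255))  # white -> ' '
print(pixel_to_char(0))    # black -> '@'
```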
I wonder if you could do something creepy by starting with a standard log tail output and then slowly introducing more and more visible patterns before, I dunno, a scary face appearing out of the text.
A standalone archive packaged with all dependencies.
Installing yet another language package manager (not to mention having to navigate all the anti-features, like telemetry, that they seem to come with these days) is a major pain in the ass. Not everyone is a JavaScript developer.
Then there is also the online requirement that comes with those package managers. Not all systems have, or should have, an internet connection.
Yeah, what happened to it? Or didn't happen. I npm/yarn link my own CLI repos everywhere because setting up anything else takes hours and is a mess. I even develop Python scripts with `nodemon main.py` (lol), because only n-guys know what people need, it seems.
Very cool stuff! I wrote something vaguely similar recently that displays images in the terminal using Unicode block elements and 24 bit ANSI colors, but I just assume two pixel per character. I support scaling and animated GIFs: https://github.com/panzi/ansi-img#readme
But those character based logos somehow look more impressive. My thing just looks low-res. XD
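For reference, the two-pixels-per-character trick usually works by printing the upper half block with the top pixel as the foreground colour and the bottom pixel as the background. A minimal sketch of that idea, not ansi-img's actual code:

```python
# Render two vertically stacked RGB pixels as one terminal cell using
# the upper half block U+2580: foreground = top pixel, background =
# bottom pixel, via 24-bit ANSI SGR sequences.

def cell(top, bottom):
    """top/bottom are (r, g, b) tuples; returns one coloured half block."""
    return ("\x1b[38;2;{};{};{}m\x1b[48;2;{};{};{}m\u2580"
            .format(*top, *bottom))

RESET = "\x1b[0m"

def render(pixels):
    """pixels: list of rows of (r, g, b); pads nothing, so use an even height."""
    lines = []
    for y in range(0, len(pixels) - 1, 2):
        row = "".join(cell(t, b) for t, b in zip(pixels[y], pixels[y + 1]))
        lines.append(row + RESET)
    return "\n".join(lines)

# A 2x2 image becomes a single line of two characters:
img = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (255, 255, 0)]]
print(render(img))
```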
Those ASCII art headers don't look correct on my phone. I'm using Firefox on Android, so that might only affect a limited group of people. But I think it should just work with a <pre> tag and a monospace font, right?
A word to the site operator: the examples page is not rendering in a monospaced font for me (iOS with Lockdown Mode enabled); perhaps try including a safe CSS fallback monospaced font?
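For what it's worth, the fix is usually just ending the font stack with the generic family, so the browser can fall back when the webfont is blocked. A minimal sketch (the selector and font names here are illustrative, not the site's actual CSS):

```css
/* If the webfont fails to load (e.g. under iOS Lockdown Mode),
   the browser falls back to the generic monospace family. */
pre.ascii-art {
  font-family: "Fira Code", ui-monospace, monospace;
}
```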
This is awesome. I love ASCII (and ANSI) art, but recently have been working on creating forms for Space Station 14. Sadly, SS14 does not use a monospaced font for papers.
I have been using ASCGEN2, which lets me specify a font, but this seems much nicer. Does anyone know if there's something similar that lets you specify a font and try to find the best fit?
It’s nice how / forms a neat edge on parts of the Disney logo. I imagine that effect must be sensitive to grid alignment. It would be helpful to have a live preview for choosing the most aesthetically pleasing alignment.
I wondered about alignment too. The homepage mentions that it tries various shifted grids and picks the best one, presumably based on some metric of fit.
This. I walked into the computer room with my mom when I was a little kid. There was a dude printing out a pinup and taking pictures of it as the lines fed. My first exposure to porn.
but it feels to me like an '80s version of a Unix app that needed plain ASCII because it couldn't reproduce the PC line-drawing characters, maybe like this (random example):
Definitely worse because not every character maps to a pixel. The use of different characters is designed to represent spaces that have groupings of different coloured pixels in different configurations.
You could try to output everything using the ASCII block character and that would give you a close approximation.
On a stock rPi5, running this takes over 3 seconds. Three seconds to render a 370 x 370, 8-bit/color RGBA image to ASCII on a 2.4 GHz CPU. And this is my lead-in to a rant about neofetch, which takes about 0.2 seconds to run on the same Pi (see below), which is also how much it would slow down opening a shell if I put neofetch into my .profile. Lastly, cat-ing the saved output of neofetch to /dev/null takes about 0.01 seconds, which is roughly the time neofetch should take to run (and really, this tool too).
$ time ascii-silhouettify -i neofetch-1.png > /dev/null
real 0m1.817s
user 0m3.541s
sys 0m0.273s
$ time neofetch > out.txt
real 0m0.192s
user 0m0.118s
sys 0m0.079s
$ time cat out.txt > time
real 0m0.001s
user 0m0.001s
sys 0m0.000s
Surely the use case for this tool is to precompile your image into ASCII and then just output that on every shell start up, right? There’s no reason to convert the image every time.
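i.e. something along these lines (the paths and file names are illustrative; the -i flag is taken from the timing run above):

```shell
# Run once (or whenever the image changes):
ascii-silhouettify -i neofetch-1.png > ~/.cache/banner.txt

# Then in ~/.profile, just print the precomputed result:
cat ~/.cache/banner.txt
```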
I would assume that performance wasn't the prime concern, but rather the accuracy/appearance of the generated image. Most people aren't putting this in their shell startup, just as most people aren't putting an ffmpeg encode command in their shell startup.
And I would assume neofetch is relatively slow because getting some of the system information is relatively slow. e.g. to get the GPU name it does "lspci -mm":
% time lspci -mm >/dev/null
lspci -mm > /dev/null 0.03s user 0.03s system 2% cpu 2.993 total
% time lspci -mm >/dev/null
lspci -mm > /dev/null 0.03s user 0.01s system 76% cpu 0.053 total
Guess it's faster the second time due to the kernel's cache or whatnot, but 50 ms is still fairly slow. And that's only the GPU.
The algorithm involved is actually very hefty: for each cell of a 9 px by 15 px grid over the image, it compares each pixel of the cell to the equivalent pixel in each of the 95 printable ASCII characters. To find the optimal grid alignment, it repeats this for each of the 9 x 15 possible positionings of the image under the grid.
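To get a feel for the cost, here is a rough back-of-envelope tally for the 370 x 370 image timed above (this ignores partial edge cells and the per-comparison arithmetic, so it's a lower bound on the work, not a measurement):

```python
# Back-of-envelope comparison count for the brute-force search
# described above, on a hypothetical 370x370 image.

CELL_W, CELL_H = 9, 15   # glyph cell size in pixels
CHARSET = 95             # printable ASCII characters tried per cell
W = H = 370              # image dimensions

cells = (W // CELL_W) * (H // CELL_H)           # full cells per alignment
per_alignment = cells * CELL_W * CELL_H * CHARSET
alignments = CELL_W * CELL_H                    # all grid offsets tried
total = per_alignment * alignments

print(f"{cells} cells, {per_alignment:,} pixel comparisons per alignment")
print(f"{alignments} alignments -> {total:,} comparisons total")
# -> roughly 1.7 billion comparisons, which goes some way toward
#    explaining multi-second runtimes on a Pi.
```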