>"Fundamentally, GREASE works quite similarly to UC-Crux, our tool for under-constrained symbolic execution of LLVM.
Essentially, GREASE analyzes each function in the target binary by running it on a slate of fully symbolic registers.
When errors occur (for example, if the program reads from uninitialized memory), GREASE uses heuristics to refine this initial symbolic precondition (e.g., by initializing some memory) and re-runs the target function. This process continues until GREASE finds a bug, or concludes that the function is safe under some reasonable precondition on its inputs. The blog post introducing UC-Crux [https://www.galois.com/articles/under-constrained-symbolic-e...] describes this algorithm in considerable detail."
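The refine-and-re-run loop is simple enough to sketch. Here is a toy model in Python (nothing below is GREASE's actual API; the names are made up purely to illustrate the shape of the algorithm):

    # Toy model of the refine-and-re-run loop described above. None of these
    # names come from GREASE; they only illustrate the algorithm's shape.

    class Target:
        """Stand-in for a binary function: 'reads' lists the registers
        (or memory) it dereferences."""
        def __init__(self, reads):
            self.reads = reads

    def run_symbolically(precondition, target):
        """Return the first input the target touches that the precondition
        does not cover (an 'uninitialized read'), or None if it runs safely."""
        for name in target.reads:
            if name not in precondition:
                return name
        return None

    def analyze(target, max_refinements=10):
        precondition = set()                      # start fully unconstrained
        for _ in range(max_refinements):
            error = run_symbolically(precondition, target)
            if error is None:
                return ("safe under precondition", sorted(precondition))
            # Heuristic refinement: assume the offending input is initialized,
            # then re-run the function under the strengthened precondition.
            precondition.add(error)
        return ("possible bug: no refinement explains the error", None)

    print(analyze(Target(reads=["rdi", "rsi"])))
    # -> ('safe under precondition', ['rdi', 'rsi'])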
Fascinating!
It's almost like being able to run a given function inside its own virtual machine / virtual environment and set parameters/constraints for execution, that is, define preconditions and postconditions that determine successful (defined) or unsuccessful (undefined) behavior!
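(For what it's worth, Eiffel's "design by contract" already bakes pre/postconditions into the language. Here is a toy of what that looks like bolted onto Python, using a hypothetical @contract decorator, not any standard library feature:)

    # Toy "design by contract" decorator; a hypothetical sketch only.
    def contract(pre, post):
        def wrap(fn):
            def checked(*args):
                assert pre(*args), "precondition violated: undefined behavior"
                result = fn(*args)
                assert post(result), "postcondition violated"
                return result
            return checked
        return wrap

    @contract(pre=lambda x: x >= 0, post=lambda r: r >= 0)
    def isqrt_floor(x):
        return int(x ** 0.5)

    print(isqrt_floor(9))    # 3
    # isqrt_floor(-1) would raise: precondition violated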
Let me speculate that future programming languages might have abilities like this by default, that is, implicitly baked into them...
In short, I like the set of ideas behind this tool, and future compiler/language writers would do well to consider them when designing their compilers/languages...
(Phrased another way, it would be like being able to run any function of a program inside its own custom Bochs [https://bochs.sourceforge.io/] environment, set pre- and post-constraints, run/test as many of these as needed, and return to the main program/environment reporting any constraint violations... or something like that...)
Anyway, an interesting set of ideas!
No, it won't (most likely). VTracer (which the authors compare against) is fast, runs in the browser via WASM, consumes way fewer resources, and can even convert natural images quite decently.
But the model seems cool for the use case of prompt-to-logo or icon generation (over my current workflow of getting a JPG from Flux and passing it through VTracer). I hope someone over at llama.cpp notices this (at least for the text-to-SVG use case, if not multimodal).
Author of VTracer here. Finally able to comment on Hacker News before the thread gets locked.
Would be interested in learning about your workflow. Is it a logo generation app?
I feel like this is an example of "Machine learning is eating software". Raster-to-vector conversion is a perfect problem, because we can generate datasets of unlimited size and easily validate them with vectorize-rasterize roundtrips.
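For example (a sketch assuming the cairosvg, Pillow, and numpy packages, with random circles standing in for real vector art):

    # Sketch: free training pairs by rasterizing random SVGs, and roundtrip
    # validation by rasterizing a tracer's output and comparing pixels.
    import io, random
    import cairosvg                      # pip install cairosvg pillow numpy
    import numpy as np
    from PIL import Image

    def random_svg(n=5, size=128):
        shapes = "".join(
            f'<circle cx="{random.randint(0, size)}" cy="{random.randint(0, size)}" '
            f'r="{random.randint(4, size // 4)}" fill="#{random.randrange(16**6):06x}"/>'
            for _ in range(n))
        return (f'<svg xmlns="http://www.w3.org/2000/svg" '
                f'width="{size}" height="{size}">{shapes}</svg>')

    def rasterize(svg, size=128):
        png = cairosvg.svg2png(bytestring=svg.encode(),
                               output_width=size, output_height=size)
        return np.asarray(Image.open(io.BytesIO(png)).convert("RGB"), float)

    def roundtrip_error(raster, traced_svg):
        """Score a tracer: rasterize its SVG output, compare pixel-by-pixel."""
        return float(np.abs(raster - rasterize(traced_svg)).mean())

    svg = random_svg()
    pair = (rasterize(svg), svg)   # one (raster, vector) training example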
I did have an idea of performing tracing iteratively: basically, adjusting the output SVG bit by bit until it matches the original image within a certain margin of error, and optimizing the output size of the SVG by simplifying curves where doing so does not degrade quality. But VTracer in its current state is one-shot and probably uses 1/100th of the computational resources.
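A toy of that loop (parametric discs standing in for SVG primitives; a real version would mutate actual paths and rasterize them with an SVG renderer):

    # Hill-climb a set of discs toward a target image: mutate one primitive,
    # keep the change only if the rasterization error does not get worse.
    import random
    import numpy as np

    SIZE = 64
    YY, XX = np.mgrid[0:SIZE, 0:SIZE]

    def render(circles):
        img = np.zeros((SIZE, SIZE))
        for cx, cy, r in circles:
            img[(XX - cx) ** 2 + (YY - cy) ** 2 <= r ** 2] = 1.0
        return img

    def error(circles, target):
        return np.abs(render(circles) - target).mean()

    target = render([(20, 20, 10), (44, 40, 14)])        # image to trace
    guess = [(random.randrange(SIZE), random.randrange(SIZE), 8)
             for _ in range(2)]

    for _ in range(2000):                                 # bit-by-bit tweaks
        i = random.randrange(len(guess))
        cx, cy, r = guess[i]
        trial = list(guess)
        trial[i] = (cx + random.randint(-2, 2), cy + random.randint(-2, 2),
                    max(1, r + random.randint(-1, 1)))
        if error(trial, target) <= error(guess, target):
            guess = trial

    print("final error:", error(guess, target))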
VTracer seems to perform badly on all the examples. I suspect the results could be drastically improved simply by upscaling the image (via traditional interpolation or machine-learning-based upscaling) and picking different parameters. But I am glad that it was cited!
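Something like this, assuming Pillow and a vtracer binary on PATH (flag names from memory, so check vtracer --help):

    # Sketch of "upscale first, then trace": traditional Lanczos interpolation;
    # an ML upscaler could be dropped in at the same spot.
    import subprocess
    from PIL import Image

    def upscale_and_trace(src, out_svg, factor=4):
        img = Image.open(src)
        big = img.resize((img.width * factor, img.height * factor),
                         Image.Resampling.LANCZOS)
        big.save("upscaled.png")
        subprocess.run(["vtracer", "--input", "upscaled.png",
                        "--output", out_svg], check=True)

    upscale_and_trace("logo.jpg", "logo.svg")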
Thanks for noticing this; yes, I have also noticed what you're pointing out, but it's workable for many use cases. I use this workflow for making images for marketing or the web (so the images are more artistic than photorealistic to begin with). Think of the stuff you can find on unDraw, but generated by image models from prompts, then run through VTracer. The reproductions are not perfect, but they are often good enough (tracing can be slow depending on how sharp you want the curves, and the files are often very large, as you mentioned). Then I make any changes in Inkscape and convert back to raster for publishing.
> logo generation app
For logo generation, I would actually prefer code gen. I thought of this problem recently when reading about diffusion language models (if there is enough training data available in the form of text-vector-raster triplets).
(That, and we warmly welcome Frink to the club of computer programming languages! (as we once did for Perl, Java, PHP, Rust, Clojure, Go, Zig, Nim, Python, etc. etc.!))
>"Once I learned the above two things, here’s what I did:
o Instead of trying to get into big accelerators, I started off by joining smaller founder communities like LOI (Canada), OnDeck, Mercury Raise. They give you enough perks, network, and traction to start building up momentum.
o Switched to delivering services over software. Instead of selling SEO software for $200/mo, I delivered outcome-based services for $2K/mo. Within 3 months, I was at $10K MRR. If I had continued doing software, it would have taken many more months to get close to that number."
Well done!
I especially like the "join smaller founder communities" idea -- that one is a keeper!
It has an FPGA! It has an Optical Connection! (Well, "24 optical channels (48 fibers) up to 16.3 Gbps per channel, 850nm" -- to be precise!)
What's not to love? :-)
Anyone else know of other boards out there combining FPGAs and optical interfaces? (Would love to see many more boards in this category!)
(Would also love to see a future computer where the bus is replaced by a backplane of optical connections, where each optical connection could be proxied/interposed so that advanced system-bus debugging could take place...)
>"To detect single photons, we need open-source single-photon detectors. There are several ways of detecting very low light:
o Photomultiplier Tube (PMT), the classic way: a glass tube with several stages that, when supplied with high voltage, multiplies the electrons generated by an incident photon. These tubes are available quite cheaply on the internet, and even complete luminometers with photomultiplier tubes can be found.
o Charge-Coupled Devices (CCD) with low noise when cooled down are the classic way to get a very-low-light, reproducible image. These are used in specialized microscopes and are often super expensive.
o Electron Multiplying CCDs (EMCCD) combine the multiplying effect of the photomultiplier tube with the CCD sensor, and are thus even more expensive.
o sCMOS – CMOS (Complementary Metal Oxide Semiconductor) chips replaced CCDs in consumer and even professional photo cameras. With sCMOS, a scientific grade of CMOS, this technology is now entering the scientific-imaging market and will hopefully bring prices down.
Then there are two relatively inexpensive and still sensitive semiconductor devices that are interesting for photon detection:
o Avalanche photodiodes (APD) are small discrete detectors that exploit the photon-triggered avalanche current of a reverse-biased p-n junction to detect photons. The higher the reverse voltage (80-200 VDC), the bigger the avalanche multiplication (it acts like an internal amplification). With a relatively simple circuit providing high voltage at limited current and a high-gain, low-noise amplifier, these devices are very fast and sensitive photodetectors. The OpenAPD is an open source design for such a circuit.
o Single Photon Avalanche Diodes (SPAD) are special avalanche photodiodes which operate above the breakdown voltage in so-called Geiger mode. When a photon hits the detector, a self-sustaining avalanche is triggered that turns the diode "on". The diode is then reset "off" to be ready for the next photon. Thus these elements create a digital signal (one click per photon, hence the name Geiger mode, I guess). The set and reset of the diode can be achieved with passive quenching (just a resistor in series) or with an active avalanche-quenching circuit. The OpenSPAD implements such an active quenching circuit as described in the scientific literature.
o Silicon Photomultipliers (SiPM): Every SPAD in a SiPM operates in Geiger mode and is coupled to the others by a metal or polysilicon quenching resistor. Although the device works in digital/switching mode, most SiPMs are analog devices, because all the microcells are read out in parallel, making it possible to generate signals with a dynamic range from a single photon to 1000 photons for a device with just a square-millimeter area."
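One practical wrinkle with the Geiger-mode devices quoted above: while a SPAD is being reset it is blind, so at high rates the clicks undercount the photons. The standard non-paralyzable dead-time correction is a one-liner (the 50 ns dead time below is an assumed, ballpark figure; check the device's datasheet):

    # Non-paralyzable dead-time correction: recover the true photon rate n
    # from the measured click rate m and the detector dead time tau:
    #     n = m / (1 - m * tau)
    def true_rate(measured_cps, dead_time_s):
        return measured_cps / (1.0 - measured_cps * dead_time_s)

    measured = 2.0e6      # clicks per second at the output
    tau = 50e-9           # assumed dead time, seconds
    print(f"true rate ~ {true_rate(measured, tau):.3e} photons/s")
    # -> ~2.222e+06 photons/s: a ~10% undercount at this click rate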
>"Laser optical pickup units that are used in CD / DVD and BlueRay players to read the disc are amazing pieces of engineering with built in lasers, optical stages, filters and mirrors. These small units, that can be found in almost any electronics dumpster, are a perfect starting point for building amazing scientific instruments."
[...]
"...we started working on building a DIY Laser Tweezer (optical trap) built only from a DVD pickup and a webcam."
>"Services like Cloudflare, Akamai Technologies, Fastly, and Amazon CloudFront are not only widely accessible but also integral to the global internet infrastructure. In regions with restrictive networks, alternatives such as CDNetworks in Russia, ArvanCloud in Iran, or ChinaCache in China may serve as viable proxies. These CDNs support millions of websites across critical sectors, including government and healthcare, making them indispensable.
Blocking them risks significant collateral damage to commerce, which inadvertently makes them reliable pathways for bypassing restrictions."
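(The classic trick here is "domain fronting": the outer TLS SNI carries an innocuous name on the CDN while the inner Host header names the real site behind the same CDN. Both hostnames below are placeholders, and several big CDNs now reject mismatched SNI/Host pairs, so treat this purely as an illustration of the mechanism:)

    # Domain-fronting sketch: outer TLS handshake says one name, inner HTTP
    # request asks for another site hosted behind the same CDN.
    import requests

    resp = requests.get(
        "https://innocuous-site.example/",          # name seen by the censor (SNI)
        headers={"Host": "blocked-site.example"},   # name seen by the CDN
    )
    print(resp.status_code)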
(There's also TCP/IP (Internet) via ham radio (packet radio) and/or Starlink (or, more broadly, satellite Internet)...)
Observation: if a large enough commercial corporation has an interest relating to commerce (in whatever area), and that commerce conflicts with a government block (foreign or domestic) of whatever sort, then the large commercial interest, given enough time, will usually (*) win (they can usually hire better lawyers, foreign or domestic...)
> There's also TCP/IP (Internet) via ham radio (packet radio)
I get the idea and the spirit behind using ham radio to evade censorship, but...
- you're not allowed to run encrypted content over ham packet radio, at least by regulation; plain HTTP is fine, but anything SSL is not... don't be a dick and ruin the fun for everyone else.
- ham radio comms are, outside of emergencies such as widespread blackouts or natural disasters, supposed to be only between ham radio operators themselves - no message-passing for others.
- at least in the long-range bands that you'd actually use for cross-country communications, bandwidth is scarce - and you may disturb a lot of people by doing that, or by just blasting away with a huge transmitter... Try listening on 80m late on a Monday evening in Germany; there are so damn many Russians on there with extremely powerful transmitters.
Ham radio frequencies are scarce enough as it is, and politicians, particularly in authoritarian countries, already aren't happy about it (in North Korea, for example, it's banned, and it's one of the rarest countries to DX with). Please don't make life for hams more complicated than it already is by abusing what it stands for.