Decompilation does not preserve semantics. You generally do not know whether the decompiler's output, once recompiled, will produce a binary semantically equivalent to the one you started from.
My test harness loads the original DLL and executes it in parallel with the converted code (differential testing). That closes the feedback loop the LLM needs to find and fix discrepancies.
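Conceptually the loop is just: call the original export through FFI, call the port, compare. A minimal sketch of that idea (not the actual harness; the DLL path, export name, and signature are invented for illustration):

```rust
// Minimal differential-testing sketch, assuming a hypothetical export.
use libloading::{Library, Symbol};

// Stand-in for the translated Rust function under test.
fn rust_port(x: i32) -> i32 {
    x.wrapping_mul(2)
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    unsafe {
        let original = Library::new("original.dll")?;
        // Hypothetical export taking an i32 and returning an i32.
        let legacy_fn: Symbol<unsafe extern "C" fn(i32) -> i32> =
            original.get(b"legacy_fn")?;

        for input in [0, 1, -1, 42, i32::MAX] {
            let expected = legacy_fn(input); // ground truth from the original binary
            let actual = rust_port(input);   // converted code
            assert_eq!(expected, actual, "discrepancy for input {input}");
        }
    }
    Ok(())
}
```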
I'm also doing this on an old Win32 DLL, so the task is probably much easier than it would be for a lot of codebases.
I am applying differential/property-based testing to the return values and to all the side effects of functions (mutations). Rust code coverage is also used to steer the LLM as it tracks down discrepancies in side effects.
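For the property-based side, something like proptest can generate inputs and check both return values and mutated buffers against the original. A rough sketch, where `call_original_export` and `rust_port` are stand-ins for the FFI call and the translated function:

```rust
use proptest::prelude::*;

// Stand-ins: the real versions call into the original DLL and the Rust port.
unsafe fn call_original_export(x: u32, buf: &mut [u8]) -> u32 {
    x.wrapping_add(buf.len() as u32)
}
fn rust_port(x: u32, buf: &mut [u8]) -> u32 {
    x.wrapping_add(buf.len() as u32)
}

proptest! {
    #[test]
    fn port_matches_original(
        input in 0u32..10_000,
        buf in proptest::collection::vec(any::<u8>(), 0..256),
    ) {
        let mut original_buf = buf.clone();
        let mut port_buf = buf.clone();
        let original_ret = unsafe { call_original_export(input, &mut original_buf) };
        let port_ret = rust_port(input, &mut port_buf);
        prop_assert_eq!(original_ret, port_ret); // return values must agree
        prop_assert_eq!(original_buf, port_buf); // side effects (mutations) must agree
    }
}
```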
It is written up in my link - please bear in mind it is really hard to find the right level of detail to communicate this at - so I'm happy to answer questions.
Many of the decompiled console games of the '90s were originally written in C89 using an ad-hoc compiler from Metrowerks or some off-branch release of gcc-2.95, plus console-specific assemblers.
I'm willing to bet that the decompiled output is going to be more readable than the original source code.
Not related to what I was saying. Compilation is a many-to-one transformation, and although you can try to guess an inverse, there is no way to guarantee you will recover the original source, because at the assembly level you no longer have any types or structs.
A process for using LLMs to do brute-force decompilation of binaries and conversion to another programming language - including testing to demonstrate equivalence.
This process is targeting an old computer game but there is nothing preventing this being run on any binary.
1. I work mostly in Rust so I'll answer in terms of async there. This library [0] uses queues to manage the workload. I run a modified version [1] which creates 1 writer and n reader connections to a WAL-backed SQLite database and dispatches async transactions against them. The n readers pull work from a shared common queue (see the sketch after this list).
2. Yes, there is not much you can do about file IO, but SQLite is still a full database engine with caching. You could use this benchmarking tool to understand where your limits are (do a run against a ramdisk, then against your real storage).
3. As per #1, I keep connections open and distribute transactions across them myself. Checkpointing will only be a problem under considerable sustained write load but you should be able to simulate your load and observe the behavior. The WAL2 branch of SQLite is intended to prevent sustained load problems.
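A stripped-down sketch of the shape described in #1 (not the linked library itself): one writer connection and n reader connections against a WAL-backed SQLite file, each connection owned by a blocking task, with readers sharing one multi-consumer queue.

```rust
use crossbeam_channel::{unbounded, Receiver, Sender};
use rusqlite::Connection;
use tokio::task;

// A job is any closure that gets exclusive access to one connection.
type Job = Box<dyn FnOnce(&mut Connection) + Send + 'static>;

fn spawn_conn(path: String, rx: Receiver<Job>) {
    task::spawn_blocking(move || {
        let mut conn = Connection::open(path).expect("open sqlite");
        conn.pragma_update(None, "journal_mode", "WAL").expect("enable WAL");
        // Drain jobs until every sender has been dropped.
        while let Ok(job) = rx.recv() {
            job(&mut conn);
        }
    });
}

fn build_pool(path: &str, readers: usize) -> (Sender<Job>, Sender<Job>) {
    let (write_tx, write_rx) = unbounded::<Job>();
    let (read_tx, read_rx) = unbounded::<Job>();
    spawn_conn(path.to_owned(), write_rx); // single writer connection
    for _ in 0..readers {
        spawn_conn(path.to_owned(), read_rx.clone()); // readers share one queue
    }
    (write_tx, read_tx)
}
```

Async callers then send a closure (plus, say, a oneshot channel for the result) down `write_tx` or `read_tx` and await the reply.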
I went through the same mental process as you and also use num_cpus [0], but this is based only on intuition and is likely wrong. More benchmarking is needed, as mine shows that more parallelism only helps up to a point.
You can see how the transactions work in this example [1]. A connection exposes `.write()` or `.read()`, which decides which queue to use. As a result of this benchmarking I am in the process [2] of preparing a PR against rusqlite to set the default transaction behavior, so hopefully `write()` will default to IMMEDIATE and `read()` will remain DEFERRED.
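The intent is roughly this (the table and queries are hypothetical, but `transaction_with_behavior` is the rusqlite API in question):

```rust
use rusqlite::{Connection, Result, TransactionBehavior};

// Writes take the write lock up front with an IMMEDIATE transaction.
fn write_tx(conn: &mut Connection) -> Result<()> {
    let tx = conn.transaction_with_behavior(TransactionBehavior::Immediate)?;
    tx.execute("UPDATE counters SET value = value + 1 WHERE name = ?1", ["hits"])?;
    tx.commit()
}

// Reads stay DEFERRED, so no write lock is ever requested.
fn read_tx(conn: &mut Connection) -> Result<i64> {
    let tx = conn.transaction_with_behavior(TransactionBehavior::Deferred)?;
    tx.query_row("SELECT value FROM counters WHERE name = ?1", ["hits"], |row| row.get(0))
}
```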
Whilst I love the idea of SQLX's compile-time checked queries, in my experience it is not always practical to need a database connection just to compile the code. If it works for you then that's great, but we hit a few tricky edge cases when dealing with migrations etc.
Also, and more fundamentally, your application state is the most valuable thing you have. Do whatever makes you most comfortable that the state (and its transitions) is as well understood as possible. For me, that's rusqlite.
Weren't the compile-time connections to DB optional btw? They could be turned off I think (last I checked, which was last year admittedly).
My question was more about the fact that sqlx is integrated with tokio out of the box and does not need an extra crate like rusqlite does. But I am guessing you don't mind that.
SQLX has an offline mode where it saves the metadata of the database structure, but then don't you run the risk of that being out of sync with the database?
Yeah, I just drop this one file [0] into my Tokio projects and I have SQLite with a single-writer/multi-reader pool set up in a few seconds.
In other abuses of SQLite, I wrote a tool [0] that exposes blobs in SQLite via an Amazon S3 API. It doesn't do expiry (but that would be easy enough to add if S3 supports it).
We were using it to manage millions of images for machine learning, as many tools support S3 and the ability to add custom metadata to objects is useful (harder with plain files). It is one SQLite database per bucket, but at the bucket level it is transactional.
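A hypothetical sketch of the underlying idea (not the tool's actual schema): one SQLite file per bucket, each object a row with a blob body and arbitrary metadata, so a whole S3 PUT is just one transaction.

```rust
use rusqlite::{params, Connection, Result};

fn put_object(conn: &mut Connection, key: &str, body: &[u8], metadata: &str) -> Result<()> {
    let tx = conn.transaction()?;
    tx.execute(
        "CREATE TABLE IF NOT EXISTS objects (key TEXT PRIMARY KEY, body BLOB, metadata TEXT)",
        [],
    )?;
    tx.execute(
        "INSERT OR REPLACE INTO objects (key, body, metadata) VALUES (?1, ?2, ?3)",
        params![key, body, metadata],
    )?;
    tx.commit() // the object and its metadata land atomically
}
```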
It builds a single Python WASM module with all dependencies included (they use a VFS), and provides a Dockerfile to make the process easy (it actually worked first go). It does produce large files though: wasi-python3.11.wasm comes out at 110 MB.
This post demonstrates how to implement plugins for Rust using QuickJS in WebAssembly for safe arbitrary code execution. It relies heavily on the work done by Shopify with Javy (https://github.com/Shopify/javy) but makes the deployment process easier.
I hope it is useful.
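If you want a feel for the host side without reading the post, a rough sketch of loading a Javy-produced module with wasmtime looks like this. It is written against the older `wasmtime-wasi` sync API (roughly wasmtime 9-13); recent releases have reshuffled these types, so treat it as an outline rather than the post's exact code.

```rust
use wasmtime::{Engine, Linker, Module, Store};
use wasmtime_wasi::sync::WasiCtxBuilder;
use wasmtime_wasi::WasiCtx;

fn run_plugin(path: &str) -> anyhow::Result<()> {
    let engine = Engine::default();
    let module = Module::from_file(&engine, path)?;

    let mut linker: Linker<WasiCtx> = Linker::new(&engine);
    wasmtime_wasi::add_to_linker(&mut linker, |ctx| ctx)?;

    // Javy modules read their input from stdin and write their result to
    // stdout, so pipe the plugin's stdio through the host process.
    let wasi = WasiCtxBuilder::new().inherit_stdio().build();
    let mut store = Store::new(&engine, wasi);

    let instance = linker.instantiate(&mut store, &module)?;
    let start = instance.get_typed_func::<(), ()>(&mut store, "_start")?;
    start.call(&mut store, ())?;
    Ok(())
}
```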
https://reorchestrate.com/posts/your-binary-is-no-longer-saf...
I am able to translate multi-thousand-line C functions - and reproduce the implementation bug-for-bug.