
I delivered a talk at Rust Sydney about this exact topic last week:

https://reorchestrate.com/posts/your-binary-is-no-longer-saf...

I am able to translate multi-thousand-line C functions and reproduce a bug-for-bug implementation.


Decompilation does not preserve semantics. You generally do not know whether the code from the decompiler will compile back to a binary that is semantically equivalent to the one you initially decompiled.

My test harness loads the original DLL and executes it in parallel against the converted code (differential testing). That closes the feedback loop the LLM needs to find and fix discrepancies.
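For illustration only, a rough sketch of what that kind of harness can look like in Rust (using the libloading crate; the exported function name `score` and its signature are hypothetical, and the real harness also checks side effects, not just return values):

    use libloading::{Library, Symbol};

    // Rust port whose behaviour we want to verify against the original binary.
    fn score_port(x: i32) -> i32 {
        x.wrapping_mul(31).wrapping_add(7)
    }

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Load the original DLL and resolve the exported C function.
        let lib = unsafe { Library::new("original.dll")? };
        let score_orig: Symbol<unsafe extern "C" fn(i32) -> i32> =
            unsafe { lib.get(b"score\0")? };

        // Run both implementations over the same inputs and flag any divergence.
        for x in -1000..1000 {
            let expected = unsafe { score_orig(x) };
            let actual = score_port(x);
            assert_eq!(expected, actual, "discrepancy at input {x}");
        }
        Ok(())
    }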

I'm also doing this on an old Win32 DLL so the task is probably much easier than a lot of code bases.


What are you tracking during the runtime tracing? Or is that written up in your link?

I am applying differential/property-based testing to all the side effects of functions (mutations) and their return values. The Rust code coverage is also used to steer the LLM as it finds discrepancies in side effects.
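As a rough sketch of the property-based side (using the proptest crate; the two checksum functions below are hypothetical stand-ins, and in practice the "original" one calls into the DLL):

    use proptest::prelude::*;

    // Hypothetical ported function: mutates a buffer (side effect) and returns a checksum.
    fn checksum_port(buf: &mut [u8]) -> u32 {
        buf.iter_mut().for_each(|b| *b = b.wrapping_add(1));
        buf.iter().map(|&b| b as u32).sum()
    }

    // Stand-in for the original binary's behaviour (in practice this calls the DLL export).
    fn checksum_orig(buf: &mut [u8]) -> u32 {
        buf.iter_mut().for_each(|b| *b = b.wrapping_add(1));
        buf.iter().map(|&b| b as u32).sum()
    }

    proptest! {
        #[test]
        fn port_matches_original(input in proptest::collection::vec(any::<u8>(), 0..256)) {
            let mut buf_orig = input.clone();
            let mut buf_port = input;

            let ret_orig = checksum_orig(&mut buf_orig);
            let ret_port = checksum_port(&mut buf_port);

            // Compare both the return values and the side effects (the mutated buffers).
            prop_assert_eq!(ret_orig, ret_port);
            prop_assert_eq!(buf_orig, buf_port);
        }
    }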

It is written up in my link. Please bear in mind that it is really hard to find the right level of detail to communicate this at, so I'm happy to answer questions.


That's fine, that answers my question.

Many of the decompiled console games of the '90s were originally written in C89 using an ad-hoc compiler from Metrowerks or some off-branch release of gcc-2.95, plus console-specific assemblers.

I'm willing to bet that the decompiled output is going to be more readable than the original source code.


Not related to what I was saying. Compilation is a many-to-one transformation, and although you can try to guess an inverse, there is no way to guarantee you will recover the original source because at the assembly level you don't have any types or structs.

Hi Ben. I published an article about this problem this week (and did a talk at Rust Sydney).

What you need is differential, property-based testing. I’m sure it would work for you (you can skip the first half as you already have the source):

https://reorchestrate.com/posts/bringing-a-warhammer-to-a-kn...


What about S3 stored in SQLite? https://github.com/seddonm1/s3ite

This was written to store many thousands of images for machine learning.


A process for using LLMs to do brute-force decompilation of binaries and conversion to another programming language, including testing to prove equivalence.

This process targets an old computer game, but there is nothing preventing it from being run on any binary.


Thanks.

All good and valid questions.

1. I work mostly in Rust so I'll answer in those terms for async. This library [0] uses queues to manage workload. I run a modified version [1] which creates 1 writer and n reader connections to a WAL-backed SQLite and dispatches async transactions against them. The n readers pull work from a shared common queue (see the sketch after the links below).

2. Yes, there is not much you can do about file IO, but SQLite is still a full database engine with caching. You could use this benchmarking tool to help understand where your limits would be (for example, do a run against a ramdisk and then against your real storage).

3. As per #1, I keep connections open and distribute transactions across them myself. Checkpointing will only be a problem under considerable sustained write load, but you should be able to simulate your load and observe the behavior. The WAL2 branch of SQLite is intended to address sustained-load problems.

[0]: https://github.com/programatik29/tokio-rusqlite [1]: https://github.com/seddonm1/s3ite/blob/0.5.0/src/database.rs
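For illustration, a minimal sketch of that single-writer/multi-reader dispatch pattern (assuming the rusqlite, crossbeam-channel and tokio crates; the `Pool` type and its methods are made up here and much simplified compared to the linked code):

    use crossbeam_channel::{unbounded, Sender};
    use rusqlite::Connection;
    use tokio::sync::oneshot;

    // A job is a closure executed on a dedicated connection thread; the result
    // is sent back to the async caller over a oneshot channel.
    type Job = Box<dyn FnOnce(&mut Connection) + Send>;

    #[derive(Clone)]
    struct Pool {
        writer: Sender<Job>,
        readers: Sender<Job>, // one shared MPMC queue drained by n reader threads
    }

    impl Pool {
        fn open(path: &str, n_readers: usize) -> rusqlite::Result<Self> {
            // Single writer connection, WAL mode so readers are not blocked.
            let (writer_tx, writer_rx) = unbounded::<Job>();
            let mut conn = Connection::open(path)?;
            conn.pragma_update(None, "journal_mode", "WAL")?;
            std::thread::spawn(move || {
                for job in writer_rx {
                    job(&mut conn);
                }
            });

            // n reader connections, all pulling work from the same queue.
            let (reader_tx, reader_rx) = unbounded::<Job>();
            for _ in 0..n_readers {
                let mut conn = Connection::open(path)?;
                let rx = reader_rx.clone();
                std::thread::spawn(move || {
                    for job in rx {
                        job(&mut conn);
                    }
                });
            }

            Ok(Pool { writer: writer_tx, readers: reader_tx })
        }

        // Run a write closure on the single writer connection; `read` is the
        // same shape but dispatches to `self.readers`.
        async fn write<T, F>(&self, f: F) -> T
        where
            T: Send + 'static,
            F: FnOnce(&mut Connection) -> T + Send + 'static,
        {
            let (tx, rx) = oneshot::channel();
            self.writer
                .send(Box::new(move |conn: &mut Connection| {
                    let _ = tx.send(f(conn));
                }))
                .expect("writer thread has stopped");
            rx.await.expect("writer dropped the result")
        }
    }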


Thanks for your answer.

For 1, what is a good n? More than NUM_CPU probably does not make sense, right? But would I want to keep it lower?

Also, you dispatch transactions in your queue? You define your whole workload upfront, send it to the queue and wait for it to finish?


I went through the same mental process as you and also use num_cpus [0], but this is based only on intuition and is likely wrong. More benchmarking is needed, as my benchmarks show that more parallelism only helps up to a point.

You can see how the transactions work in this example [1]. I call `.write()` or `.read()` on the connection, which decides which queue to use. I am also in the process of doing a PR against rusqlite [2] to set the default transaction behavior as a result of this benchmarking, so hopefully `write()` will default to IMMEDIATE and `read()` will remain DEFERRED (a rough sketch follows after the links).

[0] https://docs.rs/num_cpus/latest/num_cpus/ [1] https://github.com/seddonm1/s3ite/blob/0.5.0/src/s3.rs#L147 [2] https://github.com/rusqlite/rusqlite/pull/1532
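A rough sketch of what that difference looks like with rusqlite (the table and queries here are made up for illustration):

    use rusqlite::{params, Connection, TransactionBehavior};

    fn example(conn: &mut Connection) -> rusqlite::Result<()> {
        // Write path: IMMEDIATE takes the write lock up front, so the transaction
        // cannot fail with SQLITE_BUSY after already having done work.
        let tx = conn.transaction_with_behavior(TransactionBehavior::Immediate)?;
        tx.execute("INSERT INTO objects (key, value) VALUES (?1, ?2)", params!["hello", "world"])?;
        tx.commit()?;

        // Read path: DEFERRED never takes the write lock, so in WAL mode it never
        // blocks the single writer.
        let tx = conn.transaction_with_behavior(TransactionBehavior::Deferred)?;
        let n: i64 = tx.query_row("SELECT count(*) FROM objects", [], |row| row.get(0))?;
        tx.commit()?;
        println!("{n} objects");
        Ok(())
    }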


Valuable info and links, instant bookmarks, thank you!

If you don't mind me asking, why did you go with rusqlite + a tokio wrapper for it and not go with sqlx?


Whilst I love the idea of sqlx's compile-time checked queries, in my experience it is not always practical to need a database connection to compile the code. If it works for you then that's great, but we hit a few tricky edge cases when dealing with migrations etc.

Also, and more fundamentally, your application state is the most valuable thing you have. Do whatever makes you most comfortable that that state (and its transitions) is as well understood as possible. rusqlite is that for me.


Thank you, good perspective.

Weren't the compile-time connections to DB optional btw? They could be turned off I think (last I checked, which was last year admittedly).

My question was more about the fact that sqlx is integrated with tokio out of the box and does not need an extra crate like rusqlite does. But I am guessing you don't mind that.


sqlx has an offline mode where it saves the metadata of the SQL database structure, but then you run the risk of that being out of sync with the database?

Yeah, I just drop this one file [0] into my Tokio projects and I have a SQLite single-writer/multi-reader pool done in a few seconds.

[0]: https://github.com/seddonm1/s3ite/blob/0.5.0/src/database.rs


Thanks again!

I'll be resuming my effort to build an Elixir <-> Rust SQLite bridge in the next several months. Hope you won't mind some questions.


In other abuses of SQLite, I wrote a tool [0] that exposes blobs in SQLite via an Amazon S3 API. It doesn't do expiry (but that would be easy enough to add if S3 does it).

We were using it to manage millions of images for machine learning, as many tools support S3 and the ability to add custom metadata to objects is useful (harder with plain files). It is one SQLite database per bucket, but at the bucket level it is transactional.

0: https://github.com/seddonm1/s3ite
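For a sense of how objects and their metadata can map onto SQLite, a hypothetical schema sketch (the actual s3ite schema may well differ):

    use rusqlite::Connection;

    fn init_bucket(conn: &Connection) -> rusqlite::Result<()> {
        // One database file per bucket; each row is one object.
        conn.execute_batch(
            "CREATE TABLE IF NOT EXISTS objects (
                 key           TEXT PRIMARY KEY,
                 value         BLOB NOT NULL,
                 last_modified TEXT NOT NULL,
                 metadata      TEXT  -- JSON map of user-defined x-amz-meta-* headers
             );",
        )
    }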


I have been following and playing with this repository: https://github.com/singlestore-labs/python-wasi/

It builds a single Python WASM module with all dependencies included (they use a VFS) and ships a Dockerfile to make the process easy (it actually worked on the first go). It does produce large files, though: wasi-python3.11.wasm is 110MB.


Yes! SingleStore is a great team. We are currently using some of their work for this Python release, like libz.


This post demonstrates how to implement plugins for Rust using QuickJS in WebAssembly for safe arbitrary code execution. It relies heavily on the work done by Shopify with Javy (https://github.com/Shopify/javy) but makes the deployment process easier. I hope it is useful.


The Materialize team maintains a fork of https://github.com/sfackler/rust-postgres with the changes required to consume from the Postgres WAL: https://github.com/materializeInc/rust-postgres.

Here is a comment and link to some code which seems to work: https://github.com/sfackler/rust-postgres/issues/116#issueco...


Thank you!

