Hacker News | new | past | comments | ask | show | jobs | submit | bieganski's comments

their costs are dominated by compute anyway, and they don't mind huge compensation packages either - so rebuilding an in-house Slack or whatever, even cheaply, isn't much of a cost saving?


"preventing firearms printing", aka "securing big companies' income from spare parts selling with 500% margin"


i wish there was an additional column in the table that says "what problem does it solve". oh, and 'it's written in rust' does not count.


“It’s written in Rust”

Actual LOL. Indeed. I was working for a large corporation at one point and a development team was explaining their product. I asked what its differentiators were versus our competitors. The team replied that ours was written in Go. #facepalm


The Rust rewrites can become tiresome; they have become a meme at this point. But there are really good tools among them too.

An example from my personal experience: I used to think that oxipng was just a faster optipng. I took a closer look recently and saw that it is more than that.

See: https://op111.net/posts/2025/09/png-compression-oxipng-optip...


If a new tool has actual performance or feature advantages, then that's the answer to "what problem does it solve", regardless of what language it's in.


Bingo.


> but there are really good tools there too.

That's the problem. How good are they? Who can tell? The basic UNIX tools didn't come about in "one day" like most of these "rust tools" did.


That is a differentiator if your competitors are written in Python or Ruby or Bash or whatever. But yeah obviously for marketing to normal people you'd have to say "it's fast and reliable and easy to distribute" because they wouldn't know that these are properties of Go.


You can write slow unmaintainable brittle garbage in any language though. So even if your competition is literally written in Bash or whatever you should still say what your implementation actually does better - and if it's performance, back it up with something that lets me know you have actually measured the impact on real world use cases and are not just assuming "we wrote it in $language therefore it must be fast".
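"Back it up with something measured" can be as simple as a few lines of Python's `timeit`. A minimal sketch, with a made-up example (string building) standing in for whatever the real workload is:

```python
import timeit

def join_concat(parts):
    # Naive version: repeated string concatenation.
    out = ""
    for p in parts:
        out += p
    return out

def join_builtin(parts):
    # Candidate "faster" version.
    return "".join(parts)

parts = ["x"] * 10_000

# First: check both implementations agree on a real input.
assert join_concat(parts) == join_builtin(parts)

# Then: measure, instead of assuming.
t_concat = timeit.timeit(lambda: join_concat(parts), number=100)
t_join = timeit.timeit(lambda: join_builtin(parts), number=100)
print(f"concat: {t_concat:.4f}s  join: {t_join:.4f}s")
```

The point isn't this particular micro-benchmark; it's that a performance claim should come with numbers from an input that resembles real use, not from "we wrote it in $language".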


> You can write slow unmaintainable brittle garbage in any language though.

Sure. You can drive really slowly in a sports car. But if you're looking at travel options for a long-distance journey, are you going to pick the sports car or the bicycle?

Also I have actually yet to find slow unmaintainable brittle garbage written in Go or Rust. I'm sure it's possible but it's vastly less likely.


> Also I have actually yet to find slow unmaintainable brittle garbage written in Go or Rust. I'm sure it's possible but it's vastly less likely.

Citation needed. /s


No. The differentiator is whatever benefits such an implementation might deliver (e.g., performance, reliability, etc.). Customers don’t start whipping out checkbooks when you say, “Ours is written in Go.”


That is what the post you're responding to is saying.


I’m still responding to the first sentence. It’s not a differentiator, even to smart engineers who understand the programming language in question (e.g., Go or Rust). Whenever an engineer leads with the programming language, I know that it’s just their favorite. You can write great Python, Ruby, Go or Rust, or you can write crappy Python, Ruby, Go, or Rust. Yes, some languages make more sense for certain environments (probably don’t want to do kernel programming in Ruby, for instance), but the programming language is never the differentiator itself. It’s what you make happen with it.


> You can write great Python, Ruby, Go or Rust, or you can write crappy Python, Ruby, Go, or Rust. ... the programming language is never the differentiator itself

This is called "the fallacy of grey". Very common.

https://www.lesswrong.com/posts/dLJv2CoRCgeC2mPgj/the-fallac...


TypeScript is the one true language that will bind them all in servitude.


Many of the entries do include this detail — e.g. "with syntax highlighting", "ncurses interface", and "more intuitive". I agree that "written in rust", "modern", and "better" aren't very useful!


Some of this just makes me think that they are compared against the wrong tool though. E.g.

> cat clone with syntax highlighting and git integration

doesn't make any sense because cat is not really meant for viewing files. You should be comparing your tool with the more/less/most family of tools, some of which can already do syntax highlighting or even more complex transforms.


Yup, I made that same point in another comment. Out of interest, though, how do you get syntax highlighting from any of those pagers? None of them give it to me out of the box.


Pipe it through vim? /s


Also using a non GPL license does not count.


A lot of those tools are also usable on Windows; that's why I like them.


It seems for most of them, the primary goal is improved UX.


that's really cool. to gain traction i would start with reimplementing all the tools from https://github.com/iovisor/bcc/tree/master/libbpf-tools in PythonBPF.


This is actually what we plan to do too!


it would be nice to have some peripheral drivers implemented (UART, eMMC etc).

having this, the next tempting step is to make `print` function work, then the filesystem wrapper etc.

btw - what i'm missing is clear information about the limitations. it's definitely not true that i can take any Python snippet and run it using PyXL (threads, for example, i suppose?)


Great points!

Peripheral drivers (like UART, SPI, etc.) are definitely on the roadmap - They'd obviously be implemented in HW. You're absolutely right — once you have basic IO, you can make things like print() and filesystem access feel natural.

Regarding limitations: you're right again. PyXL currently focuses on running a subset of real Python — just enough to show it's real python and to prove the core concept, while keeping the system small and efficient for hardware execution. I'm intentionally holding off on implementing higher-level features until there's a real use case, because embedded needs can vary a lot, and I want to keep the system tight and purpose-driven.

Also, some features (like threads, heavy runtime reflection, etc.) will likely never be supported — at least not in the traditional way — because PyXL is fundamentally aimed at embedded and real-time applications, where simplicity and determinism matter most.


Are you planning on licensing the IP core? It would be great to have your core integrated with ESP32, running alongside their other architectures, so they can handle the peripheral integration, wifi, and Python code loading into your core, while it sits as another master on the same bus as the other peripherals.

Do you plan to have AMBA or Wishbone Bus support?


Thanks — yes, licensing is something I'm open to exploring in the future.

PyXL already communicates with the ARM side over AXI today (Zynq platform).


> The available Linux debuggers are gdb and lldb. They both suck. They crash all the time, don't understand the data types I care about, and they don't make the data I need available at my fingertips as quickly as possible.

Okay, so this is the author's answer to the most important question: "why?"

For me this is a serious issue: making strong statements without a single backing example. If you experience crashes, please report them to the maintainers - i guarantee that you won't be ignored. You say that it's missing some data that you need? Fine, but be precise - what data?

Otherwise it sounds like "yeah, let's make a project for fun and pretend that it's serious by finding a sort-of justification/use case" (i'm not saying that this is the case for you - i'm saying that it sounds like it, based on your own description).

Also, how would you feel if i put a note in my project's README saying that your project, the one you put your effort into, "sucks"?


There are almost 4000 open issues on gdb reported more than five years ago.

In this conversation are reports of an annoying bug that requires users to patch gdb, and it has existed for almost twenty years.

It was years before anyone was even assigned to it, because of a bug in their bug tracking system, and they haven't addressed any further comments over the decades.


I'd rather the time taken enumerating the deficiencies of existing debuggers be spent building something better instead. I doubt many people that have used gdb/lldb need convincing that that's possible. If it is needed, Microsoft's windbg (far from what's possible but light years ahead of gdb/lldb) is a proof by construction.


Bit of a possibly unpopular take but I very much disagree. The syntax of windbg/cdb is just... Really bad. I feel like I'm writing assembly and the mnemonics don't actually align with what the command does. So it's difficult for me to get confident with it. At least the commands in gdb make sense (and I wish gdb supported windows executables, but alas...)


windbg has no sane library api, which makes it by construction unsuitable for automation and inspection.

Personally, I'd prefer the option of overlapping multiple time-travel debugging sessions, based on synchronization points if needed, to single out bugs in interesting areas (those not covered by static or dynamic analysis). The debugger would essentially be a virtual program slicer, able to hide unnecessary parts of the program or to reduce it. However, so far we have neither piece-wise time-accurate multi-threaded recording, nor slicing overlays, nor in-memory reducers (ideally ones that also reduce the AST). I may be mistaken about "piece-wise time-accurate multi-threading", since I haven't checked what is possible with hardware yet.

Heck, we don't even have overview data on common bug classes and the cost of verification or runtime checking in academia, etc.
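To make the "virtual program slicer" idea above concrete, here is a toy backward slice over straight-line assignments: keep only the statements that the target variable transitively depends on. This is nothing like a real slicer (no control flow, no aliasing, no interprocedural analysis), just a sketch of the concept:

```python
import ast

def backward_slice(src, target):
    """Return the source lines that the final value of `target` depends on,
    for straight-line code made of simple assignments (a toy model only)."""
    tree = ast.parse(src)
    lines = src.splitlines()
    needed = {target}          # variables whose definitions we still need
    kept = []
    for stmt in reversed(tree.body):
        if not isinstance(stmt, ast.Assign):
            continue
        targets = {t.id for t in stmt.targets if isinstance(t, ast.Name)}
        if targets & needed:
            # This statement defines something we need: keep it, and now
            # need every variable read on its right-hand side instead.
            needed -= targets
            needed |= {n.id for n in ast.walk(stmt.value)
                       if isinstance(n, ast.Name)}
            kept.append(lines[stmt.lineno - 1])
    return list(reversed(kept))

src = "a = 1\nb = 2\nc = a + 1\nd = b * b\ne = c * 3\n"
print(backward_slice(src, "e"))
```

For the sample program, slicing on `e` drops the `b` and `d` lines entirely; a debugger built around this idea could hide them the same way.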


> while there's also a push to statically link everything

could you elaborate? who is pushing, and what?

dynamic linking has its pitfalls, and it's often a pain, but it has big benefits as well.


> > while there's also a push to statically link everything

> could you elaborate? who is pushing and what?

I don't know of a more general movement, but Go developers seem very eager/proud about the single-binary thing. It can make deployments, particularly updates, much less issue prone.


Well, if you're deploying in a container where the only useful userspace program is your HTTP API server, embedding the whole libc and libstdc++ just for a few functions is wasteful; it is smaller and simpler to use static linking.


Aye, a container with the binary and the right versions of the supporting libraries is essentially static linking with extra steps.

Containers offer some tooling for resource management and such, though that is basically wrappers and other syntactic sugar dressing up OS facilities like control groups, so it isn't anything you can't do with a statically linked binary too.


And dynamic libraries can be prelinked to regain some performance: https://linux.die.net/man/8/prelink


related url, for syscall intercepting made easy: https://github.com/bieganski/asstrace/

see `pathsubst` example.

unfortunately set of use cases for `ptrace`-based solution is limited, due to high performance overhead.


can someone explain to me: isn't it the kernel's responsibility to preempt the CPU-heavy userspace thread (the noisy neighbour in our case) after a fixed time slice anyway?


If a sleeping low-latency task becomes runnable due to events (network, storage, timers) then ideally it would start running straight away, preempting a throughput-oriented task.


Is this not the case in Linux land? On Windows 7, I could run a CPU-based bitcoin miner that pegged every core (including hyperthreads) to 100% and still browse the internet with zero stutter, latency, or slowdown, because the Windows scheduler had no issues giving the UI or window event processing loop whatever time it needed and just fed whatever was left over to the number-crunching app.


I think they have priority boosting for apps that are in focus. It doesn't work for tasks in general, though. Network/disk throughput takes a hit when your CPU is pegged, for instance.


This is not true. I frequently run heavy compile jobs on my Windows machine that peg all cores to 100%. If I don't tell Visual Studio to run the compile tasks at background priority, they will starve the window server of resources and make interacting with the system incredibly slow. I would not want to use such a machine for browsing.


Part of that was likely due to Windows doing its graphics at kernel level instead of in userspace.


sounds like an interesting direction, but I don't understand why it should be coupled to a specific tool (pwndbg). Why not implement a BinaryNinja plugin that dumps all user-defined names (function names, stack variables), together with the original (stripped) binary, into a new ELF/.exe file with a symbol table and presumably a DWARF section?


I've developed a Ghidra extension that exports object files. I've considered generating debugging symbols in order to improve the debugging experience when reusing these object files in new programs, but I keep postponing that feature for various reasons.

Executable formats have at least one and often multiple debugging data formats, which are very different from each other: ELF has STABS and DWARF versions 1 to 5, MSVC has at least COFF symbols and PDB (which isn't documented)... Even discarding the old or obsolete stuff, there's no universal solution here. gdb+pwndbg seems to side-step this issue by integrating the debugger with Binary Ninja.

Projecting reverse-engineered information into a debugging data format would also be a technical challenge once you go past global variables and type definitions. Debuggers already have a terrible user experience when stepping through functions in an optimized executable; I doubt that reverse-engineered debugging data would be any better.

Toolchains also don't do a lot of validation or diagnostics on their inputs and I can tell from experience that writing correct object files from scratch is already quite tricky. I expect that serializing correct and meaningful debugging data would be much harder than that.

Doing this at the native executable level has the obvious advantage of working out of the box with standard tooling, but it would be a lot of work. I've already taken 2 1/2 years to make an object file exporter that's good enough for my needs and I'm still balking at generating DWARF debugging data every time I've considered it. I'm resigned to a terrible debugging experience and so far I've managed to muddle through it.


> Debuggers already have a terrible user experience when stepping through functions in an optimized executable ; I doubt that reverse-engineered debugging data would be any better.

Actually, I suspect it could be a world of difference. The main failures of optimized debugging are that code motion makes line number tables more of a suggestion, that source-level values may disappear in favor of other related values (e.g., after SROA, or after changing A[i] to p++), that live ranges of variables may shrink, and that debugging information may only describe variables in stack slots or other specific locations. If you generate debugging information from the decompiled code, you control the output code and can make the first two problems more or less go away, so you only really have to worry about narrow live ranges (can't do much about that) or debuggers not supporting the features you need (which, given DWARF, is a very real possibility).
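For context, a line number table is conceptually just a sorted pc-to-line mapping: each row says "from this address until the next row's address, you are on this source line". A toy sketch (the addresses and line numbers are invented; real DWARF encodes this as a compact state-machine program rather than literal rows):

```python
import bisect

# Toy "line table": sorted rows of (start_pc, source_line), a much-
# simplified stand-in for a DWARF line program's output.
rows = [(0x1000, 10), (0x1008, 11), (0x1014, 13), (0x1020, 17)]
starts = [pc for pc, _ in rows]

def line_for_pc(pc):
    # Find the last row whose start_pc is <= pc.
    i = bisect.bisect_right(starts, pc) - 1
    if i < 0:
        return None  # pc is before the function's first instruction
    return rows[i][1]

print(line_for_pc(0x1009))  # falls inside the second row's range
```

Optimization "makes this a suggestion" precisely because code motion shuffles which instruction ranges came from which line, so rows stop being contiguous or monotonic in source order; generating the table from decompiled output sidesteps that.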


> PDB (which isn't documented)...

What about https://github.com/Microsoft/microsoft-pdb ? Not technically documentation, but the closest thing to it.


I was mostly aware of https://llvm.org/docs/PDB/index.html, which paints a rather complex high-level picture.

At any rate, while my extension has exporters for both ELF and COFF I'm personally using only the former. The latter is user-contributed and I don't know much about the Microsoft toolchain and file formats.


that's already available in binaryninja out of the box, via the (seemingly undocumented) 'plugins' -> 'export as dwarf' menu item.


Not really: "export as dwarf" only gives types and symbols; it does not generate the DWARF pc -> source line information, which this plugin provides via a custom context pane. Manually running the export every time a pc step occurs would also be quite painful.

