Hacker News new | past | comments | ask | show | jobs | submit | KallDrexx's comments login

It might not have been slowing down much in that time due to a thing called ground effect. Since the wheels weren't down, the flat underside of the fuselage plus the wings would have reduced induced drag and cushioned the plane on a layer of air for a bit, causing it to not slow down as much as you would assume.

This is awesome, and I really want to learn enough verilog to do a tiny tape out VGA chip design.

But man, 8bitworkshop really burned me out on trying to do verilog in a web IDE, and trying to set up verilator properly for local simulation with a proper IDE became such a hassle.

Ended up moving away from verilog for the moment back to normal software projects. I really want to get back in, but I really don't want to spend my limited coding time fighting IDEs and tooling.


Welcome to Hardware Design! The open-source toolchains come from hell. The closed-source options are supremely expensive and not much better.

They don't call it EasyWare


Try SpinalHDL or Chisel and never look back.


I was worrying about debuggability if I started with a higher level Verilog transpiler and didn't have a good foundation of Verilog experience.

But maybe I should just do it.


a cheap FPGA kit is the best way to start learning IMHO.

Xilinx (now AMD) and Altera (now Intel) free tools are quite good.


Maybe I'm reading something wrong, but the discussion this HN post is about sounds very much like an attempt to build a Linux subsystem and API in Rust, so that Rust's type system can enforce conformance to safety mechanisms via its abstractions.

That's fundamentally different from, and harder than, a driver written in Rust that uses Linux's existing C subsystem APIs.

I can see a lot of drivers easily taking on the complexity tax of being written in Rust. The complexity tax of writing a whole subsystem in Rust seems like a far harder problem.


You could just write rust code that calls the C APIs, and that would probably avoid a lot of discussions like the one in the article.

But making good wrappers would make development of downstream components even easier. As the opponents in the discussion said: there are about 50 filesystem drivers. If you make the interface better, that's a boon for every file system (well, every file system that uses Rust, but it doesn't take many to make the effort pay off). You pay the complexity tax once for the interface, and get complexity benefits in every component that uses the interface.

We would have the same discussions about better C APIs, if only C was expressive enough to allow good abstractions.
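To make the wrapper argument concrete, here's a minimal sketch of the pattern in plain Rust. This is not actual kernel code; the `RawFs`/`c_fs_*` names are invented stand-ins for a C-style API, stubbed out in Rust so the example is self-contained and runnable:

```rust
// Hypothetical C-style API of the kind a kernel subsystem might expose.
// In real kernel work these would be `extern "C"` declarations; here
// they are stubbed in Rust for illustration.
struct RawFs {
    open_count: u32,
}

unsafe fn c_fs_open() -> *mut RawFs {
    Box::into_raw(Box::new(RawFs { open_count: 1 }))
}

unsafe fn c_fs_read(fs: *mut RawFs) -> u32 {
    (*fs).open_count // pretend this reads something useful
}

unsafe fn c_fs_close(fs: *mut RawFs) {
    drop(Box::from_raw(fs)); // frees the handle; calling twice would be UB
}

/// Safe wrapper: ownership of the raw handle is encoded in the type,
/// so use-after-close and double-close become compile errors for any
/// downstream driver code that goes through it.
struct Fs {
    raw: *mut RawFs,
}

impl Fs {
    fn open() -> Fs {
        Fs { raw: unsafe { c_fs_open() } }
    }

    fn read(&self) -> u32 {
        unsafe { c_fs_read(self.raw) }
    }
}

impl Drop for Fs {
    fn drop(&mut self) {
        // Close runs exactly once, automatically, when the handle
        // goes out of scope.
        unsafe { c_fs_close(self.raw) }
    }
}

fn main() {
    let fs = Fs::open();
    println!("{}", fs.read()); // prints 1
} // `fs` dropped here; c_fs_close is called for us
```

Every driver that goes through `Fs` gets the no-double-close, no-use-after-close guarantees for free; that's the "pay the complexity tax once at the interface" argument in miniature.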


Two wrongs don't make a right.

AI being weaponized on the application side hurts us all, and leads to an environment where it's extremely difficult to get noticed as a job seeker. Yes, networking helps, but that's not always possible, especially if you are trying to change industries or have generally worked at smaller companies where your network doesn't extend far.

It also leads to less diverse environments where hiring managers only look at resumes passed along by friends or friends of friends.


There's the open-source Digital (https://github.com/hneemann/Digital), which can run simulations and then export Verilog. If you have an iCE40-based FPGA then in theory you can use open-source tools (like apio) to get that design onto the board. I've seen some impressive FPGA projects built that way.

I'm early in my FPGA learning and have done nandgame and some other non-HDL circuit exercises. I have gone back and forth on whether I want to design my project via HDL or via something like Digital. There's no easy pro/con either way.

For example, everything I've read says that the Verilog these block-diagram tools generate isn't behavioral, which makes optimization by FPGA compilers hard: the toolchain can't automatically infer optimal gate usage when you hand it gate-level structural HDL instead of behavioral HDL. Likewise, it's not totally clear whether block-diagramming tools can support test benches, which I think I want in order to prevent regressions.


This is my dream!

For the last year I've been working on a 2D-focused GPU for I/O-constrained microcontrollers (https://github.com/KallDrexx/microgpu). I've been able to use it to get user interfaces from machines with slow SPI rendering on large displays, and it's been fascinating to work on.

But seeing the limitations of processor pipelines, I've had the thought for a while that FPGAs could make this faster. I've recently picked up some low-end FPGAs to start learning, to try to turn my microgpu from an ESP32-based design into an FPGA-based one.

I don't know if I'll ever get to this level due to kids and free-time constraints, but man, I would love to get even a hundredth of the way there.


You probably know this already, but for anyone else curious about going down that road: for this type of use, it's definitely worth constraining yourself to FPGAs with dedicated high-bandwidth transceivers. A "basic" 1080p RGB signal at 60 Hz requires some high-frequency signal processing that's really hard to contend with in pure FPGA fabric.
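For a sense of the numbers behind that: the pixel clock scales with the *total* (active plus blanking) frame size. Using the standard CEA-861 timing totals for 1080p60 (2200x1125 — these figures come from the spec, not from the comment above), a quick sketch:

```rust
// Pixel clock = horizontal total x vertical total x refresh rate,
// where the totals include the blanking intervals around the
// active 1920x1080 (or 640x480) picture area.
fn pixel_clock_hz(h_total: u64, v_total: u64, refresh_hz: u64) -> u64 {
    h_total * v_total * refresh_hz
}

fn main() {
    // 1080p60, CEA-861 totals: 2200 x 1125.
    let hd = pixel_clock_hz(2200, 1125, 60);
    println!("{} MHz", hd as f64 / 1e6); // 148.5 MHz

    // 640x480 VGA, totals 800 x 525. (The nominal VGA clock is
    // 25.175 MHz at 59.94 Hz; assuming exactly 60 Hz here.)
    let vga = pixel_clock_hz(800, 525, 60);
    println!("{} MHz", vga as f64 / 1e6); // 25.2 MHz
}
```

At 24 bits per pixel, that 148.5 MHz clock is roughly 3.6 Gbit/s of raw pixel data, which is where the dedicated transceivers earn their keep; the ~25 MHz needed for 640x480 VGA is comfortable for ordinary FPGA I/O.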


That's good to know, actually. I'm still very, very early in my FPGA adoption (learning the basics) and I intend to start with standard 640x480 VGA before expanding.


Despite my day job involving Rust, I struggled pretty hard getting Rust working adequately for my little GPU project on both the Pi Pico and an ESP32-S3. Both had debuggability issues, missing features that would have required advanced FFI integrations to add, and things that just wouldn't work. I spent a week on it making no progress, and with only an hour an evening to work on it, that was frustrating.

I then spent 2 days brushing up on C and got things up and running with ESP-IDF painlessly. I've been iterating really fast in C just fine, and the few times I've wished for Rust's features have been eclipsed by the advanced ESP-IDF APIs I've needed that aren't supported in the Rust HALs.


My experience wasn’t stellar but it wasn’t like yours. There’s of course no comparison between the speed of getting up and running with the official sdk and toolchain, but it wasn’t so miserable.


If you believe the FTC in their current active antitrust case, the reason for this is that if Amazon sees your product cheaper elsewhere (even if it's the same price after shipping), they will down-rank your product in search results and add friction to the buying process.

Since no manufacturer can risk having no Amazon sales, this means manufacturers set a base price that matches Amazon's.

Amazon recoups free shipping cost through additional merchant markups.

This means consumers pay increased prices everywhere, and we are all indirectly paying the free shipping cost even if you walk into a store.


I don't have numbers, but there seems to be a constant stream of new programming books coming out from Manning, Packt, and O'Reilly.

So it seems to me it's not that people don't like books; they just take longer to produce and thus have less visibility to those not explicitly looking for them.


Compared to Khan Academy, which also has a proven track record, is free, and has been around for a long while.


If your time is worth even $1/hour, the "free" Khan Academy option will be far more expensive than this program.

I respect what Sal Khan built, especially in the early days, but it's just not anywhere near as time-efficient.


Is there any data to back that up? Just because it's a paid resource doesn't mean it's better.

