reflexe's Hacker News comments

"Not Like Us" is pretty new (May '24); I'm not sure any proper LLM could have been trained on it. All big OpenAI models know nothing about 2024.


Nothing a simple "Browse the Web" plugin/tool before replying can't fix ;)


Maybe I am missing something, but while it is interesting, I don't think it has any real security impact.

The threat model is that the attacker and the victim are connected to the same router via the same Wi-Fi network and are not isolated from each other. In that case, if you are using Wi-Fi with a PSK for example, the attacker can already sniff everything from other clients.

Therefore, you can spoof packets by just responding to them directly. It is a lot simpler and takes a lot less time (since you just need to respond faster than the server, with the right seq and port numbers). Once you are on the same network you can do even crazier stuff, like ARP spoofing: let the victim think that you are the router and convince it to send all of its packets to you (https://en.m.wikipedia.org/wiki/ARP_spoofing)

Edit: on second thought, in a case where the victim and the attacker are on different Wi-Fi networks (or just configured to be isolated), the attacker might be able to perform a denial of service against a specific ip:port by sending a RST and then an ACK with every possible source port.


It also only works with unencrypted connections (FTP, HTTP), which one should not be using anyway. And like you say, on open or PSK networks you can do worse stuff (if isolation is not enabled, ARP spoofing the default gateway will be way worse than this).


wxHexEditor is great but not really maintained and sometimes crashes (it even has a built-in prayer to save you from crashing: https://github.com/EUA/wxHexEditor/blob/master/src/HexEditor...). A good replacement is ImHex (https://github.com/WerWolv/ImHex), which does the job really well.


ImHex looks amazing, but I couldn't get it to work on my system last time I tried, neither a pre-built version nor compiling it myself. So I wrote myself a simple hex viewer. Only a viewer; I don't need an editor.

All the other hex editors that I could get to work on my system were really disappointing. Either they couldn't handle large files (>2GB), or they lacked features like decoding the bytes at the current location as various integer types, had very cumbersome controls for navigation, or displayed important information like the current offset in uneditable labels (the status bar) and didn't even give it enough room for large files, so it got cut off! Did they never use their own program?

Anyway, my viewer only has a terminal interface, so you can always select and copy any text it displays. It also has IMHO handy controls for jumping around to absolute and relative offsets. See: https://github.com/panzi/rust-hox But don't look at the ugly code. I just cobbled it together somehow because I needed exactly that.


I noticed hexyl wasn't on your list: https://github.com/sharkdp/hexyl

Your software seems to be in the same vein as hexyl. I can't personally vouch for how well it handles large files because it's been a while, but I suspect it'll do alright.


Is that an actual viewer with navigation and all, or is it just like xxd, but with Unicode? Dumping gigabytes to the terminal isn't what I want.


I've actually looked at hexyl and wxHexEditor now and added comments on those to the README of my own hex viewer.


Not sure I am following: what problem is your product trying to solve? Helping to write tests, run the tests, or just organizing tests as part of the CI pipeline? How is it different from just running tests? (Or is it a platform to run tests on?)

If you are trying to do CI for silicon, then what is your target market? From my experience, companies that design their own silicon are usually big enough to have their own custom pipeline for testing and verification, and it would be quite difficult to convince them to switch. Smaller companies get help from larger companies with development and verification.

Do you have any tooling that won't require the developer to write tests? (E.g. something that will ‘work’ with no effort from the developer's POV, kind of a SonarQube for VHDL/Verilog.)

In any case, good luck. Glad to see some HW-related startups.


Hey, thanks!

CI is one component of our platform. Most other CI tools are pretty agnostic about how tests are structured, though. We also integrate a way to structure your tests into groups so you can control when each test is called. For example, if one test out of 500 fails, it's super easy to rerun that one test with verbose logging and wave dumping enabled. We then also track test pass/fails over time, have tools to leave comments for coworkers on waveforms and logs in the browser like in Google Docs, etc.

Out of curiosity, what do you mean by "Smaller companies get help from larger companies in development and verification"?


In my experience at two HW companies that developed their own ASICs (one a startup and one a publicly traded company), we never developed any chip fully by ourselves. In all cases there was another, larger company that helped make the project work so that we would actually end up with wafers.

If you are not at the scale of NVIDIA/Intel, releasing new silicon every other month, it is not worth it to recruit so many people for a relatively short period. I am not fully sure how involved those partners were in the pre-silicon verification process, but at least in some cases they were very involved in the development.


That's not correct. I've worked everywhere from start-ups to semiconductor giants. The first option is always to develop everything in house, if you can find the talent. This is pretty much industry standard.


What ASIC/semi start-up that you know of is developing everything in house? That is absurdly complex and costs hundreds of millions of dollars...


Pretty much most of them. They might buy a small IP or two here and there, but otherwise everyone develops their design mostly in house. It's not hundreds of millions; that's a ridiculous amount of money unless you are designing a huge CPU or TPU or so. We (I can't give the company name) design quite large chips with complex analog and digital parts in 7nm and 5nm as a start-up, and our seed funding was less than 20 million. That is about the bare minimum funding for a semi start-up anyhow.


From my experience, the biggest footgun with shared_ptr and multithreading is actually destruction.

It is very hard to know which thread will call the destructor (which is by definition a non-thread-safe operation), and whether a lambda is currently holding a reference to the object or its members. Different runs result in different threads calling the destructor, which makes it very painful to predict and debug.

I think that Rust suffers from the same issue, but maybe it is less relevant there, since it is a lot harder to cause thread-safety issues in Rust.


> which is by definition a non-thread-safe operation

Yes, but at that point, since the reference count is reaching 0, there is supposed to be only that one thread accessing the object being destroyed, so the destruction not being thread-safe should not be a problem.

Otherwise, it means there was a prior memory error where a reference to the pointed-to object escaped the shared_ptr, and from there the code is busted anyway. By the way, that cannot happen in Rust.

> Different runs result different threads calling the destructor

What adverse effects can happen there? I can think of a performance impact, if a busy thread ends up destroying the object, or if there is a pattern of always offloading destruction to the same thread (or both at once). I can also think of potential deadlocks, if a thread holding a lock must take the same lock to destroy the object (unlikely in Rust, where the Arc would typically contain the object wrapped in its Mutex, and the mutex wouldn't be reused for locking other parts of the code). There isn't much else I can think of; what do you have in mind?

> whether a lambda is currently holding a reference to the object, or its members

This cannot happen in Rust. If a lambda is holding a reference to the object, then it either has (a clone of) the Arc, or is a scoped lambda to a borrow of an Arc.


Looks like this is not the only problematic example. For instance, https://demo.corgea.com/338 makes sure you don't try to get ctf.key (but not .env, for example). Another issue: https://demo.corgea.com/531# The LLM makes up a usage of shell=True despite the original "vulnerable" code not using it.

Well, at least they are showing a real demo and not some made up results.

I think that overall the idea has some potential, but not sure we are there yet.


Thanks for the feedback!

For the first one: the SAST scanner reports issues to us by line and issue type, so we generate fixes isolated to that issue. We do not generate fixes for other vulnerabilities in the same file as part of the same finding, because we want one fix per finding. The other problem might be reported as a separate finding, and we plan on allowing people to group fixes in the same file together.

Not sure if I'm missing something on the shell=True. It's in the vulnerable code, which is why it was changed. You have to scroll to the right in the code viewer. https://github.com/RhinoSecurityLabs/cloudgoat/blob/8ed1cf0e...

Is there something I'm missing?


For the first issue: I understand. Thanks.

As for the second: there is no shell=True for me in the demo, but it is present in the code you linked. So maybe it is just a bug in the presentation somewhere.


Scrolling to the right should work, but you'll need to do so on each code editor section. We should combine scrolling of these two windows to be in sync.

We'll also take a look at what's causing this. It might be a browser issue.


They scroll in sync for me, but long lines seem truncated in iOS 16.2 Safari. No visible code on that second linked page includes the string in question.


Thanks for sharing! Will look into it :)


Same here; must be a bug in the view. For me the closing parenthesis is missing as well.


Actually, at its root it is based on SIMD and prefetching. In short, each part of the packet processing graph is a node. A node receives a vector of packets (represented as a vector of packet indices), and its output is one or more vectors, each of which goes as input to the next node in the processing graph. This architecture maximizes cache hits and keeps the branch predictor warm (since we run the same small piece of code over many packets instead of the whole graph over each packet).

You can read more about it here: https://s3-docs.fd.io/vpp/24.02/aboutvpp/scalar-vs-vector-pa...


I can certainly imagine some SIMD concepts in that. Particularly stream-compaction (or in AVX512 case: VPCOMPRESSD and VPEXPANDD instructions)

EDIT: I guess from a SIMD-perspective, I'd have expected an interleaved set of packets, a-la struct-of-arrays rather than array-of-structs. But maybe that doesn't make sense for packet formats.


The NIC gives you an array (ring buffer) of pointers to structs (packets). Interleaving them into SOA format would probably cost more than any speedup from SIMD.


Yeah, but it's difficult to write a SIMD / AVX512 routine if things aren't in SOA format.

I can see how this approach described is "vector-like", even if the vector is this... imaginary unit that's parallelizing over the branch predictor instead of an explicit SIMD-code.

This "vector" organization probably achieves 99.999%+ branch prediction accuracy or something, effectively parallelizing the concept, but not in the SIMD way. So it's still useful, just not what I originally thought based on the title.


A ring buffer of pointers to structs is friendly to gather instructions. That said, the documentation shows a graph of operations applied to each packet. I'd expect that to lead to a lot of "divergence", and therefore being non-SIMD friendly.

(also, x86-64 CPUs with good gather instructions are rare, and sibling comments show that this is aimed at lower end CPUs. That makes SIMD even less relevant.)


Most packets follow the same nodes through the graph. You have some divergence (e.g. ARP packets vs. IP packets to forward), but the bulk of the traffic does not diverge. So typically the initial batch of packets might be split in two: a small "control plane traffic" batch (e.g. ARP) and a big "dataplane traffic" batch (IP packets to forward). You won't do much SIMD on the small control-plane batch, which is branchy anyway, but you do on the big dataplane batch, which is the bulk of the traffic.

And VPP is targeting high-end systems too, and uses plenty of AVX512 (we demonstrated 1 Tbps of IPsec traffic on Intel Ice Lake, for example). It's just very scalable, to both small and big systems.


I have been developing a product that uses VPP in production for a few years now. It is very cool to see how much you can squeeze out of cheap, low-power CPUs. You can easily handle tens of Gbit/s of iMIX traffic with a few ARM Cortex-A72s.

VPP has very good documentation: https://s3-docs.fd.io/vpp/24.02/ A very cool, unique feature is the graph representation of packet processing, and the ability to dynamically insert processing nodes into the graph, per interface, at a given point in the processing, using "features" (https://s3-docs.fd.io/vpp/24.02/developer/corearchitecture/f...)


VPP has been shown to run at 22.1 Mpps on a single core of Gracemont (the efficient / Atom core in Alder Lake), and 42.3 Mpps on 2 cores (Intel E810 4x25 NIC, DPDK 22.0, VPP 22.06, GCC 9.4.0, RFC 2544 test with packet loss <= 0.1%).

The same core will do 14.99 Gbps of IPsec (AES-128-GCM, 1480-byte packets) using VPP, largely because it supports (VEX-encoded) VAES.

While these aren't ARM Cortex-A72s, they're Intel's closest equivalent (cheap, low power).


Maybe it is just my mood, but this blog's full-screen newsletter subscription banner on mobile is much more infuriating than whatever Jitsi did (screenshot: https://imgur.com/a/fSEyZ8x)


It is only available for Enterprise (which is unreasonably expensive: $21 vs. $4 per month per seat). We asked our DevOps team to develop something similar using our existing CI infra; it took them a few days.


What did they use? A GitHub Action that runs on PRs and creates other PRs with merged changes?


I looked into that. Basically, a GitHub Action that goes over all of the PRs marked as "automerge" and updates them to master. It is not a perfect solution, but it works perfectly well for our monorepo.

