
Take a look at Core Feature #2 in this post - https://deepflow.io/ebpf-the-key-technology-to-observability...

It looks like it's using the TCP flow tuple + tcp_seq to join things.
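For illustration, the join key is roughly the TCP 4-tuple plus the sequence number of the request bytes; something like this (field names are my guess, not DeepFlow's actual layout):

    /* Illustrative only, not DeepFlow's actual structure: the TCP
       4-tuple plus the sequence number of the payload byte. The seq
       is the same on the sending and receiving host, so it lets you
       join observations of the two sides of one request. */
    #include <stdint.h>

    struct flow_join_key {
        uint32_t saddr;     /* source IPv4 address */
        uint32_t daddr;     /* destination IPv4 address */
        uint16_t sport;     /* source port */
        uint16_t dport;     /* destination port */
        uint32_t tcp_seq;   /* seq of the first request byte */
    };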


I'd love to see a size comparison of the DWARF encoding vs the binary search map. I have a strong suspicion that there's a neat perfect hash solution to this problem - perfect hashes can encode keys in a single-digit number of bits each, and you get faster lookups.


Let me check! I don't expect our approach to be significantly more compact than DWARF's. DWARF is very compact despite being so much more expressive.

Funny that you mention perfect hashing; I thought about it, but didn't go for that approach as it would have too many drawbacks for our use case:

- It would further increase the complexity of the unwinder, which is already not trivial. Ideally, we would like to reduce the surface area of things that can go wrong.

- Implementing perfect hashing in BPF might be tricky for several reasons. First, it might take quite a few precious instructions, but it would also force us to ship a compiler to generate the BPF code on the fly rather than shipping it pre-compiled. We really want to avoid this.

- Last, it would force us to generate entries for every program counter, while thanks to using binary search we can omit redundant entries. DWARF expressions would suffer from a similar issue.
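For the curious, the lookup is roughly shaped like this; a minimal sketch with illustrative types and field names, not the actual unwinder code:

    /* Sketch: binary search over rows sorted by program counter.
       Because the lookup finds the greatest row with pc <= target,
       consecutive PCs that share unwind info collapse into a single
       row -- the redundancy elision mentioned above. */
    #include <stdint.h>
    #include <stddef.h>

    struct unwind_row {
        uint64_t pc;       /* first PC this row covers */
        int16_t  cfa_off;  /* offset to the canonical frame address */
        int16_t  rbp_off;  /* offset of the saved frame pointer */
    };

    /* Fixed iteration count, as a BPF verifier would require:
       19 halvings cover up to ~500k rows. */
    static const struct unwind_row *
    find_row(const struct unwind_row *rows, uint32_t n, uint64_t pc)
    {
        uint32_t lo = 0, hi = n;
        for (int i = 0; i < 19; i++) {
            if (lo >= hi)
                break;
            uint32_t mid = lo + (hi - lo) / 2;
            if (rows[mid].pc <= pc)
                lo = mid + 1;   /* rows[mid] is a candidate; go right */
            else
                hi = mid;
        }
        return lo ? &rows[lo - 1] : NULL; /* greatest row with pc <= target */
    }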


> Note that seccomp has limited visibility into recvmsg / sendmsg args because bpf can't dereference syscall arg pointers.

BPF programs attached to syscalls (via kprobe or fentry) can read pointer arguments via helpers (bpf_probe_read_{user,kernel}). Seccomp uses "classic BPF", which has no concept of helpers or calls.
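A minimal libbpf-style sketch of the difference, assuming x86-64 syscall wrappers and a kernel with bpf_probe_read_user (5.5+); names are illustrative:

    /* kprobe on the sendmsg syscall that dereferences the user-space
       msghdr pointer -- exactly the step a seccomp cBPF filter can't
       perform. */
    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    char LICENSE[] SEC("license") = "GPL";

    SEC("kprobe/__x64_sys_sendmsg")
    int BPF_KPROBE(trace_sendmsg, struct pt_regs *regs)
    {
        struct user_msghdr *umsg;
        struct user_msghdr msg;

        /* Second syscall arg (rsi): const struct user_msghdr __user * */
        bpf_probe_read_kernel(&umsg, sizeof(umsg), &regs->si);

        /* Follow the user pointer with a helper. */
        if (bpf_probe_read_user(&msg, sizeof(msg), umsg))
            return 0;

        bpf_printk("sendmsg msg_iovlen=%lu", (unsigned long)msg.msg_iovlen);
        return 0;
    }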


The FDA's report on third party servicing says this:

> The currently available objective evidence is not sufficient to conclude whether or not there is a widespread public health concern related to servicing, including by third party servicers, of medical devices that would justify imposing additional/different, burdensome regulatory requirements at this time. Rather, the objective evidence indicates that many OEMs and third party entities provide high quality, safe, and effective servicing of medical devices.

https://www.fda.gov/media/113431/download

From my personal experience poking at a CPAP machine, there's nothing magical about it. All the sensors and active elements I could track down are available from the respective manufacturers in large quantities. The CPU is a freaking off-the-shelf STM32F4 with the JTAG header still on the board. This is not some impossible-to-debug, hyper-integrated design.


This article conflates a lot of things, but it also has the priorities somewhat wrong.

1) fsync cost. Yes, fsyncs are dangerously slow in any Android app (SQLite, for example, is a common culprit; SharedPreferences are another). HOWEVER, it's possible that flushes cause reads to be queued behind them (either in the kernel or on the device itself), which is even worse, because

2) Random read cost is super, super important. Android mmaps literally everything, and demand paging is particularly common AND horrendous as a workload. To add insult to injury, Android does not madvise the byte code or the resources with MADV_RANDOM, so read-ahead (or read-around) kicks in and you end up paging in 16KB-32KB where you only wanted 4KB.
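The fix is a one-liner after mapping; a minimal sketch (the file path is just an example):

    /* Map a resource file and disable read-ahead with MADV_RANDOM so
       a single-page fault doesn't drag in 16-32KB of read-around. */
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/system/framework/framework.jar", O_RDONLY);
        if (fd < 0)
            return 1;

        struct stat st;
        if (fstat(fd, &st) < 0)
            return 1;

        void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED)
            return 1;

        /* Declare random access: the kernel then faults in only the
           pages actually touched. */
        madvise(p, st.st_size, MADV_RANDOM);

        /* ... demand-page individual entries here ... */

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }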

Also, history has shown custom flash-based file systems on Android to be a world of pain. YAFFS and JFFS have some pretty atrocious bugs/quirks. I'd much rather see the world unify on common file systems optimized for flash-like storage, rather than OEMs shipping their own in-house broken file "systems" (I'm looking at you, Samsung).


Why can't f2fs be that common file system?


I just read the F2FS paper and it seems very well-designed to match the physical properties of flash, plus some interesting capabilities to keep hot/cold data separate. If there's something wrong with F2FS, let's fix it. This seems like a far better place to start from than any filesystem designed around the assumptions of a spinning disk.


It's in the mainline Linux kernel now, it's hardly some proprietary obscure vendor thing.


That's fair; it's a better state than the previous attempts.

Still, it's not as well tested as, say, btrfs or ext4. Can't wait to see its particular quirks.


While it no doubt impacts performance in some cases, MADV_RANDOM probably isn't the correct choice in a lot of circumstances (the classloader tends to do a linear scan of JAR files, for example).


Except that's not how class linking works on Android :)

In particular, everything is compiled down to lookup tables and hash tables within each odex/oat. Your point still stands, but the hit is much lower than you would think, and given the slow speed of the superfluous reads, it ends up being a net positive in A LOT of cases.


Well, during verification, you very much do a linear scan as I described. Of course, you only verify once, so that mitigates that use case.

The way odex files are structured, there is actually a fair bit of data sequentially organized (for example dependencies), even with the indexing. The odex format does seem to have some elements that anticipate read-ahead (e.g.: those hash tables, dependencies...).

That said, there is a real question about the proper tuning of read-ahead for flash memory (like, perhaps 4KB or even zero-byte read-ahead is the right thing to do in general ;-). It's not like it is hard to abuse it.
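For anyone who wants to experiment, read-ahead is tunable per block device via sysfs; a sketch (the device name is an assumption):

    /* Equivalent to: echo 4 > /sys/block/mmcblk0/queue/read_ahead_kb */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/sys/block/mmcblk0/queue/read_ahead_kb", "w");
        if (!f)
            return 1;
        fprintf(f, "4\n");  /* value in kilobytes; 0 disables read-ahead */
        fclose(f);
        return 0;
    }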


> Android mmap's literally everything

Why is that so?


allows code & resources to stay as clean pages because clean pages can be swapped out even though there's no swapfs whereas dirty pages can't (since there's no swapfs).


Why would you want it NOT to be so?


> and in a mobile device OS and application SW are tightly coupled

I call bullshit. There's no reason Google can't update everything AOSP-y in /system - libc, libart, libwebkit etc.

> Google (and Apple and Microsoft) can totally do it for devices that manufactures and maintains on its own

That's a low bar. When you buy a Dell laptop, you continue to receive updates from Microsoft. This is the bar we should hold Google to.

As for the certification process, surely having one update that ships to N models is easier to test than N updates shipping to N models?


Except it's not that simple in the Android world. Someone explained it really well a couple days ago: https://news.ycombinator.com/item?id=13057605

The basics are that every phone out there uses a forked Linux kernel, patched to hell to get it working. Since none of the drivers are upstreamed, it's unmaintainable.

The Linux kernel does not have a stable driver interface, so shipping updates to phones is a LOT of work.


This doesn't explain why they can't upgrade user-space applications and libraries. It's very rare for a user-space application upgrade to require a kernel update on any major operating system.


> > and in a mobile device OS and application SW are tightly coupled

> I call bullshit. There's no reason Google can't update everything AOSP-y in /system - libc, libart, libwebkit etc.

That's not the point. Even if it were so, it's still the responsibility of the manufacturer to integrate it into its own firmware and push the update with the carrier's approval.

You are comparing a laptop to a smartphone, which makes no sense: the smartphone has to connect to a cellular network to be useful, and it's the carrier that establishes the rules for the update process.

I agree that it should work as you say for devices with no cellular connectivity, such as WiFi-only tablets, where no parties other than the OS and device manufacturers are involved.


No, that's not how it works. Apple can push any iPhone firmware update they want without carrier approval.


You are right in the case of Apple, but I don't think that is the normal process.

Although my experience in this matter is limited, this is what I was able to find:

https://www.quora.com/Why-is-it-that-Apple-can-push-out-upda...


Trader Joe's has Unpasteurized Orange Juice, which does taste like freshly squeezed juice. I wonder how real that is.


Last I looked, no, but the immediate cause of nondeterminism I saw was the zip entry timestamps in the APK. I didn't bother looking further down the chain.


Damn Interesting did a full episode on the entire event - http://www.damninteresting.com/the-zero-armed-bandit/

It's really fascinating stuff.


I love how much longer the documentation is than the actual file. That said, I'm also somewhat wary of using forked pgsql and Ubuntu images.

