I don't think security is that simple; it's not a yes/no binary. There are many avenues of attack, and no single mechanism could ever stop them all. In particular, static analysis tools don't catch much in the way of security issues. Apple never sees the source code, so they can't even verify that an app was built with an Apple-approved compiler.
The biggest safety precaution against something like this is app sandboxing, which severely limits the amount of damage that a malicious developer can do.
Apple doesn't even really try to do this sort of analysis. They do a quick pass to check for private API usage and such, but otherwise app review is all about checking presentation and functionality to make sure you comply with Apple's rules, e.g. making sure you aren't exposing fully-functional web access in a child-rated app, or mentioning the word "Android" anywhere.
This is a common misunderstanding, and it seems to be one that Apple is happy to spread. Whenever the merits of app review are discussed, some people bring up the security advantages of it. But the fact is, there are none, as XcodeGhost demonstrates nicely. iOS's security is due entirely to the strict sandboxing for third-party apps. App review just lets Apple control what kind of content can be in the store.
The "malware" can do two things: send tracking information to a server, and show popups (which may be spoofed as login dialogs). Both of these could be entirely intentional on the part of the app authors, in which case they would be considered features. So it is almost impossible for anyone other than the app authors to find out that an app has been infected.
Thing is, userspace networking is a lot like the GPU business: often the software is free or even open source, but the hardware is proprietary.
I know Intel has software for their NICs to enable userspace networking, but I don't think any HFT firm uses it. It may be all right for hobbyist experimentation. I left the industry about a year ago, so my information is a bit out of date, but my former employer used either Solarflare cards with the OpenOnload stack (very good cards and awesome software) or the Mellanox CX series with the VMA stack (amazing hardware with mediocre software; VMA is now open source, so perhaps that situation has improved).
Note that these cards are a few thousand dollars each, so out-of-reach for hobbyists.
I can't comment on HFT because I have no experience with it, and their focus is latency rather than throughput.
For the latter (which matters in routers, for instance), netmap runs on everything (either natively or with some emulation). The Intel 10G cards (which DPDK of course supports) are around $300-400, I think, and the 1G cards start at a few tens of dollars (not that you need userspace networking at 1G).
This stack may be open source, but it is handcrafted to work only with the Solarflare NICs. Having looked at it seriously, we decided that it wasn't worth trying to port this to our own devices and therefore wrote yet another one. This time, without the pretence of "open".
There are one or two. The most prominent is netmap (https://code.google.com/p/netmap/); another is the Intel Data Plane Development Kit (DPDK - http://www.dpdk.org/). Both of these allow you to handle millions of real packets per second from standard software. No Lua coding required.
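For a sense of scale, "millions of packets per second" is simply what a 10GbE link saturates at: with minimum-size 64-byte Ethernet frames plus the 20 bytes of per-frame wire overhead, the link tops out just under 14.9 Mpps. A quick back-of-the-envelope check (the 10 Gbit/s line rate and frame overheads are standard Ethernet figures, not something stated in this thread):

```python
# Peak packet rate of an Ethernet link for a given frame size.
# Each frame on the wire carries 20 extra bytes of overhead:
# 7B preamble + 1B start-of-frame delimiter + 12B inter-frame gap.
WIRE_OVERHEAD_BYTES = 20

def max_pps(frame_bytes: int, line_rate_bps: float) -> float:
    """Packets per second at full line rate."""
    bits_per_frame = (frame_bytes + WIRE_OVERHEAD_BYTES) * 8
    return line_rate_bps / bits_per_frame

# Minimum-size frames on 10GbE: the classic ~14.88 Mpps figure
# that netmap and DPDK are built to sustain on one core or two.
print(round(max_pps(64, 10e9) / 1e6, 2))  # 14.88
```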
Extremely well; 170 ns is the length of the incoming market data packet, and the reply starts immediately at the end of the incoming packet. It takes more than that just to DMA a packet into main memory from a PC's gigabit interface.
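That 170 ns figure lines up with serialization delay: the time a frame occupies the wire is just bytes × 8 / line rate, and 170 ns at 10 Gbit/s corresponds to a frame of roughly 212 bytes (the 10GbE link speed here is my assumption; the comment doesn't say which rate is in play):

```python
def wire_time_ns(frame_bytes: int, line_rate_bps: float) -> float:
    """Time for a frame to serialize onto the link, in nanoseconds."""
    return frame_bytes * 8 / line_rate_bps * 1e9

# A ~212-byte market data frame at 10 Gbit/s occupies the wire for
# about 170 ns, so a responder that starts transmitting at
# end-of-packet is limited by serialization delay alone.
print(round(wire_time_ns(212, 10e9), 1))  # 169.6
```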
That is very cool. Where would you go from there? It seems if you're already so fast that you can start replying as soon as all the data is received, there isn't much more you can do. Or are there techniques for going beyond that?
The bit of the video where he talks about writing the order to the wire and then flubbing the checksum if he doesn't actually want to send the order was nuts. I work in the industry and I had heard of people using FPGAs on switches but not that particular technique. Thanks for posting that.
This is usually the problem, almost every single option I've seen is based on dedicated hardware. This is fine for many, but not for those of us running in cloud hosts and other general-purpose locations.
You can't possibly have a de-layered system on virtualised cloud hardware!
Besides, dedicated hosting for a 1U or 2U rack machine is not expensive. If you have enough traffic to justify building your own TCP solution, you're already spending a considerable amount of money on servers.
Sure you can: as a random example, AWS supports SR-IOV today; you can communicate directly with a NIC from a userland Linux process if you set everything up right.
The general idea is to have hardware with direct virtualization support (which is increasingly available on commodity hardware), then have a 'control plane' of layered, virtualized syscall APIs that configure a 'data plane' of virtualization-aware hardware. Permitted I/O operations occur just as if they were on bare metal, with asymptotically zero performance overhead, because you can process an arbitrary amount of data without invoking any code at the OS/hypervisor/cloud provider layer.
For example, my rented, virtualization-aware CPU allows me to run any non-privileged code that stays within a certain block of address space; my rented, virtualization-aware NIC allows me to send and receive any ethernet frames that match certain header bits; and my rented, virtualization-aware disk allows me to read and write to a certain range of LBAs. The nth-layer OS or cloud host or whatever can come in and alter these permissions at will but it need not examine every single syscall to see if it conforms to policy.
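The control-plane/data-plane split described above can be sketched as a toy model (the class and method names are made up for illustration; in real SR-IOV setups the range check is programmed into the hardware, not done in host software): the host validates policy once, when a resource range is granted, and the per-operation fast path never re-enters host code.

```python
class ToyDataPlaneDisk:
    """Toy model of virtualization-aware I/O: policy is enforced once,
    when a range is granted, not by host software on every operation."""

    def __init__(self, total_blocks: int):
        self.blocks = [0] * total_blocks
        self.lo = self.hi = 0  # granted LBA range, set by the control plane

    # --- control plane: runs rarely, may be slow and careful ---
    def grant(self, lo: int, hi: int) -> None:
        if not (0 <= lo <= hi <= len(self.blocks)):
            raise PermissionError("range outside device")
        self.lo, self.hi = lo, hi

    # --- data plane: per-operation fast path; the only check left is
    #     the bounds check the "hardware" was already programmed with ---
    def write(self, lba: int, value: int) -> None:
        if not (self.lo <= lba < self.hi):
            raise PermissionError("outside granted range")
        self.blocks[lba] = value

disk = ToyDataPlaneDisk(1024)
disk.grant(100, 200)   # host policy decision, made once
disk.write(150, 42)    # fast path: no host involvement per operation
```

The point of the model is only the shape of the design: altering permissions means re-running `grant`, while the steady-state I/O path stays out of the host's way entirely.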
I'm aware of that, but if we can reduce dedicated hosting to a single machine through simpler code, we can feed the data from there into a cloud system. It helps a lot if the machine can be a general-purpose one, since that reduces hosting costs, maintenance overhead, and risk.
Definitely -- +1 to a general-purpose machine and doing this on normal hardware without special NICs. However, as soon as you dump the OS, the machine is no longer suitable as a shared-tenancy/cloud host. It might be useful for some sort of dedicated service offering (that would be a cool AWS feature and would allow things like Vyatta that are tied right into the kernel and SDN), but not for general-purpose cloud hosting. The hypervisor/container/OS is still needed to enforce roles, manage resources, etc.
I agree, though as bcoates indicated, cloud hosts are adapting to this demand. While bare-metal access is indeed very unlikely to happen in a shared tenancy environment, there will be at least some efforts towards lower-level access.