If memcached papers have one thing in common, it's an uncanny ability to get the comparison software to run as slowly as possible. 100k ops/sec/core is what you get when using a single client connection with blocking I/O. Using more clients (as per a normal prod setup) or pipelining queries is more like 1m ops+/core, with writes scaling worse than reads. In production it's easy to get some level of pipelining (multigets, clustered keys, etc), since you're rarely just fetching a key and then blocking.
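The gap between those two numbers is mostly fixed syscall overhead being amortized. A toy model of that (the cost constants are invented for illustration, not measurements):

```python
# Toy model of why pipelining raises memcached throughput: each
# syscall pair has a fixed cost that gets amortized across however
# many requests share it. Costs are made-up illustrative numbers.
SYSCALL_NS = 2_000   # assumed recv+send syscall overhead
PER_REQ_NS = 800     # assumed per-request parse/hash/lookup cost

def ops_per_sec(batch):
    """Requests/sec when `batch` requests share one syscall pair."""
    return batch * 1e9 / (SYSCALL_NS + batch * PER_REQ_NS)

print(f"batch=1:  {ops_per_sec(1):,.0f} ops/sec/core")
print(f"batch=50: {ops_per_sec(50):,.0f} ops/sec/core")
```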
A much better FPGA paper would show at what request rate syscall overhead becomes the bulk of the CPU usage, as well as any measured latency improvements. I think some of the other papers at least address latency.
In reality it hardly matters. If you're hitting memcached hard enough with tiny values for it to matter, ensuring keys are clustered and pipelined is a lot less maintenance overhead than deploying FPGAs.
Every corner case that could be found in s/w was always the topic of an excited benchmark. Also, the old trick of 'hey, let's drop all the matches on the floor in our h/w or FPGA, while getting a huge number of matches in s/w and making the s/w guys look ridiculous'.
Every time I read a paper touting a great new speedup on FPGA (over some crap s/w implementation) I'm reminded of that old joke about the Texan visiting Israel and telling the owner of some small farm that "he can get on a tractor and ride for days without getting to the boundary of his property." The Israeli nods sympathetically and says "Yes, I too used to have a tractor like that".
So excuse my ignorance: apart from at AWS or Azure scale, why would anyone use Memcached on an FPGA?
* I wouldn't mind if the system were simple plug-and-play and had all the benefits and cost savings without the headache. But very rarely is any technology deployment hassle-free or without headaches.
Using a single AWS F1 (FPGA) instance, our Memcached accelerator achieves over 11 million ops/sec at less than 300 microsecond latency. Compared to ElastiCache, the AWS-managed CPU Memcached server, our Memcached accelerator offers 9X better throughput, 9X lower latency, and 10X better throughput/$.
We need to batch multiple requests per Ethernet packet to get around packet per sec rate limiting on AWS. See more details here:
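In rough numbers, a packet-per-second cap turns directly into a request ceiling (the cap below is a hypothetical placeholder, not a published AWS figure):

```python
# If the instance is limited on packets/sec rather than bits/sec,
# batching B requests per packet multiplies the request ceiling by B.
# PPS_CAP is an assumed placeholder, not AWS's actual limit.
PPS_CAP = 1_000_000

for batch in (1, 16):
    print(f"{batch:>2} req/packet -> ceiling {PPS_CAP * batch:,} req/sec")
```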
If anyone is interested we would love to hear from you, we will be showing off an online demo later this week.
FPGAs are great for processing data at 10Gbps line rate with low latency. They are also good for compute tasks like compression and encryption.
Happy to talk more at email@example.com if you are interested. Congrats on getting a seed round from Intel Capital!
The answer is: it depends. Unfortunately, we have not found a "golden" combination of settings yet. If you have a highly congested design, synthesis settings do help a lot, but not in all cases. There are correlations between the settings, so if A is good and B is good, A+B could be bad. Seeds belong to a category of techniques that we classify as random. For example, although Xilinx removed the seeds feature from Vivado, we created a technique to trigger randomness in the placement using a Vivado property.
What we do is not new in the sense that settings exploration has always been around. But with cloud compute resources and ML approaches, it really enables timing closure and optimization methods in a cheaper and more disciplined fashion.
We are also very interested in users of OpenCL/HLS/C. The translated RTL is often not as optimized or readable as what an RTL designer would write directly. Our tool (InTime) can give a good boost to the performance of such RTL.
edit: r4.4xlarge as per the link. 16 vcpu? You should be able to beat it on latency, but beating it on throughput likely means elasticache is misconfigured. Or you're putting on way too much set traffic (I think I saw you set the bench to a 1:1 ratio of gets to sets?).
The interesting part is the FPGA could still do much more computation (for example, compression or encryption) while maintaining the same throughput due to hardware pipelining. We described this concept further in the blog post I linked to.
The only reason why you can claim 9x latency is because you've saturated the worker threads. You should still win on latency even if it were properly bottlenecking on the network, but 9x throughput and 9x latency is completely false as a capacity limit in this test.
The other issue is that 100 bytes isn't typical. It's common, but almost every user has a varied workload. Deploying FPGAs for the larger cache values ends up being a waste. I even designed a new storage system based on offloading larger cold keys to flash.
100-500 byte values are the majority of requests at companies like Facebook and Lyft for their key value clusters. For large value sizes the network interface becomes the bottleneck so FPGAs won’t be able to help.
On a large enough machine I've gotten it up past 55 million read ops/sec. It's quite good at read throughput.
I'm also familiar with the cache clusters at major companies.
Have you added support for batching/pipelining multiple requests per packet in the latest version of memcached? That was crucial for achieving high requests/sec in this example.
Were the 55M requests/sec coming from another machine? Even with small 100B values you would need a minimum of a 44 Gbps network link. How many cores were required? In our benchmark we wanted a fair comparison between instances of similar price and RAM size.
Odds are pretty good it's left at the default of 4 worker threads... so on a 16 vcpu instance that's not going to reach great heights. Since it's a 1.4.x version (years old), it's missing some newer features that both help in average latency and memory efficiency. Or rather, a lot of them are there but disabled by default.
Memcached has allowed pipelining since it was created. For the ASCII protocol, packing multiple responses into single packets is done via a straight multiget. You can send multiple requests in a single packet for any protocol and any command.
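On the wire the two approaches look like this (a sketch of the ASCII request format; key names arbitrary):

```python
# Two ways to amortize packets/syscalls with the memcached ASCII protocol.

# Pipelining: several complete requests written back-to-back, so they
# can share one packet (and one syscall) on the way in.
pipelined = b"get k0\r\nget k1\r\nget k2\r\n"

# Multiget: one request naming several keys; the server packs the
# responses together on the way back as well.
multiget = b"get k0 k1 k2\r\n"

print(pipelined)
print(multiget)
```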
My stress utility (https://github.com/memcached/mc-crusher) has options for pipelining requests, and using multigets ascii packed get responses. I test to the limit of lock scaling for each individual subsystem.
The 55M test required running mc-crusher via localhost, there's no network that can go that fast. My point is you're limited by the network throughput, not the CPU. In that particular 55M test, all cores were used, but ~7-8 of them were used by mc-crusher... so the real limit for the machine is even higher. It did have a lot of cores. 48ish?
You can still do apples/apples with instance sizes... but given everything I know about this thing, unless those cores are extremely slow, hitting 11m ops/sec shouldn't be an issue. Or at least, with minimal fiddling it should hit 6-8m, which doesn't give you a crazy 9x figure.
You do need to stop doing a 1:1 get/set ratio though. Sets don't scale very well; since I've generally never had complaints about their speed, optimizing them hasn't been a priority. I'd say a highly conservative test would be 5:1 get/set. Production workloads are typically even higher than that. (That said, I do intend to speed them up more, it's just lowish priority... the LRU locks are highly granular, so spreading sets across different slab classes can help mutation perf a lot.)
One main constraint here is that we are using AWS virtual machine instances in the cloud. My guess is your previous experience is with physical servers. The FPGA performance is also significantly better when you can use the physical board with a direct ethernet connection; pipelining isn't required in that case, since the FPGA can handle minimum-sized ethernet packets at line rate.
Another question, in your experience is compression/encryption used much with memcached? Because this is another area where the FPGA can compute much faster.
Just signed up for a personal AWS account and manually started an r4.4xlarge for the target and a c5.4xlarge for the source (same CPUs and networking capability?, but it wasn't allowing me to just start two r4.4xlarges...).
Got it up to 15M hits/sec for a pure mget test.
First a few significant differences:
1) Your value size is 10B which completely changes the results. Let’s keep the value size at 100B, which is more realistic.
2) The ratio of gets to sets significantly affects the requests per sec. We were assuming 1:1 ratio when we did our measurements. Increasing the percentage of gets really speeds up req/sec. We didn’t observe this effect on elasticache. Is this a recent improvement in the github version of memcached?
3) Your benchmark is using multiple keys in the same get command. What memtier does is pipeline multiple get commands each with one key. This seems more realistic.
4) We pipelined 16 get commands per packet while your configuration had 50 keys per get command.
I was able to reproduce the same setup as we had with ~1.2M req/sec with your mc-crusher benchmark using the following config. This has 1:1 get to set ratio with pipeline 16 and value size 100B.
I used the github memcached on an r4.4xlarge. I ran memcache-top on the server instance to measure the requests per second, showing about 750k gets/sec and 600k sets/sec.
With a ratio of 10:1 gets to sets I’m seeing about 3.5M req/sec which seems better than elasticache.
1) sure, 100b, but that will just make it easier for the CPU version to hit the packet rate limit. I dialed it down to show just how fast the key rate is. Your entire proposal was that the CPU bottlenecks the NIC, and it does not. Also, most people have 100b keys, never mind the values.
2) 1:1 was never realistic. It's not even remotely realistic; as I said earlier 5:1 would be pessimistic. In reality the instances which have get rates in the millions tend to have 100:1 or better ratios due to the nature of the data they're caching.
Yes, the newer LRU algorithm doesn't grab LRU locks on the read path, so it'll scale with the number of CPU cores. As I said in earlier comments, the sets don't currently scale, especially if you're hammering the same LRU (which is again, unrealistic). If you just do a pure set load you'll land somewhere between 900k and 1.5m ops/sec.
3) I did both single-get-pipelined and packet-pipelined benchmarks; and no, that's absolutely not more realistic. Clients are designed to use the multiget mode when multiple keys are being fetched from the same server. This benefit is lost with the binary protocol (which will be fixed at some point).
4) Try an mget with 16, it won't be too far off, though you might have to add one more mc-crusher thread.
In your last test, you're simply overloading it with sets. If you want to mislead people with a test like this, go ahead; but I'll point it out.
3.5M/s isn't too bad.
Memcached really isn't a great target for your sort of work. I love the idea of FPGA offload, but trying to advertise your thing as superior by making up your own rules is going to get called out.
1) The popularity of redis is absolutely damning in general. If people are okay with the performance of a single-CPU database with all-over-the-map latency profiles, the odds of you finding enough customers with extremely high-rate memcached pools to sustain a business are essentially zero. You'd solely be tricking people who think they need it.
2) You are not facebook. Nobody is facebook but facebook. 100b is not representative. It's not even representative of facebook's load.
What's worse, even in a more common case, if 99% of requests are for 100b items, the average size of an item might still be 8k. That doesn't mean there are a bunch of items around 8k; there could be a few thousand items 50k-500k+ in size, getting hit 1% of the time, or even 0.1% of the time.
A 50k item is 500x the bandwidth of a 100b request for the same processing overhead, and it's almost always something users need: a request might fetch a couple hundred items from memcached, with just a couple of them being large.
This ends up making RAM be the greatest expense in the system. If so few users really need this performance, and the newer versions of memcached have a much higher perf ceiling, the extra features it has to drive down RAM usage are more valuable.
The best cost/power savings most users can get is finding a way to attach more RAM to fewer CPU cores: frankly, an r4.4xlarge would be better suited with 8 cores. Or finding ways to push larger cold values into flash, freeing up RAM so those 100b values can be served quickly.
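To make the earlier size-distribution point concrete, here's a hypothetical (invented) request mix showing how a tiny large-item tail drags up the mean:

```python
# Hypothetical request mix: ~99% small values plus a rare large tail.
# Even a ~1% tail inflates the mean item size by an order of magnitude.
sizes_and_weights = [
    (100,     0.990),  # 99% of requests: ~100B values
    (50_000,  0.009),  # ~1%: 50KB items
    (500_000, 0.001),  # 0.1%: 500KB items
]
mean = sum(size * w for size, w in sizes_and_weights)
print(f"mean item size: {mean:,.0f} bytes")  # ~10x the typical request
```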
1) Actually your earlier test confirms our point that the CPU does not saturate the 10Gbps network with small value sizes. For example, in your 10B value example you got 15M req/sec with 16 cores. This rate is 1.2Gbps (15M x 10B * 8), well below the network limit of 10Gbps. The FPGA would still be ~8X faster at line rate.
2) Regardless of the get/set ratio the FPGA will hit line rate. However, thanks for pointing out that GET requests are much faster in the latest version of memcached, we didn't know that. Looks like the FPGA is only 3X faster for this 10:1 Get to Set workload.
3) Users would prefer if there were no pipelining at all. We only used pipelining to get around packet/sec limitations on AWS. If the FPGA were connected directly to the network we could hit line rate without any pipelining.
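The line-rate arithmetic from point 1, counting value bytes only (keys, protocol overhead, and Ethernet framing ignored):

```python
def value_gbps(req_per_sec, value_bytes):
    """Network rate consumed by value payloads alone, in Gbps."""
    return req_per_sec * value_bytes * 8 / 1e9

print(value_gbps(15e6, 10))   # 15M req/sec of 10B values -> 1.2 Gbps
print(value_gbps(15e6, 100))  # same rate with 100B values -> 12.0 Gbps
```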
We aren't trying to mislead, we are just showing what's possible with the FPGA on AWS: line-rate processing of incoming requests at close to 10Gbps.
The cool part as I mentioned is that the FPGA is still under-utilized so we could add encryption without affecting requests/sec at all because hardware cores execute in parallel. Another idea is to compress the data on the fly.
Agreed that RAM is the expensive part, which is why we picked a CPU instance that had similar RAM to the FPGA instance. Yes we have heard of users caching data to SSD to save cost.
I've plenty of experience with both hardware and virtual machines, I just don't use AWS much myself. I can get something like 800k read ops/sec from a 4-core Raspberry Pi 2, and I hope the AWS instance isn't that terrible.
./mc-crusher conf/someconfigfile ipaddress port
https://github.com/memcached/mc-crusher/blob/master/conf/asc... - this is a decent read test with pipelining (give the test a few seconds to get through its sets). The inbound requests are pipelined, but it'll still send each get response in individual packets. This is what I use to test syscall/interrupt overhead.
https://github.com/memcached/mc-crusher/blob/master/conf/mge... this is the same thing, but with mgets. I'd copy the set line from ascii too:
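Roughly like this (a sketch from memory, using only option names mentioned in this thread; check the actual conf files linked above for exact syntax and values):

```
send=ascii_set,recv=blind_read,key_prefix=foobar,key_count=50000,value_size=100,stop_after=200000,usleep=1000
send=ascii_mget,recv=blind_read,key_prefix=foobar,key_count=50000,mget_count=50
```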
You can vary value_size and mget_count to see how that changes things. You can also pre-warm with the 'bench-warmer' script that comes with it, or remove stop_after and adjust usleep to change the get/set ratio.
Watch top on the client host, and if mc-crusher is capping out its CPU cores, add more lines to the test but with the (confusing, sorry) threading enabled:
That puts the first two tests on the "main" thread, then spawns an extra thread for the third test. You can keep copy/pasting that last line until the client or the server is saturated.
edit: sorry, the enc/compression question:
1) compression is typically done in the client to reduce bandwidth overhead. It's not very useful in the server.
2) Encryption is becoming more popular, but doesn't see much use currently. The mainline OSS doesn't even have TLS support yet. Almost all use cases are on internal networks. FPGAs could potentially help there... AES-NI on Intel CPUs isn't awful, though.
On our side, maxing the IOPS was the easy part. The hard part was marrying the protocol to converged/deterministic Ethernet with RDMA. We were split between "one request, one frame/packet burst" vs "all requests are somehow smartly aligned with frames by stateful logic." The first was surprisingly susceptible to performance artifacts from round-trip latency varying by a few microseconds: packets in transit could get dropped because the receiving NIC (top-tier hardware) was momentarily overloaded.
You have an advantage of being DC provider independent, and can jump the AWS ship whenever you want. Alibaba's solution will be tied to its infrastructure with its very expensive RDMA capable network.
Google Alibaba China F1 or F2.
As for the offering, the idea is that end clients will only have to deal with the SDK and libs on instances, not raw RDMA or anything related to internal infrastructure. That is as much as I am allowed to say besides the fact of its existence.
From their current experience, and that of other hosting providers, not many people who go with F1, F2, or other FPGA instances actually reap any benefit, and some drop out mid-way. That's why they want to get more people past the "toying with it" stage. The "Herokuification" (god, Heroku sounds beyond hilarious in Russian) is there to let people use common APIs while getting the benefit of FPGA performance, without dealing with things outside the average webdev's area of expertise.
We have been told by cloud providers that the FPGA cannot be directly connected due to network security concerns, since there is no easy way to control how arbitrary user-programmed hardware on the FPGA will interact with their network. Microsoft has been using FPGAs directly connected to their network (a "bump-in-the-wire" architecture) in their datacenters (see Project Catapult for details), but those FPGAs are not programmable by Azure users yet.
Would that even be possible, or does it boil down to the halting problem (how can you know a sequence of transactions is malicious)?
My understanding of formal verification is a) that it doesn't eliminate bugs, it just reduces their probability by a percentage, and b) that it's academically very attractive (which is why HN is constantly hearing about it) but practically very expensive in terms of investment/return, in the sense that formal verification requires formal modeling and so forth in order to exist in the first place, and that requires actual design lockdown, and suddenly everything got Complicated™ :)
You could possibly get away with an FPGA-targeting compiler (low-relevance related conversation: https://news.ycombinator.com/item?id=17151024) set up in a stripped-down form, but you'd need to lock it down a bit tighter than https://www.corsix.org/content/malicious-luajit-bytecode (a bit old, but illustrative of a point), and then the question would be whether that would negate the performance benefit of the FPGA in the first place.
The thread I linked the comment from had some interesting info on FPGA vs GPU (eg, https://news.ycombinator.com/item?id=17150985)