

Try my new hosted Linode memcached service (BETA)  - ritonlajoie

Dear HNers!

I did it, I'm opening it for you to see and test. It took me a week, but hey, it was fun hacking on memcached. I'm opening a service for Linode users (Dallas data center only for now, but more to come!) that lets you share a memcached server.

The idea is something of an experiment: you share _one_ instance. I don't provide each of you with a separate memcached server; you read that correctly, there is only one. It has a 200MB bucket for now, which will be upgraded if needed.

How it works: you point your memcache client at the Linode running memcached, and I let you in if you registered for the beta. You can use the 200MB bucket as you wish, provided the maximum expiry (TTL) you set is 30 minutes (1800 seconds). Don't ask for more, it won't work :)

As for security, you'll probably be happy to know that you can't read/delete/modify keys that are not yours.

You can see this service as a free (for now) burstable cache!

In the future, if it works out, I plan to make it a paid service, but surprise: I would only charge 1 or 2 dollars/month. That way you can temporarily burst onto a 200MB (more coming!) memcached server for nearly free.

I hope you enjoy it, and I'm open to any suggestions for improvements. To participate in the beta, please visit this page, which explains what you can and can't do, and how to get access.

http://www.henri.pro/2010/09/25/memcached-shared-instance-beta/

Have a nice weekend everyone! (As usual I'm on #startups, nickname henri, if you want to chit chat.)
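P.S. To give a rough idea of what usage looks like from Python (the address below is just a placeholder; the real host and port come with your beta invite), something like this with python-memcached:

    import memcache  # python-memcached client

    # Placeholder address -- use the host/port from your beta invite.
    mc = memcache.Client(["memcached.example.members.linode.com:11211"])

    # TTL must be at most 1800 seconds (30 minutes) on the shared instance.
    mc.set("myapp:user:42", {"name": "Henri"}, time=1800)
    print(mc.get("myapp:user:42"))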
======
dlsspy
You could get rid of some of your limitations and make things easier on
yourself by using the same technology we use to run memcached at heroku:
<http://github.com/northscale/bucket_engine>

In its current form, it's binary only (because we do vertical multi-tenancy
there and have no control over multiple applications being on the same
instance). It'd be pretty easy to make an IP address based ACL mode for the
thing and then you could run binary or ascii just as easily.

Advantages:

* Key containment (e.g. you can do flush_all)

* You don't have to hack up your own memcached server

* Binary protocol is a bit easier on the server with things like large multi-gets because they don't cause so much request swell

We've also got some basic management stuff for creating and manipulating
instances (independently of auth, since we let SASL deal with that).

Disadvantages:

* You'd have to hack in your IP addr -> bucket mapping

* If you want TTL limits, you'd need to hack that in, too

Of course, I imagine both are quite easy and would be welcome contributions to
the project. :)

Let me know if you're interested and need any help.
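For concreteness, connecting to a SASL-protected bucket over the binary protocol looks roughly like this with pylibmc (the bucket name and credentials are made up, and it assumes libmemcached was built with SASL support):

    import pylibmc  # requires libmemcached compiled with SASL support

    # Hypothetical bucket credentials handed back by whatever provisions the bucket.
    mc = pylibmc.Client(
        ["memcached.example.com:11211"],
        binary=True,          # bucket_engine's SASL auth path is binary-only
        username="my_bucket",
        password="s3cret",
    )

    # With key containment, flush_all would only clear this bucket, not the whole server.
    mc.set("session:abc", "value", time=1800)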

------
paraschopra
Obligatory question: why would I use a shared memcached when running memcached on
my own server is as easy as sudo apt-get install memcached?

~~~
vyrotek
I would consider using it if it's cheaper than turning on another node within
my cloud environment. One less thing to manage.

edit- Just realized you have to be using Linode as well. Now I question
this too. I thought this was going to be memcached as a service.

~~~
paraschopra
Memcache as a service would defeat the whole purpose of having a cache in memory.
Latency will kill you.

Memcache will only make sense if it is hosted in the private LAN.

~~~
sounddust
_Memcache will only make sense if it is hosted in the private LAN._

I think the point is that he's only offering it to Linode users in the Dallas
data center, which is where his service is located. Therefore the latency will
be in the sub 1ms range, comparable to a private LAN. There may be other
issues with this approach, but latency isn't one of them, at least in this
implementation.

~~~
paraschopra
Yes, I agree. I was replying to the other commenter who asked about memcache as a
service. Memcache over the public network defeats the purpose, because latency
then becomes the main issue.

------
ritonlajoie
Clickable [http://www.henri.pro/2010/09/25/memcached-shared-instance-
be...](http://www.henri.pro/2010/09/25/memcached-shared-instance-beta/)

------
jacquesm
The whole point of memcached is that it is _right next to your web server_ ,
accessible through loop-back devices and such.

Sticking it on the other side of a wire defeats the purpose; in a great many
cases it will push the request time beyond what it would have taken to
recreate the cached data in the first place.

Not to rain on your parade, but I think this is less than useful for
production.

~~~
ritonlajoie
Hi Jacques. Regarding the request time: if you stay within the datacenter, the
response time is very, very low (around a millisecond).

Regarding production: right now it's not intended to replace a real
dedicated memcached server.

~~~
jacquesm
You should benchmark a simple page three ways: no caching, memcached
locally, and memcached remote but in the same datacenter. That would be
interesting information.

We don't use Linode, but we do use memcached to store 'partials', and for now
I'm skeptical about running that memcached on the other side of a wire.
Latency tends to fluctuate quite a bit in a busy DC, so you may have to run
your test over an extended period of time to get good data.
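A quick-and-dirty sketch of that comparison in Python (addresses are placeholders, and a single run means little -- you'd want to repeat it from the actual web server over time):

    import time
    import memcache  # python-memcached

    def avg_get_ms(server, rounds=1000):
        mc = memcache.Client([server])
        mc.set("bench:key", "x" * 1024)          # 1 KB payload
        start = time.time()
        for _ in range(rounds):
            mc.get("bench:key")
        return (time.time() - start) / rounds * 1000.0

    # Placeholder addresses: local loopback vs. a remote box in the same DC.
    print("local :", avg_get_ms("127.0.0.1:11211"), "ms/get")
    print("remote:", avg_get_ms("192.0.2.10:11211"), "ms/get")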

It helps if your machine is equipped with two ethernet interfaces, one for
'local' traffic and one that is facing the outside world.

~~~
mtigas
I'm not sure how Linode does it, but I've worked on dedicated machines that
have separate, _internal_ routes. (Slicehost VPSes have this option available
-- unmetered -- too.)

When compared to heavy processing on large database queries, memcached -- even
with a few extra ms of latency -- can still be a major performance win.

Although this largely depends on the applications you're making. I've been
working with GIS applications lately, where a large distributed cache (most
memcached clients automatically support sharding) is about the best you can do
without a large hardware budget.
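(Client-side sharding usually just means handing the client a list of servers and letting it hash keys across them; a rough sketch with python-memcached, addresses made up:

    import memcache

    # The client hashes each key onto one of the listed servers.
    mc = memcache.Client([
        "10.0.0.11:11211",
        "10.0.0.12:11211",
        "10.0.0.13:11211",
    ])
    mc.set("tile:12:653:1582", "rendered tile bytes")

)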

~~~
wwortiz
Linode allows internal routes as well, and it is unmetered (it just has to be
in the same datacenter, though that should be a given).

------
sandGorgon
Why don't you go one step further and offer shared NoSQL instances - for
instance, Redis - which can pretty much stand in for memcached, and do so
much more.

One step even further: let people spin up on-demand instances of
whichever NoSQL server they need - Cassandra, Voldemort, Redis, Riak... you
name it.

~~~
ritonlajoie
Hi, well, that's a nice idea that I've thought about too. But the thing is, I have
a full-time job elsewhere! Let's see if this Linode-limited one works, and if
there is demand I will certainly think seriously about it.

edit: just to be clear, I'm not offering one instance of memcached per person.
It's only one instance for all.

------
tlack
Here's a related idea: Someone should start a very large Redis instance, say
8GB of RAM on an EC2 node, with shared reads (i.e., all clients can read all
keys), but protected updates. Each app would use a prefix on its keys. This
could be used as an open message passing bus between apps that need to
communicate.
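As a rough sketch of what that could look like with redis-py (the address and prefix convention are made up, and the per-prefix write protection is hand-waved here -- it isn't something stock Redis enforces):

    import redis

    PREFIX = "myapp:"                     # each app namespaces its own keys
    r = redis.Redis(host="redis.example.com", port=6379)

    # Writes stay under our own prefix; reads may touch any app's keys.
    r.set(PREFIX + "status", "deploying")
    print(r.get("otherapp:status"))

    # Pub/sub as the open message bus between apps.
    r.publish("bus:events", "myapp finished deploying")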

------
olegkikin
What's the point of that? I'd get better performance from a disk cache, rather
than connecting to some distant server with crazy latency.

------
bhiggins
Having a remote cache gets rid of one of the few reasons for having a cache at
all.

~~~
sjtgraham
I think at this stage it's aimed at people with apps hosted in the Linode
datacenter in Dallas, i.e. on the same private network as this service.

