In fact, when used with Rails, it is loaded only in the development environment. With other frameworks, you should take care about which middleware you use in each environment.
Edit to add: You don't even need to click on the link, you just need to view an image whose src I can manipulate. Ugh. Seriously: do not install on any environment anywhere.
I should add, though, that even if this thing gets an XSRF token and becomes secure in that respect, you might as well take the passwords off your SSH keys if you're running it, because you're coughing up an unprotected remote shell to anyone who can talk to a dev server once you turn this on.
The reason this is so dangerous is that it needs none of that. All it needs is for your development machine to have access to the internet.
I open up a project, enable this, and run rackup locally.
I then view your Twitter stream, where you've embedded a crafted link behind a URL shortener.
Because that link is executed by me, you've now remotely executed code on my machine. Assuming I'm anything like most Rails shops you can probably get to a number of other machines through my machine.
If anyone has used the Seaside framework, can you shed some light on how this compares to the in-page editing that Seaside provides (as far as I've heard)?
One of your friends is named Firefox, and he does not take orders from you, he takes orders from me.
Turn on Rails.
Open Firefox, type http://localhost:3000/a-malicious-string into the top bar. Hit enter.
Observe how that delivers a malicious string to your web server. This particular string will 404; I can construct much more interesting strings.
Now, notice that step where I told you a URL to type in? Pretend that, instead, I had control over some element of a webpage you were looking at. Any element would do. Say you come to a blog where I control an image embed for my avatar, and I embed <img src="http://localhost:3000/a-malicious-string" />. Firefox is going to make the HTTP request I desire without you having to do anything suspicious and without me ever having to talk to localhost:3000 directly.
After I have a malicious string in your web server, this application lets me execute arbitrary code as your web server. Things get very, very interesting then, in the "may you live in interesting times" sense of interesting. For example, one interesting thing I could do is read your SSH private key, which you use to connect from development to staging or production. Another thing I could do is read your database credentials from database.yml, and maybe even just open up extra ActiveRecord connections directly to those databases and start executing arbitrary SQL. Still another thing I could do is install a keylogger or rootkit on your local machine and wait to get whatever credential I need to totally compromise your company. There are many, many options.
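To make the drive-by mechanics concrete, here's a hypothetical attacker-side sketch. The `/webconsole` path and `code` parameter are made-up stand-ins for whatever endpoint and parameter such a gem actually exposes; the point is only that any payload fits in a GET URL, and a GET URL fits in an image tag:

```ruby
require 'cgi'

# Hypothetical sketch: URL-encode a Ruby payload into a query string,
# then wrap it in an innocuous-looking image tag. '/webconsole' and
# 'code' are assumed names, not the gem's real interface.
payload = "File.read(File.expand_path('~/.ssh/id_rsa'))"
url     = "http://localhost:3000/webconsole?code=#{CGI.escape(payload)}"
img_tag = %(<img src="#{url}" />)
```

Any browser that renders that tag fires the GET request automatically; no click is required.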
Do not install this anywhere.
You can turn off the console debugger though if it's making you uneasy by doing this:
    app.run('localhost', debug=True, use_evalex=False)
I'm not doubting you...I'd simply like to understand. Thanks.
That's pretty much the exact designed intent of the software. Read the Repl code, specifically, Repl#call at line 59 or so.
You'll note the code has recently been enhanced with check_legitimate, which was designed to patch the issue I raised on this thread (and via email). I am going to very carefully avoid commenting about whether check_legitimate actually works as designed. My main worry is that pronouncing it done well would make me professionally responsible when it gets cracked, and committing to endless rounds of "Actually, that won't really work" throws good hours after bad in pursuit of securing a bad idea which is architecturally insecurable.
Do not allow your web application to execute arbitrary code. It will not end well.
The most-voted comment is from patio11, about it being a very bad idea. A lot of discussion and hand-waving happened before CSRF was finally even mentioned. Once the author implemented a CSRF fix, the conversation seemed to quiet down. So is this kind of web-based access on the road to being safe and usable now?

Many popular VPS hosts offer out-of-band access through web-based consoles such as Anyterm and Ajaxterm. Are those acceptable simply because they've taken care of the common CSRF and XSS issues? They allow arbitrary code on the remote boxes, so how are they different as a security concern? I have no idea, so I just threw out SSL even though it probably makes not a bit of difference; I simply don't have the background or experience to know.

Is it time to be concerned about how proficient the hosts are in securing their hypervisors and bridged networking for guests? If they do pose a concern, then why is there not more brouhaha about it to get some attention on the matter? Linode and Slicehost have a lot of customers...
To just say it is a bad idea, move on, and not mention other implementations seems to leave a bad taste. Someone also asked about Werkzeug (of the Python demesne) as well...
There is some buzz about doing cloud to cloud attacks but I haven't heard anything that's been realized yet. I have also heard that there are issues with data staying resident on local disks on EC2 after machine termination, but I don't know if that's the case.
At a minimum, after running this you are one Twitter shortened link away from losing your development machine. The link will probably show you a cute cat picture just like any other one. You'll only find out you lost the machine later.
At worst, you're one of the 80% of companies that keeps development machines in your data center protected by a firewall/VPN. Now you're a cute cat picture away from attackers with direct TCP access to your database servers. Wee!
If the entire dev environment is my macbook, and the app is localhost:3000, and the line to enable this thing exists only in the development environment initializer, and I leave it commented out all the time except for those rare cases where I want to inspect session variables and controller state or some other thing that can't really be done in irb, then what's the risk?
At least, that's how I would use it...
EDIT: Nevermind, patio11 explained it much better in his response above. I still think this could be a useful tool with CSRF protection.
So what's the win here?
Even with CSRF protection, you still have to worry about who can talk to port 3000 on your machine. There's a "rails server" running on my Macbook pretty much every day. I bought a whole separate Macbook because I was worried about the attack surface that Rails runs with by default. Adding a remote shell to the mix doesn't seem like a win.
I'm not sure... this is the first one that I've ever heard of. Remote debugging has been around for years on plenty of platforms, but this is the first remote debugging tool for Ruby that I have seen, and embedding it into Rack is kind of cool and useful. And sure, it's probably not very secure in version 0.0.5.
That doesn't mean "All remote debugging is bad and insecure, and you're stupid for even considering this," which sadly is the tone that people are taking. I can think of several ways that this thing could be made more secure. The gem initializer could take a private key, and the backtick command could prompt you to enter the public key, rather than just dropping you right into a console and executing your arbitrary code.
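One hedged sketch of that direction: gate the console behind a pre-shared secret carried in a custom header. The class name and `X-Console-Token` header here are hypothetical, but the design point is real: an <img> tag cannot set custom headers, so a header-borne token defeats the drive-by GET described elsewhere in this thread (it does nothing against a script on a malicious page that can already talk to localhost with arbitrary headers, though):

```ruby
require 'openssl'

# Hypothetical sketch: a Rack-style middleware that refuses to forward
# requests to the console app unless they carry a pre-shared token in
# an X-Console-Token header. Comparing SHA-256 digests rather than raw
# strings avoids a naive, length-leaking comparison.
class ConsoleTokenGate
  def initialize(app, token)
    @app = app
    @token_digest = OpenSSL::Digest::SHA256.hexdigest(token)
  end

  def call(env)
    supplied = env['HTTP_X_CONSOLE_TOKEN'].to_s
    if OpenSSL::Digest::SHA256.hexdigest(supplied) == @token_digest
      @app.call(env)
    else
      [403, { 'Content-Type' => 'text/plain' }, ['Forbidden']]
    end
  end
end
```

Wrapping a console app as `ConsoleTokenGate.new(console_app, 'some-long-random-token')` then returns 403 for any request that doesn't present the token.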
> So what's the win here?
Yes, normally I would just ssh to the remote machine and run "bundle exec rails console" too. Or, far more likely, there is no remote machine; I just open another terminal window and keep a console running. (Sorry I said "irb" when I meant "rails console" -- in my mind they are basically the same thing.)
The win (at least, to me) is that I can inspect parts of the rails stack that change as the user (me) is interacting with the app, such as request details, session contents, controller variables.
> I just keep an SSH screen with "bundle exec rails console" running. I'm of the impression that this is what most people do.

> I bought a whole separate Macbook because I was worried about the attack surface that Rails runs with by default.
This seems a bit incongruous, don't you think? I'm not trying to pick a fight, but I read stuff like this and I just feel sad...
Who runs an internet-accessible development machine? Maybe I give the rest of the dev world too much credit.
> you're one of the 80% of companies that keeps development machines in your data center protected by a firewall/VPN
I've been in Dev/Ops for about 20 years and have been working on the internet since the web took off in '94. I have never, ever heard of such a thing, and would freak out if I did. 80% of companies? Maybe you've got stats to back this up, and my experience is merely anecdotal -- but I've never heard of a single instance of something so obviously stupid. SQL injection vulnerability, sure. XSRF, XSS vulnerability, sure. Running a database server on the same host as your app server, with the webserver running as the DBA user? Sure, it happens.
Certainly someone is bound to misuse this gem as well. But that doesn't mean it's "a very bad idea". It's a very good idea. It's just that it takes knowledge and experience to run a web site, and some people have to find out the hard way.
You're a dev/ops guy; I'm an appsec assessor. I think we just look for different things. Trust me, this isn't just common, it's standard practice.
It's a clever bad idea. Minimal payoff. Maximum risk.
Incidentally: I'm not sure I'm in love with this "everyone has to find out the hard way" notion. No, they don't. We can just tell them: DON'T RUN STUFF LIKE THIS, IT WILL HURT YOU. You have 20+ years of experience. Most people on HN do not; they see this, think "ooo, shiny", and install it. As a practitioner yourself, you have a responsibility --- in fact, if you're keeping that CISSP cert current, an obligation --- to step in and help that not happen.
By the way: hey, you asked.
I apologize for not having an underground Cold War bunker in which to develop my software. Until I get one, I'll stick with my one and only machine (which, as you can tell, has internet access) for doing development.
EDIT: And because of that, I'd never put this code on a machine I run.
I am reminded of Sony's plain text captcha. There are stupid people out there.
Actually I'm a CISSP and was CISO for a major financial web site for >5 years. I look at things the way you do, trust me.
It's just that I try to temper my security reflexes. Just because something might be potentially dangerous doesn't mean it's automatically a bad idea.
Everybody runs development machines that can (a) be spoken to by machines with human-driven browsers and (b) can speak to other machines.
Indeed we do. And if you can't trust that environment to be private and secure, then you've got bigger problems than a web console. I wouldn't stop at shutting down SSH; I'd disconnect entirely. Hell, I'd quit.
If your dev environment is exposed to people who are malicious, you're screwed. Web console is not going to hurt you, because it's too late -- you've already been hacked.
My condolences on having to get the CISSP. ;)
The root of the problem lies at the feet of a few factors:
A) HTTP is a relatively "trusted" protocol, which means you should be very, very careful about running local services over HTTP (on any port) that can do bad things™ to your computer.
B) http://localhost:3000/ isn't difficult to guess. If you know someone is a Rails developer, there is a good chance that embedding requests to that URL will hit a Rails dev environment at some point, and that's where this thing is going to end up running.
C) Eval, in general, is just about the right length of rope to hang yourself with. Use sparingly and with great respect for its ability to completely hose you. Passing anything sent as HTTP params to eval is just asking for it.
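As a minimal sketch of that anti-pattern (not this gem's actual code), here is a Rack-style app that evals a query parameter. The `code` parameter name is an assumption for illustration:

```ruby
require 'cgi'

# Anti-pattern sketch: evaluate whatever arrives in the 'code' parameter.
# Anyone (or anything, such as an <img> tag) that can make a GET request
# to this app can run arbitrary Ruby inside the server process.
vulnerable_app = lambda do |env|
  params = CGI.parse(env['QUERY_STRING'].to_s)
  code   = params.fetch('code', ['']).first.to_s
  result = eval(code)   # the fatal line
  [200, { 'Content-Type' => 'text/plain' }, [result.to_s]]
end
```

A request like `GET /?code=1%2B1` comes back with "2"; substitute a `File.read` of your SSH key and it comes back with that instead.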
Knowing the above, consider an example like Patrick (patio11) outlines above.
Let's say this takes off and people start using it. This means that some percentage of Rails devs have this running. To attack their machines, I need only trick their machine into making a request to http://localhost:3000/malicious-string-here.
How might I accomplish that? I like Patrick's suggestion of an img tag src attribute. There are plenty of forums that treat img tags as safe, or provide some means of embedding images with arbitrary src attributes.
So let's say I head over to railsnewbforum.com and embed the malicious image in my sig. Then I start happily posting useful information in every thread on the board. Assuming this webconsole takes off, how long until a vulnerable dev hits a page with my malicious signature code and gets pwnd?
Can this be made safe? Probably to some degree, but then it would involve many of the security measures involved with using the Rails console over an ssh session or something similar.
IMO, it's not worth the risk.
I wouldn't use it for obvious reasons, but I like the demo.
Dear christ, please do not ever put this near production.