Archive for the 'rails' Tag

On Rails mass-assignment, Github and the apocalypse

Tuesday, July 10th, 2012

This post was originally written for the Headshift Labs blog and is posted here with permission.


A few days ago, a user exploiting a Ruby on Rails vulnerability on Github caused quite a stir in the developer community. Let’s look into it a bit more.

A brief introduction

Feel free to skip this section if you already know what the issue is and who is involved.

Ruby on Rails and Github

If you are not familiar with Rails or Github, it’s worth unraveling this tangled bit first: Github is home to many open source projects. Among those projects, Github is hosting the Ruby on Rails project (source code, discussions, bug tracking and so on).

Ruby on Rails (Rails in short) is a framework to create web applications. Github is built with Rails.

This is why Github is at the centre of this story: on one side, a vulnerability in Rails makes Github vulnerable too. On the other, discussions about Rails vulnerabilities happen on Github.

Attributes mass-assignment

Attributes mass-assignment is a feature of Rails. Assuming you have a User with name, surname and role attributes, you can set them all by passing a hash of parameters to a single call. This is very handy for web forms: you can pass the form contents (say, name and surname) to the User and set all attributes in one fell swoop.

The problem is, a malicious user may submit the form after adding arbitrary fields to it, thus setting an attribute that was supposed to be protected (e.g. role=admin). Pretty bad.
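To make the mechanics concrete, here is a minimal plain-Ruby sketch (a hypothetical User class, not the actual ActiveRecord implementation): every key in the incoming parameters hash becomes an attribute write, including keys the form never contained.

```ruby
# Hypothetical User class illustrating mass-assignment mechanics;
# not actual Rails internals.
class User
  attr_accessor :name, :surname, :role

  def initialize(params = {})
    # Every key in the hash is turned into an attribute write.
    params.each { |key, value| public_send("#{key}=", value) }
  end
end

# A normal form submission sets the expected attributes:
user = User.new("name" => "Ada", "surname" => "Lovelace")

# A forged submission adds an extra field and escalates privileges:
attacker = User.new("name" => "Eve", "role" => "admin")
attacker.role  # => "admin"
```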

Blacklists vs. Whitelists

The two basic approaches to solving this are a blacklist and a whitelist. With a blacklist, we specify all the attributes we want to protect (e.g. role), thus making mass-assignment impossible for them. Any other attribute won’t be protected.

With a whitelist, on the other hand, we specify the attributes we want to leave unprotected (e.g. name, surname) and everything else is protected by default.

In general terms, blacklists are less likely to get in your way when developing, but whitelists are more secure.
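In Rails 3 the model-level declarations for these are attr_protected (blacklist) and attr_accessible (whitelist). The filtering semantics they imply boil down to something like this plain-Ruby sketch (the helper names here are mine, for illustration only):

```ruby
# Blacklist: drop only the explicitly protected keys.
def blacklist_filter(params, protected_keys)
  params.reject { |key, _| protected_keys.include?(key) }
end

# Whitelist: keep only the explicitly allowed keys.
def whitelist_filter(params, allowed_keys)
  params.select { |key, _| allowed_keys.include?(key) }
end

params = { "name" => "Ada", "surname" => "Lovelace", "role" => "admin" }

blacklist_filter(params, ["role"])
# role is stripped, but any attribute we forgot to list slips through.

whitelist_filter(params, ["name", "surname"])
# only the declared attributes get through; new attributes are protected by default.
```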

White hat

A white hat hacker is someone who actively looks for security holes and vulnerabilities without a malicious intent. They break security in order to highlight issues that need to be fixed.

Piecing the facts together

Here is a brief rundown of what happened.

March 2nd, 2012

On March 2nd, Egor Homakov posted on the Rails issue tracker, reporting a vulnerability in attributes mass-assignment: issue #5228.
Homakov, as a proof of concept, created a new issue with a date 1001 years in the future: issue #5239.
(Github later claimed that, at this point, Homakov had responsibly disclosed the vulnerability and they were working to fix it).


While responding to Homakov’s initial thread, the Rails team pointed out that what he had reported was a well-known vulnerability (already addressed in the Rails security guide), that the issue had been discussed in the past, and that the consensus was not to fix it.
In short: the burden of securing the application is down to the developers, not to the framework.

March 5th, 2012

On March 5th Homakov upped the ante: he exploited the vulnerability again, only this time to add his public key to the Rails organisation on Github, becoming a project administrator. As a proof of concept, he committed a file to the repo.
No harm done, but a powerful demonstration: the admins of a repository are more or less all-powerful (they can commit, edit discussions, even destroy the repo).


As a consequence, Github suspended his account and posted a rather dry message about the issue.

Later on, following some more investigation, Github reinstated Homakov’s account and published a second post, with some deeper insight and a link to their newly drafted Responsible Disclosure policy.
In short: white hat hackers are welcome, attackers exploiting vulnerabilities are not.

If you haven’t had enough, you can head over to Ars Technica for another take on these facts.

On trade-offs

The tenet informing my thoughts on the matter is that security is a trade-off. To quote security guru Bruce Schneier:

The question to ask when you look at security is not whether this makes us safer, but whether it’s worth the trade-off.

(See his TED talk or indeed any of his writings for more information on this idea).

The Rails devs clearly thought the trade-off should favour making apps easier to write.
Indeed, as soon as you add security, you make access more cumbersome. In this case, that means more boilerplate code and potential headaches when an update fails because an attribute could not be set.

On the other hand, experience demonstrates that defaults are important. I don’t believe it would be an exaggeration to estimate that fewer than 50% of developers secure their applications thoroughly, especially with a framework like Rails, which favours convention over configuration.

Rails has indeed resolved other trade-offs in favour of security.
A good example is protect_from_forgery, which is enabled by default. Of course, the cost to the developer is much lower in that case, so resolving the trade-off (increasing security by default) is not hard.

All in all, I can totally understand the Rails team’s decision. Whitelisting attributes (or even just blacklisting them) is a lot of work. The prevalence of this vulnerability, however, is making the security cost higher (it is number 4 in the OWASP list of top application security risks for 2010).

Especially now that it’s out in the open, I believe Rails core should address it by enforcing security (whitelisting) by default.

On whitelisting and suspending users

In the wake of the event, many people (as Homakov himself did) have argued that whitelisting is the only way to go (and that it should be a default). As it is probably clear at this point, I tend to agree.

However, I also think this poses a question of coherence: if whitelisting is the right approach, Github cannot be blamed for applying that concept in dealing with the matter.

In my opinion, suspending Homakov at the time was akin to whitelisting: the first priority during an attack is to defend against it (thus resulting in the account suspension). Only after that and the subsequent vulnerability mitigation can you reopen what you need to (his account was later reinstated).

In hindsight, it’s easy to see he was a white hat. But I don’t believe it would have been possible to make the same call while the attack was happening.

On the apocalypse

An attacker gaining admin rights is a deeply scary prospect and this has been a very visible hack. It’s no surprise that apocalyptic scenarios have been painted about the future of projects hosted on Github.

My belief, however, is that the damage would have been somewhat limited in this case: the way git commits work makes it pretty hard to rewrite past history without anyone noticing. Also, git’s distributed nature mitigates the impact of someone deleting the main repository.
This won’t prevent an attacker from injecting malicious code in a new commit, but those are usually reviewed by the team before they make it into a new release.

All in all, there’s no doubt an attacker maliciously exploiting this vulnerability might have done some harm, but I believe invoking the apocalypse for all projects hosted on Github is mostly empty sensationalism.

Note, however, that the situation may be far worse for other vulnerable Rails applications, depending on how they are implemented (which models are vulnerable, what permissions are available to admins and so on).

What to take out of this

The dust has just started to settle, but it’s never too early to get good lessons out of a situation like this.

The first takeaway: your Rails applications probably need an audit. Put some time aside and make sure you’re not the next developer or company to fail spectacularly because of what is now a well-known vulnerability (with well-known countermeasures).

The second takeaway: security is a trade-off. In the wake of the hack, it’s easy to think we should rush to harden applications against all possible attacks. Rest assured that this protection comes at a cost, and sometimes the cost is just too high compared to the benefit.

An additional point to consider is how Github managed the disclosure of the vulnerability. This can give us a starting point in thinking how companies should listen to the conversations happening on their platforms and what kind of insights they can get out of them. We’ll have a post in the future to look into this in more detail.

Developers are a pragmatic bunch: let’s not give in to panic and keep thinking instead ;)

Cucumber to test mobile in a Rails project

Friday, July 6th, 2012

This post was originally published on the Headshift Labs blog and is reposted here with permission.


Recently we had to add support for mobile devices to a Rails application we developed. We went with the excellent suggestion in the Railscast about Mobile Devices. The gist of it is to add a method to the ApplicationController that sets the mime-type to :mobile whenever the request user-agent is recognised as a mobile browser (see the corresponding ASCIIcast for the full code).
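The heart of that approach is a user-agent check run in a before_filter. A sketch of the idea (the regex here is a simplified assumption of mine; see the Railscast for the pattern actually used):

```ruby
# Simplified user-agent sniffing; the real pattern in the Railscast is
# more thorough than this.
MOBILE_USER_AGENT = /Mobile|webOS/

def mobile_request?(user_agent)
  !!(user_agent =~ MOBILE_USER_AGENT)
end

# In ApplicationController, a before_filter would then switch the format:
#   request.format = :mobile if mobile_request?(request.user_agent)

mobile_request?("Mozilla/5.0 (iPhone; CPU iPhone OS 4_0 like Mac OS X) Mobile/8A293")
# => true
mobile_request?("Mozilla/5.0 (Windows NT 6.1; Win64; x64)")
# => false
```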

We have also used the format_fallback gem in order to avoid a proliferation of views and partials. In fact, in our application many of the partials are exactly the same in both the html and the mobile version, although they look different thanks to CSS and Javascript. What the gem does is fall back to the html view whenever a mobile view is not found, thus avoiding having to duplicate each .html view in the application as a .mobile file. The gem is still a bit rough, but it fits the bill for this application.
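The fallback idea can be pictured with a small sketch (my illustration, not the gem’s actual code): try the template for the requested format first, then fall back to the html one.

```ruby
# Illustration of format fallback: prefer the requested format's template,
# fall back to the html template. Not the gem's actual implementation.
def resolve_template(name, requested_format, available_templates)
  candidates = ["#{name}.#{requested_format}.erb", "#{name}.html.erb"]
  candidates.find { |template| available_templates.include?(template) }
end

templates = ["index.html.erb", "show.mobile.erb", "show.html.erb"]

resolve_template("show", "mobile", templates)   # => "show.mobile.erb"
resolve_template("index", "mobile", templates)  # => "index.html.erb"
```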

Given the importance we give to BDD and TDD, however, this only gets you halfway towards a proper implementation: tests for the mobile version of the application are still missing.

The most straightforward way to test the application using Cucumber is to make sure the test request headers include a user-agent string, thus mimicking a mobile device.

There are a couple of tutorials out there on how to do this (all pointing to comments on a blog post that has since disappeared without a trace). They provided a good foundation to start from, but ultimately we found there’s an easier way to do it.

The core of this solution is a handful of lines, adding support for custom headers to Capybara’s default rack_test driver. The following code will make an add_headers method available to your steps.

# features/support/capybara_headers.rb:

module CapybaraHeadersHelper
  def add_headers(headers)
    headers.each do |name, value|
      page.driver.browser.header(name, value)
    end
  end
end

World(CapybaraHeadersHelper)
Now that the basic support is in place, we can write some steps to make our life easier.

# features/steps/mobile_steps.rb:

SAMPLE_AGENT_STRING = {
  "iPhone" => "Mozilla/5.0 (iPhone; U; CPU iPhone OS 4_0 like Mac OS X; en-us) AppleWebKit/532.9 (KHTML, like Gecko) Version/4.0.5 Mobile/8A293 Safari/6531.22.7",
  "Android" => "HTC_Eris Mozilla/5.0 (Linux; U; Android 4.0; en-ca; Build/GINGERBREAD) AppleWebKit/528.5+ (KHTML, like Gecko) Version/3.1.2 Mobile Safari/525.20.1"
}

Given /^my user agent is "(.+)"$/ do |agent|
  add_headers('User-Agent' => agent)
end

Given /^I have an? (.+)$/ do |phone_name|
  add_headers("User-Agent" => SAMPLE_AGENT_STRING[phone_name])
end
Notice that, for the sake of brevity, I’ve included only a couple of user agents in the SAMPLE_AGENT_STRING hash. You can find lots more around the Web.

We can then write a feature such as:

Given I have an iPhone
When I visit the homepage
Then I should see content tailored to my mobile