Archive for the 'github' Tag

On Rails mass-assignment, Github and the apocalypse

Tuesday, July 10th, 2012

This post was originally written for the Headshift Labs blog and is posted here with permission.


Not many days ago, a user exploiting a Ruby on Rails vulnerability on Github caused quite a stir in the developer community. Let’s look into it a bit more.

A brief introduction

Feel free to skip this section if you already know what the issue is and who is involved.

Ruby on Rails and Github

If you are not familiar with Rails or Github, it’s worth unraveling this tangled bit first: Github is home to many open source projects. Among them, it hosts the Ruby on Rails project (source code, discussions, bug tracking and so on).

Ruby on Rails (Rails in short) is a framework to create web applications. Github is built with Rails.

This is why Github is at the centre of this story: on one hand, a vulnerability in Rails makes Github itself vulnerable too. On the other, discussions about Rails vulnerabilities happen on Github.

Attributes mass-assignment

Attributes mass-assignment is a feature in Rails. Assuming you have a User with name, surname and role attributes, you could set them all by passing a hash of attributes to a single call. This is very handy for web forms: you can pass the form contents (say, name and surname) to the User and set all attributes in one fell swoop.

The problem is, a malicious user may submit the form after adding extra fields to it, thus setting an attribute that was meant to be protected (e.g. role=admin). Pretty bad.
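To make the mechanics concrete, here is a minimal plain-Ruby sketch of the problem (a simplified stand-in, not the actual Rails implementation): a naive setter that copies every key from the incoming parameters.

```ruby
# A toy model with a naive mass-assignment setter: it blindly copies
# every key/value pair from the params hash into an instance variable.
class User
  attr_reader :name, :surname, :role

  def initialize
    @role = "member" # sensible default for new users
  end

  def attributes=(params)
    params.each { |key, value| instance_variable_set("@#{key}", value) }
  end
end

# The form only contains name and surname...
user = User.new
user.attributes = { "name" => "Jane", "surname" => "Doe" }

# ...but nothing stops an attacker from adding an extra field.
evil = User.new
evil.attributes = { "name" => "Mallory", "role" => "admin" }
evil.role # => "admin"
```

The last line is exactly the role=admin scenario above: the setter has no way to tell a legitimate form field from an injected one.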

Blacklists vs. Whitelists

The two basic approaches to solving this are a blacklist and a whitelist. With a blacklist, we specify all the attributes we want to protect (e.g. role), thus making mass-assignment impossible for them. Any other attribute won’t be protected.

With a whitelist, on the other hand, we specify the attributes that are safe to mass-assign (e.g. name, surname) and everything else is protected by default.

In general terms, blacklists are less likely to get in your way when developing, but whitelists are more secure.
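Continuing the plain-Ruby sketch from above, the two approaches look like this (hypothetical helper classes for illustration; in Rails 3 the corresponding macros are attr_protected for blacklisting and attr_accessible for whitelisting):

```ruby
# Blacklist: block only the attributes we listed; everything else passes.
class BlacklistedUser
  attr_reader :name, :surname, :role
  PROTECTED = %w[role]

  def attributes=(params)
    params.reject { |k, _| PROTECTED.include?(k.to_s) }
          .each { |k, v| instance_variable_set("@#{k}", v) }
  end
end

# Whitelist: allow only the attributes we listed; everything else is blocked.
class WhitelistedUser
  attr_reader :name, :surname, :role
  ALLOWED = %w[name surname]

  def attributes=(params)
    params.select { |k, _| ALLOWED.include?(k.to_s) }
          .each { |k, v| instance_variable_set("@#{k}", v) }
  end
end
```

Note the failure modes: the blacklist silently lets through any sensitive attribute you forgot to list, while the whitelist at worst refuses a legitimate attribute you forgot to list. That asymmetry is why whitelists are considered more secure.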

White hat

A white hat hacker is someone who actively looks for security holes and vulnerabilities without a malicious intent. They break security in order to highlight issues that need to be fixed.

Piecing the facts together

Here is a brief rundown of what happened.

March 2nd, 2012

On March 2nd, Egor Homakov posted on the Rails issue tracker, reporting a vulnerability in attributes mass-assignment: issue #5228.
Homakov, as a proof of concept, created a new issue with a date 1001 years in the future: issue #5239.
(Github later claimed that, at this point, Homakov had responsibly disclosed the vulnerability and they were working to fix it).


While responding to Homakov’s initial thread, the Rails team pointed out that what he had reported was a well-known vulnerability (already addressed in the Rails security guide), that the issue had been discussed in the past, and that the consensus was not to fix it.
In short: the burden of securing the application is down to the developers, not to the framework.

March 5th, 2012

On March 5th Homakov upped the ante: he exploited the vulnerability again, only this time to add his public key to the Rails organisation on Github, becoming a project administrator. As a proof of concept, he committed a file to the repo.
No harm done, but a powerful demonstration: the admins of a repository are more or less all-powerful (they can commit, edit discussions, even destroy the repo).


As a consequence, Github suspended his account and posted a rather dry message about the issue.

Later on, following some more investigation, Github reinstated Homakov’s account and published a second post, with some deeper insight and a link to their newly drafted Responsible Disclosure policy.
In short: white hat hackers are welcome, attackers exploiting vulnerabilities are not.

If you haven’t had enough, you can head over to Ars Technica for another take on these facts.

On trade-offs

The tenet informing my thoughts on the matter: security is a trade-off. To quote security guru Bruce Schneier:

The question to ask when you look at security is not whether this makes us safer, but whether it’s worth the trade-off.

(See his TED talk or indeed any of his writings for more information on this idea).

The Rails devs obviously thought the trade-off should favour making apps easier to write.
Indeed, as soon as you add security, you make access more cumbersome. In this case, that means more boilerplate code and potential headaches when an update fails because an attribute could not be set.

On the other hand, experience demonstrates that defaults are important. I believe it wouldn’t be an exaggeration to estimate that fewer than 50% of developers secure their applications thoroughly, especially in a framework like Rails, which favours convention over configuration.

Rails has indeed resolved other trade-offs in favour of security.
A good example is protect_from_forgery, which is enabled by default. Of course, the cost to the developer is much lower in that case, so resolving the trade-off in favour of security by default is not hard.
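For reference, this is what a freshly generated Rails 3 app already ships with (a configuration fragment, not runnable on its own):

```ruby
# app/controllers/application_controller.rb, as generated by Rails 3:
# protect_from_forgery is on by default and rejects cross-site requests
# that lack a valid authenticity token.
class ApplicationController < ActionController::Base
  protect_from_forgery
end
```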

All in all, I can totally understand the Rails team’s decision. Whitelisting attributes (or even just blacklisting them) is a lot of work. The prevalence of this vulnerability, however, is raising the cost of leaving it unaddressed (it is number 4 in the OWASP list of top application security risks for 2010).

Especially now that it’s out in the open, I believe Rails core should address it by enforcing security (whitelisting) by default.
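Rails 3 already offers an opt-in step in this direction; making it the default for every app is the missing piece. The relevant setting (a configuration fragment for config/application.rb, not runnable on its own):

```ruby
# Treat every model attribute as protected unless explicitly
# whitelisted with attr_accessible.
config.active_record.whitelist_attributes = true
```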

On whitelisting and suspending users

In the wake of the event, many people (as Homakov himself did) have argued that whitelisting is the only way to go (and that it should be a default). As it is probably clear at this point, I tend to agree.

However, I also think this raises a question of consistency: if whitelisting is the right approach, Github cannot be blamed for applying the same principle in dealing with the matter.

In my opinion, suspending Homakov at the time was akin to whitelisting: the first priority during an attack is to defend against it (hence the account suspension). Only once the attack is contained and the vulnerability mitigated can you reopen what you need to (his account was indeed later reinstated).

It’s easy to see he was a white hat, in hindsight. But I don’t believe it would have been possible to make the same call when the attack happened.

On the apocalypse

An attacker gaining admin rights is a deeply scary prospect and this has been a very visible hack. It’s no surprise that apocalyptic scenarios have been painted about the future of projects hosted on Github.

My belief, however, is that the damage would have been somewhat limited in this case: the way git commits work makes it pretty hard to rewrite past history without anyone noticing. Also, git’s distributed nature mitigates the impact of someone deleting the main repository.
This won’t prevent an attacker from injecting malicious code in a new commit, but commits are usually reviewed by the team before they make it into a release.
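The reason rewriting history is hard to hide is that git is content-addressed. A blob’s id is the SHA-1 of its type, size and bytes, and commit ids in turn hash the tree and the parent commit ids, so changing any past content changes every id after it. A small sketch of the blob case (this matches the format git itself hashes, though the sketch covers blobs only):

```ruby
require "digest/sha1"

# Compute the id git would assign to a blob with the given content:
# SHA-1 over the header "blob <size>\0" followed by the raw bytes.
def git_blob_sha(data)
  Digest::SHA1.hexdigest("blob #{data.bytesize}\0#{data}")
end

original = git_blob_sha("hello\n")       # every clone computes the same id
tampered = git_blob_sha("hello, evil\n") # altered content gets a new id
original == tampered # => false
```

Any tampering with a past file therefore ripples through every descendant commit id, which every clone of the repository can detect by comparison.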

All in all, there’s no doubt an attacker maliciously exploiting this vulnerability might have done some harm, but I believe invoking the apocalypse for all projects hosted on Github is mostly empty sensationalism.

Note, however, that the situation may be far worse for other vulnerable Rails applications, depending on how they are implemented (which models are vulnerable, what permissions are available to admins and so on).

What to take out of this

The dust has only just started to settle, but it’s never too early to draw some lessons from a situation like this.

The first take is: your Rails applications probably need an audit. Put some time aside and make sure you’re not the next developer/company to fail spectacularly because of what is now a well-known vulnerability (with well known countermeasures).
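As a quick first pass for such an audit (assuming a standard Rails layout and Rails 3-style protection macros), you could list the model files that never mention attr_accessible or attr_protected, since those are the ones relying entirely on defaults:

```ruby
# Flag model files with no mass-assignment protection declared.
# A rough heuristic, not a full audit: it only checks for the presence
# of the Rails 3 protection macros anywhere in each file.
unprotected = Dir.glob("app/models/**/*.rb").reject do |file|
  File.read(file) =~ /attr_(accessible|protected)/
end

puts unprotected
```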

The second take is: security is a trade-off. In the wake of the hack, it’s easy to think that we should run and harden applications against all possible attacks. Rest assured that this protection comes with a cost and sometimes the cost is just too high compared to the benefit.

An additional point to consider is how Github managed the disclosure of the vulnerability. This can give us a starting point in thinking how companies should listen to the conversations happening on their platforms and what kind of insights they can get out of them. We’ll have a post in the future to look into this in more detail.

Developers are a pragmatic bunch: let’s not give in to panic and keep thinking instead ;)