Thursday 28 February 2013

Create shared cookie across sub domains (asp.net)

Let's say you have www.domain.com, which writes a cookie. You need that cookie to be accessible from a sub domain of your original site: other.domain.com. To enable this, the cookie that is written must not include the specific sub domain (www) in the domain property of the cookie. The cookie's domain must be sub domain-less and begin with a period (.). The browser will then include the cookie in requests to all sub domains, allowing each sub domain to read the cookie content.

Basically, the cookie's domain must go
From this: www.domain.com
To this: .domain.com

The most obvious way to achieve this is to hard code the domain value when writing the cookie - but that always feels wrong and it doesn't help when running on localhost. Nor is it desirable if you want to change your domain.
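For illustration, a hard coded version in classic ASP.NET (System.Web) might look something like this - the cookie name and value are just placeholders:

```csharp
// The obvious approach: hard code the sub domain-less, leading-period domain.
// It works, but it ties the code to one domain and breaks on localhost.
var cookie = new HttpCookie("shared", "some value")
{
    Domain = ".domain.com"
};
Response.Cookies.Add(cookie);
```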

The answer I came up with is an extension method that makes use of the requested url. That way, you don't ever have to worry about what domain you're running under: you'll always get the sub domain safe version. It also takes care of a locally hosted domain. Enjoy.
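Something along these lines captures the idea - a minimal sketch rather than the exact snippet, assuming classic ASP.NET (System.Web); the method name, the localhost check and the two-label domain handling are illustrative:

```csharp
using System;
using System.Net;
using System.Web;

public static class HttpRequestExtensions
{
    // Derives a sub domain safe cookie domain (".domain.com") from the requested url.
    // Returns null for localhost or an IP address so the cookie falls back to the
    // default host-only behaviour.
    public static string CookieDomain(this HttpRequest request)
    {
        var host = request.Url.Host; // e.g. "www.domain.com"

        IPAddress ignored;
        if (host.Equals("localhost", StringComparison.OrdinalIgnoreCase) ||
            IPAddress.TryParse(host, out ignored))
        {
            return null;
        }

        var labels = host.Split('.');
        if (labels.Length < 2)
        {
            return null;
        }

        // Keep only the last two labels: "www.domain.com" -> ".domain.com".
        // Adjust this if your site sits on a multi-part TLD such as .co.uk.
        return "." + labels[labels.Length - 2] + "." + labels[labels.Length - 1];
    }
}
```

The hard coded Domain assignment above then simply becomes Domain = Request.CookieDomain(), and the same code runs happily on localhost because a null domain leaves the cookie host-only.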



Edit 1: Ed has pointed out in the comments that asp.net will read the most explicit cookie first. So be wary of implementing this approach in a site with existing cookies that do include the sub domain.

Edit 2: Be wary of this approach when using an integrated cloud hosting provider such as AppHarbor. AppHarbor applications are given a url that varies only in the sub domain on the main AppHarbor domain. E.g. the application FooBar on AppHarbor is hosted by default at foobar.apphb.com. Using the technique above would allow any other site hosted on AppHarbor to read the client cookie! To mitigate this you can apply your own hostname to the application and make it canonical so that your site cannot be accessed from the original url. Indeed, it's also a good reason not to put sensitive information in the cookie!

Sunday 10 February 2013

Availability Theatre: The Four Nines Comedy

What's absurd about setting 'four nines' (99.99%) as a target for system availability - from a DevOps point of view - is that it completely misses the point of what making and maintaining a highly available system is all about. After all, when DevOps is given a four nines target, what we are really being asked to do is ensure that the system possesses the quality of being available. And in order to measure quality in terms of availability, we need to talk about the resilience of the system.

Resilience in a system means that it is not fragile - that it can cope with failure. That means hardware failure, failure within the system itself and the failure of external systems it might depend on. If we are being asked to build in high availability, then design choices must be made to ensure that availability is not put at risk by hardware or software failure. To achieve high availability we must instead target building systems that are tolerant of all forms of failure.

This shift in emphasis from 'always up' to 'resilient' changes the way we architect a system. We make different design choices. We think about exception handling rather than logging. We think about scalability in the application rather than buying a load balancer. We think about caching data rather than taking backups. We think about automated tests rather than call-out rotas.

Take the world wide web as an example. I would argue that the http protocol has helped deliver so much interoperability because it was designed with the understanding that systems need to cope with failure. That's why there is a sophisticated caching model built in. Sure, http is abstracted at a widely usable level, but crucially, it is architected to be loosely coupled. The 404 status code did not come about by accident (!). It is a fundamental acceptance of the fact that sometimes links will be broken - and that a resource containing a broken link can still be accessed. This is resilience. This ideology is much more honest and practical than pretending that our applications will be up forever.

But this is what we do anyway, right? Any decent DevOps team can translate the four nines requirement into proactive rather than reactive system architecture? Maybe, maybe not. I think it is at least worth pointing out the difference, because here's the comedy in targeting four nines: let's say you have some system maintenance to perform that would improve the system for its users, or add some business value, or even decrease the likelihood of system failure, but performing the maintenance will cause some downtime that will take you below the 99.99% you promised. Would you perform this upgrade and improve the system? Or would you adhere to your number-in-the-sky target? What is the point of limiting system capability because of some arbitrary number? Why make the business wait until next month, when you have another four and a half minutes in which to perform it? Some joke.
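As a side note, the downtime budget behind that number is simple arithmetic; a quick sketch, assuming a 30-day month:

```csharp
using System;

class FourNinesBudget
{
    static void Main()
    {
        // 99.99% availability leaves 0.01% of the period as the downtime budget.
        const double downtimeFraction = 1 - 0.9999;

        var month = TimeSpan.FromDays(30);
        var budget = TimeSpan.FromTicks((long)(month.Ticks * downtimeFraction));

        // Prints roughly 4.3 - the handful of minutes per month referred to above.
        Console.WriteLine(budget.TotalMinutes);
    }
}
```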

Perhaps the reason management teams favour the four nines is that it is an easily understood unit of measurement. There's no unit of measurement for resilience. However, I do like the idea of a chaos monkey. What we need is a standardised chaos monkey unit of measurement.

I suppose the moral of the story is that managers should try to describe just how important availability is, and DevOps should listen carefully. DevOps should then try to satisfy the availability requirement with an emphasis on resilience. Management should be sensitive to these system characteristics and look beyond the statistics. DevOps should never promise to deliver and maintain systems with a pre-assigned number attached to the amount of time they will be available. You don't know the future and should not pretend that you do. If you do, you're probably wasted in DevOps.