Saturday, 22 December 2012

RESTful CRUD Services

Notes from the book REST in Practice. There's plenty more in the chapter that is not mentioned here; this is just the material I found useful or interesting and worth writing down.


Safe and Idempotent


  • Safe means the action carries no side effects. GET is safe.
  • Idempotent means that the outcome is the same no matter how many times you call the action. GET, PUT, & DELETE are all idempotent.

Response Codes


  • 201 Created - when the action created a new resource. The response should include a Location header with the address of that resource.
  • 204 No Content - when the action completed successfully and the client should expect no response body.
  • 400 Bad Request - when the request did not meet requirements, or provide all necessary information required to complete the action.
  • 405 Method Not Allowed - when the URI template does not support the method, irrespective of the status of the resource it represents. An Allow response header should be returned, specifying a comma-separated list of the methods that are permitted.
  • 409 Conflict - when the method performed on a URI is at odds with the current state of the resource, e.g. trying to PUT to a URI representing a resource that has been deleted.
  • 412 Precondition Failed - when the attempted action fails an If-Match or similar header. See Conditional Request Headers below.

POST vs PUT vs PATCH


  • When you POST, you are asking the server to create you a resource. The response should contain a Location header with the URI of the object created.
  • PUTting on a URI instructs the server to update the entire resource with the representation supplied in the request body. i.e. to overwrite the resource entirely.
  • PATCHing is asking to overwrite a specific part of the resource, so the request body is smaller and the rest of the existing resource remains as is.
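The PUT/PATCH distinction can be sketched with a toy in-memory resource store (Python used purely for illustration; the URIs and field names are made up):

```python
# Minimal in-memory store illustrating PUT (replace) vs PATCH (merge) semantics.
resources = {"/customers/1": {"name": "Alice", "email": "alice@example.com", "tier": "gold"}}

def put(uri, representation):
    # PUT overwrites the entire resource with the supplied representation.
    resources[uri] = dict(representation)

def patch(uri, partial):
    # PATCH overwrites only the supplied fields; the rest remain as they are.
    resources[uri].update(partial)

put("/customers/1", {"name": "Bob", "email": "bob@example.com"})
assert resources["/customers/1"] == {"name": "Bob", "email": "bob@example.com"}  # 'tier' is gone

patch("/customers/1", {"email": "bob@work.example"})
assert resources["/customers/1"]["name"] == "Bob"  # untouched fields survive a PATCH
```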

Entity Tags


An ETag response header is used to identify the version of a resource. Typically, a hash of the body of a GET response is used, as it provides a short identifier that changes whenever the state of the resource changes. A version number or date-time stamp could also be used.
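As a rough sketch of the hashing approach (Python's hashlib here is just for illustration; real servers pick their own hashing scheme):

```python
import hashlib

def make_etag(body: bytes) -> str:
    # Hash the representation: any change to the body produces a different tag.
    return '"' + hashlib.sha256(body).hexdigest()[:16] + '"'

v1 = make_etag(b'{"name": "Alice"}')
v2 = make_etag(b'{"name": "Bob"}')
assert v1 != v2                               # state changed, tag changed
assert v1 == make_etag(b'{"name": "Alice"}')  # deterministic for the same state
```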


Conditional Request Headers


An If-Match header can be provided with a PUT request (for example) as a way of telling the server to only perform the associated action if the client is operating on the latest version of the resource. If the value of the ETag returned with a GET request is used as the If-Match header of a PUT request, we are telling the server to make sure that the current ETag of the resource matches the If-Match value the client provided before carrying out the operation.

Should the ETag not match, the server should return a 412 Precondition Failed response code. The client should then perform a GET request to obtain the latest ETag of the resource, so that it is aware of the latest version before making changes. This prevents changes from being overwritten by requests from an out-of-date client.

Also supported is an If-None-Match request header, which likewise acts on the ETag value. If-Match and If-None-Match should also support wildcard (*) matching. Additionally, If-Modified-Since and If-Unmodified-Since request headers are available, to be compared with the value of the Last-Modified response header. These operate on date-time values and are precise to the second.
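The server-side precondition check amounts to something like this simplified sketch (real frameworks handle this for you; the tag values are invented):

```python
def handle_put(current_etag: str, if_match: str) -> int:
    """Return the status code for a PUT guarded by an If-Match precondition."""
    if if_match == "*" or if_match == current_etag:
        return 200  # precondition holds, so the update can proceed
    return 412      # Precondition Failed: the client holds a stale version

assert handle_put('"abc123"', '"abc123"') == 200
assert handle_put('"abc123"', '"stale99"') == 412
assert handle_put('"abc123"', "*") == 200  # wildcard matches any current tag
```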


Thursday, 13 December 2012

ClickOnce Automatic Update with Continuous Deployment

Problem

We need a task-scheduled desktop application to force update itself on start-up every time it runs, so that it always executes the latest version. There may or may not be updates available for the application. This should be part of an automated deployment process, such that the developer does not need to remember to 'bump' the version number on every release.


Solution

A console or Windows Forms application published as a ClickOnce application can solve this; we just need to configure it accordingly. The other element of the solution is Continuous Deployment using Team City, which will provide the automatic versioning.


Click Once

Add the following to the first property group of the .csproj file to be deployed. The following settings are worth noting:






<IsWebBootstrapper> & <BootstrapperEnabled>
Gives the application the ability to download other requirements of your application.


<Install>
States that the application will run from the installed files (as opposed to downloading a new executable each time - this is not possible in this scenario as we need the task to be automated).


<InstallFrom>
Tells the application where to look for installation source files. Could alternatively be CD-ROM based.


<UpdateEnabled>, <UpdateMode> & <UpdateRequired>
Needed to make sure the application will check for updates and to do so prominently on screen when updates are required.


<ApplicationVersion>
Tells the application which version it is. This setting will be overridden when configuring continuous deployment.
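Putting those elements together, the property group might look something like the following sketch (the InstallUrl and version values are illustrative and will differ per project):

```xml
<PropertyGroup>
  <!-- Allow prerequisites to be downloaded from the web -->
  <IsWebBootstrapper>true</IsWebBootstrapper>
  <BootstrapperEnabled>true</BootstrapperEnabled>
  <!-- Install locally rather than run from the deployment URL -->
  <Install>true</Install>
  <InstallFrom>Web</InstallFrom>
  <InstallUrl>http://mypublishurl.com/</InstallUrl>
  <!-- Check for updates on start-up and force the latest version -->
  <UpdateEnabled>true</UpdateEnabled>
  <UpdateMode>Foreground</UpdateMode>
  <UpdateRequired>true</UpdateRequired>
  <!-- Overridden by the build server on each deployment -->
  <ApplicationVersion>1.0.0.0</ApplicationVersion>
</PropertyGroup>
```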


We could run the publishing wizard at this point and install the application locally. However we need this to run on a client/target machine. This deployment aspect needs to be handled by Team City.


Team City

We will assume a Build Configuration is already in place and that the files are made available from your source control. We can then start by adding a build step to 'Publish' the project in the ClickOnce format.


Build Number configuration under General Settings
Here we are setting up the build number to comply with the [major].[minor].[build].[revision] format required by .NET assemblies. This build number will also be used to tell the published ClickOnce application which version it is currently running.
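For example, the Build number format field might be set to something like this (the major and minor values here are arbitrary; %build.counter% is Team City's incrementing counter):

```
1.0.%build.counter%.0
```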



Assembly Info Patcher Build Feature
Under Build Steps you will need to enable the Assembly Info Patcher as an Additional Build Feature. This will set the assembly version of our project (and all projects in our solution) to be the Build Number set by Team City.





Build Step 1 (Publish)
Here, the only important setting is to specify 'Publish' as the target of the MSBuild task for the project we want to deploy.




System Properties under Build Parameters
These properties tell the build to set the Application Version and the Minimum Required Version to the same value. This is what keeps the application up to date: by specifying that the minimum required version is the same as the latest version, clients are forced to update.
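As a sketch, assuming the standard MSBuild ClickOnce property names, the build parameters would look like:

```
system.ApplicationVersion = %build.number%
system.MinimumRequiredVersion = %build.number%
```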




Also use: system.Configuration = Release



Publish Url
The remaining element of this approach is to publish the necessary installation files. I have specified http://mypublishurl.com/ as an example. In reality, this URL could simply be the address of a virtual directory pointing to a folder where all the output of the build is copied.


Build Step 2 (Copy files to Publish Path)
Therefore, the last step of the Team City build process is to copy the output to the directory our Publish Url is serving. The output required is found in the app.publish folder of our bin directory. 



Installing

You should now be able to install your application by accessing the setup.exe generated by the Publish build step. Hit http://mypublishurl.com/setup.exe and you should be greeted with a download dialog. After continuing with the install, the application will run immediately and you should be left with a start menu entry under the publisher name you specified.




Updating

Trigger a new build of the build configuration in Team City. This will increment your build number and re-deploy the application to the publish path. Wait for the build to complete and open your application from the start menu again. It should now attempt to update itself.




Scheduling

This is easily achieved with Windows Task Scheduler. You will just need to point it to the .application file in your start menu.






Wednesday, 14 November 2012

How to set up git mergetool

1. Download and install DiffMerge

2. Find and open your .gitconfig file. If you installed Git using GitHub for windows then it should be in C:\Users\your_user_name\.gitconfig.

3. Append the following:

[merge]
    tool = diffmerge
[mergetool "diffmerge"]
    cmd = \"C:\\Program Files\\SourceGear\\Common\\DiffMerge\\sgdm.exe\" --merge --result=\"$MERGED\" \"$LOCAL\" \"$BASE\" \"$REMOTE\"
    trustExitCode = true
    keepBackup = false

Adjust the exe file path if it has changed since version 3.3.2 or if you have the 32-bit version on a 64-bit machine.

4. When a conflict happens after a pull or merge, just type git mergetool at the command prompt and you'll get diffmerge all up in your grill. The left column is your version, the right column is the remote version, and the center column is the result of the merge. Complete the merge, save and close. No other git commands are required after closing.

5. That is all.

ps. you can use any diff tool you want - just adjust the config accordingly.

Thursday, 8 November 2012

DDD Notes Part III

Part III Refactoring Toward Deeper Insight

8 - Breakthrough

- Breakthroughs cannot be forced; they usually occur after many modest refactorings.
- Concentrate on knowledge crunching and cultivating robust Ubiquitous Language.
- Watch for opportunities to gain deeper insight and do not hesitate to make even minor improvements.
- Breakthroughs can lead to a cascade of more breakthroughs.


9 - Making Implicit Concepts Explicit

- Developers need to be sensitive to the hints revealing implicit concepts, and sometimes search them out.
- Listen to the language the domain experts use. It is a warning when the experts use vocabulary that is not present in the design.
- Watch for puzzled expressions on domain experts' faces when you mention particular phrases.
- When the missing concept is not on the surface of conversations with domain experts, try digging around the most awkward part of the design - i.e. the place that is hardest to explain.
- Actively engage domain experts in the search, play with ideas or use them for validation.
- Usually, contradictions between opposing domain experts can reveal deeper insight into the domain.
- Business books on the domain are a great source of information when the domain expert is not available.
- Explicit Constraints, such as invariants, should be extracted in the design so that their intention is more obvious by using descriptive method names within the object.
- Constraints do not belong in the object if they require data that doesn't fit in the object's definition. Or if requirements conversation revolves around the constraint but they are hidden in procedural code within the object.
- The Specification Pattern provides a way of expressing conditional rules explicitly in the model (predicates) whilst keeping the logic in the domain layer.
- Specifications determine whether an object satisfies a condition, the logic having probably been extracted from an attribute or method of the object.
- Specifications work well with repositories by specifying criteria to select against, esp with ORMs.
- Specs defined in terms of interfaces decouple designs and help clear logjams by using prototypes.
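A minimal sketch of the Specification pattern (Python for illustration; the invoice rules and class names are invented, not from the book):

```python
class Specification:
    """Expresses a conditional rule (a predicate) explicitly in the domain layer."""
    def is_satisfied_by(self, candidate) -> bool:
        raise NotImplementedError

    def and_(self, other: "Specification") -> "Specification":
        return AndSpecification(self, other)

class AndSpecification(Specification):
    """Combines two specifications so rules read like sentences."""
    def __init__(self, left, right):
        self.left, self.right = left, right

    def is_satisfied_by(self, candidate):
        return self.left.is_satisfied_by(candidate) and self.right.is_satisfied_by(candidate)

# Illustrative domain rules: an invoice is chased when overdue AND large.
class Overdue(Specification):
    def is_satisfied_by(self, invoice):
        return invoice["days_outstanding"] > 30

class LargeAmount(Specification):
    def is_satisfied_by(self, invoice):
        return invoice["amount"] > 1000

needs_chasing = Overdue().and_(LargeAmount())
assert needs_chasing.is_satisfied_by({"days_outstanding": 45, "amount": 5000})
assert not needs_chasing.is_satisfied_by({"days_outstanding": 10, "amount": 5000})
```

The same combined specification could be handed to a repository as selection criteria, which is where the pattern pairs well with ORMs.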


10 - Supple Design

- Relieve the client from having to know about the implementation by using Intention Revealing Interfaces.
- Describe purpose and effect but not how it is achieved and always with Ubiquitous language.
- Functions do not affect the state of an object and are therefore 'side-effect-free'.
- Functions should be able to be called repeatedly and nested deeply. Functions are predictable and safe.
- Commands cause change of state and should be kept separate from queries (CQRS). Commands should not return domain data - leave that to functions.
- Where possible, return Value Objects from functions so you don't have to maintain their life-cycle. Furthermore, all Value Object operations should be functions, as Value Objects are immutable.
- Move complex queries into Value Objects to reduce side effects.
- Make invariant conditions explicit by unit testing and making assertions.
- Search for meaningful units of functionality to make designs flexible and easier to understand.
- Use your intuition about the domain to decompose design elements (classes/aggregates) into cohesive units.
- Changes are more easily applied to Conceptual Contours and they are easier to combine and reuse.
- Standalone classes with zero dependencies other than primitives are easier to understand and reduces mental overload by having extremely low coupling.
- Every dependency is suspect until proven to be fundamental to the concept of the object.
- Prioritise the most complex computations into standalone classes (value objects?) and allow other classes to depend on them.
- Operations whose argument and return value are of the same type are said to be closed under the set of instances of that type. Closed operations do not introduce further dependencies.
- Closed Operations can operate under abstract types.
- Declarative design can be burdened by frameworks and DSLs. Use only lightweight plugins that solve a particular mundane problem well and that leave you to design more accurately.
- The specification pattern can be very useful for declarative design because individual specifications can be combined and chained to produce meaningful sentences using Ubiquitous Language.
- When trying to achieve Supple Design start with identifiable subdomains (conceptual contours?) and tackle problems step by step. It is better to make a big impact on one area, making some part of the design supple, rather than to spread efforts thinly.
- Draw on established formalisms of the domain when you can.
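Several of these points (side-effect-free functions, immutable value objects, closure of operations) can be illustrated in one small sketch; the Money type is a stock example, not from this chapter:

```python
from dataclasses import dataclass

# Immutable value object: operations return new values instead of mutating state.
@dataclass(frozen=True)
class Money:
    amount: int
    currency: str

    def add(self, other: "Money") -> "Money":
        # Closed under Money: argument and return value share the type,
        # so calls nest freely without introducing new dependencies.
        if self.currency != other.currency:
            raise ValueError("currency mismatch")
        return Money(self.amount + other.amount, self.currency)

a = Money(10, "GBP")
total = a.add(Money(5, "GBP")).add(Money(5, "GBP"))  # nested, side-effect-free calls
assert total == Money(20, "GBP")
assert a == Money(10, "GBP")  # the original value is untouched
```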


11 - Applying Analysis Patterns

- Analysis Patterns are not technical solutions but guides to help you work out the model.
- Analysis patterns can carry tried and tested experience of implementation and maintenance from a mature project and bring model insights, direction & discussion.
- Analysis Patterns provide cleanly abstracted vocabulary.
- Development can be stimulated by using analysis patterns; results often resemble the pattern's form adapted to the circumstances, but sometimes the patterns can inspire development in other directions.
- It is wise not to adjust the basic concept of an analysis pattern, so as to keep its well-understood terms and phrases consistent in your project. If model definitions change, then keep class/method names up to date.
- Analysis patterns focus on the most critical decisions and offer choices. They anticipate downstream consequences that are expensive to discover yourself.


12 - Relating Design Patterns to the Model

- Design patterns present design elements that have solved problems before in purely technical terms. They can be applied to the model because they correspond to general objects that emerge in domains.
- The conventional use of the strategy pattern focuses on substituting different algorithms. Its use as a domain pattern, however, focuses on its ability to express a process or policy rule as a concept.
- When a technical design pattern is used in the model we need another motivational layer of meaning to correlate actual business strategy to make the pattern more than just a useful implementation technique.
- A consequence of the strategy pattern is that it can increase the number of objects in the design and requires clients to be aware of the different strategies. This can be mitigated by implementing strategies as stateless objects that contexts can share.
- The Composite pattern offers the same behaviour at every structural level of an inheritance hierarchy.
- The Flyweight pattern has no correspondence to the domain model; its only value is at the design level when implementing value objects. The Composite pattern differs, as it is for conceptual objects composed of conceptual objects of the same type.
- This is what makes the Composite pattern a domain pattern: it applies to both model and implementation.


13 - Refactoring Toward Deeper Insight

- Is a multifaceted process: Live in the Domain > Keep looking at things differently > Maintain an unbroken dialogue with Domain experts.
- Seeking insight into the domain creates a broader context for the process of refactoring.
- Refactoring can begin in many ways - usually as a result of some awkward realisation or difficulty implementing a reasonable requirement, divergence of language, or any source of dissatisfaction.
- Exploration teams can be tasked with thinking about how to improve the model. These teams need to be self-organised and small, meeting for short, frequent periods. The meetings should be spaced out over a few days to allow time to digest the new model. Brainstorming sessions should always exercise the Ubiquitous Language. The end result is a refinement of the language, to be formalised in code.
- Knowledge should be sought from other sources in the domain to gain insight. It is always possible that the domain has already got well defined concepts and useful abstractions.
- Refactoring toward deeper insight both leads to and benefits from a supple design. A supple design communicates its intent and limits mental overload by reducing dependencies and side effects, being fine-grained only where it is most critical to users.
- Continuous refactoring has to be considered a best practice and part of the ongoing exploration of the subject matter, education of developers and meeting of minds between developers and domain experts. Take any opportunity to refactor that benefits the integrity of the design and consolidates the team's understanding.
- Development suddenly comes to the brink of a breakthrough, plunges through to a deep model, then again begins a phase of steady refinement.


Wednesday, 7 November 2012

WebAPI ActionFilter Dependency Injection with StructureMap

This is a guide to performing dependency injection on a WebAPI ActionFilter. Specifically, an ActionFilter that implements System.Web.Http.Filters.ActionFilterAttribute, as this is designed to work on classes implementing ApiController and the methods of those classes. The IoC container library in this case is StructureMap.

Firstly, define your ActionFilter with its dependencies as properties with public setters. In this example our ActionFilter has a dependency on RavenDB's IDocumentStore. The purpose of the ActionFilter is to apply Aggressive Caching to the IDocumentStore.


We then need to write a custom IFilterProvider (again, of the System.Web.Http.Filters namespace) in order to provide us with ActionFilters with fully resolved dependencies. Our IFilterProvider uses the StructureMap container to 'build up' the ActionFilter's property dependencies via their public setter.

Our custom IFilterProvider also derives from ActionDescriptorFilterProvider, the default IFilterProvider that ASP.NET will use if none other is specified. This is necessary because we need access to the GetFilters() method to return the instantiated ActionFilters needed for the request. We then take each instantiated ActionFilter and pass it to StructureMap to inject the dependencies.


We then configure StructureMap to use our implementation of IFilterProvider when an instance is needed to create ActionFilters. What's interesting here is that we are using StructureMap to provide us with an implementation of something that will then do further dependency resolution for us.

Lastly, we also need to tell StructureMap which property types it should be looking to resolve dependencies for with public setters. All this configuration lives in the single StructureMap initialization step, but it is arguable that this should be done in the originating interface assembly IoC configuration.


Then we are able to decorate any ApiController implementation class or method with our AggresivelyCacheFor ActionFilter wherever we need Aggressive Caching on our IDocumentStore. Happy ActionFiltering!


Wednesday, 10 October 2012

Automock with StructureMap and Moq

This is a guide to using AutoMocking with StructureMap and Moq in conjunction with Cucumber-style BDD unit testing.

Taking the BDD GIVEN, WHEN & THEN scenario approach to unit testing has a number of benefits. I find it makes tests easier to maintain, as you stick to one assertion per test. The test scenarios are well partitioned, making them cleaner than the straightforward Arrange-Act-Assert style. It also produces more readable test-result output.

Automocking is a time-saving feature of StructureMap that allows you to easily instantiate the Class Under Test (the class that will be used in the GIVEN statement of the scenario) by automatically resolving its constructor dependencies with mocked-out instances. Test classes therefore become decoupled from the constructor(s) of the Class Under Test, meaning that if you change a constructor you get a failing test instead of a test that won't build. This is a debatable point but can be considered an advantage in TDD - you get feedback from your tests as opposed to the compiler, as it should be.

By installing the StructureMap.AutoMocking and Moq packages you get out-of-the-box support for AutoMocking. Given that I regularly use StructureMap and Moq as my IoC and mocking frameworks of choice respectively, it makes sense to take advantage.


An Example

In the example class to test there are two dependencies that will need to be automatically mocked, and a method that should be tested.


The base class provides us with the GIVEN & WHEN syntax. It uses the MoqAutoMocker provided by the StructureMap.AutoMocking library to mock any dependencies on our class's constructor, and provides a handle to access any mocked dependency that was injected when the class was instantiated. Also provided is a shortcut to Moq's Verify method, as well as some syntactic sugar that lets you use [Then] instead of [Test] to round off the BDD style.


The example unit test class implements the abstract class and is named according to the method being tested. There should be one class per method being tested - i.e. one WHEN statement per scenario. If there is another public method it should be tested in a new class that also implements the base class. This practice is what makes the test output easy to digest.




Tuesday, 2 October 2012

DDD Notes Part I & Part II

Part I - Putting the Domain Model to Work


1 - Crunching Knowledge

- Communication with Domain Experts is mandatory to accumulate domain knowledge.
- Initial conversations should revolve around an emerging language based on the model.
- Experimenting with different ideas about the model is important.
- Layers of feedback erode the details of the model; only business people and developers are needed.
- Learning has to be continuous as the business changes and domain evolves.


2 - Communication and the Use of Language

- Domain experts and developers need to speak the same language.
- The language of the model should be reflected in the code.
- Using the language will force model weaknesses into the open and reveal gaps to be plugged.
- The model should be the backbone of the ubiquitous language; a change in the language means a change in the model.
- If experts don't understand the model there is something wrong with the model.
- Diagrams are useful but should be temporary and serve only to communicate the present.
- Documents are useful if they add to the model.


3 - Binding Model and Implementation

- Model driven design removes the separation of analysis models and design models by having a single model that fits both purposes and makes for a very relevant model.
- If a model does not faithfully express key domain concepts, a new model is needed.
- The model must support the ubiquitous language and start with the domain reflected in code in a very literal way that can be improved as deeper insight is gained. This way the code becomes an expression of the model.
- OOP is a great paradigm for expressing models as it inherently supports objects as opposed to procedural languages.
- It is usually a good idea not to hide the inner workings of the system from the user, as doing so prevents the user understanding the model. Understanding the model usually benefits users, as they can reason about it and make predictions.
- The people involved in modelling also need to take part in crafting the code. There are subtle relationships in code that carry important information about the model and are difficult to communicate verbally.


Part II - The Building Blocks of a Model-Driven Design


4 - Isolating the Domain

- A layered architecture is the most appropriate when trying to employ DDD.
- The Domain layer is where the model lives.
- A separate domain layer allows a model to evolve to be rich and clear enough to capture business knowledge.
- Keeping it separate from infrastructure or UI concerns is imperative.
- Smart UI designs can be practical but are not sustainable long-term.


5 - A Model Expressed in Software

- Entities require an identity so they can be uniquely referenced throughout their lifetime.
- Entity class definitions should be kept focused on life-cycle continuity and identity, i.e. the properties that make the class findable, plus intrinsic characteristics.
- Add behaviour to entities that is essential to the concept, move non essential attributes into other objects associated with the core entity.
- Value objects have no identity as it is not important to tell them apart.
- Objects without identities are more transient and can allow for better system performance without the burden of being identifiable. Applying identities where not needed also confuses the design.
- In value objects we only care about the attributes of the class and they should always be immutable - the value object should be replaced rather than manipulated, e.g. address.
- Value objects can be shared where it is acceptable in the model and of benefit to system resources but is difficult in distributed systems. Shared value objects must never be mutable.
- Services represent valid ways of capturing behaviour that do not belong to an object.
- It is easy to create a service for behaviour that should belong to an object, but forcing behaviour into an object where it doesn't belong can distort the model.
- Service operations should be part of the ubiquitous language.
- Services are stateless and have interfaces defined in terms of other domain elements e.g. a bank transfer service deals with other domain elements - money (value) & accounts (entities).
- Modules should be treated as fully fledged parts of the domain and organise objects.
- Code divided into modules means being divided into logical domain concepts. This helps coherent thinking about few topics at once and helps to reason about the system as a whole in terms of its modules.
- Modules should act as a communication mechanism and support the ubiquitous language by reflecting insight into the domain.
- Mixing paradigms should only be considered when all other options for using the primary paradigm for the module in question have been exhausted.
- One way to ease the friction on heterogeneous models is to use the same language in all paradigms.
- Paradigms that do not allow expressive representation of the domain with ubiquitous language should not be used.
Slowly learning that the struggle should always be to use the tools in the best possible way to fit the model, not to adjust model problems so that they are easily solved with our tools.
What happens when domain experts do not know the model well enough to answer deeper questions of things the business hasn't decided yet? 
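The entity versus value object distinction above boils down to identity equality versus attribute equality; a sketch (Python for illustration, with invented names):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Address:
    """Value object: immutable, compared by its attributes alone."""
    street: str
    city: str

class Customer:
    """Entity: compared by identity; its attributes may change over its lifetime."""
    def __init__(self, customer_id: str, address: Address):
        self.customer_id = customer_id
        self.address = address

    def __eq__(self, other):
        return isinstance(other, Customer) and self.customer_id == other.customer_id

home = Address("1 High St", "Leeds")
assert home == Address("1 High St", "Leeds")  # same attributes, same value

c1 = Customer("c-42", home)
c2 = Customer("c-42", Address("9 Low Rd", "York"))
assert c1 == c2  # same identity, despite different attributes

# "Changing" a value object means replacing it, never mutating it:
c1.address = Address("2 High St", "Leeds")
```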

6 - The Life Cycle of a Domain Object

- Invariants are ways to maintain business rules within an Aggregate.
- An aggregate is a cluster of associated objects that we treat as a unit for the purposes of data changes.
- An aggregate root is the one entity (of possibly many) in the aggregate that entities outside the aggregate can hold a reference to.
- That means the other entities inside the aggregate only need identifiers distinguishable inside the aggregate, as nothing outside can access them.
- Aggregate roots are the only objects in the aggregate that can be accessed directly from the database. A delete operation must remove everything within the boundary.
- Invariants (consistency rules) involve relationships between members of the aggregate and must be enforced by persisting the aggregate as a whole.
- Decouple elements to keep clean distinctions between things inside and outside the aggregate boundary.
- Factories remove the need for an object's clients to know its inner workings.
- They are not intrinsically part of the model but they are a reasoned necessity of the design.
- A factory should be an atomic operation that creates a valid entire aggregate for an entity with all invariants satisfied.
- Factories can over-complicate a simple design, but the threshold for choosing a factory method is low.
- Factory parameters should be from a lower design layer or basic objects and abstract types.
- Objects don't need to carry around logic that will never be applied in their active lifetime, so the factory can be a good place to host the responsibility of enforcing invariants (especially if the objects are immutable).
- Factories are also suitable for reconstituting inactive objects, i.e. loading from the db.
- Infrastructure makes it easy to traverse dbs, but developers have to try to keep the model intact.
- Use repositories to provide access to aggregate roots. Only ever traverse via the root to access objects internal to the aggregate.
- Repositories help maintain focus on the model by emulating an in-memory collection of all entities of a type.
- Repositories decouple domain design from persistence technology (facade) and allow for easy substitution.
- The specification pattern helps complex querying whilst maintaining the model.
- Keep mappings between objects and tables simple when designing relational dbs by compromising normalisation in favour of tight coupling between model and implementation.
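A small sketch of an aggregate root enforcing an invariant, fronted by a repository emulating an in-memory collection (all names and the credit-limit rule are invented for illustration):

```python
class OrderLine:
    """Internal to the aggregate: only needs a locally unique identifier."""
    def __init__(self, line_no, amount):
        self.line_no, self.amount = line_no, amount

class Order:
    """Aggregate root: the only entity referenced from outside the boundary."""
    CREDIT_LIMIT = 1000  # invariant enforced across the whole aggregate

    def __init__(self, order_id):
        self.order_id = order_id
        self._lines = []

    def add_line(self, amount):
        if sum(l.amount for l in self._lines) + amount > self.CREDIT_LIMIT:
            raise ValueError("credit limit exceeded")
        self._lines.append(OrderLine(len(self._lines) + 1, amount))

    @property
    def total(self):
        return sum(l.amount for l in self._lines)

class OrderRepository:
    """Emulates an in-memory collection of aggregate roots."""
    def __init__(self):
        self._store = {}

    def save(self, order):  # the aggregate is persisted as a whole
        self._store[order.order_id] = order

    def get(self, order_id):
        return self._store[order_id]

repo = OrderRepository()
order = Order("o-1")
order.add_line(600)
order.add_line(300)
repo.save(order)
assert repo.get("o-1").total == 900
```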


7 - Using the Language: An Extended Example

- An Anti-Corruption Layer can prevent external systems that are not based on the same domain model from bleeding into your design.
- Enterprise Segments can be useful value objects for the design to allow communication between modules and preserve the ubiquitous language.
- The application layer should orchestrate the different elements of the domain but should not contain any business logic.
Should repositories be placed in a data access layer or do they belong inside the conceptual modules along with the entities they are for? 
Could (or perhaps, should) there be an object oriented language or framework with these concepts of entities, values, aggregates, factories, repositories, anti-corruption layers as first class elements? We would lose precision but we would have a platform that is built for DDD?

Thursday, 27 September 2012

AppHarbor Environment Management

At our place we have been lucky enough to work on some greenfield projects that we chose to host in the cloud with AppHarbor. As it was our first foray into cloud hosting we hosted test and staging environments internally and used AppHarbor only to host our production environments.

Recently, we moved all our stage environments into AppHarbor too. So far it's been a really good move and we're wondering why we didn't do it sooner.


Benefits

  • AppHarbor provides instantly available environments and infrastructure.
  • We get the best possible candidate for a staging environment, as it's an exact replica of production (even to the extent of load/performance testing).
  • Our config settings are in AppHarbor so we no longer have to manage tricky config transformations, leaving us with only one web.config.
  • We have been relieved of the time spent configuring each application's continuous integration internally. Configuration time is required on AppHarbor but the overhead is not nearly as much and is mostly GUI driven.
  • It has made for a very clean continuous deployment workflow, with an extra 'sanity-check' step before deploying to production (instead of just git push).
The one downside we have experienced so far is that we simply have more applications to manage in AppHarbor. This was my initial reason for not wanting to duplicate applications in the name of simplicity. But, all things considered, what we have now is much less complex than before. For reference, there exists a great deal of information from the horse's mouth to get started with.


Creating a New Environment In AppHarbor

It sounds strange but AppHarbor does not provide explicit support for multiple environments for your application. It's down to how you configure the different applications under your account, i.e. making use of a convention. AppHarbor simply provides hosting (domain name and server side processing), builds & tests code before deploying, and the means to apply infrastructure 'Add-ons'. You just need to create a new application for the same code base when you want a different environment.

Getting your first application up and running is only possible if you are using one of AppHarbor's supported version control systems. We're using Git (and GitHub) and follow the convention of having at least two branches for each repository - develop and master. There are continual commits and pushes on the develop branch, and we merge develop into master when we're happy with it. This development workflow suits our staging and production needs also.
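As a sketch, the branching convention above boils down to a handful of git commands (the repository and file names here are hypothetical):

```shell
# Sketch of the develop/master convention described above.
# Repository and file names are hypothetical.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q myapplication
cd myapplication
git config user.email "dev@example.com"
git config user.name "Dev"
git checkout -q -b master
echo "stable" > feature.txt
git add feature.txt
git commit -qm "initial commit on master"

# day-to-day commits happen on develop...
git checkout -q -b develop
echo "in progress" > feature.txt
git add feature.txt
git commit -qm "work on develop"

# ...and develop is merged into master only when we're happy with it
git checkout -q master
git merge -q develop
```

From here, develop-myapplication builds from develop and master-myapplication builds from master.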

We therefore need one application in AppHarbor per Git repository branch. We prefix the application name with the branch it builds from. So we have develop-myapplication and master-myapplication, with respective host names develop-myapplication.apphb.com and master-myapplication.apphb.com. The distinction in the URL makes it clear which branch you are looking at. The develop application is our stage environment, the master is the production environment.

The basic model for deploying with AppHarbor is to add it as a remote repo and push directly to it. However, I mentioned we are using GitHub, and we use this to our advantage. A GitHub (GH) repository can be linked to an AppHarbor (AH) application so that pushing to your GH remote origin will automatically trigger another push to your AH repo, which then builds and deploys. This saves us having to push to GH and also to AH. However, this can only be configured for one of the branches. The deployment frequency will be much higher on the develop branch, so this is what we set the tracking branch to be in our AH settings.

The fact that we get this very simple 'push to deploy' feature is very beneficial in our stage environment. That it can only be used once is also of benefit because it prevents us pushing to master and triggering a master (production) build accidentally or too eagerly. When we want to deploy master - which is less often than develop - we have an extra 'sanity-check' step, which is to push the master branch to our remote AppHarbor repository. This simply reverts to the basic model for the master branch. This is the git command line for pushing to AH.

# only need to call the line below once to add appharbor as a remote repo
git remote add AppHarbor https://apphb.com/master-myapplication-repository-url 

# push your master branch to the added appharbor remote repo
git push AppHarbor master

The diagram below shows the lines of communication between Local, GitHub and AppHarbor repositories in each environment.





Trimming Down Environments

Our stage environment has also become our first phase testing environment. In other words, it is now playing the role of both TEST and PRE-PROD environments. This was also an initial deterrent, but it has actually not brought any pain (yet). The classical model is great and well worth investing in if you have an appropriately complex application and domain, but having unique environments for each purpose (TEST, QA, UAT, STAGE etc.) takes time to configure (not to mention getting the resources). Also, considering we don't yet have any integration tests or formal QA people, we reasoned that we could simply do without the extra environments.

So now, having done away with TEST, we just have STAGE and PROD and it's working just fine. Testing is performed by developers and stakeholders on our AppHarbor hosted STAGE environment (develop-myapplication). If everyone is happy with the changes we deploy to production (master-myapplication).

Instead, this model has given us the freedom to quickly generate environments for feature branches. If we need to demonstrate, test or stage a feature branch we create a new application and prefix the application name with the name of the feature so it is available with a distinct url e.g. coolnewfeature-myapplication.apphb.com. So we have moved from a 'vertical' environment stack (TEST > QA > STAGE > PROD) to having more 'horizontal' feature driven environments.


A Proper Staging Environment

The point of a staging environment is to be identical to live; we all know why this is a good idea. We have already found bugs that we would not have found until deploying to production. A good example is that the default AuthorizeAttribute does not work on AppHarbor - you need to use a custom attribute. Whilst I acknowledge this is not a ringing endorsement of AppHarbor, it is a good example of why a proper staging environment is important. We also get the environment made instantly available to us - no more hassling IT Ops for extra boxes or getting that internal DNS updated.

Your infrastructure add-ons don't need to be bulked up to the same scale as your production environments. E.g. we have a stage environment with a free 20mb database, but production has 10gb which costs a few bob. However, should you want to performance test in your stage environment, just assign it the same number of web workers and make sure both apps are hosted in the same region. You can then take advantage of the Blitz add-on that can tell you how well your app copes with traffic surges.

Creating a stage environment in the cloud does mean that it's publicly available. If this is not in your interest there is a brief discussion here around how you might want to restrict access to your public staging environment.


Application Settings

AppHarbor lets you manage Configuration Values (the app settings in the web.config) from within AH itself as a simple key-value collection with a straightforward GUI. After each successful build the keys are matched to your web.config <appSettings> and any existing value is replaced with the value managed in AppHarbor - or added if no key is found. Then the code is deployed with the transformed config. This means you no longer need to maintain web.config transformation files like web.release.config. Whilst AppHarbor does in fact support config transformations if a web.release.config file is found, I have found it easier to take these files out of the picture altogether.

Some would argue that the transform files belong in source control, but I can do without the burden of configuring the tricky XML transformation syntax, even though AppHarbor kindly provides a tool to test your config transforms. It is important to note that AppHarbor's replacement only works for the <appSettings> in the web.config and not <connectionStrings>. So if you have a connection string that you want managed by AppHarbor, put it in the app settings instead.
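As a sketch, such a connection string might simply sit among the app settings so that AppHarbor can manage it (the key and value here are hypothetical):

```xml
<!-- Hypothetical web.config fragment: the connection string is kept in
     <appSettings>, which AppHarbor replaces after each build, rather than
     in <connectionStrings>, which it does not. -->
<appSettings>
  <add key="RavenDb.ConnectionString" value="Url=http://localhost:8080" />
</appSettings>
```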

What's also great is that any add-ons that require app settings or connection strings, such as databases or queue systems, auto-magically populate your AppHarbor app settings collection with the necessary key-value pair. The keys are constant between applications so you can add RavenDB to your develop and master applications and never have to touch your local web.config! We happen to use our stage RavenDB instance to develop with locally, so we do copy the connection string value into our web.config app settings. This could be considered bad practice as you would probably be better off having a separate DB instance to develop with, but for our small team it was just the easier option.

App settings are applied after every successful build and then deployed. If you change a config setting you need to redeploy your application for it to take effect - there is no hopping onto the box to adjust the web.config! This sounds scary, but as I've blogged before, config settings should only be used for environment variables and not for things that remain constant between environments (such as ad hoc text or prices). With this in mind, AppHarbor helps to keep what should be a constant a constant, rather than a config setting that never changes between environments. If your constants change you redeploy, without concerning yourself over whether your app settings have taken effect. Generally your app settings won't change all that often if they are being used correctly. For extra reading, here's a good discussion about how to keep sensitive config settings private.


TeamCity

We are still using good old TeamCity for our first line of defence, building the develop branch after each push and running all unit tests. AppHarbor does this too, but we get better notifications if anything goes wrong, as well as unit test coverage statistics.

There will be an issue if you are using TeamCity 7 as your internal NuGet package server and it is running on an internally hosted system. You need the NuGet server of TC to be externally visible to AppHarbor so it can download packages with your build. You can't just spin up TeamCity on AppHarbor, so your only option is to put TC on a publicly visible IP and be sure to authorize all NuGet access. Or open source it and get it on CodeBetter's TC!


Conclusion

There are plenty of reasons why we should be using cloud hosting, and we had the luxury of not having to integrate with any legacy systems or databases behind a firewall. Having taken advantage of this fact, I would encourage teams to break free from internal infrastructure hassles and move into instant continuous integration with AppHarbor, or any integrated cloud hosting provider.

Monday, 27 August 2012

App Settings are for Variables that Change with the Environment

My opinion is that app settings (or web.config settings) are generally overused. It is usually reasonable for developers to expect that certain values in their code will change in the future. The temptation is to capture all these variables in the app settings. Doing so however, is a mistake.

The only values that belong in application settings are things that will change with the environment, i.e. TEST, STAGE, PRODUCTION.

Everything else, including the things we think (or even know) will change, belongs directly in source code. I've learned this rule only over time, and I think it's reinforced by the YAGNI principle.


Explanation

Sometimes business requirements are delivered in such a way that they barely need translating into a model or domain or even into some syntax. They will usually be very short user stories, and not scored very highly. Examples include:
"As a patient I want a reminder 1 hour before my appointment." 
"As a broker I want the message push interval to be 10 seconds." 
"As a user I want the warning message to read: 'Are you sure you want to delete?'"
The important thing to notice about these stories is the precision they provide. Product owners like precision. They like to know exactly what their product will or won't do. Developers, however, like abstraction. They like removing precision and extracting the general rules behind the process. Developers see a rule like '... a reminder 1 hour before my appointment' and take from it the underlying purpose of the story: 'something must happen at a certain time before something else happens'.

There are clearly a number of ways in which sending the reminder could be implemented, depending on the delivery mechanism or format, which I won't discuss. But what is clear is that our Product Owner wants the reminder sent 1 hour prior to the appointment. Most developers would look at this requirement and expect it to change. The one certainty about any piece of software is that it will need to change.

The temptation therefore is that the 1 hour (or 60 minutes) should not be hard-coded - if the value is going to change, then let's keep it somewhere where it's easy to change, like the app settings, right? Wrong.

There are two reasons for this: Unit Tests and Continuous Delivery.


Unit Tests

The user story in question is business value waiting to be delivered. We want to be absolutely sure that we are delivering that value. If we ship code that doesn't have a passing unit test titled "A_Reminder_Is_Sent_60_minutes_before_appointment" then we haven't succeeded. (Or perhaps, closer to resembling conceivable objects, something like "Reminder_Delivery_Time_Is_60_minutes_before_appointment_time".) We should be sure that no other changes we might make in future will interfere with this piece of logic.

You might argue that we could write a test that reads this value out of our app settings. But then what you would have is an integration test. The reason we want to capture this behaviour in a unit test is to get the answer we want in the shortest feedback loop possible. Also, we don't want to have to maintain a duplicate app settings file in our unit test project.


Continuous Delivery

The practice of automated deployment is becoming more common. As Scott Hanselman says, "If You're Using XCOPY, You're Doing It Wrong". I would take this further and say that if you are still RDC'ing onto the live server to edit a config setting (like changing 60 minute reminders to 120 minute reminders) then you're also doing it wrong. This means the ONLY way to deploy new code (or changed code) is to use an automatic task, and never to tamper with the live servers. In a lot of cases developers do not even have access to the live servers.

So now that our app settings file is no longer accessible, it is certainly not easier to change. If we want to change our reminders from 1 hour to 2 hours we have to do so locally, commit and deploy.

Not forgetting our unit test of course - if we made our change in code and ran our tests we should expect a failure. This is important. We have changed the rules of the system and should expect to have to update our unit test accordingly. The new rules are then captured and protect our logic against future unexpected changes.


Environments

So when are App Settings useful? The purpose for app settings in my opinion is for values that change with the target environment. Connection strings are a perfect example of this. In a TEST environment we would expect to talk to a different database than a PRODUCTION environment.

This is where app settings and config transformations come in very handy. One rule to decide by is this: if your app setting is the same in every target environment, then it doesn't need to be an app setting.
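For illustration, a minimal web.release.config transform that swaps in the production connection string might look like this (the names and values are hypothetical):

```xml
<?xml version="1.0"?>
<!-- Hypothetical web.release.config: replaces the connection string with
     the PRODUCTION value when the Release transform is applied. -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <add name="MainDb"
         connectionString="Server=prod-db;Database=MyApp;Integrated Security=true"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>
```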


What about Code Reuse?

Keeping values in one place is very useful, but it doesn't mean they belong in the app settings. Globally accessible static classes with constant fields are a perfectly acceptable way of maintaining consistency and adhering to the DRY principle.


When App Settings become a Feature

If we begin to update changing values on a regular basis then the overhead of updating unit tests and deploying can become a burden. At this point we have learned that the ability to update such settings needs to become part of the application. A discussion should be had with the Product Owner over the time spent redeploying versus time invested in providing a new feature to allow users to update values without requiring developer time.

Conclusion

App settings are a tempting place to put variables that we expect to change. However, it is more valuable to 'hard code' such values, so that we can make valid assertions against the business rules 'at present'. It is in our interest to break a test if this value changes, and to know about it as soon as possible. We should also be deploying changes exactly the same way - whether the changes are in the app settings or source code. This means a change to the app settings file is no more difficult or time consuming than a change to the source code of our system. Target environment variables are the only valid use-case for app settings: to allow such values to vary (transform) between environments. App settings should only exist if the value changes between environments.

Monday, 6 August 2012

Specflow BDD test with HttpClient

I've been using Specflow for a while now for various BDD testing scenarios. However, until now I'd not come across an example that I'd like to blog about.

Enter: HttpClient and HttpRequestMessage, included in .NET 4.5 under the System.Net.Http namespace.


Scenario

The story starts with the requirement to drop a client cookie so we know which users have visited the site before, and to perform some basic logging of the visit details: query string information, referrer, etc.

Straightforward enough, but the complexity arises because we want a single Visit Tracking Utility that spans multiple sites, each on a different platform and written in different languages. Some are even 3rd party sites, so we have minimal control over the HTML displayed - and zero control over what's going on at the server. In order to achieve this platform-agnostic solution we need to solve the problem with JavaScript, as we can expect to have at least this much control over each target site. We also have some specification about the name and value of the cookie as well as the expiry date.

The technical requirement therefore is a URL available to make requests to that will drop a cookie on the client's machine called VisitId, with a Guid value, that expires in 7 days' time.
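Concretely, each request to that URL should come back with a Set-Cookie response header along these lines (the Guid and date shown are of course illustrative):

```http
HTTP/1.1 200 OK
Set-Cookie: VisitId=3f2504e0-4f89-11d3-9a0c-0305e82c3301; expires=Mon, 13-Aug-2012 10:30:00 GMT
```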


The BDD Test

If you are not already, please familiarise yourself with Specflow. The plan here is to capture the behaviour in a feature file from which we can make assertions based on our code. We begin by citing the scenario we want to test using the Gherkin Given/When/Then format. Specflow then allows us to attach code to the steps we dictate, and we insert the required code and necessary NUnit assertions to validate our scenario.

Starting with a new IntegrationTest project in the solution, we add a new VisitTracking feature file and enter the following scenario.

Feature: Visit Tracking
    In order to know which users have previously visited
    I want to drop a client cookie on each visit

Scenario: Setting client cookie for a new visit
    Given the api uri is local.trackmyvisit.com/api/trackvisit
    And the expected cookie name is VisitId
    When I hit the visit tracking uri
    Then the response HttpCode is OK
    And the response sets a cookie
    And the cookie name is correct
    And the cookie value is a valid Guid
    And the cookie expiry is 7 days from now

Each 'Then' and following 'And' gives a chance to make an assertion about the request we have made. We are testing exactly the things described in our specification.

If we build and try to run this now, Specflow will tell us that we are missing the steps specified in the scenario:

-> No matching step definition found for the step. Use the following code to create one:
[When(@"I hit the visit tracking uri")]
public void WhenIHitTheVisitTrackingUri()
{
    ScenarioContext.Current.Pending();
}

Copy all the necessary steps from the test output into a VisitTrackingSteps class. With a couple of extra helper methods and parameter arguments, your class should look like the following. Don't forget to use the [Binding] attribute before the class declaration or Specflow won't know where to look for the steps. Also, you will need to install the System.Net.Http package to get a reference to the HttpClient object et al.

[Binding]
public class VisitTrackingSteps
{
    private string _uri;
    private string _cookieName;
    private HttpResponseMessage _result;

    [Given(@"the api uri is (.*)")]
    public void GivenTheApiUriIs(string uri)
    {
        _uri = "http://" + uri;
    }

    [Given(@"the expected cookie name is (.*)")]
    public void GivenTheExpectedCookieNameIs(string cookieName)
    {
        _cookieName = cookieName;
    }

    [When(@"I hit the visit tracking uri")]
    public void WhenIHitTheVisitTrackingUri()
    {
        var client = new HttpClient();
        var msg = new HttpRequestMessage(HttpMethod.Get, _uri);

        _result = client.SendAsync(msg).Result;
    }

    [Then(@"the response HttpCode is (.*)")]
    public void ThenTheResponseHttpCodeIs(HttpStatusCode statusCode)
    {
        Assert.That(_result.StatusCode, Is.EqualTo(statusCode));
    }

    [Then(@"the response sets a cookie")]
    public void ThenTheResponseSetsACookie()
    {
        var isSetCookieHeaderPresent =
            !String.IsNullOrEmpty(GetValueOfSetCookieHeader());
        Assert.IsTrue(isSetCookieHeaderPresent);
    }

    [Then(@"the cookie name is correct")]
    public void ThenTheCookieNameIsCorrect()
    {
        var firstKey = ParseSetCookieValue().AllKeys.FirstOrDefault();
        Assert.That(firstKey, Is.EqualTo(_cookieName));
    }

    [Then(@"the cookie value is a valid Guid")]
    public void ThenTheCookieValueIsAValidGuid()
    {
        Guid guid;
        var guidValue = ParseSetCookieValue()[_cookieName];
        var isValidGuid = Guid.TryParse(guidValue, out guid);

        Assert.IsTrue(isValidGuid);
    }

    [Then(@"the cookie expiry is (.*) days from now")]
    public void ThenTheCookieExpiryIsDaysFromNow(int days)
    {
        var expiresValue = ParseSetCookieValue()["expires"];
        var expires = Convert.ToDateTime(expiresValue);

        // use the number of days captured from the step, not a hard-coded 7
        var rangeStart = DateTime.Now.AddDays(days).AddMinutes(-1);
        var rangeEnd = DateTime.Now.AddDays(days).AddMinutes(1);

        Assert.That(expires, Is.InRange(rangeStart, rangeEnd));
    }

    private string GetValueOfSetCookieHeader()
    {
        // TryGetValues leaves 'values' null when the header is absent, so
        // return null in that case rather than throwing a NullReferenceException
        IEnumerable<string> values;
        return _result.Headers.TryGetValues("Set-Cookie", out values)
            ? values.FirstOrDefault()
            : null;
    }

    private NameValueCollection ParseSetCookieValue()
    {
        var collection = new NameValueCollection();
        var cookieValArray = GetValueOfSetCookieHeader().Split(';');

        foreach (var arr in cookieValArray.Select(s => s.Split('=')))
        {
            collection.Add(arr[0].Trim(), arr[1].Trim());
        }

        return collection;
    }
}

You can see that in the 'When' method the HttpClient and HttpRequestMessage are used to initiate the call to the visit tracking site - at present not yet created. The SendAsync method of the HttpClient is called using a GET request to the path of the api controller method. This replicates exactly what the script tag will do from each web page it appears on. So we have been able to quickly conjure up an HTTP request without the use of Selenium or another web driver. The other nice part is that we can interrogate the response from the request by accessing the Result property of the generic Task we get from calling SendAsync.

The tests will build and run now but there is currently nothing to hit so the tests will fail. We can now implement the logic that will fulfil the spec.


Tracking Logic using an Api Controller

Our spec tells us we need a URI we can hit that will perform the visit tracking logic and write a cookie back to the client through the HTTP response. Furthermore, as there will be no JavaScript or HTML returned in the response (i.e. nothing for the client to parse), this sounds like a candidate for an MVC 4 Web API controller. The controller implements the exact requirements of the spec.

public class TrackVisitController : ApiController
{
    // call using GET method to /api/trackvisit
        
    public HttpResponseMessage Get()
    {
        var cookie = new HttpCookie("VisitId")
        {
            Value = Guid.NewGuid().ToString(),
            Expires = DateTime.Now.AddDays(7)
        };

        HttpContext.Current.Response.Cookies.Add(cookie);
            
        return new HttpResponseMessage(HttpStatusCode.OK);
    }
}

The unit tests for this method are easily fathomable, but this post is about integration testing, so that's what we'll focus on.


Script Tag

We then only need to tag each page we wish to track. The uri is used for the source of a script tag on each page we want our tracking system to work on.

<script src="http://test.trackmyvisit.com/api/trackvisit"></script>

All the source is available in my GitHub Repository. You will have to set up the site locally and adjust the host binding yourself if you want this to run out-of-the-box :-).