We (Frende) are hiring

Want to work for a high-performing and highly skilled team primarily focusing on (but not limited to) Microsoft .NET? To give you a high-level (technical) overview of what we’re currently working with, here’s a list:

  • C# 4.0 and Visual Studio 2010
  • TDD (Test-Driven Development)
  • Executable Specifications using SpecFlow (similar to Cucumber in Ruby)
  • DSLs
  • StructureMap (IoC)
  • NUnit
  • NHibernate (ORM) using Fluent NHibernate and LINQ to NHibernate
  • NServiceBus
  • jQuery
  • Continuous Integration using TeamCity
  • Continuous Deployment using Web Deploy
  • Source Control in TFS
  • Everybody is Pair Programming
  • Kanban

If you guessed from that list that we’re doing web development only, you’re spot on! There’s a variety of applications, ranging from an online insurance shop to backend sales applications.

Our team is eager to learn and to teach each other new techniques and technologies to improve planning, development, quality and delivery of the systems we create. As a member of our team you will be quickly integrated by participating in Pair Programming, and you’ll take part in decisions and discussions from day 1. We don’t believe in having dedicated experts, but rather generalists who are able to be part of the complete development process, resulting in reduced handoffs. That being said, we all have areas of interest that we focus on more than others.

Quality is something we take seriously, and therefore we use Executable Specifications (BDD) both to define requirements and to serve as a communication tool with the business. We also need to have tests for our code units, create good design and keep our code clean. That’s why we use a unit testing framework together with TDD.

Generally we’re looking for skilled developers. If you would like to work with the technologies mentioned earlier, have an interest in and are motivated by increasing quality through technical testing, and you are a team player – then you are hired! You will then be the team member with a special interest in driving our progress around Executable Specifications, TDD and any existing or upcoming tools that will improve our communication with the business as well as code and product quality.

Secondly, we also practice Continuous Deployment and Continuous Integration, have lots of VMs and have full control of our deployment environments. If you are a person who likes some Operations work from time to time in addition to development, you are our new team member!

Last but not least, Frende is a small and young insurance company where communication flows quickly and decision paths are short.

If this sounds interesting to you or you want to know more, drop me an email at jon@mydomain so we can have a chat.

There’s also a Norwegian ad out here: http://www.finn.no/finn/job/fulltime/object?finnkode=28416612

Continuous Deployment on Hanselminutes

I was recently interviewed by Scott Hanselman on Hanselminutes about Continuous Deployment (or No-Click Deployment as I called it). The interview is now available online: http://hanselminutes.com/default.aspx?showID=248

I know I have two parts left in my No-Click Deployment series, covering the Load Balancer and TFS build integration, so hopefully I’ll manage to get those out shortly.

No-Click Web Deployment – Part 2 – Web Deploy (a.k.a. msdeploy)

In Part 1 I hadn’t decided whether I was going to use Web Deploy as the base of this blog series or the PowerShell scripts I already had in production. I decided to give Web Deploy a chance. In the end I didn’t regret it, but I must admit it was not straightforward. Hopefully this post will make it a breeze for you :-)

Web Deploy With VS 2010 and TFS 2010

VS 2010 and TFS 2010 now come with Web Deploy integration, which works great for web apps of low to medium complexity. When I tried it out against my requirements I did not manage to get my solution to work, so I reverted to the command line version.

The documentation for this topic is also sparse, and it looks/feels unfinished. Here are some resources if you’re going down that road:

The Web Deploy MSBuild schema: C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\Web

The official doc: http://msdn.microsoft.com/en-us/library/dd394698.aspx

Web Deploy @PDC: http://www.microsoftpdc.com/2009/FT56

FAQ: http://blogs.msdn.com/b/aspnetue/archive/2010/03/05/automated-deployment-in-asp-net-4-frequently-asked-questions.aspx

MSBuild Web Deploy arguments + more resources: http://weblogs.asp.net/jdanforth/archive/2010/04/24/package-and-publish-web-sites-with-tfs-2010-build-server.aspx

THE commands

The two commands below solve all the requirements I listed in my previous post. I spent a great deal of time learning and studying Web Deploy in order to come up with this, so I hope you find it useful. It might not look too hard now, but nothing does when you have the solution in front of you :-) I’m always open to better ways if you find one though. If I could ask the Web Deploy team for a new feature, it would be to make the two commands below really simple.

The rest of this post will discuss these commands in detail. Values such as site names, paths, thumbprints and passwords can be replaced with your own.

Create package:
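Pieced together from the dissection later in this post, the create-package command looks roughly like the sketch below. The package path, site name, thumbprint and passwords are example values, and the exact flag spellings should be verified against the msdeploy documentation (the -replace rules for the certificate hash, log directory and virtual directory root are covered further down):

```shell
msdeploy.exe -verb:sync
    -source:manifest=PackageManifest.xml
    -dest:package=E:\Packages\blog.torresdal.net.Dev.zip,encryptPassword=MyPackagePassword
    -enableLink:AppPoolExtension
    -disableLink:CertificateExtension
    -disableLink:ContentExtension
    -declareParam:name=HttpsBinding,kind=DestinationBinding,scope=blog.torresdal.net,match=.*:443:
    -declareParam:name=HttpBinding,kind=DestinationBinding,scope=blog.torresdal.net,match=.*:80:
    -declareParam:name=AppPoolUsername,kind=DeploymentObjectAttribute,scope=processModel,match=processModel/@userName
    -declareParam:name=AppPoolPassword,kind=DeploymentObjectAttribute,scope=processModel,match=processModel/@password
```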


Install package:
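Correspondingly, a sketch of the install command, assembled from the dissection later in this post (server name, credentials and binding values are placeholders):

```shell
msdeploy.exe -verb:sync
    -source:package=E:\Packages\blog.torresdal.net.Dev.zip,encryptPassword=MyPackagePassword
    -dest:manifest=DeployManifest.xml,computerName=WebServer1,userName=MyDomain\DeployUser,password=MyDeployPassword
    -setParam:name=HttpsBinding,value=10.0.0.10:443:
    -setParam:name=HttpBinding,value=*:80:blog.torresdal.net
    -setParam:name=AppPoolUsername,value=MyDomain\AppPoolServiceAccount
    -setParam:name=AppPoolPassword,value=MySecureAppPoolPassword
```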


Note that the above commands are one-liners. I only structured them the way I did to make them readable.


Web Deploy was built for the purpose of deploying web applications, so at first it looked like the perfect solution. However, I soon realized I had to study it in great detail to make it do what I wanted. I’m talking days, not hours. Time I was not eager to spend to replace something already working, or just for the sake of this blog post (sorry guys :-)). The reason for not giving up on it entirely was that Web Deploy had the potential to simplify/replace my scripts, have fewer moving parts, and make the total solution easier to maintain.

Around the same time as I published Part 1, I tweeted my concern about Web Deploy’s complexity and the @wdeploy team contacted me. I sent them a long email describing my concerns and they promised to do “creative things in the future” to make it less complex. I’ve been in contact several times after that and they are very responsive and eager to get feedback on the product. I’m personally looking forward to seeing (and maybe helping influence) how this product will evolve.

Web Deploy Command Options

Before diving into the inner workings of these commands, we need to know a bit more about the tool. Web Deploy has a lot of functionality and is extremely powerful. Here’s the command line syntax:

	msdeploy.exe -verb:<verbName>
	             -source:<provider>[=<path>[,<providerSetting>...]]
	             [-dest:<provider>[=<path>[,<providerSetting>...]]]
	             [-<MSDeployOperationSetting> ...]

Doesn’t look too scary right? There is more to it though…


The following verbs exist:

  • delete
  • dump
  • getDependencies
  • getParameters
  • getSystemInfo
  • sync


Let’s look at which providers it has. The built-in list is long; it includes among others appHostConfig, contentPath, dirPath, filePath, cert, gacAssembly, regKey, comObject and dbFullSql.

These are of course only the built-in providers; you can also create your own or use 3rd party ones if you like. A quick scan through the list shows that it does a lot more than just web stuff: COM objects, registry settings, certificates, the GAC, databases... the lot.

Provider Settings

The providers again have a set of common provider settings:

  • authType
  • computerName
  • encryptPassword
  • getCredentials
  • ignoreErrors
  • includeAcls
  • password
  • storeCredentials
  • tempAgent
  • userName
  • wmsvc


From the documentation:

Web Deploy operation settings are non-provider-specific command-line flags. They modify a Web Deploy operation as a whole.

  • allowUntrusted
  • declareParam
  • declareParamFile
  • dest
  • disableLink
  • disableRule
  • disableSkipDirective
  • enableLink
  • enableRule
  • enableSkipDirective
  • postSync
  • preSync
  • removeParam
  • replace
  • retryAttempts
  • retryInterval
  • setParam
  • setParamFile
  • showSecure
  • skip
  • source
  • useCheckSum
  • verb
  • verbose
  • whatif
  • xml
  • xpath

Manifest Provider

We now have an overview of the command line syntax of the tool, but there are a few other important aspects. The first one is the manifest provider. Most likely you’ll want to use more than one provider in your command, and that’s exactly what this provider enables. Here’s an example from the documentation:

   <sitemanifest>
      <appHostConfig path="mySite" />
      <gacAssembly path="System.Web, Version=, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
      <comObject path="Microsoft.ApplicationHost.AdminManager" />
      <contentPath path="c:\source" />
      <regKey path="HKLM\Software\ODBC" />
   </sitemanifest>

I actually would have preferred having these on the command line instead. When I trigger my commands from a build, I’m inserting variables into the command anyway, and I would be better off with the ONE command and not have to maintain the xml files as well. Actually, I would like to have both options. In the command I would suggest something like this (which would adhere to the existing conventions):


Link Extensions

Another concept is link extensions, which can be enabled/disabled using the enableLink and disableLink operation settings. One example drawn from the documentation:

…if you specify -disableLink:ContentExtension on the command line, you can prevent content from being included in a sync operation. This enables you to synchronize two Web servers without moving any content.
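Under that scenario, and assuming a second server named Server2, such a content-less sync might look like this (a sketch using the webServer provider):

```shell
msdeploy.exe -verb:sync -source:webServer -dest:webServer,computerName=Server2 -disableLink:ContentExtension
```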

Web Deploy Rules

Web Deploy rules exist to disable or enable built-in or custom rules (using e.g. the enableRule and disableRule operation settings) for the sync verb. This is, however (naturally), only true if the isDefault attribute is set to true in the rule definitions. See here to find the built-in rules. By creating the file Msdeploy.exe.configsettings in the %program files%\IIS\Microsoft Web Deploy folder, you can add custom rules.

Packages and Archives

From the docs:

The Web Deploy package and archiveDir features let you create a snapshot backup of your Web site or Web server into a .zip file or archive directory. In addition, the parameterization and manifest features let you customize the archive or package that you create. You can then use your package files to deploy Web sites and Web servers to other computers or computer locations.

What’s wrong?

That’s a lot of stuff to consume! In order to effectively use this tool for deployment, you have to learn most of these verbs, providers and operations, figure out how they work and how you can take advantage of them.

This is (in my opinion) what Greg Young has been talking about and expressed in the NDC Magazine article Failure (well worth the read), where the intent is lost in the system and the user is forced to “reverse engineer” it to make sense of it all.

I’m fully aware of the endless amount of user scenarios that exist, and I don’t expect the Web Deploy team to cover them all, but covering the most common ones would be nice. The one I’m showing in this post, for instance, would I think be quite common for auto deployment scenarios. The advanced configuration could still be available for the not-so-common scenarios. Instead I’m “forced” to spend a lot of time learning the inner workings of a complicated tool, which in the end does create a lot of value. But how much more valuable would it be if I didn’t have to invest so much time in learning the tool?

I’m also aware of the new TFS 2010 integration that exists, which simplifies many of these tasks, but that solution lacks documentation and the advanced options are hard to get at.

Setting up Web Deploy for remote access

To be able to access a server remotely, Web Deploy must be given access to the server. Several options are available, but I’ll focus on the Web Deployment Agent Service, which requires administrator privileges to use.

The Agent Service requires Web Deploy to be installed on the target server. You can find the detailed installation instructions here: http://technet.microsoft.com/en-us/library/dd569030(WS.10).aspx

Basic Usage

Let’s look at some vanilla examples for deploying web stuff remotely:








This copies your local C:\data directory to the server Server2, providing a username and password to get access to the remote server. This is done by using the sync verb and the contentPath provider. So we can push files, which is nice because we don’t have to use FTP, BITS or similar.
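Spelled out, that basic push would look roughly like this (Server2 and the credentials are placeholders):

```shell
msdeploy.exe -verb:sync
    -source:contentPath=C:\data
    -dest:contentPath=C:\data,computerName=Server2,userName=MyDomain\MyUser,password=MyPassword
```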

Deployment Requirements 

We want to do a lot more than just copy files. Here’s a list extracted from my previous post of what I want to deploy to remote servers:

  • File content
  • Virtual Directories and Applications settings
  • AppPools settings
  • Certificates
  • Bindings
  • Log location settings
  • Virtual Directory file location settings

Packaging structure

My commands can be used to produce one package per web site multiplied by the number of environments you have (e.g. Dev, Test and Prod). So if you have 4 web sites and 3 environments, you create 12 packages.

To better understand why this is, here’s a list of the settings that are specific for each environment and common across all servers in one specific environment:

  • File content
  • SSL Certificate password
  • Certificate hash/thumbprint
  • Web site log directory location
  • Virtual directory root locations

For server specific differences we create the package so that it accepts parameters for:

  • User and password for the service account under which the worker process of the Application Pools runs
  • The https binding (if used)
  • The http binding (if used)

If we did not define the last three parameters, we would have to multiply the number of packages by the number of servers in the different environments as well. 4 web sites, 3 environments and 2 servers in each environment = 24 packages. You don’t want to go there…

Another option would be to define all possible changes as parameters, leaving you with only one package. I tried that. I could not figure out how to accomplish all of these using params, so I analyzed what I needed and used a little of both. It works, but could be better.

As an example of the above, a package for the web site blog.torresdal.net could be created for e.g. Dev with these settings:

  • File content = E:\MyBuildOutput\blog.torresdal.net
  • Certificate password = MySecureSSLPassword
  • Certificate hash=08bf3e051bd10cd8d89ac1a3ac431887886ed343
  • Web site log dir = E:\Logs\blog.torresdal.net
  • Virtual Directory locations = E:\Web

…and used these parameters when deploying:

  • Username for AppPool = MyDomain\AppPoolServiceAccount
  • Password for AppPool = MySecureAppPoolPassword
  • Https binding =
  • Http binding =

Notes on Bindings and Application Pools

Note on bindings: You have to use either http or https, or both. If you want to use a host name, you could specify: *:80:blog.torresdal.net

Note that host names are not supported on SSL bindings in IIS 7, even though it’s technically possible.

Note on the service account for the application pool: On Windows Server 2008, IIS supports Application Pool Identities. These are Windows virtual accounts that are assigned to the application pool, effectively isolating it from other services. If you want to use these, you don’t have to declare the parameters for the app pool identities, as long as your source is using App Pool Identities.

Dissecting The Create Package Command

I believe you now have an idea of what the commands above do, but let’s pull them apart and describe each piece by itself.

The package manifest file

First we’re going to look at the manifest file which contains all source providers used to create the package:

   <sitemanifest>
      <appHostConfig path="blog.torresdal.net" />
      <cert path="my\08bf3e051bd10cd8d89ac1a3ac431887886ed343" />
      <dirPath path="E:\MyBuildDrop\LatestVersion\Dev\blog.torresdal.net" />
   </sitemanifest>

This is the way that Web Deploy allows you to use more than one provider. Here’s a short description of each provider and what it does:

  • appHostConfig – Gets all IIS-specific settings from a web site.
  • cert – Gets the certificate with the given thumbprint/hash.
  • dirPath – Gets the content from the given path.

The create package command

Now let’s look at each section of the command:

  • -verb:sync – Tells Web Deploy to do a sync.
  • -source:manifest=PackageManifest.xml – Points to the manifest file shown above, containing all the providers to use as source.
  • -dest:package – Uses the package provider as destination for creating a package (zip) containing all content and information needed to deploy to a server. Since we’re using the cert provider (in the manifest), we need to provide the password to get access to the certificate.
  • -enableLink:AppPoolExtension – I want the application pool to be synced as well.
  • -disableLink:CertificateExtension – The appHostConfig provider includes certificate and content by default. I want to control which certificate and what content to include, so I use -disableLink to disable these extensions. That is why I’ve added the cert and dirPath providers in the manifest, so that I can be explicit about these.
  • -disableLink:ContentExtension – See previous.



The appHostConfig provider and the cert provider are unaware of each other’s actions, so the appHostConfig provider outputs the thumbprint of the certificate found in IIS. I need to replace the httpCert hash property with the same hash used in the manifest file, or else the web site would be bound to the wrong certificate.



I want the IIS log directory to point to a different path than the one on the IIS instance I’m exporting from.




The web site I’m exporting from resides in C:\WebDeployMasterWebSites\{webSiteName}, and all Virtual Directories are located below this path. I want to control the root path, so I replace it.

Note: This would be a natural candidate for a parameter, but with params you can only replace the complete value, not just part of it.
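As a sketch, the three replacements above can be expressed with the -replace operation setting along these lines. The object and attribute names here are illustrative; verify them against a dump of your own appHostConfig before using them:

```shell
-replace:objectName=httpCert,targetAttributeName=hash,replace=08bf3e051bd10cd8d89ac1a3ac431887886ed343
-replace:objectName=site,scopeAttributeName=name,scopeAttributeValue=blog.torresdal.net,targetAttributeName=logFile.directory,replace=E:\Logs\blog.torresdal.net
-replace:objectName=virtualDirectory,targetAttributeName=physicalPath,match=C:\\WebDeployMasterWebSites,replace=E:\Web
```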

Here’s a description of the parameter declarations allowing us to pass in params when deploying the package (these are also part of the same command):





  • HttpsBinding – I want to control the SSL binding for the web site.
  • HttpBinding – Same as above, only for HTTP.
  • AppPoolUsername – I want to set the user name of the account under which the worker process of the Application Pools runs.
  • AppPoolPassword – Same as above, only for the password.
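Expressed as command-line settings, mirroring the parameter file shown next, the four declarations might look like this (treat the exact syntax as a sketch and check it against the msdeploy docs):

```shell
-declareParam:name=HttpsBinding,kind=DestinationBinding,scope=sikker.frende.no,match=.*:443:,description="Web Site Binding for SSL"
-declareParam:name=HttpBinding,kind=DestinationBinding,scope=sikker.frende.no,match=.*:80:,description="Web Site Binding for http"
-declareParam:name=AppPoolUsername,kind=DeploymentObjectAttribute,scope=processModel,match=processModel/@userName
-declareParam:name=AppPoolPassword,kind=DeploymentObjectAttribute,scope=processModel,match=processModel/@password
```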

To shorten the above command a tiny bit you can define the parameters in an xml file instead, like this:

   <parameters>
      <parameter name="HttpsBinding" description="Web Site Binding for SSL">
         <parameterEntry kind="DestinationBinding" scope="sikker.frende.no" match=".*:443:" />
      </parameter>
      <parameter name="HttpBinding" description="Web Site Binding for http">
         <parameterEntry kind="DestinationBinding" scope="sikker.frende.no" match=".*:80:" />
      </parameter>
      <parameter name="AppPoolUsername" description="Account username for this application pool">
         <parameterEntry kind="DeploymentObjectAttribute" scope="processModel" match="processModel/@userName" />
      </parameter>
      <parameter name="AppPoolPassword" description="Account password for this application pool">
         <parameterEntry kind="DeploymentObjectAttribute" scope="processModel" match="processModel/@password" />
      </parameter>
   </parameters>

You will then replace the -declareParam operation settings above with this:
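Presumably that replacement is the -declareParamFile operation setting, pointing at the xml file; something like (check whether your msdeploy version expects = or : here):

```shell
-declareParamFile=Parameters.xml
```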


However, if you’re going to execute this command from TFS or some other build tool, you’re better off leaving them inside the command. Fewer moving parts.

Dissecting The Deploy Package Command

We also need a manifest file when installing the package. This is ALMOST identical to the manifest we used when creating the package:

   <sitemanifest>
      <appHostConfig path="blog.torresdal.net" />
      <cert path="my\08bf3e051bd10cd8d89ac1a3ac431887886ed343" />
      <dirPath path="E:\Web" />
   </sitemanifest>

The only difference between the two is in the dirPath provider. For the package we used files from E:\MyBuildDrop\LatestVersion\Dev\blog.torresdal.net, but when deploying we PUT files to E:\Web.

The deployment package command

Each section in detail:

  • -verb:sync – Tells Web Deploy to do a sync.
  • -source:package – Uses the earlier created package as source. Since this package contains a certificate, you need to provide the certificate password in order to get access to it.
  • -dest:manifest – Points to the manifest file to use for deployment, and to the server we are deploying to, with the username/password.
  • HttpsBinding parameter – Sets the SSL binding for the web site, using IP on port 443 (SSL).
  • HttpBinding parameter – Same as above, only for HTTP.
  • AppPoolUsername parameter – Sets the user name for the account under which the worker process of the Application Pools runs.
  • AppPoolPassword parameter – Same as above, only for the password.

The Build/Deploy Process Using Web Deploy

Here’s the exact same deployment process overview as in my previous post, only now with Web Deploy. As a result the process got two steps shorter :-)


What’s Next?

In my next post I’ll either show you how to control the load balancer during deployment or integration with TFS. Both will be covered in the end.

No-Click Web Deployment – Part 1

Update: Part 2 is now available, covering Web Deploy (a.k.a. msdeploy)

Getting code deployed should be as easy as opening up a web site (only taking slightly longer :-)). Devs or IT people should not spend manual labor (with the possibility of mishaps) on getting files from one place to another, making changes to IIS (or whatever you’re using), restarting servers, copying/changing web.config files etc. That’s a job for scripts and automation tools. Not to mention the cost savings of not needing IT people to do deployment. I bet you there’s not one manual step in the deployment process that cannot be automated. That said, you have to consider how many times you deploy per day/week/month or year before going for full automation. However, if you’re serious about being Agile/Lean, you can’t do without an auto deployment scheme.

In the coming blog posts I’ll walk you through the steps I went through to automate our deployment, and hopefully you’ll find it interesting and even suggest improvements.

Below is an overview of the environments we’re deploying to:


One Load Balancer (LB) in each environment, two web servers in Dev and Test, and 3 in Prod. The actual numbers might or might not be true ;-), but that doesn’t really matter. In addition there are SQL Servers, but I will not cover those here.

Why have a LB in Dev? Reason number one is to catch any possible LB issues in Dev before going to Test and Prod, and have the exact same environment in Dev as in Prod. It’s also useful to try out new stuff, like having the LB do caching etc.

Deployment Frequency
For Dev we auto deploy every night (as part of the nightly build) and at will during the day. For Test, 2-4 times per week, and to Prod 1-2 times every 2nd week. That was yesterday! :-) Today we can do it when the sun comes out of the clouds (not often in Bergen), every time I refill my coffee cup, or whenever we feel like it. The point being: we are no longer constrained by how often we can deploy.

Why All These Environments?
You can read about that here, but for us:

  • Dev is where we try out things without physically hurting users, but still being in a real server environment avoiding the “works on my machine” issue.
  • Test is as close to Prod as we can get (at external hosting provider, different network, firewalls etc) and where we make sure things run smoothly before going to Prod.
  • Prod is Prod

Tech Details
Here’s the tech stuff we use which might be relevant:

  • All servers are running Windows Server 2008 R2
  • Web servers are running on IIS 7.5 (since we’re on R2)
  • Application Request Routing in IIS is used as Load Balancer and runs on 2008 R2 Server Core (if you like, check out my previous post about setting up and configuring ARR)
  • TFS 2010 for builds
  • TeamCity for CI

Also note that we have access to the actual subnet where Test and Prod live. This does not, however, mean we have access to all servers and features in all environments; it just means we can be given access to certain things not recommended through external firewalls, like PowerShell Remoting. This is where your environments might differ from ours.

Some General Advice

Consider Using a LB Even If You Don’t Need One For Performance Reasons
Load Balancers are useful for more than load balancing. The biggest benefit (apart from its core task) is that you can do upgrades and maintenance on servers without taking the whole site offline, by always leaving at least one server online.

Consider Turning Off IIS Recycling
Did you know that IIS automatically recycles your applications every 1740 minutes (29 hours), effectively restarting them? Are your web sites free from memory leaks, or do you want to know if you have memory leaks? Why not turn off recycling? This is too big a topic to cover here, but go Google: IIS7 recycle.
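As a starting point (MyAppPool is a placeholder; verify the property path against your IIS version), the periodic restart can be zeroed out with appcmd:

```shell
%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" -recycling.periodicRestart.time:00:00:00
```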

Consider Using Windows Server 2008 R2 Server CORE 
This should get you slightly better performance, but for me it’s more about scripting. Most of the things that need to be done on Server Core must be performed from the command line, forcing you to create scripts.

Why Is Deployment Difficult?
First of all because every environment is different and there are no really good tools to automate the whole process. The challenge is to find the right tools to solve the problems your organization is facing, and have the tools work for you to get to the final goal.

What Are The Challenges?
For us it was about:

  1. How can we safely move files from a build server to Dev, Test and Prod?
  2. How can we automate the process of taking a node out of an LB cluster?
  3. How can we safely execute an upgrade on a server in Dev, Test or Prod and get feedback of progress, errors, and abort and roll back on failure?
  4. How can we remotely make changes to IIS?
  5. How can we avoid all manual tasks? (like adding a virtual directory in IIS or copy a web.config file)

Safety and automation are two keywords that stick out. Safe means no one other than the intended persons or services should be able to perform the specified tasks. Automation means no manual operations should ever be needed in Dev, Test or Prod, apart from IT maintenance like hardware upgrades, Windows Update etc.

What tool options do we have?

Copy files:

  • Secure FTP in IIS 7 on a non public/available IP
  • or PowerShell with BITS
  • or WebDeploy

Taking LB nodes offline/online:

  • Use PowerShell Remoting to execute PowerShell scripts on ARR server
  • or the Web Farm Framework

Safely execute an upgrade:

  • Use PowerShell Remoting to execute PowerShell scripts on web servers
  • or WebDeploy

Avoid manual tasks:

  • Script all tasks, so they can be repeated

The Build/Deploy Process     

What About MSI’s?
If you read my blog you know I’ve done quite a bit of MS Installer stuff, and WiX in particular. MSIs are perfect for deploying to multiple places where you have no control. The drawback is that most developers don’t know how to customize MSIs, and they often end up with a versioning problem and lots of old stuff left behind on the server after upgrades. If you have people skilled in Windows Installer, please feel free to use MSI, but I personally find XCopy to be very easy, and it is what I recommend if you’re not an ISV. With MSIs you still have to install them remotely, which could be done with WMI or PowerShell.

Notes on WebDeploy
I’m currently looking at using Web Deploy to simplify/reduce the amount of scripts needed. Web Deploy would replace the FTP and deploy steps, but my first impression is that it’s too generic, making it really hard to do simple things without spending quite a bit of time learning the tool, its underlying package schema and the IIS schemas. Hopefully one day Web Deploy will be the only tool I’ll need to execute the whole deployment process.

What’s Coming?
In future blog posts I’ll walk you through step-by-step how to accomplish the above solution. While I’m writing this I’m not 100% sure if it will be a solution using PowerShell (which I have in production) or a slightly modified version using Web Deploy. It all depends on which one is easiest and which has the potential of being maintained by other people than me in the long run.

Hopefully this will give you the input you need to fully automate your deployment process as well.

Udi Dahan on NServiceBus Now Available for Download

Many have asked for a downloadable version of Udi’s presentation, so here it is :-)

If you’d rather stream it, you can still do that from here: http://blog.torresdal.net/2010/06/08/NNUGPresentationUdiDahanOnNServiceBus.aspx