Entity Framework 6 Change Tracking POCOs vs nHibernate

Let me warn you that this post is kind of a rant. Entity Framework is on version 6, and Code First with POCOs still can’t track changes on detached object graphs that have been sent over the wire. The scenario is quite simple:

  1. A client requests an entity from the server, e.g. an asp.net mvc controller renders a view.
  2. The client modifies the entity and posts it back, i.e. form post.
  3. The MVC model binder deserializes the form data to our domain model.
  4. The MVC controller should be able to update the database without first fetching the data and applying the update per property.

Well this isn’t entirely fair since we can actually do this for first level properties, but not for navigation properties, e.g. has-many relationships. NHibernate has been able to do this for ages.

A colleague working on the web team asked me for help regarding this matter, and I knew that this was the case in EF4/5, but I figured they’d have implemented it by now. So I threw together a test, just to confirm that it was still as bad as I remembered it.

Let’s say our db context looks like this:

public class Db : DbContext
{
    public Db() : base("Db")
    {
        Configuration.LazyLoadingEnabled = true;
    }

    public DbSet<Foo> Foos { get; set; }
    public DbSet<Bar> Bars { get; set; }
}

With the following entities:

public class Foo
{
    public int Id { get; set; }
    public string Value { get; set; }
    public virtual ICollection<Bar> Bars { get; set; }
}

public class Bar
{
    public int Id { get; set; }
    public string Value { get; set; }

    public int FooId { get; set; }
    public virtual Foo Foo { get; set; }
}

Our db initializer just adds one Foo record with a single child Bar record, both with Value set to "Value":

public class DbInitializer : DropCreateDatabaseAlways<Db>
{
    protected override void Seed(Db context)
    {
        context.Foos.Add(new Foo
        {
            Value = "Value",
            Bars = new List<Bar> { new Bar { Value = "Value" } }
        });
    }
}

Our test first reads the entity and disposes the context, i.e. the graph is now detached. We make some changes to the detached graph and try to apply the changes by passing the object to SaveOrUpdate with a new db context, i.e. simulating a post from the client to our server:

public class Test
{
    public void TestChangeTracking()
    {
        System.Data.Entity.Database.SetInitializer(new DbInitializer());
        Foo foo;

        // Read and send to the client over the wire
        using (var db = new Db())
        {
            foo = db.Foos.First();
            Assert.AreEqual(1, foo.Bars.Count);
        }

        // Client changes some values
        foo.Value = "Changed";
        foo.Bars.First().Value = "Changed";

        // Post to server for an update: attach the detached graph and mark the root as modified
        using (var db = new Db())
        {
            db.Entry(foo).State = EntityState.Modified;
            db.SaveChanges();
        }

        // What got saved?
        using (var db = new Db())
        {
            foo = db.Foos.First();
            Console.WriteLine("Foo.Value: {0}", foo.Value);
            Console.WriteLine("Foo.Bars[0].Value: {0}", foo.Bars.First().Value);
        }

        Assert.Fail("Use nhibernate instead.");
    }
}

What got saved? Foo.Value got updated to Changed but the Bars didn’t. Pretty lame if you ask me. If you insist on using EF instead of NHibernate you’ll need to fetch the record in the new db context, diff it against the detached graph, apply the changes and save.
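A minimal sketch of that workaround, assuming the entities above (here incoming is the detached Foo posted from the client), could look something like this:

// Hedged sketch of the workaround: re-fetch the graph and copy the posted values onto it.
using (var db = new Db())
{
    var existing = db.Foos.Include("Bars").First(f => f.Id == incoming.Id);

    // Copy scalar properties on the root
    db.Entry(existing).CurrentValues.SetValues(incoming);

    // Copy scalar properties on each child that still exists
    foreach (var incomingBar in incoming.Bars)
    {
        var existingBar = existing.Bars.FirstOrDefault(b => b.Id == incomingBar.Id);
        if (existingBar != null)
            db.Entry(existingBar).CurrentValues.SetValues(incomingBar);
        // Added and removed children would need to be handled here as well
    }

    db.SaveChanges();
}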


Hope they do something about this soon. Until next time, have a great weekend!

Asynchronous proxy for a synchronous WCF service

I recently had this scenario while working for a client: we wanted to consume a synchronous service asynchronously. We couldn’t change the service contract since that would break other proxy implementations. Luckily, conventions implemented by the guys at microsoft made this task surprisingly easy. This is how we solved it.

Let’s say our service contract looks like this:

public interface IService1
{
    string GetData(int value);
}

With the following service implementation:

public class Service1 : IService1
{
    public string GetData(int value)
    {
        return string.Format("You entered: {0}", value);
    }
}

A synchronous proxy would look something like this:

class ServiceClient : IService1
{
    private readonly IService1 _channel;

    public ServiceClient()
    {
        var factory = new ChannelFactory<IService1>(binding, address);
        _channel = factory.CreateChannel();
    }

    public string GetData(int value)
    {
        return _channel.GetData(value);
    }
}
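The binding and endpoint address aren’t shown in the post; for a simple self-hosted test they could be set up with something like this (the address is just a placeholder):

// Placeholder endpoint configuration used by the proxy snippets in this post.
var binding = new BasicHttpBinding();
var address = new EndpointAddress("http://localhost:8733/Service1.svc");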

And consuming it synchronously is trivial:

var client = new ServiceClient();
var data = client.GetData(10);

So how do we consume this asynchronously? We simply define a parallel async service contract with the same ServiceContract name, like this:

[ServiceContract(Name = "IService1")]
interface IService1Async
{
    Task<string> GetDataAsync(int value);
}

And an asynchronous proxy would look something like this:

class ServiceClientAsync : IService1Async
{
    private readonly IService1Async _channel;

    public ServiceClientAsync()
    {
        var factory = new ChannelFactory<IService1Async>(binding, address);
        _channel = factory.CreateChannel();
    }

    public Task<string> GetDataAsync(int value)
    {
        return _channel.GetDataAsync(value);
    }
}

And consuming it asynchronously becomes trivial:

var client = new ServiceClientAsync();
var data = await client.GetDataAsync(10);

To summarize: as long as the Name on the ServiceContract matches, WCF will dispatch the call to the operation whose method name matches, including when you’ve appended Async to the method name. Pretty smooth imho, and all without the need to change the “legacy service” contract.



Office 365 OneDrive Offline Cache Size Problem

Office 365 offers 1TB of OneDrive storage to subscribers for a fairly small fee, and according to the rumors it will soon be unlimited. Excellent for backing up your family photos and videos, right? No!

So what’s the problem?

Even if we overlook the fact that the business version of OneDrive actually is a SharePoint site in disguise, there are massive issues with uploading a couple of hundred gigs of content. That goes for the home version of OneDrive as well. My setup is pretty common: I run windows on a 512GB SSD and have a second 2TB disk for storing photos & videos. I installed the application and started uploading ~300GB of photos and videos.

After a couple of hours the sync had failed miserably and the computer was barely usable due to the lack of space on the system disk. Turns out that Office 365 thinks it’s a good idea to create an offline cache entry in the user’s AppData folder (located on the system drive) for every file.

The accepted solution according to the community is to delete the files manually or to set “Days to keep files in the Office Document Cache” to 1. But I couldn’t even complete the initial sync!?

My solution

As the files were uploaded we needed to make room on the system drive for new uploads. The cache is located under C:\Users\<user>\AppData\Local\Microsoft\Office\15.0\OfficeFileCache. I threw together a simple console program in 10 minutes that deletes cache files older than 5 minutes. I’ve made the source code and a compiled dist version available on github here. It could be a good idea to run it as a service, but I’ll leave that for a later exercise. It’s very straightforward and not complicated at all; more importantly, it does the job.
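The actual source is on github, but the core idea is roughly this (a hedged sketch, not the real implementation; the polling interval is an assumption):

using System;
using System.IO;
using System.Threading;

// Hedged sketch of the idea: periodically delete OfficeFileCache files
// that haven't been written to for 5 minutes.
class CacheCleaner
{
    static void Main()
    {
        var cacheDir = Environment.ExpandEnvironmentVariables(
            @"%LOCALAPPDATA%\Microsoft\Office\15.0\OfficeFileCache");

        while (true)
        {
            foreach (var file in Directory.EnumerateFiles(cacheDir))
            {
                try
                {
                    if (DateTime.UtcNow - File.GetLastWriteTimeUtc(file) > TimeSpan.FromMinutes(5))
                        File.Delete(file);
                }
                catch (IOException)
                {
                    // Probably locked by the sync client, skip it this round.
                }
            }

            Thread.Sleep(TimeSpan.FromMinutes(1));
        }
    }
}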


I changed the implementation so it runs as a windows service and it’s installed with an MSI.

Hope it helps.

By the way…

I switched to google drive instead since the business version of onedrive just lists the files in a plain list, à la sharepoint. No thumbnails, no viewing images or playing videos in the browser; you can’t even download an entire folder as a zip file. Pretty useless, just like sharepoint.


socialtime.se lab days project

Lab days

I’ve been super busy at work but I’ve finally gotten around to some fun lab days coding.

The objective

I’ve often wondered how much time we spend on social media apps on our phones. The idea came pretty naturally – create an app that accurately measures the time for us.

Coding sessions

The 1.0 version of socialtime.se was coded in 4 sessions and this is how the time was spent. The sessions were 1 or 2 days apart since I rarely have the time to sit and code for 8h straight nowadays, unless it’s for a paying customer of course.

  1. 2h developing the android app for monitoring running processes.
  2. 1h studying the android facebook SDK and implementing auth from the app.
  3. 4h developing REST backend with facebook auth and a super simple frontend.
  4. 1h publishing the app to play store and minor refactorings.
  5. 1.5h writing this blog post, 45 minutes on creating the graphics 😉

Android App

The app is really basic: it has a background service which monitors the running processes on a separate thread, and just one activity to display the social time.


My first approach was to read the log and filter for ActivityManager, since that seemed to work with adb. But when running logcat from within the app I didn’t get the same information, which I guess is a good thing from a security standpoint.

REST API and Authentication

Since we initially only measure facebook time it’s safe to assume that users can use facebook to authenticate themselves. Another upside is that we can retrieve their identity by requesting their public information, meaning they won’t need to create a local account just to provide a username.

This is where it became interesting: we’ve built a backend using asp.net webapi which only allows authorized calls if the user is authenticated via facebook. But the user is authenticated with facebook from within the app, and we can’t use the access token issued for the app to communicate securely with our backend. So this is my solution.

(Diagram: android/facebook/webapi authentication flow)

In a sentence – we validate the facebook access token that is passed to us and issue a new custom token of our own, which can then be used for secure communication. Pretty neat!
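The backend code isn’t part of the post, but the token exchange could look roughly like this (a hedged sketch, not the actual socialtime.se implementation; the route, token format and the TokenStore helper are assumptions):

// Hedged sketch: exchange a facebook access token for a token of our own.
public class TokenController : ApiController
{
    private static readonly HttpClient Http = new HttpClient();

    [HttpPost]
    public async Task<HttpResponseMessage> Exchange(string facebookAccessToken)
    {
        // Validate the token by asking facebook who it belongs to.
        var response = await Http.GetAsync(
            "https://graph.facebook.com/me?fields=id,name&access_token=" + facebookAccessToken);
        if (!response.IsSuccessStatusCode)
            return Request.CreateResponse(HttpStatusCode.Unauthorized);

        var profile = JObject.Parse(await response.Content.ReadAsStringAsync());
        var facebookUserId = (string)profile["id"];

        // Issue our own token and remember which user it belongs to
        // (TokenStore is a hypothetical storage helper).
        var apiToken = Guid.NewGuid().ToString("N");
        TokenStore.Save(facebookUserId, apiToken);

        return Request.CreateResponse(HttpStatusCode.OK, apiToken);
    }
}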


I think it was about 2:55am when I finished the app and the API and everything was in place and working. My deadline was 3am and not a minute more; I needed to be at work at 9am and since I’m not in my twenties anymore I need the sleep to function properly.

I hosted the backend in an azure website, and I had bought the domain socialtime.se via surftown. Once I uploaded the page to surftown I noticed it couldn’t fetch the data. Why? You guessed it: I hadn’t enabled cross-origin resource sharing. So I quickly installed the nuget package for CORS, enabled it, decorated the controller with an EnableCors attribute, re-deployed the API and voilà, a beautiful fully working stack was in place. And all this exactly as the clock turned 3am!
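For reference, enabling CORS in webapi is roughly a two-liner (a sketch; the controller name and allowed origin are assumptions):

// In WebApiConfig.Register, after installing the Microsoft.AspNet.WebApi.Cors package:
config.EnableCors();

// On the controller, allow the domain that hosts the frontend:
[EnableCors(origins: "http://socialtime.se", headers: "*", methods: "*")]
public class SocialTimeController : ApiController
{
    // ...
}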



It isn’t pretty but hey, it worked!


The infrastructure is in place so adding functionality will go fast. My unprioritized backlog looks something like this.

  • Measure time for Twitter, Instagram and G+, separately.
  • Measure time spent per app and per day. (now it’s just milliseconds since forever)
  • Proper frontend and move it to azure.
  • Remove the public list, you’ll need to login to see only your social time. Several requests for this actually 🙂
  • Use some cool HTML5 charting lib to display your social time.

Until then, get the app and have a nice day!





Customize your TFS build process to run StyleCop

This will be a short guide on how to get StyleCop to run for every check-in as a part of your build process in TFS or Visual Studio Online. You may also integrate StyleCop with the project’s MSBuild file; see the StyleCop docs for how to do so.

Start off by downloading and installing StyleCop and TFS build extensions.

Build controller changes

First you’ll need to set the custom assemblies path for your build controller. I created a new folder Custom Assemblies in the BuildProcessTemplate folder directly under the team project root folder. Also download a copy of the default template.


Add the following assemblies to Custom Assemblies:
  • StyleCop.dll
  • StyleCop.CSharp.dll
  • StyleCop.CSharp.Rules.dll
  • TfsBuildExtensions.Activities.dll
  • TfsBuildExtensions.Activities.StyleCop.dll
Next Manage Build Controllers and set Version control path to custom assemblies to your Custom Assemblies folder from Build Controller Properties.




Custom Build Templates

Create a custom build process solution with a Workflow Activity Library project. I created a BuildProcess solution in the BuildProcessTemplates folder and named the workflow project Templates.


Next rename the DefaultTemplate.11.1.xaml template you downloaded earlier to CustomTemplate.xaml and add it to the project. Make sure you set the Build Action to Content.


Now let’s add the StyleCop activity to the Toolbox window. Add a new tab TFS Build Extensions, right-click and select Choose items. Browse to the assembly TfsBuildExtensions.Activities.StyleCop.dll and click OK.
We want to run StyleCop early in the build process so that the build fails quickly if there are any violations. The first place where StyleCop can be executed is after the Initialize Workspace sequence, within the Run on Agent sequence.


Add a new sequence activity right after Initialize Workspace and name it Run StyleCop. Add the following variables with a scope of the Run StyleCop sequence.
  • StyleCopFiles – IEnumerable<string>
  • StyleCopSettingsFile – string
  • StyleCopResults – bool
  • StyleCopViolations – Int32
Now add the following activities:
  1. FindMatchingFiles – Set the result to StyleCopFiles and the MatchPattern to String.Format("{0}\**\*.cs", BuildDirectory)
  2. Assign – Set the StyleCopSettingsFile variable to String.Format("{0}\Settings.StyleCop", SourcesDirectory)
  3. StyleCop – Set the following properties
    • SettingsFile to StyleCopSettingsFile
    • SourceFiles to StyleCopFiles.ToArray()
    • Succeeded to StyleCopResults
    • ViolationCount to StyleCopViolations
  4. WriteBuildMessage – Change the Importance level to High and format the message to something like String.Format("StyleCop completed with {0} violations", StyleCopViolations)
My final sequence activity looks like this:


Commit the solution if you haven’t done so already.

Running the Build

Now edit the build definition you want to run StyleCop for and use the custom template.


Trigger a new build and voilà, you’ll probably have an unsuccessful build.


Personally I prefer using TeamCity with NAnt for the build process and JIRA for issue tracking; TFS is way behind imho, but the choice isn’t always up to me. 😉

Entity Framework 6 Code First unreported breaking change/bug when migrating from version 5

Came across a nice bug in my golf score app on my season premiere round yesterday. I suddenly had a couple more strokes than I should have with my HCP, and although that seemed fair since it was the first round and all, it didn’t make sense. I hadn’t made any (programmatic) changes to the code since the last time I played.

It took me a while to figure out, but my strokes were based on the female slope. All the custom male and female slopes had suddenly been flipped. How come? I had moved the site from surftown to windows azure, and with that also upgraded EF from 5 to 6 and compiled the back-end for .NET 4.5 instead of 4.0.
Without diving into golf details: a golf club has one-to-many courses and a course has one-to-many tees. How many strokes a player gets from a given tee is based on his HCP and is usually calculated with a formula, unless the course has a custom slope table, in which case you need to specify explicitly how many strokes you get for a given HCP. The latter is the case on my home course, which is also the reason I created the app, since no other app handles this well.
So my entities looked something like this (yup, I’m serializing my EF POCOs directly in my WebApi service, get over it :D)
public class Tee
{
    public int Id { get; set; }

    public virtual ICollection<TeeHcp> CustomHcpAsMale { get; set; }
    public virtual ICollection<TeeHcp> CustomHcpAsFemale { get; set; }
}

public class TeeHcp
{
    [JsonIgnore, IgnoreDataMember]
    public int Id { get; set; }

    public double From { get; set; }
    public double To { get; set; }
    public int Strokes { get; set; }
}
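Just to illustrate how a custom slope table like this is used (a simplified sketch, not necessarily the app’s actual lookup code): given a player’s HCP you pick the row whose range contains it.

// Simplified illustration: look up the strokes for a HCP in the custom slope table.
public static int? GetCustomStrokes(Tee tee, double hcp, bool isMale)
{
    var table = isMale ? tee.CustomHcpAsMale : tee.CustomHcpAsFemale;
    var row = table.FirstOrDefault(x => hcp >= x.From && hcp <= x.To);
    return row != null ? row.Strokes : (int?)null;
}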
Back to the mapping: since I haven’t mapped up the inverse properties, EF won’t be able to give the foreign keys good names, so the table will look like this:


which isn’t neat but it’s fine imho; it works, or at least it did work! Now all of a sudden CustomHcpAsMale and CustomHcpAsFemale switched places when getting the data. Since EF seemed to be confused I went ahead and explicitly mapped the inverse properties, changing the code to something like this:
public class Tee
{
    public int Id { get; set; }

    public virtual ICollection<TeeHcp> CustomHcpAsMale { get; set; }
    public virtual ICollection<TeeHcp> CustomHcpAsFemale { get; set; }
}

public class TeeHcp
{
    [JsonIgnore, IgnoreDataMember]
    public int Id { get; set; }

    public double From { get; set; }
    public double To { get; set; }
    public int Strokes { get; set; }

    // Inverse navigation properties, so each collection pairs with its own foreign key
    [JsonIgnore, IgnoreDataMember]
    public Tee TeeAsMale { get; set; }
    [JsonIgnore, IgnoreDataMember]
    public Tee TeeAsFemale { get; set; }
}
which after applying the migration changed the tables to look like this:


and voilà, problem solved! Once again, upgrading libraries to the latest and greatest versions for no reason bites me in the ass. I’ll try to report this to Microsoft as a breaking change; at least it was an easy fix and I figured it out quickly!
Lessons learned – don’t be lazy like me when mapping navigation properties!
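For completeness: if you’d rather not add inverse navigation properties to the POCOs, the same relationships could presumably be pinned down with the fluent API instead. A hedged sketch (the foreign key column names are just for illustration):

// Hedged alternative: configure the two relationships explicitly in OnModelCreating
// instead of adding inverse navigation properties to TeeHcp.
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<Tee>()
        .HasMany(t => t.CustomHcpAsMale)
        .WithOptional()
        .Map(m => m.MapKey("TeeAsMale_Id"));

    modelBuilder.Entity<Tee>()
        .HasMany(t => t.CustomHcpAsFemale)
        .WithOptional()
        .Map(m => m.MapKey("TeeAsFemale_Id"));

    base.OnModelCreating(modelBuilder);
}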
Hope it helps somebody out. Peace!

Integrate blog engine into existing site

So I got BlogEngine.NET running on my personal tech blog (this site) and I want to add a simple news feed to my corporate site to announce various happenings and job openings. Wouldn’t it be great if I could just create a dedicated news blog here and then display the posts on my company page? As it happens, BlogEngine has a MetaWeblog API; this is the API you can use with Windows Live Writer to manage your posts.

The API isn’t exactly the modern RESTful service one could wish for, so I’ve put together a simple C# lib to extract posts. I’ve released it as open source and you can get it from github here. Bear in mind that it’s as minimal as I need it to be; feel free to contribute and send me a pull request. The client interface only exposes two methods for now:
public interface IMetaWeblogClient
{
    Task GetPostAsync(string postId);
    Task<IEnumerable> GetRecentPostsAsync(int numberOfPosts);
}
There are some guides on how to install the engine in a subdirectory (keeping the styling separate) or merge the engine with your existing site. I don’t really know why you would want to do that unless you want to base your entire site on the blog engine. My approach separates writing/publishing news completely from how the posts are presented on the company site, which I think many others are interested in as well. So here’s the lib in action from the test client, which is also available on github:
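The test client itself isn’t reproduced here, but usage is along these lines (a sketch; the concrete client type name, its constructor arguments and the endpoint URL are assumptions; check the repo for the real API):

// Hypothetical usage sketch; the type name and constructor parameters are assumed.
var client = new MetaWeblogClient("http://blog.example.com/metaweblog.axd", "username", "password");

var posts = await client.GetRecentPostsAsync(5);
foreach (var post in posts)
{
    // Render the post title/content on the company site here.
    Console.WriteLine(post);
}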


works like a charm. Now I just need to integrate it into my company site and style it.
Hope somebody else finds it useful as well!

Become your own Root Certificate Authority and Create Self-Signed SSL Certificates

Why do we want to do this? Sometimes having to pay for commercial SSL certificates isn’t an option. By creating your own Root Certificate you can sign your own certificates to allow you to quickly and cheaply secure internal websites or applications that use SSL.

In this post we will
  1. Setup Cygwin and OpenSSL
  2. Generate a Root Certificate
  3. Deploy our Root Certificate Authority
  4. Create a Certificate Signing Request
  5. Generate a signed SSL certificate
  6. Deploy the SSL certificate to IIS
We’ll also test it from an android device by
  1. Deploying the CA certificate to the trust store
  2. Browsing our web server securely with no warnings
  3. Securely downloading resources from an app
Let’s get started!

Setting up the environment


  • Android SDK with an AVD running 4.x or a real device
  • JDK 1.6.x or higher
  • IntelliJ or your favorite editor
  • IIS 7.x or higher

Install Cygwin and OpenSSL

Download Cygwin and run the installer; make sure to check the openssl packages.
Open C:\cygwin\usr\ssl\openssl.cnf and find the section beginning with [ CA_default ], edit it so it looks like this:
[ CA_default ]
dir             = /etc/ssl		# Where everything is kept
certs           = $dir/certs		# Where the issued certs are kept
crl_dir	        = $dir/crl		# Where the issued crl are kept
database        = $dir/CA/index.txt	# database index file.
#unique_subject = no			# Set to 'no' to allow creation of
					# several ctificates with same subject.
new_certs_dir   = $dir/newcerts		# default place for new certs.

certificate     = $dir/certs/cacert.pem 	# The CA certificate
serial          = $dir/CA/serial 		# The current serial number
crlnumber       = $dir/crlnumber	# the current crl number
					# must be commented out to leave a V1 CRL
crl             = $dir/crl.pem 		# The current CRL
private_key     = $dir/private/cakey.pem# The private key
RANDFILE        = $dir/private/.rand	# private random number file
Open a cygwin command shell and create the directories:
mkdir /etc/ssl/{CA,certs,crl,newcerts,private}
Create the serial file and the certificate index file:
echo "01" > /etc/ssl/CA/serial
touch /etc/ssl/CA/index.txt

Generate the Root Certificate

We can now generate the Root Certificate with the following command:
openssl req -new -x509 -extensions v3_ca -keyout cakey.pem -out cacert.pem -days 3650
You’ll be asked to provide a passphrase for the private key – this password should be complex and kept secure, as it will be needed to sign any future certificates. If someone were to get their hands on it they would be able to issue certificates in your name! You should now have two files: cakey.pem, which is your private key, and cacert.pem, which is the Root Certificate. Let’s move the certificate and the key to the correct folders.
mv cakey.pem /etc/ssl/private
mv cacert.pem /etc/ssl/certs

Trust our Root Certification Authority

Let’s add the root certificate to our trust store so we don’t get warnings from websites using an SSL certificate signed with our root certificate. The best way to do this is to deploy it through a group policy; I’ll add it manually since this is my dev machine.
  • <windows> + R (Run) -> mmc <enter>
  • Add the certificates snap-in <ctrl> + m, select Certificates and add snap-in for Computer account
  • Expand Console Root -> Certificates -> Trusted Root Certification Authorities -> Certificates
  • Right-click -> All Tasks -> Import… Select cacert.pem located in C:\cygwin\etc\ssl\certs
If you view the imported certificate it should look something like this:

Create a Self-Signed SSL Certificate

Now that we have successfully created a new Root Certificate we can use it to sign our own certificates. We need to create a Certificate Signing Request (CSR) before we can create an SSL certificate.

Generate the Certificate Signing Request

First create a Private Key that will be used during the certificate signing process:
openssl genrsa -des3 -out server.key.secure 4096
Now that we have a Private Key we can use it to generate the Certificate Signing Request; this is the file that you would normally send to a Certificate Authority to generate a certificate. The command will ask for the password for your private key as well as various details. When asked for the Common Name (CN), enter the domain name that the SSL certificate will be used for. You can enter the IP address of the server, but many hostname verifiers on various devices won’t accept this; more specifically, the DownloadManager in Android won’t accept it.
openssl req -new -key server.key.secure -out server.csr

Generate a signed SSL certificate

Now we have a CSR that we can generate a signed SSL certificate from:
openssl ca -in server.csr
Confirm the passphrase and answer yes to both signing the certificate and committing it to the database, and you should find a new file in C:\cygwin\etc\ssl\newcerts. The file will probably be called 01.pem and this is your SSL certificate.

Create a pkcs12 file to import in Personal Certificate Store

Before deploying it we need to do one more thing. If we tried to use this with the private key we created earlier, IIS would ask us to confirm our private key passphrase each time it started. To avoid this we should take our private key and create an insecure version of it; this will allow IIS to load your SSL certificate without needing the passphrase.
The downside is that anyone who gets a copy of your insecure key can use it to impersonate your SSL certificate, so it’s important to secure the folder it’s stored in.
openssl pkcs12 -export -in <pem-file-from-previous-step> -inkey server.key.secure -out cert.p12

Deploy the SSL Certificate to IIS

Open the Management Console again, but this time import the p12 file to Certificates -> Personal -> Certificates. It should look something like this:
Bind the SSL certificate to port 443 on your Default Web Site in IIS.
  • Open IIS Manager (<windows> + R -> inetmgr)
  • Select Default Web Site
  • Click on Bindings under Actions
  • Edit the https binding and set the SSL Certificate
When you’re done it should look something like this:
And if everything is done correctly you shouldn’t get any warnings when browsing your site.

Edit your hosts file to use any hostname you want on your machine

As I mentioned earlier, not all devices/browsers trust certificates issued to an IP address. Add a record to your C:\Windows\System32\drivers\etc\hosts file that resolves the hostnames you want (e.g. kingen.se, facebook.com) to your machine.
And you’re all set! Now let’s access our server via HTTPS from an android device.

Install CA Root Certificate on Android

Pre Ice Cream Sandwich (Android 4.0) there was no way to add a new CA to the trust store, so we built our own custom keystores which we loaded in from code in our applications. The process was quite cumbersome! Now we can simply push the certificate to our phone using adb and install it.
First we need to export the Root Certificate from the Management Console as a DER encoded binary since android can’t install pem-files.
  • Right-click on Console Root -> Certificates -> Trusted Root Certification Authorities -> Certificates -> <your Root Certificate>
  • Select All Tasks -> Export…, DER encoded binary, and save it somewhere.
Now let’s install it on a device or emulator, the steps are the same.
  • adb push <your.cer-file> /sdcard/.
  • Enter Settings -> Security -> Install from device storage on your Android device
  • Press Ok and Confirm your PIN
Your Trusted Credentials Store should now display your Root Certificate:
Before navigating to your machine from a browser we must make sure that our selected hostname resolves to our machine on the device. In the emulator it’s quite easy but to modify the hosts file on a real device you need root access.

Modifying the hosts file

We want the device or emulator to resolve the hostname we issued the SSL certificate for to your machine. Basically the same thing that we did for windows.
Update the /etc/hosts on the emulator
  • adb pull /etc/hosts
  • <add record to your machine with notepad>
  • adb remount
  • adb push <modified-file> /system/etc/hosts
Update /etc/hosts on an actual device (requires root access)
  • adb pull /etc/hosts
  • <add record to your machine with notepad>
  • adb shell
  • su
  • mount -o rw,remount -t yaffs2 /dev/block/mtdblock3 /system
  • chmod 0326 /system/etc/hosts
  • exit
  • adb push <modified-file> /system/etc/hosts

Verify that our server is trusted

Finally we can open up any browser we like and navigate to our server and we shouldn’t get any warnings for our self-signed SSL certificate.
Any app that fetches data from our server can now do it securely over HTTPS without needing to load a custom keystore, since the server is trusted.
Let’s verify this with a simple app that downloads an image with the DownloadManager.

Securely download an image

The code is very straightforward and needs no deeper explanation: we simply request an image from the server over HTTPS via the DownloadManager and expect it to work. Download the full source code (14.8KB) for the sample app if you like, but it ain’t much to look at.


Final words

There are quite a few steps to get this working and there aren’t many guides out there covering the full example, so I figured I’d share my experience, since sharing is caring 😉 Almost all apps today fetch some kind of data and we should consider doing it in a secure manner as often as possible. Sadly, when you browse the forums, really bad and insecure suggestions are marked as the accepted answer. It’s kind of scary when you think that developers like the one below might have coded some app or software that you’re using.

I’ve found a very easy solution for this:
request = new DownloadManager.Request(sourceUrl.replace("https://", "http://"))
Surprisingly worked for all https URLs that I tried. I’m not sure about the https security, but there is no exception and file gets downloaded properly

Anyways, I hope this helps someone out!

Using Microsoft OAuth Identity provider with IIS Express

Shouldn’t be any problem, right? Facebook apps allow you to specify localhost for the callback/redirect URL. Sweet! But microsoft doesn’t! So this is what I did.

I created a new application https://account.live.com/developers/applications/create and specified kingen.se as redirect domain.

I added a record to C:\windows\system32\drivers\etc\hosts that resolves kingen.se to my local machine.

I changed the site binding in %Documents%\IISExpress\config\applicationHost.config :

<site name="<site name>" id="24">
  <application path="/" applicationPool="Clr4IntegratedAppPool">
    <virtualDirectory path="/" physicalPath="<your-path>" />
  </application>
  <bindings>
    <binding protocol="http" bindingInformation="*:80:kingen.se" />
  </bindings>
</site>

I changed the “Default Web Site” binding to port 8080 (or whatever) for IIS.

I turned off SQL Reporting Services because the agent used port 80. Use netsh to list port usage if you run into other problems.

netsh  http show urlacl | Select-String :80
Finally, in my asp.net mvc project I changed the project properties to
  • Use Local IIS Web server
  • Check Use IIS Express, Project Url: http://localhost
  • Check override application root URL: http://kingen.se
  • Set Start URL to: http://kingen.se/
Voilà, press F5 and IIS Express will fire up on http://kingen.se with a working Microsoft OAuth Identity provider.

Migrating ASP.NET MVC to WebApi with no breaking changes

Recently I had the pleasure of upgrading our REST interface at work from ASP.NET MVC3 to WebApi, so I thought a lessons learned or “watch out for this” blog post was suitable, especially since I managed to do it without needing to bump any version number on our server, i.e. no breaking changes.

I think there are others out there that have been using the MVC framework as a pure REST interface with no front end, i.e. dropping the V in MVC, before webapi was available.


First of all, webapi is all about registering filters and message handlers and letting requests be filtered through them. A filter can be registered globally, per controller or per action, which imo already is more flexible than MVC.

public class MvcApplication : HttpApplication
{
    protected void Application_Start()
    {
        GlobalConfiguration.Configuration.MessageHandlers.Add(new OptionsHandler());
        GlobalConfiguration.Configuration.MessageHandlers.Add(new MethodOverrideHandler());
        GlobalConfiguration.Configuration.Formatters.Insert(0, new TypedXmlMediaTypeFormatter ...);
        GlobalConfiguration.Configuration.Formatters.Insert(0, new TypedJsonMediaTypeFormatter ...);
    }
}


Method override header

WebApi doesn’t accept the X-HTTP-Method-Override header by default, and in our installations we often see that the PUT, DELETE and HEAD verbs are blocked. So I wrote the following message handler, which I register in Application_Start.

public class MethodOverrideHandler : DelegatingHandler
{
    private const string Header = "X-HTTP-Method-Override";
    private readonly string[] methods = { "DELETE", "HEAD", "PUT" };

    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        if (request.Method == HttpMethod.Post && request.Headers.Contains(Header))
        {
            var method = request.Headers.GetValues(Header).FirstOrDefault();
            if (method != null && methods.Contains(method, StringComparer.InvariantCultureIgnoreCase))
                request.Method = new HttpMethod(method);
        }

        return base.SendAsync(request, cancellationToken);
    }
}


Exception handling filter

In our controllers we throw HttpResponseExceptions when a resource isn’t found or if the request is bad, for instance; this is quite neat when you want to short-circuit the request processing pipeline and return an http error status code to the user. The thing that caught me off guard is that an exception thrown from a controller action is handled by the exception filter, but an exception thrown from another filter is not. After some head scratching I found a discussion thread on codeplex where it’s explained that this is intentional, so do not throw exceptions in your filters.

We have a filter which looks at the client’s accept header to determine if our versions (server/client) are compatible. This was previously checked in our base controller, but with webapi it felt like an obvious filtering situation, and we threw Not Acceptable if we weren’t compatible. This needed to be re-written to just set the response on the action context and not call the base class’s OnActionExecuting, which imo isn’t as clean a design.

public class VersioningFilter : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        var acceptHeaderContents = ...;
        if (string.IsNullOrWhiteSpace(acceptHeaderContents))
            actionContext.Response = actionContext.Request.CreateErrorResponse(HttpStatusCode.NotAcceptable, "No accept header provided");
        else if (!IsCompatibleRequestVersion(acceptHeaderContents))
            actionContext.Response = actionContext.Request.CreateErrorResponse(HttpStatusCode.NotAcceptable, "Incompatible client version");
    }
}


Request body model binding and query params

In MVC the default model binder mapped x-www-form-urlencoded parameters to parameters on the action if the name and type matched; this is not the case with webapi. Prepare yourself to create classes and mark the parameters on your controller with the FromBody attribute, even if you only want to pass in a simple integer that is not part of the URI. Furthermore, to get hold of the query params provided in the URL you’ll need to pass the request URI to the static helper method ParseQueryString on the HttpUtility class. It’s exhausting, but it works and it still doesn’t break any existing implementation.

public HttpResponseMessage Foo([FromBody]MyModel bar)
{
    var queryParams = HttpUtility.ParseQueryString(Request.RequestUri.Query);
    string q = queryParams["q"];
    return Request.CreateResponse(HttpStatusCode.OK);
}


Posting Files

There are plenty of examples out there on how to post a file with MVC or WebApi so I’m not going to cover that. The main difference here is that the MultipartFormDataStreamProvider needs a root path on the server that specifies where to save the file. We didn’t need to do this in MVC; we could simply get the filename from the HttpPostedFileBase class. I haven’t found a way to just keep the file in memory until the controller is done. I ended up with a couple more lines of code where I create the attachments directory if it doesn’t exist, save the file, and then delete it once we’ve sent the byte data to our services.

[ActionName("Index"), HttpPost]
public async Task<HttpResponseMessage> CreateAttachment(...)
{
    if (!Request.Content.IsMimeMultipartContent())
        throw new HttpResponseException(HttpStatusCode.UnsupportedMediaType);

    string attachmentsDirectoryPath = HttpContext.Current.Server.MapPath("~/SomeDir/");
    if (!Directory.Exists(attachmentsDirectoryPath))
        Directory.CreateDirectory(attachmentsDirectoryPath);

    var provider = new MultipartFormDataStreamProvider(attachmentsDirectoryPath);
    var result = await Request.Content.ReadAsMultipartAsync(provider);
    if (result.FileData.Count < 1)
        throw new HttpResponseException(HttpStatusCode.BadRequest);

    var fileData = result.FileData.First();
    string filename = fileData.Headers.ContentDisposition.FileName;
    // Do stuff ...

    return Request.CreateResponse(HttpStatusCode.Created, ...);
}


Breaking change in serialization/deserialization JavaScriptSerializer -> Newtonsoft.Json

So WebApi ships with the Newtonsoft.Json serializer. There are probably more differences than I noticed, but date time types are serialized differently with Newtonsoft. To be sure that we didn’t break any existing implementations I implemented my own formatter, which wraps the JavaScriptSerializer, and inserted it first in my formatters configuration. It is really easy to implement custom formatters; all you need to do is inherit MediaTypeFormatter.

public class TypedJsonMediaTypeFormatter : MediaTypeFormatter
{
    private static readonly JavaScriptSerializer Serializer = new JavaScriptSerializer();

    public TypedJsonMediaTypeFormatter(MediaTypeHeaderValue mediaType)
    {
        SupportedMediaTypes.Add(mediaType);
    }

    public override Task<object> ReadFromStreamAsync(Type type, Stream readStream, System.Net.Http.HttpContent content, IFormatterLogger formatterLogger)
    {
        var task = Task<object>.Factory.StartNew(() =>
        {
            var sr = new StreamReader(readStream);
            var jreader = new JsonTextReader(sr);
            object val = Serializer.Deserialize(jreader.Value.ToString(), type);
            return val;
        });

        return task;
    }

    public override Task WriteToStreamAsync(Type type, object value, Stream writeStream, System.Net.Http.HttpContent content, System.Net.TransportContext transportContext)
    {
        var task = Task.Factory.StartNew(() =>
        {
            string json = Serializer.Serialize(value);
            byte[] buf = System.Text.Encoding.Default.GetBytes(json);
            writeStream.Write(buf, 0, buf.Length);
        });

        return task;
    }
}


MediaFormatters Content-Type header

Any real world REST interface needs a custom content type, and by default the xml and json formatters always return application/xml and application/json respectively. This is not good enough. I suggest that you create custom implementations of JsonMediaTypeFormatter and XmlMediaTypeFormatter and insert them first in your formatters configuration. In your custom formatter, just add your media type, including the vendor and version, to the SupportedMediaTypes collection. In our case we also append the server minor version to the content type as a parameter; the easiest way to do that is to override the SetDefaultContentHeaders method and append whichever parameter you want to the Content-Type header.

public class TypedXmlMediaTypeFormatter : XmlMediaTypeFormatter
{
    private readonly int minorApiVersion;

    public TypedXmlMediaTypeFormatter(MediaTypeHeaderValue mediaType, int minorApiVersion)
    {
        this.minorApiVersion = minorApiVersion;
        SupportedMediaTypes.Add(mediaType);
    }

    public override void SetDefaultContentHeaders(Type type, HttpContentHeaders headers, MediaTypeHeaderValue mediaType)
    {
        base.SetDefaultContentHeaders(type, headers, mediaType);
        headers.ContentType.Parameters.Add(new NameValueHeaderValue("minor", minorApiVersion.ToString(CultureInfo.InvariantCulture)));
    }
}

I think that covers it all, good luck migrating your REST api!

I might cover upgrading from WIF 3.5 (Microsoft.IdentityModel) to WIF 4.5 in my next post, or moving from thinktecture’s StarterSTS to IdentityServer v2. Take a wild guess what I’ve been busy with at work! 😉