Aurelia VS AngularJS 2.0

Angular

I spent a couple of hours familiarizing myself with Angular 2 for a Pluralsight audition, which resulted in the following screencast.

Aurelia

I then got an interesting comment that Aurelia is the client-side app framework to use. I try to stay agnostic about the frameworks, libs and tools that I use, so I spent a couple of hours familiarizing myself with Aurelia as well, which resulted in the following screencast where I port the Angular app in a matter of minutes.

Conclusions

At the time I'm writing this post Angular 2 is in alpha and it's changing a lot, which is really frustrating since something always seems to be broken and the documentation/examples are inaccurate even on their own site. It's hard to keep track of whether it's *for, *ngfor, *ng-for or something completely different from one build to the next.

I like that web components are the mainstay and that it unifies react.js, angular.js and polymer. One big thumbs up for $scope being gone, but also a big thumbs down for two-way data binding being gone. Kind of a deal breaker for me. Angular has teamed up with the TypeScript team and they are pushing it, which definitely is a good thing, but it's not Angular specific, ergo no credit for that.

The learning curve was about the same for both frameworks; both use the ES6 module loading system and depend on runtime transpilation to ES5 for now. What tips the scale towards Aurelia for me is that it's cleaner, supports two-way data binding and that it's all about convention over configuration/code. That's my opinion after a first look at both frameworks, watch the screencasts and judge for yourself.

As I mentioned I try to stay agnostic about the frameworks, libs and tools that I use, but I’ll definitely give aurelia a try for my next web project and try to forget that there is an IDE named VS Code out there. *BOOM* ;-)

EDIT: About 2-way data binding

It's ridiculous how many emails I've received about this, but really, there is no two-way data binding in Angular 2. Do not confuse the square-bracket-parenthesis syntax [(ng-model)] with two-way binding: the square brackets one-way bind to a property and the parentheses listen for events. They obviously realized that typing the property binding and the event handler separately is cumbersome, so they introduced a new directive called ng-model.

So now you have very Angular-specific code in your view templates instead of just two-way binding to the DOM property. I refactor the code to bind to the checked property from this screencast at the beginning of the next one here.
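To make the difference concrete, here's a rough sketch of the two forms (the exact directive names kept changing between the alpha builds, and todo.done is just a made-up model property):

<!-- verbose form: one-way bind the checked property, listen for the change event -->
<input type="checkbox" [checked]="todo.done" (change)="todo.done = $event.target.checked">

<!-- the ng-model directive combines the two -->
<input type="checkbox" [(ng-model)]="todo.done">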

I like angular

Don't get me wrong, I like Angular and I will use Angular 2, but I don't mindlessly love everything they do like some fanboy. Stay open-minded, people, and don't be afraid to express your opinion, it's just yet another framework. You don't need to love everything the Angular team produces.

By the way, make sure to check out my latest screencast on angular 2 forms!

 

Until next time, have a nice day!  

 

Source code as always @ https://github.com/ajtowf

Links: aurelia.io, angular.io

Keywords : Visual Studio Code, HTML5, JavaScript, AngularJS 2.0, Aurelia, TypeScript

SignalR Scaleout with SQL Server in Azure

A project for one of our customers recently had a scenario where mobile clients posted updates to a REST backend running on one site and we needed to notify all clients browsing a different site. The first technique that came to mind was SignalR, but we had only used it within the same app before. Now we'd like to share a hub across apps.

Fortunately, the solution was very simple: SignalR supports CORS and scaling out with SQL Server out of the box. All you need to install are the following two NuGet packages.

PM> Install-Package Microsoft.AspNet.SignalR
PM> Install-Package Microsoft.AspNet.SignalR.SqlServer

This is truly powerful and the SignalR team has done a great job making it easy to use. There's an excellent official guide here. Here's a screencast to show how quickly and easily you can get it up and running in Azure from scratch.
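For reference, wiring up the SQL Server backplane is roughly a one-liner in the OWIN startup class. A minimal sketch, assuming SignalR 2.x and a connection string named "SignalR" in web.config (adjust names to your own setup):

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Use SQL Server as the backplane so the hub can be shared across apps/instances
        string connectionString = ConfigurationManager.ConnectionStrings["SignalR"].ConnectionString;
        GlobalHost.DependencyResolver.UseSqlServer(connectionString);

        app.MapSignalR();
    }
}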

 

Source code @ https://github.com/ajtowf/signalrlab/

 

Hope you find it useful!

 

Keywords : VS2013, Azure, SignalR, WebApi, MVC, OWIN, SQL Server

socialtime.se lab days project

Lab days

I've been super busy at work but finally I've gotten around to some fun lab days coding.

The objective

I’ve often wondered how much time we spend on social media apps on our phones. The idea came pretty naturally – create an app that accurately measures the time for us.

Coding sessions

The 1.0 version of socialtime.se was coded in 4 sessions and this is how the time was spent. The sessions were 1 or 2 days apart since I rarely have the time to sit and code for 8h straight nowadays, unless it's for a paying customer of course.

  1. 2h developing the android app for monitoring running processes.
  2. 1h studying the android facebook SDK and implementing auth from the app.
  3. 4h developing REST backend with facebook auth and a super simple frontend.
  4. 1h publishing the app to play store and minor refactorings.
  5. 1.5h writing this blog post, 45 minutes on creating the graphics ;-)

Android App

The app is really basic, it has a background service running which monitors the running processes on a separate thread and just one activity to display the social time.

[Screenshot: socialtime app v1.0]

My first approach was to read the log and filter for ActivityManager since that seemed to work with adb. But when running logcat from within the app I didn't get the same information, which I guess is a good thing from a security standpoint.

REST API and Authentication

Since we initially only measure Facebook time it's safe to assume that the users can use Facebook to authenticate themselves. Another upside is that we can retrieve their identity by requesting their public information, meaning they won't need to create a local account just to provide a username.

This is where it became interesting. We've built a backend using ASP.NET Web API which allows authorized calls if the user is authenticated via Facebook. The catch is that the user authenticates with Facebook through the app, and we can't use the access token issued to the app to communicate securely with our backend. So this is my solution.

[Diagram: Android / Facebook / Web API authentication flow]

In a sentence: we issue a new custom token, by validating the Facebook access token that is passed to us, which can then be used for secure communication. Pretty neat!
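The implementation details are product specific, but the gist is only a few lines. Here's a hedged sketch of the idea; the endpoint and the IssueCustomToken helper are made-up names, not the actual code:

public class TokenController : ApiController
{
    // The Android app posts the Facebook access token it was issued
    public async Task<HttpResponseMessage> Post([FromBody]string facebookAccessToken)
    {
        using (var client = new HttpClient())
        {
            // Validate the token by asking Facebook who it belongs to
            var response = await client.GetAsync(
                "https://graph.facebook.com/me?access_token=" + facebookAccessToken);

            if (!response.IsSuccessStatusCode)
            {
                return Request.CreateResponse(HttpStatusCode.Unauthorized);
            }

            // The public profile gives us the identity, issue our own token for calls to our backend
            string profileJson = await response.Content.ReadAsStringAsync();
            return Request.CreateResponse(HttpStatusCode.OK, new { token = IssueCustomToken(profileJson) });
        }
    }

    private static string IssueCustomToken(string facebookProfileJson)
    {
        ...
    }
}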

Frontend

I think it was about 2:55am when I finished the app and the API and everything was in place and working. My deadline was 3am and not a minute more; I needed to be at work at 9am and since I'm not in my twenties anymore I need the sleep to function properly.

I hosted the backend in an Azure website and I had bought the domain socialtime.se via surftown, and once I uploaded the page to surftown I noticed it couldn't fetch the data. Why? You guessed it, I hadn't enabled cross-origin resource sharing. So I quickly installed the NuGet package for CORS, enabled it, decorated the controller with an EnableCors attribute, re-deployed the API and voilà, a beautiful fully working stack in place. And all this exactly as the clock turned 3am!
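In case you want to do the same, the whole thing boils down to something like this (a sketch assuming the Microsoft.AspNet.WebApi.Cors package; the controller name and origin are placeholders):

// In WebApiConfig.Register
config.EnableCors();

// Allow the surftown-hosted frontend to call the API
[EnableCors(origins: "http://socialtime.se", headers: "*", methods: "GET")]
public class SocialTimeController : ApiController
{
    ...
}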

 

[Screenshot: socialtime.se v1.0 frontend]

It isn’t pretty but hey, it worked!

Future

The infrastructure is in place so adding functionality will go fast. My unprioritized backlog looks something like this.

  • Measure time for Twitter, Instagram and G+, separately.
  • Measure time spent per app and per day. (now it’s just milliseconds since forever)
  • Proper frontend and move it to azure.
  • Remove the public list, you’ll need to login to see only your social time. Several requests for this actually :-)
  • Use some cool HTML5 charting lib to display your social time.

Until then, get the app and have a nice day!

 

[Get it on Google Play]

 

AngularJS Loading Indicator / Splash screen

How do we prevent the flash of unrendered content (FOUC)? We don't want to simply hide the unrendered content from the user, since the user may think that the app is broken, especially on a slow connection. The browser starts to render elements as soon as it encounters them; we can use this to our advantage to display a full-screen block of HTML while the page is loading, i.e. a splash screen.

Full working sample can be found at http://jsfiddle.net/45EfF/.
Let's start with the markup. I've added a splash div at the top of the body tag and the CSS is loaded inline in the head tag. This way, when the browser starts parsing the body tag it will lay out the splash screen elements and style them before it starts to load any other files.

<div class="splash ng-cloak">
  <h2>Loading</h2>
</div>

Name: <input type="text" ng-model="name">

{{greeting}} {{name}}

We want to make sure that the splash screen goes away after the page is loaded, we can do this using the ngCloak directive. The ngCloak directive will add a CSS class that sets a display: none !important on the element and then remove the display: none once the page has been loaded. We can hijack this ngCloak directive for the splash screen and invert the style for the splash element only.


.splash {
	display: none;
}

[ng-cloak].splash {
	display: block !important;
}

.splash {
	position: absolute;
	top: 0;
	left: 0;
	height: 100%;
	width: 100%;
	filter: alpha(opacity=60);
	opacity: 0.6;
	background: #000;
}

.splash h2 {
	text-align: center;
	font-size: 4em;
	color: white;
}
Finally, I’ve manually bootstrapped angular to simulate slow loading.
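The snippet isn't included inline here, but the idea is simply to delay the call to angular.bootstrap by a couple of seconds, something along these lines ('myApp' is a placeholder module name):

angular.element(document).ready(function () {
    // wait two seconds before bootstrapping to simulate a slow load
    setTimeout(function () {
        angular.bootstrap(document, ['myApp']);
    }, 2000);
});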

 

It’s as simple as that, hope it helped, cheers!

Mobile Angular UI with Bootstrap 3

I've been looking for a nice mobile UI framework that plays well with angular for quite some time now. When I started developing golf-caddie.se, jQuery Mobile was pretty much the only competent framework out there and it was quite cumbersome to get it to play nice with angular. So every now and then I try out new UI frameworks, hoping to find a replacement for jqm. By the way, did I mention that I want it to work well on WP8? Yup, proud owner of a Lumia 920.

A couple of months ago I gave ionicframework a go; I don't remember the details but it was awful on WP8. Yesterday I came across an open source project, mobile-angular-ui, which seemed to be exactly what I've been looking for.
You may wonder, why not just use bootstrap? Simple answer: because it doesn't give the same look and feel as a native app. My web app is intended to be used on the golf course and on mobile devices only, and plain bootstrap just doesn't cut it. The user controls aren't designed with mobile first in mind even if the layout mechanism is.
Anyways, before I started a massive refactoring to replace jqm I put together a little "hello world" app with a simple slide-in menu and some page navigations. I've hosted it on http://mobile-angular-ui-test.towfeek.se/.
Really simple stuff, a sidebar and two partials. It seemed to work fine until I added an animation when navigating between pages. Boom, the page doesn't render! Ok, that's fine, not a deal breaker, who needs animations anyway? I just reported a bug and moved on.
The ultimate test – how does it look on my WP8?
  • Page renders a bit slow, I can see the default title before the partial updates it.
  • Slide menu actually slides in, sweet!
  • Navigation works fine without animations.
  • Page only renders the content that fits the screen initially, scrolling doesn’t work. Deal breaker!

angular_mobile_ui

The project is still at an early stage, but it definitely has potential. I'll try to contribute to the project if time allows but I won't refactor golf-caddie to use it just yet.
 
Cheers!

ASP.NET MVC and AngularJS

I really think these two frameworks don’t mesh together too well and this blog post is me trying to explain why.

If you're used to traditional MVC with full page refreshes, views, server-side partials etc. and you've decided that you want to use a bit of angular inside your pages, you are going to be very frustrated. Angular is really best for single page applications (SPA seems to be a buzzword at the moment). If you just want some JavaScript goodness, just use knockoutjs.

I'd suggest using WebApi instead. This is what I've found works best.

  • The server side actions on the ApiController should only return JSON data to the client and be consumed by ajax calls from angular (see the sketch after this list).
  • Stop thinking in terms of razor and writing C# code in your views and working with the @model passed from the controller.
  • Only have one razor page for your app (Index.cshtml), no views, no partials. Load everything on the index page.
  • Do not use the MVC routing, let angular perform the dynamic page updates.
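To make the first point concrete, here's a minimal sketch; the controller, service and route are made up for illustration:

// Server side: the ApiController only returns data, Web API serializes it to JSON
public class TodosController : ApiController
{
    public IEnumerable<TodoDto> Get()
    {
        return todoService.GetAll();
    }
}

and on the client side angular consumes it with a plain ajax call:

$http.get('/api/todos').success(function (todos) {
    $scope.todos = todos;
});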
You might ask yourself: why use razor at all? Well, the Web.Optimization bundle is quite neat and I've grown quite attached to it :-)
 
For SPAs, sure, go ahead and use angular, it's awesome. But stick with angular/javascript for the front-end and use WebApi for the back-end if you've decided that you want to try out angular on top of asp.net.
 
Also, imo SPAs should be fairly small; if you know your project will be fairly big, a SPA really isn't the way to go. It could be me just having a hard time adapting since I'm used to hooking everything up in a spaghetti of document.ready functions :-P I've always been a "back-end / do it on the server / use as much static typing as possible" kind of guy, but perhaps that's changing, especially with tools like TypeScript, which is a must in all my web projects nowadays.
 
I might add that I also have experience using angular together with jQuery Mobile from developing a scorecard app for golf (golf-caddie.se), which wasn't too pleasant, but that's another blog post (two frameworks manipulating the DOM don't play nice together).
 
Just my thoughts, it’s not written in stone :D

Lab days with focus on security (STS, WebApi, WCF and WIF)

Lab days once again at work, this time I focused on security. Can’t share the code this time since it contains some company confidential stuff but the diagram below basically summarizes my architecture idea.

[Diagram: lab days security architecture]

Securing WCF service with WIF

There are plenty of blogs out there describing how to do this so I'm not going to explain it in depth; if you, however, only have experience with WIF 3.5 like me, I found an excellent migration guidelines page on msdn (http://msdn.microsoft.com/en-us/library/jj157089.aspx). We use a custom binding for tcp transport support with reliable sessions, which makes you wanna kill yourself when you see the binding configuration. Due to this I can't share the code since it is considered "intellectual property" of Saab (me?).

Important WIF 3.5 to 4.5 change!

You could previously access the calling user's claims in the WCF service by casting ServiceSecurityContext.Current.PrimaryIdentity to ClaimsIdentity. This is no longer true; you'll get the claims by casting Thread.CurrentPrincipal to ClaimsPrincipal or Thread.CurrentPrincipal.Identity directly to ClaimsIdentity. For this to work you'll also need to set <serviceAuthorization principalPermissionMode="Always" /> on the service behavior declaration. See also Dominick Baier's post for more details.
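In code the change looks roughly like this (a sketch, not the actual service code):

// WIF 3.5
var identity = ServiceSecurityContext.Current.PrimaryIdentity as ClaimsIdentity;

// WIF 4.5
var principal = Thread.CurrentPrincipal as ClaimsPrincipal;
var claimsIdentity = Thread.CurrentPrincipal.Identity as ClaimsIdentity;
var nameClaim = claimsIdentity.FindFirst(ClaimTypes.Name);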

JWT, Identity Server v2 and WebApi

I've updated the development STS in our product at work from startersts to identityserver v2, which I thought was a great platform to continue exploring, especially since it supports issuing JWTs (JSON Web Tokens). At least I thought it did until I discovered that it doesn't! They do however have a branch that supports it and uses Microsoft JWT (https://github.com/thinktecture/Thinktecture.IdentityServer.v2/tree/Microsoft-JWT).

I found a nice message handler that validates JWTs and uses the JSON Web Token Handler for the Microsoft .NET Framework 4.5, written by the security guru Vittorio Bertocci (http://code.msdn.microsoft.com/AAL-Native-Application-to-fd648dcf/sourcecode?fileId=62849&pathId=697488104). And as always I encountered major problems when trying to validate the JWT issued by identity server on the server side (webapi). Vittorio uses Windows Azure AD together with the Windows Azure Authentication Library, which of course validates the token correctly.

I think there is a May release coming up for identity server, perhaps I'll give it another shot then, but for now I had to use Azure AD.

The outcome wasn't actually what I expected when I began, but the road there was educational. I suggest following Vittorio on http://www.cloudidentity.com/blog to keep up with the updates on the JSON web token handler, which currently is only in developer preview. It hurts to keep up and always use bleeding edge technology but hey, they don't call it lab days for nothing. ;-)

 

Peace!

Migrating ASP.NET MVC to WebApi with no breaking changes

Recently I've had the pleasure of upgrading our REST interface at work from ASP.NET MVC3 to WebApi, so I thought a lessons learned or "watch out for this" blog post was suitable, especially since I managed to do it without the need to bump any version number on our server, i.e. no breaking changes.

I think there are others out there that have been using the MVC framework as a pure REST interface with no front end, i.e. dropping the V in MVC, before webapi was available.

Filters

First of all, webapi is all about registering filters and message handlers and letting requests be filtered through them. A filter can be registered globally, per controller or per action, which imo already is more flexible than MVC.

public class MvcApplication : HttpApplication
{
    protected void Application_Start()
    {
        ...
        GlobalConfiguration.Configuration.MessageHandlers.Add(new OptionsHandler());
        GlobalConfiguration.Configuration.MessageHandlers.Add(new MethodOverrideHandler());
        GlobalConfiguration.Configuration.Formatters.Insert(0, new TypedXmlMediaTypeFormatter ...);
        GlobalConfiguration.Configuration.Formatters.Insert(0, new TypedJsonMediaTypeFormatter ...);
        ...
    }
}
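Filters follow the same pattern. A quick sketch of the three scopes (using the VersioningFilter shown further down; OrdersController is just a placeholder name):

// Globally, in Application_Start
GlobalConfiguration.Configuration.Filters.Add(new VersioningFilter());

// Per controller
[VersioningFilter]
public class OrdersController : ApiController { ... }

// Per action
[VersioningFilter]
public HttpResponseMessage Get() { ... }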

 

Method override header

Webapi doesn't accept the X-HTTP-Method-Override header by default; in our installations we often see that the PUT, DELETE and HEAD verbs are blocked. So I wrote the following message handler, which I register in Application_Start.

public class MethodOverrideHandler : DelegatingHandler
{
    private const string Header = "X-HTTP-Method-Override";
    private readonly string[] methods = { "DELETE", "HEAD", "PUT" };

    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        if (request.Method == HttpMethod.Post && request.Headers.Contains(Header))
        {
            var method = request.Headers.GetValues(Header).FirstOrDefault();
            if (method != null && methods.Contains(method, StringComparer.InvariantCultureIgnoreCase))
            {
                request.Method = new HttpMethod(method);
            }
        }

        return base.SendAsync(request, cancellationToken);
    }
}

 

Exception handling filter

In our controllers we throw HttpResponseExceptions when a resource isn't found or the request is bad, for instance; this is quite neat when you want to short-circuit the request processing pipeline and return an HTTP error status code to the user. The thing that caught me off guard is that an exception thrown from a controller action is picked up by the exception filter, but an exception thrown from another filter is not. After some head scratching I found a discussion thread on codeplex where it's explained that this is intentional, so do not throw exceptions in your filters.

We have a filter which looks at the client's accept header to determine if our versions (server/client) are compatible; this was previously checked in our base controller, but with webapi it felt like an obvious filtering situation and we threw Not Acceptable if we weren't compatible. This needed to be re-written to just setting the response on the action context and not calling the base class's OnActionExecuting, which imo isn't as clean a design.

public class VersioningFilter : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {            
        var acceptHeaderContents = ...;
        if (string.IsNullOrWhiteSpace(acceptHeaderContents))
        {                
            actionContext.Response = actionContext.Request.CreateErrorResponse(HttpStatusCode.NotAcceptable, "No accept header provided");
        }
        else if (!IsCompatibleRequestVersion(acceptHeaderContents))
        {             
            actionContext.Response = actionContext.Request.CreateErrorResponse(HttpStatusCode.NotAcceptable, "Incompatible client version");
        }
        else
        {
            base.OnActionExecuting(actionContext);
        }
    }
}

 

Request body model binding and query params

In MVC the default model binder mapped x-www-form-encoded parameters to parameters on the action if the name and type matched; this is not the case with webapi. Prepare yourself to create classes and mark the parameters on your controller with the FromBody attribute, even if you only want to pass in a simple integer that is not part of the URI. Furthermore, to get hold of the query params provided in the URL you'll need to pass the request URI to the static helper method ParseQueryString on the HttpUtility class. It's exhausting, but it works and it still doesn't break any existing implementation.

[HttpGet]
public HttpResponseMessage Foo([FromBody]MyModel bar)
{
    var queryParams = HttpUtility.ParseQueryString(Request.RequestUri.Query);
    string q = queryParams["q"];
    ...
}

 

Posting Files

There are plenty of examples out there on how to post a file with MVC or WebApi so I'm not going to cover that. The main difference here is that the MultipartFormDataStreamProvider needs a root path on the server that specifies where to save the file. We didn't need to do this in MVC; we could simply get the filename from the HttpPostedFileBase class. I haven't found a way to just keep the file in memory until the controller is done. I ended up with a couple more lines of code where I create the attachments directory if it doesn't exist, save the file and then delete it once we've sent the byte data to our services.

[ActionName("Index"), HttpPost]
public async Task<HttpResponseMessage> CreateAttachment(...)
{
    if (!Request.Content.IsMimeMultipartContent())
    {
        throw new HttpResponseException(HttpStatusCode.UnsupportedMediaType);
    } 

    string attachmentsDirectoryPath = HttpContext.Current.Server.MapPath("~/SomeDir/");
    if (!Directory.Exists(attachmentsDirectoryPath))
    {
        Directory.CreateDirectory(attachmentsDirectoryPath);
    }

    var provider = new MultipartFormDataStreamProvider(attachmentsDirectoryPath);
    var result = await Request.Content.ReadAsMultipartAsync(provider);
    if (result.FileData.Count < 1)
    {
        throw new HttpResponseException(HttpStatusCode.BadRequest);
    }
    
    var fileData = result.FileData.First();
    string filename = fileData.Headers.ContentDisposition.FileName;
    
    // Do stuff ...

    File.Delete(fileData.LocalFileName);    
    return Request.CreateResponse(HttpStatusCode.Created, ...);
}

 

Breaking change in serialization/deserialization: JavaScriptSerializer -> Newtonsoft.Json

WebApi ships with the Newtonsoft.Json serializer; there are probably more differences than I noticed, but date time types are serialized differently with Newtonsoft. To be sure that we didn't break any existing implementations I implemented my own formatter which wraps the JavaScriptSerializer, and inserted it first in my formatters configuration. It is really easy to implement custom formatters, all you need to do is inherit MediaTypeFormatter.

public class TypedJsonMediaTypeFormatter : MediaTypeFormatter
{        
    private static readonly JavaScriptSerializer Serializer = new JavaScriptSerializer();
    
    public TypedJsonMediaTypeFormatter(MediaTypeHeaderValue mediaType)
    {        
        SupportedMediaTypes.Clear();
        SupportedMediaTypes.Add(mediaType);
    }

    ...

    public override Task<object> ReadFromStreamAsync(Type type, Stream readStream, System.Net.Http.HttpContent content, IFormatterLogger formatterLogger)
    {
        var task = Task<object>.Factory.StartNew(() =>
        {
            // Read the raw JSON from the request stream and let the legacy serializer deserialize it
            var sr = new StreamReader(readStream);
            string json = sr.ReadToEnd();
            object val = Serializer.Deserialize(json, type);
            return val;
        });

        return task;
    }

    public override Task WriteToStreamAsync(Type type, object value, Stream writeStream, System.Net.Http.HttpContent content, System.Net.TransportContext transportContext)
    {
        var task = Task.Factory.StartNew(() =>
        {
            string json = Serializer.Serialize(value);
            byte[] buf = System.Text.Encoding.Default.GetBytes(json);
            writeStream.Write(buf, 0, buf.Length);
            writeStream.Flush();
        });

        return task;
    }
}

 

MediaFormatters Content-Type header

Any real world REST interface needs a custom content type format, and by default the xml and json formatters always return application/xml and application/json respectively. This is not good enough; I suggest that you create custom implementations of JsonMediaTypeFormatter and XmlMediaTypeFormatter and insert them first in your formatters configuration. In your custom formatter, just add your media type that includes the vendor and version to the SupportedMediaTypes collection. In our case we also append the server minor version to the content type as a parameter; the easiest way to do that is by overriding the SetDefaultContentHeaders method and appending whichever parameter you want to the content-type header.

public class TypedXmlMediaTypeFormatter : XmlMediaTypeFormatter
{
    private readonly int minorApiVersion;

    public TypedXmlMediaTypeFormatter(MediaTypeHeaderValue mediaType, int minorApiVersion)
    {
        this.minorApiVersion = minorApiVersion;
        SupportedMediaTypes.Clear();
        SupportedMediaTypes.Add(mediaType);
    }

    ...

    public override void SetDefaultContentHeaders(Type type, HttpContentHeaders headers, MediaTypeHeaderValue mediaType)
    {
        base.SetDefaultContentHeaders(type, headers, mediaType);
        headers.ContentType.Parameters.Add(new NameValueHeaderValue("minor", minorApiVersion.ToString(CultureInfo.InvariantCulture)));
    }
}

 

I think that covers it all, good luck migrating your REST api!

I might cover upgrading from WIF 3.5 (Microsoft.IdentityModel) to WIF 4.5 in my next post, or thinktectures startersts to identity server v2. Take a wild guess what I’ve been busy with at work! ;-)

Performance measurement ASP.NET MVC vs. WebAPI

I've been playing around with WebAPI for a couple of days now and I'm quite pleased with most of it so far. At work we have a stripped-down MVC3 site (threw away the M and the V) for our public REST interface. Except for the ugly ActionResults in the controllers we are satisfied with it so far. I've read some articles on how the web api pipeline is supposed to be more optimized, so I thought I'd put it to the test before trying to sell in the platform upgrade to our product owners; it would also result in our first major version bump, so we need to be sure that we can benefit from it and not just do it for the sake of fun (for me :-)).

The server side is a simple controller with 4 actions: 1 GET, 1 POST, 1 PUT and 1 DELETE method that only return a status code, no I/O or business logic; we just want to compare the pipelines. The test runs five parallel threads executing 100,000 requests altogether. The MVC site is hosted on an IIS 8 Express server and the WebAPI is both self hosted and hosted on an IIS 8 Express server. The test was performed on my developer laptop running an Intel Core i7 CPU @ 2.67 GHz (dual core with two threads per core), 8GB RAM and a 64-bit Win7 operating system.
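The actual test code is available for download at the end of the post; the client is basically just a tight loop over HttpClient, roughly like this sketch (URL and numbers are placeholders):

const int Threads = 5;
const int RequestsPerThread = 20000; // 100 000 requests in total

var sw = Stopwatch.StartNew();
Parallel.For(0, Threads, new ParallelOptions { MaxDegreeOfParallelism = Threads }, _ =>
{
    using (var client = new HttpClient())
    {
        for (int i = 0; i < RequestsPerThread; i++)
        {
            client.GetAsync("http://localhost:12345/api/values").Result.EnsureSuccessStatusCode();
        }
    }
});
sw.Stop();

Console.WriteLine("Total: {0:F2}s, avg: {1:F3}ms/request",
    sw.Elapsed.TotalSeconds,
    sw.Elapsed.TotalMilliseconds / (Threads * RequestsPerThread));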

                      | Avg. request time (ms) | Total run time (s) | Requests per second
MVC (IIS)             | 0.42                   | 41.83              | 2391
WebAPI (IIS)          | 0.35                   | 35.06              | 2852
WebAPI (Self Hosted)  | 0.12                   | 11.67              | 8567

 

The results don't say much since the average request time is very fast in all three cases, but a relative comparison shows that WebAPI on IIS is ~20% faster than MVC, and the self hosted console application is a whole ~260% faster. Although this pure pipeline time will be negligible once you add business logic with I/O operations or even a single WCF call (~50ms perhaps?).

There’s a whole lot more I could write about asp.net webapi but I’ll save it for future blog posts and leave you with the performance measure for now. 

The code for the test is available for download here, please don’t contact me about translating it to VB ;-)

Peace!

Building real-time web app with SignalR

We just recently had lab days at work, which for me basically means that I can play around with some new tech to keep myself up to date. Anywho, one of the things I tried out was SignalR, which is a .net library for building real-time, multi-user interactive web applications.

The goal was to build a simple chat, a textbook example, shouldn't be any fuss, right? I started out with an asp.net mvc 3 empty project and installed the EntityFramework, SignalR and knockoutjs nuget packages. Lately I've been using knockoutjs in pretty much all my web projects, I'm a big fan!

PM> Install-Package EntityFramework
PM> Install-Package knockoutjs
PM> Install-Package SignalR

I’m not a big fan of entity framework code first but it was enough for my throw-it-away-when-you’re-done requirements. The basic idea is to just create POCOs and decorate the properties with attributes and the db will be generated for you.

The only entity I want to store is a chat message:

public class Message
{
    [Key]
    public int MessageId { get; set; }

    [Required, MaxLength(200)]
    public string Text { get; set; }
}

and the actual database context as

public class DatabaseContext : DbContext
{
    public DbSet<Message> Messages { get; set; }
}

keeping it really simple, no relations at all. And let’s not forget to register the initializer in Application_Start.

Database.SetInitializer(new DropCreateDatabaseIfModelChanges<DatabaseContext>());

Now to the fun part, SignalR. We simply create a Hub that runs on the server that will be able to invoke methods on our clients.

public class Messages : Hub
{
    public void GetAll() { ... }
    public bool Add(Message newMessage) { ... Clients.messageAdded(newMessage); ... }
}

and on our page we only need to 1) create a proxy on the fly, 2) declare a function on the hub so the server can invoke it, and 3) call the add method on the server.

$(function () {
    // proxy created
    this.hub = $.connection.messages;

    // method invoked by the server when a message is added
    this.hub.messageAdded = function (m) { ... };

    // how we send a message to the server
    this.sendMessage = function () { ... this.hub.add(m); ... };
});
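One detail that's easy to forget and that the snippet above doesn't show: the page needs a script reference to the generated proxies at /signalr/hubs, and nothing happens until the connection is actually started:

// open the connection; only call hub methods once start has completed
$.connection.hub.start().done(function () {
    // e.g. ask the server for the existing messages here
});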

The server-side code is telling all the clients to call the messageAdded() JavaScript function. Mind blowing? We are actually calling the clients back from the server by sending the name of the client method to invoke over our connection. SignalR handles all the connection plumbing on both client and server and makes sure the channel stays open and alive.

Screenshot of desktop browser and windows phone emulator browser:

[Screenshot: SignalR chat in desktop and Windows Phone emulator browsers]

 

Download the code (793,9KB) for the full example.