Breaking Changes When Upgrading from EF Core 6 to 7: What You Need to Know

Entity Framework Core (EF Core) is a popular Object-Relational Mapping (ORM) framework used by .NET developers for database operations. With the release of EF Core 7, many developers are considering upgrading their projects to take advantage of the new features and improvements.

However, as with any major version upgrade, there are some breaking changes that developers need to be aware of. In this blog post, we’ll discuss some of the breaking changes when migrating from EF Core 6 to 7 and how to address them.

  1. Renaming of “FromSqlInterpolated” to “FromSql”
    In EF Core 7, “FromSql” has been introduced as the new name for the “FromSqlInterpolated” method; it takes an interpolated string and turns the interpolated values into parameters, while “FromSqlRaw” remains unchanged. This is a minor change, but if your project has a lot of code that uses “FromSqlInterpolated”, you will want to update it to use the new method name.

    EF Core 6:

    var blogs = context.Blogs.FromSqlRaw("SELECT * FROM dbo.Blogs").ToList();
    var blogs = context.Blogs.FromSqlInterpolated($"SELECT * FROM dbo.Blogs WHERE Url = {url}").ToList();
    

    EF Core 7:

    var blogs = context.Blogs.FromSqlRaw("SELECT * FROM dbo.Blogs").ToList();
    var blogs = context.Blogs.FromSql($"SELECT * FROM dbo.Blogs WHERE Url = {url}").ToList();
    
  2. Changes to the “Update” method

    In EF Core 7, the “Update” method has been changed to use a “Set” method instead of “Update”. This means that if you have code that uses the “Update” method, you will need to update it to use the new “Set” method.

    EF Core 6:
    context.Blogs.Update(blog);
    

    EF Core 7:

    context.Set<Blog>().Update(blog);
    
  3. Removal of the “UseInternalServiceProvider” method

    In EF Core 7, the “UseInternalServiceProvider” method has been removed. This method was used to configure the dependency injection container for EF Core, but it has been replaced with a new configuration method called “AddEntityFramework”.

    EF Core 6:
    options.UseInternalServiceProvider(serviceProvider);
    

    EF Core 7:

    options.AddEntityFramework().UseInternalServiceProvider(serviceProvider);
    
  4. “ExecuteSqlInterpolated” superseded by “ExecuteSql”

    In EF Core 7, the new “ExecuteSql” method has been introduced as the preferred replacement for “ExecuteSqlInterpolated”. Like its predecessor, it takes an interpolated string and turns the interpolated values into parameters.

    EF Core 6:
    context.Database.ExecuteSqlInterpolated($"UPDATE dbo.Blogs SET Rating = {newRating} WHERE Url = {url}");
    

    EF Core 7:

    context.Database.ExecuteSql($"UPDATE dbo.Blogs SET Rating = {newRating} WHERE Url = {url}");
    
  5. Changes to the “ToSql” method

    In EF Core 7, the “ToSql” method has been changed to “ToQueryString”. This method is used to generate SQL queries from LINQ expressions. If you have code that uses the “ToSql” method, you will need to update it to use the new “ToQueryString” method.

    EF Core 6:
    var sql = context.Blogs.Where(b => b.Url.StartsWith("https://")).ToSql();
    

    EF Core 7:

    var sql = context.Blogs.Where(b => b.Url.StartsWith("https://")).ToQueryString();
    
  6. Changes to the “AsNoTracking” method

    In EF Core 7, the “AsNoTracking” method has been changed to accept a “QueryTrackingBehavior” parameter. This parameter can be used to specify the tracking behavior for the query. If you have code that uses the “AsNoTracking” method, you will need to update it to include the new parameter.

    EF Core 6:
    var blogs = context.Blogs.AsNoTracking().ToList();
    

    EF Core 7:

    var blogs = context.Blogs.AsNoTracking(QueryTrackingBehavior.NoTracking).ToList();
    
  7. Changes to the “CreateDbContext” method

    In EF Core 7, the “CreateDbContext” method has been changed to accept a “DbContextOptions” parameter. This parameter can be used to configure the DbContext options. If you have code that uses the “CreateDbContext” method, you will need to update it to include the new parameter.

    EF Core 6:
    var context = new MyDbContext();
    

    EF Core 7:

    var options = new DbContextOptionsBuilder<MyDbContext>().UseSqlServer(connectionString).Options;
    var context = new MyDbContext(options);
    
  8. Default value of the “Encrypt” attribute in SQL Server connection strings has been changed to “true”

    In EF Core 7, the default value for the “Encrypt” attribute in SQL Server connection strings has been changed to “true”. This is a high-impact breaking change, as it may affect existing applications that rely on the previous default behavior: the connection now fails if the client doesn’t trust the server’s certificate. If you are upgrading to EF Core 7 and your application relies on an unencrypted SQL Server connection, you will need to explicitly set the “Encrypt” attribute to “false” in your connection string, like this:
    "Data Source=myServer;Initial Catalog=myDatabase;Integrated Security=True;Encrypt=false;"
    

    Alternatively, you can set the “Encrypt” attribute explicitly to “true” if you wish to use an encrypted connection:

    "Data Source=myServer;Initial Catalog=myDatabase;Integrated Security=True;Encrypt=true;"
    
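If you want to keep encryption but your development SQL Server only has a self-signed certificate, a third option is to keep “Encrypt=true” and add “TrustServerCertificate=True”, which skips certificate validation. As a minimal sketch (the MyDbContext class and connection string are hypothetical, not taken from the examples above), the adjusted connection string goes wherever UseSqlServer is configured:

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // Encryption stays on, but the server certificate is not validated.
        // Only reasonable for development; use a trusted certificate in production.
        optionsBuilder.UseSqlServer(
            "Data Source=myServer;Initial Catalog=myDatabase;Integrated Security=True;Encrypt=true;TrustServerCertificate=True;");
    }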

In summary, upgrading to EF Core 7 can bring many benefits to your project, but it’s important to be aware of the breaking changes that come with the upgrade. By understanding and addressing these changes, you can ensure a smooth migration and take full advantage of the new features and improvements in EF Core 7.

Enabling C# and .NET Core Debugging in VS Code from an Offline Environment

VS Code has quickly become a fairly popular IDE/editor and a free lightweight alternative to Visual Studio. Getting up and running with editing and debugging C# code is usually as simple as installing the csharp extension from within VS Code. Great instructions can be found here.

But what if your computer isn’t hooked up to the internet? Here’s a great question on stackoverflow where the correct answer at least lets you install the csharp extension, but once the extension is loaded some errors will be logged saying that the tools needed for .NET Core debugging failed to install because there is no internet connection. And there is no offline bundle out there containing it all today.

So, how do we solve this? We’ll build the extension ourselves from the source, isn’t open source great?

Solution

The omnisharp github page has a reported issue on the matter, but it’s closed without really providing the full and complete answer. Modifications to the gulp file need to be made to be able to build the vsix file on a Windows machine.

Cloning the repo

Start by simply cloning the repo, installing the npm dependencies and compiling the code:

    git clone https://github.com/OmniSharp/omnisharp-vscode.git
    cd omnisharp-vscode
    npm i
    npm run compile

Manipulating the gulp file

Before building the vsix file as the github issue suggests, we’ll need to make a couple of changes, at least on a Windows machine. Open up gulpfile.js, which is located at the root level of the repo, and on line 95 change:

throw new Error('Do not build offline packages on windows. Runtime executables will not be marked executable in *nix packages.');

to

console.log('Do not build offline packages on windows. Runtime executables will not be marked executable in *nix packages.');

Also, it’s a bit overkill to build the extension for all platforms, so let’s just build it for our intended platform. On line 125 you should find the following:

    var packages = [];
    packages.push(new PlatformInformation('win32', 'x86_64'));
    packages.push(new PlatformInformation('darwin', 'x86_64'));
    packages.push(new PlatformInformation('linux', 'x86_64', new LinuxDistribution('centos', '7')));
    packages.push(new PlatformInformation('linux', 'x86_64', new LinuxDistribution('debian', '8')));
    packages.push(new PlatformInformation('linux', 'x86_64', new LinuxDistribution('fedora', '23')));
    packages.push(new PlatformInformation('linux', 'x86_64', new LinuxDistribution('opensuse', '13.2')));
    packages.push(new PlatformInformation('linux', 'x86_64', new LinuxDistribution('rhel', '7.2')));
    packages.push(new PlatformInformation('linux', 'x86_64', new LinuxDistribution('ubuntu', '14.04')));
    packages.push(new PlatformInformation('linux', 'x86_64', new LinuxDistribution('ubuntu', '16.04')));

You can change this to win32 only by simply removing the other lines:

    var packages = [];
    packages.push(new PlatformInformation('win32', 'x86_64'));

Now we’re ready to build!

Build the VSIX-file

Now let’s build our vsix-package by running:

    node node_modules/gulp/bin/gulp.js package:offline

which should produce a csharp.1.12.0-beta1-undefined.vsix file; mine’s around 200 MB.

Installing the VSIX-file

Just copy the VSIX file to the offline computer, press F1 in VS Code, start typing “Install from VSIX”, browse to the file, and restart VS Code once it’s installed.

That’s it, you’re now ready to debug .NET Core applications by simply pressing F5. VS Code will help you add the necessary JSON configuration files to the .vscode folder.

Git Deploying a Bundled Angular 2 App using Angular CLI to Microsoft Azure

In this screencast I use the angular-cli tool for the first time to package an angular2 app for production before git deploying it to Microsoft Azure.

Screencast

Angular CLI

The CLI is, at the moment of writing, in beta and very much still a work in progress. It’s an excellent tool, imho, for scaffolding a new project, components and services. In this screencast we ran the following commands:

  ng new PROJECT_NAME // creates a new project
  ng g component COMPONENT_NAME // creates a new component
  ng g service SERVICE_NAME // creates a new service
  ng build -prod // builds a production ready version
  ng serve -prod // serves a production ready version

The CLI allows you to do a lot more; I really recommend installing it and playing with it yourself.

  npm install -g angular-cli

Also make sure to check out their official github repo which serves as great documentation.

Git Deployment

In this screencast we took a couple of shortcuts; we didn’t set up a full CI environment. We initialized a new git repository in the dist folder and pushed only that folder to Azure, meaning we built it on our dev machine, a big NO NO. The workflow we want would look something like this instead:

  1. We commit a code change.
  2. Agent gets the latest code and builds it.
  3. Tests are run on build agent.
  4. If tests pass, deploy the dist folder to a staging slot.

Nevertheless we still need to enable a git repository for our web app, here’s an excellent guide on how to do that, it basically takes you through the steps I did in the screencast. Basically from the dist folder:

  1. git init
  2. git add *
  3. git commit -m "Initial Commit."
  4. git remote add azure GIT_CLONE_URL
  5. git push azure master

These commands should fire up an authentication dialog and once you’ve provided the credentials the files should be pushed to the site.

Summary

With these steps we’ve managed to create a production build of an Angular 2 app and deploy it to Azure. We did it all with just a few commands using the Angular CLI, which was pretty awesome. The CLI does lag behind the release candidates and is a work in progress, so please use it with caution.

Until next time, have an excellent day!

Angular 2 Material Replacing Bootstrap

In this week’s screencast we fully replace Bootstrap with Material components for Angular 2. Material2 just announced their alpha 2 release, adding a bunch of components, perfect timing for a live coding screencast; code at https://github.com/ajtowf/ng2_play. The ng2_play repo has also been updated to the latest Angular 2 version, which at the time of writing is beta 15, see the changelog for details.

During the coding session we integrate several Material components into our app.

Make sure to check out the screencast below, enjoy!

Screencast

Documentation / Demo App

There isn’t any official documentation for material2 yet, but there is a demo app in their github repo. Here are the steps to get it up and running on your local dev machine:

  1. Make sure you have `node` installed with a version of at least 4.2.3.
  2. Run `npm install -g angular-cli` to install the Angular CLI.
  3. Clone the angular/material2 repo
  4. From the root of the project, run `npm install`, then run `npm run typings` to install typescript definitions.
  5. To build the project, run `ng build`.
  6. To bring up a local server, run `ng serve`. This will automatically watch for changes and rebuild.

After the changes rebuild, the browser currently needs to be manually refreshed. Now you can visit the prompted URL in your browser to explore the demo app.

Resources on Angular Material

To learn more about material design and components for Angular, make sure to check out my Pluralsight course Angular Material Fundamentals.

Until next time, have a nice day folks and keep on coding!

Programming Interview Questions: Recursion

In this screencast we solve two commonly asked interview questions: factorial and traversing binary trees.

Screencast

What’s recursion?

A recursive function is simply a function that calls itself, and the trick is to realize when to stop calling ourselves to avoid infinite loops that result in stack overflows.

If the interviewers ask you to write down an algorithm that gives you the n:th Fibonacci number, calculates the factorial or traverses a binary tree, they probably want you to provide both an iterative and a recursive solution. We don’t address Fibonacci in the screencast, but the formula for the n:th number is simply the sum of the previous two, i.e.

f(n) = f(n-1) + f(n-2)
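The screencast doesn’t cover Fibonacci, but as a minimal sketch (not from the screencast, assuming fib(0) = 0 and fib(1) = 1), a recursive and an iterative version could look like this:

    // Recursive: base cases fib(0) = 0 and fib(1) = 1.
    private static long fibRecursive(int n) {
        if (n < 2) return n;
        return fibRecursive(n - 1) + fibRecursive(n - 2);
    }

    // Iterative: keeps the last two numbers and walks forward n steps.
    private static long fibIterative(int n) {
        long previous = 0, current = 1;
        for (int i = 0; i < n; i++) {
            long next = previous + current;
            previous = current;
            current = next;
        }
        return previous;
    }

The naive recursive version recomputes the same values over and over, so for larger n the iterative version (or memoization) is the one to reach for.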

Is this a good interview question?

Here are the recursive methods I developed during the screencast to calculate the factorial and to sum the values of all the nodes in a binary tree:

    // Sums the values of all nodes in a binary tree.
    private static int sum(Node node) {
        if (node == null) return 0;
        return node.Value + sum(node.Left) + sum(node.Right);
    }
    
    // Calculates n! recursively; n <= 1 is the base case.
    private static long factorial(int n) {
        if (n <= 1) return 1;
        return n * factorial(n - 1);
    }
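
For completeness, here’s a minimal sketch (not from the screencast) of the Node class the sum method assumes, plus an iterative factorial in case the interviewer asks for the non-recursive variant as well:

    // Hypothetical Node class assumed by the sum method above.
    private class Node {
        public int Value;
        public Node Left;
        public Node Right;
    }

    // Iterative factorial: multiplies 2..n into the result.
    private static long factorialIterative(int n) {
        long result = 1;
        for (int i = 2; i <= n; i++) {
            result *= i;
        }
        return result;
    }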

As you can see, the answers are usually very simple, but it’s not unusual to see candidates try to make things more complicated than they need to be. Just keep it simple.

Interviewers tend to ask these kinds of questions even if functional programming is a very small part of the day-to-day work. It’s always good to be prepared by practicing on some simple problems similar to the ones covered here. After one or two exercises you’ll get the hang of it, and it won’t be a problem if they throw these kinds of questions at you during the interview.

And as always, until next time, have a nice day!

Connection leaks when using async/await with Transactions in WCF

If you’re getting “The current TransactionScope is already complete” from service calls that don’t even consume transactions, you’ll probably want to read/see this.

Screencast and Code

The code can be found on github: https://github.com/ajtowf/dist_transactions_lab. One change I’ve made since the recording is that we don’t create the NHibernate factory with each call; we now use a singleton SessionManager instead. We’re also adding a convention to the factory to never lazy load, so that our Item entity doesn’t need to have virtual properties, which makes it easier to switch between OR-mapper implementations.

Leaking Connections

In a fairly complex distributed enterprise system we were getting some strange The current TransactionScope is already complete errors. We used transactions frequently, but we saw this on calls that weren’t even supposed to run within a transaction.

After trying almost everything, we got a hint from an NHibernate analyzer product that we shouldn’t consume an NHibernate session from multiple threads since it’s not thread-safe.

If you use await, that’s exactly what can happen. It turns out Entity Framework has the same problem.
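
To make the threading point concrete, here’s a small illustration (not from the original code) of how the code after an await can resume on a different thread pool thread, which is exactly how a single NHibernate session or EF context ends up being touched from multiple threads:

    // Illustration only: prints the managed thread id before and after an await.
    // Without a synchronization context, the continuation often runs on another
    // thread pool thread.
    public static async Task ShowThreadSwitchAsync()
    {
        Console.WriteLine("Before await: thread " + Thread.CurrentThread.ManagedThreadId);
        await Task.Delay(100).ConfigureAwait(false);
        Console.WriteLine("After await: thread " + Thread.CurrentThread.ManagedThreadId);
    }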

The following code in your service will leak connections if the awaited method or service call uses a database connection with EntityFramework or NHibernate.

    [OperationBehavior(TransactionScopeRequired = true)]
    public async Task CallAsync()
    {
        using (var ts = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
        {
            await _service.WriteAsync();
            ts.Complete();
        }
    }

Why Tasks in the Service Contract at all?

The lone reason for our service contracts being Task-based is that we use the same interface to implement our client-side proxies, which is neat, but the service doesn’t need to use await because of that. This will work, for instance:

    [OperationBehavior(TransactionScopeRequired = true)]
    public Task CallAsync()
    {
        // Do synchronous stuff
        return Task.FromResult(true);
    }

or (I don’t like this one though)

    [OperationBehavior(TransactionScopeRequired = true)]
    public Task CallAsync()
    {
        // Remember to copy the OperationContext and TransactionScope to the inner Task.
        return Task.Run(() =>
        {
            // Do synchronous stuff
        });          
    }

Oh, you don’t want to return a Task if you’re not doing anything async? Do this then:

    [OperationBehavior(TransactionScopeRequired = true)]
    public async Task CallAsync()
    {
        // Do synchronous stuff
    }

What about the warning? Turn it off with #pragma.

    [OperationBehavior(TransactionScopeRequired = true)]
    #pragma warning disable 1998
    public async Task CallAsync()
    #pragma warning restore 1998
    {
        // Do synchronous stuff
    }

You’ll probably want to wrap the entire service class with that pragma disable.

Solution

The main takeaway here is to simply not use async/await in your service code if you’re awaiting methods or service calls that will use database connections. The following refactoring solves the problem:

    [OperationBehavior(TransactionScopeRequired = true)]
    public Task CallAsync()
    {
        _service.WriteAsync().Wait();
        return Task.FromResult(true);
    }

As always, until next time, have a nice day!

Distributed Transactions in WCF with async and await

TL;DR?

See my screencast explaining the problem instead:

Problem

When flowing a transaction from a client to a service, Transaction.Current becomes null after awaiting a service-to-service call.

Unless of course you create a new TransactionScope in your service method as follows:

    [OperationBehavior(TransactionScopeRequired = true)]
    public async Task CallAsync()
    {
        using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
        {
            await _service.WriteAsync();
            await _service.WriteAsync();            
            scope.Complete();
        }
    }

Problem UPDATE

It doesn’t even have to be a service-to-service call; an await to a local async method also nulls Transaction.Current. To clarify with an example:

    [OperationBehavior(TransactionScopeRequired = true)]
    public async Task CallAsync()
    {
        await WriteAsync();
        // Transaction.Current is now null
        await WriteAsync();                     
    }

Why TransactionScopeAsyncFlowOption isn’t enabled by default I don’t know, but I don’t like to repeat myself, so I figured I’d always create an inner TransactionScope with that option using a custom behavior.

Attempted Solution

I created a message inspector implementing IDispatchMessageInspector and attached it as a service behavior. The code executes and everything, no problem there, but it doesn’t have the same effect as declaring the TransactionScope in the service method.

    public class TransactionScopeMessageInspector : IDispatchMessageInspector
    {
        public object AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
        {
            var transactionMessage = (TransactionMessageProperty)OperationContext.Current.IncomingMessageProperties["TransactionMessageProperty"];
            var scope = new TransactionScope(transactionMessage.Transaction, TransactionScopeAsyncFlowOption.Enabled);            
            return scope;
        }

        public void BeforeSendReply(ref Message reply, object correlationState)
        {
            var transaction = correlationState as TransactionScope;
            if (transaction != null)
            {
                transaction.Complete();
                transaction.Dispose();
            }
        }
    }

By looking at the identifiers when debugging, I can see that it is in fact the same transaction in the message inspector as in the service, but after the first call, i.e.

    await _service.WriteAsync();

Transaction.Current becomes null. The same thing happens if I don’t get the current transaction from OperationContext.Current in the message inspector, so it’s unlikely that is the problem.

Is it possible to create a TransactionScope in a Custom WCF Service Behavior?

Is it even possible to accomplish this? It appears that the only way is to declare a TransactionScope in the service method, that is:

    public async Task CallAsync()
    {
        var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled);
        await _service.WriteAsync();
        await _service.WriteAsync();            
        scope.Complete();
    }

With the following service contract, it’s obvious that we get an exception on the second service call if Transaction.Current became null in between:

    [OperationContract, TransactionFlow(TransactionFlowOption.Mandatory)]
    Task WriteAsync();

I got a link to a book posing the exact same question on my stackoverflow question. The conclusion is basically that it can’t be done in a clean way. Quoting the book:

We consider the lack of parity with standard WCF behavior introduced by async service operations a design flaw of WCF…

And then a far from ideal / insane solution is proposed.

Accepted Solution for now

It seems like the only way to make this work is to create an inner transaction. If you have a better solution, feel free to comment or contact me, or why not answer my stackoverflow question: http://stackoverflow.com/questions/34767978.

Until next time, have an excellent day!

Navigate Code Efficiently with JetBrains ReSharper in VisualStudio

I’ve started a new series on efficiency in Visual Studio with ReSharper; here’s the first part, which is on navigating code. What’s your favorite navigation shortcut?

Shortcuts used in video:

  • Ctrl + Shift + T: Find everything
  • Ctrl + T: Find files
  • Ctrl + F12: Go to Implementations
  • Shift + F12: Find Usages
  • Ctrl + Alt + PgUp/PgDn: Navigate to next/prev usage
  • Alt + Shift + L: Find file in Solution Explorer

Hope you enjoy the video and make sure to subscribe if you do, cheers!

Introduction to ASP.NET 5 (Part I) – Frameworks, DNVM, DNX and DNU

This is the first part of an introduction to ASP.NET 5 series. We start easy by taking a look at the different runtime versions and the dnvm, dnx and dnu commands. I plan to release a new part/lesson every week; check out the entire playlist here.

Cheers!

ASP.NET vNext – DNX, DNU and DNVM

There seems to be some confusion about the abbreviations used with ASP.NET 5 (vNext), so let’s try to sort them out and give a brief explanation.

  • DNX is an SDK and a runtime environment for creating .NET applications for Windows, Mac and Linux. Basically, it allows cross-platform development using .NET Core 5.
  • DNU is the .NET Development Utility. It allows you to build, package and publish projects created with DNX.
  • DNVM is the .NET Version Manager. It is basically a set of command line instructions which allow you to configure your .NET Runtime.

The official ASP.NET 5 docs are a great resource; here’s a link: http://docs.asp.net/en/latest/getting-started/index.html.

Cheers!