
vNext and Class Libraries

This is a follow-up to the previous post discussing my first steps with vNext. I suggest reading it first.

vNext class libraries are similar in concept to regular .NET class libraries. Like ASP.NET vNext applications, they do not actually build to a dll. What is great about them is that you can change a vNext library with nothing more than Notepad and it will be compiled on-the-fly the next time you run the page. Unfortunately, code changes in vNext, including in vNext libraries, require a restart of the host process in order to take effect. It would have been great if code changes did not require this restart and the app's state were maintained.
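For reference, a minimal project.json for such a class library looks roughly like this (a sketch based on the beta2 templates; the file generated for your project may contain additional entries):

{
    "version": "1.0.0-*",
    "dependencies": { },
    "frameworks": {
        "aspnet50": { },
        "aspnetcore50": { }
    }
}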

The good news is that you can reference regular .NET class libraries from vNext. This may sound trivial, but up until the recent beta2 and VS2015 CTP 5 it wasn't possible unless you pulled a GitHub update and manually "k wrapped" your .NET assembly with a vNext library, as explained here. Fortunately CTP 5 allows referencing a regular .NET library. It is still buggy: VS may raise build errors (as it does on my machine), but the reference actually works, and code running in a vNext MVC site invokes the compiled .NET dll.

Here’s how:

1. I create and open a new ASP.NET vNext web application. This time I'm using an Empty project and adding the bare minimum of code required to run an MVC app.
[screenshot: class lib1]

project.json: Add a dependency to “Microsoft.AspNet.Mvc”.

{
    "webroot": "wwwroot",
    "version": "1.0.0-*",
    "exclude": [
        "wwwroot"
    ],
    "packExclude": [
        "node_modules",
        "bower_components",
        "**.kproj",
        "**.user",
        "**.vspscc"
    ],
    "dependencies": {
		"Microsoft.AspNet.Server.IIS": "1.0.0-beta2",
		"Microsoft.AspNet.Mvc": "6.0.0-beta2"
    },
    "frameworks" : {
        "aspnet50" : { },
        "aspnetcore50" : { }
    }
}

Startup.cs: Add configuration code and a basic HomeController like so:

using Microsoft.AspNet.Builder;
using Microsoft.Framework.DependencyInjection;

namespace WebApplication3
{
    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();
        }

        public void Configure(IApplicationBuilder app)
        {
            app.UseMvc();
        }
    }

    public class HomeController
    {
        public string Index()
        {
            return "hello";
        }
    }
}

Running the code now “as is” should show a “hello” string in the web browser.

2. Add a vNext class library (this isn't mandatory for referencing a regular .NET lib, but I do it anyway for the sake of the demo).
[screenshot: class lib2]

The class lib has a single class like so:

namespace ClassLibrary1
{
    public class vNextLib
    {
        public static string GetString()
        {
            return "hello from vNext lib";
        }
    }
}

It is now possible to "add reference" to this new ClassLibrary1 as usual, or simply modify the project.json like so (partial source listed below):

	"dependencies": {
		"Microsoft.AspNet.Server.IIS": "1.0.0-beta2",
		"Microsoft.AspNet.Mvc": "6.0.0-beta2",
		"ClassLibrary1": ""
	},

Finally change the calling ASP.NET vNext web app to call the library and display the string (Startup.cs change):

    public class HomeController
    {
        public string Index()
        {
            return ClassLibrary1.vNextLib.GetString();
        }
    }

At this point I recommend running with "Start without debugging" (Ctrl+F5). If you don't, any code change will stop IIS Express and you'll have to F5 again from VS. You can do that, but it makes it harder to take advantage of the on-the-fly compilation.

Note the iisexpress process ID: 2508.
[screenshot: class lib3]

Now changing the ClassLibrary1 code to return "hello from vNext lib 2" and saving causes iisexpress to restart (note the different process ID). A simple browser refresh shows the result of the change; no build of the class library was required:
[screenshot: class lib4]

3. Now create another class library, but this time a regular .NET library rather than a vNext library.
[screenshot: class lib5]

The code I use is very similar:

namespace ClassLibrary2
{
    public class RegLib
    {
        public static string GetString()
        {
            return "hello from .NET lib";
        }
    }
}

This time you can't just edit project.json directly as before, because the .NET library isn't a "dependency" the way the vNext library is. You must first add a reference the "old-fashioned way" (VS 2015 CTP 5, remember?):
[screenshot: class lib6]

This automates several things:

  • It "wraps" the .NET dll with a vNext lib wrapper, which can be viewed in a "wrap" folder that is a sibling of "src". It effectively performs what the "k wrap" command from the GitHub pull mentioned earlier used to do:
    [screenshot: class lib7]
  • It adds the “wrap” folder to the “sources” entry in the global.json. If you recall from the previous post, global.json references source folders for the vNext solution so this is a clever way to include references to the class lib:
    {
      "sources": [
        "src",
        "test",
        "wrap"
      ]
    }
    

You now have a "ghost" vNext lib wrapping the real .NET code, but this isn't enough. You need to add a dependency to the project.json, just like with a regular vNext lib (you may have to build your .NET dll first before ClassLibrary2 becomes available):

	"dependencies": {
		"Microsoft.AspNet.Server.IIS": "1.0.0-beta2",
		"Microsoft.AspNet.Mvc": "6.0.0-beta2",
		"ClassLibrary1": "",
		"ClassLibrary2": ""
	},

Now changing the Startup.cs code:

    public class HomeController
    {
        public string Index()
        {
            //return ClassLibrary1.vNextLib.GetString();
            return ClassLibrary2.RegLib.GetString();
        }
    }

It seems that in VS2015 CTP 5 this is buggy: building results in an error complaining about a missing ClassLibrary2:

Error CS0103: The name 'ClassLibrary2' does not exist in the current context.

If you ignore the build error and run anyway using Ctrl+F5, it will work:
[screenshot: class lib9]

As expected, changing the regular .NET code and saving has no effect until you recompile it.

Why is referencing a regular .NET lib so important? I can think of several reasons:

  1. Migrating a .NET web app to vNext. You may want to use vNext for the ASP.NET tier and keep your older business logic and data layers as regular .NET assemblies until you decide to migrate them.
  2. Security: if you deploy a vNext app that is not precompiled, a hacker might change your code and vNext will happily compile it on-the-fly. You may claim that you should simply never deploy non-precompiled apps, but I think differently: I see it as a great advantage to be able to make changes on a server using Notepad, in order to apply a quick fix or debug an issue. The downside is the potential for hacking. I'm not sure whether MS will allow a single vNext class lib to be precompiled while the rest of the site is deployed with its source code. If not, perhaps the solution is to use a precompiled regular .NET class library for sensitive code, deployed alongside a non-precompiled vNext app.
  3. Referencing a 3rd party .NET library which does not have a vNext package might be useful.

 

Summary

I assume Microsoft will spend more time making this wrapping mechanism transparent and flawless. I see no reason why adding the dependency on the wrapping vNext lib isn't simply part of adding a reference to a .NET lib.

A good reference to vNext Class Libraries can be found in the video session of Scott Hanselman and David Fowler: http://channel9.msdn.com/Events/TechEd/NorthAmerica/2014/DEV-B411.

 
First steps with ASP.NET vNext

This is a big topic, and I have had trouble writing about it because it is so big, so I decided to split it into several blog posts. This first post is divided into several parts: intro, cool features, getting started and the K* world.

Disclaimer: much of what is written here is based on beta1 and then beta2, which was released while I was writing. A lot of it is bound to change, including the names of the various pieces shown here.

Introduction
ASP.NET 5, a.k.a. vNext (a temporary name), is a major change to ASP.NET programming. This isn't just an upgrade of several .NET assemblies; it is also a conceptual change. There are quite a few resources available on the internet detailing the vNext changes in depth, so instead of repeating them here I will settle for providing several references:

Cool vNext features
There are quite a few cool features but I find the following two to be most significant:

Cross Platform
The ability to run ASP.NET on Linux or Mac is a long-awaited revolution (did someone just shout Mono?). Time and again I encountered customers who were opposed to using Windows-based servers just because they are Microsoft's. The option to develop .NET on a Windows machine and run it on a non-Windows server is something I really look forward to. For Windows-based servers, IIS is supported but not required; your app can also host itself.

Yes, I am well aware of the Mono project and that it has existed for years. But it makes a world of difference to have Microsoft officially support this capability.

On-the-fly code compilation
Up until now this feature was available only to Web Site development (File->New->Web Site). In a Web Application (File->New->Project->Web) you can change a cshtml/aspx file, refresh the browser and watch the changes, but in a Web Site you can also change the "code behind" files (e.g. aspx.cs), handlers, App_Code, resx files, web services, global.asax and basically everything that isn't a compiled DLL. This allows you to make changes in a deployed environment, which is a very important ability: you can change the code using your preferred Notepad app without requiring Visual Studio and a compiler. Why is this important? Here are several examples: a) you can insert debug messages to understand weird behavior on a production server; b) potentially fix errors on the spot without having to redeploy the entire app or a hotfix; or c) simply change a resource file. Time and again I used this ability on different servers for various purposes. In fact, to me this "on the fly" compilation was probably the most significant capability that made me choose Web Site over Web Application time and again. But that also means Web Forms over MVC.

On a side note, to be honest, I am still not convinced that MVC is a better paradigm than Web Forms. I also prefer Web Forms because of the great PageMethods/WebMethods js client proxies. Unfortunately, it is clear that Web Forms will become more and more obsolete as Microsoft pushes MVC.

vNext works with MVC, not with Web Forms. Web Forms continues to be supported for now, but it will not enjoy the vNext features. Still, I was thrilled to hear and see that vNext allows on-the-fly compilation too. Not only can you change Controllers and Models and run your software, you can also change vNext class libraries using your favorite Notepad, and that is something you cannot do in a Web Site: you cannot change a compiled dll using Notepad! This is a very cool feature indeed, although it does make you wonder how you are supposed to protect your deployed code from hackers who may want to change your licensing mechanism or bypass your security. When I asked Scott Hunter about this, he replied that there is a pre-compilation module and that it will be possible to precompile the entire site. I'm not sure that means you can precompile a single assembly. Perhaps the way to overcome this will be to reference a regular class library from vNext (it is doable: you can "wrap" a regular assembly as a vNext assembly and use it from a vNext app, as I will demonstrate in the next blog post).

However, there is currently a difference in the on-the-fly compilation in favor of Web Site over vNext: in vNext, every change to the code (excluding cshtml) will trigger a reset of the host. For example, changing Controller code while running in IIS Express will not only restart the App Domain but will actually terminate the process and recreate a different one. This means losing any state your application may have had (statics, caching, session etc.). With a Web Site you can change quite a few files without losing your state (App_Code or resx files will restart the App Domain, but almost any other file change, such as ashx/asmx files that have no code-behind, or aspx.cs/ascx.cs, will work great). So making changes during development does not lose your state, and making changes on a production server does not necessarily mean that all your users get kicked out or lose their session state. The current behavior in vNext is different and limited. I emailed this to Scott Hunter and he replied that if they do decide to support this, it'll most likely be post RTM. I sure hope this is on their TODO list.

Other cool features
There are several other features which are also important. Using vNext you can have side-by-side .NET runtimes. Imagine the following scenario: you code an application using vNext and deploy it. Several months later you develop another app using vNext "version 2", and you need to run both apps on the same server. Today you cannot do that safely, because all deployed apps use the .NET version installed on the machine; an app developed to run on .NET 4 might break if you install .NET 4.5 on the same server to support a different app. With vNext you should be able to deploy your app together with the .NET runtime assemblies it was designed to run with. Moreover, vNext supports a "core set" of .NET dlls for the server. So not only do you avoid installing irrelevant Windows Updates (e.g. hotfixes for WPF), but the size of the deployed dlls is greatly reduced, which makes it feasible to deploy them with your app.

One more feature I find interesting is the ASP.NET Console Application. What!? An ASP.NET Console Application!? Basically this means writing a Console application that has no exe and no build step. The same vNext mechanism that compiles your vNext MVC app on-the-fly will be the one compiling and running your Console app. Eventually, when released, vNext will perhaps not be categorized as an ASP.NET-only component. vNext (or whatever it is eventually named) will probably be categorized as a "new .NET platform", capable of running MVC, Console apps or whatever else Microsoft plans to release on top of the on-the-fly, no-build platform. Although I have not tried it, I assume you can also run a vNext Console app on Linux or Mac.

[screenshot: console project]
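For illustration, the entry point of such a console app is just a Program class in a .cs file next to a project.json. A minimal sketch (assuming the beta2 runtime and the 'k run' command described later in this post) would be:

using System;

public class Program
{
    // There is no exe and no build step: the runtime locates this entry point
    // and compiles the code on the fly when the app is run (e.g. with 'k run').
    public void Main(string[] args)
    {
        Console.WriteLine("Hello from a vNext console app");
    }
}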


Getting started

  1. Download and install VS 2015 Preview build (http://www.visualstudio.com/en-us/downloads/visual-studio-2015-downloads-vs.aspx).
  2. Create a new vNext Web Application using File->New->Project->Web->ASP.NET Web Application. Select "ASP.NET 5 Starter Web" with the vNext icon.
  3. After the starter project opens, notice the “run” menu (F5). It should point to IIS Express by default. Run using F5.

If all goes well, the demo website will be launched using IIS Express. Going back to VS, there are several noticeable vNext changes in the Solution Explorer:

  • global.json: this is a solution-level config file. Currently it holds references to source and test folders, and I assume further solution-wide configuration will be placed in this file. One usage would be to specify external source files and packages. Taking Scott Hanselman's example: you can download the MVC source in order to fix a bug and point to it locally instead of using the official release, at least until Microsoft fixes it. Personally I see myself using it to debug and step through .NET code. I have had more than enough situations struggling to understand under-the-hood behavior by disassembling .NET dlls and trying to figure out what I was doing wrong. As the ability to download the released .NET PDBs somehow never worked well for me, downloading the actual source code and debugging it locally seems useful. Another usage of this file is for wrapped .NET class libraries (beta2 CTP 5), which I will describe in the next blog post.
  • project.json: this somewhat resembles the web.config. It contains “different kinds” of configurations, commands, dependencies etc. Dependencies are basically “packages”: NuGet packages or your very own custom vNext class libraries.
  • Static files of the website reside under wwwroot. This way there is a clean separation between the website's static files and the actual MVC code. (You can change wwwroot to a different root.)
  • Startup.cs: this is the entry point in vNext. If you place breakpoints in the different methods in this file you will see that when the website starts, this is indeed the starting point. Looking at the Startup.cs code of the starter project, you may notice many things we are used to taking for granted, such as identity authentication, static files and MVC itself. In vNext we are supposed to "opt in" to the services we want, giving us full control of the http pipeline.

            // Add static files to the request pipeline.
            app.UseStaticFiles();

            // Add cookie-based authentication to the request pipeline.
            app.UseIdentity();

            // Add MVC to the request pipeline.
            app.UseMvc(routes =>
            {
                routes.MapRoute(
                    name: "default",
                    template: "{controller}/{action}/{id?}",
                    defaults: new { controller = "Home", action = "Index" });

                // Uncomment the following line to add a route for porting Web API 2 controllers.
                // routes.MapWebApiRoute("DefaultApi", "api/{controller}/{id?}");
            });

The K* world

An ASP.NET vNext application is not supposed to be bound to IIS or IIS Express as we are used to. In fact, it can be self-hosted and, if I understood correctly, it should also run well on other platforms that are compatible with OWIN. It basically means that IIS is no longer a mandatory requirement for running ASP.NET applications; any web server compatible with OWIN should be able to run a vNext application! Quoting the OWIN definition:

OWIN defines a standard interface between .NET web servers and web applications. The goal of the OWIN interface is to decouple server and application

A quick demonstration of this capability is to change the "run" menu from the default IIS Express to "web" (in the starter project) and then run the web site (this is available in the latest update of the VS 2015 preview, CTP 5).

[screenshot: run menu]

Notice that the running process is shown within a console window. It is the klr.exe and it is located within C:\Users\evolpin\.kre\packages\KRE-CLR-x86.1.0.0-beta2\bin. To understand what happened here we need to further understand how vNext works and clarify the meaning of KRE, KVM and K.

As previously mentioned, the vNext world will have the capability to run side-by-side. It is also open source, so you can download it and make changes. This means that development will change from what we are used to. Today when you start development you know that you're usually bound to a specific version of the framework installed on your dev machine (e.g. .NET 4.5.1). In the vNext world you may have multiple .NET runtimes for different projects running on the same machine. Development could become an experience that allows you to work alongside the latest .NET builds. Microsoft is moving away from today's situation, in which you have to wait for the next .NET update to get the latest bug fixes or features, to one where you can download in-development code bits or the latest features without having to wait for an official release.

Getting back to the version of the framework, the KRE (K Runtime Environment) basically is a “container” of the .NET packages and assemblies of the runtime. It is the actual .NET CLR and packages that you will be running your code with (and again, you can target different versions of the KRE on the same server or dev machine).

Now that we more or less understand what the KRE is and that you may have different KRE versions, it is time to get acquainted with a command-line utility called KVM (K Version Manager). This utility assists in setting up and configuring the KRE runtime that you will use: it is used to set up multiple KRE runtimes, switch between them, etc. Currently, in the beta stages, you need to download KVM manually, but I assume it will be deployed with a future VS release.

To download and install it, go to: https://github.com/aspnet/home#install-the-k-version-manager-kvm. There is a Windows PowerShell command line for installing it; just copy-paste it into a cmd window and within a short time KVM will be available for you to use.

[screenshot: kvm-install]

If you open the highlighted folder you will see the KVM scripts. From now on you should be able to use KVM from any directory, as it is in the PATH. If you go to the parent folder you can see aliases (KREs may have aliases) and packages, which contain the different KRE versions you have locally. If you open <your profile>\.kre\packages\KRE-CoreCLR-x86.1.0.0-beta1\bin (it will be different if you have a different beta installed), you will see all the .NET pieces of the Cloud Optimized version. Those 13MB are the .NET used to run your cloud-optimized server code (see Scott Hunter and Scott Hanselman's video).
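To put the folder structure in context, the layout under the profile folder is roughly as follows (folder names approximate; versions will differ depending on what you have installed):

C:\Users\<profile>\.kre\
    alias\                                  (KRE aliases; exact folder name may differ)
    packages\
        KRE-CLR-x86.1.0.0-beta2\bin\        (k.cmd, klr.exe and the full CLR flavor)
        KRE-CoreCLR-x86.1.0.0-beta1\bin\    (the ~13MB cloud-optimized CoreCLR flavor)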

Re-open a cmd window (to allow the PATH changes to take effect). If you type ‘kvm’ and press enter you’ll see a list of kvm commands. An important kvm command is ‘kvm list‘. It will list the installed KRE versions that you have ready for your account:

[screenshot: kvm-list]

As you can see there are 4 runtime versions installed for my user account so far, including 32/64 versions and CLR and CoreCLR. To make a particular runtime active you need to ‘kvm use‘ it. For example: ‘kvm use 1.0.0-beta1 -r CoreCLR’ will mark it as Active:

[screenshot: kvm-list2]

As you can see, 'kvm use' changes the PATH environment variable to point to the Active KRE. We need this PATH change so that we can use the 'k' batch file that exists in the targeted KRE folder. The k batch file allows you to perform several things within the currently active KRE environment (see below). In short, this is one way to switch between different KRE versions on the same machine for the current user account (side-by-side versioning).

Just to make things a little clearer, I went to the KRE folders (C:\Users\evolpin\.kre\packages on my machine and account) and found the k.cmd batch file in each of them. I edited a couple of them: x86 CLR and CoreCLR. In each cmd file I placed an “echo” with the version and then used the ‘kvm use’ to switch between them and run ‘k’ per active env. This is what it looks like:

[screenshot: kvm-list3]

You can upgrade the KRE versions by using 'kvm upgrade'. Note that 'kvm upgrade' does not upgrade all KREs at once, so you may have to add some command-line parameters to upgrade specific KRE versions. In the example below, I have requested to upgrade the CoreCLR KRE:

[screenshot: kvm upgrade]

After upgrading the x86 CLR and CoreCLR I found two new folders: C:\Users\evolpin\.kre\packages\KRE-CLR-x86.1.0.0-beta2 and C:\Users\evolpin\.kre\packages\KRE-CoreCLR-x86.1.0.0-beta2 as expected.

OK, so why did we go through all this trouble? As mentioned earlier, each KRE comes with the 'k' batch file (located, for example, in: C:\Users\evolpin\.kre\packages\KRE-CoreCLR-x86.1.0.0-beta1\bin). 'k' is used for various tasks such as running vNext Console apps or running the self-hosted environment. Now is a good time to go back and remember that the website was last run not by IIS Express but by switching to "web" and running a self-hosted site using the klr.exe process (remember?). If we open the project.json file of the Starter project we can observe the "web" command in the "commands" section:


"web": "Microsoft.AspNet.Hosting --server Microsoft.AspNet.Server.WebListener --server.urls http://localhost:5000",

This basically means that when we run the "web" command, the ASP.NET host within Microsoft.AspNet.Hosting will launch a listener on port 5000. From the command line we can use 'k web' to instruct the currently active KRE to run the "web" command like so (you must be in the same folder as project.json or an exception will be thrown):

k web

So now we know what happened in VS when we switched to “web” in the “run” menu earlier: VS was actually running the KRE using the “web” command similar to typing ‘k web’ from the command line.

To summarize my understanding so far:

  • KRE is the current .NET runtime being used.
  • KVM is the utility which manages the KREs for my account, allowing me to upgrade or switch between the active KRE.
  • K is a batch file which needs to be invoked in the folder of the app allowing me to run vNext commands. project.json needs to be available in order for ‘k’ to know what to do. K is relevant to the active KRE.
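Putting the pieces together, a typical command-line session (illustrative paths and versions) looks like this:

kvm list
kvm use 1.0.0-beta2 -r CLR
cd C:\projects\WebApplication3\src\WebApplication3
k web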

Conclusions

I see vNext as a major leap and I think MS is on the right path although there’s still a long way to go. I recommend going through the videos specified in the beginning of this blog post, especially the two videos with Scott Hanselman.

In my next post I plan to discuss vNext and class libraries.

 

 

CSRF and PageMethods / WebMethods

If you’re interested in the code shown in this post, right-click here, click Save As and rename to zip.

In short, a Cross-Site Request Forgery (CSRF) attack is one that uses a malicious website to send requests to a target website that the user is logged into. For example, the user is logged in to a bank in one browser tab and uses a second tab to view a different (malicious) website, reached via email or a social network. The malicious website invokes actions on the target website, exploiting the fact that the user is logged into it in the first tab. Examples of such attacks can be found on the internet (see Wikipedia).

CSRF prevention is quite demanding. If you follow the Cross-Site Request Forgery (CSRF) Prevention Cheat Sheet you'll notice that the general recommendation is to use a "Synchronizer Token Pattern". Other prevention techniques are listed as well, along with their disadvantages. The Synchronizer Token Pattern requires that requests carry an anti-forgery token that is validated on the server side; a missing or invalid token results in the failure of the request.

ASP.NET WebForms are supposed to prevent attacks on two levels: "full postback" (which you can read here how to accomplish) and Ajax calls. According to a post made by Scott Gu several years ago, ASP.NET Ajax web methods are CSRF-safe because they handle only POST by default (which is known today to be an insufficient CSRF prevention technique) and because they require a content type header of application/json, which is not added by the browser when using html element tags for such an attack. It is quite possible that I am missing something, but in my tests I found this claim to be incorrect. I had no problem invoking a web method from a different website using GET from a script tag. Unfortunately ASP.NET neither detected nor raised any problem doing so (as will be shown below).

Therefore I was looking into adding a Synchronizer Token Pattern to the requests. In order not to add a token argument to every server-side method, one technique is to add the CSRF token to the request headers. There are several advantages to this technique: you don't need to specify a token argument in your server and client method calls, and more importantly, you do not need to modify your existing web methods. If you're using MVC and jQuery Ajax you can achieve this quite easily as shown here, or you can follow this guide. However, if you are using PageMethods/WebMethods, the Synchronizer Token Pattern can prove more difficult, as you'll need to intercept the http request to add that header.
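For comparison, with jQuery Ajax the header can be attached in one place for all requests. A minimal sketch, assuming the anti-forgery token has already been rendered into a js variable named csrfToken (the variable name is illustrative):

$.ajaxSetup({
    beforeSend: function (xhr) {
        // attach the anti-forgery token to every outgoing jQuery Ajax request
        xhr.setRequestHeader('RequestVerificationToken', csrfToken);
    }
});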

Test websites
I set up a couple of websites for this solution. One website simulates the bank and the other simulates an attacker.
The bank website has PageMethods and WebMethods. The reasons I am setting up both a PageMethod and a WebMethod are to demonstrate each, and because the CSRF token is stored in the session, which for WebMethods is not available by default (as opposed to PageMethods).

public partial class _Default : System.Web.UI.Page
{
    [WebMethod]
    public static bool TransferMoney(int fromAccount, int toAccount, int amount)
    {
        // logic

        return true;
    }
}
[System.Web.Script.Services.ScriptService]
public class MyWebService  : System.Web.Services.WebService {

    [WebMethod]
    public bool TransferMoney(int fromAccount, int toAccount, int amount)
    {
        // logic

        return true;
    }
}

Sample bank code for invoking both methods:

<asp:ScriptManager runat="server" EnablePageMethods="true">
    <Services>
        <asp:ServiceReference Path="~/MyWebService.asmx" />
    </Services>
</asp:ScriptManager>
<input type="button" value='WebMethod' onclick='useWebMethod()' />
<input type="button" value='PageMethod' onclick='usePageMethod()' />
<script type="text/javascript">
    function usePageMethod() {
        PageMethods.TransferMoney(123, 456, 789, function (result) {

        });
    }

    function useWebMethod() {
        MyWebService.TransferMoney(123, 456, 789, function (result) {

        });
    }
</script>

This is how it looks:
[screenshot 1]

The web.config allows invoking via GET (note: you don’t have to allow GET; I’m deliberately allowing a GET to demonstrate an easy CSRF attack and how this solution attempts to block such calls):

<configuration>
  <system.web>
    <compilation debug="true" targetFramework="4.0" />
    <webServices>
      <protocols>
        <add name="HttpGet" />
        <add name="HttpPost" />
      </protocols>
    </webServices>
  </system.web>
</configuration>

The attacking website is quite simple and demonstrates a GET attack via the script tag:

<script src='http://localhost:55555/MyWebsite/MyWebService.asmx/TransferMoney?fromAccount=111&toAccount=222&amount=333'></script>
<script src='http://localhost:55555/MyWebsite/Default.aspx/TransferMoney?fromAccount=111&toAccount=222&amount=333'></script>

As you can see, running the attacking website easily calls the bank’s WebMethod:
[screenshot 2]

Prevention
The prevention technique is as follows:

  1. When the page is rendered, generate a unique token which will be inserted into the session.
  2. On the client side, add the token to the request headers.
  3. On the server side, validate the token.

Step 1: Generate a CSRF validation token and store it in the session.

public static class Utils
{
    public static string GenerateToken()
    {
        var token = Guid.NewGuid().ToString();
        HttpContext.Current.Session["RequestVerificationToken"] = token;
        return token;
    }
}
  • Line 5: I used a Guid, but any unique token generation function can be used here.
  • Line 6: Insert the token into the session. Note: you might want to ensure that a Session actually exists.

Note: you might have to “smarten up” this method to handle error pages and internal redirects as you might want to skip token generation in certain circumstances. You may also check if a token already exists prior to generating one.
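For example, a variation that generates the token only once per session (which also helps with the multiple-tabs issue discussed in the addendum below) might look like this; the helper name is of course up to you:

public static string GetOrCreateToken()
{
    // reuse an existing token if one was already generated for this session
    var existing = HttpContext.Current.Session["RequestVerificationToken"] as string;
    if (!string.IsNullOrEmpty(existing))
        return existing;

    var token = Guid.NewGuid().ToString();
    HttpContext.Current.Session["RequestVerificationToken"] = token;
    return token;
}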

Step 2: Add the token to client requests.

<script type="text/javascript">
    // CSRF
    Sys.Net.WebRequestManager.add_invokingRequest(function (sender, networkRequestEventArgs) {
        var request = networkRequestEventArgs.get_webRequest();
        var headers = request.get_headers();
        headers['RequestVerificationToken'] = '<%= Utils.GenerateToken() %>';
    }); 

    function usePageMethod() {
        PageMethods.TransferMoney(123, 456, 789, function (result) {

        });
    }

    function useWebMethod() {
        MyWebService.TransferMoney(123, 456, 789, function (result) {

        });
    }
</script>
  • Line 3: Luckily ASP.NET provides a client side event to intercept the outgoing request.
  • Line 6: Add the token to the request headers. The call to Utils.GenerateToken() executes on the server side, as shown above, and the token is rendered into the client script to be used here.

Step 3: Validate the token on the server side.

To analyze and validate the request on the server side, we can use the Global.asax file and the Application_AcquireRequestState event (which is supposed to have the Session object available by now). You may choose a different location to validate the request.

protected void Application_AcquireRequestState(object sender, EventArgs e)
{
    var context = HttpContext.Current;
    HttpRequest request = context.Request;

    // ensure path exists
    if (string.IsNullOrWhiteSpace(request.PathInfo))
        return;

    if (context.Session == null)
        return;
        
    // get session token
    var sessionToken = context.Session["RequestVerificationToken"] as string;

    // get header token
    var token = request.Headers["RequestVerificationToken"];

    // validate
    if (sessionToken == null || sessionToken != token)
    {
        context.Response.Clear();
        context.Response.StatusCode = 403;
        context.Response.End();
    }
}
  • Line 7: Ensure we’re operating on WebMethods/PageMethods. For urls such as: http://localhost:55555/MyWebsite/Default.aspx/TransferMoney, TransferMoney is the PathInfo.
  • Line 10: We must have the session available to retrieve the token from. You may want to add an exception here too if the session is missing.
  • Line 14: Retrieve the session token.
  • Line 17: Retrieve the client request’s token.
  • Line 20: Token validation.
  • Line 22-24: Decide what to do if the token is invalid. One option would be to return a 403 Forbidden (you can also customize the text or provide a subcode).

When running the bank website now and invoking the PageMethod, you can see the token (at this point the asmx WebMethod doesn't have a Session, so it can't be properly validated). Note that the request ended successfully.
[screenshot 3]

When running the attacking website, note that the PageMethod was blocked, but not the asmx WebMethod.

[screenshot 4]

  • The first TransferMoney is the unblocked asmx WebMethod, as it lacks the Session support vital to retrieving the token.
  • The second TransferMoney was blocked as desired.

Finally we need to add Session support to our asmx WebMethods. Instead of going through all the website's asmx WebMethods and modifying them to require the Session, we can add the session requirement from a single location in our Global.asax file:

private static readonly HashSet<string> allowedPathInfo = new HashSet<string>(StringComparer.InvariantCultureIgnoreCase) { "/js", "/jsdebug" };
protected void Application_BeginRequest(Object sender, EventArgs e)
{
    if (".asmx".Equals(Context.Request.CurrentExecutionFilePathExtension) && !allowedPathInfo.Contains(Context.Request.PathInfo))
        Context.SetSessionStateBehavior(SessionStateBehavior.Required);
}
  • Line 1: Paths that we would like to exclude can be listed here. asmx js and jsdebug paths render the client side proxies and do not need to be validated.
  • Line 4-5: If asmx and not js/jsdebug, add the Session requirement so it becomes available for the token validation later on.

Now running the attacking website we can see that our asmx was also blocked:
[screenshot 5]

Addendum

  • Naturally you can add additional hardening as you see fit.
  • Also important to note is that using the Session with the suggested technique has an issue when working with several tabs on the same website, as each call to Utils.GenerateToken replaces the token in the session. So you might consider also checking the referrer header to decide whether to issue a warning instead of throwing an exception, or simply generating the token on the server side only once (i.e. checking whether the token exists and generating it only if it doesn't, as in the sketch in Step 1).
  • Consider adding a "turn off" switch for the CSRF validation, in case you run into situations where you need to cancel it.
  • Moreover, consider creating an attribute for methods or web services that allows skipping the validation. You never know when these might come in handy.
 

Taking a [passport] photo using your camera and HTML 5 and uploading the result with ASP.NET Ajax

If you just want the sample, right click this link, save as, rename to zip and extract.

[screenshot 7]

You can use your HTML 5 browser to capture video and photos. That is, if your browser supports this feature (at the time of this writing, this example works well on Firefox and Chrome but not IE11).
I followed some good references on the internet on how to do that, but I also needed an implementation that takes a "passport" photo and uploads the result. That is the intention of this post.

Steps:

  1. Capture video and take snapshot.
  2. Display target area.
  3. Crop the photo to a desired size.
  4. Upload the result to the server.

Step 1: Capture video and take snapshot.
This step relies mainly on Eric Bidelman's excellent article. After consideration I decided not to repeat the steps necessary for taking a snapshot using HTML 5, so if you require a detailed explanation please read his article. However, the minimum code for this is pretty straightforward, so consider reading on. What you basically need is a browser that supports the video element and getUserMedia(). Also required is a canvas element for showing a snapshot of the video source.

<!DOCTYPE html>
<html>
<head>
    <title></title>
</head>
<body>
    <video autoplay width="320" height="240"></video>
    <canvas width='320' height='240' style="border:1px solid #d3d3d3;"></canvas>
    <div>
        <input type="button" value="start capture" onclick="startCapture()" />
        <input type="button" value="take snapshot" onclick="takePhoto()" />
        <input type="button" value="stop capture" onclick="stopCapture()" />
    </div>
    <script type="text/javascript">
        var localMediaStream = null;
        var video = document.querySelector('video');
        var canvas = document.querySelector('canvas');

        function takePhoto() {
            if (localMediaStream) {
                var ctx = canvas.getContext('2d');
                ctx.drawImage(video, 0, 0, 320, 240);
            }
        }

        navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia;
        window.URL = window.URL || window.webkitURL;

        function startCapture() {
            navigator.getUserMedia({ video: true }, function (stream) {
                video.src = window.URL.createObjectURL(stream);
                localMediaStream = stream;
            }, function (e) {
                console.log(e);
            });
        }

        function stopCapture() {
            video.pause();
            localMediaStream.stop();
        }
    </script>
</body>
</html>

Several points of interest:

  • Line 7: Video element for showing the captured stream. My camera seems to show a default of 640×480 but here this is set to 320×240 so it will take less space on the browser. Bear this in mind, it’ll be important for later.
  • Line 8: Canvas element for the snapshots. Upon clicking ‘take photo’, the captured stream is rendered to this canvas. Note the canvas size.
  • Line 22: Drawing the snapshot image onto the canvas.
  • Line 26: Consider testing support for getUserMedia.
  • Line 30: Capture video.

The result, after starting a capture and taking a snapshot (video stream is on the left, canvas with snapshot is to the right):
[screenshot 2]

Step 2: Display target area.
As the camera takes pictures in “landscape”, we will attempt to crop the image to the desired portrait dimensions. Therefore the idea is to place a div on top of the video element to mark the target area, where the head is to be placed.

[screenshot 4]

The code:

<!DOCTYPE html>
<html>
<head>
    <title></title>
    <style type="text/css">
        .container {
            width: 320px;
            height: 240px;
            position: relative;
            border: 1px solid #d3d3d3;
            float: left;
        }

        .container video {
            width: 100%;
            height: 100%;
            position: absolute;
        }

        .container .photoArea {
            border: 2px dashed white;
            width: 140px;
            height: 190px;
            position: relative;
            margin: 0 auto;
            top: 40px;
        }

        canvas {
            float: left;
        }

        .controls {
            clear: both;
        }
    </style>
</head>
<body>
    <div class="container">
        <video autoplay></video>
        <div class="photoArea"></div>
    </div>
    <canvas width='320' height='240' style="border: 1px solid #d3d3d3;"></canvas>
    <div class="controls">
        <input type="button" value="start capture" onclick="startCapture()" />
        <input type="button" value="take snapshot" onclick="takePhoto()" />
        <input type="button" value="stop capture" onclick="stopCapture()" />
    </div>
    <script type="text/javascript">
        var localMediaStream = null;
        var video = document.querySelector('video');
        var canvas = document.querySelector('canvas');

        function takePhoto() {
            if (localMediaStream) {
                var ctx = canvas.getContext('2d');
                ctx.drawImage(video, 0, 0, 320, 240);
            }
        }

        navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia;
        window.URL = window.URL || window.webkitURL;

        function startCapture() {
            navigator.getUserMedia({ video: true }, function (stream) {
                video.src = window.URL.createObjectURL(stream);
                localMediaStream = stream;
            }, function (e) {
                console.log(e);
            });
        }

        function stopCapture() {
            video.pause();
            localMediaStream.stop();
        }
    </script>
</body>
</html>

As you can see, the code was modified to place the dashed area on top of the video. Points of interest:

  • Lines 20-27: note the dimensions of the target area. Also note that the target area is positioned horizontally automatically using ‘margin’.
  • Line 41: The dashed area.

Step 3: Crop picture to desired size.
Luckily the drawImage() method can not only resize a picture but also crop it. A good reference on drawImage is here, and a very good example is here. Still, this is tricky, as this isn't an existing image as shown in the example, but a captured video source which is originally not 320×240 but 640×480. It took me some time to understand that and to figure out that it means the x, y, width and height of the source arguments should be doubled (and if this understanding is incorrect I would appreciate it if someone could comment and provide the correct explanation).

As cropping might be a confusing business, my suggestion is to first “crop without cropping”. This means invoking drawImage() to crop, but ensuring that the target is identical to the source in dimensions.

function takePhoto() {
    if (localMediaStream) {
        var ctx = canvas.getContext('2d');
        // original draw image
        //ctx.drawImage(video, 0, 0, 320, 240); 

        // crop without cropping: source args are doubled; 
        // target args are the expected dimensions
        // the result is identical to the previous drawImage
        ctx.drawImage(video, 0, 0, 640, 480, 0, 0, 320, 240);
    }
}

The result:
[screenshot 5]

Let’s review the arguments (skipping the first ‘video’ argument):

  • The first pair are the x,y of the starting points of the source.
  • The second pair are the width and height of the source.
  • The third pair are the x,y of the starting points of the target canvas (these can be greater than zero, for example if you would like to have some padding).
  • The fourth pair are the width and height of the target canvas, effectively allowing you also to resize the picture.

Now let’s review the dimensions in our case:
[screenshot 6]

In this example the target area is 140×190 and starts at y=40. As the width of the capture area is 320 and the target area is 140, each margin is 90. So basically we should start cropping at x=90.
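Expressed in code, the crop origin can be derived from the layout dimensions (a small sketch using the numbers above):

// derive the crop origin from the layout dimensions
var containerWidth = 320, targetWidth = 140, targetTop = 40;
var cropX = (containerWidth - targetWidth) / 2; // (320 - 140) / 2 = 90
var cropY = targetTop;                          // 40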

But since in the source picture everything is doubled as explained before, the drawImage looks different as the first four arguments are doubled:

function takePhoto() {
    if (localMediaStream) {
        var ctx = canvas.getContext('2d');
        //ctx.drawImage(video, 0, 0, 320, 240); // original draw image
        //ctx.drawImage(video, 0, 0, 640, 480, 0, 0, 320, 240); // entire image

        //instead of using the requested dimensions "as is"
        //ctx.drawImage(video, 90, 40, 140, 190, 0, 0, 140, 190);

        // we double the source args but not the target args
        ctx.drawImage(video, 180, 80, 280, 380, 0, 0, 140, 190);
    }
}

The result:
[screenshot 7]

Step 4: Upload the result to the server.
Finally we would like to upload the cropped result to the server. For this purpose we will take the image from the canvas and set it as a source of an img tag.

<!DOCTYPE html>
<html>
<head>
    <title></title>
    <style type="text/css">
        .container {
            width: 320px;
            height: 240px;
            position: relative;
            border: 1px solid #d3d3d3;
            float: left;
        }

        .container video {
            width: 100%;
            height: 100%;
            position: absolute;
        }

        .container .photoArea {
            border: 2px dashed white;
            width: 140px;
            height: 190px;
            position: relative;
            margin: 0 auto;
            top: 40px;
        }

        canvas, img {
            float: left;
        }

        .controls {
            clear: both;
        }
    </style>
</head>
<body>
    <div class="container">
        <video autoplay></video>
        <div class="photoArea"></div>
    </div>
    <canvas width='140' height='190' style="border: 1px solid #d3d3d3;"></canvas>
    <img width="140" height="190" />
    <div class="controls">
        <input type="button" value="start capture" onclick="startCapture()" />
        <input type="button" value="take snapshot" onclick="takePhoto()" />
        <input type="button" value="stop capture" onclick="stopCapture()" />
    </div>
    <script type="text/javascript">
        var localMediaStream = null;
        var video = document.querySelector('video');
        var canvas = document.querySelector('canvas');

        function takePhoto() {
            if (localMediaStream) {
                var ctx = canvas.getContext('2d');
                //ctx.drawImage(video, 0, 0, 320, 240); // original draw image
                //ctx.drawImage(video, 0, 0, 640, 480, 0, 0, 320, 240); // entire image

                //instead of
                //ctx.drawImage(video, 90, 40, 140, 190, 0, 0, 140, 190);

                // we double the source coordinates
                ctx.drawImage(video, 180, 80, 280, 380, 0, 0, 140, 190);
                document.querySelector('img').src = canvas.toDataURL('image/jpeg');
            }
        }

        navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia;
        window.URL = window.URL || window.webkitURL;

        function startCapture() {
            navigator.getUserMedia({ video: true }, function (stream) {
                video.src = window.URL.createObjectURL(stream);
                localMediaStream = stream;
            }, function (e) {
                console.log(e);
            });
        }

        function stopCapture() {
            video.pause();
            localMediaStream.stop();
        }
    </script>
</body>
</html>
  • Lines 43-44: Note that the canvas has been resized to the desired image size, and the new img element is also resized to those dimensions. If we don’t match them we might see the cropped image stretched or resized not according to the desired dimensions.
  • Line 66: We instruct the canvas to return a jpeg as a source for the image (other image formats are also possible, but this is off topic).

This is how it looks. The video is on the left, the canvas is in the middle and the new img is on the right (it is masked with blue because of the debugger inspection). Note the debugger, which shows that the image source is a base64 string.
[screenshot 8]

Now we can add a button to upload the base64 string to the server. The example uses ASP.NET PageMethods but obviously you can pick whatever is convenient for yourself. The client code:

<!DOCTYPE html>
<html>
<head>
    <title></title>
    <style type="text/css">
        .container {
            width: 320px;
            height: 240px;
            position: relative;
            border: 1px solid #d3d3d3;
            float: left;
        }

        .container video {
            width: 100%;
            height: 100%;
            position: absolute;
        }

        .container .photoArea {
            border: 2px dashed white;
            width: 140px;
            height: 190px;
            position: relative;
            margin: 0 auto;
            top: 40px;
        }

        canvas, img {
            float: left;
        }

        .controls {
            clear: both;
        }
    </style>
</head>
<body>
    <form runat="server">
        <asp:ScriptManager runat="server" EnablePageMethods="true"></asp:ScriptManager>
    </form>
    <div class="container">
        <video autoplay></video>
        <div class="photoArea"></div>
    </div>
    <canvas width='140' height='190' style="border: 1px solid #d3d3d3;"></canvas>
    <img width="140" height="190" />
    <div class="controls">
        <input type="button" value="start capture" onclick="startCapture()" />
        <input type="button" value="take snapshot" onclick="takePhoto()" />
        <input type="button" value="stop capture" onclick="stopCapture()" />
        <input type="button" value="upload" onclick="upload()" />
    </div>
    <script type="text/javascript">
        var localMediaStream = null;
        var video = document.querySelector('video');
        var canvas = document.querySelector('canvas');

        function upload() {
            var base64 = document.querySelector('img').src;
            PageMethods.Upload(base64,
                function () { /* TODO: do something for success */ },
                function (e) { console.log(e); }
            );
        }

        function takePhoto() {
            if (localMediaStream) {
                var ctx = canvas.getContext('2d');
                //ctx.drawImage(video, 0, 0, 320, 240); // original draw image
                //ctx.drawImage(video, 0, 0, 640, 480, 0, 0, 320, 240); // entire image

                //instead of
                //ctx.drawImage(video, 90, 40, 140, 190, 0, 0, 140, 190);

                // we double the source coordinates
                ctx.drawImage(video, 180, 80, 280, 380, 0, 0, 140, 190);
                document.querySelector('img').src = canvas.toDataURL('image/jpeg');
            }
        }

        navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia;
        window.URL = window.URL || window.webkitURL;

        function startCapture() {
            navigator.getUserMedia({ video: true }, function (stream) {
                video.src = window.URL.createObjectURL(stream);
                localMediaStream = stream;
            }, function (e) {
                console.log(e);
            });
        }

        function stopCapture() {
            video.pause();
            localMediaStream.stop();
        }
    </script>
</body>
</html>
  • Line 40: PageMethods support.
  • Line 60-61: Get the base64 string from the image and call the proxy Upload method.

The server side:

public partial class _Default : System.Web.UI.Page
{
    [WebMethod]
    public static void Upload(string base64)
    {
        var parts = base64.Split(new char[] { ',' }, 2);
        var bytes = Convert.FromBase64String(parts[1]);
        var path = HttpContext.Current.Server.MapPath(string.Format("~/{0}.jpg", DateTime.Now.Ticks));
        System.IO.File.WriteAllBytes(path, bytes);
    }
}
  • Line 6: As can be seen in the client debugger above, the base64 has a prefix. So we parse the string on the server side into two sections, separating the prefix metadata from the image data.
  • Line 7: Into bytes.
  • Lines 8-9: Save to a local photo. Replace with whatever you need, such as storing in the DB.

Addendum
There are several considerations you should think of:

  • What happens if the camera provides a source of different dimensions? (See the sketch after this list.)
  • Browsers that do not support these capabilities.
  • The quality of the image. You can use other formats and get a better photo quality (at the price of a larger byte size).
  • You might be required to clear the 'src' attribute of the video and/or img elements if you need to reset them before taking a new photo, ensuring a "fresh state" for these elements.
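Regarding the first and third points, a possible variation of takePhoto() derives the scale factors from the actual stream size (via the video element's videoWidth/videoHeight) instead of hard-coding the doubling, and passes a jpeg quality argument. A sketch:

function takePhoto() {
    if (localMediaStream) {
        var ctx = canvas.getContext('2d');
        // scale factors between the intrinsic stream size and the displayed 320x240 element
        var scaleX = video.videoWidth / 320;
        var scaleY = video.videoHeight / 240;
        // crop the 140x190 target area starting at (90,40) in display coordinates
        ctx.drawImage(video,
            90 * scaleX, 40 * scaleY,
            140 * scaleX, 190 * scaleY,
            0, 0, 140, 190);
        // the second argument (0..1) trades jpeg quality against size
        document.querySelector('img').src = canvas.toDataURL('image/jpeg', 0.92);
    }
}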
 

The curious case of System.Timers.Timer

Basically there are 3 Timer classes in .NET: the WinForms timer, System.Timers.Timer and System.Threading.Timer. For a non-WinForms app your choice is between the latter two. For years I have been working with System.Timers.Timer, simply because I preferred its object model over the alternative.

The pattern I choose for working with the Timer is always AutoReset=false. The reason is that, except for rare cases, I do not need the same operation to be carried out concurrently, so I do not require a re-entrant timer. This is what my usual timer code looks like:

static void Main(string[] args)
{
    System.Timers.Timer t = new System.Timers.Timer();
    t.AutoReset = false;
    t.Interval = 100;
    t.Elapsed += delegate
    {
        try
        {
            DoSomething();
        }
        catch (Exception ex)
        {
            // TODO log exception
        }
        finally
        {
            t.Start();
        }
    };
    t.Start();

    Console.ReadKey();
}

private static void DoSomething()
{
    Thread.Sleep(1000);
}
  • Line 21: First Start of timer.
  • Line 18: When the logic is done, the Elapsed restarts the timer.

The try-catch-finally structure ensures that no exceptions will crash the process and that the timer will restart regardless of problems. This works well and as expected. However, recently I needed to run the code immediately the first time and not wait for the first interval to fire the Elapsed event. Usually that's not a problem either, because you can call DoSomething before the first timer Start, like so:

static void Main(string[] args)
{
    System.Timers.Timer t = new System.Timers.Timer();
    t.AutoReset = false;
    t.Interval = 100;
    t.Elapsed += delegate
    {
        try
        {
            DoSomething();
        }
        catch (Exception ex)
        {
            // TODO log exception
        }
        finally
        {
            t.Start();
        }
    };

    DoSomething();
    
    t.Start();

    Console.ReadKey();
}

By calling DoSomething() before the first timer Start, we run the code once and only then turn the timer on. Thus the business code runs immediately, which is great. But the problem here is that the first invocation of DoSomething() is blocking: if it takes too long, the remaining code in a real-world app will be blocked. So in this case I needed the first DoSomething() invocation to run in parallel. "No problem," I thought: let's make the first interval 1ms so that DoSomething() runs on a separate thread [almost] immediately, and then change the timer interval within the Elapsed event handler back to the desired 100ms:

static void Main(string[] args)
{
    System.Timers.Timer t = new System.Timers.Timer();
    t.AutoReset = false;
    t.Interval = 1;
    t.Elapsed += delegate
    {
        try
        {
            t.Interval = 100;

            DoSomething();
        }
        catch (Exception ex)
        {
            // TODO log exception
        }
        finally
        {
            t.Start();
        }
    };

    t.Start();

    Console.ReadKey();
}
  • t.Interval = 1: the first interval is set to 1ms to allow an immediate first invocation of DoSomething() on a separate, non-blocking thread.
  • t.Interval = 100 inside the Elapsed handler: the interval is changed back to the desired 100ms as before.

I thought that this solved it: a first run after 1ms, followed by non re-entrant intervals of 100ms. Luckily I had logs in DoSomething() that proved me wrong. It turned out that the Elapsed event handler did in fact fire more than once at a time! I added a reference counter-like mechanism to demonstrate:

static int refCount=0;
static void Main(string[] args)
{
    System.Timers.Timer t = new System.Timers.Timer();
    t.AutoReset = false;
    t.Interval = 1;
    t.Elapsed += delegate
    {
        Interlocked.Increment(ref refCount);
        try
        {
            t.Interval = 100;

            DoSomething();
        }
        catch (Exception ex)
        {
            // TODO log exception
        }
        finally
        {
            Interlocked.Decrement(ref refCount);
            t.Start();
        }
    };

    t.Start();

    Console.ReadKey();
}

private static void DoSomething()
{
    Console.WriteLine(refCount);
    Thread.Sleep(1000);
}

As can be seen below, the reference counter clearly shows that DoSomething() is called concurrently several times.
[screenshot: console output where the refCount printed by DoSomething() climbs above 1]

As a workaround, this behavior can be blocked with a boolean flag: set it to true at the beginning of the Elapsed handler and back to false in the finally clause, and test it at the start of the handler; if it is already true, do not proceed to DoSomething(). But this is ugly. (A sketch of such a guard appears below.)
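A minimal sketch of that guard might look like this (using an int with Interlocked rather than a plain bool so the check is race-free; shown for illustration only, as this is not the approach I ended up using):

static int busy; // 0 = idle, 1 = an Elapsed invocation is currently running

static void Main(string[] args)
{
    System.Timers.Timer t = new System.Timers.Timer();
    t.AutoReset = false;
    t.Interval = 1;
    t.Elapsed += delegate
    {
        // If a previous Elapsed invocation is still running, skip this one.
        // The running invocation will restart the timer in its finally block.
        if (Interlocked.CompareExchange(ref busy, 1, 0) == 1)
            return;

        try
        {
            t.Interval = 100;

            DoSomething();
        }
        catch (Exception ex)
        {
            // TODO log exception
        }
        finally
        {
            Interlocked.Exchange(ref busy, 0);
            t.Start();
        }
    };

    t.Start();

    Console.ReadKey();
}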

In the real-world app things aren’t as clear as in sample code, and I was certain that the problem was with my code. I was absolutely confident that some other activity had instantiated more than one instance of the class containing this timer code, and that I was therefore witnessing multiple timers being instantiated and fired. As this wasn’t expected, I set out to find the other timers and instances causing this, but found none. That made me somewhat happy, because I didn’t expect additional instances, but it also made me frustrated, because the behavior was irrational and contradicted what I knew about timers. Finally (and after several hours of debugging!) I decided to google for it, and after a while I came across a Stack Overflow thread that quoted the following from MSDN (emphasis is mine):

Note If Enabled and AutoReset are both set to false, and the timer has previously been enabled, setting the Interval property causes the Elapsed event to be raised once, as if the Enabled property had been set to true. To set the interval without raising the event, you can temporarily set the AutoReset property to true.

Impossible!

OK, let’s remove the interval change within the Elapsed event handler. And behold, the timer works as expected:
[screenshot: console output showing the timer now firing as expected]

WTF!? Are the writers of this Timer at Microsoft serious?? Is there any logical reason for this behavior?? It sounds like a bug they decided not to fix, documenting the behavior instead as if it were “by design” (if anyone can come up with a good reason for it, please comment).

Solution
Despite the MSDN doc, I wasn’t even going to check whether temporarily setting the AutoReset property to true solves it. That is even uglier than keeping a boolean to test whether Elapsed has already fired and is still running.
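For reference only, the documented workaround would presumably look something like this inside the Elapsed handler (an untested sketch based solely on the quoted note):

// Untested: per the MSDN note, temporarily enabling AutoReset while
// changing the interval should avoid raising the extra Elapsed event.
t.AutoReset = true;
t.Interval = 100;
t.AutoReset = false;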

After considering switching to the Threading timer to see if it would provide the desired behavior, I decided to go with what I consider a more comfortable solution: do not change the Interval at all but set it only once, and simply invoke DoSomething() for the first time on a separate thread:

static int refCount = 0;
static void Main(string[] args)
{
    System.Timers.Timer t = new System.Timers.Timer();
    t.AutoReset = false;
    t.Interval = 100;
    t.Elapsed += delegate
    {
        Interlocked.Increment(ref refCount);
        try
        {
            DoSomething();
        }
        catch (Exception ex)
        {
            // TODO log exception
        }
        finally
        {
            Interlocked.Decrement(ref refCount);
            t.Start();
        }
    };

    Task.Factory.StartNew(() =>
    {
        DoSomething();
        t.Start();
    }, TaskCreationOptions.LongRunning);

    Console.ReadKey();
}

In this solution I simply start a new long-running task, invoke DoSomething() for the first time and only then start the timer for the subsequent invocations. Here’s the result:
[screenshot: console output showing 0 for the first run, then the expected values for the timer-driven runs]
The first run doesn’t use the timer, so the ref counter is 0. The subsequent invocations of the Timer’s Elapsed event set the ref count as expected and run only one at a time.

 
Posted on 25/04/2014 in Software Development

Oracle ODP.NET provider and BLOBs over internet environment

Here’s a problem. If you run your queries in an environment where the DB and the .NET code are geographically close and ping times are short, you are less likely to notice this issue. But when running them from completely different locations you may see very poor performance, up to the point where the application is impossible to work with.

To cut a long story short, after narrowing the problem down, the bad performance turned out to be unique to columns with LOB types. Googling the issue showed that this is due to Oracle’s ODP.NET provider. It seems that, by default, ODP.NET will not fetch LOB data but defer it until it is explicitly requested by the calling application.

Luckily, this behavior can be controlled by setting the InitialLOBFetchSize of the OracleCommand. By default it is set to 0, which means ‘defer’. You can set it to the number of bytes you would like to retrieve, or simply to -1 to fetch the LOB entirely. From the docs:

“By default, InitialLOBFetchSize is set to 0. If the InitialLOBFetchSize property value of the OracleCommand is left as 0, the entire LOB data retrieval is deferred until that data is explicitly requested by the application. If the InitialLOBFetchSize property is set to a nonzero value, the LOB data is immediately fetched up to the number of characters or bytes that the InitialLOBFetchSize property specifies.”

Read more here:

“When InitialLOBFetchSize is set to -1, the entire LOB data is prefetched and stored in the fetch array.”
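Here is a minimal sketch of setting this in code (the connection string, table and column names are made up for the example):

// requires: using Oracle.DataAccess.Client;  (the ODP.NET provider)
using (var conn = new OracleConnection("User Id=scott;Password=tiger;Data Source=MYDB"))
using (var cmd = conn.CreateCommand())
{
    conn.Open();
    cmd.CommandText = "SELECT ID, DOCUMENT_BLOB FROM DOCUMENTS";

    // 0 (the default) defers LOB retrieval; -1 prefetches the entire LOB data.
    cmd.InitialLOBFetchSize = -1;

    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            byte[] blob = (byte[])reader["DOCUMENT_BLOB"];
            // ... use the data
        }
    }
}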

Personally I think that the default should be exactly the opposite. It is the developer’s responsibility to include or exclude LOB columns in the SELECT clause. If the developer attempts a “SELECT *”, he will notice the lag and will have to modify the query.

I also think that it is a shame that these properties must be set in code and cannot be tweaked in the config file.

Additional tips
Here are some additional tips. They do not relate to the obvious advice of fine-tuning your queries or database, but to the amount of data that is passed over the network.

  • There is also an InitialLONGFetchSize property that you can set to -1 to allow prefetching of LONG and LONG RAW data types.
  • If you set the CommandTimeout property to 0, it becomes infinite (that also goes for MySQL, SQL Server and DB2). Take into consideration that setting InitialLOBFetchSize to -1 solves only the prefetch problem; the data might still take a long time to be fetched. Also note that the various documentations do not recommend setting the timeout to infinite, but you may still want to increase it for queries that are supposed to retrieve lots of data (see the sketch after this list).
  • You should also consider changing the Connection Timeout. This is usually done via the connection string; consult http://www.connectionstrings.com for how to do this.
  • Change your queries to retrieve not the entire table but only the explicit columns that you actually need.
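A small sketch combining these tips (the connection string and query are again hypothetical, and the values are only examples):

// 'Connection Timeout' (in seconds) is set via the connection string.
var connStr = "User Id=scott;Password=tiger;Data Source=MYDB;Connection Timeout=60";

using (var conn = new OracleConnection(connStr))
using (var cmd = conn.CreateCommand())
{
    conn.Open();

    // Select only the columns you actually need, not the entire table.
    cmd.CommandText = "SELECT ID, DOCUMENT_BLOB FROM DOCUMENTS WHERE CREATED > :since";

    cmd.CommandTimeout = 300;        // generous but finite; 0 would mean infinite
    cmd.InitialLOBFetchSize = -1;    // prefetch BLOB/CLOB data
    cmd.InitialLONGFetchSize = -1;   // prefetch LONG / LONG RAW data

    // ... add the :since parameter and execute the query
}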
 
Posted on 22/12/2013 in Software Development

First attempt at AngularJS and ASP.NET’s PageMethods & WebMethods

If you need the example, right click this link, Save As and rename to zip.

I made a first attempt at AngularJS and was particularly interested in how I could use it alongside ASP.NET Ajax. The reason is that I am very fond of PageMethods and WebMethods for Ajax, as they are extremely simple and trivial to use. I know that just by writing this short introduction some AngularJS religious fans are shaking their heads in disagreement. That’s OK.

AngularJS seems nowadays like one of the hottest frameworks, so I thought I’d try it out. I did some reading (the AngularJS homepage shows really helpful examples), watched some videos and experimented a bit. My purpose: see if AngularJS can be good as a templating solution. Compared to other solutions, what I really liked about AngularJS was that it is embedded within the html itself. Note: AngularJS is much more than “just templates” (the html is not merely a collection of DOM elements but the “View” in MV* architectures), but that’s off topic. So in comparison, for example, to jsrender, which I’ve been using lately, AngularJS just integrates into the html, allowing you to manipulate it using JavaScript controllers. This is very convenient because your favorite editor will colorize everything as usual. That’s just a short description of one of the advantages; to be fair, I lack the experience to describe the gazillions of advantages of AngularJS. But again, this is off topic.

What I am interested in is something that I will be comfortable working with, that will also be reliable and long lasting. Without going into too much detail, there are many libraries, 3rd-party controls and frameworks that failed to stand up to some or all of these requirements. Sometimes, by the time you figure out that a library is not living up to standards, a lot of your work has already been written using it and it is very difficult or even impossible to back out. Then you have to start working around problems, and this can be a pain. Which is why I would rather not become too dependent on things that might later prove to be a constraint.

Having said that, this might explain why I would rather try to integrate AngularJS with ASP.NET Ajax. The latter is so convenient to me that I’d hate to give it up. Similarly, although jQuery is one major library that I work with daily and is by far a huge success, I’m still using the good ol’ ASP.NET Ajax over jQuery’s Ajax wrappers. I manage to integrate them together with ease, so that’s comfy.

The problem was that after I experimented a little with AngularJS “hello world” samples, I tried to do the most trivial thing: make an Ajax call outside of the AngularJS framework and then apply the results using AngularJS. In short, I wasn’t trying to code “in AngularJS” but to use it for templating only. Using jsrender, I would perform a simple ajax call and, in the callback, use the results and templates, followed by injecting the html onto the page. Very simple. In AngularJS I initially couldn’t find a way to do that. That is, I could not find a way to change the $scope property from outside the controller. Perhaps there are ways to do that, but I failed to make a simple ajax call and set the results as a datasource.

After some time it occurred to me that I was approaching AngularJS the wrong way. I figured that if I can’t call the AngularJS controller from the outside, I need to change the datasource from the inside. So I made the PageMethods ajax call from within the controller, and then it worked as expected.

So the result is as follows. This is my ASP.NET Default.aspx:

<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>

<!DOCTYPE html>
<html>
<head runat="server">
    <title></title>
</head>
<body ng-app>
    <form id="form1" runat="server">
    <asp:ScriptManager runat="server" EnablePageMethods="true">
        <Scripts>
            <asp:ScriptReference Path="http://code.angularjs.org/1.0.8/angular.js" />
            <asp:ScriptReference Path="~/JScript.js" />
        </Scripts>
    </asp:ScriptManager>
    <div ng-controller="TestCtrl" ng-init="init()">
        {{source}}
    </div>
    </form>
</body>
</html>
  • The <%@ Page %> directive: this is a Default.aspx page, because we’re using ASP.NET here.
  • The ng-app attribute on the body element indicates an AngularJS app.
  • EnablePageMethods="true" on the ScriptManager enables PageMethods.
  • The two ScriptReference entries: the AngularJS framework and my own js file containing the AngularJS controller.
  • The div uses my TestCtrl controller (ng-controller) and activates the init() function when the page (view) loads (ng-init).
  • {{source}} is the property whose value will be displayed and changed following the ajax call. It will be initialized in the controller later on.

And the JavaScript AngularJS controller I wrote (JScript.js):

function TestCtrl($scope, $timeout) {
    $scope.source = 10;
    $scope.init = function () {

        PageMethods.Inc($scope.source, function (result) {
            $scope.$apply(function () {
                $scope.source = result;
                $timeout(function () {
                    $scope.init();
                }, 100);
            });
        });

    }
}
  • $scope.source: the source property, initialized to a value of 10.
  • $scope.init: the init() method. This is called by AngularJS because in the html I placed an ng-init on the div.
  • PageMethods.Inc: the ASP.NET Ajax call. It can be any ajax call here: jQuery, ASP.NET Ajax or whatever you prefer. Note that I’m passing the current value of ‘source’ as an argument to the server, which will increment this value.
  • $scope.$apply: in AngularJS you have to call $apply to invoke the AngularJS framework for calls made from outside the framework. Here’s a quote from the docs:

    “$apply() is used to execute an expression in angular from outside of the angular framework. (For example from browser DOM events, setTimeout, XHR or third party libraries). Because we are calling into the angular framework we need to perform proper scope life cycle of exception handling, executing watches.”

    Without $apply this code doesn’t seem to refresh the updated binding onto the html.

  • In the callback, the source property of the controller is changed.
  • The AngularJS $timeout function schedules consecutive, recursive calls to the init() method. The ‘source’ value is expected to increment endlessly.

That’s it. If you run this code you’ll see that it endlessly calls the server using ASP.NET Ajax and updates the AngularJS ‘source’ property as expected.
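For completeness, the server-side page method that PageMethods.Inc calls is not shown above. A minimal sketch of what it might look like in Default.aspx.cs (the Inc name matches the client-side call; the body, which simply increments the value, is an assumption based on the behavior described):

using System.Web.Services;

public partial class _Default : System.Web.UI.Page
{
    // Page methods must be public and static, and require
    // EnablePageMethods="true" on the ScriptManager (as in the markup above).
    [WebMethod]
    public static int Inc(int value)
    {
        return value + 1;
    }
}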

“Philosophy”
Although AngularJS looks very tempting and sexy, what’s important to me is ease of use and not being too dependent. Like so many frameworks that seemed so appealing at one time, AngularJS too has the potential to be looked at as an ancient dinosaur one day. That’s OK. I work with a dinosaur on a daily basis (ASP.NET WebForms). I believe that every developer with several years of experience in any environment can provide examples of libraries and frameworks that were once considered the pinnacle of technology. Not many survive, as new technologies and ideas are thought of every day. Therefore I think that you should choose something you’ll feel comfortable working with, and not just because it is now very hot. Just browse this article and you’ll find the 10 hottest client-side frameworks of today. While their fans argue among themselves about which is better (Backbone.js? AngularJS? Ember.js?), I think it’s quite clear that not all ten will remain on that list for years to come. Something else will become “hot”. What then? You can’t just replace your website’s framework every couple of months because some other technology became “hotter” than what you selected. Therefore, do not pick the hottest. Pick what works for you and your team. If it’s AngularJS, great. If it’s Backbone.js, great. If you’d rather keep a more classic approach by manipulating the DOM using jQuery or the amazing Vanilla JS, that’s great too. Just don’t bind yourself to something for the wrong reasons.

Summary
In fairness, I think AngularJS looks very interesting. The concepts seem very good: separation of concerns, testability, and the advocacy of single-page applications as true applications. In fact, I’m looking forward to using AngularJS in a real project. However, my main concern over AngularJS is that it seems to me to be a somewhat “closed” framework in which you must abide by “the rules”. While this has advantages, as it conforms you to doing things “properly, their way”, it is also a disadvantage, because if you eventually run into a situation where AngularJS doesn’t do what you expect, you’ll have a problem. You might then want to revert to something non-AngularJS such as jQuery, but this is strongly discouraged. You’re welcome to read this very helpful post and its replies; you’ll get an idea why you shouldn’t mix the two.

In case you’re asking yourself, I consider both jQuery and ASP.NET Ajax to be “open” libraries. They are not frameworks. You can work with them in your website and you can work without them. You can decide to adopt newer patterns & plugins and even other libraries. You can use plain JavaScript to manipulate the DOM any way you require, with or without them. In short, they are not constraining.

 
Posted on 27/10/2013 in Software Development

 