
Taking a [passport] photo using your camera and HTML 5 and uploading the result with ASP.NET Ajax

If you just want the sample, right-click this link, choose Save As, rename the file to .zip and extract it.


You can use an HTML5-capable browser to capture video and take photos, provided the browser supports this feature (at the time of this writing, the example works well in Firefox and Chrome but not in IE11).
There are good references on the internet for the basics, but I also needed an implementation that takes a “passport” photo and uploads the result. That is the intent of this post.

Steps:

  1. Capture video and take snapshot.
  2. Display target area.
  3. Crop the photo to a desired size.
  4. Upload the result to the server.

Step 1: Capture video and take snapshot.
This step relies mainly on Eric Bidelman’s excellent article. After consideration I decided not to repeat the necessary steps for taking a snapshot using HTML 5, so if you require a detailed explanation please read his article. However, the minimum code for this is pretty straightforward, so consider reading on. What you basically need is a browser that supports the video element and getUserMedia(). Also required is a canvas element for showing a snapshot of the video source.

<!DOCTYPE html>
<html>
<head>
    <title></title>
</head>
<body>
    <video autoplay width="320" height="240"></video>
    <canvas width='320' height='240' style="border:1px solid #d3d3d3;"></canvas>
    <div>
        <input type="button" value="start capture" onclick="startCapture()" />
        <input type="button" value="take snapshot" onclick="takePhoto()" />
        <input type="button" value="stop capture" onclick="stopCapture()" />
    </div>
    <script type="text/javascript">
        var localMediaStream = null;
        var video = document.querySelector('video');
        var canvas = document.querySelector('canvas');

        function takePhoto() {
            if (localMediaStream) {
                var ctx = canvas.getContext('2d');
                ctx.drawImage(video, 0, 0, 320, 240);
            }
        }

        navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia;
        window.URL = window.URL || window.webkitURL;

        function startCapture() {
            navigator.getUserMedia({ video: true }, function (stream) {
                video.src = window.URL.createObjectURL(stream);
                localMediaStream = stream;
            }, function (e) {
                console.log(e);
            });
        }

        function stopCapture() {
            video.pause();
            localMediaStream.stop();
        }
    </script>
</body>
</html>

Several points of interest:

  • The video element shows the captured stream. My camera seems to default to 640×480, but here the element is set to 320×240 so it takes up less space in the browser. Bear this in mind; it will be important later.
  • The canvas element receives the snapshots. Upon clicking ‘take snapshot’, the current video frame is rendered onto this canvas. Note the canvas size.
  • takePhoto() draws the snapshot image onto the canvas.
  • Before calling getUserMedia(), consider testing that the browser supports it (see the sketch after this list).
  • startCapture() requests the camera stream and wires it to the video element.

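A minimal sketch of such a test (my addition, using the same vendor-prefixed names the sample assigns above):

function hasGetUserMedia() {
    // true if any (vendor-prefixed) implementation exists
    return !!(navigator.getUserMedia || navigator.webkitGetUserMedia ||
        navigator.mozGetUserMedia || navigator.msGetUserMedia);
}

if (!hasGetUserMedia()) {
    alert('getUserMedia() is not supported by this browser');
}
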
The result, after starting a capture and taking a snapshot (video stream is on the left, canvas with snapshot is to the right):
[screenshot]

Step 2: Display target area.
As the camera takes pictures in “landscape”, we will attempt to crop the image to the desired portrait dimensions. Therefore the idea is to place a div on top of the video element to mark the target area, where the head is to be placed.

[screenshot]

The code:

<!DOCTYPE html>
<html>
<head>
    <title></title>
    <style type="text/css">
        .container {
            width: 320px;
            height: 240px;
            position: relative;
            border: 1px solid #d3d3d3;
            float: left;
        }

        .container video {
            width: 100%;
            height: 100%;
            position: absolute;
        }

        .container .photoArea {
            border: 2px dashed white;
            width: 140px;
            height: 190px;
            position: relative;
            margin: 0 auto;
            top: 40px;
        }

        canvas {
            float: left;
        }

        .controls {
            clear: both;
        }
    </style>
</head>
<body>
    <div class="container">
        <video autoplay></video>
        <div class="photoArea"></div>
    </div>
    <canvas width='320' height='240' style="border: 1px solid #d3d3d3;"></canvas>
    <div class="controls">
        <input type="button" value="start capture" onclick="startCapture()" />
        <input type="button" value="take snapshot" onclick="takePhoto()" />
        <input type="button" value="stop capture" onclick="stopCapture()" />
    </div>
    <script type="text/javascript">
        var localMediaStream = null;
        var video = document.querySelector('video');
        var canvas = document.querySelector('canvas');

        function takePhoto() {
            if (localMediaStream) {
                var ctx = canvas.getContext('2d');
                ctx.drawImage(video, 0, 0, 320, 240);
            }
        }

        navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia;
        window.URL = window.URL || window.webkitURL;

        function startCapture() {
            navigator.getUserMedia({ video: true }, function (stream) {
                video.src = window.URL.createObjectURL(stream);
                localMediaStream = stream;
            }, function (e) {
                console.log(e);
            });
        }

        function stopCapture() {
            video.pause();
            localMediaStream.stop();
        }
    </script>
</body>
</html>

As you can see, the code was modified to place the dashed area on top of the video. Points of interest:

  • The .photoArea CSS rule sets the dimensions of the target area (140×190). Also note that the target area is centered horizontally using ‘margin: 0 auto’.
  • The div with class ‘photoArea’ is the dashed area placed over the video.

Step 3: Crop picture to desired size.
Luckily the drawImage() method can not only resize a picture but also crop it. A good reference on drawImage is here, and a very good example is here. Still, this is tricky: the source here isn’t an existing image as in that example, but a captured video stream whose intrinsic size is 640×480 rather than the displayed 320×240. Since drawImage() samples the video at its intrinsic resolution, the x, y, width and height source arguments must be doubled. It took me some time to understand and figure that out (and if this understanding is incorrect I would appreciate it if someone could comment and provide the correct explanation).

As cropping might be a confusing business, my suggestion is to first “crop without cropping”. This means invoking the cropping overload of drawImage(), but making the target identical to the source in dimensions.

function takePhoto() {
    if (localMediaStream) {
        var ctx = canvas.getContext('2d');
        // original draw image
        //ctx.drawImage(video, 0, 0, 320, 240); 

        // crop without cropping: source args are doubled; 
        // target args are the expected dimensions
        // the result is identical to the previous drawImage
        ctx.drawImage(video, 0, 0, 640, 480, 0, 0, 320, 240);
    }
}

The result:
[screenshot]

Let’s review the arguments (skipping the first ‘video’ argument):

  • The first pair is the x,y starting point within the source.
  • The second pair is the width and height of the source region.
  • The third pair is the x,y starting point on the target canvas (these can be greater than zero, for example if you would like some padding).
  • The fourth pair is the width and height on the target canvas, effectively also allowing you to resize the picture.

Now let’s review the dimensions in our case:
[diagram: the 320×240 capture area with the centered 140×190 target area starting at y=40]

In this example the target area is 140×190 and starts at y=40. As the width of the capture area is 320 and the target area is 140, each margin is 90. So basically we should start cropping at x=90.

But since in the source picture everything is doubled as explained before, the drawImage looks different as the first four arguments are doubled:

function takePhoto() {
    if (localMediaStream) {
        var ctx = canvas.getContext('2d');
        //ctx.drawImage(video, 0, 0, 320, 240); // original draw image
        //ctx.drawImage(video, 0, 0, 640, 480, 0, 0, 320, 240); // entire image

        //instead of using the requested dimensions "as is"
        //ctx.drawImage(video, 90, 40, 140, 190, 0, 0, 140, 190);

        // we double the source args but not the target args
        ctx.drawImage(video, 180, 80, 280, 380, 0, 0, 140, 190);
    }
}

The result:
[screenshot]
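
As an aside (my addition, not from the original post): the doubling can be generalized by computing the scale from the video’s intrinsic dimensions, which also covers cameras that do not capture at 640×480:

function takePhotoGeneric() {
    if (localMediaStream) {
        var ctx = canvas.getContext('2d');
        // videoWidth/videoHeight are the stream's native dimensions;
        // 320x240 is the size of the displayed video element
        var scaleX = video.videoWidth / 320;  // 2 in this example
        var scaleY = video.videoHeight / 240; // 2 in this example
        ctx.drawImage(video,
            90 * scaleX, 40 * scaleY,   // source x,y
            140 * scaleX, 190 * scaleY, // source width,height
            0, 0, 140, 190);            // target rectangle
    }
}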

Step 4: Upload the result to the server.
Finally we would like to upload the cropped result to the server. For this purpose we will take the image from the canvas and set it as the source of an img tag.

<!DOCTYPE html>
<html>
<head>
    <title></title>
    <style type="text/css">
        .container {
            width: 320px;
            height: 240px;
            position: relative;
            border: 1px solid #d3d3d3;
            float: left;
        }

        .container video {
            width: 100%;
            height: 100%;
            position: absolute;
        }

        .container .photoArea {
            border: 2px dashed white;
            width: 140px;
            height: 190px;
            position: relative;
            margin: 0 auto;
            top: 40px;
        }

        canvas, img {
            float: left;
        }

        .controls {
            clear: both;
        }
    </style>
</head>
<body>
    <div class="container">
        <video autoplay></video>
        <div class="photoArea"></div>
    </div>
    <canvas width='140' height='190' style="border: 1px solid #d3d3d3;"></canvas>
    <img width="140" height="190" />
    <div class="controls">
        <input type="button" value="start capture" onclick="startCapture()" />
        <input type="button" value="take snapshot" onclick="takePhoto()" />
        <input type="button" value="stop capture" onclick="stopCapture()" />
    </div>
    <script type="text/javascript">
        var localMediaStream = null;
        var video = document.querySelector('video');
        var canvas = document.querySelector('canvas');

        function takePhoto() {
            if (localMediaStream) {
                var ctx = canvas.getContext('2d');
                //ctx.drawImage(video, 0, 0, 320, 240); // original draw image
                //ctx.drawImage(video, 0, 0, 640, 480, 0, 0, 320, 240); // entire image

                //instead of
                //ctx.drawImage(video, 90, 40, 140, 190, 0, 0, 140, 190);

                // we double the source coordinates
                ctx.drawImage(video, 180, 80, 280, 380, 0, 0, 140, 190);
                document.querySelector('img').src = canvas.toDataURL('image/jpeg');
            }
        }

        navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia;
        window.URL = window.URL || window.webkitURL;

        function startCapture() {
            navigator.getUserMedia({ video: true }, function (stream) {
                video.src = window.URL.createObjectURL(stream);
                localMediaStream = stream;
            }, function (e) {
                console.log(e);
            });
        }

        function stopCapture() {
            video.pause();
            localMediaStream.stop();
        }
    </script>
</body>
</html>
  • Note that the canvas has been resized to the desired image size (140×190), and the new img element is given the same dimensions. If they don’t match, the cropped image may appear stretched or scaled to the wrong size.
  • canvas.toDataURL('image/jpeg') instructs the canvas to return a JPEG as the source for the image (other image formats are also possible, but this is off topic).

This is how it looks. The video is on the left, the canvas is in the middle and the new img is to the right (it is masked in blue because of the debugger inspection). Note the debugger, which shows that the source of the image is a base64 string.
[screenshot]

Now we can add a button to upload the base64 string to the server. The example uses ASP.NET PageMethods but obviously you can pick whatever is convenient for yourself. The client code:

<!DOCTYPE html>
<html>
<head>
    <title></title>
    <style type="text/css">
        .container {
            width: 320px;
            height: 240px;
            position: relative;
            border: 1px solid #d3d3d3;
            float: left;
        }

        .container video {
            width: 100%;
            height: 100%;
            position: absolute;
        }

        .container .photoArea {
            border: 2px dashed white;
            width: 140px;
            height: 190px;
            position: relative;
            margin: 0 auto;
            top: 40px;
        }

        canvas, img {
            float: left;
        }

        .controls {
            clear: both;
        }
    </style>
</head>
<body>
    <form runat="server">
        <asp:ScriptManager runat="server" EnablePageMethods="true"></asp:ScriptManager>
    </form>
    <div class="container">
        <video autoplay></video>
        <div class="photoArea"></div>
    </div>
    <canvas width='140' height='190' style="border: 1px solid #d3d3d3;"></canvas>
    <img width="140" height="190" />
    <div class="controls">
        <input type="button" value="start capture" onclick="startCapture()" />
        <input type="button" value="take snapshot" onclick="takePhoto()" />
        <input type="button" value="stop capture" onclick="stopCapture()" />
        <input type="button" value="upload" onclick="upload()" />
    </div>
    <script type="text/javascript">
        var localMediaStream = null;
        var video = document.querySelector('video');
        var canvas = document.querySelector('canvas');

        function upload() {
            var base64 = document.querySelector('img').src;
            PageMethods.Upload(base64,
                function () { /* TODO: do something for success */ },
                function (e) { console.log(e); }
            );
        }

        function takePhoto() {
            if (localMediaStream) {
                var ctx = canvas.getContext('2d');
                //ctx.drawImage(video, 0, 0, 320, 240); // original draw image
                //ctx.drawImage(video, 0, 0, 640, 480, 0, 0, 320, 240); // entire image

                //instead of
                //ctx.drawImage(video, 90, 40, 140, 190, 0, 0, 140, 190);

                // we double the source coordinates
                ctx.drawImage(video, 180, 80, 280, 380, 0, 0, 140, 190);
                document.querySelector('img').src = canvas.toDataURL('image/jpeg');
            }
        }

        navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia;
        window.URL = window.URL || window.webkitURL;

        function startCapture() {
            navigator.getUserMedia({ video: true }, function (stream) {
                video.src = window.URL.createObjectURL(stream);
                localMediaStream = stream;
            }, function (e) {
                console.log(e);
            });
        }

        function stopCapture() {
            video.pause();
            localMediaStream.stop();
        }
    </script>
</body>
</html>
  • The ScriptManager with EnablePageMethods="true" provides the PageMethods support.
  • upload() takes the base64 string from the image and calls the proxy Upload method.

The server side:

public partial class _Default : System.Web.UI.Page
{
    [WebMethod]
    public static void Upload(string base64)
    {
        var parts = base64.Split(new char[] { ',' }, 2);
        var bytes = Convert.FromBase64String(parts[1]);
        var path = HttpContext.Current.Server.MapPath(string.Format("~/{0}.jpg", DateTime.Now.Ticks));
        System.IO.File.WriteAllBytes(path, bytes);
    }
}
  • As can be seen in the client debugger above, the base64 string has a data-URI prefix, so on the server side we split the string on the first comma, separating the prefix metadata from the image data (see the defensive sketch below).
  • Convert.FromBase64String() decodes the image data into bytes.
  • The bytes are saved to a local file. Replace this with whatever you need, such as storing in the DB.
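
One hedged refinement (my addition, not from the original post): Convert.FromBase64String() throws a FormatException on malformed input, so the prefix and payload can be validated before decoding:

[WebMethod]
public static void Upload(string base64)
{
    // the client sends a data URI, e.g. "data:image/jpeg;base64,/9j/4AAQ..."
    var parts = base64.Split(new char[] { ',' }, 2);
    if (parts.Length != 2 || !parts[0].StartsWith("data:image/"))
        throw new ArgumentException("Expected an image data URI");

    byte[] bytes;
    try
    {
        bytes = Convert.FromBase64String(parts[1]);
    }
    catch (FormatException)
    {
        throw new ArgumentException("Malformed base64 image data");
    }

    var path = HttpContext.Current.Server.MapPath(string.Format("~/{0}.jpg", DateTime.Now.Ticks));
    System.IO.File.WriteAllBytes(path, bytes);
}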

Addendum
There are several considerations you should think of:

  • What happens if the camera provides a source of different dimensions? (The generalized cropping sketch in step 3 addresses this.)
  • Browsers that do not support these capabilities.
  • The quality of the image. You can use other formats and get better photo quality (at the price of a larger byte size).
  • You might be required to clear the ‘src’ attribute of the video and/or img elements if you need to reset them before taking a new photo, to ensure a “fresh state” of these elements (see the sketch after this list).
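
A sketch of such a reset (my addition; element references as in the samples above):

function resetPhoto() {
    var ctx = canvas.getContext('2d');
    ctx.clearRect(0, 0, canvas.width, canvas.height);     // wipe the snapshot
    document.querySelector('img').removeAttribute('src'); // clear the preview
    video.src = '';                                       // detach the old stream
}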
 

Posted on 01/06/2014 in Software Development

 


The curious case of System.Timers.Timer

Basically there are three Timer classes in .NET: the WinForms Timer, System.Timers.Timer and System.Threading.Timer. For non-WinForms code your choice is between the latter two. For years I have worked with System.Timers.Timer, simply because I preferred its object model over the alternative.

The pattern I use with this Timer is always AutoReset=false. The reason is that, except for rare cases, I do not need the same operation carried out concurrently, so I do not need a re-entrant timer. This is probably what my usual timer code looks like:

static void Main(string[] args)
{
    System.Timers.Timer t = new System.Timers.Timer();
    t.AutoReset = false;
    t.Interval = 100;
    t.Elapsed += delegate
    {
        try
        {
            DoSomething();
        }
        catch (Exception ex)
        {
            // TODO log exception
        }
        finally
        {
            t.Start();
        }
    };
    t.Start();

    Console.ReadKey();
}

private static void DoSomething()
{
    Thread.Sleep(1000);
}
  • The t.Start() call at the bottom starts the timer for the first time.
  • When the logic is done, the finally block restarts the timer.

The try-catch-finally structure ensures that no exception will crash the process and that the timer restarts regardless of problems. This works as expected. However, recently I needed the code to run immediately the first time, rather than waiting for the first interval to fire the Elapsed event. Usually that’s not a problem either, because you can call DoSomething() before the first timer Start, like so:

static void Main(string[] args)
{
    System.Timers.Timer t = new System.Timers.Timer();
    t.AutoReset = false;
    t.Interval = 100;
    t.Elapsed += delegate
    {
        try
        {
            DoSomething();
        }
        catch (Exception ex)
        {
            // TODO log exception
        }
        finally
        {
            t.Start();
        }
    };

    DoSomething();
    
    t.Start();

    Console.ReadKey();
}

By calling DoSomething() before the first timer Start, we run the code once and only then turn the timer on. Thus the business code runs immediately, which is great. But the problem is that this first invocation of DoSomething() is blocking: if it takes too long, the remaining code in a real-world app will be blocked. So in this case I needed the first DoSomething() invocation to run in parallel. “No problem”, I thought: let’s make the first interval 1ms so that DoSomething() runs on a separate thread [almost] immediately, and then change the timer interval back to the desired 100ms within the Elapsed event handler:

static void Main(string[] args)
{
    System.Timers.Timer t = new System.Timers.Timer();
    t.AutoReset = false;
    t.Interval = 1;
    t.Elapsed += delegate
    {
        try
        {
            t.Interval = 100;

            DoSomething();
        }
        catch (Exception ex)
        {
            // TODO log exception
        }
        finally
        {
            t.Start();
        }
    };

    t.Start();

    Console.ReadKey();
}
  • The initial Interval is set to 1ms to allow an almost immediate first invocation of DoSomething() on a separate, non-blocking thread.
  • Inside the Elapsed handler, the Interval is changed back to the desired 100ms.

I thought that this solved it: a first run after 1ms, followed by non re-entrant intervals of 100ms. Luckily I had logs in DoSomething() that proved me wrong. It seems that the Elapsed event handler did in fact fire more than once at a time! I added a reference-counter-like mechanism to demonstrate:

static int refCount=0;
static void Main(string[] args)
{
    System.Timers.Timer t = new System.Timers.Timer();
    t.AutoReset = false;
    t.Interval = 1;
    t.Elapsed += delegate
    {
        Interlocked.Increment(ref refCount);
        try
        {
            t.Interval = 100;

            DoSomething();
        }
        catch (Exception ex)
        {
            // TODO log exception
        }
        finally
        {
            Interlocked.Decrement(ref refCount);
            t.Start();
        }
    };

    t.Start();

    Console.ReadKey();
}

private static void DoSomething()
{
    Console.WriteLine(refCount);
    Thread.Sleep(1000);
}

As can be seen below, the reference counter clearly shows that DoSomething() is called concurrently several times.
[console output: the reference count climbs above 1]

As a workaround, this behavior can be blocked with a boolean flag: set it to true at the beginning of the Elapsed handler, back to false in the finally clause, and test it on entry, skipping DoSomething() if it is already true (a sketch follows below). But this is ugly.
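
A sketch of that workaround (my addition; an int with Interlocked is used instead of a plain bool so the check-and-set is atomic):

static int busy = 0;
// ...timer setup as before...
t.Elapsed += delegate
{
    // skip this tick if a previous Elapsed invocation is still running;
    // that invocation's finally block will restart the timer
    if (Interlocked.CompareExchange(ref busy, 1, 0) == 1)
        return;
    try
    {
        t.Interval = 100;
        DoSomething();
    }
    catch (Exception ex)
    {
        // TODO log exception
    }
    finally
    {
        Interlocked.Exchange(ref busy, 0);
        t.Start();
    }
};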

In the real-world app things aren’t as clear as in sample code, and I was certain that the problem was with my code. I was absolutely confident that some other activity had instantiated more than one instance of the class containing this timer code, and that I was therefore witnessing multiple timers firing. As this wasn’t expected, I set out to find the other timers and instances, but found none. That made me somewhat happy, because I didn’t expect additional instances, but also frustrated, because the behavior seemed irrational and contradicted what I knew about timers. Finally (and after several hours of debugging!) I decided to google for it, and after a while I came across a stackoverflow thread that quoted the following from MSDN (emphasis is mine):

Note If Enabled and AutoReset are both set to false, and the timer has previously been enabled, setting the Interval property causes the Elapsed event to be raised once, as if the Enabled property had been set to true. To set the interval without raising the event, you can temporarily set the AutoReset property to true.

Impossible!

OK, let’s remove the interval change within the Elapsed event handler. And behold, the timer works as expected:
[console output: the reference count stays at 1]

WTF!? Are the writers of this Timer from Microsoft serious?? Is there any logical reason for this behavior?? This sounds like a bug that they decided not to fix but just document this behavior as if it was “by design” (and if anyone can come up with a good reason for this behavior, please comment).

Solution
Despite the MSDN doc, I wasn’t even going to check whether temporarily setting the “AutoReset property to true” solves it. That is even uglier than using a boolean to test whether Elapsed is already running.

After considering switching to the System.Threading.Timer to see if it provides the desired behavior, I decided to revert to what I consider a more comfortable solution: do not change the Interval at all, set it only once, and simply invoke DoSomething() for the first time on a separate thread:

static int refCount = 0;
static void Main(string[] args)
{
    System.Timers.Timer t = new System.Timers.Timer();
    t.AutoReset = false;
    t.Interval = 100;
    t.Elapsed += delegate
    {
        Interlocked.Increment(ref refCount);
        try
        {
            DoSomething();
        }
        catch (Exception ex)
        {
            // TODO log exception
        }
        finally
        {
            Interlocked.Decrement(ref refCount);
            t.Start();
        }
    };

    Task.Factory.StartNew(() =>
    {
        DoSomething();
        t.Start();
    }, TaskCreationOptions.LongRunning);

    Console.ReadKey();
}

In this solution I simply start a new long running thread, invoke DoSomething() first time and only then start the timer for the other invocations. Here’s the result:
[console output: 0 for the first run, then 1 for each timer run]
The first run doesn’t use the timer so the ref counter is 0. The other invocations of Timer’s Elapsed event set the ref count as expected and run only once.

 

Posted on 25/04/2014 in Software Development

 


Oracle ODP.NET provider and BLOBs over internet environment

Here’s a problem. If you run your queries in an environment where the DB and the .NET code are geographically close and the ping times are short, you are less likely to notice this issue. But when running them in totally different locations you may see very poor performance up to behavior which is impossible to work with.

To cut a long story short, after narrowing the problem down, the bad performance was unique to columns with LOB types. Googling this issue showed that it is due to Oracle’s ODP.NET provider: by default, ODP.NET does not fetch LOB data but defers it until explicitly requested by the calling application.

Luckily, this behavior can be controlled by setting the InitialLOBFetchSize property of the OracleCommand. By default it is 0, which means ‘defer’. You can set it to the number of bytes you would like to retrieve, or simply to -1 to fetch the data entirely. From the docs:

“By default, InitialLOBFetchSize is set to 0. If the InitialLOBFetchSize property value of the OracleCommand is left as 0, the entire LOB data retrieval is deferred until that data is explicitly requested by the application. If the InitialLOBFetchSize property is set to a nonzero value, the LOB data is immediately fetched up to the number of characters or bytes that the InitialLOBFetchSize property specifies.”

Read more here:

“When InitialLOBFetchSize is set to -1, the entire LOB data is prefetched and stored in the fetch array.”
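
Putting it together, a minimal sketch (assuming the unmanaged Oracle.DataAccess client; the connection string and query are placeholders):

using Oracle.DataAccess.Client;

using (var conn = new OracleConnection(connectionString))
using (var cmd = new OracleCommand("SELECT id, doc_blob FROM documents", conn))
{
    cmd.InitialLOBFetchSize = -1; // -1 = prefetch the entire LOB with the row
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // the BLOB content arrives with the row; no deferred round trip
            var bytes = (byte[])reader["doc_blob"];
        }
    }
}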

Personally I think the default should be exactly the opposite. It is the responsibility of the developer to include or exclude LOB columns in the SELECT clause; a developer who attempts a “SELECT *” will notice the lag and have to modify the query.

I also think that it is a shame that these properties must be set in code and cannot be tweaked in the config file.

Additional tips
Here are some additional tips. They do not relate to the obvious advice of fine-tuning your queries or database, but to the amount of data to be passed over the network (a brief sketch follows this list).

  • There is also an InitialLONGFetchSize that you can set to -1 to allow prefetching of LONG and LONG RAW data types.
  • If you set the CommandTimeout property to 0, it will be infinite (that goes also for MySQL, SQLServer and DB2). Take into consideration that setting InitialLOBFetchSize to -1 solves only the deferral problem; the data might still take a long time to fetch. Also note that the various documentations do not recommend an infinite timeout, but perhaps you should still increase it for queries that are supposed to retrieve a lot of data.
  • You should also consider changing the Connection Timeout. This is usually done via the connection string. Consult http://www.connectionstrings.com for how to do this.
  • Change your queries to retrieve not entire tables but only the columns you actually need.
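
A brief hedged sketch of these tips (cmd is the OracleCommand from the sketch above; the connection string is illustrative):

cmd.InitialLONGFetchSize = -1; // prefetch LONG / LONG RAW entirely
cmd.CommandTimeout = 300;      // seconds; 0 means wait indefinitely, use with care

// the connection timeout is set in the connection string, e.g.:
// "Data Source=MyDb;User Id=scott;Password=tiger;Connection Timeout=60"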
 

Posted on 22/12/2013 in Software Development

 


First attempt at AngularJS and ASP.NET’s PageMethods & WebMethods

If you need the example, right-click this link, choose Save As and rename the file to .zip.

I made a first attempt at AngularJS and was particularly interested in how I can use it alongside ASP.NET Ajax. The reason is because I am very fond of PageMethods and WebMethods for Ajax, as they are extremely simple and trivial to use. I know that just by writing this short introduction some AngularJS religious fans are shaking their heads in disagreement. That’s OK.

AngularJS seems nowadays like one of the hottest frameworks, so I thought I’d try it out. I did some reading (the AngularJS homepage shows really helpful examples), saw some videos and experimented a bit. My purpose: see if AngularJS can serve as a templating solution. Compared to other solutions, what I really liked about AngularJS was that it is embedded within the html itself. Note: AngularJS is much more than “just templates” (the html is not referred to as just a collection of DOM elements but as The “View” in MV* architectures), but that’s off topic. So in comparison, for example, to jsrender, which I’ve been using lately, AngularJS just integrates into the html, allowing you to manipulate it using JavaScript controllers. This is very convenient because your favorite editor will colorize everything as usual. That’s just a short description of one of the advantages. To be fair, I lack the experience to describe the gazillions of advantages of AngularJS. But again, this is off topic.

What I am interested in is something that I will be comfortable working with, that will also be reliable and long-lasting. Without going too much into detail, there are many libraries, 3rd-party controls and frameworks that failed to stand up to some or all of these requirements. Sometimes, by the time you figure out that a library is not living up to standards, much of your work has already been written using it and it is very difficult or even impossible to back out. Then you have to start working around problems, and this can be a pain. This is why I would rather not become too dependent on things that might later prove to be a constraint.

Having said that, this might explain why I would rather integrate AngularJS with ASP.NET Ajax. The latter is so convenient to me that I’d hate to give it up. Similarly, although jQuery is one major library that I work with daily and by far a huge success, I’m still using the good ol’ ASP.NET Ajax over jQuery’s Ajax wrappers. I manage to integrate them together with ease, so that’s comfy.

The problem was that after I experimented a little with AngularJS “hello world” samples, I tried to do the most trivial thing: make an Ajax call outside of the AngularJS framework and then apply the results using AngularJS. In short, I wasn’t trying to code “in AngularJS” but to use it for templating only. Using jsrender, what I would do is perform a simple Ajax call and, in the callback, use the results and templates, followed by injecting the html onto the page. Very simple. In AngularJS, initially, I couldn’t find a way to do that. That is, I could not find a way to change a $scope property from outside the controller. Perhaps there are ways to do it, but I failed to make a simple Ajax call and set the results as a datasource.

After some time it occurred to me that I was working with AngularJS the wrong way. I figured that if I can’t call the AngularJS controller from the outside, I need to change the datasource from the inside. Therefore I made the PageMethods Ajax call from within the controller, and then it worked as expected.

So the result is as follows. This is my ASP.NET Default.aspx:

<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>

<!DOCTYPE html>
<html>
<head runat="server">
    <title></title>
</head>
<body ng-app>
    <form id="form1" runat="server">
    <asp:ScriptManager runat="server" EnablePageMethods="true">
        <Scripts>
            <asp:ScriptReference Path="http://code.angularjs.org/1.0.8/angular.js" />
            <asp:ScriptReference Path="~/JScript.js" />
        </Scripts>
    </asp:ScriptManager>
    <div ng-controller="TestCtrl" ng-init="init()">
        {{source}}
    </div>
    </form>
</body>
</html>
  • This is a Default.aspx page, because we’re using ASP.NET here.
  • The ng-app attribute on the body element indicates an AngularJS app.
  • EnablePageMethods="true" on the ScriptManager enables PageMethods.
  • The ScriptReference entries load the AngularJS framework and my own js file containing the AngularJS controller.
  • ng-controller attaches my TestCtrl controller to the div, and ng-init activates the init() function when the page (view) loads.
  • {{source}} is the property whose value will be displayed and changed following the Ajax call. It is initialized in the controller, shown next.

And the JavaScript AngularJS controller I wrote (JScript.js):

function TestCtrl($scope, $timeout) {
    $scope.source = 10;
    $scope.init = function () {

        PageMethods.Inc($scope.source, function (result) {
            $scope.$apply(function () {
                $scope.source = result;
                $timeout(function () {
                    $scope.init();
                }, 100);
            });
        });

    }
}
  • The source property is initialized to a value of 10.
  • The init() method is called by AngularJS because of the ng-init placed on the div in the html.
  • The PageMethods.Inc() call is the ASP.NET Ajax call. It could be any Ajax call here: jQuery, ASP.NET Ajax or whatever you prefer. Note that the current value of ‘source’ is passed as an argument to the server, which will increment it.
  • In AngularJS you have to call $scope.$apply() to invoke the framework for calls made outside of it. Here’s a quote from the docs:

    “$apply() is used to execute an expression in angular from outside of the angular framework. (For example from browser DOM events, setTimeout, XHR or third party libraries). Because we are calling into the angular framework we need to perform proper scope life cycle of exception handling, executing watches.”

    Without $apply this code doesn’t seem to refresh the updated binding onto the html.

  • In the callback, the controller’s source property is changed.
  • The AngularJS $timeout function schedules consecutive recursive calls to the init() method. The ‘source’ is expected to increment endlessly.

That’s it. If you run this code you’ll see that it endlessly calls the server using ASP.NET Ajax and updates the AngularJS ‘source’ property as expected.
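
For completeness, the post doesn’t show the server-side page method; a minimal sketch of what it presumably looks like (the Inc name and signature are inferred from the client call):

public partial class _Default : System.Web.UI.Page
{
    [System.Web.Services.WebMethod]
    public static int Inc(int source)
    {
        // increment the value received from the AngularJS controller
        return source + 1;
    }
}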

“Philosophy”
Although AngularJS looks very tempting and sexy, what’s important to me is ease of use and not becoming too dependent. Like so many frameworks that seemed appealing at one time, AngularJS too has the potential to be looked at as an ancient dinosaur one day. That’s OK. I work with a dinosaur on a daily basis (ASP.NET WebForms). I believe that every developer with several years of experience in any environment can provide examples of libraries and frameworks that were once considered the pinnacle of technology. Not many survive, as new technologies and ideas are thought of every day. Therefore I think you should choose something you feel comfortable working with, not just something that is now very hot. Just browse this article and you’ll find the 10 hottest client-side frameworks of today. While their fans argue among themselves over which is better (Backbone.js? AngularJS? Ember.js?), I think it’s quite clear that not all ten will remain on that list for years to come. Something else will become “hot”. What then? You can’t just replace your website’s framework every couple of months because some other technology became “hotter” than what you selected. Therefore, do not pick the hottest. Pick what works for you and your team. If it’s AngularJS, great. If it’s Backbone.js, great. If you’d rather keep a more classic approach by manipulating the DOM using jQuery or the amazing Vanilla JS, that’s great too. Just don’t bind yourself to something for the wrong reasons.

Summary
In fairness, I think AngularJS looks very interesting. The concepts seem very good: separation of concerns, testability, and advocacy of single-page applications as true applications. In fact, I’m looking forward to using AngularJS in a real project. However, my main concern is that AngularJS seems to me a somewhat “too closed” framework, and that you must abide by “the rules”. While this has advantages, since it forces you to do things “properly, their way”, it is also a disadvantage: if you eventually run into a situation where AngularJS doesn’t do what you expect, you’ll have a problem. Then you might want to revert to something non-AngularJS, such as jQuery, but this is strongly discouraged. You’re welcome to read this very helpful post and its replies; you’ll get an idea of why you shouldn’t mix the two.

In case you’re asking yourself, I consider both jQuery and ASP.NET Ajax as “open” libraries. They are not frameworks. You can work with them in your website and you can work without. You can decide to adapt newer patterns & plugins and even other libraries. You can use plain JavaScript to manipulate the DOM any way you require with or without them. In short, they are not constraining.

 

Posted on 27/10/2013 in Software Development

 


Force a website to https using IIS Custom Error pages

A common requirement for secure websites is not only to support https but to make it mandatory. The problem is that if you require SSL on your website, the end user receives an ugly 403.4 message informing them that SSL is required. Why IIS doesn’t have a simple check box in the “Require SSL” dialog to “auto redirect requests to https” is unclear to me, but in this post I’ll explain how simple it is to accomplish this without writing any code at all.

So, in order to force a website to https and redirect plain http requests to https, you have various methods. At times I did this using server code: detect a plain http request and redirect from the server. But recently I attempted this using simple IIS configuration. The idea is as follows:

  1. Tweak IIS to require SSL. By default, this will inform the user of a 403.4 auth error.
  2. Using IIS’ Custom Errors feature, customize the 403.4 to redirect to https.

Before we start: naturally, you need a valid SSL certificate for this procedure to work. If you just need a test certificate for development and practice, you can have IIS generate a dummy certificate for you like so:

  1. In IIS Manager, select the Server name on the left.
  2. Go into Server Certificates in the Features View.
  3. In the Actions pane on the right, select Create Self-Signed Certificate.

To enable SSL on your website after you have installed an SSL certificate:

  1. In IIS Manager, select the target website.
  2. On the Actions pane on the right, click Bindings.
  3. In the opening dialog, click Add, select “https” and then select the desired certificate.
  4. Test that SSL is working by browsing to https.

Now we can configure a redirect to https.

Tweaking IIS to require SSL

Open IIS and select the target website or virtual application. In the Features View, select SSL Settings.

[screenshot] Select “Require SSL” and, under client certificates, “Accept”. Do not select “Require” or this won’t work at all.

[screenshot]

Now if you try to browse to http as usual, you should see a 403.4 message like so:

[screenshot]

Using Custom Error pages

In order to use custom Error pages, this feature must be installed. If you notice that your IIS does not provide the Error Pages feature, simply install it (the screenshot below is from Windows 7):

[screenshot]

In IIS, select the target server, website or application on the left. In the Features View, select Error Pages under IIS (note: this is NOT the same as .NET Error Pages under ASP.NET):

[screenshot]

In the right pane select “Edit Feature Settings…”

[screenshot]

In the dialog that opens, select “Custom error pages” and click OK. This ensures that the redirect we configure later on will actually take effect:

[screenshot]

Finally, we have to define a new “error page” rule to handle 403.4 and perform a redirect. Just click Add in the Actions pane on the right and fill in the desired redirect rule details:

[screenshot]

Eventually, it looks like this:

[screenshot]

That’s it. Now if you browse to http you should be redirected to https. The web.config looks as follows:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <system.webServer>
        <httpErrors>
            <remove statusCode="403" subStatusCode="4" />
            <error statusCode="403" subStatusCode="4" path="https://localhost" 
responseMode="Redirect" />
        </httpErrors>
    </system.webServer>
</configuration>

 

Posted on 21/06/2013 in Software Development

 


Handling unhandled Task exceptions in ASP.NET 4

This blog post isn’t intended to explain how to use IsFaulted or any other method for handling Task exceptions in a specific block of code, but how to provide a general solution to handling unhandled Task exceptions, specifically in ASP.NET. The problem dealt with in this post is a situation that a developer didn’t handle an exception in the code for whatever reason, and the process terminates as a result.

I came across a problematic situation: in a production environment, an ASP.NET IIS process seemed to be terminating and initializing constantly. After some research, it turned out that a certain Task was raising exceptions that were not handled, and the Application_Error handler was not raised for them. In .NET 4, unhandled Task exceptions terminate the running process. Fortunately, Microsoft changed this behavior in .NET 4.5, and by default the process is no longer terminated. It is possible to revert to the previous policy by tweaking the web.config, although I find it hard to think why one would want to do that, except maybe in a development environment in order to be aware that such unhandled exceptions exist.

Back to .NET 4, we still have to prevent termination of the process. The first issue was finding how to catch unhandled Task exceptions, as it became clear that Application_Error wasn’t handling them. I googled for a solution. It turns out that there’s an UnobservedTaskException event designated for this exact purpose; it is part of the TaskScheduler class. Whenever an unhandled Task exception is thrown, it may be handled by event handlers wired to this event. The code block below is an example of how this event can be put to use in a global.asax file.

    void Application_Start(object sender, EventArgs e)
    {
        // Code that runs on application startup
        System.Threading.Tasks.TaskScheduler.UnobservedTaskException += TaskScheduler_UnobservedTaskException;
    }

    void TaskScheduler_UnobservedTaskException(object sender, System.Threading.Tasks.UnobservedTaskExceptionEventArgs e)
    {
        e.SetObserved();
    }

As you can see, when you call the SetObserved() method, this marks the exception as “handled”, and the process will not terminate.

Note that the exception is actually thrown when the Task is garbage collected (it is raised during the Task’s finalization). This means that as long as there are references to the Task instance, it will not be garbage collected and the exception will not be thrown.

Depending on the TaskCreationOptions, the raised events and event handling may vary in behavior. For example, if you have nested Tasks throwing exceptions and they are created with AttachedToParent, a single UnobservedTaskException event will be raised for all those exceptions, and you may handle each of them differently if required. The incoming exceptions in such a case are not “flattened” but nested as well, and you may iterate recursively over the different exceptions in order to treat each one independently. Alternatively, you may call the Flatten() method to receive all the exceptions at the same “level” and handle them as if they were not nested.

In a test page, a Button_Click event handler raises three exceptions. The nested Tasks are created with AttachedToParent, which affects how their exceptions are received in the UnobservedTaskException event handling. I added a GC_Click button and event handler to speed things up:

    protected void Button_Click(object sender, EventArgs e)
    {
        Task.Factory.StartNew(() =>
        {
            Task.Factory.StartNew(() =>
            {
                Task.Factory.StartNew(() =>
                {
                    throw new ApplicationException("deepest exception");
                }, TaskCreationOptions.AttachedToParent);

                throw new ApplicationException("internal exception");
            }, TaskCreationOptions.AttachedToParent);

            System.Threading.Thread.Sleep(100);
            throw new ApplicationException("exception");
        }, TaskCreationOptions.LongRunning);
    }

    protected void GC_Click(object sender, EventArgs e)
    {
        GC.Collect();
    }

You can see in the Watch window how these exceptions are received in the event handler by default:
[debugger watch window]

And with the Flatten() method:
[debugger watch window]

Note: If the Task creation is set differently, for example to LongRunning, each exception gets its own event handling. In other words, the UnobservedTaskException event will be raised multiple times.

Now comes the other part where you would probably want to Log the different exceptions. Assuming that you would want to log all the exceptions, regardless of their “relationship”, this is one way to do it, assuming a Log method exists:

    void TaskScheduler_UnobservedTaskException(object sender, System.Threading.Tasks.UnobservedTaskExceptionEventArgs e)
    {
        e.SetObserved();
        e.Exception.Flatten().Handle(ex =>
        {
            try
            {
                Log(ex);
                return true;
            }
            catch { return true; }
        });
    }

The Handle method allows iteration over the different exceptions. Each exception is logged and true is returned to indicate that it was handled. The try-catch there ensures that the logging procedure itself won’t raise exceptions (obviously, you may choose not to do that). You should also take into account that if you would like to implement logic that determines whether an exception should be handled or not, you may return false for exceptions that were not handled. According to MSDN, this will throw a new AggregateException containing all the exceptions not handled.

Next step: apply the exception handling to an existing application in production.
If this code is incorporated during development, a simple change to global.asax will do. But here it had to be incorporated into a running production environment. For a web site, it is still possible to patch global.asax; you just have to schedule a downtime, as ASP.NET will detect the change to global.asax and restart the application. The biggest advantage is, of course, the ability to change global.asax in a production environment, as ASP.NET will automatically compile it.

But for a web application this is different. As global.asax.cs is already compiled into an assembly in a production environment, there are basically two options: either compile the solution and deploy an upgrade, or write an http module. Writing a module probably makes more sense, as you don’t have to recompile and redeploy existing assemblies. You just have to add an HttpModule to the existing web app.

Here’s an example for such a module:

    public class TaskExceptionHandlingModule : IHttpModule
    {
        public void Dispose() { }

        public void Init(HttpApplication context)
        {
            System.Threading.Tasks.TaskScheduler.UnobservedTaskException += TaskScheduler_UnobservedTaskException;
        }

        void TaskScheduler_UnobservedTaskException(object sender, System.Threading.Tasks.UnobservedTaskExceptionEventArgs e)
        {
            e.SetObserved();
            e.Exception.Flatten().Handle(ex =>
            {
                try
                {
                    Log(ex);
                    return true;
                }
                catch { return true; }
            });
        }

        private void Log(Exception ex)
        {
            // TODO: log the exception
        }
    }

All that is left to do is place the compiled http module in the bin folder and add a reference to it in the web.config. Note: the location within the web.config may vary depending on the IIS version and pipeline mode used, as shown below.

<httpModules>
<add name="TaskExceptionHandlingModule" type="TaskExceptionHandlingModule.TaskExceptionHandlingModule, TaskExceptionHandlingModule"/>
</httpModules>
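
For reference, the registration above targets the classic pipeline (system.web). Under IIS 7+ integrated mode the module is registered under system.webServer instead; a sketch, assuming the same assembly and type names:

<system.webServer>
  <modules>
    <add name="TaskExceptionHandlingModule" type="TaskExceptionHandlingModule.TaskExceptionHandlingModule, TaskExceptionHandlingModule"/>
  </modules>
</system.webServer>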

Notice that changing the web.config and placing the module in the bin folder will cause the application to restart, so this has to be done at a coordinated downtime.

Assuming the module was placed in the correct location and that the web.config was properly configured, the process is no longer expected to terminate due to unhandled task exceptions, and those exceptions should now be logged.

 

 

Posted on 26/05/2013 in Software Development

 


Getting the parameters and values of the SoapMessage

Sometimes it’s a good practice to be able to process a WebService request’s argument. You may want to consider various things: logging the arguments, detecting specific arguments for some business logic implementation or validation of the passed-in values (security).

For the sake of this post, the arguments will be logged using the following code, but then again, you should think “bigger”. Anyway, here’s the logging code:

public static class Logger
{
    public static void LogParameters(NameValueCollection map)
    {
        foreach (string item in map)
        {
            System.Diagnostics.Debug.WriteLine("{0}={1}", item, map[item]);
        }
    }
}

Simple POST and GET
“Regular” POST or GET requests are quite simple to process. All you have to do is place code in the constructor of the WebService, get the collection of arguments and do whatever you want with them:

[WebService(Namespace = "http://tempuri.org/")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
public class MyService : System.Web.Services.WebService
{
    public MyService()
    {
        var request = this.Context.Request;
        NameValueCollection map = null;
        if (request.HttpMethod == "POST")
        {
            map = request.Form;

        }
        else if (request.HttpMethod == "GET")
        {
            map = request.QueryString;
        }

        if (map != null)
            Logger.LogParameters(map);
    }

    [WebMethod]
    [TraceExtension]
    public string GetGreeting(string name)
    {
        return "Hello " + name;
    }
}

SOAP
However, for SOAP this is a little more difficult. Although you could get the raw request contents and parse the arguments and their values yourself, there’s a much more convenient way: .NET already parses the Soap message, so why not use it? You can write a class that receives the parsed SoapMessage and takes the arguments from there. In order to do so, you have to write two classes: one class receives the SoapMessage, and the other is an Attribute that performs the hook between the called WebMethod and your class.

If you noticed, the GetGreeting WebMethod above is decorated with a TraceExtension attribute. Here is the code for it:

[AttributeUsage(AttributeTargets.Method)]
public class TraceExtensionAttribute : SoapExtensionAttribute
{
    public override Type ExtensionType { get { return typeof(TraceExtension); } }
    private int priority;
    public override int Priority
    {
        get { return this.priority; }
        set { this.priority = value; }
    }
}

Note that the attribute inherits from SoapExtensionAttribute, which is a requirement for hooking your class in.
Also note that the ExtensionType property returns the Type of the actual class that is going to receive the SoapMessage and handle it. See the class below:

public class TraceExtension : SoapExtension
{
    public override void ProcessMessage(SoapMessage message)
    {
        switch (message.Stage)
        {
            case SoapMessageStage.AfterDeserialize:
                var map = new NameValueCollection();
                for (int i = 0; i < message.MethodInfo.InParameters.Length; ++i)
                {
                    var p = message.MethodInfo.InParameters[i];
                    object val = message.GetInParameterValue(i);
                    map.Add(p.Name, val.ToString());
                }

                Logger.LogParameters(map);
                break;

            case SoapMessageStage.AfterSerialize:
                break;

            case SoapMessageStage.BeforeDeserialize:
                break;

            case SoapMessageStage.BeforeSerialize:
                break;
        }
    }

    public override object GetInitializer(Type serviceType)
    {
        return null;
    }

    public override object GetInitializer(LogicalMethodInfo methodInfo, SoapExtensionAttribute attribute)
    {
        return null;
    }

    public override void Initialize(object initializer)
    {
    }
}

The important code here is the ProcessMessage override. The SoapMessage is received at the various stages of deserialization (and serialization), which gives you flexibility should you require it. As you can see, in the AfterDeserialize stage there’s a loop over the input parameters. This way we receive the already-parsed parameters, which is what we wanted in the first place. Once we have the parameters in a NameValueCollection, we pass it to the handling method for further processing (logging, in this particular example).
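
Incidentally, instead of decorating each WebMethod with the attribute, a SoapExtension can also be registered globally in web.config so that it runs for every SOAP method. A sketch, assuming the TraceExtension class above is compiled into an assembly named TraceExtensionLib:

<system.web>
  <webServices>
    <soapExtensionTypes>
      <add type="TraceExtension, TraceExtensionLib" priority="1" group="0" />
    </soapExtensionTypes>
  </webServices>
</system.web>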

Credits:
The code here is based mainly on the SoapExtension class example from MSDN.

If you’re interested to learn more about the life of the SoapMessage or other stuff that you can do with it, this link may interest you.

 

Posted on 03/04/2013 in Software Development

 


 