
Force a website to https using IIS Custom Error pages

A common requirement for secure websites is not only to support https but to make it mandatory. The problem is that if you require SSL on your website, the end user receives an ugly 403.4 message informing them that SSL is required. Why IIS doesn’t have a simple check box in the “Require SSL” dialog to “auto redirect requests to https” is unclear to me, but in this post I’ll explain how simple it is to accomplish this without writing any code at all.

So, in order to force a website to https and redirect plain http requests to https, you have various options. In the past I did this using server code: detect that the request is plain http and redirect from the server. But recently I tried doing it using simple IIS configuration. The idea is as follows:

  1. Tweak IIS to require SSL. By default, this will present the user with a 403.4 error.
  2. Using IIS’ Custom Errors feature, customize the 403.4 to redirect to https.

Before we start: naturally, you need a valid SSL certificate for this procedure to work. If you just need a test certificate for development and practice, you can have IIS generate a dummy certificate for you like so:

  1. In IIS Manager, select the Server name on the left.
  2. Go into Server Certificates in the Features View.
  3. In the Actions pane on the right, select Create Self-Signed Certificate.

To enable SSL on your website after you have installed an SSL certificate:

  1. In IIS Manager, select the target website.
  2. On the Actions pane on the right, click Bindings.
  3. In the dialog that opens, click Add, select “https” and then select the desired certificate.
  4. Test that SSL is working by browsing to https.

Now we can configure a redirect to https.

Tweaking IIS to require SSL

Open IIS and select the target website or virtual application. In the Features View, select SSL Settings.

Select “Require SSL” and, for client certificates, “Accept”. Do not select “Require” or this won’t work at all.


Now if you try to browse to http as usual, you should see a 403.4 message like so:


Using Custom Error pages

In order to use custom Error pages, this feature must be installed. If you notice that your IIS does not provide the Error Pages feature, simply install it (the screenshot below is from Windows 7):


In IIS, select the target server, website or application on the left. In the Features View, select Error Pages under IIS (note: this is NOT the same as .NET Error Pages under ASP.NET):


In the Actions pane on the right, select “Edit Feature Settings…”


In the dialog that opens, select “Custom error pages” and click OK. This ensures that the redirect we configure later on will actually be in effect:


Finally, we have to define a new “error page” rule to handle 403.4 and perform a redirect. Just click Add in the Actions pane on the right and fill in the desired redirect rule details:


Eventually, it should look like this:


That’s it. Now if you browse to http you should be redirected to https. The web.config looks as follows:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <system.webServer>
        <httpErrors>
            <remove statusCode="403" subStatusCode="4" />
            <error statusCode="403" subStatusCode="4" path="https://localhost" responseMode="Redirect" />
        </httpErrors>
    </system.webServer>
</configuration>

 

Posted by on 21/06/2013 in Software Development

 


Handling unhandled Task exceptions in ASP.NET 4

This blog post isn’t intended to explain how to use IsFaulted or any other method for handling Task exceptions in a specific block of code, but rather how to provide a general solution for handling unhandled Task exceptions, specifically in ASP.NET. The problem dealt with in this post is the situation where a developer didn’t handle an exception in the code for whatever reason, and the process terminates as a result.

I came across a problematic situation – in a production environment, an ASP.NET IIS process seemed to be terminating and initializing constantly. After some research, it turned out that a certain Task was raising exceptions that were not handled, and the Application_Error handler was not raised for them. In .NET 4, unhandled Task exceptions terminate the running process. Fortunately, Microsoft changed this behavior in .NET 4.5, and by default the process is no longer terminated. It is possible to revert to the previous policy by tweaking the configuration, although I find it hard to think why one would want to do that, except maybe in a development environment in order to be aware that such unhandled exceptions exist.
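For reference, reverting .NET 4.5 to the old terminate-on-unobserved-exception policy is done with the ThrowUnobservedTaskExceptions element in the runtime section of the relevant configuration file (a sketch; where exactly the file lives depends on your hosting setup):

<configuration>
  <runtime>
    <!-- true restores the .NET 4 behavior: unobserved Task exceptions
         terminate the process. The .NET 4.5 default is false. -->
    <ThrowUnobservedTaskExceptions enabled="true" />
  </runtime>
</configuration>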

Back to .NET 4, we still have to prevent termination of the process. The first issue was to find how to catch unhandled Task exceptions, as it became clear that Application_Error wasn’t handling them. I googled for a solution. It turns out that there’s an UnobservedTaskException event designated for this exact purpose. It’s part of the TaskScheduler class. Whenever an unhandled Task exception is thrown, it may be handled by event handlers wired to this event. The code block below is an example of how this event can be put into use in a global.asax file.

    void Application_Start(object sender, EventArgs e)
    {
        // Code that runs on application startup
        System.Threading.Tasks.TaskScheduler.UnobservedTaskException += TaskScheduler_UnobservedTaskException;
    }

    void TaskScheduler_UnobservedTaskException(object sender, System.Threading.Tasks.UnobservedTaskExceptionEventArgs e)
    {
        e.SetObserved();
    }

As you can see, calling the SetObserved() method marks the exception as “observed”, and the process will not terminate.

Note that the exception is basically thrown when the Task is GC-ed. This means that as long as there are references to the Task instance, it will not be garbage collected and the exception will not be thrown.

Depending on the TaskCreationOptions, the raised events and event handling may vary in behavior. For example, if you have nested Tasks throwing exceptions, and the TaskCreationOptions is set to be attached to the parent, a single UnobservedTaskException event will be raised for all those exceptions, and you may handle each of these exceptions differently, if required. The incoming exceptions in such a case are not “flattened” but nested as well, and you may iterate recursively over the nested exceptions in order to treat each one independently. However, you may call the Flatten() method to receive all the exceptions at the same “level” and handle them as if they were not nested.

In a test page, a button Click event handler raises three exceptions. The nested Tasks are created with AttachedToParent, which affects how their exceptions will be received in the UnobservedTaskException event handler. I added a GC_Click button and event handler to speed things up:

    protected void Button_Click(object sender, EventArgs e)
    {
        Task.Factory.StartNew(() =>
        {
            Task.Factory.StartNew(() =>
            {
                Task.Factory.StartNew(() =>
                {
                    throw new ApplicationException("deepest exception");
                }, TaskCreationOptions.AttachedToParent);

                throw new ApplicationException("internal exception");
            }, TaskCreationOptions.AttachedToParent);

            System.Threading.Thread.Sleep(100);
            throw new ApplicationException("exception");
        }, TaskCreationOptions.LongRunning);
    }

    protected void GC_Click(object sender, EventArgs e)
    {
        GC.Collect();
    }

You can see in the Watch window, how these exceptions are received in the event handler by default:

And with the Flatten() method:

Note: If the Task creation is set differently, for example to LongRunning, each exception will get its own event invocation. In other words, the UnobservedTaskException event will be raised multiple times.

Now comes the other part, where you would probably want to log the different exceptions. Assuming that you want to log all the exceptions regardless of their “relationship”, this is one way to do it (assuming a Log method exists):

    void TaskScheduler_UnobservedTaskException(object sender, System.Threading.Tasks.UnobservedTaskExceptionEventArgs e)
    {
        e.SetObserved();
        e.Exception.Flatten().Handle(ex =>
        {
            try
            {
                Log(ex);
                return true;
            }
            catch { return true; }
        });
    }

The Handle method allows iteration over the different exceptions. Each exception is logged and true is returned to indicate that it was handled. The try-catch is there to ensure that the logging procedure itself won’t raise exceptions (obviously, you may choose not to do that). You should also take into account that if you would like to implement some kind of logic that determines whether an exception should be handled or not, you may return false for exceptions not handled. According to MSDN, this would throw a new AggregateException containing all the exceptions not handled.

Next step: apply the exception handling to an existing application in production.
If this code is incorporated during development, a simple change to global.asax would do. But this had to be incorporated in a running production environment. If it is a web site, then it is still possible to patch the global.asax; you just have to schedule a downtime, as ASP.NET will detect the change to global.asax and restart the application. The biggest advantage is of course the ability to change global.asax in a production environment, as ASP.NET will automatically compile it.

But for a web application this is different. As global.asax.cs is already compiled into an assembly in the production environment, there are basically two options: either compile the solution and deploy an upgrade, or write an http module. Writing a module probably makes more sense, as you don’t have to recompile and redeploy existing assemblies. You just have to add an HttpModule to the existing web app.

Here’s an example for such a module:

    public class TaskExceptionHandlingModule : IHttpModule
    {
        public void Dispose() { }

        public void Init(HttpApplication context)
        {
            System.Threading.Tasks.TaskScheduler.UnobservedTaskException += TaskScheduler_UnobservedTaskException;
        }

        void TaskScheduler_UnobservedTaskException(object sender, System.Threading.Tasks.UnobservedTaskExceptionEventArgs e)
        {
            e.SetObserved();
            e.Exception.Flatten().Handle(ex =>
            {
                try
                {
                    Log(ex);
                    return true;
                }
                catch { return true; }
            });
        }

        private void Log(Exception ex)
        {
            // TODO: log the exception
        }
    }

All that is left to do is place the compiled http module in the bin folder and add a reference to it in the web.config. Note: the location within the web.config may vary depending on the IIS version and pipeline mode used.

<httpModules>
  <add name="TaskExceptionHandlingModule" type="TaskExceptionHandlingModule.TaskExceptionHandlingModule, TaskExceptionHandlingModule"/>
</httpModules>
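The <httpModules> section above applies to the classic pipeline. Under the IIS7 integrated pipeline, the module is registered in <system.webServer> instead (same type string assumed as above):

<system.webServer>
  <modules>
    <add name="TaskExceptionHandlingModule"
         type="TaskExceptionHandlingModule.TaskExceptionHandlingModule, TaskExceptionHandlingModule" />
  </modules>
</system.webServer>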

Notice that changing the web.config and placing the module in the bin folder of the web application will cause it to restart, so this has to be done at a coordinated downtime.

Assuming the module was placed in the correct location and that the web.config was properly configured, the process is no longer expected to terminate due to unhandled task exceptions, and those exceptions should now be logged.

 

 

Posted by on 26/05/2013 in Software Development

 


Getting the parameters and values of the SoapMessage

Sometimes it’s good practice to be able to process a WebService request’s arguments. You may want to do various things with them: log the arguments, detect specific arguments for some business logic, or validate the passed-in values (security).

For the sake of this post, the arguments will be logged using the following code, but then again – you should think “bigger”. Anyway, here’s the logging code:

public static class Logger
{
    public static void LogParameters(NameValueCollection map)
    {
        foreach (string item in map)
        {
            System.Diagnostics.Debug.WriteLine("{0}={1}", item, map[item]);
        }
    }
}

Simple POST and GET
“Regular” POST or GET requests are quite simple to process. All you have to do is place code in the constructor of the WebService, get the collection of arguments and do whatever you want with them:

[WebService(Namespace = "http://tempuri.org/")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
public class MyService : System.Web.Services.WebService
{
    public MyService()
    {
        var request = this.Context.Request;
        NameValueCollection map = null;
        if (request.HttpMethod == "POST")
        {
            map = request.Form;

        }
        else if (request.HttpMethod == "GET")
        {
            map = request.QueryString;
        }

        if (map != null)
            Logger.LogParameters(map);
    }

    [WebMethod]
    [TraceExtension]
    public string GetGreeting(string name)
    {
        return "Hello " + name;
    }
}

SOAP
However, for SOAP this is a little more difficult. Although you can try to get the request contents and parse the arguments and their values yourself, there’s a much more convenient way of doing so. .NET already parses the Soap message, so why not use it? You can write a class that receives the parsed SoapMessage and takes the arguments from there. In order to do so, you have to write two classes: one class will receive the SoapMessage, and the other is an Attribute that performs the hook between the called WebMethod and your class.

If you noticed, the GetGreeting WebMethod above is decorated with a TraceExtension Attribute. Here is the code for it:

[AttributeUsage(AttributeTargets.Method)]
public class TraceExtensionAttribute : SoapExtensionAttribute
{
    public override Type ExtensionType { get { return typeof(TraceExtension); } }
    private int priority;
    public override int Priority
    {
        get { return this.priority; }
        set { this.priority = value; }
    }
}

Note that the Attribute inherits from SoapExtensionAttribute, which is a requirement to perform the hook to your class.
Also note that the ExtensionType property returns the Type of the actual class that is going to receive the SoapMessage and handle it. See the class below:

public class TraceExtension : SoapExtension
{
    public override void ProcessMessage(SoapMessage message)
    {
        switch (message.Stage)
        {
            case SoapMessageStage.AfterDeserialize:
                var map = new NameValueCollection();
                for (int i = 0; i < message.MethodInfo.InParameters.Length; ++i)
                {
                    var p = message.MethodInfo.InParameters[i];
                    object val = message.GetInParameterValue(i);
                    map.Add(p.Name, val.ToString());
                }

                Logger.LogParameters(map);
                break;

            case SoapMessageStage.AfterSerialize:
                break;

            case SoapMessageStage.BeforeDeserialize:
                break;

            case SoapMessageStage.BeforeSerialize:
                break;
        }
    }

    public override object GetInitializer(Type serviceType)
    {
        return null;
    }

    public override object GetInitializer(LogicalMethodInfo methodInfo, SoapExtensionAttribute attribute)
    {
        return null;
    }

    public override void Initialize(object initializer)
    {
    }
}

The important code here is the override of the ProcessMessage method. Here the SoapMessage is received at various stages of the deserialization (and serialization), which allows you flexibility should you require it. As you can see, in the AfterDeserialize case there’s a loop over the input parameters. This way we receive the already parsed parameters, which is what we wanted in the first place. Once we have the parameters in a NameValueCollection, we pass it to the handling method for further processing (logging, in this particular example).

Credits:
The code here is based mainly on the SoapExtension class example from MSDN.

If you’re interested in learning more about the life of the SoapMessage or other things you can do with it, this link may interest you.

 

Posted by on 03/04/2013 in Software Development

 


HTML5 Drag and Drop Ajax file upload using jQuery File Upload plugin

If you just want the sample, right click this link, save as, rename to zip and extract.

This post is in a way a continuation of an earlier post, which explains how you can achieve an Ajax file upload with ASP.NET. In this post I’ll focus on how to achieve it using the HTML5 Drag and Drop features from Windows Explorer.

Note: IE9 and below have limited Drag/Drop support, so don’t expect this solution to work on those browsers.
Note 2: The File Upload plugin and jQuery used in this post are not the latest versions, so some changes may be required to make this work well with newer versions.

In order to add drag and drop to an existing file upload solution, all you have to do is bind three events: two of them toggle CSS classes and have nothing to do with the upload itself, and one handles the actual ‘drop’. The CSS classes are not mandatory for the upload to work; they are simply one way to provide feedback to the end user that a drag operation is taking place.

Before dragging, the page looks like so:
beforedrag

During the drag operation CSS classes provide a feedback to the user:
drag

The JavaScript used here:

$(function () {
    // file upload
    $('#fileupload').fileupload({
        replaceFileInput: false,
        formData: function (form) {
            return [{ name: "name1", value: "value1" }, { name: "name2", value: "value2"}];
        },
        dataType: 'json',
        url: '<%= ResolveUrl("AjaxFileHandler.ashx") %>',
        done: function (e, data) {
            $.each(data.result, function (index, file) {
                $('<p/>').text(file).appendTo('.divUpload');
            });
        }
    });

    // handle drag/drop
    $('body').bind("dragover", function (e) {
        $('.divUpload').addClass('drag');
        e.originalEvent.dataTransfer.dropEffect = 'copy';
        return false;
    }).bind("dragleave", function (e) {
        $('.divUpload').removeClass('drag');
        return false;
    }).bind("drop", function (e) {
        e.preventDefault();
        $('.divUpload').removeClass('drag');
        var list = $.makeArray(e.originalEvent.dataTransfer.files);
        $('#fileupload').fileupload('add', { files: list });
        return false;
    });
});

The code is quite self-explanatory, but nevertheless here’s a short explanation:

  • The fileupload() initialization is the same code from the previous post. It uses the jQuery File Upload plugin for Ajax file upload; when done, the file names are displayed as HTML elements (change this to whatever you require).
  • The “dragover” handler simply adds a CSS class to provide feedback to the user.
  • The “dragleave” handler removes that CSS class.
  • The “drop” handler is the important part. We remove the CSS class that provides the dragging feedback, take the selected files and add them to the File Upload plugin, which starts the upload. Note that the original files object from the event is converted to an array (“list”), and only then is the information passed on to the plugin. If you skip this, the plugin might not work.
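The $.makeArray conversion in the ‘drop’ handler matters because dataTransfer.files is an array-like FileList, not a true Array. A minimal plain-JavaScript sketch of the same conversion (using a plain object to stand in for a FileList):

```javascript
// A FileList is array-like: it has numeric keys and a length,
// but none of the Array methods a plugin may rely on.
var fileListLike = { 0: 'a.txt', 1: 'b.png', length: 2 };

// Equivalent of jQuery's $.makeArray: copy the items into a real array.
function makeArray(arrayLike) {
    return Array.prototype.slice.call(arrayLike);
}

var list = makeArray(fileListLike);

console.log(Array.isArray(fileListLike)); // false
console.log(Array.isArray(list));         // true
console.log(list.join(','));              // a.txt,b.png
```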

The HTML looks like this:

<div class='divUpload'>
    <div class='file'><input id="fileupload" type="file" name="file" multiple="multiple" /></div>
    <div class='dropzone'><div>drag files here</div></div>
</div>

Not much here either. Mainly different DIVs which are displayed according to the drag/drop events.

  • The file input field is the original upload control.
  • The dropzone DIV is the one which appears upon drag.

Finally there’s the CSS. I have implemented it one way but naturally this can be done completely differently. The “trick” is to add the “drag” class on the top DIV, which causes the browser to use different CSS rules on the nested DIV elements. The file input field becomes hidden and the drag feedback is shown.

body, html, .divUpload
{
    height: 100%;
    margin: 0;
    overflow: hidden;
    background-color: beige;
}

.divUpload
{
    margin: 8px;
}

.divUpload.drag .file
{
    display: none;
}

.divUpload.drag .dropzone
{
    display: table;
}

.dropzone
{
    border: 3px dashed black;
    display: none;
    font-size: 20px;
    font-weight: bold;
    height: 95%;
    position: relative;
    text-align: center;
    vertical-align: middle;
    width: 99%;
    background-color: rgba(37, 255, 78, 0.33);
}

.dropzone div
{
    display: table-cell;
    vertical-align: middle;
}
  • The .dropzone element is hidden by default (display: none).
  • When dragging takes place and the drag class is applied, the default file field is hidden and the drop zone is displayed.

The result looks like this:
afterdrag

 

Posted by on 24/02/2013 in Software Development

 


.NET 4 URL Routing for Web Services (asmx)

This post describes how to perform Url Routing to a Web Service. If you’re just interested in the solution, click here.

OK, this was a tough one.

The goal was to implement URL routing for an existing web site that has asmx Web Services. The intention was to allow white-labeling of the Web Services. You might ask why one would want to do that, so I’ll summarize by saying that one of the reasons was that I wanted to enjoy the advantages of URL Routing without having to rewrite the existing Web Services into a more “modern” alternative such as MVC. Other reasons are obvious: Web APIs are becoming more and more URL friendly and allow great flexibility.

Note that the web site is written in .NET 4 over IIS7, so I won’t get into how to configure routing or web services. You can read here about Routing, and specifically about Routing in WebForms here.

Unlike routing in MVC, which you can’t really do without, in WebForms this is less trivial. In WebForms Routing we have the MapPageRoute method, but nothing specific for Web Services or other handlers. I already blogged about how to route to a handler (ashx), but I found it much less trivial to accomplish routing for Web Services, and in this post I’ll try to explain why.

Setup
The Web Service in this example has a simple HelloWorld method that returns a greeting:

using System.Web;
using System.Web.Services;

[WebService(Namespace = "http://evolpin/")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
[System.Web.Script.Services.ScriptService]
public class MyWebService : System.Web.Services.WebService
{
    [WebMethod]
    public string HelloWorld(string name)
    {
        string product = HttpContext.Current.Request.RequestContext.RouteData.Values["product"] as string;
        return string.Format("Hello {0} from {1}", name, product);
    }
}
  • The ScriptService attribute is very important – it allows the Web Service to be invoked from client script.
  • In HelloWorld, I attempt to retrieve the {product} part from the route data. This is an example of the desired white-labeling.

The Global.asax looks like this:

<%@ Application Language="C#" %>
<%@ Import Namespace="System.Web.Routing" %>
<script runat="server">
    void Application_Start(object sender, EventArgs e)
    {
        RegisterRoutes(RouteTable.Routes);
    }

    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.Add(new System.Web.Routing.Route("{product}/xxx/{*pathInfo}", new WebServiceRouteHandler("~/MyWebService.asmx")));

        routes.MapPageRoute("", "{*catchall}", "~/Test.aspx");
    }
</script>

The web.config looks like this:

<?xml version="1.0"?>
<configuration>
  <system.web>
    <webServices>
      <protocols>
        <add name="HttpGet"/>
        <add name="HttpPost"/>
      </protocols>
    </webServices>
    <compilation debug="true" targetFramework="4.0" />
  </system.web>
</configuration>

The test page (Test.aspx) looks like this:
test page
And its code:

<%@ Page Language="C#" %>

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.8.3.min.js"></script>
</head>
<body>
    <form id="form1" runat="server">
    <script type="text/javascript">
        function doGet() {
            $.ajax({
                url: "http://localhost/Routing/" + $('#product').val() + "/xxx/HelloWorld?name=evolpin",
                type: "GET",
                success: function (result) {
                    alert(result.firstChild.textContent);
                }
            });
        }

        function doPostXml() {
            $.ajax({
                url: "http://localhost/Routing/" + $('#product').val() + "/xxx/HelloWorld",
                type: "POST",
                data: { name: 'evolpin' },
                success: function (result) {
                    alert(result.firstChild.textContent);
                }
            });
        }

        function doPostJson() {
            $.ajax({
                url: "http://localhost/Routing/" + $('#product').val() + "/xxx/HelloWorld",
                type: "POST",
                contentType: 'application/json',
                data: JSON.stringify({ name: 'evolpin' }),
                success: function (result) {
                    alert(result.d);
                }
            });
        }
    </script>
    <div>
        <input type="text" id='product' value='MyProduct' />
        <input type="button" value='GET' onclick='doGet();' />
        <input type="button" value='POST xml' onclick='doPostXml();' />
        <input type="button" value='POST json' onclick='doPostJson();' />
    </div>
    </form>
</body>
</html>
  • doGet() performs a GET with a query string.
  • doPostXml() performs a POST with the regular text/xml content type.
  • doPostJson() performs a POST using JSON.

Note that all these JavaScript methods take the “product name” from the ‘product’ text box.

And finally, the C# client test code which will invoke the Web Service using SOAP looks like this:

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            using (localhost.MyWebService ws = new localhost.MyWebService())
            {
                ws.Url = "http://localhost/Routing/MyProduct/xxx";
                string s = ws.HelloWorld("evolpin");
            }
        }
    }
}

Note how the client’s target Url is changed to match the route.

The code
After googling quite a lot (it seems like there’s very little information about this), I saw several posts that used the following solution:

using System.Web;
using System.Web.Routing;
public class WebServiceRouteHandler : IRouteHandler
{
    private string virtualPath;

    public WebServiceRouteHandler(string virtualPath)
    {
        this.virtualPath = virtualPath;
    }

    public IHttpHandler GetHttpHandler(RequestContext requestContext)
    {
        return new System.Web.Services.Protocols.WebServiceHandlerFactory().GetHandler(HttpContext.Current, "*", this.virtualPath, HttpContext.Current.Server.MapPath(this.virtualPath));
    }
}

While this solution works for SOAP, if you try to GET or POST, the route won’t work well and you’ll end up receiving the auto-generated WSDL documentation instead of calling your actual method. This occurs because the route I was using was extensionless and, for reasons explained below, the factory was unable to establish which protocol to use, so it ended up with the Documentation handler. The WebServiceHandlerFactory attempts to understand which handler should be used to serve the request, and to do so it queries the different protocols it supports (e.g. SOAP, GET, POST etc.). I’ll use a GET request in order to explain why this doesn’t work well. The GET request used in this example is:

http://localhost/Routing/MyProduct/xxx/HelloWorld?name=evolpin
  • ‘xxx’ is the Web Service alternate route name.
  • ‘MyProduct’ is the sample product using the white-label feature.
  • ‘HelloWorld’ is the web method name that I would like to invoke.
  • ‘name’ is the name of the argument.

WebServiceHandlerFactory

The above image shows how the factory iterates over the available server protocols in an attempt to locate one suitable to serve the request. Here are the different supported protocols:

ServerProtocolFactories

If we peek into one of the supported protocols, GET, we can see that it queries the PathInfo. If the PathInfo is too short, null is returned and the factory will continue to search for a suitable protocol.

HttpGetServerProtocolFactory

The PathInfo in this sample GET request is empty (“”). This is because I’m using an extensionless route. To make a long story short, after drilling down into the internals, the PathInfo in IIS7 is built by determining what the actual path info is (e.g. /HelloWorld). Because of the extensionless URL, IIS7 fails to determine the PathInfo and sets it to empty, as seen below:

IIS7WorkerRequest

Now that I realized that there’s a problem with extensionless URLs, as IIS can’t parse the PathInfo correctly, I tried changing the route by adding ‘.asmx’ to it, hoping that this will help to resolve the correct path info. So I changed Global.asax and Test.aspx to use the .asmx, and the new route looks like this:

http://localhost/Routing/MyProduct/xxx.asmx/HelloWorld

When I tried GET or POST (using xml for the response content type), the new route worked! IIS7WorkerRequest was able to parse the PathInfo, and the GET/POST protocol classes were satisfied and used successfully. However, when I tried to use JSON I got back the following error:

System.InvalidOperationException: Request format is invalid: application/json; charset=UTF-8.
   at System.Web.Services.Protocols.HttpServerProtocol.ReadParameters()
   at System.Web.Services.Protocols.WebServiceHandler.CoreProcessRequest()

Looking at HttpServerProtocol.ReadParameters, I could see that there’s an iteration which attempts to determine which reader class should be used, according not only to the request type (e.g. POST), but also according to the content type:

HttpServerProtocolReadParameters

So I placed a breakpoint in the Route handler and drilled down to see what readerTypes exist and found out that there were two: UrlParameterReader and HtmlFormParameterReader:

ref1

As the debugger showed that hasInputPayload was ‘true’, I realized that the HtmlFormParameterReader was the one used. Looking at the code there, it became clear why using JSON failed:

HtmlFormParameterReader

I realized that unless I am missing something, WebServiceHandlerFactory was simply not good enough because while it did solve the GET/POST and xml content-type routing issue, it wasn’t able to handle the JSON content, which I wasn’t willing to give up on.

I tried looking for a different approach, placed a breakpoint in the HelloWorld method and invoked it the “old fashioned way”, without any routing or JSON. This is what I saw in the debugger when I was searching for which handler was used:

ScriptHandlerFactory1

What’s this? It seems like the built-in ASP.NET handlers are not using the WebServiceHandlerFactory directly, but a certain ScriptHandlerFactory. OK, so let’s review the debugger when a JSON request is sent:

[Screenshot: debugger showing the handler used for a JSON request]

It seems like for JSON, the “built-in mechanism” is not using the WebServiceHandlerFactory at all, but a RestHandlerFactory, wrapped by the ScriptHandlerFactory. I have been using the wrong factory all along! Here’s the GetHandler of ScriptHandlerFactory:

[Screenshot: ScriptHandlerFactory.GetHandler code]

As you can see, the ScriptHandlerFactory is a wrapper that instantiates other handlers and wrappers according to the request method and content types. Using the ScriptHandlerFactory seems like the correct option in order to have routing invoke the web service correctly, with different request methods and content types.

Solution:
Unfortunately, for some reason ScriptHandlerFactory is internal, so I had to use Reflection to instantiate it. However, this didn’t work well at first, because I kept receiving the wrong handler classes (especially when I removed the .asmx added earlier to the route). After doing some more Reflection research, and remembering that the logic for retrieving the handlers and protocols was very dependent on the PathInfo, I looked for a way to change it. PathInfo is a getter-only property whose value depends on how IIS resolves the path. Fortunately, it seems the path can be changed by calling the HttpContext.RewritePath method. The resulting code looks like this:

using System;
using System.Web;
using System.Web.Routing;
public class WebServiceRouteHandler : IRouteHandler
{
    private static IHttpHandlerFactory ScriptHandlerFactory;
    static WebServiceRouteHandler()
    {
        var assembly = typeof(System.Web.Script.Services.ScriptMethodAttribute).Assembly;
        var type = assembly.GetType("System.Web.Script.Services.ScriptHandlerFactory");
        ScriptHandlerFactory = (IHttpHandlerFactory)Activator.CreateInstance(type, true);
    }

    private string virtualPath;
    public WebServiceRouteHandler(string virtualPath)
    {
        this.virtualPath = virtualPath;
    }

    public IHttpHandler GetHttpHandler(RequestContext requestContext)
    {
        string pathInfo = requestContext.RouteData.Values["pathInfo"] as string;
        if (!string.IsNullOrWhiteSpace(pathInfo))
            pathInfo = string.Format("/{0}", pathInfo);

        requestContext.HttpContext.RewritePath(this.virtualPath, pathInfo, requestContext.HttpContext.Request.QueryString.ToString());
        var handler = ScriptHandlerFactory.GetHandler(HttpContext.Current, requestContext.HttpContext.Request.HttpMethod, this.virtualPath, requestContext.HttpContext.Server.MapPath(this.virtualPath));
        return handler;
    }
}
  • Lines 6-12 create a static ScriptHandlerFactory (no need to recreate it every time the RouteHandler is used).
  • Lines 14-18 define a constructor that receives the virtual path for which the RouteHandler is instantiated.
  • Lines 22-24 determine the pathInfo we want.
  • Line 26 modifies the PathInfo. This is the magic!
  • Line 27 invokes the logic of the ScriptHandlerFactory, which returns the appropriate handler according to the request method and content type.

And the result (GET/POST, xml/JSON):

[Screenshot: test page showing successful GET/POST, xml/JSON results]

SOAP (this was tested working with Soap 1.1 and 1.2):

[Screenshot: successful SOAP invocation]

Summary
I would be more than happy if someone has a better solution to this. The solution described here is still not complete. Besides the use of Reflection, what’s missing is that the Documentation protocol (i.e. WSDL auto generation) should invoke the different methods using the {product} part of the Url Route. If you try to use this solution and activate the HelloWorld method using the WSDL help page, you’ll see that it uses the wrong url for invocation.

What’s good about this solution is that it’s very simple, and it allows SOAP, GET, POST and JSON. It also lets you enjoy the advantages of Url routing without having to use the asmx extension in the route. In other words, complete Url Routing flexibility. I also successfully tested a version of this solution on IIS6.

Credits: I used the excellent ILSpy for the Reflection work; it is a very good substitute for Red-Gate’s no-longer-free Reflector.

 
Posted on 30/12/2012 in Software Development

InvalidCastException when using Assembly.LoadFile

Here is something that I have come across which puzzled me. If you are required to write an extensible application that uses plugins, and you have used dynamically loaded assemblies to accomplish that, then you might have encountered the following exception ([path] and [other path] are placeholders for actual file paths):

“[A]Plugin.MyPlugin cannot be cast to [B]Plugin.MyPlugin. Type A originates from ‘Plugin, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null’ in the context ‘LoadNeither’ at location ‘[path]Plugin.dll’. Type B originates from ‘Plugin, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null’ in the context ‘Default’ at location ‘[other path]Plugin.dll’.”

Setup
I’ll describe a sample scenario of how this exception might occur. In this sample I’ll be using Xml Serialization for storing the plugin’s state, but the issue is not unique to Xml Serialization and you might encounter it regardless.

So, in this sample the plugin is an ordinary class, within a Class Library, that implements a regular .NET Interface (in a different Class Library). There is one executable that will use reflection to load the plugin and use it via the interface:

  1. IPlugin is the interface plugins are required to implement (Class Library project ‘PluginLib’).
  2. MyPlugin class is the inheriting plugin (Class Library project ‘Plugin’ referencing ‘PluginLib’).
  3. MainProg is the main extensible application (exe referencing PluginLib, which will dynamically load ‘MyPlugin’, residing in the ‘Plugin’ assembly).

PluginLib consists of a simple interface:

namespace PluginLib
{
    public interface IPlugin
    {
        void DoSomething();
    }
}

The ‘Plugin’ project is a very simple one (the ‘State’ property is not required for demonstrating this issue, and only serves as an excuse to use Xml Serialization in this sample):

namespace Plugin
{
    public class MyPlugin : PluginLib.IPlugin
    {
        public string State { get; set; }
        public void DoSomething()
        {
            // do something
        }
    }
}

MainProg dynamically loads the plugin for use (note that no try-catch clauses or any validations are shown here, but you would probably want to have those in “real world” code):

// [path] is a place holder for an absolute path located elsewhere
string pluginPath = @"[path]Plugin.dll";

// load the plugin from the specified path
Assembly assembly = Assembly.LoadFile(pluginPath);

// detect the first Type that implements IPlugin (you should test the result for 'null')
var type = assembly.GetTypes().FirstOrDefault(t => typeof(IPlugin).IsAssignableFrom(t));

// instantiate the plugin using the detected Type
IPlugin plugin = (IPlugin)Activator.CreateInstance(type);

// use the plugin
plugin.DoSomething();

So far so good. This works perfectly. Now let’s assume that we would like to use Xml Serialization to maintain the plugin’s state. Although there are many ways to maintain state, Xml Serialization is a very convenient one. I’ll revise the code a little:

// [path] is a place holder for an absolute path located elsewhere
string pluginPath = @"[path]Plugin.dll";
Assembly assembly = Assembly.LoadFile(pluginPath);

var type = assembly.GetTypes().FirstOrDefault(t => typeof(IPlugin).IsAssignableFrom(t));
IPlugin plugin = (IPlugin)Activator.CreateInstance(type);
plugin.DoSomething();

XmlSerializer serializer = new XmlSerializer(plugin.GetType());
using (MemoryStream ms = new MemoryStream())
{
    serializer.Serialize(ms, plugin);
}
  • Line 9: Notice that the XmlSerializer uses the actual Type of the created instance.
  • Line 12: This is where the exception will eventually occur.

For now, this will also work well. However, as I wanted to use the plugin “out of the box” without requiring the end user to configure anything, I added a reference from MainProg directly to Plugin (“Add Reference…”). This way MyPlugin will be available at “design time” and will surely exist in the bin folder for deployment. This is when I got the exception on line 12.

Explanation
After googling this exception and making several attempts to modify my code to get it to work, I came across this excellent post, which sums up the main differences between the different static Load methods of the Assembly class. As the exception states, and as can be understood from the post, there are different contexts into which assemblies are loaded:

  • The ‘Load’ context is the Default context. ‘Design time’ referenced assemblies would load there (i.e. GAC assemblies or assemblies located in the private Bin folder etc.) In other words, where .NET assemblies are normally loaded by the process of probing. In her interview, Suzanne defines the Load method as the recommended good practice for loading assemblies.
  • The ‘LoadFrom’ context, to my understanding, is used when you would like the assembly loader to load an assembly which can’t be found by regular probing. Assemblies can be loaded into this context by explicitly calling Assembly.LoadFrom(). There are several advantages to this method, for example when there are other dependencies referenced by this assembly that need to be loaded.
  • The ‘Neither’ (or rather ‘LoadNeither’) context is used when Fusion is not involved (in her interview, Suzanne explains that Fusion is the part responsible for locating an assembly, for example in the private Bin folder or the GAC; so Fusion is not the Loader). Assemblies are loaded here when using Assembly.Load(byte[]), Reflection.Emit or Assembly.LoadFile(). To my understanding, this context is to be used when you would like to save probing costs or have more control over the assemblies loaded. There are many articles and blogs that refer to Assembly.LoadFile() as a bad alternative, but I’m not sure that it is. As in other programming areas, I assume that this context addresses a need for particular projects. In the interview, Suzanne explains that there might be situations where you are required to recompile your assembly and reload it without using a different AppDomain, and that’s why the Neither context may come in handy.
  • There is another context not mentioned in that post (as it’s from 2003), and that’s the ReflectionOnly context. In this context you can load assemblies only to be examined using reflection, but you cannot run the code. For example, you may want to examine an assembly compiled for 64 bit on a 32 bit application, or check if an assembly is compatible with specific requirements prior to loading it into an executable context.

Now the exception is a lot clearer. By using Assembly.LoadFile(), the plugin assembly was loaded into the ‘LoadNeither’ context, whereas the Xml Serializer attempts to use a MyPlugin class already loaded into the default Load context. Although these are the same classes as far as I’m concerned, they differ in their load contexts and are therefore considered different classes by .NET.
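A loose analogy may help make this concrete. The following JavaScript sketch (purely illustrative, nothing to do with .NET itself) shows how two types that look identical, but were created in two different “contexts”, are nevertheless distinct:

```javascript
// Loose analogy of the .NET load-context problem: every time a "context"
// loads, it produces its own MyPlugin type, even though the code is identical.
function loadPluginContext() {
  class MyPlugin {}          // a fresh class object per "context"
  return new MyPlugin();
}

const a = loadPluginContext(); // "Default" context
const b = loadPluginContext(); // "LoadNeither" context

// a and b are instances of two DIFFERENT MyPlugin classes:
console.log(Object.getPrototypeOf(a) === Object.getPrototypeOf(b)); // false
```

Just as here, the CLR sees two distinct MyPlugin types and refuses to cast one to the other.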

Solution
So, a decision is to be taken here:

  • Either remove the “direct reference” to the plugin and always load it dynamically using LoadFile (or better, use LoadFrom). Only a single MyPlugin will exist in the different contexts and therefore the exception will be prevented. Or,
  • Figure out how to load the assembly into the Load context (or rather, use the already existing “design time” Plugin assembly in the private Bin folder even if it differs from the file path specified by LoadFile).

I wanted the second option because, as far as my app was concerned, it was OK to assume that no two plugins of the same name would co-exist, and if the dll was to be loaded dynamically (just like the plugin that it is), it would be OK to prefer the dll within the private Bin folder. So, I just needed to figure out how to load my plugin using the Load context, because Assembly.Load() does not have an overload for loading from a file path.

Luckily, reading the comments in the above post provided a solution. It turns out that you can use the static AssemblyName.GetAssemblyName(string assemblyFile) to retrieve a valid AssemblyName object for assemblyFile. Once you have an AssemblyName, you can pass it as an argument to Assembly.Load(…). If an assembly with that AssemblyName is already loaded, that assembly will be used; or, if an assembly that corresponds to the AssemblyName is found within the probing paths, it will be favored and loaded. Only if an assembly that corresponds to that AssemblyName is not found will the assembly you specified in the file path be loaded. Suzanne commented that this might be a little costly in probing performance, but the behavior is exactly what I wanted, and I wasn’t bothered by a possible performance issue in my particular case, as my app doesn’t load assemblies all day long.

So, the modified code is as follows:

// [path] is a place holder for an absolute path located elsewhere
string pluginPath = @"[path]Plugin.dll";
AssemblyName name = AssemblyName.GetAssemblyName(pluginPath);
Assembly assembly = Assembly.Load(name);

var type = assembly.GetTypes().FirstOrDefault(t => typeof(IPlugin).IsAssignableFrom(t));
IPlugin plugin = (IPlugin)Activator.CreateInstance(type);

XmlSerializer serializer = new XmlSerializer(plugin.GetType());

using (MemoryStream ms = new MemoryStream())
{
    serializer.Serialize(ms, plugin);
}

The problem is solved, although one must take into account that with this solution, the plugin specified in the file path might not be the one actually used at run-time. If you have to ensure that the specified plugin is indeed the one used, you’ll have to use one of the other Load contexts and handle the possibility of receiving the exception which started it all.


Summary
When using reflection to dynamically load your assemblies, there are different methods of doing so. As a rule of thumb, you should always try to load your assemblies into the default Load context. The next alternative is the LoadFrom context, and LoadFile is your last resort, to be used only in particular cases where the other two are not adequate.

If you are going to use the method described in this post (i.e. loading using AssemblyName in order to ensure loading into the default Load context), you should remember that plugins might have names identical to assemblies found in the probing path, or simply to other assemblies that are already loaded. If your application requires extensive plugin usage, you should probably develop some method of checking whether those dynamically loaded plugins have matching AssemblyNames, and possibly consider notifying users that there is a problem. Otherwise you might run into unexpected behavior. The same goes for development and debugging: you may think that you have loaded an assembly by specifying a valid path name, but end up using an assembly from a different path (or from the GAC), with different behavior and classes.

 
Posted on 11/11/2012 in Software Development

The CORS

Intro
It’s a common practice to use Ajax, Silverlight or Flash to communicate with the server on the same domain. However, at times, you are required to access another domain, and this may be interpreted by browsers as a security issue. This post specifically addresses how to do so using CORS: “Cross-origin resource sharing”.

You might say that the purpose of CORS is to help browsers understand that accessing a resource from a different domain is legit. Consider this example: there are web services available on a certain other domain. You wish to access them from a browser using Ajax, but the browser interprets this as a security issue and forbids it. CORS is the now-standard way to perform a handshake of sorts between the browser and the server before the Web Services can be consumed: the browser queries the server whether it’s OK to access a Web Service and acts according to the reply. Note that although the server is what needs to be configured to allow access to your client, it is the browser that enforces the restriction and will stop the request if no such access was granted. A different client, such as a .NET application, need not worry about CORS unless for some reason the developer is required to support it, as the server does not force clients to use CORS.

Supporting CORS on the server side is merely a matter of sending several response headers to the client browser. On IIS this is done either by adding headers to the outgoing responses in code or by tweaking web.config as required.

Problem 1: Cross-site scripting
Ingredients:

  • 1 Web Service which needs to be consumed.
  • 1 IIS to host that Web Service on a domain other than the client browser domain.
  • 1 IIS to host a client script trying to consume that Web Service.
  • 1 Firefox browser (tested version is 14.0.1) and 1 Chrome browser (tested version is 20).

Let’s view the problem: our client JavaScript runs on ‘localhost’ and will try to consume a Web Service running in domain ‘OtherServer’. This is the server code:

using System.Web.Services;

[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
[System.Web.Script.Services.ScriptService]
public class MyService : System.Web.Services.WebService
{
    [WebMethod]
    public string HelloWorld(string name)
    {
        return "Hello " + name;
    }
}

web.config setup:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.web>
    <compilation debug="true" targetFramework="4.0">
    </compilation>
    <webServices>
      <protocols>
        <add name="HttpGet" />
        <add name="HttpPost" />
      </protocols>
    </webServices>
  </system.web>
</configuration>

The client code (using jQuery 1.7.2):

$.ajax({
    type: 'get',
    url: 'http://OtherServer/MyServer/MyService.asmx/HelloWorld',
    data: { name: "Joe" },
    contentType: 'text/plain',
    success: function (data) {
        alert("Data Loaded: " + data);
    }
});

Note that the client code does not send nor expect a JSON type. This is deliberate; the reason will be explained later on.

Here is the result of this request:

Running the above client produces somewhat conflicting results in Firefox and Chrome. In FF, exploring Firebug’s Net tab shows as if the request and response went as expected. As you can see in the screenshot below, the request was sent from localhost to ‘otherserver’ and returned with a 200 (OK). But, as you can also see, the response text is empty. Server-side debugging proves that the request arrived as expected. However, client-side debugging shows a somewhat different picture, as the status of the request is returned as 0 and not 200. In Chrome, the Network tab shows an almost immediate error and sets the status of the request to ‘(canceled)’. When reverting to a more “native” use of XMLHttpRequest rather than jQuery, the status of the request is also returned as 0 instead of 200.

BTW: running this exact client script from within ‘otherserver’ domain will work OK, with the expected xml returned.

Solution 1: Access-Control-Allow-Origin
In order to make this work, the server has to send back the “Access-Control-Allow-Origin” response header, acknowledging the client. The value of this response header can be either ‘*’ or an actual expected client domain, thus allowing a more controlled CORS interaction, if required. You may have noticed in the previous screenshot that the browser automatically sends an ‘Origin’ header when the request is a cross-site request. That is the name of the domain that the server should return so that the request will be allowed.
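The server’s decision can be sketched as a tiny, framework-agnostic function (a hypothetical helper for illustration only, not part of IIS or ASP.NET):

```javascript
// Decide what Access-Control-Allow-Origin value (if any) to send back.
// `requestOrigin` is the browser-sent Origin header; `allowed` is either
// the string '*' or an array of permitted origins.
function allowOriginHeader(requestOrigin, allowed) {
  if (allowed === '*') return '*';                          // any origin may read the response
  if (allowed.includes(requestOrigin)) return requestOrigin; // echo the origin back
  return null;                                              // omit the header: the browser blocks access
}
```

For example, allowOriginHeader('http://localhost', ['http://localhost']) returns 'http://localhost', while an unlisted origin yields null, in which case no header is sent and the browser blocks the response.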

Sending back the “Access-Control-Allow-Origin” response header can be done either by adding a single line of code, as below:

using System.Web;
using System.Web.Services;

[System.Web.Script.Services.ScriptService]
public class MyService : System.Web.Services.WebService
{
    [WebMethod]
    public string HelloWorld(string name)
    {
        HttpContext.Current.Response.AddHeader("Access-Control-Allow-Origin", "http://localhost");
        return "Hello " + name;
    }
}

Or by tweaking the web.config to return that response header, for all requests in this sample (IIS 7 or above):

  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="Access-Control-Allow-Origin" value="*" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>

Note: If you do decide to specify an explicit origin, you must provide the origin as it is sent in the request headers. In this example it is ‘http://localhost’, but naturally this has to be the actual domain name a consuming script is running from.

Now that we resend the request, we receive the result as expected (I used the web.config tweak in this example):

Problem 2: Sending JSON types
In a previous post I described the “magic” behind sending JSON strings to the server, so that the server performs auto binding to server-side types. Let’s try it now, with a cross-domain request:

$.ajax({
    type: 'POST',
    url: 'http://OtherServer/MyServer/MyService.asmx/HelloWorld',
    data: JSON.stringify({ name: "Joe" }),
    contentType: 'application/json',
    dataType: 'json',
    success: function (data) {
        alert("Data Loaded: " + data);
    }
});

Running this client script fails. Note the traffic:

As you can see, in this case the browser did not send a POST request as expected, but an OPTIONS request. This procedure is called “Preflight”, which means that the browser sends an implicit request to the server, asking whether the request is legit. Only if the server replies that the request is indeed legit, using response headers again, will the browser continue with the original request as planned. So, looking at the screenshot above, you can see several things:

  1. The browser sends an OPTIONS request.
  2. The browser sends two new request headers: “Access-Control-Request-Method” with a value of “POST”, and “Access-Control-Request-Headers” with a value of “content-type”. This quite clearly states that the browser is asking whether it can use POST and whether it can send a “content-type” header in the request.
  3. The server replies that “POST” is indeed allowed (amongst other methods), and maintains the origin as ‘*’ like before.
  4. The server doesn’t tell the browser anything about the content-type request header, which is the problem here.

In this case the browser was not satisfied with the preflight response from the server and therefore the original request was not sent.
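The browser’s evaluation of the preflight response can be sketched roughly as follows (a simplified, hypothetical model: real browsers also treat simple methods and safelisted headers as implicitly allowed):

```javascript
// Given what the preflight asked for and what the server's response headers
// allow, decide whether the original request may proceed.
function preflightAllowed(requested, responseHeaders) {
  const methods = (responseHeaders['Access-Control-Allow-Methods'] || '')
    .split(',').map(s => s.trim());
  const headers = (responseHeaders['Access-Control-Allow-Headers'] || '')
    .split(',').map(s => s.trim().toLowerCase());
  if (!methods.includes(requested.method)) return false;  // requested method not allowed
  // every requested header must be acknowledged by the server:
  return requested.headers.every(h => headers.includes(h.toLowerCase()));
}
```

With the situation above, preflightAllowed({ method: 'POST', headers: ['content-type'] }, { 'Access-Control-Allow-Methods': 'POST,GET,OPTIONS' }) is false; it only becomes true once Content-Type is listed in Access-Control-Allow-Headers, which is exactly the fix below.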

Important: As you can see from problem/solution 1, not all requests are preflighted. The FF docs sum this up as follows. A request will be preflighted in the following cases:

  • It uses methods other than GET or POST. Also, if POST is used to send request data with a Content-Type other than application/x-www-form-urlencoded, multipart/form-data, or text/plain, e.g. if the POST request sends an XML payload to the server using application/xml or text/xml, then the request is preflighted.
  • It sets custom headers in the request (e.g. the request uses a header such as X-PINGOTHER)

So by sending a content-type of ‘application/json’, we have made this request perform a “preflight” request.
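Those rules can be expressed as a small predicate (simplified and hypothetical: the real rules also safelist a few standard headers, which this sketch treats as custom):

```javascript
// Content types that do NOT trigger a preflight on their own.
const SIMPLE_CONTENT_TYPES = [
  'application/x-www-form-urlencoded', 'multipart/form-data', 'text/plain'];

// Decide whether a cross-origin request will be preflighted.
function isPreflighted(method, headers) {
  if (method !== 'GET' && method !== 'POST') return true;    // non-GET/POST methods
  for (const name of Object.keys(headers)) {
    if (name.toLowerCase() === 'content-type') {
      const type = headers[name].split(';')[0].trim();
      if (!SIMPLE_CONTENT_TYPES.includes(type)) return true; // e.g. application/json
    } else {
      return true;                                           // custom header (e.g. X-PINGOTHER)
    }
  }
  return false;
}
```

So isPreflighted('POST', { 'Content-Type': 'application/json' }) is true, while the earlier text/plain GET request went out without a preflight.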

Solution 2: Access-Control-Allow-Headers
The solution to this problem is simply to tweak the server to acknowledge the content-type header, thus causing the browser to understand that the content-type header request is legit. Here goes:

  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="Access-Control-Allow-Origin" value="*" />
        <add name="Access-Control-Allow-Headers" value="Content-Type" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>

When we try our request again it works as expected:

Note: You may wish to limit or explicitly detail which methods of requesting information are supported by using the “Access-Control-Allow-Methods” response header like so:

  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="Access-Control-Allow-Methods" value="POST,GET,OPTIONS" />
        <add name="Access-Control-Allow-Origin" value="*" />
        <add name="Access-Control-Allow-Headers" value="Content-Type" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>

Problem 3: (Basic) Authentication
If the server resource is secured and requires authentication before a client can actually consume it, you’ll have to modify your request accordingly, or you’ll end up with a 401 authorization response from the server.

For example, if you protect your Web Service with Basic Authentication (and disable Anonymous Authentication, of course), then you have to have your request conform to the Basic Authentication requirements, which means that you have to add a request header of “Authorization” with a value that begins with “Basic”, followed by a Base64 encoding of a combined username:password string (which is in no way encrypted – but that is another matter). So, borrowing Wikipedia’s sample, if we have an “Aladdin” user account with a password of “open sesame”, we have to convert the string “Aladdin:open sesame” to Base64 and add it as an Authorization request header. The request header should be: “Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==”.
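The header value can be computed like this (shown with Node’s Buffer; in a browser you would use btoa instead):

```javascript
// Base64-encode "username:password" for Basic Authentication
// (the classic Aladdin example used above).
const credentials = Buffer.from('Aladdin:open sesame').toString('base64');
const header = 'Basic ' + credentials;
// header === 'Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ=='
```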

UPDATE: Wikipedia’s relevant page was modified by the time this post was published, and their excellent Aladdin sample was somewhat changed.

$.ajax({
    type: 'POST',
    url: 'http://OtherServer/MyServer/MyService.asmx/HelloWorld',
    data: JSON.stringify({ name: "Joe" }),
    contentType: 'application/json',
    beforeSend: function ( xhr ) {
        xhr.setRequestHeader("Authorization", "Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==");
    },
    dataType: 'json',
    success: function (data) {
        alert("Data Loaded: " + data);
    }
});

We also have to change the “Access-Control-Allow-Headers” to allow a request header of “Authorization”, or the OPTIONS will fail again:

  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="Access-Control-Allow-Methods" value="POST,GET,OPTIONS" />
        <add name="Access-Control-Allow-Origin" value="*" />
        <add name="Access-Control-Allow-Headers" value="Content-Type,Authorization" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>

However, this change is not sufficient, at least not for “preflighted” requests. If you change the security settings of your IIS web site to Basic Authentication but require a preflighted request, it turns out that the preflight itself will fail authentication. This is because the preflight request does NOT send the “Authorization” header. IIS automatically responds with a 401 (Unauthorized), as it cannot authenticate the preflight request. If you test this in Firefox, the preflight will fail and the original request will not be sent:

Interestingly enough, on Chrome this works fine, despite the 401 returned from the server:

In order to get this to work on FF, I made several attempts using various combinations of the following (you can read about all of them in the jQuery.ajax documentation):

  • I added withCredentials to the request (if you use this, you have to have your server return an Access-Control-Allow-Credentials: true header, and have Access-Control-Allow-Origin return an explicit origin).
  • I added username/password to the request.
  • I added the patch designated to fix the known FF bug of getAllResponseHeaders() not returning the response headers correctly.
  • Also added the “Access-Control-Allow-Credentials” to the server response.
  • Changed the “Access-Control-Allow-Origin” from “*” to “http://localhost” (the request origin), as the FF docs specify.

I also reverted from jQuery to the “native” XMLHttpRequest to no avail. Nothing I tried made it work on FF.

So, which browser is “right”? Is Firefox right to block the request because a 401 was received, or is Chrome right to ignore the 401 on the preflight? After all, it would make sense that because the Authorization header is not sent on the preflight, the browser would be “clever enough” to ignore the 401. However, as you can read in this reported FF bug, it turns out that the CORS spec requires the server to return a 200 (OK) on the preflight before proceeding with the original request. According to that, Firefox has the correct implementation and Chrome has a bug (if you follow the reported bug you’ll learn that this is actually a webkit bug, although this is yet to be concluded).

Solution 3: Handle OPTIONS “manually”
Here comes the bad part. While for Integrated Application Pools you can code a custom module to bypass IIS behavior and return a 200 for the preflight, you cannot do that for a Classic Application Pool. In an Integrated Application Pool, ASP.NET is integrated into IIS’ pipeline, allowing you to customize the authentication mechanism in this particular case. In a Classic Application Pool, however, module code will run only after IIS has authorized the request (or rather – failed to authorize it, in this case).

First, let’s review the Integrated Application Pool patch, in the form of an HttpModule:

using System.Net;
using System.Web;

public class CORSModule : IHttpModule
{
    public void Dispose() { }

    public void Init(HttpApplication context)
    {
        context.PreSendRequestHeaders += delegate
        {
            if (context.Request.HttpMethod == "OPTIONS")
            {
                var response = context.Response;
                response.StatusCode = (int)HttpStatusCode.OK;
            }
        };
    }
}

The code above is very non-restrictive – it allows all preflights (or any other usage of the OPTIONS verb) to get away without authentication, so you had better consider revising it to your needs.

The web.config in IIS7 Integrated App Pool now incorporates the following to support the HttpModule:

  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="Access-Control-Allow-Methods" value="POST,GET,OPTIONS" />
        <add name="Access-Control-Allow-Origin" value="*" />
        <add name="Access-Control-Allow-Headers" value="Content-Type,Authorization" />
      </customHeaders>
    </httpProtocol>
    <modules>
      <add name="CORSModule" type="CORSModule" />
    </modules>
  </system.webServer>

The result is working as expected:

For a Classic Application Pool, there isn’t an easy solution. All my attempts to reconfigure IIS to let OPTIONS through despite Basic Authentication have failed. If anyone knows how to do this – please comment. A minor attempt to dispute the decision to reject a preflight based on the authorization issue has also failed. Regarding the Classic Application Pool issue in particular (i.e. that older web servers cannot be tweaked to allow the OPTIONS request), the response was that we should wait till these servers are obsoleted (??)

However, you might consider a different solution – you can revert from the default IIS Basic Authentication module back to Anonymous authentication, and handle the authorization yourself (or use an open-source solution like MADAM). This solution means that preflights will not fail authentication, but you are still able to require credentials for accessing the different resources (an explanation can be found below the code):

void Application_AuthenticateRequest(object sender, EventArgs e)
{
    bool authorized = false;
    string authorization = Request.Headers["Authorization"];
    if (!string.IsNullOrWhiteSpace(authorization))
    {
        string[] parts = authorization.Split(' ');
        if (parts[0] == "Basic")//basic authentication
        {
            authorization = UTF8Encoding.UTF8.GetString(Convert.FromBase64String(parts[1]));
            parts = authorization.Split(new[] { ':' }, 2); // limit to 2 parts: a password may itself contain ':'
            string username = parts[0];
            string password = parts[1];

            // TODO: perform authentication
            //authorized = FormsAuthentication.Authenticate(username, password);
            //authorized = Membership.ValidateUser(username, password);
            using (PrincipalContext context = new PrincipalContext(System.DirectoryServices.AccountManagement.ContextType.Machine))
            {
                authorized = context.ValidateCredentials(username, password);
            }

            if (authorized)
            {
                HttpContext.Current.User = new System.Security.Principal.GenericPrincipal(new System.Security.Principal.GenericIdentity(username), null);
            }
        }
    }

    if (!authorized)
    {
        Response.AddHeader("WWW-Authenticate", string.Format("Basic realm=\"{0}\"", Request.Url.Host));
        Response.StatusCode = (int)System.Net.HttpStatusCode.Unauthorized;
        Response.End();
    }
}

As you can see from the code above, Basic Authentication is handled “manually”:

  • Application_AuthenticateRequest means this code runs when each request is authenticated.
  • The code first checks whether an Authorization header is present.
  • Authentication proceeds only for the Basic scheme (you can customize this to your needs).
  • The commented-out alternatives demonstrate different methods of authentication. You have to choose what’s best for you or add your own.
  • Setting HttpContext.Current.User is optional and populates the current context with an identity that you might need in your Web Service later on.
  • If authentication fails, the code returns a 401 and a WWW-Authenticate header, which indicates to the client which authentication methods are allowed.

This solution is a variant of the solution described in a previous post I made about having a “Mixed Authentication” for a website.

Summary

I really can’t tell, but it seems like CORS is here to stay. Unlike JSONP, which is a workaround that exploits a security hole in today’s browsers – one that might be dealt with someday – CORS is an attempt to formalize a more secure way to protect the browsing user. The thing is that CORS is not as trivial as I would have preferred. True, once you get the hang of it, it makes sense, but it has its limitations, and getting it right without spending too much time on configuration is not easy. As for the IIS Classic App Pool and non-anonymous configuration issue – well, that seems to be a real problem. You can try to follow this thread and see if something comes up.

Credits

Lame, but this post’s title is credited to The Corrs.

 
7 Comments

Posted on 12/10/2012 in Software Development

 


IIS “Mixed Authentication”: Securing a web site with Basic Authentication and Forms Authentication

I have come across a situation where I needed to secure a specific web service with Basic Authentication, in a web site that is secured using Forms Authentication. Unfortunately, in IIS this is not as trivial as I hoped it would be. This isn’t another case of having a website for Intranet and Internet users, where you attempt to configure your website for both Forms Authentication and Windows Authentication with Single Sign-On (SSO), which you can read about here or here. In this case, the website is secured with Forms Authentication but exposes a web service for mobile devices to consume, so no Intranet and SSO are relevant. Authentication is performed here using Basic Authentication, which means that the client (mobile) has to send an Authorization header with a Base64 encoding of the username and password (note that Base64 is an encoding, not encryption, so the credentials are not secured in this scenario – use SSL).

I really banged my head over this one. I tried quite a few options to get this to work using standard IIS configuration; the most notable are listed below:

Create a virtual folder and change the authorization rules for that folder
This was truly my preferred solution. If you could take a specific file or folder and give it different authentication settings, the solution would have been simple and straightforward. Unfortunately, IIS shows the following warnings when you try that:

What you see here is a MixedAuth web application that is set up for Forms Authentication, and a ‘Sec’ folder which should have been secured with Basic Authentication. As you can see, when setting Basic Authentication to Enabled and clicking on Forms Authentication in order to disable it (after successfully disabling Anonymous Authentication), IIS displays two reservations:

  1. The conflicting authentication modes “cannot be used simultaneously”.
  2. Forms Authentication is read-only and cannot be disabled.

You’ll get the same errors if you try securing the other way around: The web app secured with Basic Authentication and a virtual folder secured with Forms Authentication.

Create a secured web application and change the authorization rules for that folder
This looks like the next best thing. I thought that, just like you can create a web application under “Default Web Site” and change its authorization settings to whatever you require, why not make a “sub” web application instead of a virtual folder and change its settings completely. Here goes:

Voila! I thought I had it. So easy! so trivial! so wrong…

Yes, this mode will provide you with a “mixed authentication mode”, but…

  1. These are entirely two different applications. Even if you choose the same application pool for both, they have different App Domains, different Sessions etc.
  2. Even if you decide that it is OK that these are entirely different applications, they do not share the same Bin or App_Code folders. So, if you rely on those folders in your code, you’ll have to duplicate them (and no, creating a Bin virtual folder under “Sec” and pointing it towards the physical path of the Bin under “MixedAuth” will not work).

In other words, creating a “sub web application” is no different than creating any other application and therefore is unlikely to answer your needs.

Enter MADAM
I decided to google some more for possible solutions and found an interesting article hosted on MSDN, written in 2006 (ASP.NET 1 to 2). The proposed solution is an assembly called MADAM (“Mixed Authentication Disposition ASP.NET Module”), which basically provides a simple solution:

MADAM’s objective is to allow a page developer to indicate that, under certain conditions, Forms authentication should be muted in favor of the standard HTTP authentication protocol.

In other words, using a module, certain configuration determines whether a page will be processed using Forms Authentication or Basic Authentication (or another). Here’s another quote that helps clarify this:

Much like how FormsAuthenticationModule replaces the 401 status returned by the authorization module with a 302 status code in order to redirect the user to the login page, MADAM’s HTTP module reverses the effect by switching the 302 back to a 401.

Custom solution
The MADAM article is quite thorough, so if you would like to skip it and jump right in, I suggest that you download the code sample at the beginning of the article. It contains not only the source code for MADAM but also a web.config from which you can copy-paste the desired configuration. In my particular case I needed a different authentication module than the ones supported by MADAM, and therefore I thought that perhaps I should implement a more custom and straightforward solution that serves my needs.

To my understanding, the only way I could combine “mixed authentications” was to set my website to Anonymous Authentication in IIS, and use Global.asax to restrict access to certain pages by returning a 401 (Classic App Pool). For those particular pages, if the user is not already logged in, I check whether there’s an Authorization request header and perform authentication as required. It is important to note that how you perform the authentication on the server side is completely up to you and will not be performed by IIS in such a case. Therefore I have included in the following code several sample authentication methods that you can choose from (or add your own). This code is embedded within Global.asax and is compatible with a Classic Application Pool. For an Integrated Application Pool you may choose an alternative and use an HttpModule to accomplish similar behavior.

Note: The following code supports and follows the Basic Authentication “protocol”. Just as a reminder, this means that the client sends an Authorization request header with the word “Basic” and the credentials (“username:password”) encoded as a Base64 string (e.g. “Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==”). If you require something different, you will have to adjust the code accordingly (an explanation follows the code).

<%@ Application Language="C#" %>
<%@ Import Namespace="System.DirectoryServices.AccountManagement" %>

    void Application_AuthenticateRequest(object sender, EventArgs e)
    {
        if (User == null || !User.Identity.IsAuthenticated)
        {
            string page = Request.Url.Segments.Last();
            if ("Secured.aspx".Equals(page, StringComparison.InvariantCultureIgnoreCase) || Request.Url.Segments.Any(s=>s.Equals("WSSecured.asmx/", StringComparison.InvariantCultureIgnoreCase)))
            {
                bool authorized = false;
                string authorization = Request.Headers["Authorization"];
                if (!string.IsNullOrWhiteSpace(authorization))
                {
                    string[] parts = authorization.Split(' ');
                    if (parts[0] == "Basic")//basic authentication
                    {
                        authorization = UTF8Encoding.UTF8.GetString(Convert.FromBase64String(parts[1]));
                        parts = authorization.Split(new[] { ':' }, 2); // limit to 2 parts so the password may contain ':'
                        string username = parts[0];
                        string password = parts[1];

                        // TODO: perform authentication
                        //authorized = FormsAuthentication.Authenticate(username, password);
                        //authorized = Membership.ValidateUser(username, password);
                        using (PrincipalContext context = new PrincipalContext(ContextType.Machine))
                        {
                            authorized = context.ValidateCredentials(username, password);
                        }

                        if (authorized)
                        {
                            HttpContext.Current.User = new System.Security.Principal.GenericPrincipal(new System.Security.Principal.GenericIdentity(username), null);
                            FormsAuthentication.SetAuthCookie(HttpContext.Current.User.Identity.Name, false);
                        }
                    }
                }

                if (!authorized)
                {
                    HttpContext.Current.Items["code"] = 1;
                    Response.End();
                }
            }
        }
    }

    void Application_EndRequest(object sender, EventArgs e)
    {
        if (HttpContext.Current.Items["code"] != null)
        {
            Response.Clear();
            Response.AddHeader("WWW-Authenticate", string.Format("Basic realm=\"{0}\"", Request.Url.Host));
            Response.SuppressContent = true;
            Response.StatusCode = (int)System.Net.HttpStatusCode.Unauthorized;
            Response.End();
        }
    }

Some explanation for the code above:

  • Authentication is attempted only if the user is not already authenticated.
  • Secured.aspx and WSSecured.asmx are the secured areas; all other pages are processed as usual using Forms Authentication. Naturally, you need to replace this check with something adequate to your needs and less ugly.
  • The code then checks whether the Authorization header exists, and retrieves the username and password from its Base64-encoded value.
  • The commented-out FormsAuthentication.Authenticate call demonstrates how to authenticate using Forms Authentication.
  • The commented-out Membership.ValidateUser call demonstrates how to use Membership (either the built-in ASP.NET membership providers or a custom one).
  • The PrincipalContext block (and the AccountManagement namespace import) is required if you would like to authenticate against Windows accounts (currently it is set to local server accounts, but you can change the ContextType to Domain).
  • If authentication succeeds, the User is set to a GenericPrincipal – an idea taken from MADAM – but you may choose a WindowsPrincipal instead if you use actual Windows accounts. The FormsAuthentication cookie is also set, to prevent unnecessary authentications for clients that support cookies.
  • If not authenticated, the response ends and a context item flags that a 401 is to be returned. Setting the 401 at this point would not work with Forms Authentication, because Forms Authentication would change it to a 302 (redirect to the login page); so the status code is changed in the actual End Request event instead.
  • Application_EndRequest changes the return code to 401 and returns a WWW-Authenticate response header, which tells the client which authentication methods the server supports – in this case, Basic Authentication. For a browser, this causes a credentials dialog to pop up; the browser then encodes the typed-in credentials to a Base64 string and sends it over in the Authorization request header.

As you can see below, at first the browser receives a 401 and a WWW-Authenticate header, and pops up the credentials dialog as expected:

When we type in the credentials and hit OK, authentication is processed as expected and we receive the following:

As you can see, the browser encoded the credentials to a Base64 string and sent it with “Basic” in the Authorization header, indicating that the client wishes to authenticate using Basic Authentication.
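What the browser does here can be sketched in a few lines of JavaScript (a rough illustration, not code from this solution): it joins the username and password with a colon and Base64-encodes the result. In a browser that would be btoa; Node’s Buffer is used below so the snippet runs standalone.

```javascript
// Build a Basic Authorization header value: "Basic " + Base64("username:password").
// In a browser: 'Basic ' + btoa(username + ':' + password)
function basicAuthHeader(username, password) {
    var token = Buffer.from(username + ':' + password, 'utf8').toString('base64');
    return 'Basic ' + token;
}

console.log(basicAuthHeader('Aladdin', 'open sesame'));
// Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
```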

Here’s another example for a client, this time not a browser but a simple console application:

using System;
using System.Net;
using System.Text;

namespace Client
{
    class Program
    {
        static void Main(string[] args)
        {
            using (WebClient client = new WebClient())
            {
                string value = Convert.ToBase64String(UTF8Encoding.UTF8.GetBytes("Aladdin:open sesame"));
                client.Headers.Add("Authorization", "Basic " + value);
                string url = "http://localhost/MixedAuth/WSSecured.asmx/HelloWorld";
                var data = client.DownloadString(url);
            }
        }
    }
}

The C# console application above attempts to consume the secured Web Service. If not for the Authorization request header added before the call, we would have gotten a 401.

Summary

I would have preferred it if IIS supported different authentication settings within the same Web Application, without requiring custom code or a separate application for secure content. From a brief examination, IIS8 is no different, so this workaround will probably be relevant there too. If you have a better idea or an alternate solution, please comment.

 
3 Comments

Posted on 24/08/2012 in Software Development

 


Posting complex types using ASP.NET Ajax, WebForms

This is a minor post, which is a sort of companion to the previous post dealing with the same topic, only for MVC. So, if you require background, you may want to read that post first.

Fortunately, ASP.NET Ajax makes it really easy to send JSON back and forth. With jQuery, some tweaking had to be done in order to get this to work; in WebForms using ASP.NET Ajax PageMethods or WebMethods, however, ASP.NET takes care of the technical details, leaving you to implement only the logic. Note that the same technique and guidelines apply just like in MVC: the data is sent from the client browser to the server using POST and in JSON format, with a Content-Type of application/json, and all properties must match in name and data type.

This is the client code:

    <form id="form1" runat="server">
    <asp:ScriptManager runat="server" EnablePageMethods="true" />
    <input type="button" value='click me to test' onclick='Test();' />
    <script type="text/javascript">
        function Test() {
            var data =
            {
                people: [
                    {name: 'Joe', age: 20},
                    {name: 'Jane', age: 30}
                ]
            };

            PageMethods.Test(data, function (result){
                alert("Done");
            });
        }
    </script>
    </form>

And the server side, containing classes matching the client type:

    [WebMethod]
    public static void Test(MyPeople data)
    {

    }

    public class MyPeople
    {
        public IEnumerable<Person> People { get; set; }
    }

    public class Person
    {
        public string Name { get; set; }
        public int Age { get; set; }
    }

And the traffic:

As you can see for yourself, this is pretty much straightforward. The ASP.NET Ajax client infrastructure takes care of POSTing the data in JSON format with the appropriate application/json content-type, which is great.

 
Leave a comment

Posted on 27/07/2012 in Software Development

 


Posting complex JavaScript types to MVC, using Ajax

This post demonstrates how to send Complex client types to MVC using JSON. If you require the same information but for ASP.NET WebForms, you can go and read this post instead.

Note: I published an updated post for MVC6 and ASP.NET core here.

Background
Sending data to MVC using jQuery is something that I wrote about in several blog posts. Just as a quick reminder, this post discusses the basics of sending data, including simple-typed arrays. The problem is that I had to send over an array of complex-typed objects. Originally, this was intended to be a post about a workaround: sending a JSON string to the server and using a JavaScriptSerializer & dynamics to retrieve the objects. But after Googling some more, it turns out that by sending and receiving data in a certain way, MVC will perform this process out of the box, binding the sent data to your custom server-side classes. For simple types, sending data to the server is very “forgiving” and easy. But if you need to send complex types which involve JSON strings – you have to be stricter and follow “the rules”.

If you’re just into the solution, click here. If you would like to see some basic examples and the problem explained, read on.

Basic implementation
Here’s a reminder on how to send simple types, slightly complex types and simple typed arrays. First, this is the server side code:

    public class HomeController : Controller
    {
        public ActionResult Index() { return View("Test"); }

        public ActionResult GotSimple(string name, int age)
        {
            return Content(string.Format("name: {0}; age: {1}", name, age));
        }

        public ActionResult GotArrays(string[] names, int[] ages)
        {
            return Content(string.Format("names: {0}; ages: {1}",
                string.Join(",", names),
                string.Join(",", ages)));
        }

        public ActionResult GotComplexType(string name, int age)
        {
            return Content(string.Format("name: {0}; age: {1}", name, age));
        }
    }

And this is the client code (excluding the buttons html):

<script type="text/javascript">// <![CDATA[
        $(function () {
            $('#sendSimple').click(function () {
                 $.get("@Url.Action("GotSimple")",
                     {
                        name: "Joe" ,
                        age: 20
                     },
                     function (result){
                         alert("Server replied: " + result);
                     }
                 );
            });

            $('#sendArray').click(function () {
                 $.get("@Url.Action("GotArrays")",
                     $.param({
                        names: ["Joe", "Jane"],
                        ages: [20,30]
                     }, true),
                     function (result){
                         alert("Server replied: " + result);
                     }
                 );
            });

            $('#sendComplexType').click(function () {
                var Joe = { name: 'Joe', age: 20};
                $.get("@Url.Action("GotComplexType")",
                    Joe,
                    function (result){
                        alert("Server replied: " + result);
                    }
                );
            });
        });
// ]]></script>

There are three examples here:

  • ‘Simple’ demonstrates sending two simple types, which the server receives “as-is”.
  • ‘Array’ demonstrates that you can send simple-typed arrays using $.param with the traditional flag set to true. The server receives them as expected.
  • ‘ComplexType’ shows an even cooler example: you can send a simple object and receive its properties on the server side, similarly to the ‘Simple’ example.

Note the traffic sent over to the server: ‘Simple’ (1) and ‘Complex’ (3) send the data exactly the same way, so clearly the server handles the sent data similarly. ‘Array’ (2) sends the data as duplicate keys, which MVC receives and interprets as arrays.
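To make the duplicate-keys point concrete, here is a rough sketch in plain JavaScript (an approximation for flat data, not jQuery’s actual implementation) of what $.param produces when the traditional flag is true – each array element is repeated under the same key, which is the shape MVC binds to an array:

```javascript
// Approximation of jQuery's $.param(data, true) for flat data:
// arrays are serialized as repeated key=value pairs.
function traditionalParam(data) {
    var pairs = [];
    Object.keys(data).forEach(function (key) {
        var values = Array.isArray(data[key]) ? data[key] : [data[key]];
        values.forEach(function (v) {
            pairs.push(encodeURIComponent(key) + '=' + encodeURIComponent(v));
        });
    });
    return pairs.join('&');
}

console.log(traditionalParam({ names: ['Joe', 'Jane'], ages: [20, 30] }));
// names=Joe&names=Jane&ages=20&ages=30
```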

Moreover: MVC also supports receiving the arguments directly into a custom class (e.g. Person), which is really cool:


public ActionResult GotComplexType(Person person)
{
   return Content(string.Format("name: {0}; age: {1}", person.name, person.age));
}

public class Person
{
   public string name { get; set; }
   public int age { get; set; }
}

This actually works (you can read about it here, under “JavaScript and AJAX Improvements”).

Complex typed arrays

But… when I wanted to send an array of complex types, this was more of a problem. First, let’s review the client code:

            $('#sendComplexTypeArrays').click(function () {
                var Joe = { name: 'Joe', age: 20};
                var Jane = { name: 'Jane', age: 30};
                $.get("@Url.Action("GotComplexTypeArrays")",
                    $.param({
                        people: [Joe,Jane]
                    }, true),
                    function (result){
                        alert("Server replied: " + result);
                    }
                );
            });

And here is how it is sent over by the browser:

Clearly, this is not what I wanted. OK, let’s try to remove the traditional flag from $.param:

Looks more promising. Let’s review the server side code this time:

public ActionResult GotComplexTypeArrays(dynamic data)
{
    return Content("");
}

Unfortunately “data” is just an object, nothing more, and I did not get what I wanted.

Solution
I Googled and found this post, which gave me a good direction. It turns out that MVC knows how to parse the data sent from the client on the server side, but there were a few catches I had to overcome before this worked properly:

  1. (Client) You must send a JSON content-type in the form of: ‘application/json’.
  2. (Client) The data must be POSTed and in JSON format (use JSON.stringify).
  3. (Server) The server data type must match that of the client:
    • Property names must match (although it seems like case-sensitivity is not an issue);
    • Client arrays must match a server IEnumerable type (array, list etc.);
    • Client data type must match the server side data type to a certain extent or you’re risking losing data (e.g. can’t send a client string and receive it as an int on the server side.)
  4. (Server) All properties must be, well, Properties (e.g. get;set; implemented). Can’t use global fields. They should also be public.
  5. (Server) Apparently the name of the argument cannot be identical to one of its properties. I’m not sure why, but this seems to confuse the binder and results in ‘null’. So you cannot call your argument ‘people’ and have a ‘people’ property in that class at the same time – at least not in the immediate class of the argument.

So after making a few changes to my code, here is the client side:

$.ajaxSetup({
    dataType: 'json',
    contentType: 'application/json; charset=utf-8'
});

The above code ensures that Ajax calls sent to the server are with a JSON content-type, and that JSON is to be received from the server (you may choose to add more global jQuery Ajax settings here such as error handling, or POST method etc.).

Important: if you don’t need all of your Ajax calls to be in JSON format, you can simply set dataType and contentType on explicit Ajax calls. Just replace $.post with $.ajax and use type: 'POST'.

$('#sendComplexTypes').click(function () {
    var Joe = { name: 'Joe', age: 20};
    var Jane = { name: 'Jane', age: 30};

    $.post("@Url.Action("GotComplexTypes")",
        JSON.stringify({ people: [ Joe,Jane ]}),
        function (result){
            alert("Server replied: " + result);
        }
    );
});

The code above converts the data into JSON format, and will require a ‘people’ property on the server side for binding purposes.
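For reference, this is the exact JSON string that the JSON.stringify call above produces – the shape MVC’s model binder parses into MyPeople on the server:

```javascript
var Joe = { name: 'Joe', age: 20 };
var Jane = { name: 'Jane', age: 30 };

// The POSTed request body: a 'people' property holding an array of objects.
var payload = JSON.stringify({ people: [Joe, Jane] });
console.log(payload);
// {"people":[{"name":"Joe","age":20},{"name":"Jane","age":30}]}
```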

Below is the modified server-side code. It receives ‘data’ (not ‘people’ – remember?) and performs binding, matching name to name and data type to data type – properties, not global fields.

public ActionResult GotComplexTypes(MyPeople data)
{
    return Json(string.Format("People: {0}", data.People.Count()), JsonRequestBehavior.AllowGet);
}

public class MyPeople
{
    public IEnumerable<Person> People { get; set; }
}

public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

Here’s the traffic shown which can help understand whether you’re POSTing JSON strings correctly:

Summary
Apart from being a really cool feature, being able to bind JSON strings to server-side objects can save a lot of effort if you were thinking of doing it yourself. However, this was no picnic. If you don’t follow “the rules” specified above, expect a hard time and lots of frustration. But don’t despair – it’s worthwhile once it works, and after the first time, it’s supposed to get much easier.

 
5 Comments

Posted on 22/07/2012 in Software Development

 
