
Tag Archives: IIS

Force a website to https using IIS Custom Error pages

A common requirement for secure websites is not only to support https but to make it mandatory. The problem is that if you require SSL for your website, the end user receives an ugly 403.4 message informing them that SSL is required. Why IIS doesn’t have a simple check box in the “Require SSL” dialog to “auto redirect requests to https” is unclear to me, but in this post I’ll explain how simple it is to accomplish this without writing any code at all.

So, in order to force a website to https and redirect plain http requests to https, you have several options. In the past I did this using server code: detect that the request arrived over plain http and redirect from the server (a sketch of that approach appears right after the list below). But recently I accomplished this using simple IIS configuration. The idea is as follows:

  1. Tweak IIS to require SSL. By default, this will present the user with a 403.4 error.
  2. Using IIS’ Custom Errors feature, customize the 403.4 to redirect to https.
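For comparison, here is a minimal sketch of the server-code approach mentioned above. It is not part of the IIS-only solution described in this post; it assumes an ASP.NET application and issues the redirect from Global.asax for every plain-http request:

// A minimal sketch of the "redirect from server code" alternative (ASP.NET, Global.asax).
void Application_BeginRequest(object sender, EventArgs e)
{
    if (!Request.IsSecureConnection)
    {
        // Rebuild the URL with the https scheme and redirect the client.
        string secureUrl = "https://" + Request.Url.Host + Request.RawUrl;
        Response.Redirect(secureUrl, true);
    }
}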

Before we start: naturally, you need a valid SSL certificate for this procedure to work. If you just need a test certificate for development and practice, you can have IIS generate a dummy certificate for you like so:

  1. In IIS Manager, select the Server name on the left.
  2. Go into Server Certificates in the Features View.
  3. In the Actions pane on the right, select Create Self-Signed Certificate.

To enable SSL on your website after you have installed an SSL certificate (a scripted alternative appears after the list):

  1. In IIS Manager, select the target website.
  2. On the Actions pane on the right, click Bindings.
  3. In the opening dialog, click Add, select “https” and then select the desired certificate.
  4. Test that SSL is working by browsing to https.
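If you prefer to script this step, the https binding can also be added programmatically. The following is only a rough sketch using the Microsoft.Web.Administration assembly (run elevated); the site name “Default Web Site” and the certificate subject “localhost” are assumptions you would replace with your own values:

using System.Security.Cryptography.X509Certificates;
using Microsoft.Web.Administration;

class AddHttpsBinding
{
    static void Main()
    {
        // Find the certificate in the local machine's Personal ("My") store.
        X509Store store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
        store.Open(OpenFlags.ReadOnly);
        X509Certificate2 cert = store.Certificates.Find(
            X509FindType.FindBySubjectName, "localhost", false)[0];
        store.Close();

        using (ServerManager serverManager = new ServerManager())
        {
            // Add an https binding on port 443 that uses the certificate above.
            Site site = serverManager.Sites["Default Web Site"];
            site.Bindings.Add("*:443:", cert.GetCertHash(), "My");
            serverManager.CommitChanges();
        }
    }
}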

Now we can configure a redirect to https.

Tweaking IIS to require SSL

Open IIS and select the target website or virtual application. In the Features View, select SSL Settings.

Select “Require SSL” and, under client certificates, “Accept”. Do not select “Require” or this won’t work at all.
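For reference, and only as a sketch, the same setting can be applied from code using the Microsoft.Web.Administration assembly. “Require SSL” with client certificates set to “Accept” corresponds to the Ssl and SslNegotiateCert flags (“Default Web Site” below is an assumed site name):

using Microsoft.Web.Administration;

class RequireSsl
{
    static void Main()
    {
        using (ServerManager serverManager = new ServerManager())
        {
            Configuration config = serverManager.GetApplicationHostConfiguration();

            // The site's SSL settings live in the system.webServer/security/access section.
            ConfigurationSection access = config.GetSection(
                "system.webServer/security/access", "Default Web Site");
            access["sslFlags"] = "Ssl, SslNegotiateCert";

            serverManager.CommitChanges();
        }
    }
}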


Now if you try to browse to http as usual, you should see a 403.4 message like so:


Using Custom Error pages

In order to use custom error pages, this feature must be installed. If your IIS does not show the Error Pages feature, simply install it (the screenshot below is from Windows 7):


In IIS Manager, select the target server, website or application on the left. In the Features View, select Error Pages under the IIS group (note: this is NOT the same as .NET Error Pages under ASP.NET):


In the Actions pane on the right, select “Edit Feature Settings…”


In the dialog that opens, select “Custom error pages” and click OK. This ensures that the redirect we configure later on will actually take effect:


Finally, we have to define a new “error page” rule to handle 403.4 and perform a redirect. Click Add in the Actions pane on the right and fill in the rule details: the status code is 403.4, the response action is “Respond with a 302 redirect”, and the absolute URL is your https address (https://localhost in this example):


Eventually, the Error Pages list looks like this:


That’s it. Now if you browse to http you should be redirected to https. The resulting web.config looks as follows (in a real deployment, replace https://localhost with your site’s actual https address):

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <system.webServer>
        <httpErrors>
            <remove statusCode="403" subStatusCode="4" />
            <error statusCode="403" subStatusCode="4" path="https://localhost" responseMode="Redirect" />
        </httpErrors>
    </system.webServer>
</configuration>

 
1 Comment

Posted by on 21/06/2013 in Software Development

 


The CORS

Intro
It’s common practice to use Ajax, Silverlight or Flash to communicate with a server on the same domain. At times, however, you are required to access another domain, and browsers may interpret this as a security issue. This post specifically addresses how to do so using CORS: “Cross-Origin Resource Sharing”.

You might say that the purpose of CORS is to help browsers understand that accessing a resource from a different domain is legit. Consider this example: certain web services are available on another domain. You wish to access them from a browser using Ajax, but the browser interprets this as a security issue and forbids it. CORS is the now-standard way to perform a sort of handshake between the browser and the server before the web services can be consumed: the browser queries the server whether it’s OK to access a web service and acts according to the reply. Note that although the server is what needs to be configured to allow access to your client, it is the browser that enforces the limitation and will stop the request if no such access was granted. A different client, such as a .NET application, need not worry about CORS (unless for some reason the developer is required to support it), as the server does not force clients to use CORS.

Supporting CORS on the server side is merely a matter of sending several response headers to the client browser. On IIS this is done either by adding the headers to outgoing responses in code or by tweaking web.config as required.

Problem 1: Cross-site scripting
Ingredients:

  • 1 Web Service which needs to be consumed.
  • 1 IIS to host that Web Service on a domain other than the client browser domain.
  • 1 IIS to host a client script trying to consume that Web Service.
  • 1 Firefox browser (tested version is 14.0.1) and 1 Chrome browser (tested version is 20).

Let’s view the problem: our client JavaScript runs on ‘localhost’ and will try to consume a Web Service running in domain ‘OtherServer’. This is the server code:

using System.Web.Services;

[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
[System.Web.Script.Services.ScriptService]
public class MyService : System.Web.Services.WebService
{
    [WebMethod]
    public string HelloWorld(string name)
    {
        return "Hello " + name;
    }
}

web.config setup:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.web>
    <compilation debug="true" targetFramework="4.0">
    </compilation>
    <webServices>
      <protocols>
        <add name="HttpGet" />
        <add name="HttpPost" />
      </protocols>
    </webServices>
  </system.web>
</configuration>

The client code (using jQuery 1.7.2):

$.ajax({
    type: 'get',
    url: 'http://OtherServer/MyServer/MyService.asmx/HelloWorld',
    data: { name: "Joe" },
    contentType: 'text/plain',
    success: function (data) {
        alert("Data Loaded: " + data);
    }
});

Note that the client code does not send nor expect a JSON type. This is deliberate; the reason will be explained later on.

Here is the result of this request:

Running the above client produces somewhat conflicting results in Firefox and Chrome. In FF, the Firebug Net tab shows the request and response as if everything went as expected. As you can see in the screenshot below, the request was sent from localhost to ‘otherserver’ and returned with a 200 (OK), but the response text is empty. Server-side debugging proves that the request arrived as expected. Client-side debugging, however, shows a different picture: the status of the request is reported as 0 and not 200. In Chrome, the Network tab shows an almost immediate error and sets the status of the request to ‘(canceled)’. When reverting to a more “native” use of XMLHttpRequest rather than jQuery, the status of the request is also returned as 0 instead of 200.

BTW: running this exact client script from within ‘otherserver’ domain will work OK, with the expected xml returned.

Solution 1: Access-Control-Allow-Origin
In order to make this work, the server has to send back the “Access-Control-Allow-Origin” response header, acknowledging the client. The value of this response header can be either ‘*’ or an actual expected client domain, thus allowing a more controlled CORS interaction, if required. You may have noticed in the previous screenshot that the browser automatically sends an ‘Origin’ header when the request is a cross-site request. That is the name of the domain that the server should return so that the request will be allowed.

Sending back the “Access-Control-Allow-Origin” response header can be done either by adding a single line of code, as below:

using System.Web;
using System.Web.Services;

[System.Web.Script.Services.ScriptService]
public class MyService : System.Web.Services.WebService
{
    [WebMethod]
    public string HelloWorld(string name)
    {
        HttpContext.Current.Response.AddHeader("Access-Control-Allow-Origin", "http://localhost");
        return "Hello " + name;
    }
}

Or by tweaking the web.config to return that response header for all requests, as in this sample (IIS 7 or above):

  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="Access-Control-Allow-Origin" value="*" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>

Note: If you do decide to specify an explicit origin, you must provide the origin exactly as it is sent in the request headers. In this example it is ‘http://localhost’, but naturally this has to be the actual domain a consuming script is running from.
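If you need to allow several specific origins rather than “*”, a common approach (shown here only as a sketch, not as part of the original sample) is to validate the incoming Origin request header against a whitelist and echo it back; the origins listed below are assumptions:

using System;
using System.Web;

public static class CorsHelper
{
    // Assumed whitelist of allowed client origins.
    private static readonly string[] AllowedOrigins =
        { "http://localhost", "http://client.example.com" };

    // Echo the Origin request header back only when it is whitelisted.
    public static void AddCorsHeader(HttpContext context)
    {
        string origin = context.Request.Headers["Origin"];
        if (!string.IsNullOrEmpty(origin) && Array.IndexOf(AllowedOrigins, origin) >= 0)
        {
            context.Response.AddHeader("Access-Control-Allow-Origin", origin);
        }
    }
}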

Now when we resend the request, we receive the result as expected (I used the web.config tweak in this example):

Problem 2: Sending JSON types
In a previous post I described the “magic” behind sending JSON strings to the server, so that the server will perform auto binding to server side types. Let’s try it now, with a cross domain server request:

$.ajax({
    type: 'POST',
    url: 'http://OtherServer/MyServer/MyService.asmx/HelloWorld',
    data: JSON.stringify({ name: "Joe" }),
    contentType: 'application/json',
    dataType: 'json',
    success: function (data) {
        alert("Data Loaded: " + data);
    }
});

Running this client script fails. Note the traffic:

As you can see, in this case the browser did not send a POST request as expected, but an OPTIONS request. This procedure is called “Preflight”, which means that the browser sends an implicit request to the server, asking whether the request is legit. Only if the server replies that the request is indeed legit, using response headers again, will the browser continue with the original request as planned. So, looking at the screenshot above, you can see several things:

  1. The browser sends an OPTIONS request.
  2. The browser sends two new request headers: “Access-Control-Request-Method” with a value of “POST”, and “Access-Control-Request-Headers” with a value of “content-type”. This quite clearly states that the browser is asking whether it can use POST and whether it can send a “content-type” header in the request.
  3. The server replies that “POST” is indeed allowed (amongst other methods), and maintains the origin as ‘*’ like before.
  4. The server doesn’t tell the browser anything about the content-type header request, which is the problem here.

In this case the browser was not satisfied with the preflight response from the server and therefore the original request was not sent.

Important: As you can see from problem/solution 1, not all requests are preflighted. The FF docs sum this up as follows. A request will be preflighted in the following cases:

  • It uses methods other than GET or POST. Also, if POST is used to send request data with a Content-Type other than application/x-www-form-urlencoded, multipart/form-data, or text/plain, e.g. if the POST request sends an XML payload to the server using application/xml or text/xml, then the request is preflighted.
  • It sets custom headers in the request (e.g. the request uses a header such as X-PINGOTHER)

So by sending a content-type of ‘application/json’, we have made this request perform a “preflight” request.

Solution 2: Access-Control-Allow-Headers
The solution to this problem is simply to tweak the server to acknowledge the content-type header, thus causing the browser to understand that the content-type header request is legit. Here goes:

  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="Access-Control-Allow-Origin" value="*" />
        <add name="Access-Control-Allow-Headers" value="Content-Type" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>

When we try our request again it works as expected:

Note: You may wish to limit or explicitly detail which methods of requesting information are supported by using the “Access-Control-Allow-Methods” response header like so:

  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="Access-Control-Allow-Methods" value="POST,GET,OPTIONS" />
        <add name="Access-Control-Allow-Origin" value="*" />
        <add name="Access-Control-Allow-Headers" value="Content-Type" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>

Problem 3: (Basic) Authentication
If the server resource is secured and requires authentication before a client can actually consume it, you’ll have to modify your request accordingly, or you’ll end up with a 401 authorization response from the server.

For example, if you protect your Web Service with Basic Authentication (and disable Anonymous Authentication, of course), then you have to have your request conform to the Basic Authentication requirements, which means adding an “Authorization” request header with a value that begins with “Basic” followed by a Base64 string of the combined username:password string (which is in no way encrypted, but that is another matter). So, borrowing Wikipedia’s sample, if we have an “Aladdin” user account with a password of “open sesame”, we have to convert the string “Aladdin:open sesame” to Base64 and add it as an Authorization request header. The request header should be: “Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==”.

UPDATE: Wikipedia’s relevant page was modified after this post was published, so their excellent Aladdin sample has changed somewhat.
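For reference, the Authorization value used below can be produced with a couple of lines of C#, which is the same conversion the browser or any other client has to perform:

using System;
using System.Text;

class BasicAuthHeader
{
    static void Main()
    {
        // Base64-encode "username:password" - this is all Basic Authentication requires.
        string value = Convert.ToBase64String(Encoding.UTF8.GetBytes("Aladdin:open sesame"));
        Console.WriteLine("Authorization: Basic " + value); // Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
    }
}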

$.ajax({
    type: 'POST',
    url: 'http://OtherServer/MyServer/MyService.asmx/HelloWorld',
    data: JSON.stringify({ name: "Joe" }),
    contentType: 'application/json',
    beforeSend: function ( xhr ) {
        xhr.setRequestHeader("Authorization", "Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==");
    },
    dataType: 'json',
    success: function (data) {
        alert("Data Loaded: " + data);
    }
});

We also have to change the “Access-Control-Allow-Headers” to allow a request header of “Authorization”, or the OPTIONS will fail again:

  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="Access-Control-Allow-Methods" value="POST,GET,OPTIONS" />
        <add name="Access-Control-Allow-Origin" value="*" />
        <add name="Access-Control-Allow-Headers" value="Content-Type,Authorization" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>

However, this change is not sufficient, at least not for “preflighted” requests. If you change the security settings of your IIS web site to Basic Authentication but the request requires a preflight, it turns out that the preflight itself will fail authentication. This is because the preflight request does NOT send the “Authorization” header, so IIS automatically responds with a 401 (Unauthorized) as it cannot authenticate the preflight request. If you test this in Firefox, the preflight fails and the original request is not sent:

Interestingly enough, on Chrome this works fine, despite the 401 returned from the server:

In order to get this to work on FF, I made several attempts using various combinations of the following (you can read about all of them in the jQuery.ajax documentation):

  • I added withCredentials to the request (if you use this you have to have your server return Access-Control-Allow-Credentials: true, and have Access-Control-Allow-Origin return an explicit origin).
  • I added username/password to the request.
  • I added the patch designated to fix the known FF bug of getAllResponseHeaders() not returning the response headers correctly.
  • Also added the “Access-Control-Allow-Credentials” to the server response.
  • Changed the “Access-Control-Allow-Origin” from “*” to “http://localhost” (the request origin), as the FF docs specify.

I also reverted from jQuery to the “native” XMLHttpRequest to no avail. Nothing I tried made it work on FF.

So, which browser is “right”? Is Firefox right to block the request because a 401 was received, or is Chrome right for ignoring the 401 on the preflight? After all, it would make sense that because the Authorization header is not sent on the preflight, the browser will be “clever enough” to ignore the 401. However, as you can read in this reported FF bug it turns out that the CORS spec requires that the server will return a 200 (OK) on the preflight before proceeding with the original request. According to that, Firefox has the correct implementation and Chrome has a bug (if you follow the reported bug you’ll learn that this is actually a webkit bug, although this is yet to be concluded.)

Solution 3: Handle OPTIONS “manually”
Here comes the bad part. While for Integrated Application Pools, you can code a custom module to bypass IIS behavior and return a 200 for the preflight, you cannot do that for a Classic Application Pool. In an Integrated Application Pool, ASP.NET is integrated into IIS’ pipeline, allowing you to customize the authentication mechanism in this particular case. However a Classic Application Pool means that module code will run only after IIS has authorized the request (or rather – failed to authorize in this case).

First, let’s review the Integrated Application Pool patch, in the form of an HttpModule:

using System.Net;
using System.Web;

public class CORSModule : IHttpModule
{
    public void Dispose() { }

    public void Init(HttpApplication context)
    {
        context.PreSendRequestHeaders += delegate
        {
            if (context.Request.HttpMethod == "OPTIONS")
            {
                var response = context.Response;
                response.StatusCode = (int)HttpStatusCode.OK;
            }
        };
    }
}

The code above is very non-restrictive – it allows all preflights (or any other use of the OPTIONS verb) to get through without authentication, so you had better revise it to your needs.
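As an illustration only (this is not from the original sample), a slightly more restrictive variant could answer the preflight only when the Origin header matches an expected domain; the origin below is an assumption. The web.config in the next snippet still registers the original, non-restrictive module:

using System.Net;
using System.Web;

public class RestrictiveCORSModule : IHttpModule
{
    private const string AllowedOrigin = "http://localhost"; // assumed client origin

    public void Dispose() { }

    public void Init(HttpApplication context)
    {
        context.PreSendRequestHeaders += delegate
        {
            // Let only preflights coming from the expected origin bypass authentication.
            if (context.Request.HttpMethod == "OPTIONS" &&
                context.Request.Headers["Origin"] == AllowedOrigin)
            {
                context.Response.StatusCode = (int)HttpStatusCode.OK;
            }
        };
    }
}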

The web.config in IIS7 Integrated App Pool now incorporates the following to support the HttpModule:

  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="Access-Control-Allow-Methods" value="POST,GET,OPTIONS" />
        <add name="Access-Control-Allow-Origin" value="*" />
        <add name="Access-Control-Allow-Headers" value="Content-Type,Authorization" />
      </customHeaders>
    </httpProtocol>
    <modules>
      <add name="CORSModule" type="CORSModule" />
    </modules>
  </system.webServer>

The result is working as expected:

For a Classic Application Pool, there isn’t an easy solution. All my attempts to reconfigure IIS to allow OPTIONS through despite the Basic Authentication have failed. If anyone knows how to do this – please comment. A minor attempt to dispute the decision to reject a preflight based on the authorization issue has also failed. Regarding the Classic Application Pool issue in particular (i.e. that older web servers cannot be tweaked to allow the OPTIONS request), the response was that we should wait till these servers are obsoleted (??)

However, you might consider a different solution: revert from the default IIS Basic Authentication module back to Anonymous authentication, and handle the authorization yourself (or use an open source library like MADAM). This means that preflights will not fail authentication, but you are still able to require credentials for accessing the different resources (an explanation can be found below the code):

void Application_AuthenticateRequest(object sender, EventArgs e)
{
    bool authorized = false;
    string authorization = Request.Headers["Authorization"];
    if (!string.IsNullOrWhiteSpace(authorization))
    {
        string[] parts = authorization.Split(' ');
        if (parts[0] == "Basic")//basic authentication
        {
            authorization = UTF8Encoding.UTF8.GetString(Convert.FromBase64String(parts[1]));
            parts = authorization.Split(':');
            string username = parts[0];
            string password = parts[1];

            // TODO: perform authentication
            //authorized = FormsAuthentication.Authenticate(username, password);
            //authorized = Membership.ValidateUser(username, password);
            using (PrincipalContext context = new PrincipalContext(System.DirectoryServices.AccountManagement.ContextType.Machine))
            {
                authorized = context.ValidateCredentials(username, password);
            }

            if (authorized)
            {
                HttpContext.Current.User = new System.Security.Principal.GenericPrincipal(new System.Security.Principal.GenericIdentity(username), null);
            }
        }
    }

    if (!authorized)
    {
Response.AddHeader("WWW-Authenticate", string.Format("Basic realm=\"{0}\"", Request.Url.Host));
        Response.StatusCode = (int)System.Net.HttpStatusCode.Unauthorized;
        Response.End();
    }
}

As you can see from the code above, Basic Authentication is handled “manually”:

  • The code runs when a request is being authenticated (Application_AuthenticateRequest).
  • It first checks whether an Authorization header was sent at all.
  • Authentication proceeds only for the Basic scheme (you can customize this to your needs).
  • The commented-out calls demonstrate different methods of validating the credentials (Forms Authentication or Membership), while the active code validates against local Windows accounts via PrincipalContext. You have to choose what’s best for you or add your own.
  • If authentication succeeds, the current context’s User property is (optionally) populated with an identity that you might need in your Web Service later on.
  • If it fails, a 401 is returned together with a WWW-Authenticate header, which indicates to the client which authentication methods are allowed (that is, in case the user was not authenticated).

This solution is a variant of the solution described in a previous post I made about having a “Mixed Authentication” for a website.

Summary

I really can’t tell, but it seems like CORS is here to stay. Unlike JSONP, which is a workaround that utilizes a security hole in today’s browsers, one that might be dealt with someday, CORS is an attempt to formalize a more secure way to protect the browsing user. The thing is that CORS is not as trivial as I would have liked it to be. True, once you get the hang of it, it makes sense, but it has its limitations, and it takes some effort to get it configured correctly. As for the IIS Classic App Pool and non-anonymous configuration issue, well, that seems to be a real problem. You can try to follow this thread and see if something comes up.

Credits

Lame, but this post’s title is credited to The Corrs.

 
7 Comments

Posted by on 12/10/2012 in Software Development

 


IIS “Mixed Authentication”: Securing a web site with Basic Authentication and Forms Authentication

I have come across a situation where I needed to secure a specific web service with Basic Authentication, in a web site that is secured using Forms Authentication. Unfortunately, in IIS this is not as trivial as I hoped it would be. This isn’t another case of having a website for both Intranet and Internet users and attempting to configure it for Forms Authentication alongside Windows Authentication and Single Sign-On (SSO), which you can read about here or here. In this case, the website is secured with Forms Authentication but exposes a web service for mobile devices to consume, so Intranet and SSO are irrelevant. Authentication is to be performed using Basic Authentication, which means that the client (mobile) has to send an Authorization header with a Base64 encoding of the username and password (note that the username and password are not secured in this scenario as they are not encrypted, but you can use SSL).

I really banged my head over this one. I tried quite a few options to get this to work using standard IIS configuration. The most notable ones are listed below:

Create a virtual folder and change the authorization rules for that folder
This was truly my preferred solution. If you could take a specific file or folder and give it different authentication settings, the solution would have been simple and straightforward. Unfortunately, IIS shows the following warnings when you try that:

What you see here is a MixedAuth web application that is set up for Forms Authentication, and a ‘Sec’ folder which should have been secured with Basic Authentication. As you can see, when setting Basic Authentication to Enabled and clicking on Forms Authentication in order to disable it (after successfully disabling Anonymous Authentication), IIS displays two reservations:

  1. The conflicting authentication modes “cannot be used simultaneously”.
  2. Forms Authentication is readonly and cannot be disabled.

You’ll get the same errors if you try securing the other way around: The web app secured with Basic Authentication and a virtual folder secured with Forms Authentication.

Create a secured web application and change the authorization rules for that folder
This looks like the next best thing. I thought that, just like you can create a web application under “Default Web Site” and change its authorization settings to whatever you require, why not make a “sub” web application instead of a virtual folder and change its settings completely. Here goes:

Voila! I thought I had it. So easy! so trivial! so wrong…

Yes, this mode will provide you with a “mixed authentication mode”, but…

  1. These are two entirely different applications. Even if you choose the same application pool for both, they have different App Domains, different Sessions etc.
  2. Even if you decide that it is OK that these are entirely different applications, they do not share the same Bin or App_Code folders. So, if you rely on those folders in your code, you’ll have to duplicate them (and no, creating a Bin virtual folder under “Sec” and pointing it to the physical path of the Bin under “MixedAuth” will not work).

In other words, creating a “sub web application” is no different than creating any other application and therefore is unlikely to answer your needs.

Enter MADAM
I decided to google some more for possible solutions and found an interesting article hosted on MSDN, written back in 2006 (ASP.NET 1 to 2). The proposed solution is an assembly called MADAM (“Mixed Authentication Disposition ASP.NET Module”), which basically provides a simple solution:

MADAM’s objective is to allow a page developer to indicate that, under certain conditions, Forms authentication should be muted in favor of the standard HTTP authentication protocol.

In other words, a module plus some configuration determine whether a page will be processed using Forms Authentication or Basic Authentication (or another scheme). Here’s another quote that helps explain this:

Much like how FormsAuthenticationModule replaces the 401 status returned by the authorization module with a 302 status code in order to redirect the user to the login page, MADAM’s HTTP module reverses the effect by switching the 302 back to a 401.

Custom solution
The MADAM article is quite thorough, so if you would like to skip it and jump right in, I suggest that you download the code sample at the beginning of the article. It contains not only the source code for MADAM but also a web.config from which you can copy-paste the desired configuration. In my particular case I needed a different authentication module than the ones supported by MADAM, and therefore I thought I should implement a more custom and straightforward solution that serves my needs.

To my understanding, the only way I could combine “mixed authentications” was to set my website to Anonymous Authentication in IIS and use Global.asax to restrict access to certain pages by returning a 401 (Classic App Pool). For those particular pages, if the user is not already logged in, I check whether there’s an Authorization request header and perform authentication as required. It is important to note that how you perform the authentication on the server side is completely up to you and will not be performed by IIS in such a case. Therefore I have included in the following code several sample authentication methods that you need to choose from (or add your own). This code is embedded within Global.asax and is compatible with a Classic Application Pool. For an Integrated Application Pool you may choose an alternative and use an HttpModule to accomplish similar behavior.

Note: The following code supports and follows the Basic Authentication “protocol”. Just as a reminder, this means that the client sends an Authorization request header with the word “Basic” and the credentials (“username:password”) encoded as a Base64 string (e.g. “Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==”). If you require something different, you will have to adjust the code accordingly (an explanation appears below the code).

<%@ Application Language="C#" %>
<%@ Import Namespace="System.DirectoryServices.AccountManagement" %>

    void Application_AuthenticateRequest(object sender, EventArgs e)
    {
        if (User == null || !User.Identity.IsAuthenticated)
        {
            string page = Request.Url.Segments.Last();
            if ("Secured.aspx".Equals(page, StringComparison.InvariantCultureIgnoreCase) || Request.Url.Segments.Any(s=>s.Equals("WSSecured.asmx/", StringComparison.InvariantCultureIgnoreCase)))
            {
                bool authorized = false;
                string authorization = Request.Headers["Authorization"];
                if (!string.IsNullOrWhiteSpace(authorization))
                {
                    string[] parts = authorization.Split(' ');
                    if (parts[0] == "Basic")//basic authentication
                    {
                        authorization = UTF8Encoding.UTF8.GetString(Convert.FromBase64String(parts[1]));
                        parts = authorization.Split(':');
                        string username = parts[0];
                        string password = parts[1];

                        // TODO: perform authentication
                        //authorized = FormsAuthentication.Authenticate(username, password);
                        //authorized = Membership.ValidateUser(username, password);
                        using (PrincipalContext context = new PrincipalContext(ContextType.Machine))
                        {
                            authorized = context.ValidateCredentials(username, password);
                        }

                        if (authorized)
                        {
                            HttpContext.Current.User = new System.Security.Principal.GenericPrincipal(new System.Security.Principal.GenericIdentity(username), null);
                            FormsAuthentication.SetAuthCookie(HttpContext.Current.User.Identity.Name, false);
                        }
                    }
                }

                if (!authorized)
                {
                    HttpContext.Current.Items["code"] = 1;
                    Response.End();
                }
            }
        }
    }

    void Application_EndRequest(object sender, EventArgs e)
    {
        if (HttpContext.Current.Items["code"] != null)
        {
            Response.Clear();
            Response.AddHeader("WWW-Authenticate", string.Format("Basic realm=\"{0}\"", Request.Url.Host));
            Response.SuppressContent = true;
            Response.StatusCode = (int)System.Net.HttpStatusCode.Unauthorized;
            Response.End();
        }
    }

Some explanation for the code above:

  • Authentication is attempted only if the user is not already authenticated.
  • Secured.aspx and WSSecured.asmx are the secured areas; all other pages are processed as usual using Forms Authentication. Naturally, you need to replace this check with something adequate to your needs and less ugly.
  • The code then checks whether the Authorization header exists and extracts the username and password from it.
  • The commented-out FormsAuthentication.Authenticate call demonstrates how to accomplish authentication using Forms Authentication, and the commented-out Membership.ValidateUser call demonstrates using Membership (this could be the built-in ASP.NET membership providers or a custom membership).
  • The PrincipalContext block (together with the Import directive at the top) performs authentication against Windows accounts (currently it is set to local server accounts, but you can change the ContextType to Domain).
  • If authentication succeeds, I used the idea from MADAM and set the User to a GenericPrincipal, but you may choose to replace this with a WindowsPrincipal if you use actual Windows accounts. I also set the FormsAuthentication cookie in order to prevent unnecessary authentications for clients that support cookies.
  • If the user was not authenticated, the response is ended and a flag is stored in the Items collection to indicate that a 401 should be returned. Setting the 401 at this point would not work with Forms Authentication, because Forms Authentication would change it to a 302 (redirect to the login page), so the status code is changed in the actual End Request event.
  • Application_EndRequest changes the return code to 401 and adds a WWW-Authenticate response header, which tells the client which authentication methods are supported by the server; in this case, Basic Authentication. For a browser, this causes a credentials dialog to pop up; the browser then encodes the typed-in credentials to a Base64 string and sends them in the Authorization request header.

As you can see below, at first the browser receives a 401 and a WWW-Authenticate header. The browser pops up the credentials dialog as expected:

And when we type in the credentials and hit OK, authentication is processed as expected and we receive the following:

As you can see, the browser encoded the credentials to a Base64 string and sent it with “Basic” in the Authorization header, thus indicating that the client wishes to perform authentication using Basic Authentication.

Here’s another example for a client, this time not a browser but a simple console application:

using System;
using System.Net;
using System.Text;

namespace Client
{
    class Program
    {
        static void Main(string[] args)
        {
            using (WebClient client = new WebClient())
            {
                string value = Convert.ToBase64String(UTF8Encoding.UTF8.GetBytes("Aladdin:open sesame"));
                client.Headers.Add("Authorization", "Basic " + value);
                string url = "http://localhost/MixedAuth/WSSecured.asmx/HelloWorld";
                var data = client.DownloadString(url);
            }
        }
    }
}

The C# console application above attempts to consume the secured Web Service. Without the Authorization header added to the WebClient, we would have gotten a 401.

Summary

I would have preferred it if IIS supported different authentication settings within the same Web Application, without requiring custom code or a separate application for the secured content. From a brief examination, IIS 8 is no different, so this workaround will probably be relevant there too. If you have a better idea or an alternative solution, please comment.

 
3 Comments

Posted by on 24/08/2012 in Software Development

 
