
Posting JavaScript types to MVC 6 in .NET core, using Ajax

This is an update to several posts I did years ago, mainly: https://evolpin.wordpress.com/2012/07/22/posting-complex-types-to-mvc. If you are interested in the source code then download it from here.

This post covers several methods for posting data to a .NET Core 2.2 MVC controller. The example uses the default MVC template that comes with Bootstrap, jQuery validate and the unobtrusive scripts.

If you'd rather skip straight to a possible custom model binding solution for sending multiple objects in a POST JSON request, click here.

The test form is quite simple: a scaffolded Person form with several test buttons.

Note the Scripts section beneath the HTML. It provides the unobtrusive and validate jQuery code. You don't have to use it and you may very well use whatever validation you prefer; I just chose it because it comes out-of-the-box with the Visual Studio MVC template.

The ViewModel I used for this example is that of a Person:

    public class Person
    {
        [Required]
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }

One more code change I introduced in my Startup.cs is to use the DefaultContractResolver. This overrides the, well…, default behavior in MVC 6 that returns camel-cased JSON (e.g. firstName instead of FirstName). I prefer the original casing.

        public void ConfigureServices(IServiceCollection services)
        {
            services.Configure<CookiePolicyOptions>(options =>
            {
                // This lambda determines whether user consent for non-essential cookies is needed for a given request.
                options.CheckConsentNeeded = context => true;
                options.MinimumSameSitePolicy = SameSiteMode.None;
            });

            services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_2)
                .AddJsonOptions(f => f.SerializerSettings.ContractResolver = new DefaultContractResolver());
        }

‘Regular post’ button

The first button is a regular post of the form data. Since this is a submit button, it automatically triggers validation.

You can use either this controller code, which receives an IFormCollection (including any form items, such as the anti-forgery token):

        [HttpPost]
        [ValidateAntiForgeryToken]
        public ActionResult Create(IFormCollection collection)
        {
            try
            {
                if (!ModelState.IsValid)
                    throw new Exception($"not valid");

                foreach (var item in collection)
                {
                    _logger.LogInformation($"{item.Key}={item.Value}");
                }

                return RedirectToAction(nameof(Index));
            }
            catch
            {
                return View();
            }
        }

Alternatively you can use this code which focuses on binding the posted form variables to the Person C# object:

        [HttpPost]
        [ValidateAntiForgeryToken]
        public ActionResult Create(Person person)
        {
            try
            {
                if (!ModelState.IsValid)
                    throw new Exception($"not valid");

                _logger.LogInformation($"{nameof(person.FirstName)}={person.FirstName}");
                _logger.LogInformation($"{nameof(person.LastName)}={person.LastName}");

                return RedirectToAction(nameof(Index));
            }
            catch
            {
                return View();
            }
        }

Enough of this, where’s the Ajax?

The ‘Create ajax’ button uses the following jQuery to post. Note the ‘beforeSend’, which adds the anti-forgery token as a request header. You must specify this or the request will fail, because the [ValidateAntiForgeryToken] attribute on the controller action validates it (by default, ASP.NET Core accepts the token from a header named RequestVerificationToken, so the header name used below needs no extra antiforgery configuration).

            // Ajax POST using regular content type: 'application/x-www-form-urlencoded'
            $('#btnFormAjax').on('click', function () {

                if (myForm.valid()) {
                    var first = $('#FirstName').val();
                    var last = $('#LastName').val();
                    var data = { FirstName: first, LastName: last };

                    $.ajax({
                        url: '@Url.Action("CreateAjaxForm")',
                        type: 'POST',
                        beforeSend: function (xhr) {
                            xhr.setRequestHeader("RequestVerificationToken",
                                $('input:hidden[name="__RequestVerificationToken"]').val());
                        },
                        data: data
                    }).done(function (result) {
                        alert(result.FullName);
                    });
                }

            });

The controller code is much the same (I omitted the try-catch for brevity). Note that the returned result in this example is JSON, but it doesn't have to be.

        // Ajax POST using regular content type: 'application/x-www-form-urlencoded' (non-JSON)
        [HttpPost]
        [ValidateAntiForgeryToken]
        public ActionResult CreateAjaxForm(Person person)
        {
            if (!ModelState.IsValid)
                throw new Exception($"not valid");

            _logger.LogInformation($"{nameof(person.FirstName)}={person.FirstName}");
            _logger.LogInformation($"{nameof(person.LastName)}={person.LastName}");

            return Json(new { FullName = $"{person.FirstName} {person.LastName}" });
        }
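
As an aside, if you would rather not return JSON, a hypothetical variation of this action (not part of the sample project) could return plain text, in which case the jQuery done callback receives a string rather than an object:

        // hypothetical variation returning plain text instead of JSON
        [HttpPost]
        [ValidateAntiForgeryToken]
        public ActionResult CreateAjaxFormText(Person person)
        {
            // the 'done' callback would receive e.g. "Joe Smith" as a plain string
            return Content($"{person.FirstName} {person.LastName}");
        }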

This is the Ajax request:

OK, but I want to POST JSON and not application/x-www-form-urlencoded

I prefer to post and receive JSON. It is more consistent and gives me the flexibility to use more complex objects, as will be shown later on.

The default ‘contentType’ for $.ajax is ‘application/x-www-form-urlencoded; charset=UTF-8’, so we did not have to specify it earlier. To send JSON we now specify a content type of ‘application/json; charset=utf-8’.

The ‘data’ is now converted to JSON using the common JSON.stringify() browser method.

The ‘dataType’ indicates that a JSON object is expected in return.

            $('#btnFormAjaxJson').on('click', function () {

                if (myForm.valid()) {
                    var first = $('#FirstName').val();
                    var last = $('#LastName').val();
                    var data = { FirstName: first, LastName: last };

                    $.ajax({
                        url: '@Url.Action("CreateAjaxFormJson")',
                        type: 'POST',
                        beforeSend: function (xhr) {
                            xhr.setRequestHeader("RequestVerificationToken",
                                $('input:hidden[name="__RequestVerificationToken"]').val());
                        },
                        dataType: 'json',
                        contentType: 'application/json; charset=utf-8',
                        data: JSON.stringify(data)
                    }).done(function (result) {
                        alert(result.FullName)
                    });
                }

            });

The controller code has one small but important change: the [FromBody] attribute on the Person argument. Without it, person will not be populated with the values from the request payload and we will waste a lot of time wondering why.

        // Ajax POST using JSON (content type: 'application/json')
        [HttpPost]
        [ValidateAntiForgeryToken]
        public ActionResult CreateAjaxFormJson([FromBody] Person person)
        {
            if (!ModelState.IsValid)
                throw new Exception($"not valid");

            _logger.LogInformation($"{nameof(person.FirstName)}={person.FirstName}");
            _logger.LogInformation($"{nameof(person.LastName)}={person.LastName}");

            return Json(new { FullName = $"{person.FirstName} {person.LastName}" });
        }

This time, the request looks like this:

But what if I want to POST more data?

This is where it gets tricky. Unlike the good old WebMethods/PageMethods, which let you post and receive multiple JSON and complex objects transparently, in MVC controllers this unfortunately can't be done: you can have only a single [FromBody] parameter (why??), as you can read here: https://docs.microsoft.com/en-us/aspnet/web-api/overview/formats-and-model-binding/parameter-binding-in-aspnet-web-api#using-frombody.

So, if you want to send a more complex type, one that includes for example arrays or multiple objects, you need to receive it as a single argument. The ‘Create ajax json complex type’ button demonstrates this. The View Model used here is:

    public class Person
    {
        [Required]
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }

    public class People
    {
        public Person[] SeveralPeople { get; set; }
    }

In the sending JavaScript, it is very important to note the identically structured ‘people’ object, as highlighted below:

             // form ajax with json complex type
            $('#btnFormAjaxJsonComplexType').on('click', function () {

                var joe = { FirstName: 'Joe' };
                var jane = { FirstName: 'Jane' };
                var people = { SeveralPeople: [joe, jane] };

                $.ajax({
                    url: '@Url.Action("CreateAjaxFormJsonComplexType")',
                    type: 'POST',
                    beforeSend: function (xhr) {
                        xhr.setRequestHeader("RequestVerificationToken",
                            $('input:hidden[name="__RequestVerificationToken"]').val());
                    },
                    dataType: 'json',
                    contentType: 'application/json; charset=utf-8',
                    data: JSON.stringify(people)
                }).done(function (result) {
                    alert(result.Count)
                });

            });

The controller code receives a single object:

        // Ajax POST of a more complex type using JSON (content type: 'application/json')
        [HttpPost]
        [ValidateAntiForgeryToken]
        public ActionResult CreateAjaxFormJsonComplexType([FromBody] People people)
        {
            if (!ModelState.IsValid)
                throw new Exception($"not valid");

            foreach (var person in people.SeveralPeople)
            {
                _logger.LogInformation($"{nameof(person.FirstName)}={person.FirstName}");
            }

            return Json(new { Count = people.SeveralPeople.Count() });
        }

And the request:

A step back

While ‘people’ might make sense to send as one array object, what happens if you want to send multiple objects that are less related? Again, with the old WebMethods this was trivial and transparent. Here, you are required to send them as a single object. For some reason, Microsoft decided to take a step back from a decent, well-working WebMethods mechanism that existed for years.

Consider the JavaScript below. It looks very much like the previous example, but this time the ‘data’ is not related to the People class on our server side. Instead, it simply binds two objects together to be sent over as JSON.

Please note that I named the parameters ‘one’ and ‘two’. This will be important for the server side binding.

             // form ajax with multiple json complex types
            $('#btnFormAjaxJsonMultipleComplexTypes').on('click', function () {

                var joe = { FirstName: 'Joe' };
                var jane = { FirstName: 'Jane' };
                var data = { one: joe, two: jane };

                $.ajax({
                    url: '@Url.Action("CreateAjaxFormJsonMultipleComplexType")',
                    type: 'POST',
                    beforeSend: function (xhr) {
                        xhr.setRequestHeader("RequestVerificationToken",
                            $('input:hidden[name="__RequestVerificationToken"]').val());
                    },
                    dataType: 'json',
                    contentType: 'application/json; charset=utf-8',
                    data: JSON.stringify(data)
                }).done(function (result) {
                    alert(result.Count)
                });

            });

The server-side controller we would like to have takes two parameters (‘one’ and ‘two’, as sent from the client JavaScript), but as explained earlier, the default model binder in MVC today does not support this, so it would not work.

        // this would FAIL and not work
        [HttpPost]
        [ValidateAntiForgeryToken]
        public ActionResult CreateAjaxFormJsonMultipleComplexType([FromBody] Person one, [FromBody] Person two)
        {
            if (!ModelState.IsValid)
                throw new Exception($"not valid");

            var people = new[] { one, two };
            foreach (var person in people)
            {
                _logger.LogInformation($"{nameof(person.FirstName)}={person.FirstName}");
            }

            return Json(new { Count = people.Length });
        }

I was looking for a way this could be done. Fortunately, it seems that we can tailor our own model binder, meaning that we can implement a custom class that binds the request body to our parameters. Here are a couple of references that helped me out:

https://docs.microsoft.com/en-us/aspnet/core/mvc/advanced/custom-model-binding?view=aspnetcore-2.2

https://www.c-sharpcorner.com/article/custom-model-binding-in-asp-net-core-mvc/

Writing a custom binder seems easy enough (although I reckon there is much to learn). You need to code a ‘binding provider’. This gets called once per parameter.

    public class CustomModelBinderProvider : IModelBinderProvider
    {
        public IModelBinder GetBinder(ModelBinderProviderContext context)
        {
            if (/* some condition to decide whether we invoke the custom binder or not */)
                return new CustomModelBinder();

            return null;
        }
    }

The CustomModelBinder needs to do some work parsing the content and creating an object. That object will be passed to your controller method. This method is also called once per parameter.

    public class CustomModelBinder : IModelBinder
    {
        public Task BindModelAsync(ModelBindingContext bindingContext)
        {
            var obj = /* some code to populate the parameter that will be passed to your controller method */;

            bindingContext.Result = ModelBindingResult.Success(obj);

            return Task.CompletedTask;
        }
    }

You need to modify the ConfigureServices in your Startup class to use this binder.

            services.AddMvc(
                config => config.ModelBinderProviders.Insert(0, new CustomModelBinderProvider()))
                .SetCompatibilityVersion(CompatibilityVersion.Version_2_2)
                .AddJsonOptions(f => f.SerializerSettings.ContractResolver = new DefaultContractResolver());

In my case I wanted to populate not one specific model type (as can be seen in the two links above), but any parameter type. In other words, I wanted something like [FromBody] that works with multiple parameters, so I named it [FromBody2]… I even added a Required option, so that if a parameter is missing from the request an exception can be triggered.

    [AttributeUsage(AttributeTargets.Property | AttributeTargets.Parameter, AllowMultiple = false, Inherited = true)]
    public class FromBody2 : Attribute
    {
        public bool Required { get; set; } = false;
    }

Note: originally I did not use [FromBody2], and the custom binder worked for ALL parameters. But then I thought it might be better to have some sort of attribute to gain better control, as input arguments in various requests might be different than what we expect.
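
For reference, such an unconditioned provider might have looked roughly like the sketch below (an illustration only, not the version used in this post); the obvious drawback is that it hijacks binding for every complex parameter, including ordinary form posts:

    // a sketch of a catch-all provider (hypothetical, not the version used below)
    public class CatchAllModelBinderProvider : IModelBinderProvider
    {
        public IModelBinder GetBinder(ModelBinderProviderContext context)
        {
            // bind every complex, non-collection parameter from the JSON body
            if (context.Metadata.IsComplexType && !context.Metadata.IsCollectionType)
                return new CustomModelBinder(new FromBody2());

            return null;
        }
    }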

My binder provider class checks whether the parameter is decorated with the [FromBody2] attribute. If found, the attribute is also passed to the custom binder so it can be used internally.

    public class CustomModelBinderProvider : IModelBinderProvider
    {
        public IModelBinder GetBinder(ModelBinderProviderContext context)
        {
            var metaData = context.Metadata as Microsoft.AspNetCore.Mvc.ModelBinding.Metadata.DefaultModelMetadata;
            var attr = metaData?.Attributes?.Attributes?.FirstOrDefault(a => a.GetType() == typeof(FromBody2));
            if (attr != null)
                return new CustomModelBinder((FromBody2)attr);

            return null;
        }
    }

Now for the binder itself. This was a bit tricky to write because I am consuming the request body stream, which can be done only once, whereas the custom binder is called once per parameter. Therefore I read the stream once and store the result in the HttpContext.Items bag for the other parameters. I reckon there could be more elegant solutions, but this will do for now. An explanation follows the example.

    public class CustomModelBinder : IModelBinder
    {
        private FromBody2 _attr;
        public CustomModelBinder(FromBody2 attr)
        {
            this._attr = attr;
        }

        public Task BindModelAsync(ModelBindingContext bindingContext)
        {
            if (bindingContext == null)
                throw new ArgumentNullException(nameof(bindingContext));

            var httpContext = bindingContext.HttpContext;
            var body = httpContext.Items["body"] as Dictionary<string, object>;

            // read the request stream once and store it for other items
            if (body == null)
            {
                string json;
                using (StreamReader sr = new StreamReader(httpContext.Request.Body))
                {
                    json = sr.ReadToEnd();

                    body = JsonConvert.DeserializeObject<Dictionary<string, object>>(json);
                    httpContext.Items["body"] = body;
                }
            }

            // attempt to find the parameter in the body stream
            if (body.TryGetValue(bindingContext.FieldName, out object obj))
            {
                JObject jsonObj = obj as JObject;

                if (jsonObj != null)
                {
                    obj = jsonObj.ToObject(bindingContext.ModelType);
                }
                else
                {
                    obj = Convert.ChangeType(obj, bindingContext.ModelType);
                }

                // set as result
                bindingContext.Result = ModelBindingResult.Success(obj);
            }
            else
            {
                if (this._attr.Required)
                {
                    // throw an informative exception notifying a missing field
                    throw new ArgumentNullException($"Missing field: '{bindingContext.FieldName}'");
                }
            }

            return Task.CompletedTask;
        }
    }

Explanation:

  • The constructor stores the [FromBody2] instance for the parameter. It is used later to check the Required property when the request is missing the expected parameter data.
  • BindModelAsync first gets the HttpContext and checks whether the body payload has already been cached in HttpContext.Items.
  • If it hasn't, a one-time (per HTTP request) read of the body payload takes place. The payload is deserialized to a Dictionary<string, object> and stored in the HttpContext for later parameters to use. Consider adding further validation code here, maybe checking the ContentType etc.
  • The TryGetValue call is the nice part. The model binder is given the name of the parameter being bound. As a reminder, our desired method signature had a couple of arguments: Person ‘one’ and Person ‘two’. So bindingContext.FieldName will be ‘one’ and, on the second invocation, ‘two’. This is very useful because we can look it up in the dictionary.
  • If the value is a JObject (a complex JSON type), we convert it to the actual parameter type using bindingContext.ModelType.
  • If the value is a primitive rather than a JObject, we simply cast it with Convert.ChangeType according to the expected type. You may consider further validations here, or just let it fail if the cast fails.
  • The final object is passed as a ‘successful result’ via ModelBindingResult.Success. This is what gets handed to our controller method.
  • If the request payload did not contain the expected parameter name, we may decide to throw an informative exception when the parameter is marked as Required.
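
As a side note, the Convert.ChangeType branch means that primitives can be bound alongside complex objects. A hypothetical action placed in the same controller (not part of the sample project) could therefore mix a Person with an int, provided the client payload uses matching property names:

        // hypothetical example; the client would POST: { "one": { "FirstName": "Joe" }, "amount": 5 }
        [HttpPost]
        [ValidateAntiForgeryToken]
        public ActionResult CreateAjaxFormJsonMixed([FromBody2] Person one, [FromBody2] int amount)
        {
            _logger.LogInformation($"{one.FirstName}: {amount}");

            return Json(new { Ok = true });
        }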

Finally, the controller code looks almost identical to the failed example shown earlier; only this time the parameters are decorated with [FromBody2]. As a reminder, we did not have to use [FromBody2] at all. It was added only to gain more control over the binding process and to avoid future situations in which this solution might not be suitable.

        // Ajax POST of multiple complex type using JSON (content type: 'application/json')
        [HttpPost]
        [ValidateAntiForgeryToken]
        public ActionResult CreateAjaxFormJsonMultipleComplexType([FromBody2] Person one, [FromBody2] Person two)
        {
            if (!ModelState.IsValid)
                throw new Exception($"not valid");

            var people = new[] { one, two };
            foreach (var person in people)
            {
                _logger.LogInformation($"{nameof(person.FirstName)}={person.FirstName}");
            }

            return Json(new { Count = people.Length });
        }

The request looks like this:

We can even add more parameters, e.g. a Required ‘three’. The current calling JavaScript does not pass ‘three’, so an informative exception will be raised, specifying that ‘three’ is missing.

        // Ajax POST of multiple complex type using JSON (content type: 'application/json')
        [HttpPost]
        [ValidateAntiForgeryToken]
        public ActionResult CreateAjaxFormJsonMultipleComplexType(
            [FromBody2] Person one,
            [FromBody2] Person two,
            [FromBody2(Required = true)] Person three)
        {
            if (!ModelState.IsValid)
                throw new Exception($"not valid");

            var people = new[] { one, two };
            foreach (var person in people)
            {
                _logger.LogInformation($"{nameof(person.FirstName)}={person.FirstName}");
            }

            return Json(new { Count = people.Length });
        }

Summary: it is really nice to have complete control over everything, but in the end I would expect Microsoft to fix this and provide an out-of-the-box implementation for receiving multiple JSON objects as multiple parameters.

Posted on 09/02/2019 in Software Development

How to hide WebMethods unhandled exception stacktrace using Response.Filter

If you’re interested in the code shown in this post, right-click here, click Save As and rename to zip.

Here's a nice trick for manipulating the outgoing ASP.NET Response; specifically, for situations in which you would like to provide global error handling and remove the stacktrace.

In case you are wondering why you would remove the stacktrace – it is considered a security issue for attackers to be able to review the stacktrace (https://www.owasp.org/index.php/Missing_Error_Handling).

Consider this code, a web method raising an exception.

[WebMethod]
public static void RaiseException()
{
	throw new ApplicationException("test");
}

And the page code:

<asp:ScriptManager runat="server" EnablePageMethods="true" />
<input type="button" value="Go" id="btn" />
<script>
	window.onload = function () {
		var btn = document.getElementById('btn');
		btn.onclick = function () {
			PageMethods.RaiseException(function () { }, function (error) {
				alert(error.get_stackTrace());
			});
		}
	}
</script>

Running this code as is and clicking the button results in a JSON response that includes the error message, exception type and stacktrace:

exception1

To remove the stacktrace it is possible to use ASP.NET's Response.Filter. The Filter property is actually a Stream, which allows us to override the default stream behavior. In this particular case the custom stream is simply a wrapper: all the stream methods except Write() just invoke the original stream's methods.

public class ExceptionFilterStream : Stream
{
    private Stream stream;
    private HttpResponse response;
    public ExceptionFilterStream(HttpResponse response)
    {
        this.response = response;
        this.stream = this.response.Filter;
    }

    public override bool CanRead { get { return this.stream.CanRead; } }
    public override bool CanSeek { get { return this.stream.CanSeek; } }
    public override bool CanWrite { get { return this.stream.CanWrite; } }
    public override long Length { get { return this.stream.Length; } }
    public override long Position { get { return this.stream.Position; } set { this.stream.Position = value; } }
    public override void Flush() { this.stream.Flush(); }
    public override int Read(byte[] buffer, int offset, int count) { return this.stream.Read(buffer, offset, count); }
    public override long Seek(long offset, SeekOrigin origin) { return this.stream.Seek(offset, origin); }
    public override void SetLength(long value) { this.stream.SetLength(value); }

    public override void Write(byte[] buffer, int offset, int count)
    {
        if (this.response.StatusCode == 500 && this.response.ContentType.StartsWith("application/json"))
        {
            // decode only the bytes passed to this Write call
            string response = System.Text.Encoding.UTF8.GetString(buffer, offset, count);

            var serializer = new JavaScriptSerializer();
            var map = serializer.Deserialize<Dictionary<string, object>>(response);
            if (map != null && map.ContainsKey("StackTrace"))
            {
                map["StackTrace"] = "Forbidden";

                response = serializer.Serialize(map);
                buffer = System.Text.Encoding.UTF8.GetBytes(response);
                this.stream.Write(buffer, 0, buffer.Length);

                return;
            }
        }

        this.stream.Write(buffer, offset, count);
    }
}

Note that all the methods simply call the original stream’s methods. Points of interest:

  • In the constructor: apparently if you don't grab the original Filter property "on time" you will run into an HttpException: "Exception Details: System.Web.HttpException: Response filter is not valid".
  • In the Write() override: from this point on the customization is up to you. As can be seen, I chose to intercept responses with status code 500 ("internal server error") whose content type is application/json. The code simply reads the text, deserializes it and, if the key "StackTrace" appears, replaces its value with the word "Forbidden". Note that there is a return statement following this replacement, so the code does not fall through to the final write. In all other cases, the original stream's Write() kicks in.

Finally, we need to actually use the custom Stream. I used Application_BeginRequest in Global.asax like so:

<script runat="server">

    protected void Application_BeginRequest(Object sender, EventArgs e)
    {
        Response.Filter = new ExceptionFilterStream(Response);
    }

</script>

The result:
exception2

Obviously you can use this technique to manipulate outgoing responses for totally different scenarios.

 
Posted on 29/11/2015 in Software Development

ASP.NET 5, DNX, stepping into MS source code

One feature which I find awesome in ASP.NET 5/vNext is the ability to download MS source code and debug it locally (I discussed this here under the global.json section). This was discussed in several sessions with Scott Hanselman and demonstrated well by Damian Edwards (Deep Dive into ASP.NET 5 – approx. minute 16:20 into the session). I find this feature great because there have been countless times that I tried to understand why .NET was behaving the way it does. Although .NET source code has sometimes been available for download and debugging, it was hectic getting it to work, and eventually I resorted to reflection tools to read .NET assemblies while trying to understand why things were working (or not). So being able to simply download the code and use it locally sounds great.

However, despite the session with Damian, I still had problems getting it to work, and Damian was kind enough to assist me where I had trouble. I assume that MS will make it easier in the future. On stage Scott and Damian discussed having a context menu option to "switch to source code". Indeed that would be great, especially if it also downloads the correct version of the source code – because that is mainly what was giving me trouble.

So here is a walk through (I used VS2015 RC):

  1. Create an ASP.NET 5 app using the template website.
  2. Run it (F5), just to see that it works, so that if you encounter errors later in the process you know they are not related to something else which is faulty.
  3. Download the correct tag of MVC from GitHub. Note: this is what fooled me. I initially downloaded the ongoing "dev" tag when I should have actually downloaded "6.0.0-beta4". You can tell the correct tag because it is specified in the Solution Explorer. Follow the steps in the image below.
  4. Once downloaded and extracted, edit global.json in your solution and add the local path to the "projects" element. Note that the slashes have to be forward slashes. As soon as you click save you will see the Solution Explorer go berserk as VS automatically loads the local source code. More importantly, you can see that the icon in VS for MVC has changed to that of local files.
  5. Basically that's it. You can now re-run the code and step through. Taking Damian's example from Build 2015, I inserted the new <cache> tag and placed a breakpoint within the CacheTagHelper class.

Once again, thanks Damian.

 
Posted on 31/05/2015 in Software Development

vNext and Class Libraries

This is a followup to the previous post discussing my first steps with vNext. I suggest reading it first.

vNext class libraries are similar in concept to regular .NET libraries. Like ASP.NET vNext apps, they do not actually build to a dll. What is great about them is that you can basically change a vNext library using Notepad and it will be compiled on the fly when you re-run the page. Unfortunately, however, code changes in vNext, including vNext libraries, require a restart of the host process in order to take effect. It would have been great if code changes did not require this restart and the state of the app were maintained.

The good news is that you can reference regular .NET class libraries from vNext. This may sound trivial, but up until the recent beta2 and VS2015 CTP 5 it wasn't possible unless you pulled a GitHub update and manually "k wrapped" your .NET assembly with a vNext library, as explained here. Fortunately CTP 5 allows referencing a regular .NET library. It is still buggy, as VS might raise build errors (it does on my machine), but the reference will actually work, and running code from a vNext MVC site actually invokes the compiled .NET dll.

Here’s how:

1. I create and open a new ASP.NET vNext web application. This time I'm using an Empty project and adding the bare minimum of code required to run an MVC app.
class lib1

project.json: Add a dependency to “Microsoft.AspNet.Mvc”.

{
    "webroot": "wwwroot",
    "version": "1.0.0-*",
    "exclude": [
        "wwwroot"
    ],
    "packExclude": [
        "node_modules",
        "bower_components",
        "**.kproj",
        "**.user",
        "**.vspscc"
    ],
    "dependencies": {
		"Microsoft.AspNet.Server.IIS": "1.0.0-beta2",
		"Microsoft.AspNet.Mvc": "6.0.0-beta2"
    },
    "frameworks" : {
        "aspnet50" : { },
        "aspnetcore50" : { }
    }
}

Startup.cs: Add configuration code and a basic HomeController like so:

using Microsoft.AspNet.Builder;
using Microsoft.Framework.DependencyInjection;

namespace WebApplication3
{
    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();
        }

        public void Configure(IApplicationBuilder app)
        {
            app.UseMvc();
        }
    }

    public class HomeController
    {
        public string Index()
        {
            return "hello";
        }
    }
}

Running the code now “as is” should show a “hello” string in the web browser.

2. I am adding a vNext class library (this isn’t mandatory for running a .NET lib but I do it anyway for the sake of the demo).
class lib2

The class lib has a single class like so:

namespace ClassLibrary1
{
    public class vNextLib
    {
        public static string GetString()
        {
            return "hello from vNext lib";
        }
    }
}

It is now possible to "add reference" to this new ClassLibrary1 as usual, or simply modify the project.json like so (partial source listed below):

	"dependencies": {
		"Microsoft.AspNet.Server.IIS": "1.0.0-beta2",
		"Microsoft.AspNet.Mvc": "6.0.0-beta2",
		"ClassLibrary1": ""
	},

Finally change the calling ASP.NET vNext web app to call the library and display the string (Startup.cs change):

    public class HomeController
    {
        public string Index()
        {
            return ClassLibrary1.vNextLib.GetString();
        }
    }

At this point I recommend using "Start without debugging" (Ctrl+F5). If you don't, any code change will stop IISExpress and you'll have to F5 again from VS. You can work that way, but it will not be easy to take advantage of the "on the fly" compilation.

Note the process ID of iisexpress: 2508.
class lib3

Now, changing the ClassLibrary1 code to say "hello from vNext lib 2" and saving causes iisexpress to restart (note the different process ID). A simple browser refresh shows the result of the change – no build of the class library was required:
class lib4

3. Now create a new Class Library, but this time not a vNext library but a regular .NET library.
class lib5

The code I use is very similar:

namespace ClassLibrary2
{
    public class RegLib
    {
        public static string GetString()
        {
            return "hello from .NET lib";
        }
    }
}

This time you can't just go to project.json directly as before, because the .NET library isn't a "dependency" like the vNext library. You must first add a reference the "old-fashioned way" (VS 2015 – CTP 5, remember?):
class lib6

This automates several things:

  • It "wraps" the .NET dll with a vNext lib wrapper, which can be viewed in a "wrap" folder that is a sibling of "src". This actually performs what was previously the "k wrap" command from that github pull mentioned earlier:
    class lib7
  • It adds the “wrap” folder to the “sources” entry in the global.json. If you recall from the previous post, global.json references source folders for the vNext solution so this is a clever way to include references to the class lib:
    {
      "sources": [
        "src",
        "test",
        "wrap"
      ]
    }
    

Now you have a "ghost vNext lib" wrapping the real .NET code, but this isn't enough. You need to add a dependency to the project.json just like with a regular vNext lib (you may have to build your .NET DLL first before ClassLibrary2 is available):

	"dependencies": {
		"Microsoft.AspNet.Server.IIS": "1.0.0-beta2",
		"Microsoft.AspNet.Mvc": "6.0.0-beta2",
		"ClassLibrary1": "",
		"ClassLibrary2": ""
	},

Now changing the Startup.cs code:

    public class HomeController
    {
        public string Index()
        {
            //return ClassLibrary1.vNextLib.GetString();
            return ClassLibrary2.RegLib.GetString();
        }
    }

Now, it seems that in VS2015 CTP 5 this is buggy, because building results in an error that complains about a missing ClassLibrary2:

Error CS0103 The name ‘ClassLibrary2’ does not exist in the current context.

If you ignore the build error and still run using ctrl+F5 – it will work:
class lib9

Changing the regular .NET code and saving has no effect as expected, until you recompile it.

Why is referencing a regular .NET lib so important? I can think of several reasons:

  1. Migrating a .NET web app to vNext. You may want to use vNext for the ASP.NET tier and use your older business logic and data layers regular .NET assemblies until you decide to migrate them.
  2. Security: If you deploy a vNext app which is not precompiled, a hacker might change your code and vNext will compile on-the-fly. You may claim that you should not deploy non-precompiled apps, but I think differently. I see it as a great advantage to be able to make changes on a server using notepad, in order to perform a quick fix or debug an issue. The downside is the potential hacking. Not sure if MS will allow a single vNext class lib to be precompiled while the rest of the site to be deployed with its source code. If not, perhaps the solution is to use a pre-compiled regular .NET class library for sensitive code, deployed with a non-precompiled vNext app.
  3. Referencing a 3rd party .NET library which does not have a vNext package might be useful.

 

Summary

I assume that Microsoft will spend more time making this wrapping mechanism transparent and flawless. Basically I see no reason why adding the dependency to the wrapping vNext lib isn’t a part of adding a reference to a .NET lib.

A good reference to vNext Class Libraries can be found in the video session of Scott Hanselman and David Fowler: http://channel9.msdn.com/Events/TechEd/NorthAmerica/2014/DEV-B411.

 
Posted on 25/01/2015 in Software Development

First steps with ASP.NET vNext

This is a big topic. I have had trouble writing about this because it is so big. So I decided to split it into several blog posts. This first post is divided into several parts: intro, cool features, getting started and the K* world.

Disclaimer: much of the content here is based on beta1, and then beta2, which was released while I was writing. Lots of it is bound to change, including the names of the various components shown here.

Introduction
ASP.NET 5, a.k.a. vNext (a temporary name), is a major change to ASP.NET programming. It includes many things: this isn't just an upgrade of several .NET assemblies, it is also a change of concept. There are quite a few resources available on the internet detailing the vNext changes in depth, so instead of repeating them here I will settle for providing several references.

Cool vNext features
There are quite a few cool features but I find the following two to be most significant:

Cross Platform
The ability to run ASP.NET on Linux or Mac is a long-awaited revolution (did someone just shout Mono?). Time and again I encountered customers who were opposed to using Windows-based servers just because they are Microsoft's. The option to develop .NET on a Windows machine and run it on a non-Windows server is something I really look forward to. For Windows-based servers, IIS is supported but not required; your app can also host itself.

Yes, I am well aware of the Mono project and that it exists for years. But it makes a world of a difference to have Microsoft officially support this capability.

On-the-fly code compilation
Up until now this feature was available only to Web Site development (File->New->Web Site). In a Web Application (File->New->Project->Web) you can change a cshtml/aspx file, refresh the browser and watch the changes. But in Web Site you can also change the “code behind” files (e.g. aspx.cs), handlers, App_Code, resx files, web services, global.asax and basically everything which isn’t a compiled DLL. This allowed you to make changes on a deployed environment. This is a very important ability as it allows you to change the code using your preferred Notepad app without requiring Visual Studio and a compiler. Why is this important? Here are several examples: a) you can insert debug messages to understand a weird behavior on a production server; b) potentially fix errors on the spot without having to redeploy the entire app or a hotfix; or c) simply change a resource file. Time and again I used this ability on different servers for various purposes. In fact, to me this cool “on the fly” compilation was probably the most significant capability that made me time and again choose Web Site over Web Application. But that also means Web Forms over MVC.

On a side note, to be honest, up until now I am not convinced that MVC is a better paradigm than Web Forms. I also prefer Web Forms because of the great PageMethods/WebMethods js client proxies. Unfortunately it is clear that Web Forms will become more and more obsolete as Microsoft pushes MVC.

vNext works with MVC, not with Web Forms. Web Forms continues to be supported for now, but it will not enjoy the vNext features. Still, I was very thrilled to hear and see that vNext allows on-the-fly compilation too. Not only can you change Controllers and Models and run your software, you can also change vNext Class Libraries using your favorite Notepad – and that is something you cannot do in a Web Site: you cannot change a compiled DLL using Notepad! This is a very cool feature indeed, although it does make you wonder how you are supposed to protect your deployed code from hackers who may want to change your license mechanism or bypass your security. When I asked Scott Hunter about this, he replied that there is a pre-compilation module and that it will be possible to precompile the entire site. I'm not sure whether that means you can pre-compile a single assembly. Perhaps the way to overcome this will be to reference a regular class library from vNext – it is doable: you can "wrap" a regular assembly as a vNext assembly and use it from a vNext app (I will demonstrate this in the next blog post).

However there is currently a difference in the on-the-fly compilation in favor of Web Site over vNext: In vNext, every change to the code (excluding cshtml) – will trigger a reset of the host. For example changing Controller code while running in IIS Express will not only restart the App Domain but will actually terminate the process and recreate a different one. This means losing any state your application may have had (statics, caching, session etc). With Web Site you can change quite a few files without losing your state (App_Code or resx files will restart the App Domain, but almost any other file change such as ashx/asmx that have no code behind or aspx.cs/ascx.cs – will work great). So making changes during development will not lose your state, or making changes on a production server – does not necessarily mean that all your users get kicked out or lose their session state. The current behavior in vNext is different and limited. I emailed this to Scott Hunter and he replied that if they do decide to support this, it’ll be most likely post RTM. I sure hope that this will be on their TODO list.

Other cool features
There are several other features which are also important. Using vNext you can have side-by-side .NET runtimes. Imagine the following scenario: you code an application using vNext and deploy it. Several months later you develop another app using vNext "version 2". You need to run both apps on the same server. Today you cannot do that, because the deployed apps all use the .NET installed on the same machine: an app developed to run on .NET 4 might break if you install .NET 4.5 on the same server to support a different app. With vNext you should be able to deploy your app with the .NET runtime assemblies it was designed to be used with. Moreover, vNext supports a "core set" of .NET dlls for the server. So not only do you not have to install irrelevant Windows Updates (e.g. hotfixes for WPF), but the size of the deployed dlls is greatly reduced, making it feasible to deploy them with your app.

One more feature I find interesting is the ASP.NET Console Application. What!? ASP.NET Console Application!? Basically this means writing a Console application which has no exe and no build; the same vNext mechanism that compiles your vNext MVC app on the fly will compile and run your Console app. Eventually, when released, perhaps vNext will not be categorized as an ASP.NET-only component. vNext (or however it is eventually named) will probably be categorized as a "new .NET platform", capable of running MVC, a Console app or whatever Microsoft is planning to release based on the on-the-fly-no-build platform. Although I did not try it, I assume that you can run a vNext Console app on Linux or Mac (disclaimer: I haven't yet tried running vNext on a non-Windows platform).

console
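
To illustrate, a vNext console app is little more than a Program class and a project.json; the runtime compiles and runs the source on the fly. A minimal sketch, assuming the beta-era entry point conventions (names are illustrative), might look like this:

using System;

namespace ConsoleApp1
{
    public class Program
    {
        // no exe is produced; the K runtime compiles this source and invokes Main on the fly
        public void Main(string[] args)
        {
            Console.WriteLine("Hello from a vNext console app");
        }
    }
}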


Getting started

  1. Download and install VS 2015 Preview build (http://www.visualstudio.com/en-us/downloads/visual-studio-2015-downloads-vs.aspx).
  2. Create a new vNext Web Application using File->New->Project->Web->ASP.NET Web Application. Select ASP.NET 5 Starter Web with the vNext icon.
  3. After the starter project opens, notice the “run” menu (F5). It should point to IIS Express by default. Run using F5.

If all goes well, the demo website will be launched using IIS Express. Going back to VS in vNext there are several noticeable changes that can be seen in the Solution Explorer:

  • global.json: this is a solution-level config file. Currently it holds references to source and test folders. I assume further solution-wide and global project configuration will be placed in this file. One usage would be to specify external source files and packages. Taking Scott Hanselman's example: you can download the MVC source in order to fix a bug and point to it locally instead of using the official release – at least until Microsoft fixes it. Personally I see myself using it to debug and step through .NET code. I have had more than enough situations struggling to understand under-the-hood behavior by disassembling .NET dlls and trying to figure out what I was doing wrong. As somehow the ability to download the released .NET PDBs never worked well for me, downloading the actual source code and debugging it locally seems useful. Another usage of this file is for wrapped .NET class libraries (beta2 CTP5), which I will describe in the next blog post.
  • project.json: this somewhat resembles the web.config. It contains “different kinds” of configurations, commands, dependencies etc. Dependencies are basically “packages”: NuGet packages or your very own custom vNext class libraries.
  • Static files of the website reside under wwwroot. This way there’s a clean separation from the website’s static files and the actual MVC code. (You can change from wwwroot to a different root.)
  • Startup.cs. This is the entry point in vNext. If you place breakpoints in the different methods in this file you will see that when the website starts this is indeed the starting point. Taking a look at the code in Startup.cs of the starter project you may notice that there are many things that we are used to receive for granted such as identity authentication, static files and MVC itself. In vNext we are supposed to “opt in” to the services we want, giving us full control of the http pipeline.

            // Add static files to the request pipeline.
            app.UseStaticFiles();

            // Add cookie-based authentication to the request pipeline.
            app.UseIdentity();

            // Add MVC to the request pipeline.
            app.UseMvc(routes =>
            {
                routes.MapRoute(
                    name: "default",
                    template: "{controller}/{action}/{id?}",
                    defaults: new { controller = "Home", action = "Index" });

                // Uncomment the following line to add a route for porting Web API 2 controllers.
                // routes.MapWebApiRoute("DefaultApi", "api/{controller}/{id?}");
            });

The K* world

An ASP.NET vNext application is not supposed to be bound to IIS or IIS Express as we are used to. In fact, it can be self-hosted and if I understood correctly, it should also run well on other platforms which are compatible with OWIN. It basically means that IIS is no longer a mandatory requirement for running ASP.NET applications. Any web server compatible with OWIN should be able to run a vNext application! OWIN:

OWIN defines a standard interface between .NET web servers and web applications. The goal of the OWIN interface is to decouple server and application

A quick demonstration of this capability is to change the “run” menu from the default IIS Express to “web” (in the starter project) and then running the web site (this is available in the latest update of the VS2015 preview – CTP5).

run menu

Notice that the running process is shown within a console window. It is the klr.exe and it is located within C:\Users\evolpin\.kre\packages\KRE-CLR-x86.1.0.0-beta2\bin. To understand what happened here we need to further understand how vNext works and clarify the meaning of KRE, KVM and K.

As previously mentioned, the vNext world will have the capability to run side-by-side. It is also open source, so you can download it and make changes. This means that development will change from what we are used to. Today, when you start development, you are usually bound to the specific version of the framework installed on your dev machine (e.g. .NET 4.5.1). In the vNext world you may have multiple .NET runtimes for different projects running on the same machine. Development could become an experience that allows you to work alongside the latest .NET builds. Microsoft is moving away from today's situation, in which you have to wait for the next .NET update to get the latest bug fixes or features, to one in which you can download in-development code bits or the latest features without having to wait for an official release.

Getting back to the version of the framework, the KRE (K Runtime Environment) basically is a “container” of the .NET packages and assemblies of the runtime. It is the actual .NET CLR and packages that you will be running your code with (and again, you can target different versions of the KRE on the same server or dev machine).

Now that we more or less understand what the KRE is and that you may have several KRE versions, it is time to get acquainted with a command line utility called KVM (K Version Manager). This utility assists in setting up and configuring the KRE runtime that you will use: it is used to set up multiple KRE runtimes, switch between them, etc. Currently, in the beta stages, you need to download the KVM manually, but I assume it'll be deployed with a future VS release.

To download and install go to: https://github.com/aspnet/home#install-the-k-version-manager-kvm. There is a Windows powershell command line to install. Just copy-paste it to a cmd window and within a short time KVM will be available for you to use.

kvm-install

If you open the highlighted folder you will see the KVM scripts. From now on you should be able to use the KVM from every directory as it is in the PATH. If you go to the parent folder you can see aliases (KREs may have aliases) and packages, which actually contain different KRE versions you have locally. If you open <your profile>\.kre\packages\KRE-CoreCLR-x86.1.0.0-beta1\KRE-CoreCLR-x86.1.0.0-beta1\bin (will be different if you have a different beta installed), you will actually see all the .NET stuff of the Cloud Optimized version. That 13MB is the .NET used to run your cloud optimized server code (see Scott Hunter and Scott Hanselman’s video).

Re-open a cmd window (to allow the PATH changes to take effect). If you type ‘kvm’ and press enter you’ll see a list of kvm commands. An important kvm command is ‘kvm list‘. It will list the installed KRE versions that you have ready for your account:

kvm-list

As you can see there are 4 runtime versions installed for my user account so far, including 32/64 versions and CLR and CoreCLR. To make a particular runtime active you need to ‘kvm use‘ it. For example: ‘kvm use 1.0.0-beta1 -r CoreCLR’ will mark it as Active:

kvm-list2

As you can see, ‘kvm use’ changes the PATH environment variable to point to the active KRE. We need this PATH change to allow us to use the ‘k‘ batch file that exists in the targeted KRE folder. The k batch file allows us to perform several things within the currently active KRE environment (see below). In short, this is one way to switch between different KRE versions on the same machine for the current user account (side-by-side versioning).

Just to make things a little clearer, I went to the KRE folders (C:\Users\evolpin\.kre\packages on my machine and account) and found the k.cmd batch file in each of them. I edited a couple of them: x86 CLR and CoreCLR. In each cmd file I placed an “echo” with the version and then used the ‘kvm use’ to switch between them and run ‘k’ per active env. This is what it looks like:

kvm-list3

You can upgrade the versions of the KRE by using ‘kvm upgrade‘. Note that the ‘kvm upgrade’ does not upgrade all KRE’s at once so you may have to add some command line parameters to upgrade specific KRE versions. In the example below, I have requested to upgrade the CoreCLR KRE:

kvm upgrade

After upgrading the x86 CLR and CoreCLR I found two new folders: C:\Users\evolpin\.kre\packages\KRE-CLR-x86.1.0.0-beta2 and C:\Users\evolpin\.kre\packages\KRE-CoreCLR-x86.1.0.0-beta2 as expected.

OK, so why did we go through all this trouble? As mentioned earlier, each KRE comes with the ‘k’ batch file (located for example in: C:\Users\evolpin\.kre\packages\KRE-CoreCLR-x86.1.0.0-beta1\bin). ‘k’ is used for various stuff such as running the vNext Console Apps or running the self-hosted env. Now is a good time to go back and remember that the website was last run not by IIS Express but by switching to “web” and running a self-hosted site using the klr.exe process (remember?). If we open the project.json file of the Starter project we can observe the “web” command in the “commands” section:


"web": "Microsoft.AspNet.Hosting --server Microsoft.AspNet.Server.WebListener --server.urls http://localhost:5000",

This basically means that when we run the "web" command, the ASP.NET host within Microsoft.AspNet.Hosting will launch a listener on port 5000. From the command line we can use ‘k web’ to instruct the currently active KRE to run the "web" command, like so (you must be in the same folder as project.json or an exception will be thrown):

k web

So now we know what happened in VS when we switched to “web” in the “run” menu earlier: VS was actually running the KRE using the “web” command similar to typing ‘k web’ from the command line.

To summarize my understanding so far:

  • KRE is the current .NET runtime being used.
  • KVM is the utility which manages the KREs for my account, allowing me to upgrade or switch between the active KRE.
  • K is a batch file which needs to be invoked in the folder of the app allowing me to run vNext commands. project.json needs to be available in order for ‘k’ to know what to do. K is relevant to the active KRE.

Conclusions

I see vNext as a major leap and I think MS is on the right path although there’s still a long way to go. I recommend going through the videos specified in the beginning of this blog post, especially the two videos with Scott Hanselman.

In my next post I will plan to discuss vNext and Class Libraries.

 

 
Posted on 04/01/2015 in Software Development

CSRF and PageMethods / WebMethods

If you’re interested in the code shown in this post, right-click here, click Save As and rename to zip.

In short, a Cross-Site Request Forgery (CSRF) attack is one that uses a malicious website to send requests to a targeted website that the user is logged into. For example, the user is logged in to a bank in one browser tab and uses a second tab to view a different (malicious) website, reached via email or a social network. The malicious website invokes actions on the target website, exploiting the fact that the user is logged into it in the first tab. Examples of such attacks can be found on the internet (see Wikipedia).

CSRF prevention is quite demanding. If you follow the Cross-Site Request Forgery (CSRF) Prevention Cheat Sheet you'll notice that the general recommendation is to use a "Synchronizer Token Pattern". Other prevention techniques are listed as well, along with their disadvantages. The Synchronizer Token Pattern requires that requests carry an anti-forgery token that is validated on the server side; a missing or invalid token causes the request to fail.
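
In other words, the server keeps a token (typically in session), the client echoes it back with every call, and the server compares the two. A conceptual sketch only, with illustrative names (the actual implementation is developed below):

using System.Web;

public static class CsrfCheck
{
    // conceptual sketch: compare the token stored in session with the one sent by the client.
    // "csrf-token" and "X-CSRF-Token" are illustrative names, not the ones used later in this post.
    public static bool IsTokenValid(HttpContext ctx)
    {
        var expected = ctx.Session["csrf-token"] as string;   // stored when the page was rendered
        var actual = ctx.Request.Headers["X-CSRF-Token"];     // echoed back by the client on each call
        return expected != null && expected == actual;
    }
}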

ASP.NET WebForms are supposed to prevent attacks on two levels: the "full postback" level (you can read here how to accomplish this) and Ajax calls. According to a post made by Scott Gu several years ago, ASP.NET Ajax web methods are CSRF-safe because they handle only POST by default (which is known today to be an insufficient CSRF prevention technique) and because they require a content type header of application/json, which is not added by the browser when using HTML element tags for such an attack. It is more than possible that I am missing something, but in my tests I found this claim to be incorrect: I had no problem invoking a web method from a different website using GET from a script tag. Unfortunately ASP.NET didn't detect or raise any problem doing so (as will be shown below).

Therefore I was looking into adding a Synchronizer Token Pattern to the requests. In order not to add a token argument to every server-side method, one technique is to add the CSRF token to the request headers. There are several advantages to this technique: you don’t need to specify a token argument in your server and client method calls, and more importantly, you do not need to modify your existing website web methods. If you’re using MVC and jQuery Ajax you can achieve this quite easily, as shown here, or you can follow this guide. However, if you are using PageMethods/WebMethods, the Synchronizer Token Pattern can prove more difficult, as you’ll need to intercept the HTTP request in order to add that header.
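For reference, this is roughly what that interception looks like with jQuery (not used in the rest of this post). It is only a minimal sketch: the hidden field it reads the token from is an assumption and would have to be rendered into the page by the server.

// minimal jQuery sketch: add the CSRF token header to every Ajax request
// the hidden field 'antiForgeryToken' is hypothetical and assumed to be rendered by the server
var csrfToken = document.getElementById('antiForgeryToken').value;
$(document).ajaxSend(function (event, xhr) {
    xhr.setRequestHeader('RequestVerificationToken', csrfToken);
});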

Test websites
I set up a couple of websites for this solution. One website simulates the bank and the other simulates an attacker.
The bank website has both a PageMethod and a WebMethod. The reason for setting up both is to demonstrate the two flavors, and because the CSRF token is stored in the session, which is not available by default for WebMethods (as opposed to PageMethods).

public partial class _Default : System.Web.UI.Page
{
    [WebMethod]
    public static bool TransferMoney(int fromAccount, int toAccount, int amount)
    {
        // logic

        return true;
    }
}
[System.Web.Script.Services.ScriptService]
public class MyWebService  : System.Web.Services.WebService {

    [WebMethod]
    public bool TransferMoney(int fromAccount, int toAccount, int amount)
    {
        // logic

        return true;
    }
}

Sample bank page code for invoking both methods:

<asp:ScriptManager runat="server" EnablePageMethods="true">
    <Services>
        <asp:ServiceReference Path="~/MyWebService.asmx" />
    </Services>
</asp:ScriptManager>
<input type="button" value='WebMethod' onclick='useWebMethod()' />
<input type="button" value='PageMethod' onclick='usePageMethod()' />
<script type="text/javascript">
    function usePageMethod() {
        PageMethods.TransferMoney(123, 456, 789, function (result) {

        });
    }

    function useWebMethod() {
        MyWebService.TransferMoney(123, 456, 789, function (result) {

        });
    }
</script>

This is how it looks:
[screenshot]

The web.config allows invoking via GET (note: you don’t have to allow GET; I’m deliberately allowing a GET to demonstrate an easy CSRF attack and how this solution attempts to block such calls):

<configuration>
  <system.web>
    <compilation debug="true" targetFramework="4.0" />
    <webServices>
      <protocols>
        <add name="HttpGet" />
        <add name="HttpPost" />
      </protocols>
    </webServices>
  </system.web>
</configuration>

The attacking website is quite simple and demonstrates a GET attack via the script tag:

<script src='http://localhost:55555/MyWebsite/MyWebService.asmx/TransferMoney?fromAccount=111&toAccount=222&amount=333'></script>
<script src='http://localhost:55555/MyWebsite/Default.aspx/TransferMoney?fromAccount=111&toAccount=222&amount=333'></script>

As you can see, running the attacking website easily calls the bank’s WebMethod:
[screenshot]

Prevention
The prevention technique is as follows:

  1. When the page is rendered, generate a unique token which will be inserted into the session.
  2. On the client side, add the token to the request headers.
  3. On the server side, validate the token.

Step 1: Generate a CSRF validation token and store it in the session.

public static class Utils
{
    public static string GenerateToken()
    {
        var token = Guid.NewGuid().ToString();
        HttpContext.Current.Session["RequestVerificationToken"] = token;
        return token;
    }
}
  • Line 5: I used a Guid, but any unique token generation function can be used here.
  • Line 6: Insert the token into the session. Note: you might want to first ensure that a session actually exists.

Note: you might have to “smarten up” this method to handle error pages and internal redirects as you might want to skip token generation in certain circumstances. You may also check if a token already exists prior to generating one.

Step 2: Add the token to client requests.

<script type="text/javascript">
    // CSRF
    Sys.Net.WebRequestManager.add_invokingRequest(function (sender, networkRequestEventArgs) {
        var request = networkRequestEventArgs.get_webRequest();
        var headers = request.get_headers();
        headers['RequestVerificationToken'] = '<%= Utils.GenerateToken() %>';
    }); 

    function usePageMethod() {
        PageMethods.TransferMoney(123, 456, 789, function (result) {

        });
    }

    function useWebMethod() {
        MyWebService.TransferMoney(123, 456, 789, function (result) {

        });
    }
</script>
  • Line 3: Luckily ASP.NET provides a client side event to intercept the outgoing request.
  • Line 6: Add the token to the request headers. The server side call to Utils.GenerateToken() executes on the server side as shown above, and the token is rendered onto the client to be used here.
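As a side note, if this script lived in an external .js file where a server-side <%= %> expression is not available, one option is to render the token into a hidden field and read it on the client. A minimal sketch, assuming a hypothetical hidden input with id 'antiForgeryToken' rendered somewhere on the page:

// sketch: read the token from a hidden field instead of an inline <%= %> expression
// (assumes the page renders <input type="hidden" id="antiForgeryToken" value="..." /> somewhere)
Sys.Net.WebRequestManager.add_invokingRequest(function (sender, networkRequestEventArgs) {
    var headers = networkRequestEventArgs.get_webRequest().get_headers();
    headers['RequestVerificationToken'] = document.getElementById('antiForgeryToken').value;
});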

Step 3: Validate the token on the server side.

To analyze and validate the request on the server side, we can use the Global.asax file and the Application_AcquireRequestState event (which is supposed to have the Session object available by now). You may choose a different location to validate the request.

protected void Application_AcquireRequestState(object sender, EventArgs e)
{
    var context = HttpContext.Current;
    HttpRequest request = context.Request;

    // ensure path exists
    if (string.IsNullOrWhiteSpace(request.PathInfo))
        return;

    if (context.Session == null)
        return;
        
    // get session token
    var sessionToken = context.Session["RequestVerificationToken"] as string;

    // get header token
    var token = request.Headers["RequestVerificationToken"];

    // validate
    if (sessionToken == null || sessionToken != token)
    {
        context.Response.Clear();
        context.Response.StatusCode = 403;
        context.Response.End();
    }
}
  • Line 7: Ensure we’re operating on WebMethods/PageMethods. For urls such as: http://localhost:55555/MyWebsite/Default.aspx/TransferMoney, TransferMoney is the PathInfo.
  • Line 10: We must have the session available to retrieve the token from. You may want to add an exception here too if the session is missing.
  • Line 14: Retrieve the session token.
  • Line 17: Retrieve the client request’s token.
  • Line 20: Token validation.
  • Line 22-24: Decide what to do if the token is invalid. One option would be to return a 403 Forbidden (you can also customize the text or provide a subcode).

When running the bank website now and invoking the PageMethod, you can see the token in the request headers (at this point the asmx WebMethod doesn’t have a Session, so it can’t be properly validated). Note that the request ended successfully.
[screenshot]

When running the attacking website, note that the PageMethod was blocked, but not the asmx WebMethod.

[screenshot]

  • The first TransferMoney is the unblocked asmx WebMethod, as it lacks the Session support that is vital for retrieving the token.
  • The second TransferMoney was blocked as desired.

Finally we need to add Session support to our asmx WebMethods. Instead of going through all of the website’s asmx WebMethods and modifying them to require the Session, we can add the session requirement from a single location in our Global.asax file:

private static readonly HashSet<string> allowedPathInfo = new HashSet<string>(StringComparer.InvariantCultureIgnoreCase) { "/js", "/jsdebug" };
protected void Application_BeginRequest(Object sender, EventArgs e)
{
    if (".asmx".Equals(Context.Request.CurrentExecutionFilePathExtension) && !allowedPathInfo.Contains(Context.Request.PathInfo))
        Context.SetSessionStateBehavior(SessionStateBehavior.Required);
}
  • Line 1: Paths that we would like to exclude can be listed here. asmx js and jsdebug paths render the client side proxies and do not need to be validated.
  • Line 4-5: If asmx and not js/jsdebug, add the Session requirement so it becomes available for the token validation later on.

Now running the attacking website we can see that our asmx was also blocked:
[screenshot]

Addendum

  • Naturally you can add additional hardening as you see fit.
  • Also note that using the Session with the suggested technique has an issue when working with the same website in several tabs: each time Utils.GenerateToken is called it replaces the token in the session. You might therefore also check the referrer header and decide whether to issue a warning instead of failing the request, or simply generate the token on the server side only once (i.e. check whether a token already exists and generate one only if it does not).
  • Consider adding a “turn off” switch to the CSRF validation, in case you run into situations where you need to cancel it.
  • Moreover: consider creating an attribute for methods or web services that allows skipping the validation. You never know when these might come in handy.
 
Posted on 07/09/2014 in Software Development

Taking a [passport] photo using your camera and HTML 5 and uploading the result with ASP.NET Ajax

If you just want the sample, right click this link, save as, rename to zip and extract.


You can use your HTML 5 browser to capture video and photos, provided your browser supports this feature (at the time of this writing, this example works well on Firefox and Chrome but not on IE11).
I have followed some good references on the internet on how to do that, but I also needed an implementation for taking a “passport”-style photo and uploading the result. That is the intention of this post.

Steps:

  1. Capture video and take snapshot.
  2. Display target area.
  3. Crop the photo to a desired size.
  4. Upload the result to the server.

Step 1: Capture video and take snapshot.
This step relies mainly on Eric Bidelman’s excellent article. After consideration I decided not to repeat the steps necessary for taking a snapshot using HTML 5, so if you require a detailed explanation please read his article. However, the minimum code is pretty straightforward, so consider reading on. What you basically need is a browser that supports the video element and getUserMedia(). Also required is a canvas element for showing a snapshot of the video source.

<!DOCTYPE html>
<html>
<head>
    <title></title>
</head>
<body>
    <video autoplay width="320" height="240"></video>
    <canvas width='320' height='240' style="border:1px solid #d3d3d3;"></canvas>
    <div>
        <input type="button" value="start capture" onclick="startCapture()" />
        <input type="button" value="take snapshot" onclick="takePhoto()" />
        <input type="button" value="stop capture" onclick="stopCapture()" />
    </div>
    <script type="text/javascript">
        var localMediaStream = null;
        var video = document.querySelector('video');
        var canvas = document.querySelector('canvas');

        function takePhoto() {
            if (localMediaStream) {
                var ctx = canvas.getContext('2d');
                ctx.drawImage(video, 0, 0, 320, 240);
            }
        }

        navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia;
        window.URL = window.URL || window.webkitURL;

        function startCapture() {
            navigator.getUserMedia({ video: true }, function (stream) {
                video.src = window.URL.createObjectURL(stream);
                localMediaStream = stream;
            }, function (e) {
                console.log(e);
            });
        }

        function stopCapture() {
            video.pause();
            localMediaStream.stop();
        }
    </script>
</body>
</html>

Several points of interest:

  • Line 7: Video element for showing the captured stream. My camera seems to show a default of 640×480 but here this is set to 320×240 so it will take less space on the browser. Bear this in mind, it’ll be important for later.
  • Line 8: Canvas element for the snapshots. Upon clicking ‘take photo’, the captured stream is rendered to this canvas. Note the canvas size.
  • Line 22: Drawing the snapshot image onto the canvas.
  • Line 26: Consider testing support for getUserMedia (see the sketch right after this list).
  • Line 30: Capture video.
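Here is a minimal sketch of such a test (my addition, not part of the original sample): if getUserMedia is still unavailable after applying the vendor prefixes, stop before wiring up the capture flow.

// minimal feature-detection sketch (assumes the same prefixed assignment used above)
navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia;
if (!navigator.getUserMedia) {
    // hypothetical fallback: tell the user and bail out before wiring up the buttons
    alert('getUserMedia is not supported in this browser');
}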

The result, after starting a capture and taking a snapshot (video stream is on the left, canvas with snapshot is to the right):
[screenshot]

Step 2: Display target area.
As the camera takes pictures in “landscape”, we will attempt to crop the image to the desired portrait dimensions. Therefore the idea is to place a div on top of the video element to mark the target area, where the head is to be placed.

[screenshot]

The code:

<!DOCTYPE html>
<html>
<head>
    <title></title>
    <style type="text/css">
        .container {
            width: 320px;
            height: 240px;
            position: relative;
            border: 1px solid #d3d3d3;
            float: left;
        }

        .container video {
            width: 100%;
            height: 100%;
            position: absolute;
        }

        .container .photoArea {
            border: 2px dashed white;
            width: 140px;
            height: 190px;
            position: relative;
            margin: 0 auto;
            top: 40px;
        }

        canvas {
            float: left;
        }

        .controls {
            clear: both;
        }
    </style>
</head>
<body>
    <div class="container">
        <video autoplay></video>
        <div class="photoArea"></div>
    </div>
    <canvas width='320' height='240' style="border: 1px solid #d3d3d3;"></canvas>
    <div class="controls">
        <input type="button" value="start capture" onclick="startCapture()" />
        <input type="button" value="take snapshot" onclick="takePhoto()" />
        <input type="button" value="stop capture" onclick="stopCapture()" />
    </div>
    <script type="text/javascript">
        var localMediaStream = null;
        var video = document.querySelector('video');
        var canvas = document.querySelector('canvas');

        function takePhoto() {
            if (localMediaStream) {
                var ctx = canvas.getContext('2d');
                ctx.drawImage(video, 0, 0, 320, 240);
            }
        }

        navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia;
        window.URL = window.URL || window.webkitURL;

        function startCapture() {
            navigator.getUserMedia({ video: true }, function (stream) {
                video.src = window.URL.createObjectURL(stream);
                localMediaStream = stream;
            }, function (e) {
                console.log(e);
            });
        }

        function stopCapture() {
            video.pause();
            localMediaStream.stop();
        }
    </script>
</body>
</html>

As you can see, the code was modified to place the dashed area on top of the video. Points of interest:

  • Lines 20-27: note the dimensions of the target area. Also note that the target area is positioned horizontally automatically using ‘margin’.
  • Line 41: The dashed area.

Step 3: Crop picture to desired size.
Luckily the drawImage() method can not only resize a picture but also crop it. A good reference on drawImage is here, and a very good example is here. Still, this is tricky, because here the source isn’t an existing image as in that example, but a captured video source whose native size is not 320×240 but 640×480. It took me some time to understand this and to figure out that it means the x, y, width and height source arguments should be doubled (and if this understanding is incorrect I would appreciate a comment with the correct explanation).

As cropping might be a confusing business, my suggestion is to first “crop without cropping”. This means invoking drawImage() to crop, but ensuring that the target is identical to the source in dimensions.

function takePhoto() {
    if (localMediaStream) {
        var ctx = canvas.getContext('2d');
        // original draw image
        //ctx.drawImage(video, 0, 0, 320, 240); 

        // crop without cropping: source args are doubled; 
        // target args are the expected dimensions
        // the result is identical to the previous drawImage
        ctx.drawImage(video, 0, 0, 640, 480, 0, 0, 320, 240);
    }
}

The result:
[screenshot]

Let’s review the arguments (skipping the first ‘video’ argument):

  • The first pair is the x,y starting point within the source.
  • The second pair is the width and height of the source region.
  • The third pair is the x,y starting point on the target canvas (these can be greater than zero, for example if you would like some padding).
  • The fourth pair is the width and height on the target canvas, which effectively also allows you to resize the picture.

Now let’s review the dimensions in our case:
[diagram]

In this example the target area is 140×190 and starts at y=40. As the width of the capture area is 320 and the target area is 140, each margin is 90. So basically we should start cropping at x=90.

But since everything in the source picture is doubled, as explained before, the drawImage call looks different: the first four arguments are doubled:

function takePhoto() {
    if (localMediaStream) {
        var ctx = canvas.getContext('2d');
        //ctx.drawImage(video, 0, 0, 320, 240); // original draw image
        //ctx.drawImage(video, 0, 0, 640, 480, 0, 0, 320, 240); // entire image

        //instead of using the requested dimensions "as is"
        //ctx.drawImage(video, 90, 40, 140, 190, 0, 0, 140, 190);

        // we double the source args but not the target args
        ctx.drawImage(video, 180, 80, 280, 380, 0, 0, 140, 190);
    }
}

The result:
[screenshot]
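If you prefer not to hard-code the doubled values, a sketch like the following (my addition, using the same element sizes as in this post) computes the scale factors from the video’s intrinsic dimensions, which also helps when the camera provides a source of different dimensions:

function takePhotoScaled() {
    if (localMediaStream) {
        var ctx = canvas.getContext('2d');
        // intrinsic capture size vs. displayed size (e.g. 640 / 320 = 2, 480 / 240 = 2)
        var scaleX = video.videoWidth / 320;
        var scaleY = video.videoHeight / 240;
        // target area: 140x190, horizontally centered, 40px from the top
        var x = (320 - 140) / 2, y = 40;
        ctx.drawImage(video, x * scaleX, y * scaleY, 140 * scaleX, 190 * scaleY, 0, 0, 140, 190);
    }
}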

Step 4: Upload the result to the server.
Finally we would like to upload the cropped result to the server. For this purpose we will take the image from the canvas and set it as a source of an img tag.

<!DOCTYPE html>
<html>
<head>
    <title></title>
    <style type="text/css">
        .container {
            width: 320px;
            height: 240px;
            position: relative;
            border: 1px solid #d3d3d3;
            float: left;
        }

        .container video {
            width: 100%;
            height: 100%;
            position: absolute;
        }

        .container .photoArea {
            border: 2px dashed white;
            width: 140px;
            height: 190px;
            position: relative;
            margin: 0 auto;
            top: 40px;
        }

        canvas, img {
            float: left;
        }

        .controls {
            clear: both;
        }
    </style>
</head>
<body>
    <div class="container">
        <video autoplay></video>
        <div class="photoArea"></div>
    </div>
    <canvas width='140' height='190' style="border: 1px solid #d3d3d3;"></canvas>
    <img width="140" height="190" />
    <div class="controls">
        <input type="button" value="start capture" onclick="startCapture()" />
        <input type="button" value="take snapshot" onclick="takePhoto()" />
        <input type="button" value="stop capture" onclick="stopCapture()" />
    </div>
    <script type="text/javascript">
        var localMediaStream = null;
        var video = document.querySelector('video');
        var canvas = document.querySelector('canvas');

        function takePhoto() {
            if (localMediaStream) {
                var ctx = canvas.getContext('2d');
                //ctx.drawImage(video, 0, 0, 320, 240); // original draw image
                //ctx.drawImage(video, 0, 0, 640, 480, 0, 0, 320, 240); // entire image

                //instead of
                //ctx.drawImage(video, 90, 40, 140, 190, 0, 0, 140, 190);

                // we double the source coordinates
                ctx.drawImage(video, 180, 80, 280, 380, 0, 0, 140, 190);
                document.querySelector('img').src = canvas.toDataURL('image/jpeg');
            }
        }

        navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia;
        window.URL = window.URL || window.webkitURL;

        function startCapture() {
            navigator.getUserMedia({ video: true }, function (stream) {
                video.src = window.URL.createObjectURL(stream);
                localMediaStream = stream;
            }, function (e) {
                console.log(e);
            });
        }

        function stopCapture() {
            video.pause();
            localMediaStream.stop();
        }
    </script>
</body>
</html>
  • Lines 43-44: Note that the canvas has been resized to the desired image size, and the new img element is also resized to those dimensions. If we don’t match them we might see the cropped image stretched or resized not according to the desired dimensions.
  • Line 66: We instruct the canvas to return a jpeg as a source for the image (other image formats are also possible, but this is off topic).

This is how it looks. The video is on the left, the canvas is in the middle and the new img is to the right (it is masked in blue because of the debugger inspection). It is important to notice the debugger, which shows that the source of the image is a base64 string.
[screenshot]

Now we can add a button to upload the base64 string to the server. The example uses ASP.NET PageMethods but obviously you can pick whatever is convenient for yourself. The client code:

<!DOCTYPE html>
<html>
<head>
    <title></title>
    <style type="text/css">
        .container {
            width: 320px;
            height: 240px;
            position: relative;
            border: 1px solid #d3d3d3;
            float: left;
        }

        .container video {
            width: 100%;
            height: 100%;
            position: absolute;
        }

        .container .photoArea {
            border: 2px dashed white;
            width: 140px;
            height: 190px;
            position: relative;
            margin: 0 auto;
            top: 40px;
        }

        canvas, img {
            float: left;
        }

        .controls {
            clear: both;
        }
    </style>
</head>
<body>
    <form runat="server">
        <asp:ScriptManager runat="server" EnablePageMethods="true"></asp:ScriptManager>
    </form>
    <div class="container">
        <video autoplay></video>
        <div class="photoArea"></div>
    </div>
    <canvas width='140' height='190' style="border: 1px solid #d3d3d3;"></canvas>
    <img width="140" height="190" />
    <div class="controls">
        <input type="button" value="start capture" onclick="startCapture()" />
        <input type="button" value="take snapshot" onclick="takePhoto()" />
        <input type="button" value="stop capture" onclick="stopCapture()" />
        <input type="button" value="upload" onclick="upload()" />
    </div>
    <script type="text/javascript">
        var localMediaStream = null;
        var video = document.querySelector('video');
        var canvas = document.querySelector('canvas');

        function upload() {
            var base64 = document.querySelector('img').src;
            PageMethods.Upload(base64,
                function () { /* TODO: do something for success */ },
                function (e) { console.log(e); }
            );
        }

        function takePhoto() {
            if (localMediaStream) {
                var ctx = canvas.getContext('2d');
                //ctx.drawImage(video, 0, 0, 320, 240); // original draw image
                //ctx.drawImage(video, 0, 0, 640, 480, 0, 0, 320, 240); // entire image

                //instead of
                //ctx.drawImage(video, 90, 40, 140, 190, 0, 0, 140, 190);

                // we double the source coordinates
                ctx.drawImage(video, 180, 80, 280, 380, 0, 0, 140, 190);
                document.querySelector('img').src = canvas.toDataURL('image/jpeg');
            }
        }

        navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia;
        window.URL = window.URL || window.webkitURL;

        function startCapture() {
            navigator.getUserMedia({ video: true }, function (stream) {
                video.src = window.URL.createObjectURL(stream);
                localMediaStream = stream;
            }, function (e) {
                console.log(e);
            });
        }

        function stopCapture() {
            video.pause();
            localMediaStream.stop();
        }
    </script>
</body>
</html>
  • Line 40: PageMethods support.
  • Line 60-61: Get the base64 string from the image and call the proxy Upload method.
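If you prefer not to use PageMethods on the client, a plain XMLHttpRequest POST works just as well. This is only a sketch, assuming a hypothetical generic handler at '/Upload.ashx' that reads a 'base64' form field (the server side of the PageMethods approach used in this post is shown next):

function uploadWithXhr() {
    // sketch: post the data URL to a hypothetical generic handler instead of a PageMethod
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/Upload.ashx');
    xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
    xhr.send('base64=' + encodeURIComponent(document.querySelector('img').src));
}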

The server side:

public partial class _Default : System.Web.UI.Page
{
    [WebMethod]
    public static void Upload(string base64)
    {
        var parts = base64.Split(new char[] { ',' }, 2);
        var bytes = Convert.FromBase64String(parts[1]);
        var path = HttpContext.Current.Server.MapPath(string.Format("~/{0}.jpg", DateTime.Now.Ticks));
        System.IO.File.WriteAllBytes(path, bytes);
    }
}
  • Line 6: As can be seen in the client debugger above, the base64 has a prefix. So we parse the string on the server side into two sections, separating the prefix metadata from the image data.
  • Line 7: Into bytes.
  • Lines 8-9: Save to a local photo. Replace with whatever you need, such as storing in the DB.

Addendum
There are several considerations you should think of:

  • What happens if the camera provides a source of different dimensions?
  • Browsers that do not support these capabilities.
  • The quality of the image. You can use other formats and get a better photo quality (at the price of a larger byte size).
  • You might need to clear the ‘src’ attribute of the video and/or img elements if you want to reset them before taking a new photo, ensuring a “fresh state” for these elements (see the sketch after this list).
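As a small sketch of the last two points (my addition, assuming the elements from the samples above): toDataURL accepts an optional quality argument for JPEG, and the img/canvas can be reset before taking a new photo.

function takeHigherQualityPhoto() {
    // optional second argument controls JPEG quality (0..1)
    document.querySelector('img').src = canvas.toDataURL('image/jpeg', 0.95);
}

function resetPhoto() {
    // clear the previous snapshot before taking a new one
    document.querySelector('img').removeAttribute('src');
    canvas.getContext('2d').clearRect(0, 0, canvas.width, canvas.height);
}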
 
Posted on 01/06/2014 in Software Development