Advantages of .Net Core, ASP.NET Core, EF Core

In this post I would like to list the advantages of .Net Core, ASP.NET Core & EF Core.

.Net Core Advantages

Open Source, allowing source code availability & customization.

Cross-Platform, running on Windows, Linux and Mac OS.

Lightweight through newly written libraries, with no dependency on Windows OS libraries.

High Performance in speed & efficiency.

Scalability through an architecture that supports Microservices & Containers.

Disadvantages: Third-party library support is limited, and it is not available for desktop applications.

ASP.NET Core Advantages

In addition to the open-source, cross-platform, lightweight advantages of .Net Core, following are the advantages of ASP.NET Core.

Unified story for building Web UIs & Web APIs.

Testability-friendly architecture through support for interfaces.

Built-in Dependency Injection supporting singleton, scoped & transient lifetimes.

Host-friendly, allowing hosting in IIS, Apache & other web servers.

Cloud-ready, enabled for Azure & AWS hosting.
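As a sketch of the built-in DI lifetimes mentioned above, registrations go in ConfigureServices; the service and implementation names here are hypothetical examples, not part of the framework:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // Singleton: one instance for the whole application lifetime
    services.AddSingleton<ICacheService, MemoryCacheService>();

    // Scoped: one instance per HTTP request
    services.AddScoped<IOrderService, OrderService>();

    // Transient: a new instance every time it is resolved
    services.AddTransient<IEmailSender, SmtpEmailSender>();
}
```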

EF Core Advantages

In addition to the open-source, cross-platform, lightweight advantages of .Net Core, following are the advantages of EF Core.

Batch Updates, which combine multiple statements into a single database command, thus reducing roundtrips & enhancing performance.
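As a sketch of batching (BlogContext and Blog are hypothetical types), EF Core groups the inserts below into batched commands on SaveChanges instead of issuing one roundtrip per row:

```csharp
using (var context = new BlogContext())
{
    for (int i = 0; i < 1000; i++)
    {
        context.Blogs.Add(new Blog { Name = "Blog " + i });
    }

    // EF Core sends these 1000 INSERTs as batched commands,
    // not 1000 separate roundtrips
    context.SaveChanges();
}
```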

Alternate Keys support along with the primary key.

In-Memory Provider for holding all entities in memory, useful for unit testing.
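A minimal unit-test sketch using the Microsoft.EntityFrameworkCore.InMemory package; BlogContext is a hypothetical DbContext with an options constructor, and Assert is assumed to come from xUnit:

```csharp
var options = new DbContextOptionsBuilder<BlogContext>()
    .UseInMemoryDatabase(databaseName: "TestDb")
    .Options;

// Arrange & act: save an entity against the in-memory store
using (var context = new BlogContext(options))
{
    context.Blogs.Add(new Blog { Name = "Test Blog" });
    context.SaveChanges();
}

// Assert: no real database is required
using (var context = new BlogContext(options))
{
    Assert.Equal(1, context.Blogs.Count());
}
```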

Shadow Properties, which are not defined in the entity class but are tracked by the Change Tracker.
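A shadow property is configured in OnModelCreating and accessed through the Change Tracker; LastUpdated here is a hypothetical example:

```csharp
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // LastUpdated exists only in the EF model, not on the Blog class
    modelBuilder.Entity<Blog>().Property<DateTime>("LastUpdated");
}

// Elsewhere: read or write the shadow property via the Change Tracker
context.Entry(blog).Property("LastUpdated").CurrentValue = DateTime.UtcNow;
```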

Mixing FromSql with LINQ, allowing a raw SELECT * FROM statement to be composed with LINQ operators such as OrderBy().
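A sketch of composing raw SQL with LINQ (Blogs is a hypothetical DbSet):

```csharp
var blogs = context.Blogs
    .FromSql("SELECT * FROM Blogs")
    .OrderBy(b => b.Name)   // composed into the generated SQL
    .ToList();
```

Note that later EF Core versions renamed this method to FromSqlRaw.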

Note

All the Core products are complete rewrites of their .Net Framework counterparts.


AutoMapper vs. Manual Mapper vs. Reflection

In this post I would like to compare the speed performance of:

  • AutoMapper
  • Reflection
  • Manual Mapper

AutoMapper

AutoMapper is a well-known framework for mapping properties between class instances.  It is very useful for DTO-to-Entity mapping & vice versa.

Reflection

Here I am writing my own Mapping code using .Net Reflection.

Manual Mapper

Here I will be using Manual code for assigning the property values.

Scenario

I am using an Entity class with 10 properties and creating 100K instances.  Let us see whether AutoMapper performs better than raw Reflection code.

Following is the Entity class.

public class Entity
{
     public string Property1 { get; set; }
     public string Property2 { get; set; }
     public string Property3 { get; set; }
     public string Property4 { get; set; }
     public string Property5 { get; set; }
     public string Property6 { get; set; }
     public string Property7 { get; set; }
     public string Property8 { get; set; }
     public string Property9 { get; set; }
     public string Property10 { get; set; }
}

Following is the Dto class.

public class Dto
{
     public string Property1 { get; set; }
     public string Property2 { get; set; }
     public string Property3 { get; set; }
     public string Property4 { get; set; }
     public string Property5 { get; set; }
     public string Property6 { get; set; }
     public string Property7 { get; set; }
     public string Property8 { get; set; }
     public string Property9 { get; set; }
     public string Property10 { get; set; }
}

Following is the AutoMapper NuGet package name.

image

Following is the Reflection code.

public class ReflectionMapper
{
    public static List<TResult> Map<TSource, TResult>(IList<TSource> sourceList) where TResult : new()
    {
        var result = new List<TResult>(sourceList.Count);

        PropertyDescriptorCollection psrc = TypeDescriptor.GetProperties(typeof(TSource));
        PropertyDescriptorCollection presult = TypeDescriptor.GetProperties(typeof(TResult));

        foreach (TSource item in sourceList)
        {
            TResult obj = new TResult();

            for (int iResult = 0; iResult < presult.Count; iResult++)
            {
                PropertyDescriptor propResult = presult[iResult];

                // Locate the source property with the same name
                for (int ix = 0; ix < psrc.Count; ix++)
                {
                    PropertyDescriptor propSource = psrc[ix];

                    if (propResult.Name == propSource.Name)
                    {
                        propResult.SetValue(obj, propSource.GetValue(item));
                    }
                }
            }

            result.Add(obj);
        }

        return result;
    }
}

Following is the Manual Mapping code.

private static List<Dto> ManualMap(IList<Entity> sourceList)
{
    var result = new List<Dto>(sourceList.Count);

    foreach (Entity item in sourceList)
    {
        // Assign each property explicitly
        result.Add(new Dto
        {
            Property1 = item.Property1,
            Property2 = item.Property2,
            Property3 = item.Property3,
            Property4 = item.Property4,
            Property5 = item.Property5,
            Property6 = item.Property6,
            Property7 = item.Property7,
            Property8 = item.Property8,
            Property9 = item.Property9,
            Property10 = item.Property10
        });
    }

    return result;
}

On Your Marks!

I have used a Stopwatch to get the elapsed milliseconds after each operation.  Following is the testing code.

Mapper.Initialize(cfg => cfg.CreateMap<Entity, Dto>());

IList<Entity> entities = new List<Entity>();

Stopwatch watch = Stopwatch.StartNew();

for (int i = 1; i <= 100000; i++)
{
    Entity entity = new Entity()
    {
        Property1 = "test value",
        Property2 = "test value",
        Property3 = "test value",
        Property4 = "test value",
        Property5 = "test value",
        Property6 = "test value",
        Property7 = "test value",
        Property8 = "test value",
        Property9 = "test value",
        Property10 = "test value",
    };
    entities.Add(entity);
}
Console.WriteLine("List Creation: " + watch.ElapsedMilliseconds.ToString());

// Restart resets the elapsed time so each mapper is timed separately
watch.Restart();
IList<Dto> dtosManual = ManualMap(entities);
Console.WriteLine("Manual Mapper: " + watch.ElapsedMilliseconds.ToString());

watch.Restart();
IList<Dto> dtos = Mapper.Map<IList<Dto>>(entities);
Console.WriteLine("Auto Mapper: " + watch.ElapsedMilliseconds.ToString());

watch.Restart();
IList<Dto> dtos2 = ReflectionMapper.Map<Entity, Dto>(entities);
Console.WriteLine("Reflection Mapper: " + watch.ElapsedMilliseconds.ToString());

Console.ReadKey(false);

Following are the results.

image

Summary

Manual mapping is the fastest.  It is recommended for performance-critical, high-volume mapping scenarios.

AutoMapper is next.  Its mapping speed is good, and the overhead is negligible on current high-powered machines, even with scalability in mind.

The Reflection code is the slowest.

Adding Gzip Compression to .Net Core

In this article I would like to explore the usage of GZip compression on the server side of a .Net Core web application.

Scenario

I am sending a JSON object list of 1,000 items.  In the ordinary response format, the response is 1 MB in size and takes 1 second to reach the client.

I am using an Azure S2 App Service for deployment & testing.

The Challenge

Following is the Chrome display of the URL Statistics.

image

The Solution

Now we will achieve the solution using ASP.NET Core Response Compression.  For this we need to add the library mentioned below (the Microsoft.AspNetCore.ResponseCompression NuGet package) to the application.

image

The Code

In the Startup.cs file, add the following lines of code.

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    services.AddResponseCompression(options =>
    {
        options.Providers.Add<GzipCompressionProvider>();
        options.MimeTypes =
            ResponseCompressionDefaults.MimeTypes.Concat(
                new[] { "text/json", "application/json" });
    });

    services.Configure<GzipCompressionProviderOptions>(options =>
    {
        options.Level = CompressionLevel.Optimal;
    });
}

// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    app.UseResponseCompression();

    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }

    app.UseMvc();
}

Now compile, deploy & retest with the Chrome browser.

You can see there is a 90% reduction in the size of the response!

The response time was also reduced by 70%.

image

The Client Code

HttpClientHandler handler = new HttpClientHandler()
{
    AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate
};
var client = new HttpClient(handler);
client.BaseAddress = new Uri(URL);
client.DefaultRequestHeaders.Accept.Clear();
client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

Stopwatch watch = Stopwatch.StartNew();

HttpResponseMessage response = client.GetAsync("api/kpi/list").Result;
response.EnsureSuccessStatusCode();

double ms = watch.ElapsedMilliseconds;

Console.WriteLine("Elapsed Milliseconds: " + ms.ToString());

Summary

The above code shows the components & code required to add JSON compression to your .Net Core application.

.Net Core Advantages

In this post I would like to list a few advantages of going with .Net Core. We can host .Net Core components even in Azure.

Platform Independence

.Net Core provides true platform independence.  This lets us host .Net Core applications on Linux & Mac operating systems.

Performance

.Net Core is sleek & provides better performance in benchmarks compared with .Net Framework and NodeJS stacks.

Open Source

.Net Core is open source, allowing us to read the code, modify it & avoid waiting for patches.

Future of .Net

After 17 years, Microsoft is coming up with a sleeker .Net which is superior & flexible compared with other programming platforms.  So the future of .Net seems to be .Net Core.  It is worth investing in it & starting to develop applications using .Net Core.