Archive for the ‘ASP.NET’ Category

Kendo grid error: kendo.all.min.js:12 Uncaught TypeError: e.slice is not a function

November 13, 2017

You may get the "Uncaught TypeError: e.slice is not a function" error message while binding a Kendo grid to data returned from a controller action method.

Following is the sample code for the view and the controller action.

View:

$(function () {
    $("#grid").kendoGrid({
        height: 400,
        dataSource: {
            serverPaging: true,
            serverFiltering: true,
            serverSorting: true,
            pageSize: 10,
            transport: {
                read: "Employee/Read",
                contentType: "application/json",
                type: "POST"
            },
        },
        schema: {
            data: "Data",
            total: "Total",
        },
        columns: [
            { field: "Salary", format: "{0:c}", width: "150px" },
            { field: "EmployeeName", width: "150px" },
            { field: "SalaryColor", width: "100px" },
            { command: "destroy", title: "Delete", width: "110px" }
        ],
        editable: true, // enable editing
        pageable: true,
        sortable: true,
        filterable: true,
        toolbar: ["create", "save", "cancel"], // specify toolbar commands
        parameterMap: function (options) {
            return kendo.stringify(options);
        }
    });
});

Controller Read action:

public ActionResult Read(int take, int skip, IEnumerable<Sort> sort, Kendo.DynamicLinq.Filter filter)
{
    SalesERPDAL salesDal = new SalesERPDAL();

    var result = salesDal.Employees.OrderBy(p => p.FirstName)
        .Select(p => new EmployeeViewModel
        {
            EmployeeName = p.FirstName,
            Salary = "5000",
            SalaryColor = "yellow"
        }).ToDataSourceResult(take, skip, sort, filter);

    return Json(result, JsonRequestBehavior.AllowGet);
}

The problem in the above code is that we are passing the whole result object when only the Data part of the result should be passed (the grid ends up calling .slice on a response that is not an array):

return Json(result.Data, JsonRequestBehavior.AllowGet);

Hope this helps!

Categories: ASP.NET

AngularJS with ASP.NET Web API: $http POST returning XMLHttpRequest cannot load: Response for preflight has invalid HTTP status code 405

November 7, 2017

When trying to POST JSON to an ASP.NET Web API server using $http, it returns the following error:

Response for preflight has invalid HTTP status code 405

OR

MVC Web API: No ‘Access-Control-Allow-Origin’ header is present on the requested resource

Following is the solution for the above problem.

CORS

I installed the CORS package in my project using the NuGet command line:

Install-Package Microsoft.AspNet.WebApi.Cors

and added the following code in the WebApiConfig.cs file in the App_Start folder:

// In WebApiConfig.Register – requires: using System.Web.Http.Cors;
var enableCorsAttribute = new EnableCorsAttribute("*",
                          "Origin, Content-Type, Accept",
                          "GET, PUT, POST, DELETE, OPTIONS");
config.EnableCors(enableCorsAttribute);

and removed the following entries from web.config, so that the CORS headers are not added twice:

<remove name="X-Powered-By" />
<add name="Access-Control-Allow-Origin" value="*" />
<add name="Access-Control-Allow-Headers" value="Accept, Content-Type, Origin" />
<add name="Access-Control-Allow-Methods" value="GET, PUT, POST, DELETE, OPTIONS" />

EF code first – Model compatibility cannot be checked because the database does not contain model metadata

October 10, 2017

This error suggests that the migration history table is out of sync (even if your data isn't); that table has been part of the database schema since EF 4.3 (under system tables).

There could be many reasons and ways to experience this error, but most of the time...

The problematic part is usually some combination of manually backing up/restoring the full database while code changes happen alongside.

In short, even if the databases are the same, the migration table data might not be, and the model-hash comparison may fail (a full restore sounds good enough, but you have 'two sides').

What works for me is to use

Update-Database -Script

That creates a script with the 'migration difference', which you can manually apply as a SQL script on the target server database (and you should get the right migration table rows inserted, etc.).
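
If you need the script generated from the very first migration rather than from the current database state, the same command also accepts an explicit source migration (this is standard EF migrations syntax):

Update-Database -Script -SourceMigration: $InitialDatabase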

If that still doesn't work, you can do two things...

a) remove the migration table (on the target – under system tables) – as per the comments on http://blogs.msdn.com/b/adonet/archive/2012/02/09/ef-4-3-automatic-migrations-walkthrough.aspx – that should fall back to the previous behavior, and if you're certain that your databases are the same, it's just going to 'trust you',

b) as a last resort – make an Update-Database -Script of the full schema (e.g. by initializing an empty database, which should force a 'full script'), find the INSERT INTO [__MigrationHistory] records, run just those inserts against the target database, and make sure that your database and code match.

That should make things run in sync again.

(Disclaimer: this is not bulletproof for all cases; you may need to try a few things given your local scenario, but it should get you in sync.)

Alternatively, the following may work.

I found the code works after changing

static LaundryShopContext()
{
    Database.SetInitializer<LaundryShopContext>(
        new DropCreateDatabaseIfModelChanges<LaundryShopContext>());
}
into

static LaundryShopContext()
{
    Database.SetInitializer<LaundryShopContext>(
        new DropCreateDatabaseAlways<LaundryShopContext>());
}

Keep in mind that DropCreateDatabaseAlways does exactly what its name says: the database is dropped and recreated every time the initializer runs, so any existing data is lost.

GridView: FindControl in HeaderTemplate/ItemTemplate

September 19, 2017

How do you find an ItemTemplate control in a GridView? You need a reference to the row that contains the control; for example, for a checkbox with ID "chkItem" declared in an ItemTemplate (where rowIndex identifies the data row you are interested in):

Dim chkItem As CheckBox = DirectCast(gridView1.Rows(rowIndex).FindControl("chkItem"), CheckBox)

How do you find a control in the header template of a GridView?

For example, suppose you want to find the checkbox in the header template of this GridView:

<asp:GridView ID="gridView1" runat="server" AutoGenerateColumns="False">
    <Columns>
        <asp:TemplateField>
            <HeaderTemplate>
                <asp:CheckBox ID="chkHeader" runat="server" />
            </HeaderTemplate>
        </asp:TemplateField>
    </Columns>
</asp:GridView>

Solution:

Find the header control using the following line of code.

VB.NET Code:

Dim chkHeader As CheckBox = DirectCast(gridView1.HeaderRow.FindControl("chkHeader"), CheckBox)

C# Code:

CheckBox chkHeader = (CheckBox)gridView1.HeaderRow.FindControl("chkHeader");
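
For controls inside an ItemTemplate, a common approach is to grab them per row in the grid's RowDataBound event. A small sketch in C#, assuming a checkbox with ID "chkItem" declared in an ItemTemplate (wire it up with OnRowDataBound="gridView1_RowDataBound" in the markup):

protected void gridView1_RowDataBound(object sender, GridViewRowEventArgs e)
{
    // Header and footer rows do not contain ItemTemplate controls.
    if (e.Row.RowType == DataControlRowType.DataRow)
    {
        CheckBox chkItem = (CheckBox)e.Row.FindControl("chkItem");
        // ... use chkItem here ...
    }
}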

Hope this helps!

Microservices in Practice: From Architecture to Deployment

August 30, 2017

“Microservices” is one of the most popular buzzwords in the field of software architecture. There are quite a lot of learning materials on the fundamentals and benefits of microservices, but there are very few resources on how you can use microservices in real-world enterprise scenarios.

In this post, I’m planning to cover the key architectural concepts of the Microservices Architecture (MSA) and how you can use those architectural principles in practice.

Monolithic Architecture

Enterprise software applications are designed to facilitate numerous business requirements. Hence, a given software application offers hundreds of functionalities, and all such functionalities are piled into a single monolithic application. For example, ERPs, CRMs, and various other software systems are built as monoliths with several hundred functionalities. The deployment, troubleshooting, scaling, and upgrading of such monstrous software applications is a nightmare.

Service Oriented Architecture (SOA) was designed to overcome some of the aforementioned limitations by introducing the concept of a ‘service’, which is an aggregation and grouping of similar functionalities offered by an application. Hence, with SOA, a software application is designed as a combination of ‘coarse-grained’ services. However, in SOA, the scope of a service is very broad. That leads to complex and mammoth services with several dozen operations (functionalities) along with complex message formats and standards (e.g. the WS-* standards).

Figure 1: Monolithic Architecture

In most cases, services in SOA are independent of each other, yet they are deployed in the same runtime along with all the other services (just think of having several web applications deployed into the same Tomcat instance). Similar to monolithic software applications, these services have a habit of growing over time by accumulating various functionalities. Literally, that turns those applications into monolithic globs which are no different from conventional monolithic applications such as ERPs. Figure 1 shows a retail software application which comprises multiple services. All these services are deployed into the same application runtime, so it is a very good example of a monolithic architecture. Here are some of the characteristics of such applications based on the monolithic architecture:

  • Monolithic applications are designed, developed, and deployed as a single unit.
  • Monolithic applications are overwhelmingly complex, which leads to nightmares in maintaining, upgrading, and adding new features.
  • It is hard to practice agile development and delivery methodologies with monolithic architecture.
  • The entire application has to be redeployed in order to update any part of it.
  • Scaling: the application has to be scaled as a single unit, and it is difficult to scale components with conflicting resource requirements (e.g. one service requires more CPU while another requires more memory).
  • Reliability: one unstable service can bring the whole application down.
  • Hard to innovate: it is really difficult to adopt new technologies and frameworks, as all the functionalities have to be built on homogeneous technologies/frameworks.

These characteristics of Monolithic Architecture have led to the Microservices Architecture.

Microservices Architecture

The foundation of microservices architecture (MSA) is developing a single application as a suite of small, independent services, each running in its own process and developed and deployed independently.

In most definitions of microservices architecture, it is explained as the process of segregating the services available in the monolith into a set of independent services. However, in my opinion, microservices architecture is not just about splitting the services available in the monolith into independent services.

The key idea is that by looking at the functionalities offered from the monolith, we can identify the required business capabilities. Then those business capabilities can be implemented as fully independent, fine-grained, and self-contained (micro)services. They might be implemented on top of different technology stacks and each service is addressing a very specific and limited business scope.

Therefore, the online retail system scenario that we explained above can be realized with microservices architecture as depicted in figure 2. With the microservice architecture, the retail software application is implemented as a suite of microservices. As you can see in figure 2, based on the business requirements, there is an additional microservice created beyond the original set of services in the monolith. So, it is quite obvious that using microservices architecture is something beyond merely splitting the services in the monolith.

Figure 2: Microservice Architecture

So, let’s dive deep into the key architectural principles of microservices and more importantly, let’s focus on how they can be used in practice.

Designing Microservices: Size, Scope, and Capabilities

You may be building your software application from scratch with microservices architecture, or you may be converting existing applications/services into microservices. Either way, it is quite important that you properly decide the size, scope, and capabilities of the microservices. That is probably the hardest thing you initially encounter when implementing microservices architecture in practice.

Let’s discuss some of the key practical concerns and misconceptions related to the size, scope, and capabilities of microservices.

  • Lines of code/team size are lousy metrics: there have been several discussions on deciding the size of a microservice based on the lines of code of its implementation or its team's size (i.e. the two-pizza team). However, these are considered very impractical and lousy metrics, because we can still develop services with little code or with a two-pizza team while totally violating the microservice architectural principles.
  • ‘Micro’ is a somewhat misleading term: most developers tend to think that they should make the service as small as possible. This is a misinterpretation.
  • In the SOA context, services are often implemented as monolithic globs with support for several dozen operations/functionalities. So, having SOA-like services and rebranding them as microservices is not going to give you any of the benefits of microservices architecture.

So, then how should we properly design services in Microservices Architecture?

Guidelines for Designing Microservices

  • Single Responsibility Principle (SRP): having a limited and focused business scope for a microservice helps us achieve agility in the development and delivery of services.
  • During the design phase of the microservices, we should find their boundaries and align them with the business capabilities (also known as bounded contexts in Domain-Driven Design).
  • Make sure the microservices design ensures the agile/independent development and deployment of the service.
  • Our focus should be on the scope of the microservice, not on making the service smaller. The (right) size of a service is whatever is required to facilitate a given business capability.
  • Unlike services in SOA, a given microservice should have very few operations/functionalities and a simple message format.
  • It is often a good practice to start with relatively broad service boundaries and refactor to smaller ones (based on business requirements) as time goes on.

In our retail use case, you can find that we have split the functionalities of its monolith into four different microservices, namely ‘inventory’, ‘accounting’, ‘shipping’, and ‘store’. They are addressing a limited but focused business scope so that each service is fully decoupled from each other and ensures the agility in development and deployment.

Messaging in Microservices

In monolithic applications, business functionalities of different components are invoked using function calls or language-level method calls. In SOA, this shifted towards much more loosely coupled web-service-level messaging, primarily based on SOAP on top of different protocols such as HTTP and JMS. Web services with several dozen operations and complex message schemas were a key resistive force against the popularity of web services. For microservices architecture, a simple and lightweight messaging mechanism is required.

Synchronous Messaging – REST, Thrift

For synchronous messaging (the client expects a timely response from the service and waits till it gets it) in microservices architecture, REST is the unanimous choice, as it provides a simple messaging style implemented with HTTP request-response, based on a resource API style. Therefore, most microservices implementations use HTTP along with resource-API-based styles (every functionality is represented as a resource, with operations carried out on top of those resources).

Figure 3: Using REST interfaces to expose microservices

Thrift (in which you can define an interface definition for your microservice) is used as an alternative to REST/HTTP synchronous messaging.
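
To make the resource API style concrete, here is a minimal sketch of what one endpoint of the ‘inventory’ microservice could look like, assuming ASP.NET Web API (the controller and DTO names are made up for illustration):

using System.Web.Http;

// Every functionality is modelled as an operation on a resource.
public class ProductsController : ApiController
{
    // GET api/products/42 – read a single resource
    public IHttpActionResult Get(int id)
    {
        // Look the product up in this service's own data store (omitted here).
        return Ok(new ProductDto { Id = id, Name = "Sample product" });
    }

    // POST api/products – create a new resource
    public IHttpActionResult Post([FromBody] ProductDto product)
    {
        // Persist the product (omitted), then return 201 Created.
        return Created("api/products/" + product.Id, product);
    }
}

public class ProductDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}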

Asynchronous Messaging – AMQP, STOMP, MQTT

For some microservices scenarios, it is required to use asynchronous messaging techniques (the client doesn't expect a response immediately, or does not expect a response at all). In such scenarios, asynchronous messaging protocols such as AMQP, STOMP, or MQTT are widely used.

Message Formats – JSON, XML, Thrift, ProtoBuf, Avro

Deciding on the best-suited message format for microservices is another key factor. Traditional monolithic applications use complex binary formats, while SOA/web-services-based applications use text messages based on complex message formats (SOAP) and schemas (XSD). Most microservices-based applications use simple text-based message formats such as JSON and XML on top of the HTTP resource API style. In cases where we need binary message formats (text messages can become verbose in some use cases), microservices can leverage binary message formats such as binary Thrift, ProtoBuf, or Avro.

Service Contracts – Defining the Service Interfaces – Swagger, RAML, Thrift IDL

When you have a business capability implemented as a service, you need to define and publish the service contract. In traditional monolithic applications, we barely find such a feature for defining the business capabilities of an application. In the SOA/web services world, WSDL is used to define the service contract but, as we all know, WSDL is not the ideal solution for defining a microservices contract, as it is insanely complex and tightly coupled to SOAP.

Since we build microservices on top of REST architectural style, we can use the same REST API definition techniques to define the contract of the microservices. Therefore, microservices use the standard REST API definition languages such as Swagger and RAML to define the service contracts.

For other microservices implementations which are not based on HTTP/REST (such as Thrift), we can use protocol-level ‘Interface Definition Languages (IDL)’ (e.g. Thrift IDL).

Integrating Microservices (Inter-service/process Communication)

In microservices architecture, software applications are built as a suite of independent services. So, in order to realize a business use case, communication structures are needed between the different microservices/processes. That is why inter-service/process communication between microservices is such a vital aspect.

In SOA implementations, the inter-service communication between services is facilitated by an Enterprise Service Bus (ESB), and most of the business logic resides in that intermediate layer (message routing, transformation, and orchestration). However, microservices architecture promotes eliminating the central message bus/ESB and moving the ‘smart-ness’ or business logic into the services and clients (known as ‘smart endpoints’).

Since microservices use standard protocols such as HTTP and formats such as JSON, the requirement of integrating with a disparate protocol is minimal when it comes to communication among microservices. An alternative approach in microservice communication is to use a lightweight message bus or gateway with minimal routing capabilities, acting just as a ‘dumb pipe’ with no business logic implemented in the gateway. Based on these styles, several communication patterns have emerged in microservices architecture.

Point-to-point Style – Invoking Services Directly

In the point-to-point style, the entirety of the message routing logic resides in each endpoint and the services communicate directly. Each microservice exposes a REST API, and a given microservice or an external client can invoke another microservice through its REST API.

Figure 4: Inter-service communication with point-to-point connectivity.

Obviously, this model works for relatively simple microservices-based applications, but as the number of services increases it becomes overwhelmingly complex. After all, that is the exact same reason for using an ESB in traditional SOA implementations: to get rid of messy point-to-point integration links. Let's summarize the key drawbacks of the point-to-point style for microservice communication:

  • Non-functional requirements such as end-user authentication, throttling, monitoring, etc. have to be implemented at each and every microservice level.
  • As a result of duplicating common functionalities, each microservice implementation can become complex.
  • There is no control at all over the communication between the services and clients (even for monitoring, tracing, or filtering).
  • The direct communication style is often considered a microservice anti-pattern for large-scale microservice implementations.

Therefore, for complex microservices use cases, rather than having point-to-point connectivity or a central ESB, we can have a lightweight central messaging bus which provides an abstraction layer for the microservices and can be used to implement various non-functional capabilities. This style is known as the API Gateway style.

API-Gateway Style

The key idea behind the API Gateway style is to use a lightweight message gateway as the main entry point for all clients/consumers and to implement the common non-functional requirements at the gateway level. In general, an API gateway allows you to consume a managed API over REST/HTTP. Therefore, we can expose our business functionalities, which are implemented as microservices, through the API-GW as managed APIs. In fact, this is a combination of microservices architecture and API management, which gives you the best of both worlds.

Figure 5: All microservices are exposed through an API-GW.

In our retail business scenario, as depicted in figure 5, all the microservices are exposed through an API-GW and that is the single entry point for all the clients. If a microservice wants to consume another microservice that also needs to be done through the API-GW.

API-GW style gives you the following advantages:

  • Ability to provide the required abstractions at the gateway level for the existing microservices. For example, rather than providing a one-size-fits-all API, the API gateway can expose a different API for each client.
  • Lightweight message routing/transformations at the gateway level.
  • A central place to apply non-functional capabilities such as security, monitoring, and throttling.
  • With the use of the API-GW pattern, the microservices become even more lightweight, as all the non-functional requirements are implemented at the gateway level.

The API-GW style could well be the most widely used pattern in most microservice implementations.

Message Broker style

Microservices can be integrated using asynchronous messaging scenarios such as one-way requests and publish-subscribe messaging, using queues or topics. A given microservice can be the message producer and asynchronously send messages to a queue or topic. The consuming microservice can then consume messages from that queue or topic. This style decouples message producers from message consumers, and the intermediate message broker buffers messages until the consumer is able to process them. Producer microservices are completely unaware of the consumer microservices.

Figure 6: Asynchronous messaging based integration using pub-sub.

The communication between consumers and producers is facilitated through a message broker based on asynchronous messaging standards such as AMQP, MQTT, etc.
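
As an illustration, here is a minimal producer sketch, assuming the RabbitMQ.Client NuGet package and a made-up ‘orders’ queue (the consuming microservice would read from the same queue):

using System.Text;
using RabbitMQ.Client;

class OrderPublisher
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // Declare the queue that the consuming microservice reads from.
            channel.QueueDeclare(queue: "orders", durable: true,
                                 exclusive: false, autoDelete: false, arguments: null);

            var body = Encoding.UTF8.GetBytes("{\"orderId\": 1}");

            // Fire-and-forget: the producer knows nothing about the consumers.
            channel.BasicPublish(exchange: "", routingKey: "orders",
                                 basicProperties: null, body: body);
        }
    }
}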

Decentralized Data Management

In monolithic architecture, the application stores data in a single, centralized database to implement the various functionalities/capabilities of the application.

Figure 7: Monolithic application uses a centralized database to implement all its features.

In microservices architecture, the functionalities are dispersed across multiple microservices and, if we use the same centralized database, the microservices will no longer be independent of each other (for instance, if the database schema is changed by a given microservice, that will break several other services). Therefore, each microservice has to have its own database.

Figure 8: Each microservice has its own private database, and microservices can't directly access databases owned by other microservices.

Here are the key aspects of implementing decentralized data management in microservices architecture.

  • Each microservice can have a private database to persist the data required to implement the business functionality it offers.
  • A given microservice can only access its own dedicated private database, not the databases of other microservices.
  • In some business scenarios, you might have to update several databases for a single transaction. In such scenarios, the databases of other microservices should be updated through their service APIs only (direct database access is not allowed).

Decentralized data management gives you fully decoupled microservices and the liberty of choosing disparate data management techniques (SQL or NoSQL, and different database management systems for each service). However, for complex transactional use cases that involve multiple microservices, the transactional behavior has to be implemented using the APIs offered by each service, with the logic residing either at the client or at the intermediary (GW) level.

Decentralized Governance

Microservices architecture favors decentralized governance.

In general, ‘governance’ means establishing and enforcing how people and solutions work together to achieve organizational objectives. In the context of SOA, governance guides the development of reusable services, establishing how services will be designed and developed and how those services will change over time. It establishes agreements between the providers of services and the consumers of those services, telling the consumers what they can expect and the providers what they are obligated to provide. In SOA governance, two types of governance are in common use:

  • Design-time governance – defining and controlling the service creations, design, and implementation of service policies
  • Run-time governance – the ability to enforce service policies during execution

So, what does governance really mean in the microservices context? In microservices architecture, the microservices are built as fully independent and decoupled services using a variety of technologies and platforms, so there is no need to define a common standard for service design and development. We can summarize the decentralized governance capabilities of microservices as follows:

  • In microservices architecture, there is no requirement to have centralized design-time governance.
  • Each microservice can make its own decisions about its design and implementation.
  • Microservices architecture fosters the sharing of common/reusable services.
  • Some run-time governance aspects, such as SLAs, throttling, monitoring, common security requirements, and service discovery, may be implemented at the API-GW level.

Service Registry and Service Discovery

In microservices architecture, the number of microservices that you need to deal with is quite high, and their locations change dynamically owing to the rapid and agile development/deployment nature of microservices. Therefore, you need to find the location of a microservice at runtime. The solution to this problem is to use a service registry.

Service Registry

The service registry holds the microservice instances and their locations. Microservice instances are registered with the service registry on startup and deregistered on shutdown. Consumers can find the available microservices and their locations through the service registry.

Service Discovery

To find the available microservices and their locations, we need a service discovery mechanism. There are two types of service discovery mechanisms: client-side discovery and server-side discovery. Let's have a closer look at each of them.

Client-side Discovery — In this approach, the client or the API-GW obtains the location of a service instance by querying a Service Registry.

Figure 9 – Client-side discovery

Here, the client/API-GW has to implement the service discovery logic by calling the service registry component.
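
A tiny hypothetical sketch of that lookup, assuming the registry exposes a REST endpoint that returns the base address of a healthy instance (every name and URL here is made up):

using System.Net.Http;
using System.Threading.Tasks;

class ServiceRegistryClient
{
    private static readonly HttpClient http = new HttpClient();

    // Ask the registry where an instance of the given service lives,
    // e.g. it may answer "http://10.0.0.5:8080".
    public static Task<string> ResolveAsync(string serviceName)
    {
        return http.GetStringAsync("http://registry.local/services/" + serviceName);
    }
}

// Usage: resolve the 'inventory' service, then call it directly.
// string baseUrl = await ServiceRegistryClient.ResolveAsync("inventory");
// string product = await http.GetStringAsync(baseUrl + "/api/products/42");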

Server-side Discovery — With this approach, the client/API-GW sends the request to a component (such as a load balancer) that runs at a well-known location. That component calls the service registry and determines the absolute location of the microservice.

Figure 10 – Server-side discovery

Microservices deployment solutions such as Kubernetes (http://kubernetes.io/v1.1/docs/user-guide/services.html) offer server-side discovery mechanisms.

Deployment

When it comes to microservices architecture, the deployment of microservices plays a critical role and has the following key requirements:

  • Ability to deploy/un-deploy independently of other microservices.
  • Must be able to scale at the level of each microservice (a given service may get more traffic than other services).
  • Building and deploying microservices quickly.
  • Failure in one microservice must not affect any of the other services.

Docker (an open source engine that lets developers and system administrators deploy self-sufficient application containers in Linux environments) provides a great way to deploy microservices that addresses the above requirements. The key steps involved are as follows:

  • Package the microservice as a (Docker) container image.
  • Deploy each service instance as a container.
  • Scaling is done by changing the number of container instances.
  • Building, deploying, and starting a microservice will be much faster, as we are using Docker containers (which are much faster than regular VMs).

Kubernetes extends Docker's capabilities by allowing us to manage a cluster of Linux containers as a single system: managing and running Docker containers across multiple hosts, and offering co-location of containers, service discovery, and replication control. As you can see, most of these features are essential in our microservices context too. Hence, using Kubernetes (on top of Docker) for microservices deployment has become an extremely powerful approach, especially for large-scale microservices deployments.

Figure 11: Building and deploying microservices as containers.

Figure 11 shows an overview of the deployment of the microservices of the retail application. Each microservice instance is deployed as a container, with two containers per host. You can arbitrarily change the number of containers that you run on a given host.

Security

Securing microservices is quite a common requirement when you use microservices in real-world scenarios. Before jumping into microservices security, let's have a quick look at how we normally implement security at the monolithic application level.

  • In a typical monolithic application, security is about finding out who the caller is, what the caller can do, and how we propagate that information.
  • This is usually implemented in a common security component at the beginning of the request-handling chain, and that component populates the required information with the use of an underlying user repository (or user store).

So, can we directly translate this pattern into microservices architecture? Yes, but that requires a security component implemented at each microservice level, talking to a centralized/shared user repository to retrieve the required information. That is a very tedious approach to solving the microservices security problem. Instead, we can leverage widely used API security standards such as OAuth2 and OpenID Connect to find a better solution to our microservices security problem. Before we dive deep into that, let me just summarize the purpose of each standard and how we can use them.

  • OAuth2 is an access delegation protocol. The client authenticates with the authorization server and gets an opaque token known as an ‘access token’. An access token carries zero information about the user/client; it only holds a reference to user information that can be retrieved only by the authorization server. Hence, this is known as a ‘by-reference token’, and it is safe to use even on the public network/internet.
  • OpenID Connect behaves similarly to OAuth2 but, in addition to the access token, the authorization server issues an ID token which contains information about the user. This is often implemented as a JWT (JSON Web Token) signed by the authorization server, which ensures the trust between the authorization server and the client. The JWT is therefore known as a ‘by-value token’, as it contains information about the user, and it is obviously not safe to use outside the internal network.

Now, let's see how we can use these standards to secure the microservices in our retail example.

Figure 12: Microservice security with OAuth2 and OpenID Connect

As shown in figure 12, these are the key steps involved in implementing microservices security:

  • Leave authentication to the OAuth2/OpenID Connect server (the authorization server), so that microservices grant access only when someone has the right to use the data.
  • Use the API-GW style, in which there is a single entry point for all client requests.
  • The client connects to the authorization server and obtains an access token (by-reference token), then sends the access token to the API-GW along with the request.
  • Token translation at the gateway – the API-GW extracts the access token and sends it to the authorization server to retrieve the JWT (by-value token).
  • The GW then passes this JWT along with the request to the microservices layer.
  • The JWT contains the necessary information to help with storing user sessions, etc. If each service can understand a JSON Web Token, then you have distributed your identity mechanism, allowing you to transport identity throughout your system.
  • At each microservice level, we can have a component that processes the JWT, which is a fairly trivial implementation (a sketch of such a component follows this list).
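
As a rough illustration of that last step, here is a sketch of JWT validation at the microservice level, assuming the System.IdentityModel.Tokens.Jwt NuGet package (the issuer, audience, and key are placeholders for your authorization server's values):

using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using Microsoft.IdentityModel.Tokens;

class JwtValidator
{
    // Validate the by-value token that the gateway forwarded to us
    // and turn it into the caller's identity.
    public static ClaimsPrincipal Validate(string jwt, SecurityKey signingKey)
    {
        var parameters = new TokenValidationParameters
        {
            ValidIssuer = "https://auth.example.com",  // the authorization server
            ValidAudience = "inventory-service",       // this microservice
            IssuerSigningKey = signingKey
        };

        SecurityToken validatedToken;
        return new JwtSecurityTokenHandler().ValidateToken(jwt, parameters, out validatedToken);
    }
}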

Transactions

What about transaction support in microservices? In fact, supporting distributed transactions across multiple microservices is an exceptionally complex task. The microservice architecture itself encourages transaction-less coordination between services.

The idea is that a given service is fully self-contained and based on the single responsibility principle. The need for distributed transactions across multiple microservices is often a symptom of a design flaw in the microservice architecture, and it can usually be sorted out by refactoring the scopes of the microservices. However, if there is a mandatory requirement to have distributed transactions across multiple services, such scenarios can be realized by introducing ‘compensating operations’ at each microservice level. The key idea is that, since a given microservice is based on the single responsibility principle, if it fails to execute a given operation, we can consider that a failure of the business transaction as a whole. Then all the other (upstream) operations have to be undone by invoking the respective compensating operations of those microservices.

Design for Failures

Microservice architecture introduces a dispersed set of services and, compared to a monolithic design, that increases the possibility of failures at each service level. A given microservice can fail due to network issues, unavailability of underlying resources, etc. An unavailable or unresponsive microservice should not bring the whole microservices-based application down. Thus, microservices should be fault-tolerant and able to recover where possible, and the client has to handle failures gracefully.

Also, since services can fail at any time, it’s important to be able to detect (real-time monitoring) the failures quickly and, if possible, automatically restore the services.

There are several commonly used patterns for handling errors in the microservices context.

Circuit Breaker

When you make an external call to a microservice, you configure a fault monitor component with each invocation, and when the failures reach a certain threshold, that component stops any further invocations of the service (it trips the circuit). After a configurable period in the open state, the circuit is switched back to the closed state.

This pattern is quite useful for avoiding unnecessary resource consumption and request delays due to timeouts, and it also gives us a chance to monitor the system (based on the active open-circuit states).
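
Here is a deliberately simplified, single-threaded sketch of the idea in C# (in production you would typically reach for a library such as Polly rather than rolling your own):

using System;
using System.Threading.Tasks;

class CircuitBreaker
{
    private readonly int threshold;          // failures before tripping
    private readonly TimeSpan openDuration;  // how long to fail fast
    private int failureCount;
    private DateTime openedAt;
    private bool open;

    public CircuitBreaker(int threshold, TimeSpan openDuration)
    {
        this.threshold = threshold;
        this.openDuration = openDuration;
    }

    public async Task<T> InvokeAsync<T>(Func<Task<T>> call)
    {
        // While the circuit is open, fail fast instead of calling the service.
        if (open && DateTime.UtcNow - openedAt < openDuration)
            throw new InvalidOperationException("Circuit is open - failing fast.");

        open = false;
        try
        {
            T result = await call();
            failureCount = 0;  // a success resets the fault monitor
            return result;
        }
        catch
        {
            if (++failureCount >= threshold)
            {
                open = true;   // trip the circuit
                openedAt = DateTime.UtcNow;
            }
            throw;
        }
    }
}

// Usage, e.g. at the gateway:
// var breaker = new CircuitBreaker(threshold: 5, openDuration: TimeSpan.FromSeconds(30));
// var response = await breaker.InvokeAsync(() => httpClient.GetStringAsync(serviceUrl));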

Bulkhead

As a microservices application comprises a number of microservices, a failure in one part of the microservices-based application should not affect the rest of the application. The bulkhead pattern is about isolating different parts of your application so that a failure of a service in one part does not affect any of the other services.
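
A minimal sketch of one way to implement a bulkhead, giving each downstream dependency its own bounded pool of concurrent calls (here via a semaphore; the limits are illustrative):

using System;
using System.Threading;
using System.Threading.Tasks;

class Bulkhead
{
    private readonly SemaphoreSlim slots;

    public Bulkhead(int maxConcurrentCalls)
    {
        slots = new SemaphoreSlim(maxConcurrentCalls);
    }

    public async Task<T> ExecuteAsync<T>(Func<Task<T>> call)
    {
        // Reject immediately when the compartment is full, so one slow
        // service cannot exhaust the resources used to call the others.
        if (!await slots.WaitAsync(TimeSpan.Zero))
            throw new InvalidOperationException("Bulkhead full - rejecting call.");
        try { return await call(); }
        finally { slots.Release(); }
    }
}

// e.g. at the gateway: one bulkhead per backing microservice
// var inventoryBulkhead = new Bulkhead(maxConcurrentCalls: 10);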

Timeout

The timeout pattern is a mechanism that allows you to stop waiting for a response from a microservice when you think it won't come. Here you can configure the time interval that you wish to wait. For example:
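
With HttpClient, the simplest form of the pattern is its Timeout property. A small sketch (the URL and the two-second limit are just examples):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class ShippingClient
{
    // Give up if the 'shipping' service does not answer within 2 seconds.
    private static readonly HttpClient http =
        new HttpClient { Timeout = TimeSpan.FromSeconds(2) };

    public static async Task<string> GetStatusAsync()
    {
        try
        {
            return await http.GetStringAsync("http://shipping.local/api/status");
        }
        catch (TaskCanceledException)
        {
            // HttpClient reports a timeout as a cancelled task.
            return "Shipping service timed out - applying fallback.";
        }
    }
}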

So, where and how do we use these patterns with microservices? In most cases, these patterns are applicable at the gateway level, which means that when the microservices are not available or not responding, the gateway can decide whether to send the request to the microservice, using circuit breakers or the timeout pattern. It is also quite important to have patterns such as bulkhead implemented at the gateway level: since it is the single entry point for all client requests, a failure in a given service should not affect the invocation of the other microservices.

In addition, the gateway can be used as the central point for obtaining the status of, and monitoring, each microservice, as every microservice is invoked through the gateway.

Categories: ASP.NET

What is Knockout.js and how is it different from jQuery?

August 29, 2017

Knockout.js is a JavaScript library that allows us to bind HTML elements to any data model. It provides a simple two-way data binding mechanism between your data model and the UI, meaning any changes to the data model are automatically reflected in the DOM (UI) and any changes to the DOM are automatically reflected in the data model.

Why Knockout

Consider a simple example: a shopping-cart interface for an e-commerce website. When the user deletes an item from the shopping cart, you have to remove the item from the underlying data model, remove the associated HTML element from the shopping cart, and update the total price. If you are not using Knockout, you have to write event handlers and listeners for this dependency tracking yourself.

Knockout.js provides a simple and convenient way to manage this kind of complex, data-driven interface. Instead of tracking manually, each element of the HTML page relies on the affected data, and the DOM is automatically updated when any change to the data model occurs.

Knockout Features

  1. Declarative Bindings

    This allows you to bind elements of the UI to the data model in a simple and convenient way. When you use JavaScript to manipulate the DOM, you may end up with broken code if you later change the DOM hierarchy or element IDs. With declarative bindings, even if you change the DOM, all bound elements stay connected. You bind data to a DOM element by simply including a data-bind attribute on it.

  2. Dependency Tracking

    This automatically updates the right parts of your UI whenever your data model changes. It is achieved by the two-way bindings and a special type of variable called observables. You don't have to worry about adding event handlers and listeners for dependency tracking.

  3. Templating

    This comes in handy when your application becomes more complex and you need a way to display a rich structure of view model data while keeping your code simple.

    Knockout can use alternative template engines for its template binding. It has a native, built-in template engine which you can use right away, and you can customize how the data and template are combined to determine the resulting markup.

  4. Extensible

    This lets you implement custom behaviors as new declarative bindings for easy reuse in just a few lines of code. Knockout is also flexible enough to integrate with other libraries and technologies.

Knockout VS jQuery

Knockout.js is not a replacement for jQuery, Prototype, or MooTools. It doesn't attempt to provide animation, generic event handling, or AJAX functionality (however, Knockout.js can parse the data received from an AJAX call). Knockout.js is focused only on designing scalable, data-driven UIs.

Moreover, Knockout.js is compatible with any other client-side and server-side technology; it acts as a supplement to other web technologies like jQuery and MooTools.

MVVM Design Pattern

Knockout.js uses the Model-View-ViewModel (MVVM) design pattern, in which the model is your stored data, the view is the visual representation of that data (the UI), and the ViewModel acts as the intermediary between the model and the view.

Actually, the ViewModel is a JavaScript representation of the model data, along with associated functions for manipulating the data. Knockout.js creates a direct connection between the ViewModel and the view, which helps it detect changes to the underlying model and automatically update the right elements of the UI.

 

Categories: ASP.NET

Dependency Injection Pattern in C#

August 29, 2017

Simple Introduction to Dependency Injection

Scenario 1

You work in an organization where you and your colleagues tend to travel a lot. Generally you travel by air, and every time you need to catch a flight, you arrange for a pickup by a cab. You are aware of the airline agency which does the flight bookings, and the cab agency which arranges for the cab to drop you off at the airport. You know the phone numbers of the agencies, and you are aware of the typical conversational activities needed to conduct the necessary bookings.

Thus your typical travel planning routine might look like the following:

  • Decide the destination, and the desired arrival date and time.
  • Call up the airline agency and convey the necessary information to obtain a flight booking.
  • Call up the cab agency and request a cab to be able to catch a particular flight from, say, your residence (the cab agency in turn might need to communicate with the airline agency to obtain the flight departure schedule and the airport, compute the distance between your residence and the airport, and compute the appropriate time at which the cab should reach your residence).
  • Pick up the tickets, catch the cab, and be on your way.

Now if your company suddenly changed the preferred agencies and their contact mechanisms, you would be subject to the following relearning scenarios

  • The new agencies and their new contact mechanisms (say the new agencies offer internet-based services and the way to do the bookings is over the internet instead of over the phone).
  • The typical conversational sequence through which the necessary bookings get done (data instead of voice).

It's not just you: many of your colleagues would probably need to adjust themselves to the new scenario. This could lead to a substantial amount of time being spent in the readjustment process.

Scenario 2

Now let's say the protocol is a little different. You have an administration department. Whenever you need to travel, the administration department's interactive telephony system simply calls you up (it, in turn, is hooked up to the agencies). Over the phone, you simply state the destination and the desired arrival date and time by responding to a programmed set of questions. The flight reservations are made for you, the cab gets scheduled for the appropriate time, and the tickets get delivered to you.

Now if the preferred agencies were changed, the administration department would become aware of the change and would perhaps readjust its workflow to be able to communicate with the new agencies. The interactive telephony system could be reprogrammed to communicate with the agencies over the internet. However, you and your colleagues would have no relearning required; you would still continue to follow exactly the same protocol as earlier (since the administration department did all the necessary adaptation so that you do not need to do anything differently).

Dependency Injection?

In both scenarios, you are the client and you are dependent upon the services provided by the agencies. However, Scenario 2 has a few differences.

  • You don't need to know the contact numbers/contact points of the agencies – the administration department calls you when necessary.
  • You don't need to know the exact conversational sequence by which they conduct their activities (voice/data, etc.), though you are aware of a particular standardized conversational sequence with the administration department.
  • The services you depend upon are provided to you in a manner that does not require you to readjust should the service providers change.

That is dependency injection in "real life". This may not seem like a lot when you imagine the cost to yourself as a single person, but if you imagine a large organization, the savings are likely to be substantial.

Sorry for the long description above 😦 – let's discuss what Dependency Injection is in a software context.

Dependency Injection (DI) is a software design pattern that allows us to develop loosely coupled code. DI is a great way to reduce tight coupling between software components, and it also enables us to better manage future changes and other complexity in our software. The purpose of DI is to make code maintainable.

The Dependency Injection pattern uses a builder object to initialize objects and provide the required dependencies to the object; in other words, it allows you to "inject" a dependency from outside the class.

For example, suppose your Client class needs to use a Service class component. The best you can do is make your Client class aware of an IService interface rather than the Service class. In this way, you can change the implementation of the Service class at any time (and as many times as you want) without breaking the host code.

We can implement DI in different ways. We have the following ways to implement it:

Constructor Injection

  1. This is the most common form of DI.
  2. Dependency injection is done by supplying the DEPENDENCY through the class's constructor when instantiating that class.
  3. The injected component can be used anywhere within the class.
  4. It should be used when the injected dependency is required for the class to function.
  5. It addresses the most common scenario, where a class requires one or more dependencies.

public interface IService
{
    void Serve();
}

public class Service : IService
{
    public void Serve()
    {
        Console.WriteLine("Service Called");
        //To Do: Some Stuff
    }
}

public class Client
{
    private IService _service;

    public Client(IService service)
    {
        this._service = service;
    }

    public void Start()
    {
        Console.WriteLine("Service Started");
        this._service.Serve();
        //To Do: Some Stuff
    }
}

class Program
{
    static void Main(string[] args)
    {
        Client client = new Client(new Service());
        client.Start();

        Console.ReadKey();
    }
}

The injection happens in the constructor, by passing in a Service that implements the IService interface. The dependencies are assembled by a "builder", whose responsibilities are as follows (a minimal sketch follows this list):

  1. knowing the concrete type behind each IService
  2. according to the request, feeding the abstract IService to the Client
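
A minimal hand-rolled builder might look like the following (in real projects this role is usually played by an IoC container such as Unity, Autofac, or Ninject):

static class Builder
{
    // Knows which concrete type satisfies IService...
    public static IService ResolveService()
    {
        return new Service();
    }

    // ...and feeds the abstract IService to the Client.
    public static Client BuildClient()
    {
        return new Client(ResolveService());
    }
}

// Usage:
// Client client = Builder.BuildClient();
// client.Start();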

Property Injection

  1. Also called setter injection.
  2. Used when a class has optional dependencies, or where the implementations may need to be swapped. Different logger implementations could be used this way.
  3. May require checking for a provided implementation throughout the class (you need to check for null before using it).
  4. Does not require adding or modifying constructors.

public interface IService
{
    void Serve();
}

public class Service : IService
{
    public void Serve()
    {
        Console.WriteLine("Service Called");
        //To Do: Some Stuff
    }
}

public class Client
{
    private IService _service;

    public IService Service
    {
        set
        {
            this._service = value;
        }
    }

    public void Start()
    {
        Console.WriteLine("Service Started");
        this._service.Serve();
        //To Do: Some Stuff
    }
}

class Program
{
    static void Main(string[] args)
    {
        Client client = new Client();
        client.Service = new Service();
        client.Start();

        Console.ReadKey();
    }
}

Method Injection

  1. The dependency is injected into a single method, for use by that method only.
  2. Useful where the whole class does not need the dependency, just the one method.
  3. Generally uncommon; usually used for edge cases.

public interface IService
{
    void Serve();
}

public class Service : IService
{
    public void Serve()
    {
        Console.WriteLine("Service Called");
        //To Do: Some Stuff
    }
}

public class Client
{
    private IService _service;

    public void Start(IService service)
    {
        this._service = service;
        Console.WriteLine("Service Started");
        this._service.Serve();
        //To Do: Some Stuff
    }
}

class Program
{
    static void Main(string[] args)
    {
        Client client = new Client();
        client.Start(new Service());

        Console.ReadKey();
    }
}

Key points about DI

  1. Reduces class coupling
  2. Increases code reuse
  3. Improves code maintainability
  4. Improves application testability