
Archive for the ‘ASP.NET’ Category

GridView: FindControl from HeaderTemplate/ItemTemplate

September 19, 2017

How do you find a control defined in an ItemTemplate of a GridView?

Controls in an ItemTemplate belong to individual data rows, so you look them up through the row, for example ("chkItem" stands for a checkbox defined in the ItemTemplate):

Dim chkItem As CheckBox = DirectCast(gridView1.Rows(rowIndex).FindControl("chkItem"), CheckBox)
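In practice, the natural place for this per-row lookup is the GridView's RowDataBound event. A minimal C# sketch ("chkItem" is a hypothetical checkbox assumed to sit in an ItemTemplate; it is not part of the markup shown below):

protected void gridView1_RowDataBound(object sender, GridViewRowEventArgs e)
{
    // Header, footer, and pager rows also raise this event, so filter to data rows
    if (e.Row.RowType == DataControlRowType.DataRow)
    {
        CheckBox chkItem = (CheckBox)e.Row.FindControl("chkItem");
        // use chkItem here
    }
}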

How do you find a control defined in the HeaderTemplate of a GridView?

For example, suppose you want to find a checkbox defined in the header template of a GridView:

<asp:GridView ID="gridView1" runat="server" AutoGenerateColumns="False">
    <Columns>
        <asp:TemplateField>
            <HeaderTemplate>
                <asp:CheckBox ID="chkHeader" runat="server" />
            </HeaderTemplate>
        </asp:TemplateField>
    </Columns>
</asp:GridView>

Solution:

Find the header control using the following line of code.

VB.NET Code:

Dim chkHeader As CheckBox = DirectCast(gridView1.HeaderRow.FindControl("chkHeader"), CheckBox)

C# Code:

CheckBox chkHeader = (CheckBox)gridView1.HeaderRow.FindControl("chkHeader");

Hope this helps!


Microservices in Practice: From Architecture to Deployment

August 30, 2017

“Microservices” is one of the most popular buzzwords in the field of software architecture. There are quite a lot of learning materials on the fundamentals and benefits of microservices, but very few resources on how you can use microservices in real-world enterprise scenarios.

In this post, I’m planning to cover the key architectural concepts of the Microservices Architecture (MSA) and how you can use those architectural principles in practice.

Monolithic Architecture

Enterprise software applications are designed to facilitate numerous business requirements. Hence, a given software application offers hundreds of functionalities, and all of them are piled into a single monolithic application. For example, ERPs, CRMs, and various other software systems are built as monoliths with several hundred functionalities. The deployment, troubleshooting, scaling, and upgrading of such monstrous software applications is a nightmare.

Service Oriented Architecture (SOA) was designed to overcome some of the aforementioned limitations by introducing the concept of a ‘service’: an aggregation and grouping of similar functionalities offered by an application. With SOA, a software application is designed as a combination of ‘coarse-grained’ services. However, in SOA the scope of a service is very broad, which leads to complex, mammoth services with several dozen operations (functionalities) along with complex message formats and standards (e.g. the WS-* standards).

Figure 1 : Monolithic Architecture

In most cases, services in SOA are independent of each other, yet they are deployed in the same runtime along with all the other services (think of several web applications deployed into the same Tomcat instance). Similar to monolithic software applications, these services have a habit of growing over time by accumulating various functionalities, which literally turns them into monolithic globs no different from conventional monolithic applications such as ERPs. Figure 1 shows a retail software application that comprises multiple services, all deployed into the same application runtime; it is a very good example of a monolithic architecture. Here are some of the characteristics of such applications:

  • Monolithic applications are designed, developed, and deployed as a single unit.
  • Monolithic applications are overwhelmingly complex, which leads to nightmares in maintaining, upgrading, and adding new features.
  • It is hard to practice agile development and delivery methodologies with a monolithic architecture.
  • The entire application has to be redeployed in order to update any part of it.
  • Scaling: has to be scaled as a single application, and is difficult to scale with conflicting resource requirements (e.g. one service requires more CPU while another requires more memory).
  • Reliability: one unstable service can bring the whole application down.
  • Hard to innovate: it is really difficult to adopt new technologies and frameworks, as all the functionalities have to be built on homogeneous technologies/frameworks.

These characteristics of Monolithic Architecture have led to the Microservices Architecture.

Microservices Architecture

The foundation of microservices architecture (MSA) is developing a single application as a suite of small, independent services, each running in its own process and developed and deployed independently.

In most definitions of microservices architecture, it is explained as the process of segregating the services available in a monolith into a set of independent services. However, in my opinion, microservices architecture is not just about splitting the services available in a monolith into independent services.

The key idea is that by looking at the functionalities offered by the monolith, we can identify the required business capabilities. Those business capabilities can then be implemented as fully independent, fine-grained, and self-contained (micro)services. They might be implemented on top of different technology stacks, and each service addresses a very specific and limited business scope.

Therefore, the online retail system scenario described above can be realized with microservices architecture as depicted in figure 2. With the microservices architecture, the retail software application is implemented as a suite of microservices. As you can see in figure 2, based on the business requirements, there is an additional microservice created from the original set of services in the monolith. So it is quite obvious that using microservices architecture goes beyond merely splitting up the services in the monolith.

Figure 2 : Microservice Architecture

So, let’s dive deep into the key architectural principles of microservices and more importantly, let’s focus on how they can be used in practice.

Designing Microservices: Size, Scope, and Capabilities

You may be building your software application from scratch using microservices architecture, or you may be converting existing applications/services into microservices. Either way, it is quite important to properly decide the size, scope, and capabilities of the microservices. That is probably the hardest thing you initially encounter when implementing microservices architecture in practice.

Let’s discuss some of the key practical concerns and misconceptions related to the size, scope, and capabilities of microservices.

  • Lines of code/team size are lousy metrics: there are several discussions about deciding the size of a microservice based on the lines of code of its implementation or its team's size (i.e. the two-pizza team). However, these are impractical, lousy metrics, because we can still develop services with little code or a two-pizza team while totally violating microservice architectural principles.
  • ‘Micro’ is a somewhat misleading term: most developers tend to think that they should make the service as small as possible. This is a misinterpretation.
  • In the SOA context, services are often implemented as monolithic globs supporting several dozen operations/functionalities. Having SOA-like services and rebranding them as microservices is not going to give you any of the benefits of microservices architecture.

So, then how should we properly design services in Microservices Architecture?

Guidelines for Designing Microservices

  • Single Responsibility Principle (SRP): having a limited, focused business scope for a microservice helps us achieve agility in the development and delivery of services.
  • During the design phase of the microservices, we should find their boundaries and align them with the business capabilities (also known as bounded contexts in Domain-Driven Design).
  • Make sure the microservices design ensures agile/independent development and deployment of the service.
  • Our focus should be on the scope of the microservice, not on making the service smaller. The (right) size of the service is whatever size is required to facilitate a given business capability.
  • Unlike services in SOA, a given microservice should have very few operations/functionalities and a simple message format.
  • It is often a good practice to start with relatively broad service boundaries and refactor to smaller ones (based on business requirements) as time goes on.

In our retail use case, we have split the functionalities of the monolith into four different microservices: ‘inventory’, ‘accounting’, ‘shipping’, and ‘store’. Each addresses a limited but focused business scope, so that each service is fully decoupled from the others, which ensures agility in development and deployment.

Messaging in Microservices

In monolithic applications, business functionalities of different components are invoked using function calls or language-level method calls. In SOA, this shifted towards much more loosely coupled web-service-level messaging, primarily based on SOAP on top of different protocols such as HTTP and JMS. Web services with several dozen operations and complex message schemas were a key force holding back the popularity of web services. For microservices architecture, a simple and lightweight messaging mechanism is required.

Synchronous Messaging – REST, Thrift

For synchronous messaging (the client expects a timely response from the service and waits until it gets one) in microservices architecture, REST is the unanimous choice, as it provides a simple messaging style implemented with HTTP request-response, based on a resource API style. Therefore, most microservices implementations use HTTP along with resource-based API styles (every functionality is represented as a resource, with operations carried out on top of those resources).

Figure 3 : Using REST interfaces to expose microservices
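For instance, here is a minimal sketch of exposing one retail capability over REST with ASP.NET Web API (the controller, route, and payload are illustrative assumptions, not from the original post):

using System.Web.Http;

public class ProductsController : ApiController
{
    // GET api/products/5 : the product is a resource; GET retrieves it
    public IHttpActionResult Get(int id)
    {
        var product = new { Id = id, Name = "Sample product" }; // stand-in for a repository lookup
        return Ok(product);
    }
}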

Thrift (in which you can define an interface definition for your microservice) can be used as an alternative to REST/HTTP synchronous messaging.

Asynchronous Messaging – AMQP, STOMP, MQTT

For some microservices scenarios, it is necessary to use asynchronous messaging techniques (the client doesn't expect a response immediately, or doesn't accept a response at all). In such scenarios, asynchronous messaging protocols such as AMQP, STOMP, or MQTT are widely used.

Message Formats – JSON, XML, Thrift, ProtoBuf, Avro

Deciding the best-suited message format for microservices is another key factor. Traditional monolithic applications use complex binary formats, while SOA/web-services-based applications use text messages based on complex message formats (SOAP) and schemas (XSD). Most microservices-based applications use simple text-based message formats such as JSON and XML on top of an HTTP resource API style. In cases where a binary message format is needed (text messages can become verbose in some use cases), microservices can leverage binary formats such as binary Thrift, ProtoBuf, or Avro.
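As a tiny illustration of the JSON case, here is a sketch using the widely used Json.NET library (the Order type is a hypothetical payload):

using Newtonsoft.Json;

public class Order
{
    public int Id { get; set; }
    public string Item { get; set; }
}

class JsonDemo
{
    static void Main()
    {
        // serialize an object into the JSON text that goes over HTTP...
        string json = JsonConvert.SerializeObject(new Order { Id = 1, Item = "book" });

        // ...and parse incoming JSON back into a typed object
        Order order = JsonConvert.DeserializeObject<Order>(json);
        System.Console.WriteLine(json + " -> " + order.Item);
    }
}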

Service Contracts – Defining the Service Interfaces – Swagger, RAML, Thrift IDL

When you have a business capability implemented as a service, you need to define and publish the service contract. In traditional monolithic applications, we barely find such features for defining the business capabilities of an application. In the SOA/web services world, WSDL is used to define the service contract but, as we all know, WSDL is not the ideal solution for defining a microservices contract, as it is insanely complex and tightly coupled to SOAP.

Since we build microservices on top of the REST architectural style, we can use the same REST API definition techniques to define the contract of the microservices. Therefore, microservices use standard REST API definition languages such as Swagger and RAML to define the service contracts.

For other microservices implementations that are not based on HTTP/REST (such as Thrift), we can use the protocol-level ‘Interface Definition Language (IDL)’ (e.g. Thrift IDL).

Integrating Microservices (Inter-service/process Communication)

In microservices architecture, software applications are built as a suite of independent services. So, in order to realize a business use case, communication structures between the different microservices/processes are required. That is why inter-service/process communication between microservices is such a vital aspect.

In SOA implementations, inter-service communication is facilitated by an Enterprise Service Bus (ESB), and most of the business logic resides in that intermediate layer (message routing, transformation, and orchestration). However, microservices architecture promotes eliminating the central message bus/ESB and moving the ‘smartness’ or business logic into the services and clients (known as ‘smart endpoints’).

Since microservices use standard protocols and formats such as HTTP and JSON, the requirement of integrating with disparate protocols is minimal when it comes to communication among microservices. An alternative approach in microservice communication is to use a lightweight message bus or gateway with minimal routing capabilities, acting as a ‘dumb pipe’ with no business logic implemented on the gateway. Based on these styles, several communication patterns have emerged in microservices architecture.

Point-to-point Style – Invoking Services Directly

In the point-to-point style, the entirety of the message routing logic resides on the endpoints and the services communicate directly. Each microservice exposes a REST API, and a given microservice or an external client can invoke another microservice through its REST API.

Figure 4 : Inter-service communication with point-to-point connectivity.
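A minimal sketch of such a direct call (the service URL and resource path are assumptions for illustration):

using System.Net.Http;
using System.Threading.Tasks;

class ShippingClient
{
    private static readonly HttpClient http = new HttpClient();

    // One microservice (or an external client) calling another service's REST API directly
    public async Task<string> GetShipmentStatusAsync(int orderId)
    {
        var response = await http.GetAsync("http://shipping-service/api/shipments/" + orderId);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}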

Obviously, this model works for relatively simple microservices-based applications, but as the number of services increases it becomes overwhelmingly complex. After all, that is exactly the reason for using an ESB in traditional SOA implementations: to get rid of messy point-to-point integration links. Let's summarize the key drawbacks of the point-to-point style for microservice communication:

  • Non-functional requirements such as end-user authentication, throttling, monitoring, etc. have to be implemented at each and every microservice.
  • As a result of duplicating common functionalities, each microservice implementation can become complex.
  • There is no control at all over the communication between the services and clients (even for monitoring, tracing, or filtering).
  • The direct communication style is often considered a microservice anti-pattern for large-scale microservice implementations.

Therefore, for complex microservices use cases, rather than having point-to-point connectivity or a central ESB, we can have a lightweight central messaging bus that provides an abstraction layer for the microservices and can be used to implement various non-functional capabilities. This style is known as the API Gateway style.

API-Gateway Style

The key idea behind the API Gateway style is to use a lightweight message gateway as the main entry point for all clients/consumers and to implement the common non-functional requirements at the gateway level. In general, an API gateway allows you to consume a managed API over REST/HTTP. Therefore, we can expose our business functionalities, implemented as microservices, through the API-GW as managed APIs. In fact, this is a combination of microservices architecture and API management, which gives you the best of both worlds.

Figure 5: All microservices are exposed through an API-GW.

In our retail business scenario, as depicted in figure 5, all the microservices are exposed through an API-GW and that is the single entry point for all the clients. If a microservice wants to consume another microservice that also needs to be done through the API-GW.

API-GW style gives you the following advantages:

  • Ability to provide the required abstractions at the gateway level for the existing microservices. For example, rather than providing a one-size-fits-all style API, the API gateway can expose a different API for each client.
  • Lightweight message routing/transformations at the gateway level.
  • A central place to apply non-functional capabilities such as security, monitoring, and throttling.
  • With the API-GW pattern, the microservices become even more lightweight, as all the non-functional requirements are implemented at the gateway level.

The API-GW style could well be the most widely used pattern in most microservice implementations.

Message Broker style

Microservices can be integrated using asynchronous messaging scenarios such as one-way requests and publish-subscribe messaging, using queues or topics. A given microservice can be the message producer, asynchronously sending messages to a queue or topic. The consuming microservice then consumes messages from that queue or topic. This style decouples message producers from message consumers, and the intermediate message broker buffers messages until the consumer is able to process them. Producer microservices are completely unaware of the consumer microservices.

Figure 6: Asynchronous messaging based integration using pub-sub.

The communication between the consumers/producers is facilitated through a message broker based on asynchronous messaging standards such as AMQP, MQTT, etc.
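As a minimal sketch (assuming a local RabbitMQ broker and a queue named "orders"; both are illustrative), publishing from a producer microservice with the .NET RabbitMQ client looks like this:

using System.Text;
using RabbitMQ.Client;

class OrderPublisher
{
    public void Publish(string message)
    {
        var factory = new ConnectionFactory() { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // declare the queue (idempotent) and publish; the broker buffers the message
            channel.QueueDeclare(queue: "orders", durable: false, exclusive: false,
                                 autoDelete: false, arguments: null);
            channel.BasicPublish(exchange: "", routingKey: "orders",
                                 basicProperties: null, body: Encoding.UTF8.GetBytes(message));
        }
    }
}

The consuming microservice subscribes to the same queue (for example with the client's EventingBasicConsumer) and remains completely decoupled from the producer.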

Decentralized Data Management

In monolithic architecture, the application stores data in a single, centralized database to implement the various functionalities/capabilities of the application.

Figure 7: Monolithic application uses a centralized database to implement all its features.

In microservices architecture, the functionalities are dispersed across multiple microservices and, if we use the same centralized database, the microservices will no longer be independent of each other (for instance, if the database schema is changed for a given microservice, it will break several other services). Therefore, each microservice has to have its own database.

Figure 8: Each microservice has its own private database and can't directly access the databases owned by other microservices.

Here are the key aspects of implementing decentralized data management in microservices architecture.

  • Each microservice can have a private database to persist the data required to implement the business functionality it offers.
  • A given microservice can only access its dedicated private database, not the databases of other microservices.
  • In some business scenarios, you might have to update several databases for a single transaction. In such scenarios, the databases of other microservices should be updated only through their service APIs (direct database access is not allowed).

Decentralized data management gives you fully decoupled microservices and the liberty of choosing disparate data management techniques (SQL or NoSQL, different database management systems for each service). However, for complex transactional use cases that involve multiple microservices, the transactional behavior has to be implemented using the APIs offered by each service, with the logic residing either at the client or at an intermediary (GW) level.

Decentralized Governance

Microservices architecture favors decentralized governance.

In general, ‘governance’ means establishing and enforcing how people and solutions work together to achieve organizational objectives. In the context of SOA, governance guides the development of reusable services, establishing how services will be designed and developed and how those services will change over time. It establishes agreements between the providers of services and the consumers of those services, telling the consumers what they can expect and the providers what they are obligated to provide. Two types of SOA governance are in common use:

  • Design-time governance – defining and controlling the service creations, design, and implementation of service policies
  • Run-time governance – the ability to enforce service policies during execution

So, what does governance really mean in the microservices context? In microservices architecture, the microservices are built as fully independent and decoupled services using a variety of technologies and platforms, so there is no need to define a common standard for service design and development. The decentralized governance capabilities of microservices can be summarized as follows:

  • In microservices architecture, there is no requirement for centralized design-time governance.
  • Microservices can make their own decisions about their design and implementation.
  • Microservices architecture fosters the sharing of common/reusable services.
  • Some runtime governance aspects such as SLAs, throttling, monitoring, common security requirements, and service discovery may be implemented at the API-GW level.

Service Registry and Service Discovery

In microservices architecture, the number of microservices you need to deal with is quite high, and their locations change dynamically owing to the rapid and agile development/deployment nature of microservices. Therefore, you need to find the location of a microservice at runtime. The solution to this problem is to use a service registry.

Service Registry

The service registry holds the microservice instances and their locations. Microservice instances are registered with the service registry on startup and deregistered on shutdown. Consumers can find the available microservices and their locations through the service registry.

Service Discovery

To find the available microservices and their locations, we need a service discovery mechanism. There are two types of service discovery mechanisms: client-side discovery and server-side discovery. Let's have a closer look at each.

Client-side Discovery — In this approach, the client or the API-GW obtains the location of a service instance by querying a Service Registry.

Figure 9 – Client-side discovery

Here the client/API-GW has to implement the service discovery logic by calling the service registry component.
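A minimal client-side lookup sketch. This assumes a Consul-style registry (the URL and endpoint follow Consul's catalog API; adapt it to whatever registry you use):

using System.Net.Http;
using System.Threading.Tasks;

class RegistryClient
{
    private static readonly HttpClient http = new HttpClient();

    // Ask the registry for the currently registered instances of a service;
    // the response is a JSON array with each instance's address and port.
    public async Task<string> DiscoverAsync(string serviceName)
    {
        return await http.GetStringAsync("http://registry:8500/v1/catalog/service/" + serviceName);
    }
}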

Server-side Discovery — with this approach, the client/API-GW sends the request to a component (such as a load balancer) that runs at a well-known location. That component calls the service registry and determines the absolute location of the microservice.

Figure 10 – Server-side discovery

Microservices deployment solutions such as Kubernetes (http://kubernetes.io/v1.1/docs/user-guide/services.html) offer server-side discovery mechanisms.

Deployment

When it comes to microservices architecture, the deployment of microservices plays a critical role and has the following key requirements:

  • Ability to deploy/un-deploy independently of other microservices.
  • Must be able to scale at the level of each microservice (a given service may get more traffic than other services).
  • Building and deploying microservices quickly.
  • Failure in one microservice must not affect any of the other services.

Docker (an open-source engine that lets developers and system administrators deploy self-sufficient application containers in Linux environments) provides a great way to deploy microservices that addresses the above requirements. The key steps involved are as follows:

  • Package the microservice as a (Docker) container image.
  • Deploy each service instance as a container.
  • Scaling is done based on changing the number of container instances.
  • Building, deploying, and starting a microservice is much faster, as Docker containers start much faster than regular VMs.

Kubernetes extends Docker's capabilities by allowing a cluster of Linux containers to be managed as a single system: it manages and runs Docker containers across multiple hosts and offers co-location of containers, service discovery, and replication control. As you can see, most of these features are essential in our microservices context too. Hence, using Kubernetes (on top of Docker) for microservices deployment has become an extremely powerful approach, especially for large-scale microservices deployments.

Figure 11 : Building and deploying microservices as containers.

Figure 11 shows an overview of the deployment of the microservices of the retail application. Each microservice instance is deployed as a container, with two containers per host. The number of containers run on a given host can be changed arbitrarily.

Security

Securing microservices is quite a common requirement when you use microservices in real-world scenarios. Before jumping into microservices security, let's have a quick look at how security is normally implemented at the monolithic application level.

  • In a typical monolithic application, security is about finding out ‘who is the caller’, ‘what can the caller do’, and ‘how do we propagate that information’.
  • This is usually implemented in a common security component at the beginning of the request-handling chain, which populates the required information with the use of an underlying user repository (or user store).

So, can we directly translate this pattern into microservices architecture? Yes, but that requires a security component implemented at each microservice, talking to a centralized/shared user repository to retrieve the required information. That is a very tedious way of solving the microservices security problem. Instead, we can leverage widely used API security standards such as OAuth2 and OpenID Connect to find a better solution. Before we dive deep into that, let me summarize the purpose of each standard and how we can use it.

  • OAuth2 – an access delegation protocol. The client authenticates with the authorization server and gets an opaque token known as an ‘access token’. An access token carries zero information about the user/client; it only holds a reference to the user information, which can only be resolved by the authorization server. Hence, this is known as a ‘by-reference token’, and it is safe to use even on the public network/internet.
  • OpenID Connect behaves similarly to OAuth2 but, in addition to the access token, the authorization server issues an ID token that contains information about the user. This is often implemented as a JWT (JSON Web Token) signed by the authorization server, which establishes trust between the authorization server and the client. The JWT is therefore known as a ‘by-value token’, as it contains the user's information, and it is obviously not safe to use outside the internal network.

Now, let's see how we can use these standards to secure the microservices in our retail example.

Figure 12 : Microservice security with OAuth2 and OpenID Connect

As shown in figure 12, these are the key steps involved in implementing microservices security:

  • Leave authentication to the OAuth2/OpenID Connect server (the authorization server), so that microservices grant access only when the caller has the right to use the data.
  • Use the API-GW style, in which there is a single entry point for all client requests.
  • The client connects to the authorization server and obtains the access token (by-reference token), then sends the access token to the API-GW along with the request.
  • Token translation at the gateway – the API-GW extracts the access token and sends it to the authorization server to retrieve the JWT (by-value token).
  • The GW then passes this JWT along with the request to the microservices layer.
  • The JWT contains the necessary information to help in storing user sessions, etc. If each service can understand a JSON Web Token, then you have distributed your identity mechanism, allowing you to transport identity throughout your system.
  • At each microservice, we can have a component that processes the JWT; this is a fairly trivial implementation (a sketch follows this list).
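A minimal sketch of that JWT-processing component, based on the System.IdentityModel.Tokens.Jwt library (the issuer, audience, and the way the signing key is obtained are illustrative assumptions):

using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using Microsoft.IdentityModel.Tokens;

static class JwtValidator
{
    // Validates the JWT passed down by the gateway and returns the caller's identity.
    public static ClaimsPrincipal Validate(string token, SecurityKey signingKey)
    {
        var parameters = new TokenValidationParameters
        {
            ValidIssuer = "https://auth.example.com", // hypothetical authorization server
            ValidAudience = "retail-microservices",   // hypothetical audience value
            IssuerSigningKey = signingKey             // key published by the authorization server
        };

        SecurityToken validated;
        return new JwtSecurityTokenHandler().ValidateToken(token, parameters, out validated);
    }
}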

Transactions

What about transaction support in microservices? Supporting distributed transactions across multiple microservices is an exceptionally complex task, and the microservice architecture itself encourages transaction-less coordination between services.

The idea is that a given service is fully self-contained and based on the single responsibility principle. The need for distributed transactions across multiple microservices is often a symptom of a design flaw and can usually be sorted out by refactoring the scopes of the microservices. However, if there is a mandatory requirement for distributed transactions across multiple services, such scenarios can be realized by introducing ‘compensating operations’ at each microservice. The key idea: if a given microservice fails to execute a given operation, we consider that a failure of the whole business transaction, and all the earlier operations have to be undone by invoking the respective compensating operations of the other microservices.
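A minimal sketch of the compensating-operation idea (the service clients and operation names are hypothetical; in practice each call goes through the service's API):

public interface IInventoryClient
{
    void Reserve(int orderId);
    void CancelReservation(int orderId);
}

public interface IShippingClient
{
    void Schedule(int orderId);
}

class OrderCoordinator
{
    // No distributed transaction: if the second step fails, explicitly undo the first.
    public void PlaceOrder(int orderId, IInventoryClient inventory, IShippingClient shipping)
    {
        inventory.Reserve(orderId);               // step 1
        try
        {
            shipping.Schedule(orderId);           // step 2
        }
        catch
        {
            inventory.CancelReservation(orderId); // compensating operation for step 1
            throw;
        }
    }
}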

Design for Failures

Microservices architecture introduces a dispersed set of services and, compared to a monolithic design, increases the possibility of failures at each service. A given microservice can fail due to network issues, unavailability of underlying resources, etc. An unavailable or unresponsive microservice should not bring the whole microservices-based application down. Thus, microservices should be fault tolerant, able to recover when possible, and clients have to handle failures gracefully.

Also, since services can fail at any time, it’s important to be able to detect (real-time monitoring) the failures quickly and, if possible, automatically restore the services.

There are several commonly used patterns for handling errors in the microservices context.

Circuit Breaker

When making an external call to a microservice, you configure a fault monitor component with each invocation; when failures reach a certain threshold, that component stops any further invocations of the service (it trips the circuit open). After a certain number of requests in the open state (which you can configure), the circuit is changed back to the closed state.

This pattern is quite useful for avoiding unnecessary resource consumption and request delays due to timeouts, and it also gives us a chance to monitor the system (based on the active open-circuit states).
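A minimal hand-rolled sketch of the idea, using a time-based open state (in production you would typically reach for a library such as Polly; the threshold and interval here are illustrative):

using System;

class CircuitBreaker
{
    private readonly int threshold;          // consecutive failures before the circuit opens
    private readonly TimeSpan openInterval;  // how long to fail fast before allowing a trial call
    private int failures;
    private DateTime openedAt;

    public CircuitBreaker(int threshold, TimeSpan openInterval)
    {
        this.threshold = threshold;
        this.openInterval = openInterval;
    }

    public T Invoke<T>(Func<T> call)
    {
        // open state: fail fast without touching the remote service
        if (failures >= threshold && DateTime.UtcNow - openedAt < openInterval)
            throw new InvalidOperationException("Circuit is open; failing fast.");

        try
        {
            T result = call();
            failures = 0; // a success closes the circuit again
            return result;
        }
        catch
        {
            failures++;
            if (failures >= threshold) openedAt = DateTime.UtcNow;
            throw;
        }
    }
}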

Bulkhead

As a microservices application comprises a number of microservices, a failure in one part of the application should not affect the rest. The bulkhead pattern is about isolating different parts of your application so that a failure of a service in one part does not affect any of the other services.
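One common realization, sketched below, is to cap the concurrency dedicated to each downstream dependency so that a hung dependency can never exhaust the threads/connections the other services need (the limit of 10 is illustrative):

using System;
using System.Threading;
using System.Threading.Tasks;

class Bulkhead
{
    private readonly SemaphoreSlim slots = new SemaphoreSlim(10);

    public async Task<T> ExecuteAsync<T>(Func<Task<T>> call)
    {
        await slots.WaitAsync(); // waits when the compartment is full
        try { return await call(); }
        finally { slots.Release(); }
    }
}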

Timeout

The timeout pattern is a mechanism that allows you to stop waiting for a response from a microservice when you believe the response won't come. You can configure the time interval you wish to wait.
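A minimal sketch using HttpClient's built-in timeout (the URL and the three-second interval are illustrative):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class TimeoutDemo
{
    private static readonly HttpClient http =
        new HttpClient { Timeout = TimeSpan.FromSeconds(3) };

    public static async Task CallInventoryAsync()
    {
        try
        {
            var response = await http.GetAsync("http://inventory-service/api/items/1");
            Console.WriteLine(response.StatusCode);
        }
        catch (TaskCanceledException)
        {
            // HttpClient surfaces its timeout as a canceled task
            Console.WriteLine("Request timed out.");
        }
    }
}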

So, where and how do we use these patterns with microservices? In most cases, these patterns are applicable at the gateway level: when microservices are unavailable or not responding, the gateway can decide whether to send the request to the microservice, using circuit breakers or the timeout pattern. It is also quite important to have patterns such as bulkhead implemented at the gateway level, since it is the single entry point for all client requests, and a failure in a given service should not affect the invocation of the other microservices.

In addition, the gateway can be used as the central point for obtaining the status of each microservice and monitoring it, as each microservice is invoked through the gateway.

Categories: ASP.NET

What is Knockout.js and how is it different from jQuery?

August 29, 2017

Knockout.js is a JavaScript library that allows us to bind HTML elements to any data model. It provides a simple two-way data-binding mechanism between your data model and the UI: any changes to the data model are automatically reflected in the DOM (UI), and any changes to the DOM are automatically reflected back to the data model.

Why Knockout

Consider a simple example: a shopping-cart interface for an e-commerce website. When the user deletes an item from the shopping cart, you have to remove the item from the underlying data model, remove the associated HTML element from the shopping cart, and also update the total price. Without Knockout, you have to write event handlers and listeners for this dependency tracking.

Knockout.js provides a simple and convenient way to manage this kind of complex, data-driven interface. Instead of tracking changes manually, each element of the HTML page declares which data it depends on, and the DOM is updated automatically whenever the data model changes.

Knockout Features

  1. Declarative Bindings

    This allows you to bind the elements of the UI to the data model in a simple and convenient way. When you use JavaScript to manipulate the DOM directly, you may end up with broken code if you later change the DOM hierarchy or element IDs; with declarative bindings, even if you change the DOM, all bound elements stay connected. You bind data to a DOM element simply by including a data-bind attribute on it (see the sketch after this list).

  2. Dependency Tracking

    This automatically updates the right parts of your UI whenever your data model changes. It is achieved by the two-way bindings and a special type of variable called observables. You don't have to worry about adding event handlers and listeners for dependency tracking.

  3. Templating

    This comes in handy when your application becomes more complex and you need a way to display a rich structure of view model data, thus keeping your code simple.

    Knockout can use alternative template engines for its template binding. It also has a native, built-in template engine that you can use right away, and you can customize how the data and template are combined to determine the resulting markup.

  4. Extensible

    This lets you implement custom behaviors as new declarative bindings for easy reuse in just a few lines of code. Knockout is also flexible enough to integrate with other libraries and technologies.
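As a minimal sketch of a declarative binding (the field name is hypothetical), the data-bind attribute is all that connects the markup to the view model; the span re-renders whenever the input changes the observable:

<!-- "firstName" is assumed to be a ko.observable on the bound view model -->
<input data-bind="value: firstName" />
<span data-bind="text: firstName"></span>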

Knockout VS jQuery

Knockout.js is not a replacement for jQuery, Prototype, or MooTools. It doesn't attempt to provide animation, generic event handling, or AJAX functionality (however, Knockout.js can parse the data received from an AJAX call). Knockout.js is focused only on designing scalable, data-driven UIs.

Moreover, Knockout.js is compatible with any other client-side or server-side technology; it acts as a supplement to other web technologies such as jQuery and MooTools.

MVVM Design Pattern

Knockout.js uses the Model-View-ViewModel (MVVM) design pattern, in which the model is your stored data, the view is the visual representation of that data (the UI), and the view model acts as the intermediary between the model and the view.

Actually, the ViewModel is a JavaScript representation of the model data, along with associated functions for manipulating the data. Knockout.js creates a direct connection between the ViewModel and the view, which helps it detect changes to the underlying model and automatically update the right elements of the UI.


Categories: ASP.NET

Dependency Injection Pattern in C#

August 29, 2017

Simple Introduction to Dependency Injection

Scenario 1

You work in an organization where you and your colleagues tend to travel a lot. Generally you travel by air, and every time you need to catch a flight, you arrange a pickup by cab. You are aware of the airline agency that does the flight bookings and the cab agency that arranges for the cab to drop you off at the airport. You know the agencies' phone numbers, and you are aware of the typical conversations needed to make the bookings.

Thus your typical travel planning routine might look like the following:

  • Decide the destination, and the desired arrival date and time.
  • Call up the airline agency and convey the necessary information to obtain a flight booking.
  • Call up the cab agency and request a cab to catch that particular flight from, say, your residence (the cab agency in turn might need to communicate with the airline agency to obtain the flight departure schedule and the airport, compute the distance between your residence and the airport, and work out the appropriate time at which the cab should reach your residence).
  • Pick up the tickets, catch the cab, and be on your way.

Now if your company suddenly changed the preferred agencies and their contact mechanisms, you would be subject to the following relearning scenarios:

  • The new agencies and their new contact mechanisms (say the new agencies offer internet-based services, and the way to do the bookings is over the internet instead of over the phone).
  • The typical conversational sequence through which the necessary bookings get done (data instead of voice).

It's not just you: many of your colleagues would need to adjust to the new scenario. This could lead to a substantial amount of time spent on the readjustment process.

Scenario 2

Now let's say the protocol is a little different. You have an administration department. Whenever you need to travel, an administration department interactive telephony system (which in turn is hooked up to the agencies) simply calls you up. Over the phone, you state the destination and the desired arrival date and time by responding to a programmed set of questions. The flight reservations are made for you, the cab is scheduled for the appropriate time, and the tickets get delivered to you.

Now if the preferred agencies were changed, the administration department would become aware of the change and would perhaps readjust its workflow to communicate with the new agencies; the interactive telephony system could be reprogrammed to communicate with the agencies over the internet. However, you and your colleagues would need no relearning: you would continue to follow exactly the same protocol as earlier (since the administration department did all the necessary adaptation so that you don't need to do anything differently).

Dependency Injection ?

In both scenarios, you are the client and you depend upon the services provided by the agencies. However, scenario 2 has a few differences:

  • You don't need to know the contact numbers / contact points of the agencies; the administration department calls you when necessary.
  • You don't need to know the exact conversational sequence by which they conduct their activities (voice/data, etc.), though you are aware of a particular standardized conversational sequence with the administration department.
  • The services you depend upon are provided to you in a manner that does not require you to readjust should the service providers change.

That is dependency injection in “real life”. This may not seem like a lot when you imagine the cost to yourself as a single person, but across a large organization the savings are likely to be substantial.

Sorry for the long description above 😦 Let's discuss what dependency injection is in a software context.

Dependency Injection (DI) is a software design pattern that allows us to develop loosely coupled code. DI is a great way to reduce tight coupling between software components, and it also enables us to better manage future changes and other complexity in our software. The purpose of DI is to make code maintainable.

The Dependency Injection pattern uses a builder object to initialize objects and provide the required dependencies to an object; in other words, it allows you to “inject” a dependency from outside the class.

For example, suppose your Client class needs to use a Service class component. The best you can do is to make your Client class aware of an IService interface rather than the Service class. In this way, you can change the implementation of the Service class at any time (and as many times as you want) without breaking the host code.

We can implement DI in the following different ways:

Constructor Injection

  1. This is the most common form of DI.
  2. The dependency is supplied through the class's constructor when instantiating the class.
  3. The injected component can be used anywhere within the class.
  4. Should be used when the injected dependency is required for the class to function.
  5. It addresses the most common scenario, where a class requires one or more dependencies.

public interface IService
{
    void Serve();
}

public class Service : IService
{
    public void Serve()
    {
        Console.WriteLine("Service Called");
        //To Do: Some Stuff
    }
}

public class Client
{
    private IService _service;

    public Client(IService service)
    {
        this._service = service;
    }

    public void Start()
    {
        Console.WriteLine("Service Started");
        this._service.Serve();
        //To Do: Some Stuff
    }
}

class Program
{
    static void Main(string[] args)
    {
        Client client = new Client(new Service());
        client.Start();

        Console.ReadKey();
    }
}

The injection happens in the constructor, by passing in a Service that implements the IService interface. The dependencies are assembled by a “builder”, whose responsibilities are as follows (a minimal sketch follows the list):

  1. knowing the concrete type behind each IService
  2. according to the request, feeding the abstract IService to the Client
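A minimal hand-rolled builder (composition root) sketch for the classes above; in real projects this role is usually played by an IoC container such as Unity, Autofac, or Ninject:

static class Builder
{
    // The single place that knows which concrete IService the Client receives.
    public static Client BuildClient()
    {
        IService service = new Service();
        return new Client(service);
    }
}

// Usage: Client client = Builder.BuildClient(); client.Start();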

Property injection

  1. Also called Setter injection.
  2. Used when a class has optional dependencies, or where implementations may need to be swapped. Different logger implementations could be used this way.
  3. May require checking whether an implementation was provided before using it (check for null).
  4. Does not require adding or modifying constructors.

public interface IService
{
    void Serve();
}

public class Service : IService
{
    public void Serve()
    {
        Console.WriteLine("Service Called");
        //To Do: Some Stuff
    }
}

public class Client
{
    private IService _service;

    public IService Service
    {
        set
        {
            this._service = value;
        }
    }

    public void Start()
    {
        Console.WriteLine("Service Started");
        this._service.Serve();
        //To Do: Some Stuff
    }
}

class Program
{
    static void Main(string[] args)
    {
        Client client = new Client();
        client.Service = new Service();
        client.Start();

        Console.ReadKey();
    }
}

Method injection

  1. Inject the dependency into a single method, for use by that method.
  2. Could be useful where the whole class does not need the dependency, just the one method.
  3. Generally uncommon, usually used for edge cases.

public interface IService
{
    void Serve();
}

public class Service : IService
{
    public void Serve()
    {
        Console.WriteLine("Service Called");
        //To Do: Some Stuff
    }
}

public class Client
{
    private IService _service;

    public void Start(IService service)
    {
        this._service = service;
        Console.WriteLine("Service Started");
        this._service.Serve();
        //To Do: Some Stuff
    }
}

class Program
{
    static void Main(string[] args)
    {
        Client client = new Client();
        client.Start(new Service());

        Console.ReadKey();
    }
}

Key points about DI

  1. Reduces class coupling
  2. Increases code reuse
  3. Improves code maintainability
  4. Improves application testing


UpdatePanel triggers another UpdatePanel

August 18, 2017


By default, every UpdatePanel will be refreshed during every asynchronous postback.

Some important remarks about the UpdatePanel

When an UpdatePanel control is not inside another UpdatePanel control, the panel is updated according to the settings of the UpdateMode and ChildrenAsTriggers properties, together with the collection of triggers. When an UpdatePanel control is inside another UpdatePanel control, the child panel is automatically updated when the parent panel is updated.

The content of an UpdatePanel control is updated in the following circumstances:

  • If the UpdateMode property is set to Always, the UpdatePanel control’s content is updated on every postback that originates from anywhere on the page. This includes asynchronous postbacks from controls that are inside other UpdatePanel controls and postbacks from controls that are not inside UpdatePanel controls.
  • If the UpdatePanel control is nested inside another UpdatePanel control and the parent update panel is updated.
  • If the UpdateMode property is set to Conditional, and one of the following conditions occurs:
    • You call the Update method of the UpdatePanel control explicitly.
    • The postback is caused by a control that is defined as a trigger by using the Triggers property of the UpdatePanel control. In this scenario, the control explicitly triggers an update of the panel content. The control can be either inside or outside the UpdatePanel control that defines the trigger.
    • The ChildrenAsTriggers property is set to true and a child control of the UpdatePanel control causes a postback. A child control of a nested UpdatePanel control does not cause an update to the outer UpdatePanel control unless it is explicitly defined as a trigger.


Following is an example with the UpdateMode property set to Conditional.

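A minimal sketch (control names are illustrative): the panel refreshes only when the button declared as its trigger posts back, not on every postback on the page.

<asp:ScriptManager ID="ScriptManager1" runat="server" />

<asp:UpdatePanel ID="UpdatePanel1" runat="server" UpdateMode="Conditional">
    <ContentTemplate>
        <asp:Label ID="lblStatus" runat="server" />
    </ContentTemplate>
    <Triggers>
        <asp:AsyncPostBackTrigger ControlID="btnRefresh" EventName="Click" />
    </Triggers>
</asp:UpdatePanel>

<asp:Button ID="btnRefresh" runat="server" Text="Refresh" />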

Hope it helps!


C# 7.0

July 27, 2017

C# History

In all previous versions of C# (with the possible exception of C# 6.0), new features have revolved around a specific theme:

  • C# 2.0 introduced generics.
  • C# 3.0 enabled LINQ by bringing extension methods, lambda expressions, anonymous types and other related features.
  • C# 4.0 was all about interoperability with dynamic non-strongly typed languages.
  • C# 5.0 simplified asynchronous programming with the async and await keywords.
  • C# 6.0 had its compiler completely rewritten from scratch and introduced a variety of small features and improvements that were easier to implement with the new compiler.

C# 7.0 is no exception to this rule. The language designers were focusing on three main themes:

  • Working with Data – Increased usage of web services is changing the way data is being modelled. Instead of designing the data models as a part of the application, their definition is becoming a part of web service contracts. While this is very convenient in functional languages, it can bring additional complexity to object oriented development. Several C# 7.0 features are targeted at making it easier to work with external data contracts.
  • Improved Performance – Increased share of mobile devices is making performance an important consideration again. C# 7.0 introduces features that allow performance optimizations, which were previously not possible in the .NET framework.
  • Code simplification – Several additional small changes built on the work done for C# 6.0 to allow further simplification of the code written.

Using the code

Tuples (with types and literals)

Returning multiple values from a method is now a common practice. We generally use a custom data type, out parameters, a dynamic return type, or a Tuple object, but C# 7.0 brings tuple types and tuple literals that let you return multiple values/types in the form of a tuple. See the snippet below:

(string, string, string, string) getEmpInfo()
{
    //read EmpInfo from database or any other source and just return it
    string strFirstName = "abc";
    string strAddress = "Address";
    string strCity = "City";
    string strState = "State";
    return (strFirstName, strAddress, strCity, strState); // tuple literal
}

//Just call the above method and it will return multiple values
var empInfo = getEmpInfo();
Console.WriteLine($"Emp info as {empInfo.Item1} {empInfo.Item2} {empInfo.Item3} {empInfo.Item4}.");

In the above sample we can easily retrieve multiple values from the tuple, but the names Item1, Item2, and so on are fairly meaningless, so let's assign some meaningful names before returning. See the sample below:

(string strFName, string strAdd, string strC, string strSt) getEmpInfo()
{
    //code goes here
}

//Now when you call the method, get the values with their specific names as below
var empInfo = getEmpInfo();
Console.WriteLine($"Emp info as {empInfo.strFName} {empInfo.strAdd} {empInfo.strC} {empInfo.strSt}.");

Additionally, you can name the fields directly in the tuple literal, as below:

return (strFName: strFirstName, strAdd: strAddress, strC: strCity, strSt: strState);

Tuples are very useful: you can easily replace a Hashtable or Dictionary with them, and you can even return multiple values for a single key. Additionally, you can use one instead of a List where you store multiple values at a single position.

.NET already has a Tuple type (System.Tuple), but it is a reference type, which leads to performance issues. C# 7.0 brings a value-type tuple, which performs better and is mutable.

Deconstruction

Most of the time we don't want to consume the whole tuple bundle, just some of the inner values; then we can use the deconstruction feature of C# 7.0 to easily deconstruct a tuple and fetch the values we need. The following snippet will clear up any doubt:

(string strFName, string strAdd, string strC, string strSt) = getEmpInfo();
Console.WriteLine($"Address: {strAdd}, City: {strC}");

Record Type

C# is expected to support record types, which are nothing but containers of properties and variables. Most of the time classes are full of properties and variables, and we need a lot of code just to declare them; with the help of record types you can reduce that effort. See the snippet below. (Record types did not make it into the final C# 7.0 release; they eventually shipped in C# 9.0.)

class studentInfo
{
    string _strFName;
    string _strMName;
    string _strLName;

    studentInfo(string strFN, string strMN, string strLN)
    {
        this._strFName = strFN;
        this._strMName = strMN;
        this._strLName = strLN;
    }

    public string StudentFName { get { return this._strFName; } }
    public string StudentMName { get { return this._strMName; } }
    public string StudentLName { get { return this._strLName; } }
}

In the above code we have a class with properties, a constructor, and variables; to declare and access the variables I need to write a lot of code.

To avoid that, I can use a record type. See the snippet below:

class studentInfo(string StudentFName, string StudentMName, string StudentLName);

That's it, we are done!

The snippet above produces the same output as the earlier snippet.


Minimizing OUT

The out parameter is very popular when we want to return multiple values from a method. Out parameters are passed by reference as arguments and are easy to use, but with one condition: the out variable has to be declared before it is passed. See the snippet below:

class SetOut
{
    static void AssignVal(out string strName)
    {
        strName = "I am from OUT";
    }

    static void Main()
    {
        string strArgu;
        AssignVal(out strArgu);
        // here the contents of strArgu are "I am from OUT"
    }
}

C# 7.0 reduces the pain of writing that extra line: you can just pass the argument without declaring it first. See the snippet below:


static void Main()
{
    AssignVal(out string szArgu);
    // here the contents of szArgu are "I am from OUT"
}

Note that variables declared this way have a limited scope; we cannot use them outside the method.

Since we can declare the variable directly in the argument list, C# 7.0 also gives us the freedom to declare it as var, so you don't need to worry about the data type. See the snippet below:

static void Main()
{
    AssignVal(out var szArgu);
    // here the contents of szArgu are "I am from OUT"
}


Non-‘NULL’ able reference type

Null references are really a headache for all programmers; the null reference is the million-dollar exception. If you don't check for them you get runtime exceptions, and if you check every object your code gets longer and longer. To deal with this problem, C# 7.0 was expected to come with non-nullable reference types.

**I think the syntax for this is not yet fixed; what has been released so far is:

‘?’ is for a nullable value type and ‘!’ is for a non-nullable reference type. (In the end, this feature shipped later, in C# 8.0, as ‘nullable reference types’, without the ‘!’ declarator.)

int objVal;             // non-nullable value type
int? objNullVal;        // nullable value type
string! objNotNullRef;  // non-nullable reference type (proposed syntax)
string objNullRef;      // nullable reference type

Now look at the compiler's reaction to the following snippet:

MyClass objNullRef;     // Nullable reference type
MyClass! objNotNullRef; // Non-nullable reference type

objNullRef = null;      // this is nullable, so no problem in assigning
objNotNullRef = null;   // Error, as objNotNullRef is non-nullable
objNotNullRef = objNullRef; // Error, as a nullable object cannot be assigned to a non-nullable reference

WriteLine(objNotNullRef.ToString()); // Not null, so safe to call ToString()
WriteLine(objNullRef.ToString());    // Warning: could be null

if (objNullRef != null) { WriteLine(objNullRef.ToString()); } // No error, as we have already checked it
WriteLine(objNullRef!.ToString()); // No error: '!' asserts that the value is not null


Local Methods/Functions


Local methods and functions are already achievable in the current version of C# (yes, we can achieve them using the Func and Action types; see Func and Action), but there are still some limitations to local methods; we cannot have the following features in them:

  • Generic
  • out parameters
  • Ref
  • params

Now with C# 7.0 we can overcome these problems. See the snippet below:

private static void Main(string[] args)
{
    int local_var = 100;
    int LocalFunction(int arg1)
    {
        return local_var * arg1;
    }
 
    Console.WriteLine(LocalFunction(100));
}

In the above snippet we have defined ‘LocalFunction’ as a local function inside the ‘Main’ function; here we can use out or ref parameters, generics, and params in it.

Readability Improvement with Literals

We often use literals in code; if they are too long, we might lose readability. To sort out such issues, C# 7.0 comes with an improvement in literals: C# now allows ‘_’ (underscore) in literals to make them easier to read; it has no effect on the value. See the snippet below:

var lit1 = 478_1254_3698_44;  // digit separators in a decimal literal
var lit2 = 0xAB_BC_47;        // ...and in a hexadecimal literal

//C# 7.0 also comes with binary literals for binary values
var binLit = 0b1100_1011_0100_1010_1001;

**Literals are nothing but constant (hard-coded) values, possibly with a predefined meaning. (Literals in C#)

Pattern matching

C# 7.0 allows the user to use patterns in an IS expression and with the SWITCH statement, so we can match a pattern against any data type; patterns can be constant patterns, type patterns, or var patterns. The following sample snippet will clear up the concepts. Let's start with the IS pattern:

public void Method1(object obj)
{
    //the following null is a constant pattern
    if (obj is null) return;

    //type pattern: string is the type we check against directly
    if (obj is string st)
    {
        //code goes here (st is in scope)
    }
    else
    {
        return;
    }
}

Switch patterns help a lot, as switch can now match on any data type, and ‘case’ clauses can have their own patterns, so it is a fairly flexible implementation.

See the snippet below (it builds on the record-type proposal syntax shown earlier):

class Calculate();
class Add(int a, int b, int c) : Calculate;
class Substract(int a, int b) : Calculate;
class Multiply(int a, int b, int c) : Calculate;
 
Calculate objCal = new Multiply(2, 3, 4);
switch (objCal)
{
    case Add(int a, int b, int c):
        //code goes here
        break;
    case Substract(int a, int b):
        //code goes here
        break;
    case Multiply(int a, int b, int c):
        //code goes here
        break;
    default:
        //default case
        break;
}

In the above sample, the switch matches the pattern of objCal and selects the ‘Multiply’ case.

‘return’ by Ref

Have you ever tried to return a variable from a method/function by ref? C# 7.0 allows you to do that. In fact, you can pass a variable by ref, return it by ref, and also store it as a ref. Isn't that amazing?

see below snippet

ref string getFromList(string strVal, string[] Values)
{
    for (int i = 0; i < Values.Length; i++)
    {
        if (strVal == Values[i])
            return ref Values[i]; //return the location as a ref, not the actual value
    }
    throw new ArgumentException("Value not found"); //a ref-returning method must return or throw on every path
}

string[] values = { "a", "b", "c", "d" };
ref string strSubstitute = ref getFromList("b", values);
strSubstitute = "K";      // replaces "b" with "K" in the array
Console.Write(values[1]); // it prints "K"

In the above sample we find and replace a string in the array by returning a ref from the method.

Throw Exception from Expression

You read that right: in C# 7.0 you can now throw an exception directly from an expression. See the snippet below:

public string getEmpInfo(string EmpName)
{
    string[] empArr = EmpName.Split(',');
    return (empArr.Length > 1) ? empArr[0] : throw new Exception("Emp Info Not exist"); //throws when the input has no comma-separated parts
}

In the above snippet we throw the exception directly from the return statement. Isn't that really good!

A Point to Note

All the features above were expected to be part of C# 7.0; Microsoft had released some of them with Visual Studio ‘15’ Preview 4 (the preview that became Visual Studio 2017).

Hope you enjoy these new features of C# 7.0.

Categories: ASP.NET

USAePay Token error: 23 Specified source key not found


Hello friends

If you are facing the above error message while testing a USAePay payment integration, then the following is the reason and the solution.

That error means you are not sending the request to the correct URL. If you are using a sandbox source key, make sure you are sending it to the sandbox URL, and if you are using a production source key, you have to send it to the production URL.

Hope it helps!


Categories: ASP.NET