Posted by Sandeep Chanda on August 19, 2016

Cloud Scale Intelligent Load Balancing for a Modern-day Microservices Application Architecture

Load balancers have played a key role in delivering better performance to clients since pretty much the advent of client-server architecture. Most load balancers fall into two categories:

  1. Hardware-based load balancers working at OSI Layer 4
  2. Application-based load balancers (ALBs) working with HTTP services at OSI Layer 7

Application-based load balancers are more intelligent in that they support adaptive routing: intelligent algorithms examine a variety of parameters to route each incoming request to the most suitable instance. Over the last five years, application load balancers have inherited more responsibilities as Service Oriented Architectures and distributed systems gained prominence, a trend mostly attributed to their flexibility and their reliance on intelligent algorithms. Today, ALBs are taking on even more complex roles, such as SSL acceleration, which saves costly processing time by taking the work of encrypting and decrypting traffic away from the application server and thereby boosts server performance immensely. That said, the demands on load balancers keep growing in the modern world of API-driven development and microservices architecture.

With cloud scale becoming a reality, application servers are increasingly expected to behave as self-contained units, and ALBs are now required to meet the demands of this new application development paradigm and of cloud scale infrastructure. The good news is that cloud providers are listening. Amazon has taken a step forward by announcing an ALB option for its Elastic Load Balancing service. Its most important features are support for container-based applications and content-based routing. The ALB has access to HTTP headers and can route to a specific set of API endpoints based on the content, which essentially means that you can route and load balance requests from different client devices to different sets of API endpoints, depending on the need for scale. With container support, the ALB can also load balance requests to different service containers hosted on the same instance, and that is pretty cool! AWS has leaped into a new future for ALBs and I am sure the competition will not be far behind in announcing their equivalents.
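
To make the content-based routing idea a bit more concrete, here is a rough sketch (my own illustration, not code from the announcement) that uses the boto3 Python SDK to attach a path-based rule to an ALB listener, so that requests under /api/mobile/ are forwarded to a dedicated target group. The ARNs are placeholders you would replace with values from your own environment.

import boto3

# Sketch: route mobile API traffic to its own target group on an ALB.
# The listener and target group ARNs below are placeholders.
elbv2 = boto3.client('elbv2', region_name='us-east-1')

elbv2.create_rule(
    ListenerArn='arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/xxxx/yyyy',
    Priority=10,
    Conditions=[{'Field': 'path-pattern', 'Values': ['/api/mobile/*']}],
    Actions=[{
        'Type': 'forward',
        'TargetGroupArn': 'arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/mobile-api/zzzz'
    }]
)

A second rule with a different path pattern (say, /api/web/*) pointed at another target group is all it takes to send different client devices to differently scaled sets of endpoints.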


Posted by Gigi Sayfan on August 15, 2016

Design patterns are solutions or approaches to situations that appear often when developing software. They were introduced to the software engineering community at large by the seminal Gang of Four (GoF) book, "Design Patterns: Elements of Reusable Object-Oriented Software." The touted benefits of design patterns are that they allow best practices to proliferate by "codifying" them as design patterns, and that they provide efficient communication between engineers, who can refer to an entire design pattern, possibly consisting of many classes, by its name.

I must admit that I haven't seen those benefits in practice. There is a small subset of design patterns, such as Singleton or Factory, that are mentioned often, but those design patterns are typically simple and self-explanatory, or can be explained in one sentence: Singleton means there can be only one; Factory makes something. I have read the original GoF book, and other books and articles that introduced other design patterns, and I either recognized design patterns and themes that my colleagues and I had developed ourselves, or didn't really get them deeply. Much later, after I solved a new problem, I would realize in retrospect that I had used a design pattern. But I have never looked at a problem and suddenly proclaimed: "Hey, let's use X design pattern here."
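
For what it's worth, here is a minimal Python sketch of those two one-sentence patterns (the Config and shape classes are hypothetical, purely for illustration):

# Singleton: there can be only one; repeated calls return the same instance.
class Config:
    _instance = None

    @classmethod
    def instance(cls):
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

assert Config.instance() is Config.instance()

# Factory: makes something; callers ask for an object by name instead of
# constructing a concrete class themselves.
class Circle: pass
class Square: pass

def make_shape(kind):
    return {'circle': Circle, 'square': Square}[kind]()

assert isinstance(make_shape('circle'), Circle)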

I'm not sure whether my opinion is just a product of my experience, which has mostly been at fast-paced startups. It's possible that in larger enterprise shops design patterns are part of the culture and dedicated software architects converse with each other in design patterns. But I highly doubt it. The main reason is that there are a lot of nuances to real-world problems, and design patterns, by their nature, are general.

In particular, the more complicated design patterns require various adaptations, and often a combination of several modified design patterns, to construct real-world systems. So, what's the bottom line? I believe design patterns are useful for documenting the architecture of systems. They are also great for educational purposes because they have a well-defined format and explicitly explain what problem they solve. But don't expect them to guide you when you face an actual problem. If you are stumped and start going over a catalog of design patterns to see if one of them suddenly jump-starts your creativity, you might be sorely disappointed.


Posted by Gigi Sayfan on August 11, 2016

The CAP theorem (also known as Brewer's theorem) of distributed systems says that you can have two out of these three:

  • Consistency
  • Availability
  • Partition tolerance

Consistency means that you have the same state across all the machines in your cluster. Availability means that all the data is always accessible, and partition tolerance means that the system can tolerate network partitions (some machines can't reach other machines in the cluster) without affecting the system's operation.

It's pretty clear that if there is a network partition and server A can't reach server B, then any update to A can't be communicated to B until the network is repaired. That means that when a network partition happens, the system can't remain consistent. If you're willing to sacrifice availability, then you can just reject all reads during the partition and clients will never discover the inconsistency between A and B. So, you can have C/P: a system that remains consistent (from the user's point of view) and can tolerate network partitioning, but will sometimes be unavailable (in particular, when there is a partition). This can be useful in certain situations, such as financial transactions, where it is better to be unavailable than to break consistency.

If you can somehow guarantee that there will be no network partitions by employing massive networking redundancy, then you can have C/A. Every change will propagate to all servers and the system will always be available. It is very difficult to build such systems in practice, but it's very easy to design systems that rely on uninterrupted connectivity.

Finally, if you're willing to sacrifice perfect consistency, you can build A/P systems — always available and can tolerate network partitioning, but the data on different servers in the cluster might not always agree. This configuration is very common for some aspects of Web-based systems. The idea is that small temporary inconsistencies are fine and conflicts can be resolved later. For example, if you search Google for the same term from two different machines, in two different geographic locations, it is possible that you'll receive different results. Actually, if you run the same search twice (and clear your cache) you might get different results. But, this is not a problem for Google — or for its users. Google doesn't guarantee that there is a "true" answer to a search. It is very likely that the top results will be identical because it takes a lot of effort to change the rank. All the servers (or caching systems) constantly play catch up with the latest and greatest.

The same concept applies to something like the comments on a Facebook post. If you comment, then one of your friends may see it immediately and another friend may see it a little while later. There is no real-time requirement.

In general, distributed systems that are designed for eventual consistency typically still provision enough capacity and redundancy to be pretty consistent under normal operating conditions, but accept that 1% or 0.1% of actions/messages might be delayed.
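
To make eventual consistency a little more concrete, here is a toy Python sketch (not modeled on any real datastore) of two replicas that both accept writes during a partition and reconcile afterwards with a last-write-wins rule:

class Replica:
    """A toy key-value replica that stores (value, version) pairs."""
    def __init__(self):
        self.data = {}

    def write(self, key, value, version):
        # Always available: accept the write locally, even during a partition.
        self.data[key] = (value, version)

    def read(self, key):
        return self.data.get(key, (None, 0))[0]

    def merge(self, other):
        # After the partition heals, the higher version wins for each key.
        for key, (value, version) in other.data.items():
            if version > self.data.get(key, (None, 0))[1]:
                self.data[key] = (value, version)

a, b = Replica(), Replica()
a.write('greeting', 'hello', version=1)   # one client writes to A during a partition
b.write('greeting', 'howdy', version=2)   # another client writes to B
print(a.read('greeting'), b.read('greeting'))   # hello howdy: temporarily inconsistent
a.merge(b); b.merge(a)                          # partition heals, replicas exchange state
print(a.read('greeting'), b.read('greeting'))   # howdy howdy: eventually consistent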


Posted by Sandeep Chanda on August 9, 2016

Azure Event Hubs is a great platform for building real-time, actionable telemetry to audit your services in the Microsoft Azure ecosystem. Support for streaming diagnostic logs into Event Hubs was recently released in preview, and it is available in various Azure services, including App Service Gateways, Network Security Groups, Logic Apps, Data Lake, Search and Key Vault. These are the only services where the support is currently available, and the footprint will only increase in the days to come.

Streaming diagnostic logs through Event Hubs lets services stream their usage telemetry and makes it possible to apply corrective measures or compensation in real time. A great use for this feature could be with Logic Apps, where a live audit trail from a Logic App workflow enables real-time analytics of the telemetry data and subsequent corrective or follow-up action. Streaming data to Event Hubs also allows you to perform live analysis of the data using Azure Stream Analytics and Power BI. In Stream Analytics you can directly create a query to fetch the service health hot-path data from the Event Hub and store the output in a Power BI dataset table. "Stream Analytics & Power BI" explains the steps to configure a real-time Stream Analytics dashboard using Power BI.

There are various means by which you can enable diagnostic streaming for your Logic App instance. One of the easier ways is to enable it through the Azure Portal. Let's assume that our Logic App is a simple request-response service as shown below:

If you now navigate to your Logic App settings, you will find the option under Diagnostic Settings after you switch the status to ON. The following picture illustrates this:

You can specify what to log. If you check "Export to Event Hubs", you will have to provide the service bus namespace. Another option is to execute the following command using the Azure Command Line Interface:

azure insights diagnostic set --resourceId <resourceId> --serviceBusRuleId <serviceBusRuleId> --enabled true

You can analyze the JSON output received by the Event Hub instance using the Properties attribute. The event details will be available under this property.
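
As a rough sketch of that last step, the snippet below parses one received message body and pulls out each record's properties. The field names (a top-level records array with a properties element per record) are assumptions based on the common Azure diagnostic log format, so verify them against the payload your service actually emits.

import json

def extract_properties(body):
    # 'body' is the raw bytes of one message read from the Event Hub instance.
    # The "records"/"properties" structure is an assumption; adjust as needed.
    payload = json.loads(body)
    for record in payload.get('records', []):
        yield record.get('resourceId'), record.get('properties', {})

sample = b'{"records": [{"resourceId": "/subscriptions/xxx/workflows/demo", "properties": {"status": "Succeeded"}}]}'
for resource_id, props in extract_properties(sample):
    print(resource_id, props)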


Posted by Sandeep Chanda on July 29, 2016

Coming from the likes of Facebook, GraphQL is a modern data service platform that combines both a query language and an execution engine for complex data models. It is driven by the requirements of views and is extremely intuitive for client-side developers to consume and program. The Apollo data stack builds on the power of GraphQL and provides a set of client and server side components that contain easy-to-use boilerplate templates for setting up a data service for modern UI driven apps. It also provides a set of great developer tools to easily debug what is going on inside your app.

While the stack is still in preview, it already gives a good glimpse of its capabilities, enough for developers to get excited and participate in its active development. It also integrates well with a number of client-side JS platforms, including React and React Native, Angular 2.0, and Meteor, to name a few. Setting up a server is pretty easy and the queries follow the GraphQL schema specification. To set up the Apollo server, first clone the Apollo starter kit repository using your node console:

Once you have cloned the repository, you can install the Apollo stack packages using the npm install command and then start the server using the npm start command.

By default, the server starts on port 8080. If you browse to http://localhost:8080/graphql, you will see the sample GraphQL schema outcome as shown below:

If you go to the data folder under the starter kit, you will see two files: mocks.js and schema.js. The schema.js file contains the default schema you just saw executed when you opened the server URL. The mocks.js file immediately returns mocks when the schema is queried, which helps testability. The "it works" message that you see in the browser is the mock data being delivered by the mocks.js file.

You can define your types in the schema.js file and then use the client to consume the data. A set of connectors is also supported for storing the data in actual data repositories like MongoDB, MySQL, and Postgres, to name a few. Have fun exploring Apollo!


Posted by Gigi Sayfan on July 28, 2016

Microsoft recently announced the release of .NET Core 1.0, a truly cross-platform runtime and development environment. It is a significant milestone in Microsoft's commitment to fully open and platform-agnostic computing. It supports Windows (duh!), Linux, Mac OSX, iOS and Android.

The transition over the years from a Windows-centric view to fully embracing other platforms, as well as the open source model (yes, .NET is now open source via the .NET Foundation), is very impressive. Another interesting aspect is that Microsoft made the announcement at the Red Hat Summit together with Red Hat, which will officially support .NET Core in its enterprise product.

In addition, Microsoft also announced ASP.NET Core 1.0, which unifies ASP.NET MVC and WebAPI. ASP.NET Core 1.0 can run on top of either .NET Core 1.0 or the full-fledged .NET framework. Exciting days are ahead for .NET developers whose skills, and the famous .NET productivity, suddenly become even more widely applicable through Microsoft's efforts.

Some of the distinctive features of .NET Core 1.0, in addition to its cross-platform abilities, are:

  • Flexible deployment (Can be included in your app or installed side-by-side user- or machine-wide)
  • Command-line tools (the 'dotnet' command; you don't need to rely on an IDE, although there is great IDE support too)
  • Compatibility (It works with the .NET framework, Xamarin and Mono)
  • Open Source (MIT or Apache 2 licenses, the documentation is open source too)
  • Official support from Microsoft


Posted by Sandeep Chanda on July 22, 2016

Distributed transaction management is a key architectural consideration that needs to be addressed whenever you are proposing a microservices deployment model to a customer. For developers from the monolithic relational database world, it is not an easy idea to grasp. In a monolithic application with a relational data store, transactions are ACID (atomic, consistent, isolated, and durable) in nature and follow a simple pessimistic control for data consistency. Distributed transactions are guided by a two-phase commit protocol: first all the changes are temporarily applied, and then they are either committed or rolled back, depending on whether every participating transaction succeeded or any of them failed. You could easily write queries that fetched data from different relational sources, and the distributed transaction coordinator model supported transaction control across disparate relational data sources.

In today's world, however, the focus is on modular, self-contained APIs. To make these services self-contained and aligned with the microservices deployment paradigm, the data is also self-contained within the module and loosely coupled from other APIs. Encapsulating the data allows the services to evolve independently, but it creates a major problem: keeping the data consistent across services.

A microservices architecture promotes availability over consistency, so leveraging a distributed transaction coordinator with a two-phase commit protocol is usually not an option. It gets even more complicated when these loosely coupled data repositories are not all relational stores but a polyglot mix of relational and non-relational data stores. An immediately consistent state is difficult to achieve. What you should look at instead is an architecture that promotes eventual consistency, such as CQRS and Event Sourcing. An event-driven architecture can support a BASE (basic availability, soft state, and eventual consistency) transaction model. This is a great technique for handling transactions that span services, and customers can usually be convinced to negotiate the contract towards a BASE model: transactional consistency across functional boundaries is mostly about frequent changes of state, and the consistency rules can be relaxed so that the final state eventually becomes visible to the consumers.
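
As a highly simplified sketch of that idea (an in-memory bus standing in for whatever broker you would actually use, with hypothetical order and inventory services), one service commits its own state and publishes an event, and the other catches up when the event is eventually delivered:

from collections import defaultdict

class EventBus:
    """A toy in-memory event bus standing in for a real broker."""
    def __init__(self):
        self.handlers = defaultdict(list)
        self.pending = []

    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)

    def publish(self, topic, event):
        self.pending.append((topic, event))   # delivery happens later, not immediately

    def deliver(self):
        while self.pending:
            topic, event = self.pending.pop(0)
            for handler in self.handlers[topic]:
                handler(event)

bus = EventBus()

# Order service: owns its own store, commits locally, then publishes an event.
orders = {}
def place_order(order_id, sku):
    orders[order_id] = {'sku': sku, 'status': 'placed'}
    bus.publish('order.placed', {'order_id': order_id, 'sku': sku})

# Inventory service: keeps a separate store, updated only when the event arrives.
stock = {'widget': 5}
bus.subscribe('order.placed', lambda e: stock.update({e['sku']: stock[e['sku']] - 1}))

place_order('o-1', 'widget')
print(stock['widget'])   # still 5: the two stores disagree for a while (soft state)
bus.deliver()
print(stock['widget'])   # 4: eventually consistent once the event is processed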


Posted by Gigi Sayfan on July 21, 2016

I have often encountered code littered with lots of nested if and else statements. I consider it a serious code smell. Deeply nested code, especially if the nesting is due to conditionals, is very difficult to follow, wrap your head around and test. It even makes it difficult for the compiler or runtime to optimize. In many cases it is possible to flatten deeply nested code by very simple means. Here are a few examples in Python, but the concepts translate to most languages:

if some_predicate() == True:
    result = True
else:
    result = False
This can be simply replaced with:

result = some_predicate()

Inside loops you can often break out/return or continue and keep the body of the loop shallow.

for i in get_numbers():
    if i <= 50:
        if i % 2 == 0:
            print('Even')
        else:
            print('Odd')

Note that there are two levels of nesting inside the loop. That can be replaced with:

for i in get_numbers():
    if i > 50:
        continue
    msg = 'Even' if i % 2 == 0 else 'Odd'
    print(msg)

I used Python's continue statement to eliminate nesting due to the 50 check and then a ternary expression to eliminate the nesting due to the even/odd check. The result is that the main logic of the loop is not nested two levels deep.

Another big offender is exception handling. Consider this code:

try:
    try:
        f = open(filename)
    except IOError:
        print("Can't open file")
        raise
    do_something_with_file(f)
except Exception as e:
    print('Something went wrong')

I've seen a lot of similar code. The exception handling here doesn't do much good. There is no real recovery or super-clear error message for the user. I always start with the simplest solution, which is to just let the exceptions propagate up to the next level and let that code handle them. In this case, the code becomes:

f = open(filename)
do_something_with_file(f) 

Specifically for files, you often want to make sure the file is closed after you're done. In idiomatic Python, the 'with' statement is preferable to try-finally:

with open(filename) as f:
    do_something_with_file(f) 

Finally, multiple return statements are better than a deep hierarchy of if-else statements and trying to keep track of a result variable that is returned at the end.
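
As a contrived example (a hypothetical describe function), compare a nested version that threads a result variable to the bottom with a flattened version that simply returns as soon as it knows the answer:

# Nested version: every case adds a level and the result travels to the end.
def describe_nested(n):
    if n is not None:
        if n < 0:
            result = 'negative'
        else:
            if n == 0:
                result = 'zero'
            else:
                result = 'positive'
    else:
        result = 'missing'
    return result

# Flattened version: each case returns as soon as it is known.
def describe(n):
    if n is None:
        return 'missing'
    if n < 0:
        return 'negative'
    if n == 0:
        return 'zero'
    return 'positive'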


Posted by Gigi Sayfan on July 13, 2016

The T-shaped employee is a management and hiring concept regarding the skills of an employee. The vertical bar of the T represents skills in which the employee has deep expertise, and the horizontal bar represents skills in which they have less expertise but are good enough to collaborate with other people. This concept applies in software development as well.

Successful developers have some expertise, even if it's just from working in a certain domain or being in charge of a particular aspect of the system (e.g. the database). But every successful developer also has to work with other people: use source control, test, read and write design documents, and so on. As far as career planning goes, you want to think seriously about your expertise. If you're not careful you may find yourself a master of skills that nobody requires anymore. For example, early in my career I worked with a great tool called PowerBuilder. It was state-of-the-art back then and it still exists to this day, but I would be very surprised if more than a handful of you have ever heard of it. If I had defined myself as a PowerBuilder expert back then, and pursued a career path that focused on PowerBuilder, I would have limited my options significantly (maybe all the way to unemployment). Instead, I chose to focus on more general skills such as object-oriented design and architecture.

Programming language choice is also very important. Visual Basic was the most popular programming language when I started my career. It did evolve into VB.NET eventually, but I haven't heard much about any excitement in this area for quite some time. The same goes for operating systems. Windows was all the rage back then. Today, it's mostly Linux on the backend (along with Windows, of course) and either Web development or mobile apps on the frontend.

Consider the future of your area of current expertise carefully and be ready to switch if necessary. It's fine to develop several separate fields of expertise. If you're dedicated and have a strong software engineering foundation, you can be an expert on any topic within several months or a couple of years. All it takes is to read a little and do a couple of non-trivial projects. I have often delved into a new domain and within three months I could answer, or find out the answer to, most questions people asked on various public forums. My advice is to invest in timeless skills and core computer science and software engineering first, and then follow your passion and specialize in something about which you care deeply.


Posted by Sandeep Chanda on July 8, 2016

A lot of the focus in C# 6.0 has been on enhancing developer productivity. Attention has been paid to areas where the same functionality can be attained by writing fewer lines of code, making the code cleaner and much easier to maintain. The team has also worked on improving the readability of string manipulation in code, and how strings are formatted in output, with the idea of interpolated strings. There are improvements in the area of exception handling as well. In this post we will look at expression-bodied members, interpolated strings, and the newly introduced exception filters.

First let us explore the idea of expression-bodied members. Expression-bodied members are first-class citizens in C# 6.0 and use a syntax that combines the current member syntax with the lambda expression syntax. The following code illustrates this.

Let's say we had a method that returns the sum of two numbers. A standard syntax could look like:

public int Add(int a, int b)
{
    return a + b;
} 

The same method can be written using an expression body, which is much simpler to represent:

public int Add(int a, int b) => a + b; 

It can be applied to asynchronous operations as well:

public async Task<string> Request() => await Response();

Expression body also makes it simpler to represent properties as illustrated in the following code example:

public DateTime CurrentDateTime => DateTime.Now;

This is the equivalent representation of:

public DateTime CurrentDateTime
{
    get
    {
        return DateTime.Now;
    }
}

Note that there are currently some limitations to leveraging the expression body syntax, especially for branching statements such as if-else and switch. Only the ternary conditional operator works for conditional assignment.

C# 6.0 introduces the idea of interpolated strings, which essentially lets developers use named variables in strings instead of indexes.

The following code illustrates the comparison:

string name = string.Format("name: {0} {1}", person.FirstName, person.LastName);

In the interpolated format you can use:

string interpolatedName = $"name: {person.FirstName} {person.LastName}";

This is a much cleaner way to represent formatted strings in code.

Finally, C# 6.0 also introduces exception filters, which allow you to filter exceptions based on a particular condition using the "when" keyword. This is not only a syntactical improvement for productivity and cleaner code; it also preserves the stack trace, thereby providing better traceability than writing an if condition inside the catch block to represent the equivalent filter.

try
{
    // try body
}
catch (HttpException ex) when (ex.ErrorCode == 500)
{
    // handle exception
}

