Posted by Gigi Sayfan on August 31, 2016

Over the last three years, the average page weight has grown by at least 15 percent per year. This is due to several trends, such as increases in ad-related content, more images and more video, as well as a lack of emphasis by designers and developers on reducing page weight. Google (along with other companies) has been on a mission to accelerate the Web across several fronts. One of the most interesting efforts is the Quick UDP Internet Connections (QUIC) project. The Web is built on top of HTTP/HTTPS, which typically uses TCP as its transport protocol. TCP was recognized long ago as sub-optimal for the request-response model of the Web. An average Web page makes about 100 HTTP requests to tens of different domains to load all of its content, which causes significant latency issues due to TCP's design.

QUIC

QUIC is based on connection-less UDP and doesn't suffer from the same design limitations as TCP. It has to build its own infrastructure for ordering and re-transmitting lost packets and for dealing with congestion, but it also has a lot of interesting tricks up its sleeve. The ultimate goal is to improve TCP by incorporating ideas from QUIC into a better version of the protocol. Since TCP evolves very slowly, working with QUIC allows faster iteration on novel ideas, such as innovative congestion management algorithms, without disrupting the larger Web.

Where to Get Started

There is currently QUIC support in Chromium and Opera on the client side, and Google's servers support QUIC on the server side. In addition, there are a few libraries, such as libquic, and Google has released a prototype server so that people can play around with QUIC. One of the major concerns was that UDP might be blocked for many users, but a survey conducted by the Chromium team showed that this is not a common occurrence. If UDP is blocked, QUIC falls back to TCP.
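
If you want to see QUIC in action, a minimal experiment (assuming a 2016-era Chromium build; the flag and the internals page may change between versions) is to launch the browser with QUIC explicitly enabled and visit a Google property:

chromium --enable-quic https://www.google.com

Then open chrome://net-internals/#quic in another tab to check whether any QUIC sessions were negotiated.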


Posted by Sandeep Chanda on August 30, 2016

Multiple teams within Microsoft are working aggressively to create open source frameworks, and the Office team is not far behind. It has already created an open source toolkit called Office UI Fabric that helps you easily create Office 365 apps or Office Add-ins that integrate seamlessly to provide the unified Office experience. The Fabric's components are designed for the modern responsive UI, allowing you to apply the Office Design Language to your own Web and mobile form factors.

One key aspect of the Fabric is its support for UI toolkits that you are already familiar with, such as Node, Angular, and React. Office UI Fabric React provides React-based components that you can use to create an experience for your Office 365 app. The idea is to let developers leverage their favorite tools when creating Office apps.

Getting Started

Open Visual Studio (make sure you have Node.js v4.x.x and Node Tools for Visual Studio installed) and create a blank Node.js Web application.

After the basic template is created, right-click on your project and click Open Command Prompt to launch the Node command console. In the Node command console, first install the create-react-app command-line tool globally using the command:

npm install -g create-react-app

followed by:

create-react-app card-demo

If there are no errors in creating the app, navigate to the app folder using the command cd card-demo and then start the Node app using the npm start command. The console output confirms that your React app is now running successfully.

Next, run the following command to install the Office UI Fabric components:

npm install office-ui-fabric-react --save

Now switch back to your Visual Studio solution and you will see a new folder created. Include the folder and its components in your project.

Open the App.js file and replace the contents with the following code:

import React, { Component } from 'react';
import logo from './logo.svg';
import './App.css';
// Office UI Fabric React components used to render a Document Card
import {
    DocumentCard,
    DocumentCardPreview,
    DocumentCardTitle,
    DocumentCardActivity
} from 'office-ui-fabric-react/lib/DocumentCard';

class App extends Component {
  render() {
    return (
        <div>
            <DocumentCard>
                {/* Preview pane showing the React logo imported above */}
                <DocumentCardPreview
                    previewImages={[
                        {
                            previewImageSrc: logo,
                            width: 318,
                            height: 196,
                            accentColor: '#ce4b1f'
                        }
                    ]}
                    />
                <DocumentCardTitle title='React Inside a Card'/>
                {/* Activity footer showing the creation date and author */}
                <DocumentCardActivity
                    activity='Created Aug 27, 2016'
                    people={
                        [
                            { name: 'John Doe' }
                        ]
                    }
                    />
            </DocumentCard>
        </div>
    );
  }
}

export default App;

Notice that we have imported the Document Card UI components from the Office UI Fabric and replaced the contents of the render function with a sample Document Card that displays the React logo inside the card. Save the changes. Now open the index.html file and include a reference to the Office UI Fabric CSS:

<link rel="stylesheet" href="https://appsforoffice.microsoft.com/fabric/2.2.0/fabric.min.css">

Save the changes. Switch to the Node console and run npm start. The browser launches with the Document Card displayed.

While the toolkit is still in preview, it reflects how easy it is to create Office 365 apps using the language of your choice.


Posted by Gigi Sayfan on August 23, 2016

Building software used to be simple. You worked on one system with one executable. You compiled the executable and, if the compilation passed, you could run it and play with it. Not anymore, and trying to follow Agile principles can make it even more complex. Today's systems are made of many loosely-coupled programs and services. Some (maybe most) of these services are third-party. Both your code and the other services (in-house and third-party) depend on a large number of libraries, which require constant upgrades to stay up-to-date (security patches are almost always mandatory). In addition, many systems these days are heavily data-driven, which means you don't deal with just code anymore; you have to make sure your persistent stores contain the data needed for decision making. On top of that, many systems are implemented in multiple programming languages, each with its own build tool-chain. This situation is becoming more and more common.

Maintaining Agility

To follow Agile principles and allow an individual developer to have a quick edit-build-test cycle requires significant effort. In most cases it is worth it. There are two representative cases, small and large:

  • In the small case, the organization is relatively small and young. The entire system (not including third-party services) can fit on a single machine (even if in a very degraded form).
  • In the large case, the organization is bigger, it has been around longer and there are multiple independent systems developed by independent teams.

The large case can often be broken down into multiple small cases, so let's focus on the small case. The recommended solution is to invest the time and effort required to allow each developer to run everything on their own machine. That may mean supporting cross-platform development even though the production environment is very carefully specified. It might mean building a lot of tooling and test databases that can be quickly created and populated.

It is important to cleanly separate that functionality from production functionality. I call this capability "system in a box": you can run your entire system on a laptop. You may need to mock some services, but overall each developer should be able to test their code locally and be pretty confident it is solid before pushing it to other developers. This buys you a tremendous amount of confidence to move quickly and try things without worrying about breaking the build or development for other people.
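
As a minimal sketch of the idea (the SYSTEM_IN_A_BOX flag, the payments service and the client module path are all hypothetical), a Node.js service can decide at startup whether to call the real dependency or an in-memory mock:

// config.js: pick real or mocked dependencies based on an environment flag.
// The flag name, the service and the module path below are illustrative assumptions.
const useLocal = process.env.SYSTEM_IN_A_BOX === '1';

const paymentsClient = useLocal
  ? { charge: async (amount) => ({ ok: true, amount, mocked: true }) } // in-memory mock, answers instantly
  : require('./clients/payments');                                     // real client that calls the deployed service

module.exports = { paymentsClient };

Keeping the switch in one configuration module keeps the mock wiring out of the production code paths.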


Posted by Sandeep Chanda on August 19, 2016

Cloud Scale Intelligent Load Balancing for a Modern-day Microservices Application Architecture

Load balancers have played a key role in providing an enhanced performance experience to clients since pretty much the advent of client-server architecture. Most load balancers fall into two categories:

  1. Hardware-based load balancers working at OSI Layer 4
  2. Application-based load balancers (ALB) working with HTTP services at OSI Layer 7

Application-based load balancers are more intelligent in that they can support adaptive routing: algorithms that examine a variety of parameters to route each incoming request to the most suitable instance. In the last five years, application load balancers have taken on more responsibilities as service-oriented architectures and distributed systems gained prominence, a trend mostly attributed to their flexibility and their ability to rely on intelligent algorithms. Today, ALBs are taking up even more complex roles, such as SSL acceleration, which saves costly processing time by offloading the encryption and decryption of traffic from the application server and thereby immensely boosts server performance. That said, the demands on load balancers keep growing, given the modern world of API-driven development and microservices architecture.

With cloud scale becoming a reality, application servers are taking on more responsibilities and becoming increasingly self-contained. ALBs are now required to meet the demands of this new application development paradigm and of cloud scale infrastructure. The good news is that cloud providers are listening. Amazon has taken a step forward by announcing an ALB option for its Elastic Load Balancing service. Its most important features are support for container-based applications and content-based routing. The ALB has access to HTTP headers and can route to a specific set of API endpoints based on the content, which essentially means that you can route and load balance requests from different client devices to different sets of API endpoints, depending on the need for scale. With container support, the ALB can load balance requests to different service containers hosted on the same instance, and that is pretty cool! AWS has leaped into a new future for ALBs and I am sure the competition will not be far behind in announcing their equivalents.
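
To make content-based routing concrete, here is a tiny, purely conceptual sketch (not AWS's actual API; the paths and target addresses are made up) of choosing a target group by inspecting the URL path of an incoming request:

// Conceptual content-based routing: each rule maps a path prefix to a pool of targets.
const routes = [
  { match: path => path.startsWith('/api/mobile/'), targets: ['10.0.1.10:8080', '10.0.1.11:8080'] },
  { match: path => path.startsWith('/api/web/'),    targets: ['10.0.2.10:8080'] }
];

function pickTarget(path) {
  const route = routes.find(r => r.match(path));
  if (!route) throw new Error('no matching routing rule');
  // A real balancer would track health and load; picking at random keeps the sketch short.
  return route.targets[Math.floor(Math.random() * route.targets.length)];
}

console.log(pickTarget('/api/mobile/orders')); // one of the mobile API targets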


Posted by Gigi Sayfan on August 15, 2016

Design patterns are solutions or approaches to situations that appear often when developing software. They were introduced to the software engineering community at large by the seminal Gang of Four (GoF) book, "Design Patterns: Elements of Reusable Object-Oriented Software." The touted benefits of design patterns are that they spread best practices by "codifying" them and that they enable efficient communication between engineers, who can refer to an entire design pattern, possibly consisting of many classes, by its name.

I must admit that I haven't seen those benefits in practice. A small subset of design patterns, such as Singleton or Factory, is mentioned often, but those patterns are typically simple and self-explanatory, or can be explained in one sentence: Singleton, there can be only one; Factory, makes something. I have read the original GoF book and other books and articles that introduced further design patterns, and I either recognized patterns and themes that my colleagues and I had developed ourselves, or didn't really get them deeply. Much later, after solving a new problem, I would realize in retrospect that I had used a design pattern. But I have never looked at a problem and suddenly proclaimed: "Hey, let's use X design pattern here."

I'm not sure if my opinion is just based on my experience, mostly working for fast-paced startups. It's possible that in larger enterprise shops design patterns are part of the culture and dedicated software architects converse with each other using design patterns. But I highly doubt it. The main reason is that there are a lot of nuances to real-world problems, and design patterns, by their nature, are general.

In particular, the more complicated design patterns require various adaptations, and often a combination of multiple modified design patterns, to construct real-world systems. So, what's the bottom line? I believe design patterns are useful for documenting the architecture of systems. They are also great for educational purposes because they have a well-defined format and explicitly explain what problem they solve. But don't expect them to guide you when you are faced with an actual problem. If you are stumped and start going over a catalog of design patterns to see if one of them suddenly jump-starts your creativity, you might be sorely disappointed.


Posted by Gigi Sayfan on August 11, 2016

The CAP theorem (also known as Brewer's theorem) of distributed systems says that you can have at most two out of these three:

  • Consistency
  • Availability
  • Partition tolerance

Consistency means that you have the same state across all the machines in your cluster. Availability means that all the data is always accessible, and partition tolerance means that the system can tolerate network partitions (some machines can't reach other machines in the cluster) without affecting its operation.

It's pretty clear that if there is a network partition and server A can't reach server B, then any update to A can't be communicated to B until the network is repaired. That means that when a network partition happens, the system can't remain consistent. If you're willing to sacrifice availability, then you can just reject all reads and clients will never discover the inconsistency between A and B. So, you can have C/P: a system that remains consistent (from the user's point of view) and can tolerate network partitioning, but will sometimes be unavailable (in particular when there is a partition). This can be useful in certain situations, such as financial transactions, where it is better to be unavailable than to break consistency.

If you can somehow guarantee that there will be no network partitions by employing massive networking redundancy, then you can have C/A. Every change will propagate to all servers and the system will always be available. It is very difficult to build such systems in practice, but it's very easy to design systems that rely on uninterrupted connectivity.

Finally, if you're willing to sacrifice perfect consistency, you can build A/P systems — always available and can tolerate network partitioning, but the data on different servers in the cluster might not always agree. This configuration is very common for some aspects of Web-based systems. The idea is that small temporary inconsistencies are fine and conflicts can be resolved later. For example, if you search Google for the same term from two different machines, in two different geographic locations, it is possible that you'll receive different results. Actually, if you run the same search twice (and clear your cache) you might get different results. But, this is not a problem for Google — or for its users. Google doesn't guarantee that there is a "true" answer to a search. It is very likely that the top results will be identical because it takes a lot of effort to change the rank. All the servers (or caching systems) constantly play catch up with the latest and greatest.

The same concept applies to something like the comments on a Facebook post. If you comment, then one of your friends may see it immediately and another friend may see it a little while later. There is no real-time requirement.

In general, distributed systems that are designed for eventual consistency typically still provision enough capacity and redundancy to be pretty consistent under normal operating conditions, but accept that 1% or 0.1% of actions/messages might be delayed.
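
To make the A/P trade-off concrete, here is a tiny, purely illustrative sketch (not any real database's replication protocol) of two replicas that accept writes locally and replicate to each other asynchronously; a read on the peer can briefly return stale data before the replicas converge:

// Each replica stays available for local reads and writes; replication happens later.
function makeReplica(name) {
  return {
    name,
    data: {},
    peer: null,
    write(key, value) {
      this.data[key] = value;                                   // always accepted locally
      setTimeout(() => { this.peer.data[key] = value; }, 100);  // asynchronous replication
    },
    read(key) { return this.data[key]; }
  };
}

const a = makeReplica('A');
const b = makeReplica('B');
a.peer = b; b.peer = a;

a.write('comment', 'hello');
console.log(b.read('comment'));                         // likely undefined: B is briefly stale
setTimeout(() => console.log(b.read('comment')), 200);  // 'hello' once the replicas converge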


Posted by Sandeep Chanda on August 9, 2016

Azure Event Hubs is a great platform for real-time, actionable telemetry operations that audit your services in the Microsoft Azure ecosystem. Support for streaming diagnostic logs into Event Hubs was recently released in preview and is available for various Azure services, including App Service Gateways, Network Security Groups, Logic Apps, Data Lake, Search and Key Vault. These are currently the only services where the support is available, and the footprint will only increase in the days to come.

Streaming the diagnostic logs through Event Hubs lets services broadcast their usage and makes it possible to apply corrective measures or compensation in real time. A great use for this feature could be with Logic Apps, where a live audit trail from a Logic App workflow enables real-time analytics of the telemetry data and subsequent corrective or follow-up action. Streaming data to Event Hubs also allows you to perform live analysis of the data using Azure Stream Analytics and Power BI. In Stream Analytics you can directly create a query to fetch the service health hot-path data from the Event Hub and store the output in a Power BI dataset table. "Stream Analytics & Power BI" explains the steps to configure a real-time Stream Analytics dashboard using Power BI.

There are various means by which you can enable diagnostic streaming for your Logic App instance. One of the easier ways is to enable it through the Azure Portal. Let's assume that our Logic App is a simple request-response service.

If you now navigate to your Logic App's settings, you will find the option under Diagnostic Settings after you set the status to ON.

You can specify what to log. If you check "Export to Event Hubs", you will have to provide the Service Bus namespace. Alternatively, you can enable streaming by executing the following command using the Azure Command Line Interface:

azure insights diagnostic set --resourceId <resourceId> --serviceBusRuleId <serviceBusRuleId> --enabled true

You can analyze the JSON output received by the Event Hub instance using the Properties attribute. The event details will be available under this property.
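
As a rough illustration (the function and field names below are assumptions based on the general shape of Azure diagnostic log records, not an exact schema), pulling those details out of a received event could look something like this:

// Sketch only: inspect a diagnostic log record received from the Event Hub.
function logDiagnosticRecords(eventBodyAsString) {
  const body = JSON.parse(eventBodyAsString);       // body of one event pulled from the Event Hub
  for (const record of body.records || []) {
    console.log(record.operationName, record.level);
    console.log(record.properties);                 // the event details live under this property
  }
}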


Posted by Sandeep Chanda on July 29, 2016

Coming from the likes of Facebook, GraphQL is a modern data service platform that combines both a query language and an execution engine for complex data models. It is driven by the requirements of views and is extremely intuitive for client-side developers to consume and program. The Apollo data stack builds on the power of GraphQL and provides a set of client and server side components that contain easy-to-use boilerplate templates for setting up a data service for modern UI driven apps. It also provides a set of great developer tools to easily debug what is going on inside your app.

While the stack is still in preview, it gives a good enough glimpse of its capabilities for developers to get excited and participate in active development. It also integrates well with a bunch of client-side JS platforms, including React and React Native, Angular 2.0, and Meteor, to name a few. Setting up a server is pretty easy and the queries follow the GraphQL schema specification. To set up the Apollo server, first clone the Apollo starter kit repository (at the time of writing it lives under the apollostack organization on GitHub; adjust the URL if it has moved) from your node console:
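
git clone https://github.com/apollostack/apollo-starter-kit.git
cd apollo-starter-kit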

Once you have cloned the repository, you can install the Apollo stack packages using the npm install command and then start the server using the npm start command.

By default, the server starts on port 8080. If you browse to http://localhost:8080/graphql, you will see the outcome of querying the sample GraphQL schema.

If you go to the data folder under the starter kit, you will see two files: mocks.js and schema.js. The schema.js file contains the default schema you just saw executed when you opened the server URL. The mocks.js file immediately returns mock data when the schema is queried, for testability. The "it works" message that you see in the browser is the mock data being delivered by mocks.js.

You can define your own types in the schema.js file and then use the client to consume the data; a rough sketch of what that might look like follows below. A set of connectors is also supported for storing the data in actual data repositories such as MongoDB, MySQL, and Postgres, to name a few. Have fun exploring Apollo!
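
Here is a minimal sketch of defining your own type (the Author type, the resolver, and the use of graphql-tools' makeExecutableSchema are illustrative assumptions, not the starter kit's exact contents):

import { makeExecutableSchema } from 'graphql-tools';

// Illustrative schema: a single query returning a hypothetical Author type.
const typeDefs = `
  type Author {
    id: Int!
    name: String
  }
  type Query {
    author(id: Int!): Author
  }
`;

const resolvers = {
  Query: {
    // Hard-coded data for the sketch; a real app would call a connector or database here.
    author: (root, { id }) => ({ id, name: 'Jane Doe' })
  }
};

export const schema = makeExecutableSchema({ typeDefs, resolvers });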


Posted by Gigi Sayfan on July 28, 2016

Microsoft recently announced the release of .NET Core 1.0, a truly cross-platform runtime and development environment. It is a significant milestone in Microsoft's commitment to fully open and platform-agnostic computing. It supports Windows (duh!), Linux, Mac OSX, iOS and Android.

The transition over the years from a Windows-centric view to fully embracing other platforms, as well as the open source model (yes, .NET is now open source via the .NET Foundation), is very impressive. Another interesting aspect is that Microsoft made the announcement at the Red Hat Summit together with Red Hat, which will officially support .NET Core in its enterprise product.

In addition, Microsoft also announced ASP.NET Core 1.0, which unifies ASP.NET MVC and WebAPI. ASP.NET Core 1.0 can run on top of either .NET Core 1.0 or the full-fledged .NET framework. Exciting days are ahead for .NET developers whose skills, and the famous .NET productivity, suddenly become even more widely applicable through Microsoft's efforts.

Some of the distinctive features of .NET Core 1.0, in addition to its cross-platform abilities, are:

  • Flexible deployment (Can be included in your app or installed side-by-side user- or machine-wide)
  • Command-line tools (the 'dotnet' command; you don't need to rely on an IDE, although there is great IDE support too; see the commands sketched after this list)
  • Compatibility (It works with the .NET framework, Xamarin and Mono)
  • Open Source (MIT or Apache 2 licenses, the documentation is open source too)
  • Official support from Microsoft
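
For example, with the 1.0 tooling installed, scaffolding and running a new console application takes just a few commands (defaults and output may differ slightly between tool versions):

dotnet new        # scaffold a new console application in the current directory
dotnet restore    # restore NuGet dependencies
dotnet run        # build and run the app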


Posted by Sandeep Chanda on July 22, 2016

Distributed transaction management is a key architectural consideration that needs to be addressed whenever you propose a microservices deployment model to a customer. For developers from the monolithic relational database world, it is not an easy idea to grasp. In a monolithic application with a relational data store, transactions are ACID (atomic, consistent, isolated, and durable) in nature and follow simple pessimistic control for data consistency. Distributed transactions are guided by a two-phase commit protocol: all the changes are first applied tentatively and then committed, or rolled back, depending on whether all the participants succeeded or any of them reported an error. You could easily write queries that fetched data from different relational sources, and the distributed transaction coordinator model supported transaction control across disparate relational data sources.

In today's world, however, the focus is on modular, self-contained APIs. To make these services self-contained and aligned with the microservices deployment paradigm, the data is also self-contained within the module, loosely coupled from other APIs. Encapsulating the data allows the services to grow independently, but keeping data consistent across the services becomes a major problem.

A microservices architecture promotes availability over consistency, hence leveraging a distributed transaction coordinator with a two-phase commit protocol is usually not an option. It gets even more complicated because these loosely-coupled data repositories might not all be relational stores; they could be a polyglot mix of relational and non-relational data stores. Achieving an immediately consistent state is therefore difficult. What you should look to instead is an architecture that promotes eventual consistency, such as CQRS and Event Sourcing. An event-driven architecture can support a BASE (basic availability, soft-state, and eventual consistency) transaction model. This is a great technique for handling transactions across services, and customers can usually be convinced to negotiate the contract towards a BASE model, since transactional consistency across functional boundaries is mostly about frequently changing state, and the consistency rules can be relaxed to let the final state become eventually visible to consumers.
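
As a highly simplified sketch of the event-driven approach (the event names and the account example are made up for illustration), a service can record what happened as an append-only log of events and let consumers derive their own, eventually consistent view:

// An append-only event log and a view that is rebuilt by replaying it.
// A lagging consumer simply sees an older, but still valid, state.
const eventLog = [];

function recordEvent(type, payload) {
  eventLog.push({ type, payload, at: Date.now() });
}

function currentBalance(accountId) {
  return eventLog
    .filter(e => e.payload.accountId === accountId)
    .reduce((sum, e) => sum + (e.type === 'Credited' ? e.payload.amount : -e.payload.amount), 0);
}

recordEvent('Credited', { accountId: 'A1', amount: 100 });
recordEvent('Debited',  { accountId: 'A1', amount: 30 });
console.log(currentBalance('A1')); // 70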

