Posted by Sandeep Chanda on September 23, 2016

Azure Logic Apps provides scalable workflows and business process automation backed by the power of the cloud. It has an intuitive designer interface that allows you to declaratively define a process automation workflow. The vast array of connectors available out of the box lets you create integrations between a suite of standard enterprise applications, such as Salesforce and Office 365, as well as social platforms such as Facebook, Twitter and Instagram. Today we will look at how Logic Apps can help you automate a process in your Salesforce instance.

Log in to your Azure subscription and create a new Logic App instance from the list of available resources. You need to specify a resource group while creating the instance. It will take a minute to deploy, and the designer will fire up once the deployment is successful. Once you are in the designer, the first step is to search for the Salesforce connector. You will see two trigger options:

  1. When an object is created
  2. When an object is modified

Select the first option. A dialog will appear letting you sign in to your Force.com account (production/staging) and then allow Logic Apps to access your account.

In the Object Type field, select Campaigns and leave the trigger interval at the default of 3 minutes. You can also expand the advanced options to provide additional filter conditions.

Next, provide a condition to check whether the budgeted cost is greater than the expected revenue. If the condition is met, you can add a subsequent step to create a Task in Salesforce for someone to act on the campaign and/or send an email.

The following figure illustrates creating a Task in Salesforce based on the created Campaign condition:

Save the workflow. Go to your Force.com account and create a campaign whose budgeted cost is higher than its expected revenue, and you will see a task created after the trigger's next run, within 3 minutes.


Posted by Gigi Sayfan on September 15, 2016

Agile practices help you develop software that meets user needs faster and more safely, and that responds quickly to changes in requirements, the environment or technological advances. But there is one "secret" practice that is not often mentioned in the context of Agile development. It is really an "un-practice": the idea is to flat out not do something. It could be a requirement (this will require negotiating with the customer), a refactoring or a bug fix. Just because something is on the backlog doesn't mean it always needs to be done. Extreme Programming calls this YAGNI (You Ain't Gonna Need It): you postpone doing things that are not needed immediately.

Minimalism

Being minimalist by design is often neglected. Everybody wants to eventually conquer the world, right? Another aspect of this mindset is over-engineering: a lot of effort is expended on building infrastructure, scalability and automation that isn't necessarily needed. Why is minimalism so important, and why is it so often ignored? It is important because Agile is all about delivering real value, really quickly. If you work on something that's not really needed, you have just wasted time and effort.

YAGNI

The reason it's often ignored or not practiced fully is that it's difficult to be disciplined. You start working on a cool feature or capability and want to keep evolving and improving it even if it's not providing immediate business value. On the infrastructure/implementation side, developers are often worried about technical debt. I'm often guilty of trying to get the project "right" from the beginning. If you want to really deliver the maximum business value in each iteration, you have to be very aware and explicit about what you plan and how you go about it. Just paying lip service to the idea is not good enough.


Posted by Sandeep Chanda on September 13, 2016

Support for Temporal Tables has now been extended to Azure SQL databases. With Temporal Tables, you can track changes made to your data and store the entire history, either for analysis or for making the information actionable. For example, compensation logic can be triggered based on historical changes resulting from an exception scenario. The important aspect of Temporal Tables is that they keep data stored along a timeline. Data in the context of time can be used to report facts that were valid for a specific period, which makes it very easy to gain insights from your data as it evolves over time.

Auditing is probably the most significant use case for Temporal Tables. Temporal Tables are created by enabling system versioning on new or existing tables. The following SQL script creates a system-versioned temporal table:
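A minimal sketch of such a script, assuming a simple Person schema (the PersonID, Name and Address columns here are illustrative):

CREATE TABLE dbo.Person
(
    PersonID INT NOT NULL PRIMARY KEY CLUSTERED,  -- illustrative key column
    Name NVARCHAR(100) NOT NULL,                  -- illustrative regular fields
    Address NVARCHAR(255) NULL,
    -- Period columns that record when each row version was valid
    ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.PersonHistory));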

Note that in addition to the regular fields for the Person entity, the table declares the period columns ValidFrom and ValidTo, which allow time-based tracking of any information updates on the Person table, and the WITH clause enables historical tracking of those changes in the PersonHistory table.

Create a Person record using the following statement:
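For example (the values are illustrative; ValidFrom and ValidTo are populated automatically by the system):

INSERT INTO dbo.Person (PersonID, Name, Address)
VALUES (1, 'John Doe', '1 Main Street, Redmond');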

Run multiple updates on the table to modify the address field. If you query the history table, you will see records for all the updates made:
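A sketch of what this could look like, continuing with the illustrative record above:

-- Change the address a couple of times
UPDATE dbo.Person SET Address = '2 Lake View Drive, Bellevue' WHERE PersonID = 1;
UPDATE dbo.Person SET Address = '3 Harbor Road, Seattle' WHERE PersonID = 1;

-- Each update moves the previous row version into the history table
SELECT PersonID, Address, ValidFrom, ValidTo
FROM dbo.PersonHistory
WHERE PersonID = 1
ORDER BY ValidFrom;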

You can enable system-versioning on existing tables by altering the schema to introduce the ValidFrom and ValidTo columns (see the sketch below). The history table becomes the focal point for your auditing needs without requiring you to write any programming logic for the purpose. It also becomes very easy to perform point-in-time analysis, such as tracking trends or comparing the data between two points in time of interest. Another popular use case that Temporal Tables enable is anomaly detection; for example, they can help you figure out a miss in your sales forecast. Temporal Tables are a powerful feature that lets you tailor your auditing to your needs without having to write any code!
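A rough sketch of both operations, using a hypothetical existing dbo.Employee table and an arbitrary point in time:

-- Add the period columns (defaults are required to populate existing rows) and enable versioning
ALTER TABLE dbo.Employee ADD
    ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL
        CONSTRAINT DF_Employee_ValidFrom DEFAULT SYSUTCDATETIME(),
    ValidTo DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL
        CONSTRAINT DF_Employee_ValidTo DEFAULT '9999-12-31 23:59:59.9999999',
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo);

ALTER TABLE dbo.Employee
    SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeHistory));

-- Point-in-time analysis: the Person table as it looked at a given instant
SELECT PersonID, Address
FROM dbo.Person FOR SYSTEM_TIME AS OF '2016-09-01T00:00:00';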


Posted by Gigi Sayfan on September 6, 2016

Shells are the interactive programs that run in text terminals and let you execute commands and run external programs. But shells can also be used non-interactively: you can write scripts and execute (or source) them like regular programs. System administrators live and die by the command line. Unix evolved as a text-based environment, and for a long time shells were a central part of the user experience; the graphical UI came significantly later. On Windows, the user experience was focused on the graphical UI from the beginning (it's not called Windows for nothing). Unix/Linux shells, such as Bash, are very good at manipulating text and chaining together (piping) the text output of small commands to the input of other commands.

On Windows, the original textual shell and batch language (command.com and later cmd.exe) were significantly inferior. But Microsoft saw the light and developed PowerShell, which is indeed very powerful and arguably exceeds the capabilities of the Unix shells. For a long time, the two camps were pretty separate. You could run some variation of the Unix shells on Windows via Cygwin or similar, but it was mostly used for special purposes. PowerShell was definitely a Windows-only affair. But things are changing.

Microsoft is blurring the boundaries. First, Microsoft made PowerShell available on Linux and Mac OS X, and then it brought Bash to Windows by way of Ubuntu on Windows. These are all exciting developments for shell nerds and will pave the way for stronger and more streamlined integration between *nix and Windows environments. Developers will be able to work in their favorite environment and will have fewer problems debugging production issues on various platforms.


Posted by Gigi Sayfan on August 31, 2016

Over the last three years, the average page weight has grown by at least 15 percent per year. This is due to several trends, such as increases in ad-related content, more images and more videos, as well as a lack of emphasis by designers and developers on reducing page weight. Google (along with other companies) has been on a mission to accelerate the Web across several fronts. One of the most interesting efforts is the Quick UDP Internet Connections (QUIC) project. The Web is built on top of HTTP/HTTPS, which typically uses TCP as a transfer protocol. TCP was recognized a long time ago as sub-optimal for the request-response model of the Web. An average Web page makes about 100 HTTP requests to tens of different domains to load all of its content, which causes significant latency issues due to TCP's design.

QUIC

QUIC is based on the connectionless UDP and doesn't suffer from the same design limitations as TCP. It requires building its own infrastructure for ordering and re-transmission of lost packets and for dealing with congestion, but it has a lot of interesting tricks. The ultimate goal is to improve TCP and incorporate ideas from QUIC into an improved TCP protocol. Since TCP evolution is very slow, working with QUIC allows faster iteration on novel ideas, such as innovative congestion management algorithms, without disrupting the larger Web.

Where to Get Started

There is currently QUIC support in Chromium and Opera on the client side, and Google's servers support QUIC on the server side. In addition, there are a few libraries, such as libquic, and Google has released a prototype server for people to play around with QUIC. One of the major concerns was that the UDP protocol could be blocked for many users, but a survey conducted by the Chromium team showed that this is not a common occurrence. If UDP is blocked, QUIC falls back to TCP.


Posted by Sandeep Chanda on August 30, 2016

Multiple teams within Microsoft are working aggressively to create open source frameworks and the Office team is not far behind. They have already created an open source toolkit called the Office UI Fabric that helps you easily create Office 365 Apps or Office Add-ins, integrating seamlessly to provide the unified Office experience. The fabric's components are designed for the modern responsive UI, allowing you to apply the Office Design Language to your own Web and mobile form factors.

One key aspect of the Fabric is its support for UI frameworks that you are already familiar with, such as Node, Angular and React. Office UI Fabric React provides React-based components that you can use to create an experience for your Office 365 app. The idea is to let developers leverage their favorite tools when creating Office apps.

Getting Started

Open Visual Studio (make sure you have Node.js v4.x.x and Node Tools for Visual Studio installed) and create a blank Node.js Web application.

After the basic template is created, right-click on your project and click Open Command Prompt to launch the Node command console. In the Node command console, first install the create-react-app scaffolding tool globally using the command:

npm install -g create-react-app

followed by:

create-react-app card-demo

If there are no errors in creating the app, navigate to the app folder using the command cd card-demo and then start the node app using the npm start command. The following figure illustrates that your React app is now running successfully:

Next, run the following command to install the Office UI Fabric components:

npm install office-ui-fabric-react --save

Now switch back to your Visual Studio solution and you will see a new folder created. Include the folder and its components in your project.

Open the App.js file and replace the contents with the following code:

import React, { Component } from 'react';
import logo from './logo.svg';
import './App.css';
// Document Card components from the Office UI Fabric React library
import {
    DocumentCard,
    DocumentCardPreview,
    DocumentCardTitle,
    DocumentCardActivity
} from 'office-ui-fabric-react/lib/DocumentCard';

class App extends Component {
  render() {
    return (
      <div>
        <DocumentCard>
          {/* Preview pane showing the imported React logo */}
          <DocumentCardPreview
            previewImages={[
              {
                previewImageSrc: logo,
                width: 318,
                height: 196,
                accentColor: '#ce4b1f'
              }
            ]}
          />
          <DocumentCardTitle title='React Inside a Card' />
          <DocumentCardActivity
            activity='Created Aug 27, 2016'
            people={[
              { name: 'John Doe' }
            ]}
          />
        </DocumentCard>
      </div>
    );
  }
}

export default App;

Notice that we have imported the Document Card UI components from the Office UI Fabric and replaced the contents of the render function with a sample Document Card that displays the React logo inside the card. Save the changes. Now open the index.html file and include a reference to the Office UI Fabric CSS:

<link rel="stylesheet" href="https://appsforoffice.microsoft.com/fabric/2.2.0/fabric.min.css">

Save changes. Switch to the Node console and run npm start. You will see the browser launched with the following card displayed.

While the toolkit is still in preview, it reflects how easy it is to create Office 365 apps using the language of your choice.


Posted by Gigi Sayfan on August 23, 2016

Building software used to be simple. You worked on one system with one executable. You compiled the executable and, if the compilation passed, you could run it and play with it. Not anymore, and trying to follow Agile principles can make it even more complex. Today, systems are made of many loosely-coupled programs and services. Some (maybe most) of these services are third-party. Both your code and the other services (in-house and third-party) depend on a large number of libraries, which require constant upgrades to stay up-to-date (security patches are almost always mandatory). In addition, many more systems these days are heavily data-driven, which means you don't deal with just code anymore; you have to make sure your persistent stores contain the data for decision making. Moreover, many systems are implemented using multiple programming languages, each with its own build tool-chain. This situation is becoming more and more common.

Maintaining Agility

Following Agile principles while allowing an individual developer to have a quick edit-build-test cycle requires significant effort. In most cases it is worth it. There are two representative cases, small and large:

  • In the small case, the organization is relatively small and young. The entire system (not including third-party services) can fit on a single machine (even if in a very degraded form).
  • In the large case, the organization is larger, it has been around longer and there are multiple independent systems developed by independent teams.

The large case can often be broken down into multiple small cases, so let's focus on the small case. The recommended solution is to invest the time and effort required to allow each developer to run everything on their own machine. That may mean supporting cross-platform development even though the production environment is very carefully specified. It might mean creating a lot of tooling, and test databases that can be quickly created and populated.

It is important to cleanly separate that functionality from production functionality. I call this capability "system in a box": you can run your entire system on a laptop. You may need to mock some services, but overall each developer should be able to test their code locally and be pretty confident it is solid before pushing it to other developers. This buys you a tremendous amount of confidence to move quickly and try things without worrying about breaking the build or development for other people.


Posted by Sandeep Chanda on August 19, 2016

Cloud Scale Intelligent Load Balancing for a Modern-day Microservices Application Architecture

Load balancers have played a key role in providing an enhanced performance experience to clients since pretty much the advent of client-server architecture. Most load balancers fall into two categories:

  1. Hardware-based load balancers working at OSI Layer 4
  2. Application-based load balancers (ALBs) working with HTTP services at OSI Layer 7

Application-based load balancers are more intelligent in that they support adaptive routing, using algorithms that examine a variety of parameters to route incoming requests to the most suitable instance. Over the last five years, application load balancers have inherited more responsibilities as service-oriented architectures and distributed systems gained prominence, a trend mostly attributed to their flexibility and their ability to rely on intelligent routing algorithms. Today, ALBs are taking on even more complex roles such as SSL acceleration, which saves costly processing time by taking the responsibility of encrypting and decrypting traffic away from the application server and immensely boosts server performance. That said, the demands on load balancers keep increasing in the modern world of API-driven development and microservices architectures.

With cloud scale becoming a reality, application servers are taking on more responsibilities and becoming increasingly self-contained. ALBs are now required to meet the demands of this new application development paradigm and of cloud scale infrastructure. The good news is that cloud providers are listening. Amazon has taken a step forward by announcing the launch of an ALB option for its Elastic Load Balancing service. The most important features it provides are support for container-based applications and content-based routing. The ALB has access to HTTP headers and can route to a specific set of API endpoints based on the content, which essentially means that you can route and load balance requests from different client devices to different sets of API endpoints, depending on the need for scale. With support for containers, the ALB can load balance requests to different service containers hosted on the same instance, and that is pretty cool! AWS has leaped into a new future for ALBs, and I am sure competitors will not be far behind in announcing their equivalents.


Posted by Gigi Sayfan on August 15, 2016

Design patterns are solutions or approaches to situations that appear often when developing software. They were introduced to the software engineering community at large by the seminal Gang of Four (GoF) book, "Design Patterns: Elements of Reusable Object-Oriented Software." The touted benefits of design patterns are that they allow best practices to proliferate by "codifying" them as named patterns, and that they enable efficient communication between engineers, who can refer to an entire design pattern, possibly consisting of many classes, by its name.

I must admit that I haven't seen those benefits in practice. There is a small subset of design patterns, such as Singleton or Factory, that are mentioned often, but those patterns are typically simple and self-explanatory, or can be explained in one sentence: Singleton: there can be only one; Factory: makes something. I have read the original GoF book, along with other books and articles that introduced other design patterns, and I either recognized patterns and themes that my colleagues and I had developed ourselves, or didn't really get them deeply. Much later, after solving a new problem, I would realize in retrospect that I had used a design pattern. But I have never looked at a problem and suddenly proclaimed: "Hey, let's use X design pattern here."

I'm not sure whether my opinion is just based on my experience, mostly working for fast-paced startups. It's possible that in larger enterprise shops design patterns are part of the culture and dedicated software architects converse with each other using design patterns. But I highly doubt it. The main reason is that there are a lot of nuances to real-world problems, and design patterns, by their nature, are general.

In particular, the more complicated design patterns require various adaptations, and often a combination of multiple modified design patterns, to construct real-world systems. So, what's the bottom line? I believe design patterns are useful for documenting the architecture of systems. They are also great for educational purposes because they have a well-defined format and explicitly explain what problem they solve. But don't expect them to guide you when faced with an actual problem. If you are stumped and start going over a catalog of design patterns to see if one of them suddenly jump-starts your creativity, you might be sorely disappointed.


Posted by Gigi Sayfan on August 11, 2016

The CAP theorem (also known as Brewer's theorem) of distributed systems says that you can have at most two out of these three properties:

  • Consistency
  • Availability
  • Partition tolerance

Consistency means that you have the same state across all the machines in your cluster. Availability means that all the data is always accessible, and partition tolerance means that the system can tolerate network partitions (some machines can't reach other machines in the cluster) without affecting the system's operation.

It's pretty clear that if there is a network partition and server A can't reach server B, then any update to A can't be communicated to B until the network is repaired. That means that when a network partition happens, the system can't remain both consistent and available. If you're willing to sacrifice availability, then you can just reject all reads and clients will never discover the inconsistency between A and B. So, you can have C/P: a system that remains consistent (from the user's point of view) and can tolerate network partitioning, but will sometimes be unavailable (in particular when there is a partition). This can be useful in certain situations, such as financial transactions, where it is better to be unavailable than to break consistency.

If you can somehow guarantee that there will be no network partitions by employing massive networking redundancy, then you can have C/A. Every change will propagate to all servers and the system will always be available. It is very difficult to build such systems in practice, but it's very easy to design systems that rely on uninterrupted connectivity.

Finally, if you're willing to sacrifice perfect consistency, you can build A/P systems — always available and can tolerate network partitioning, but the data on different servers in the cluster might not always agree. This configuration is very common for some aspects of Web-based systems. The idea is that small temporary inconsistencies are fine and conflicts can be resolved later. For example, if you search Google for the same term from two different machines, in two different geographic locations, it is possible that you'll receive different results. Actually, if you run the same search twice (and clear your cache) you might get different results. But, this is not a problem for Google — or for its users. Google doesn't guarantee that there is a "true" answer to a search. It is very likely that the top results will be identical because it takes a lot of effort to change the rank. All the servers (or caching systems) constantly play catch up with the latest and greatest.

The same concept applies to something like the comments on a Facebook post. If you comment, then one of your friends may see it immediately and another friend may see it a little while later. There is no real-time requirement.

In general, distributed systems that are designed for eventual consistency typically still provision enough capacity and redundancy to be pretty consistent under normal operating conditions, but accept that 1% or 0.1% of actions/messages might be delayed.

