Posted by Gigi Sayfan on October 26, 2016

Virtual reality is here and many companies are working on VR devices, SDKs, content and frameworks. But, true presence and immersion require high-end equipment at the moment. It will be several more years until truly immersive VR is affordable and ubiquitous. In the meantime, developers must work within today's limitations and constraints. One of the most interesting initiatives is WebVR. It seems to have a lot of support and can be used today to display VR content in the browser.

The main draw of WebVR is that it lets gazillions of Web developers take advantage of their experience, skills and tools to develop VR applications and content that will be broadly available. Facebook recently announced plans for React VR and the Carmel VR browser. The A-Frame project is built on top of three.js and lets you render VR content today. The major browser vendors are all aware of the promise of VR and are taking steps to enable it in their browsers.
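
To get a feel for how approachable this is, here is a minimal A-Frame scene (a sketch; the version-pinned CDN URL is illustrative) that renders a box under a sky in any WebVR-enabled browser:

<!DOCTYPE html>
<html>
  <head>
    <!-- The A-Frame library from the project CDN; the version is illustrative -->
    <script src="https://aframe.io/releases/0.3.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <!-- A colored box in front of the camera, under a light-gray sky -->
      <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>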

It is rare to see the whole industry (or even different industries) aligned and collaborating early on open standards and, in general, taking the right steps to ensure this innovation reaches each and every person sooner rather than later. I'm very excited to see these developments. The Matrix may be just 10 years away. You may be overjoyed or terrified, but don't be surprised. As for alternatives to WebVR, many developers use the Unity game engine, which has good integration with VR SDKs and devices, but Unity skills and expertise are not as ubiquitous among developers as Web development skills. I highly recommend that you check out these technologies and dip your toes in virtual reality.

Posted by Gigi Sayfan on October 18, 2016

Containers are making inroads into the mainstream. Many large organizations and startups deploy their software using containers, and many others are experimenting with the technology or planning to adopt it. When talking about containers, Docker is the 800-pound gorilla. The recent awareness and popularity of containers for deploying complicated systems (often using a micro-services architecture) can be credited in large part to Docker. But, Docker is not the only game in town.

There have been a lot of complaints in the community about several aspects of Docker. In particular, it has had serious security issues. Others are unhappy with the kitchen-sink approach Docker is taking and its tendency to push out half-baked features. CoreOS, one of the harshest critics, sees containers as basic low-level infrastructure components. It developed a standard for application containers called appc and an implementation called rkt (pronounced "rocket"). Several large organizations and open-source projects support this effort. In particular, the Kubernetes juggernaut, which competes with Docker Swarm in the container orchestration arena, supports rkt containers. On the technical side, appc and rkt have some benefits, such as simplicity, performance and a clear specification.

It will be very interesting to see how the landscape evolves. Are developers going to stick with the incumbent, yet quickly innovating, Docker, or are they going to flock to the supposedly superior newcomer? Are appc and rkt compelling enough for the mainstream developer to switch? I personally intend to dive into appc and rkt and find out for myself. The whole container scene is too young and fast-moving to stick with Docker just because it was first.

Posted by Sandeep Chanda on September 29, 2016

In the previous post, I talked about how to leverage Logic Apps to create scalable workflows and business processes, and demonstrated a process automation example with Salesforce. In this post, we will look at leveraging Logic Apps in a hybrid scenario, where the business process needs to integrate systems in the cloud and on-premises.

Hybrid scenarios are becoming commonplace as more enterprise customers take their first steps toward embracing the cloud by moving part of their portfolio to cloud platforms such as Microsoft Azure. Logic Apps can play a significant role in automating business processes that span both cloud-based and on-premises systems.

Azure Service Bus

Connecting an on-premises WCF service to a cloud-based platform such as Salesforce is possible using the Azure Service Bus relay feature. Service Bus relay allows you to securely expose a WCF service endpoint in the cloud without having to open a firewall connection or make any intrusive changes to the enterprise security infrastructure.

The first step in integrating the on-premises service is to create a Service Bus namespace (you can create one from the Enterprise Integration menu under create new resource). Once the namespace is created, go to the shared access policies and copy the connection string and the primary key.

Modify your existing WCF service solution by installing the WindowsAzure.ServiceBus NuGet package (for example, with Install-Package WindowsAzure.ServiceBus in the Package Manager Console). This package provides the relay bindings used below.

In your service host, add code to expose the Service Bus endpoint.
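
A minimal sketch, assuming a hypothetical OrderService/IOrderService REST-style contract and the namespace created above (the key name and key are placeholders):

using System;
using System.ServiceModel;
using System.ServiceModel.Description;
using System.ServiceModel.Web;
using Microsoft.ServiceBus;

[ServiceContract]
public interface IOrderService
{
    [OperationContract, WebGet(UriTemplate = "")]
    string GetOrders();
}

public class OrderService : IOrderService
{
    public string GetOrders() { return "orders"; }
}

class Program
{
    static void Main()
    {
        var host = new ServiceHost(typeof(OrderService));

        // Expose the service at https://yournamespace.servicebus.windows.net/orders
        var endpoint = host.AddServiceEndpoint(
            typeof(IOrderService),
            new WebHttpRelayBinding(),
            ServiceBusEnvironment.CreateServiceUri("https", "yournamespace", "orders"));

        // REST-style dispatch, plus the shared access key copied from the portal
        endpoint.EndpointBehaviors.Add(new WebHttpBehavior());
        endpoint.EndpointBehaviors.Add(new TransportClientEndpointBehavior(
            TokenProvider.CreateSharedAccessSignatureTokenProvider(
                "RootManageSharedAccessKey", "<your primary key>")));

        host.Open();
        Console.WriteLine("Listening on the Service Bus relay endpoint. Press ENTER to exit.");
        Console.ReadLine();
        host.Close();
    }
}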

Modify your WCF service configuration to reflect the WebHttpRelayBinding characteristics.
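
If you configure the service declaratively instead, the relay-specific pieces of web.config look something like this (again a sketch; the webHttpRelayBinding and transportClientEndpointBehavior extensions are registered by the NuGet package):

<system.serviceModel>
  <services>
    <service name="OrderService">
      <endpoint address="https://yournamespace.servicebus.windows.net/orders"
                binding="webHttpRelayBinding"
                behaviorConfiguration="sbCredentials"
                contract="IOrderService" />
    </service>
  </services>
  <behaviors>
    <endpointBehaviors>
      <behavior name="sbCredentials">
        <webHttp />
        <transportClientEndpointBehavior>
          <tokenProvider>
            <sharedAccessSignature keyName="RootManageSharedAccessKey"
                                   key="your-primary-key" />
          </tokenProvider>
        </transportClientEndpointBehavior>
      </behavior>
    </endpointBehaviors>
  </behaviors>
</system.serviceModel>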

Create the Client

Now that you have configured your WCF service to expose the Service Bus endpoint, you can go ahead and create the client. Since this service needs to be called by Logic Apps, and there is no direct mechanism for Logic Apps to call a SOAP service, you will have to create an Azure Function App that can call the WCF service whenever the Logic App triggers the call. To create the Azure Function App, navigate to add new resource in the Azure management portal and search for Function Apps. Provide a name and a resource group (tip: make sure the Logic App and Function App instances use the same resource group).

Once the Function App is created, you can add your client code in the code window. Make sure the necessary NuGet package assemblies are referenced so the function can call the Service Bus endpoint.
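
A minimal sketch of the function body in C# script, assuming the hypothetical relay endpoint above; the SAS token in the ServiceBusAuthorization header is what authorizes the call through the relay:

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.ServiceBus; // from the WindowsAzure.ServiceBus package

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    // A short-lived token for the relay, built from the key copied from the portal
    var token = SharedAccessSignatureTokenProvider.GetSharedAccessSignature(
        "RootManageSharedAccessKey", "<your primary key>",
        "https://yournamespace.servicebus.windows.net/orders",
        TimeSpan.FromMinutes(5));

    using (var client = new HttpClient())
    {
        client.DefaultRequestHeaders.Add("ServiceBusAuthorization", token);

        // Call the on-premises WCF service through the relay
        var response = await client.GetAsync("https://yournamespace.servicebus.windows.net/orders");
        log.Info($"Relay call returned {response.StatusCode}");

        return req.CreateResponse(HttpStatusCode.OK, await response.Content.ReadAsStringAsync());
    }
}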

The final step in the process is to copy the Function URL and put it in the HTTP connector in your Logic Apps workflow created in the previous post. Add this step under the "If Yes" branch that fires whenever an object is modified in Salesforce. You can then specify the expected parameters, completing the path from a Salesforce trigger to an on-premises WCF service!

Posted by Gigi Sayfan on September 28, 2016

The Go language is six years old and has gotten a lot of traction. With the recent release of Go 1.7.1, the entire ecosystem looks very healthy. The continued focus on performance, while maintaining the original philosophy of simplicity, is encouraging. Go adoption is on the rise, and Go is ideally suited for building micro-services that run on multi-core machines (often in containers). Its strong concurrency support lets programs take advantage of multiple cores almost transparently. What are the indicators of Go's success? Go is being used for many innovative distributed-system projects such as etcd, Docker, Kubernetes, NSQ and InfluxDB.

Of course, Go is used heavily inside Google. Python developers, in particular, flock to Go when they have to deal with performance issues. Another encouraging sign is the Go mobile project. The premise is that you can write both the backend and the mobile frontend for Android and iOS in Go. This is similar to Node.js, where you use the same language to write the backend and the frontend.

Going Forward

One other important factor is the improvement in Go's development environments. I'm a big fan of debuggers, and while Go advocates often say that Go is so simple you can just do Printf debugging, I prefer a real debugger for troubleshooting complex systems. The Delve debugger provides a solid debugging experience and is starting to be integrated into various Go IDEs and editors. If you are starting a new project, considering migrating incrementally to micro-services or just looking to expand your horizons, then Go should be on your radar as a young, yet well-supported, language with strong momentum.

Posted by Gigi Sayfan on September 26, 2016

The power of mobile devices and the available bandwidth keep increasing, and content producers are very aware of the importance of the mobile platform. They generate content that's designed specifically for mobile consumption. But, the user experience is still often lacking. There are two related problems here. First, the weight of the average Web page keeps increasing because it is encumbered with a lot of auxiliary stuff: ads, tracking, unnecessary animation, videos and heavy images. Second, content producers and developers often aim for, and test on, the latest and greatest devices in optimal networking environments. The implicit assumption is that technology moves so fast that very soon everybody will have a high-end device and a fast network. That leaves a lot of people with low-end devices and/or slow connections with a very poor experience.

One project that attempts to improve the situation is the AMP Project. It is built on existing Web technologies and promotes a restricted subset of HTML, CSS and JavaScript, in addition to several custom HTML components that can improve performance. AMP accelerates the mobile experience by using the following practices (a minimal example follows the list):

  • Allow only asynchronous scripts
  • Size all resources statically
  • Don’t let extension mechanisms block rendering
  • Keep all third-party JavaScript out of the critical path
  • All CSS must be inline and size-bound
  • Font triggering must be efficient
  • Minimize style recalculations
  • Only run GPU-accelerated animations
  • Prioritize resource loading
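
Concretely, an AMP page is ordinary HTML plus the AMP runtime and a few required conventions. A minimal sketch (the mandatory amp-boilerplate style block is abbreviated here):

<!doctype html>
<html amp>
  <head>
    <meta charset="utf-8">
    <title>Hello AMP</title>
    <link rel="canonical" href="regular-page.html">
    <meta name="viewport" content="width=device-width,minimum-scale=1">
    <!-- All author CSS is inline and size-bound -->
    <style amp-custom>h1 { color: #333; }</style>
    <style amp-boilerplate>/* required AMP boilerplate, abbreviated */</style>
    <!-- Only asynchronous scripts are allowed -->
    <script async src="https://cdn.ampproject.org/v0.js"></script>
  </head>
  <body>
    <h1>Hello AMP</h1>
    <!-- amp-img replaces img so every resource is sized statically -->
    <amp-img src="photo.jpg" width="600" height="400" layout="responsive"></amp-img>
  </body>
</html>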

While you can do all that without AMP, it takes a lot of effort and discipline. With AMP you get it all out of the box, enforced by the AMP validator. Keep an eye out for AMP. It may be a big deal very soon.

Posted by Sandeep Chanda on September 23, 2016

Azure Logic Apps provides scalable workflows and business process automation backed by the power of the cloud. It has an intuitive designer interface that allows you to declaratively define a process automation workflow. The vast array of connectors available out of the box lets you create integrations between standard enterprise applications such as Salesforce and Office 365 — as well as social platforms such as Facebook, Twitter and Instagram. Today we will look at how Logic Apps can help you automate a process in your Salesforce cloud platform instance.

Log in to your Azure subscription and create a new Logic App instance from the list of available resources. You need to specify a resource group while creating the instance. It will take a minute to deploy, and the designer will fire up once the instance deployment is successful. Once you are in the designer, the first step is to search for the Salesforce connector. You will see two trigger options:

  1. When an object is created
  2. When an object is modified

Select the first option. A dialog will appear letting you sign in to your Force.com account (production/staging) and then allow Logic Apps to access your account.

In the Object Type, select Campaigns and leave the trigger interval at the default of 3 minutes. You can also expand the advanced options to provide additional filter conditions.

Next, provide a condition to check whether the budgeted cost is greater than the expected revenue. If the condition is met, you can add a subsequent step to create a Task in Salesforce for someone to act on the campaign and/or send an email.
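
In the condition's advanced (code) view, the check looks something like the following (assuming the Campaign object's BudgetedCost and ExpectedRevenue fields):

@greater(triggerBody()?['BudgetedCost'], triggerBody()?['ExpectedRevenue'])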

Save the workflow. Go to your Force.com account and create a campaign whose budgeted cost is higher than its expected revenue; after the first trigger run, within 3 minutes, you will see the task created.

Posted by Sandeep Chanda on September 13, 2016

Support for Temporal Tables has now been extended to SQL Azure databases. With Temporal Tables, you can track changes made to your data and store the entire history, either for analysis or for making the information actionable. For example, compensation logic can be triggered based on historical changes resulting from an exception scenario. The important aspect of Temporal Tables is that they keep data over a timeline. Data in the context of time can be leveraged to report facts valid for a specific period. It then becomes very easy to gain insights from data as it evolves over a specified period.

Auditing is probably the most significant use case for Temporal Tables, which are created by enabling system versioning on new or existing tables. A SQL script along the following lines creates a system-versioned temporal table (the Person columns are illustrative):
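
CREATE TABLE dbo.Person
(
    PersonID   int NOT NULL PRIMARY KEY CLUSTERED,
    FirstName  nvarchar(100) NOT NULL,
    LastName   nvarchar(100) NOT NULL,
    Address    nvarchar(255) NULL,
    -- Period columns maintained automatically by the engine
    ValidFrom  datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo    datetime2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.PersonHistory));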

Note that in addition to the regular fields for the Person entity, the script declares the ValidFrom and ValidTo period columns, which enable time-based tracking of any information updates on the Person table, while the WITH clause enables historical tracking of changes in the PersonHistory table.

Create a Person record using a statement like the following:
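
-- ValidFrom and ValidTo are populated automatically and cannot be set explicitly
INSERT INTO dbo.Person (PersonID, FirstName, LastName, Address)
VALUES (1, 'John', 'Doe', '1 Main Street');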

Run multiple updates on the table to modify the address field. If you then query the history table, you will see records for all the updates made:
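
UPDATE dbo.Person SET Address = '2 Elm Street' WHERE PersonID = 1;
UPDATE dbo.Person SET Address = '3 Oak Avenue' WHERE PersonID = 1;

-- Each update moves the previous row version into the history table
SELECT * FROM dbo.PersonHistory;

-- Or view every version, current and historical, in one query
SELECT * FROM dbo.Person FOR SYSTEM_TIME ALL WHERE PersonID = 1;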

You can enable system versioning on existing tables by altering the schema to introduce the ValidFrom and ValidTo columns, as sketched below. The history table becomes the focal point for your auditing needs without requiring you to write any programming logic for the purpose. It also becomes very easy to perform point-in-time analysis, such as tracking trends or differences between two points in time of interest. Another popular use case that Temporal Tables enable is anomaly detection; for example, they can help you figure out a miss in your sales forecast. Temporal Tables are a powerful feature that lets you meet your auditing needs without having to write any code!
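
For an existing table, the change is a pair of ALTER TABLE statements along these lines (a sketch, assuming a non-temporal dbo.Employee table):

ALTER TABLE dbo.Employee ADD
    ValidFrom datetime2 GENERATED ALWAYS AS ROW START NOT NULL
        CONSTRAINT DF_Employee_ValidFrom DEFAULT SYSUTCDATETIME(),
    ValidTo   datetime2 GENERATED ALWAYS AS ROW END NOT NULL
        CONSTRAINT DF_Employee_ValidTo DEFAULT CONVERT(datetime2, '9999-12-31 23:59:59.9999999'),
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo);

ALTER TABLE dbo.Employee
    SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeHistory));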

Posted by Gigi Sayfan on September 6, 2016

Shells are the interactive programs that run in text terminals and let you execute commands and run external programs. But, shells can also be used non-interactively, and you can write scripts and execute (source) them like regular programs. System administrators live and die by the command line. Unix evolved as a text-based environment: for a long time, shells were a central part of the user experience, and the graphic UI arrived significantly later. On Windows, the user experience was focused from the beginning on the graphic UI (it's not called Windows for nothing). Unix/Linux shells, such as Bash, are very good at manipulating text and chaining together (piping) the text output of small commands into the input of other commands.

On Windows, the original textual shell and batch language (command.com and later cmd.exe) were significantly inferior. But, Microsoft saw the light and developed PowerShell, which is indeed very powerful and arguably exceeds the capabilities of the Unix shells. For a long time, the two camps were pretty separate. You could run some variation of the Unix shells on Windows via Cygwin or similar, but it was mostly used for special purposes. PowerShell was definitely a Windows-only affair. But, things are changing.

Microsoft is blurring the boundaries. First, it made PowerShell available on Linux and Mac OS X, and then it brought Bash to Windows by way of Ubuntu on Windows. Those are exciting developments for shell nerds and will pave the way for stronger and more streamlined integration between *nix and Windows environments. Developers will be able to work in their favorite environment and will have fewer problems debugging production issues on various platforms.

Posted by Gigi Sayfan on August 31, 2016

Over the last three years, the average page weight has grown by at least 15 percent per year. This is due to several trends, such as increases in ad-related content, more images and more videos — as well as a lack of emphasis by designers and developers on reducing page weight. Google (along with other companies) has been on a mission to accelerate the Web across several fronts. One of the most interesting efforts is the Quick UDP Internet Connections (QUIC) project. The Web is built on top of HTTP/HTTPS, which typically uses TCP as a transfer protocol. TCP was recognized a long time ago as sub-optimal for the request-response model of the Web. An average Web page makes about 100 HTTP requests to tens of different domains to load all of its content, which causes significant latency issues due to TCP's design.


QUIC is based on the connection-less UDP and doesn't suffer from the same design limitations as TCP. It has to build its own infrastructure for ordering and re-transmitting lost packets and for dealing with congestion, but it has a lot of interesting tricks up its sleeve. The ultimate goal is to incorporate ideas from QUIC into an improved TCP protocol. Since TCP evolves very slowly, working with QUIC allows faster iteration on novel ideas, such as innovative congestion management algorithms, without disrupting the larger Web.

Where to Get Started

There is currently QUIC support in Chromium and Opera on the client side, and Google's servers support QUIC on the server side. In addition, there are a few libraries, such as libquic, and Google has released a prototype server for people to play around with QUIC. One of the major concerns was that the UDP protocol might be blocked for many users, but a survey conducted by the Chromium team showed that this is not a common occurrence. If UDP is blocked, QUIC falls back to TCP.

Posted by Sandeep Chanda on August 30, 2016

Multiple teams within Microsoft are working aggressively to create open source frameworks, and the Office team is not far behind. It has already created an open source toolkit called the Office UI Fabric that helps you easily create Office 365 apps or Office add-ins that integrate seamlessly to provide the unified Office experience. The Fabric's components are designed for the modern responsive UI, allowing you to apply the Office Design Language to your own Web and mobile form factors.

One key aspect of the Fabric is its support for tools you are already familiar with, such as Node, Angular, and React. Office UI Fabric React provides React-based components that you can use to create an experience for your Office 365 app. The idea is to let developers leverage their favorite tools when creating Office apps.

Getting Started

Open Visual Studio (make sure you have Node.js v4.x.x and Node Tools for Visual Studio installed) and create a blank Node.js Web application.

After the basic template is created, right-click on your project and click Open Command Prompt to launch the Node command console. In the Node command console, first install the create-react-app scaffolding tool using the command:

npm install -g create-react-app

followed by:

create-react-app card-demo

If there are no errors in creating the app, navigate to the app folder using the command cd card-demo and then start the Node app using the npm start command. Your React app is now running.

Next, run the following command to install the Office UI Fabric React components:

npm install office-ui-fabric-react --save

Now switch back to your Visual Studio solution and you will see a new folder created. Include the folder and its components in your project.

Open the App.js file and replace the contents with the following code:

import React, { Component } from 'react';
import logo from './logo.svg';
import './App.css';
import {
  DocumentCard,
  DocumentCardPreview,
  DocumentCardTitle,
  DocumentCardActivity
} from 'office-ui-fabric-react/lib/DocumentCard';

class App extends Component {
  render() {
    return (
      <DocumentCard>
        <DocumentCardPreview previewImages={[
          {
            previewImageSrc: require('./logo.svg'),
            width: 318,
            height: 196,
            accentColor: '#ce4b1f'
          }
        ]} />
        <DocumentCardTitle title='React Inside a Card'/>
        <DocumentCardActivity
          activity='Created Aug 27, 2016'
          people={[
            { name: 'John Doe' }
          ]}
        />
      </DocumentCard>
    );
  }
}

export default App;

Notice that we have imported the DocumentCard UI components from the Office UI Fabric and replaced the contents of the render function with a sample DocumentCard that displays the React logo inside the card. Save the changes. Now open the index.html file and include a reference to the Office UI Fabric CSS:

<link rel="stylesheet" href="https://appsforoffice.microsoft.com/fabric/2.2.0/fabric.min.css">

Save changes. Switch to the Node console and run npm start. The browser will launch with the card displayed.

While the toolkit is still in preview, it reflects how easy it is to create Office 365 apps using the language of your choice.
