
Posted by Gigi Sayfan on October 26, 2016

Virtual reality is here and many companies are working on VR devices, SDKs, content and frameworks. But true presence and immersion currently require high-end equipment, and it will be several more years until really immersive VR is affordable and ubiquitous. In the meantime, developers must work within today's limitations and constraints. One of the most interesting initiatives is WebVR. It has a lot of support and can be used today to display VR content in the browser.

The main draw of WebVR is that it lets gazillions of Web developers take advantage of their experience, skills and tools to develop VR applications and content that will be broadly available. Facebook recently announced plans for ReactVR and the Carmel VR browser. The A-Frame project is built on top of three.js and lets you render VR content today. The major browser vendors are all aware of the promise of VR and are taking steps to enable it in their browsers.
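To give a taste of how approachable A-Frame is, here is a minimal scene built from its standard primitives. This is a sketch, not production markup; the script URL points at a placeholder "latest" path, so check the A-Frame site for the current release:

```html
<!-- Minimal A-Frame scene; the script path below is a placeholder -->
<html>
  <head>
    <script src="https://aframe.io/releases/latest/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <!-- Primitives are positioned in meters relative to the camera -->
      <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
      <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>
```

Everything is plain HTML, which is exactly why existing Web skills transfer so directly.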

It is rare to see the whole industry (or even different industries) aligned and collaborating early on open standards and, in general, taking the right steps to ensure this innovation reaches each and every person sooner rather than later. I'm very excited to see these developments. The Matrix may be just 10 years away. You may be overjoyed or terrified, but don't be surprised. As for alternatives to WebVR, many developers use the Unity game engine, which has good integration with VR SDKs and devices, but Unity expertise is not nearly as widespread among developers as Web development skills. I highly recommend that you check out these technologies and dip your toes into virtual reality.

Posted by Gigi Sayfan on October 18, 2016

Containers are making inroads into the mainstream. Many large organizations and startups deploy their software using containers, and many others are experimenting with the technology or planning to adopt it. When talking about containers, Docker is the 800-pound gorilla. The recent awareness and popularity of containers for deploying complicated systems (often using a micro-services architecture) can be credited in large part to Docker. But Docker is not the only game in town.

There have been a lot of complaints in the community about several aspects of Docker. In particular, it has had serious security issues. Others are unhappy with the kitchen-sink approach Docker is taking and its tendency to push out half-baked features. CoreOS is one of the harshest critics. CoreOS sees containers as basic, low-level infrastructure components, and it developed a standard for application containers called appc along with an implementation called rkt (pronounced rocket). Several large organizations and open-source projects support this effort. In particular, the Kubernetes juggernaut, which competes with Docker Swarm in the container orchestration arena, supports rkt containers. On the technical side, appc and rkt have some benefits such as simplicity, performance and a clear specification.
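The clear-specification point is easy to illustrate: an appc image manifest is just a small, self-describing JSON document. The sketch below follows the shape of the appc image manifest spec; the image name, exec path and spec version shown are hypothetical placeholders:

```json
{
  "acKind": "ImageManifest",
  "acVersion": "0.8.9",
  "name": "example.com/my-service",
  "labels": [
    {"name": "os", "value": "linux"},
    {"name": "arch", "value": "amd64"}
  ],
  "app": {
    "exec": ["/usr/bin/my-service"],
    "user": "0",
    "group": "0"
  }
}
```

Compared to a Dockerfile plus daemon state, everything about the image is declared in one place, which is much easier to reason about and to implement against.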

It will be very interesting to see how the landscape evolves. Are developers going to stick with the incumbent, yet quickly innovating, Docker or are they going to flock to the supposedly superior newcomer? Are appc and rkt compelling enough for the mainstream developer to switch? I personally intend to dive into appc and rkt and find out for myself. The whole container scene is too young and fast moving to stick with Docker just because it was first.

Posted by Sandeep Chanda on September 29, 2016

In the previous post, I talked about how we can leverage Logic Apps to create scalable workflows and business processes and demonstrated a process automation example with Salesforce. In this post, we will look at leveraging Logic Apps in a hybrid scenario, where the business process needs to integrate systems in cloud and on-premise.

Hybrid scenarios are becoming commonplace as more enterprise customers take their first step in embracing the cloud by rationalizing part of their portfolio onto cloud platforms such as Microsoft Azure. Logic Apps can play a significant role in automating business processes that span both cloud-based and on-premise systems.

Azure Service Bus

Connecting an on-premise WCF service to a cloud-based platform such as Salesforce is possible using the Azure Service Bus relay feature. Service Bus relay allows you to securely expose a WCF service endpoint in the cloud without having to open a firewall connection or make any intrusive changes to the enterprise security infrastructure.

The first step to integrating the on-premise service is to create a Service Bus namespace (you can create one from the Enterprise Integration menu under create new resource). Once the namespace is created, go to the shared access policies and copy the connection string and the primary key.

Modify your existing WCF service solution by installing the WindowsAzure.ServiceBus NuGet package.

This package provides the relay bindings, which mirror the standard WCF bindings.

In your service host configuration, add code to expose the Service Bus endpoint.

Modify your WCF service configuration to reflect the WebHttpRelayBinding characteristics.
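As a sketch of what that configuration can look like, here is a hypothetical app.config fragment wiring a relay endpoint to webHttpRelayBinding. The service and contract names, namespace and key are placeholders for your own values:

```xml
<!-- Hypothetical fragment; service/contract names, namespace and key are placeholders -->
<system.serviceModel>
  <bindings>
    <webHttpRelayBinding>
      <binding name="relayBinding" />
    </webHttpRelayBinding>
  </bindings>
  <behaviors>
    <endpointBehaviors>
      <behavior name="sbTokenProvider">
        <!-- Authenticates the listener against the Service Bus namespace -->
        <transportClientEndpointBehavior>
          <tokenProvider>
            <sharedAccessSignature keyName="RootManageSharedAccessKey"
                                   key="[primary key]" />
          </tokenProvider>
        </transportClientEndpointBehavior>
      </behavior>
    </endpointBehaviors>
  </behaviors>
  <services>
    <service name="MyCompany.OrderService">
      <endpoint address="https://[namespace].servicebus.windows.net/orders"
                binding="webHttpRelayBinding"
                bindingConfiguration="relayBinding"
                behaviorConfiguration="sbTokenProvider"
                contract="MyCompany.IOrderService" />
    </service>
  </services>
</system.serviceModel>
```

When the service host opens, it establishes an outbound connection to the relay address, which is why no inbound firewall changes are needed.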

Create the Client

Now that you have configured your WCF service to expose the Service Bus endpoint, you can go ahead and create the client. Since this service needs to be called by Logic Apps, and there is no direct mechanism for Logic Apps to call a SOAP service, you will have to create an Azure Function App that can call the WCF service whenever the Logic App triggers the call. To create the Azure Function App, navigate to add new resource in your Azure management portal and search for Function Apps. Provide a name and a resource group (tip: make sure the resource group is the same for the Logic App and Function App instances).

Once the Function App is created, you can add your client code in the code window. Make sure the necessary NuGet package assemblies are referenced to call the Service Bus endpoint.

The final step in the process is to copy the Function URL and put it in the HTTP connector in your Logic Apps workflow created in the previous post. Add this step under the "If Yes" branch that fires whenever an object is modified in Salesforce. You can then specify the expected parameters, completing the trigger path from Salesforce to an on-premise WCF service!

Posted by Gigi Sayfan on September 28, 2016

The Go language is 6 years old and has gained a lot of traction. With the recent release of Go 1.7.1, the entire ecosystem looks very healthy. The continued focus on performance, while maintaining the original philosophy of simplicity, is encouraging. Go adoption is on the rise, and Go is ideally suited for building micro-services that run on multi-core machines (often in containers). Its strong concurrency support makes it almost transparent to take advantage of multiple cores. What are the indicators of Go's success? Go is being used for many innovative distributed systems projects such as etcd, Docker, Kubernetes, NSQ and InfluxDB.
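The near-transparent use of multiple cores comes from goroutines, which the runtime schedules across OS threads for you. As a minimal sketch (the function name and chunking scheme here are just illustrative), this program sums a slice in parallel, one goroutine per chunk:

```go
package main

import (
	"fmt"
	"sync"
)

// parallelSum splits nums into roughly equal chunks and sums each chunk
// in its own goroutine; the Go scheduler spreads them across cores.
func parallelSum(nums []int, workers int) int {
	if workers < 1 {
		workers = 1
	}
	chunk := (len(nums) + workers - 1) / workers // ceiling division
	results := make(chan int, workers)           // buffered: no goroutine blocks on send
	var wg sync.WaitGroup
	for i := 0; i < len(nums); i += chunk {
		end := i + chunk
		if end > len(nums) {
			end = len(nums)
		}
		wg.Add(1)
		go func(part []int) {
			defer wg.Done()
			s := 0
			for _, n := range part {
				s += n
			}
			results <- s
		}(nums[i:end])
	}
	wg.Wait()
	close(results)
	total := 0
	for s := range results {
		total += s
	}
	return total
}

func main() {
	nums := make([]int, 1000)
	for i := range nums {
		nums[i] = i + 1
	}
	fmt.Println(parallelSum(nums, 4)) // prints 500500
}
```

Note that there is no thread management, locking or callback plumbing in sight; goroutines, channels and a WaitGroup cover the whole pattern.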

Of course Go is used heavily inside Google. Python developers, in particular, flock to Go when they have to deal with performance issues. Another encouraging sign is the Go mobile project. The premise is that you can write both the backend and the mobile frontend for Android and iOS in Go. This is similar to Node.js where you use the same language to write the backend and the frontend.

Going Forward

One other important factor is the improvement in Go's development environments. I'm a big fan of debuggers, and while Go advocates often say that Go is so simple you can just do Printf debugging, I prefer a real debugger for troubleshooting complex systems. The Delve debugger provides a solid debugging experience and is starting to be integrated into various Go IDEs and editors. If you are starting a new project, considering migrating incrementally to micro-services or just looking to expand your horizons, then Go should be on your radar as a nascent, yet well-supported language with strong momentum.

Posted by Gigi Sayfan on September 26, 2016

The power of mobile devices and the available bandwidth keep increasing, and content producers are very aware of the importance of the mobile platform. They generate content that's designed specifically for mobile consumption. But the user experience is still often lacking. There are two related problems here. First, the weight of the average Web page keeps increasing as it is encumbered with auxiliary stuff: ads, tracking, unnecessary animation, videos and heavy images. Second, content producers and developers often aim for, and test on, the latest and greatest devices in optimal networking environments. The implicit assumption is that technology moves so fast that very soon everybody will have a high-end device and a fast network. That leaves a lot of people with low-end devices and/or slow connections with a very poor experience.

One project that attempts to improve the situation is the AMP Project. It is built on existing Web technologies and promotes a restricted subset of HTML, CSS and JavaScript in addition to several custom HTML components that can improve performance. AMP accelerates the mobile experience by using the following practices:

  • Allow only asynchronous scripts
  • Size all resources statically
  • Don’t let extension mechanisms block rendering
  • Keep all third-party JavaScript out of the critical path
  • All CSS must be inline and size-bound
  • Font triggering must be efficient
  • Minimize style recalculations
  • Only run GPU-accelerated animations
  • Prioritize resource loading

While you can do all that without AMP, it takes a lot of effort and discipline. With AMP you get it all out of the box, enforced by the AMP validator. Keep an eye out for AMP. It may be a big deal very soon.
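Concretely, an AMP page is ordinary HTML plus the AMP runtime script and a few required attributes. The sketch below shows the general shape; the file names are placeholders, and the mandatory amp-boilerplate style block is omitted for brevity, so consult the AMP documentation for the full required markup:

```html
<!doctype html>
<html amp lang="en">
  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width,minimum-scale=1">
    <link rel="canonical" href="article.html">
    <!-- All CSS must be inline and size-bound -->
    <style amp-custom>h1 { color: #333; }</style>
    <!-- The AMP runtime is the only author-written script allowed, loaded async -->
    <script async src="https://cdn.ampproject.org/v0.js"></script>
  </head>
  <body>
    <h1>Hello AMP</h1>
    <!-- amp-img is statically sized, so the layout never reflows while loading -->
    <amp-img src="hero.jpg" width="600" height="400" layout="responsive"></amp-img>
  </body>
</html>
```

The restrictions in the list above fall out of this structure: no synchronous scripts can exist, and every resource declares its size up front.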
