Posted by Gigi Sayfan on October 26, 2016

Virtual reality is here and many companies are working on VR devices, SDKs, content and frameworks. But, true presence and immersion require high-end equipment at the moment. It will be several more years until really immersive VR is affordable and ubiquitous. In the meantime, developers must work with today's limitations and constraints. One of the most interesting initiatives is WebVR. It seems to have a lot of support and can be used today to display VR content in the browser.

The main draw of WebVR is that it lets gazillions of Web developers take advantage of their experience, skills and tools to develop VR applications and content that will be broadly available. Facebook recently announced its plans for ReactVR and the Carmel VR browser. The A-Frame project is built on top of three.js and allows you to render VR content today. The major browser vendors are all aware of the promise of VR and are taking steps to enable it in their browsers.

It is rare to see the whole industry (or even different industries) aligned and collaborating early on open standards and, in general, taking the right steps to ensure this innovation reaches each and every person sooner rather than later. I'm very excited to see these developments. The Matrix may be just 10 years away. You may be overjoyed or terrified, but don't be surprised. As for alternatives to WebVR, many developers use the Unity game engine, which has good integration with VR SDKs and devices, but the skill set and expertise needed to develop on Unity are not as ubiquitous among developers as Web development skills. I highly recommend that you check out these technologies and dip your toes into virtual reality.


Posted by Gigi Sayfan on October 18, 2016

Containers are making inroads into the mainstream. Many large organizations and startups deploy their software using containers, and many others are experimenting with the technology or planning to do so. When talking about containers, Docker is the 800-pound gorilla. The recent awareness and popularity of containers for deploying complicated systems (often using a micro-services architecture) can be credited in large part to Docker. But, Docker is not the only game in town.

There have been a lot of complaints in the community about several aspects of Docker. In particular, it has had serious security issues. Others are unhappy with the kitchen-sink approach Docker is taking and its tendency to push out half-baked features. CoreOS is one of the harshest critics. CoreOS sees containers as basic, low-level infrastructure components. CoreOS developed a standard for application containers called appc and an implementation called rkt (pronounced rocket). Several large organizations and open-source projects support this effort. In particular, the Kubernetes juggernaut, which competes with Docker Swarm in the container orchestration arena, has support for rkt containers. On the technical side, appc and rkt have some benefits such as simplicity, performance and a clear specification.

It will be very interesting to see how the landscape evolves. Are developers going to stick with the incumbent, yet quickly innovating, Docker, or are they going to flock to the supposedly superior newcomer? Are appc and rkt compelling enough for the mainstream developer to switch? I personally intend to dive into appc and rkt and find out for myself. The whole container scene is too young and fast-moving to stick with Docker just because it was first.


Posted by Gigi Sayfan on October 6, 2016

Agile practices grew out of challenges in software development and have been extended to many related activities such as database design, modeling and product management. Many budding startup companies have embraced agile practices. The nimble, feedback-based approach works superbly for a couple of guys burning the midnight oil in a room or garage. But, founding an actual company has always been a formal and tedious process that can take the founders' focus off the main activities that evolve the product. Stripe decided to change that. The Atlas initiative aims to take a lot of that burden off your shoulders--including incorporating in Delaware, opening a business bank account, registering the new company with the IRS, opening a Stripe account for accepting payments, tax consulting, and more tools and services. Atlas can also be used by non-US founders who believe incorporating in the US is the best route for their business.

Startup Ease

Atlas is still in beta, but it already has an impressive network of partners. During the beta, there is a one-time fee of $500. Once you send in your application, you should have your company up and running within a week! This is quite amazing.

How many potential new startup companies never make it out of the side-project stage because of the friction and difficulty of actually starting a company? How many companies waste a lot of time and resources at the critical early stage just getting off the ground? Atlas is an ambitious proposition and I'm very curious to see how it pans out. If you're an aspiring entrepreneur who has considered starting your own company, Atlas may be just the right thing for you.


Posted by Sandeep Chanda on September 29, 2016

In the previous post, I talked about how we can leverage Logic Apps to create scalable workflows and business processes, and demonstrated a process automation example with Salesforce. In this post, we will look at leveraging Logic Apps in a hybrid scenario, where the business process needs to integrate systems in the cloud and on-premise.

Hybrid scenarios are becoming commonplace as more enterprise customers take their first step toward embracing the cloud by rationalizing part of their portfolio onto cloud platforms such as Microsoft Azure. Logic Apps can play a significant role in automating business processes that span both cloud-based and on-premise systems.

Azure Service Bus

Connecting an on-premise WCF service to a cloud-based platform such as Salesforce is possible using the Azure Service Bus relay feature. Service Bus relay allows you to securely expose a WCF service endpoint in the cloud without having to open inbound firewall ports or make any intrusive changes to the enterprise security infrastructure.

The first step in integrating the on-premise service is to create a Service Bus namespace (you can create one from the Enterprise Integration menu when adding a new resource). Once the namespace is created, go to the shared access policies and copy the primary connection string and the primary key.

Modify your existing WCF service solution by installing the WindowsAzure.ServiceBus NuGet package (for example, by running Install-Package WindowsAzure.ServiceBus from the Package Manager Console).

This package provides the relay equivalents of the standard WCF bindings, including WebHttpRelayBinding.

In your service host configuration, add code to expose the Service Bus endpoint, along the lines of the following:
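
Here is a minimal sketch of such hosting code. The OrderService/IOrderService contract, the "mynamespace" namespace, the "orders" path and the key value are illustrative placeholders, not the original post's code; substitute your own service contract and the values copied from the portal.

using System;
using System.ServiceModel;
using System.ServiceModel.Web;
using Microsoft.ServiceBus;

// Illustrative stand-in for the existing on-premise service contract.
[ServiceContract]
public interface IOrderService
{
    [OperationContract, WebInvoke(Method = "POST", UriTemplate = "process")]
    string Process(string order);
}

public class OrderService : IOrderService
{
    public string Process(string order) { return "processed: " + order; }
}

class Program
{
    static void Main()
    {
        // Host the WCF service and expose it through a Service Bus relay endpoint.
        var host = new WebServiceHost(typeof(OrderService));

        var endpoint = host.AddServiceEndpoint(
            typeof(IOrderService),
            // RelayClientAuthenticationType.None lets callers reach the relay with a
            // plain HTTP request; the listener still authenticates itself below.
            new WebHttpRelayBinding(EndToEndWebHttpSecurityMode.Transport,
                                    RelayClientAuthenticationType.None),
            ServiceBusEnvironment.CreateServiceUri("https", "mynamespace", "orders"));

        // Authenticate the listener with the shared access key copied from the portal.
        endpoint.Behaviors.Add(new TransportClientEndpointBehavior
        {
            TokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(
                "RootManageSharedAccessKey", "<primary key>")
        });

        host.Open();
        Console.WriteLine("Relay endpoint is open. Press Enter to exit.");
        Console.ReadLine();
        host.Close();
    }
}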

Modify your WCF service configuration to reflect the WebHttpRelayBinding characteristics.

Create the Client

Now that you have configured your WCF service to expose the Service Bus endpoint, you can go ahead and create the client. Since this service needs to be called by Logic Apps, and there is no direct mechanism for Logic Apps to call a SOAP service, you will have to create an Azure Function App that can call the WCF service whenever the Logic App triggers it. To create the Azure Function App, navigate to add a new resource in your Azure management portal and search for Function Apps. Provide a name and a resource group (tip: make sure the resource group is the same for the Logic App and Function App instances).

Once the Function App is created, you can add your client code in the code window. Make sure the necessary NuGet package assemblies are referenced if your code needs them to call the Service Bus endpoint.
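
As a rough sketch, in the C# script style Function Apps used at the time, the client code could simply forward the payload received from the Logic App to the relayed endpoint. The relay URL below matches the illustrative namespace and path used above and is an assumption; because the relay endpoint was configured with RelayClientAuthenticationType.None, a plain HTTP call suffices.

using System.Net.Http;
using System.Text;

static readonly HttpClient client = new HttpClient();

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    // Address of the on-premise WCF service as exposed through the Service Bus relay.
    var relayUrl = "https://mynamespace.servicebus.windows.net/orders/process";

    // Forward whatever the Logic App sent to the relayed service.
    var payload = await req.Content.ReadAsStringAsync();
    var response = await client.PostAsync(relayUrl,
        new StringContent(payload, Encoding.UTF8, "application/json"));

    log.Info($"Relay call returned {response.StatusCode}");
    return req.CreateResponse(response.StatusCode,
        await response.Content.ReadAsStringAsync());
}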

The final step in the process is to copy the Function URL and put it in the HTTP connector in the Logic Apps workflow created in the previous post. Add this step under the "If Yes" branch that runs whenever an object is modified in Salesforce. You can specify the expected parameters, thus wiring the trigger from Salesforce through to an on-premise WCF service!


Posted by Gigi Sayfan on September 28, 2016

The Go language is 6 years old and has gotten a lot of traction. With the recent release of Go 1.7.1, the entire ecosystem looks very healthy. The continued focus on performance, while maintaining the original philosophy of simplicity, is encouraging. Go adoption is on the rise and Go is ideally suited for building micro-services that run on multi-core machines (often in containers). Its strong concurrency support lets programs take advantage of multiple cores almost transparently. What are the indicators of Go's success? Go is being used for many innovative distributed system projects such as etcd, Docker, Kubernetes, NSQ and InfluxDB.

Of course Go is used heavily inside Google. Python developers, in particular, flock to Go when they have to deal with performance issues. Another encouraging sign is the Go mobile project. The premise is that you can write both the backend and the mobile frontend for Android and iOS in Go. This is similar to Node.js where you use the same language to write the backend and the frontend.

Going Forward

One other important factor is the improvement in Go's development environments. I'm a big fan of debuggers, and while Go advocates often say that Go is so simple you can just do Printf debugging, I prefer a real debugger for troubleshooting complex systems. The Delve debugger provides a solid debugging experience and is starting to be integrated into various Go IDEs and editors. If you are starting a new project, considering migrating incrementally to micro-services or just looking to expand your horizons, then Go should be on your radar as a nascent, yet well-supported, language with strong momentum.


Posted by Gigi Sayfan on September 26, 2016

The power of mobile devices and the available bandwidth keep increasing, and content producers are very aware of the importance of the mobile platform. They generate content that's designed specifically for mobile consumption. But, the user experience is still often lacking. There are two related problems here. First, the weight of the average Web page keeps increasing because it is encumbered with a lot of auxiliary stuff such as ads, tracking, unnecessary animation, videos and heavy images. Second, content producers and developers often aim for and test on the latest and greatest devices in an optimal networking environment. The implicit assumption is that technology moves so fast that very soon everybody will have a high-end device and a fast network. That leaves the many people with low-end devices and/or slow connections with a very poor experience.

One project that attempts to improve the situation is the AMP Project. It is built on existing Web technologies and promotes a restricted subset of HTML, CSS and JavaScript in addition to several custom HTML components that can improve performance. AMP accelerates the mobile experience by using the following practices:

  • Allow only asynchronous scripts
  • Size all resources statically
  • Don’t let extension mechanisms block rendering
  • Keep all third-party JavaScript out of the critical path
  • All CSS must be inline and size-bound
  • Font triggering must be efficient
  • Minimize style recalculations
  • Only run GPU-accelerated animations
  • Prioritize resource loading

While you can do all that without AMP, it takes a lot of effort and discipline. With AMP you get it all out of the box, and the AMP validator verifies that your pages comply. Keep an eye out for AMP. It may be a big deal very soon.


Posted by Sandeep Chanda on September 23, 2016

Azure Logic Apps provides scalable workflows and business process automation backed by the power of the cloud. It has an intuitive designer interface that allows you to declaratively define a process automation workflow. The vast array of connectors available out of the box lets you create integrations with a suite of standard enterprise applications such as Salesforce and Office 365 — as well as social platforms such as Facebook, Twitter and Instagram. Today we will look at how Logic Apps can help you automate a process in your Salesforce cloud platform instance.

Log in to your Azure subscription and create a new Logic App instance from the list of available resources. You need to specify a resource group while creating the instance. It will take a minute to deploy, and the designer will fire up once the deployment is successful. Once you are in the designer, the first step is to search for the Salesforce connector. You will see two trigger options:

  1. When an object is created
  2. When an object is modified

Select the first option. A dialog will appear letting you sign in to your Force.com account (production/staging) and then allow Logic Apps to access your account.

In the Object Type field, select Campaigns and leave the trigger interval at the default of 3 minutes. You can also expand the advanced options to provide additional filter conditions.

Next, provide a condition to check whether the budgeted cost is greater than the expected revenue. If the condition is met, you can add a subsequent step to create a Task in Salesforce for someone to act on the campaign and/or send an email.

Save the workflow. Go to your Force.com account and create a campaign whose budgeted cost is higher than its expected revenue, and you will see that a task is created after the trigger's first run, within 3 minutes.


Posted by Gigi Sayfan on September 15, 2016

Agile practices help you develop software that meets user needs faster and more safely — and that responds quickly to changes in the requirements, environment or technological advances. But, there is one "secret" practice that is not often mentioned in the context of Agile development. This is really an "un-practice". The idea is to flat out not do something. It could be a requirement (this will require negotiating with the customer), a refactoring or a bug fix. Just because something is on the backlog doesn't mean it always needs to be done. Extreme Programming calls this YAGNI (You Ain't Gonna Need It): postpone doing things that are not needed immediately.

Minimalism

Being minimalist by design is often neglected. Everybody wants to eventually conquer the world, right? Another aspect of this mindset is over-engineering. A lot of effort is expended on building infrastructure, scalability and automation that isn't necessarily needed. Why is minimalism so important, and why is it often ignored? It is important because Agile is all about delivering real value, really quickly. If you work on something that's not really needed, you just wasted time and effort.

YAGNI

The reason it's often ignored or not practiced fully is that it's difficult to be disciplined. You start working on a cool feature or capability and want to keep evolving and improving it even if it's not providing immediate business value. On the infrastructure/implementation side, developers are often worried about technical debt. I'm often guilty of trying to get the project "right" from the beginning. If you want to really deliver the maximum business value in each iteration, you have to be very aware and explicit about what you plan and how you go about it. Just paying lip service to the idea is not good enough.


Posted by Sandeep Chanda on September 13, 2016

Support for Temporal Tables has now been extended to SQL Azure databases. With Temporal Tables, you can track changes made to your data and store the entire history, either for analysis or for making the information actionable. For example, compensation logic can be triggered based on historical changes resulting from an exception scenario. The important aspect of Temporal Tables is that they keep data stored over a timeline. Data in the context of time can be leveraged to report facts that were valid for a specific period. It then becomes very easy for you to gain insights from data as it evolves over time.

Auditing is probably the most significant use case for Temporal Tables. Temporal tables are created by enabling system versioning on new or existing tables. The following SQL script creates a system-versioned temporal table:
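
This is a sketch of what such a script looks like; the Person columns other than the ValidFrom/ValidTo period columns are illustrative assumptions rather than a prescribed schema.

CREATE TABLE dbo.Person
(
    PersonId  INT IDENTITY(1,1) PRIMARY KEY,
    FirstName NVARCHAR(50) NOT NULL,
    LastName  NVARCHAR(50) NOT NULL,
    Address   NVARCHAR(200) NULL,
    -- Period columns maintained automatically by the engine
    ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo   DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.PersonHistory));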

Note that in addition to the regular fields for the Person entity, the script defines the ValidFrom and ValidTo period columns, which allow time-based tracking of any information updates on the Person table, and the WITH clause enables historical tracking of changes in the PersonHistory table.

Create a Person record using a statement such as the following:
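
The values here are placeholders for illustration.

INSERT INTO dbo.Person (FirstName, LastName, Address)
VALUES (N'John', N'Doe', N'1 Main Street, Seattle');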

Run multiple updates on the table to modify the address field. If you query the history table, you will see records for all the updates made:
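
For instance, assuming the placeholder row created above, each update moves the previous row version into the history table:

UPDATE dbo.Person SET Address = N'2 Oak Avenue, Portland' WHERE PersonId = 1;
UPDATE dbo.Person SET Address = N'3 Pine Road, Denver' WHERE PersonId = 1;

-- The earlier versions of the row are now visible in the history table
SELECT PersonId, Address, ValidFrom, ValidTo
FROM dbo.PersonHistory
WHERE PersonId = 1;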

You can enable system versioning on existing tables by altering the schema and introducing the ValidFrom and ValidTo columns. The history table becomes the focal point for your auditing needs without requiring you to write any programming logic for the purpose. It also becomes very easy to perform point-in-time analysis, such as tracking trends or differences between two points in time of interest. Another popular use case that Temporal Tables enable is anomaly detection. For example, they can help you figure out a miss in your sales forecast. Temporal Tables are a powerful feature that lets you tailor your auditing to your needs without having to write any code!


Posted by Gigi Sayfan on September 6, 2016

Shells are the interactive programs that run in text terminals and let you execute commands and run external programs. But, shells can also be used non-interactively: you can write scripts and execute (source) them like regular programs. System administrators live and die by the command line. Unix evolved as a text-based environment, and for a long time shells were a central part of the user experience; the graphical UI evolved significantly later. On Windows, the user experience was focused from the beginning on the graphical UI (it's not called Windows for nothing). Unix/Linux shells, such as Bash, are very good at manipulating text and chaining together (piping) the text output of small commands to the input of other commands.

On Windows, the original textual shell and batch language (command.com and later cmd.exe) were significantly inferior. But, Microsoft saw the light and developed PowerShell, which is indeed very powerful and arguably exceeds the capabilities of the Unix shells. For a long time, the two camps were pretty separate. You could run some variation of the Unix shells on Windows via Cygwin or similar, but it was mostly used for special purposes. PowerShell was definitely a Windows-only affair. But, things are changing.

Microsoft is blurring the boundaries. First, Microsoft made PowerShell available on Linux and Mac OS X, and then it brought Bash to Windows by way of Ubuntu on Windows. Those are all exciting developments for shell nerds and will pave the way for stronger and more streamlined integration between *nix and Windows environments. Developers will be able to work in their favorite environment and will have fewer problems debugging production issues on various platforms.

