Posted by Sandeep Chanda on June 29, 2016

With most SharePoint development now focused on the Client Side Object Model (CSOM), community guidance on best practices for using the model from JavaScript was long overdue. The Office 365 Developer Patterns and Practices team recently announced the release of a JavaScript Core Library that packages some of these common practices and accelerates SharePoint development with client-side technologies.

The library provides a fluent API for performing CSOM operations. In addition, it supports the ES6 promise specification for chaining asynchronous operations. The library works equally well inside a SharePoint Script Editor Web Part and with a module loader such as RequireJS.

To get started, first add the Node.js package to your project using npm:

npm install sp-pnp-js --save-dev

Once you have installed the package, you can import the root object and start interacting with the API. You can also use the API from within a Visual Studio TypeScript project: first add the RequireJS NuGet package, and then use the module loader to load the pnp library.

Here is the RequireJS code declaring the module dependencies:

require(["jquery", "pnp", "fetch", "es6-promise.min"], function ($, app) {

    $(function () {
        app.render($("#content"), {
            "jquery": $
        });
    });

});

You will notice that apart from the module dependencies for jQuery and the app launcher, there are additional dependencies on the fetch and es6-promise modules. The fetch polyfill supports making cross-origin requests against an API, and the es6-promise library lets you chain requests using the promise style of programming in JavaScript.

Here is sample app code leveraging the pnp module:

import pnp from "pnp";

class App {

    render(element: HTMLElement, preloadedModules: any) {

        let $ = preloadedModules["jquery"];

        // get() returns a promise; append the web title once it resolves
        pnp.sp.web.select("Title").get().then(w => {
            $(element).append(`${w.Title}`);
        });
    }
}

You can also leverage the promise style as shown in the example below:

pnp.sp.crossDomainWeb().select("Title").get().then(function (result) {
    // perform further operations on result
});


Posted by Gigi Sayfan on June 28, 2016

Xamarin creates mobile app development tools built on top of the Mono Project. Xamarin has arguably always provided the most polished cross-platform development environment, but it was pretty pricey. Recently, Microsoft acquired Xamarin and, in the new spirit of openness, made Xamarin free. That means it costs developers nothing, and you can also look at the code and even contribute if you're so inclined.

There are some services that you still need to pay for, such as Xamarin Test Cloud and training at Xamarin University. But, those are extras most developers and organizations can do without, and the organizations that do require them can usually afford to pay for them.

Why is it such a big deal? Xamarin provides a mature, well-thought-out and well-engineered solution for cross-platform app development.

With Xamarin, you develop in C# and have the power of the .NET Framework behind you. Xamarin does the heavy lifting of translating your C# code for each native mobile OS. You can target iOS, Android and, of course, Windows Phone. Xamarin provides an interesting mix of approaches. You get cross-platform capability with Xamarin.Forms, which gives you a native look and feel, and you can also get full access to each target platform's capabilities using Xamarin.iOS and Xamarin.Android. The main benefit is that you can start prototyping, and even begin actual development, quickly for all supported platforms using Xamarin.Forms, knowing that if you do need to write low-level platform-specific code, that route is always open to you and will integrate cleanly with the cross-platform code.


Posted by Gigi Sayfan on June 24, 2016

The traditional view of productivity, and how to improve it, is completely backwards. Most people think of productivity as a personal attribute of themselves (or their subordinates): X does twice as much as Y, or yesterday I had a really good day and accomplished much more than usual. This is a very limited view, and it doesn't touch on the real issue.

The real issue is organizational productivity. The bottom line. The level of unproductivity increases with scale. This is nothing new. It is one of the reasons that tiny startups can beat multi-billion-dollar corporations. But, most organizations look at the inefficiencies introduced with scale as a process or communication problem: "If you improve the process or the communication between groups, then you'll improve your situation." There is some merit to this idea, but in the end, larger organizations still have a much greater amount of unproductivity than smaller ones.

The individual employees at the bottom of an organizational hierarchy work approximately as hard as individual startup employees. Middle management does its thing, but is unable to eliminate or significantly reduce this large-organization unproductivity tax. In some cases, there is a business justification for the larger organization to move more slowly. For example, if you have a mature product used by a large, and mostly happy, user base, then you don't want to change the UI every three months and frustrate your existing users with weird design experiments. You might want to do A/B testing on a small percentage of users, however. The current thinking is that this is unavoidable: large companies just can't innovate quickly, so they either create internal autonomous "startups" or acquire startups and try to integrate them. But, both approaches miss out on important benefits.

I don't have an answer, just a nagging feeling that we shouldn't accept this unproductivity tax as an axiom. I look forward to some creative approaches that will let big companies innovate at startup-like speeds, while maintaining the advantages of scale.


Posted by Sandeep Chanda on June 22, 2016

While there are several scenarios that may require you to run .NET code from within Node.js, such as programming against a Windows-specific interface or running a T-SQL query, there are also scenarios where you might have to execute Node.js code from a .NET application. The most obvious one is where you have to return results from the .NET code to a calling Node script using a callback function, but there are other possibilities, such as hybrid teams working on processes that run both Node and .NET applications. With Node.js getting a fairly large share of server-side development in recent years, such hybrid development could become commonplace.

Edge.js solves the problem of marshalling between .NET and Node.js (using the V8 engine and the .NET CLR), thereby allowing these two server-side platforms to run in-process with one another on Windows, Linux and Mac. Edge can compile CLR code (primarily C#, but any CLR-supported language will do) and provides an asynchronous mechanism for interoperable scripts. Edge.js allows you to marshal not only data but also function proxies; a JavaScript function is exposed to .NET as a Func&lt;object, Task&lt;object&gt;&gt; delegate.

To install Edge.js in your .NET application, you can use the NuGet package.

Once you have successfully installed the package, you will see the Edge folder appear in your solution.

You can then reference the EdgeJs namespace in your class files and invoke Node.js code from C#.
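
A minimal sketch of this, using the Edge.Func factory from the EdgeJs package (the embedded Node.js function and the message it returns are illustrative):

using System;
using System.Threading.Tasks;
using EdgeJs;

class Program
{
    public static async Task Start()
    {
        // Edge.Func compiles the embedded Node.js function and exposes it
        // to .NET as a Func<object, Task<object>> delegate
        var greet = Edge.Func(@"
            return function (data, callback) {
                callback(null, 'Node.js welcomes ' + data);
            };
        ");

        // Await the asynchronous callback coming back from the Node.js side
        Console.WriteLine(await greet(".NET"));
    }

    static void Main(string[] args)
    {
        Start().Wait();
    }
}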

Note how the code uses the .NET async/await mechanism to support the asynchronous callback of a JavaScript function using Node.js and Edge.js. This opens up several possibilities for calling server-side JavaScript from a .NET application using Edge.


Posted by Gigi Sayfan on June 16, 2016

In today's information-rich world, people read more than ever. We are constantly bombarded with text. Software developers, in particular, read a great deal. But, what part do books play in all this reading? Also, what is a book exactly these days?

I was always an avid reader. I read a lot in general, and software development books were my preferred channel for improving my knowledge and understanding. Back then, the Internet had barely started reaching the mainstream. Companies had libraries and developers had stacks of books on their desks with lots of post-it notes and highlighted sections. Browsing meant physically turning pages in a book. The equivalent of Stack Overflow was asking the department genius. Fast forward to the present, and developers have an overwhelming number of options for accessing information across all dimensions: programming languages, frameworks, databases and methodologies.

The pace of innovation in all of these areas seems to have increased as well. How can a developer make sense of this abundance? Many developers give up and don't try to understand things in depth. They focus on getting the job done, following architectures and patterns designed by others, using frameworks that encapsulate many operational best practices and assembling loosely-coupled components. When they need to address a specific problem, they look for a similar project on GitHub, a Stack Overflow answer or a blog. This is not necessarily a bad thing. A small number of people write the foundational frameworks and libraries, and many other people reap the benefits. This shows maturity and advances in ergonomic design. The '90s holy grail of reuse is finally here. But, that leaves software development books in an awkward position. By and large, they are no longer a useful medium for the majority of developers.

There are some books that communicate general concepts well, but most software development books explain how to use a particular framework or tool. Paper books are disappearing fast. Even e-books don't seem to cover these needs. In the past, books tried to keep up-to-date by releasing new versions. But, there is a new trend of "live" books that are constantly updated. This may be the future of software books, but is it really a book anymore?


Posted by Gigi Sayfan on June 9, 2016

In Rational Tech Adoption, I discussed how to decide whether or not to adopt new technologies. But that decision is not context-free. There is always a switching cost, and if it is too high you might forgo otherwise beneficial upgrades whose benefits don't outweigh it.

In order to benefit from new technologies, you need to build agility right into your stack. Can you easily switch your relational database from MySQL to PostgreSQL? How about from MySQL to MongoDB? Can you move from Python 2 to Python 3? How about from Python to Go? Are you running your servers on AWS? Can you move them to the Google Cloud Platform? How about switching to a bare-metal or virtual server provider? Those are not theoretical questions. If you build successful systems that accumulate a lot of users who stay around for many years, those questions will come up.

Traditionally, the answer was always, "Nope. Can't switch," and you got stuck with your initial choices or, alternatively, a super complicated project was launched to upgrade some aspect of the system. The good news is that, as with testability, performance and security, if you follow best practices such as modularity, information hiding and interaction via interfaces, and build loosely coupled systems, then it will be relatively simple to replace any particular component or technology.

The stumbling block may often be your DevOps infrastructure and not the code itself. The whole notion of DevOps is still new to many organizations and there isn't much awareness regarding the need to quickly switch fundamental parts of your DevOps infrastructure. If you consider these aspects when you build your system, you'll have the chance to reap enormous benefits as your system scales and evolves.


Posted by Sandeep Chanda on June 7, 2016

SonarQube is a popular open source platform for managing code quality across the application life cycle. It covers the seven axes of source code quality, namely: code clones, unit testing, complexity, potential sources of bugs, adherence to coding rules, documentation in the form of comments, and architecture and design. The beauty of SonarQube is not only its ability to combine metrics for better correlation and analysis, but also to mix them with historical results. SonarQube is extensible using plugins and provides out-of-the-box support for multiple languages, including C#. It also offers a plugin for MSBuild, letting you integrate SonarQube with Team Build definitions in TFS and making code-debt analysis part of your build definitions.

To configure SonarQube for TFS, first download SonarQube.

Next, download the C# and MSBuild plugins.

Note that you will need Java running on your system to configure and run SonarQube.

Extract the downloaded package to a local folder on your system and place the C# plugin JAR file under the extensions\plugins directory. Run the StartSonar.bat file in the bin folder to start the SonarQube server. SonarQube runs on port 9000 by default. Once the server has started, you can navigate to http://localhost:9000 to access the SonarQube portal.

Next, extract the MSBuild plugin package to a local folder and verify that the sonar.host.url property in the SonarQube.Analysis.xml file is configured with the correct SonarQube server address.

You are now ready to configure SonarQube analysis in your TFS team build definition. Modify the build definition to set the Pre-build script path (under the advanced properties) to the full path of the MSBuild.SonarQube.Runner.exe file. Also set the Pre-build script arguments to contain the following four arguments:

  • begin
  • /k: [the project key of the SonarQube project]
  • /n: [the project name]
  • /v: [the project version]

Also set the Post-test script path to the full path to MSBuild.SonarQube.Runner.exe, and the Post-test script arguments to contain the argument "end".
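
Taken together, the pre-build and post-test steps wrap the compile and test phases of the build in a pair of runner invocations roughly like the following, with compilation and test execution happening between the two calls (the project key, name and version values are placeholders):

MSBuild.SonarQube.Runner.exe begin /k:MyProjectKey /n:"My Project" /v:1.0
MSBuild.SonarQube.Runner.exe end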

You are all set. Once you run the build, the build report will show a SonarQube analysis summary and a link to the analysis results, which will take you to the SonarQube dashboard.


Posted by Sandeep Chanda on May 27, 2016

Ever since the introduction of ASP.NET MVC, and subsequently Web API, there has been confusion brewing in the .NET Web development community about the versioning practices followed by the platform developers within the realm of ASP.NET. ASP.NET MVC and Web API spawned version numbers different from ASP.NET's, and continued to release their own versions in spite of being part of ASP.NET.

The popularity of ASP.NET Web Forms also took a beating, given the ever-increasing demand for ASP.NET MVC and Web API in building enterprise-grade Web applications. This resulted in ASP.NET MVC and Web API garnering more attention from the platform developers and, ultimately, in more frequent releases. The release of ASP.NET 5 only added to the confusion, with vNext also being used interchangeably as a name.

Early this year, the ASP.NET platform development team decided to drop this nomenclature and completely rebrand ASP.NET as ASP.NET Core 1.0. This came quickly on the heels of rebranding .NET as .NET Core. It is no longer a newer version of an existing Web development framework that is bigger and better than its predecessor; it is a brand new Web platform written from the ground up for .NET Core, and it is actually much more lightweight than ASP.NET 4.6.

While the Core 1.0 version is not as complete as 4.6, with the release of RC2 a few weeks back the framework is coming really close to general availability. There are still significant gaps between what 4.6 offers and what is available in Core 1.0, but it is nevertheless a unified platform, with MVC and Web API part of it rather than branded as separate frameworks. This is very promising indeed!

The biggest change in RC2 is the new .NET CLI, which replaces DNX, the unified toolset for running .NET applications on Windows, Mac and Linux. RC2 also updates the hosting model so that an ASP.NET Core application is a plain console app, giving developers more flexibility in controlling the way their Core app runs and making the tool chain consistent for both .NET Core and ASP.NET Core. ASP.NET Core provides a WebHostBuilder class that gives you the power to configure your Web application the way you want, including the ability to optionally host it on IIS. In addition to these groundbreaking changes, RC2 also gives you the ability to host your ASP.NET Core applications in Azure.
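
As a rough sketch of the console-app hosting model with the RC2 APIs, the entry point of an ASP.NET Core application might look something like this (the Startup class is assumed to hold the usual application configuration, and the exact using directives depend on the packages you reference):

using System.IO;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        // An ASP.NET Core app is just a console app that builds and runs a Web host
        var host = new WebHostBuilder()
            .UseKestrel()                                    // cross-platform Web server
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseIISIntegration()                             // optional: run behind IIS
            .UseStartup<Startup>()                           // assumed Startup class
            .Build();

        host.Run();
    }
}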

A paradigm shift in Web development is coming our way with ASP.NET Core, and at this point we are eagerly awaiting the RTM release!


Posted by Gigi Sayfan on May 25, 2016

My favorite Agile methodology, Extreme Programming, has five main values: Simplicity, Communication, Feedback, Respect and Courage. The first four are relatively uncontested; everybody agrees on them. Courage is different. Courage ostensibly flies in the face of safety, security, stability and risk mitigation. But, this is not what courage is about. Courage is actually about trust: trust yourself, trust your team, trust your methods and trust your tools. But, trusting doesn't mean being gullible or blindly trusting that things will somehow work out. Your trust must be based on solid facts, knowledge and experience. If you have never done something, you can't just trust that you'll succeed. If you join a new team, you can't trust them to pull through in difficult times. Once you gain trust, you can be courageous and push the envelope, knowing that if anything goes wrong you have a safety net.

When trying something new, always make sure you have a plan B or a fallback. Start by asking, "What if it fails?" This is not being pessimistic; this is being realistic. Sometimes the downside is so minimal that there is no need to take any measures: if it fails, it fails, and no harm done. Let's consider a concrete example: suppose you decide with your team that you need to switch to a new NoSQL database with a different API. What can go wrong? Plenty.

The new DB may have serious problems you only detect after migration. The migration may take longer than anticipated. Major query logic may turn out to depend on special properties of the original DB. How can you even think of attempting something like that? Well, if your data access is localized to a small number of modules, if you can test the new DB side by side with the old one, and if you have a pretty good understanding of how the new DB works, then you should be confident enough to give it a go. To summarize: being courageous is not taking unacceptable risks and being impervious to failure. It is taking well-measured risks based on a sound analysis of the situation and full trust that you know what to do if things go south.


Posted by Gigi Sayfan on May 18, 2016

Software is infamously hard. This notion dates back to the software crisis of the 1970s and Fred Brooks' "No Silver Bullet" paper. The bigger the system, the more complicated it is to build software for it. The traditional trajectory is to build a system to spec and then watch it decay and rot over time until it is impossible to add new features or fix bugs due to the overwhelming complexity.

But, it doesn't have to be this way. Robust software (per my own definition) is a software system that gets better and better over time. Its architecture becomes simpler and more generic as it incorporates more real-world use cases. Its test suite gets more comprehensive as it checks for more combinations of inputs and environments. Its real-world performance improves as insights into usage patterns allow for pragmatic optimizations. Its technology stack and third-party dependencies get upgraded to take advantage of their improvements. The team grows more familiar with the general architecture of the system and the business domain (working on such a system is a pleasure, so churn will be low). The operations team gathers experience, automates more and more processes, and builds or incorporates existing tools to manage the system.

The team develops APIs to expose the functionality in a loosely coupled way and integrates with external systems. This may sound like a pipe dream to some of you, but it is possible. It takes a lot of commitment and confidence, but the cool thing is that if you're able to follow this route, you'll produce software that is not only high quality but also fast to develop and adapt to different business needs. It does take a significant amount of experience to balance the trade-offs between infrastructure and application needs, but if you can pull it off, you will be handsomely rewarded. The first step in your journey is to realize that the status quo is broken.

