Posted by Gigi Sayfan on July 28, 2016

Microsoft recently announced the release of .NET Core 1.0, a truly cross-platform runtime and development environment. It is a significant milestone in Microsoft's commitment to fully open and platform-agnostic computing. It supports Windows (duh!), Linux, Mac OS X, iOS and Android.

The transition over the years from a Windows-centric view to fully embracing other platforms, as well as the open source model (yes, .NET is now open source via the .NET Foundation), is very impressive. Another interesting aspect is that Microsoft made the announcement at the Red Hat Summit together with Red Hat, which will officially support .NET Core in its enterprise product.

In addition, Microsoft also announced ASP.NET Core 1.0, which unifies ASP.NET MVC and WebAPI. ASP.NET Core 1.0 can run on top of either .NET Core 1.0 or the full-fledged .NET framework. Exciting days are ahead for .NET developers whose skills, and the famous .NET productivity, suddenly become even more widely applicable through Microsoft's efforts.

Some of the distinctive features of .NET Core 1.0, in addition to its cross-platform abilities, are:

  • Flexible deployment (Can be included in your app or installed side-by-side user- or machine-wide)
  • Command-line tools (The 'dotnet' command. You don't need to rely on an IDE, although there is great IDE support too; see the quick sketch after this list)
  • Compatibility (It works with the .NET framework, Xamarin and Mono)
  • Open Source (MIT or Apache 2 licenses, the documentation is open source too)
  • Official support from Microsoft
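
As a rough illustration of that command-line workflow (a sketch assuming the .NET Core 1.0 SDK and tooling are installed; exact commands may differ slightly between tooling previews), you can create and run a console app without ever opening an IDE:

dotnet new       # scaffold a Hello World console project in the current directory
dotnet restore   # restore NuGet package dependencies
dotnet run       # compile and run the app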


Posted by Sandeep Chanda on July 22, 2016

Distributed transaction management is a key architectural consideration that needs to be addressed whenever you are proposing a microservices deployment model to a customer. For developers from the monolithic relational database world, it is not an easy idea to grasp. In a monolithic application with a relational data store, transactions are ACID (atomic, consistent, isolated, and durable) in nature and follow a simple pessimistic control for data consistency. Distributed transactions are traditionally guided by a two-phase commit protocol, where all the changes are first applied tentatively and then committed or rolled back, depending on whether every participating operation succeeded. You could easily write queries that fetched data from different relational sources, and the distributed transaction coordinator model supported transaction control across disparate relational data sources.

In today's world, however, the focus is on modular, self-contained APIs. To make these services self-contained and aligned with the microservices deployment paradigm, the data is also self-contained within the module, loosely coupled from other APIs. Encapsulating the data allows the services to grow independently, but a major problem is dealing with keeping data consistent across the services.

A microservices architecture promotes availability over consistency, hence leveraging a distributed transaction coordinator with a two-phase commit protocol is usually not an option. It gets even more complicated when these loosely-coupled data repositories are not all relational stores, but rather a polyglot mix of relational and non-relational data stores. Achieving an immediately consistent state is difficult. What you should look toward, instead, is an architecture that promotes eventual consistency, such as CQRS and Event Sourcing. An event-driven architecture can support a BASE (basic availability, soft-state, and eventual consistency) transaction model. This is a great technique for handling transactions that span services, and customers can usually be convinced to negotiate the contract towards a BASE model: transactional consistency across functional boundaries is mostly about frequently changing state, and the consistency rules can often be relaxed so that the final state eventually becomes visible to the consumers.
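
As a rough, hypothetical sketch of that event-driven style (the bus interface, event type and service below are illustrative and not from any particular library), each service commits only to its own store and then publishes an event that other services consume to update theirs:

using System;
using System.Threading.Tasks;

// Hypothetical integration event raised by the Orders service after its own local commit.
public class OrderPlaced
{
    public Guid OrderId { get; set; }
    public string Sku { get; set; }
    public int Quantity { get; set; }
}

// Abstraction over a message broker (queue or topic); the implementation is out of scope here.
public interface IEventBus
{
    Task PublishAsync<T>(T @event);
}

public class OrderService
{
    private readonly IEventBus _bus;

    public OrderService(IEventBus bus)
    {
        _bus = bus;
    }

    public async Task PlaceOrderAsync(Guid orderId, string sku, int quantity)
    {
        // 1. Write the order to this service's own data store (a local ACID transaction only).
        // 2. Publish the event; the inventory service updates its own store when it consumes it,
        //    so the overall system becomes consistent eventually (BASE rather than ACID).
        await _bus.PublishAsync(new OrderPlaced { OrderId = orderId, Sku = sku, Quantity = quantity });
    }
}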


Posted by Gigi Sayfan on July 21, 2016

I have often encountered code littered with lots of nested if and else statements. I consider it a serious code smell. Deeply nested code, especially if the nesting is due to conditionals, is very difficult to follow, wrap your head around and test. It even makes it difficult for the compiler or runtime to optimize. In many cases it is possible to flatten deeply nested code by very simple means. Here are a few examples in Python, but the concepts translate to most languages:

if some_predicate() == True:
    result = True
else:
    result = False
This can be simply replaced with:

result = some_predicate()

Inside loops you can often break out/return or continue and keep the body of the loop shallow.

for i in get_numbers():
    if i <= 50:
        if i % 2 == 0:
            print('Even')
        else:
            print('Odd')

Note that there are two levels of nesting inside the loop. That can be replaced with:

for i in get_numbers():
    if i > 50:
        continue
    msg = 'Even' if i % 2 == 0 else 'Odd'
    print(msg)

I used Python's continue statement to eliminate nesting due to the 50 check and then a ternary expression to eliminate the nesting due to the even/odd check. The result is that the main logic of the loop is not nested two levels deep.

Another big offender is exception handling. Consider this code:

try:
    try:
        f = open(filename)
    except IOError:
        print("Can't open file")
        raise
    do_something_with_file(f)
except Exception as e:
    print('Something went wrong')

I've seen a lot of similar code. The exception handling here doesn't do much good. There is no real recovery or super-clear error message for the user. I always start with the simplest solution, which is to just let the exceptions propagate up to the next level and let that code handle them. In this case, the code becomes:

f = open(filename)
do_something_with_file(f) 

Specifically for files, you may sometimes want to close the file after you're done. In idiomatic Python, the 'with' statement is preferable to try-finally.

with open(filename) as f: 
    do_something_with_file(f) 

Finally, multiple return statements are better than a deep hierarchy of if-else statements and trying to keep track of a result variable that's returned at the end.
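
For example, a hypothetical validation function written with early returns stays completely flat, whereas the nested equivalent would need a result variable and several levels of indentation:

def validate_user(user):
    # Each failure returns immediately, so there is no if/else pyramid and no result variable.
    if user is None:
        return 'missing user'
    if not user.is_active:
        return 'inactive user'
    if user.age < 18:
        return 'user is underage'
    return 'ok'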


Posted by Gigi Sayfan on July 13, 2016

The T-shaped employee is a management and hiring concept regarding the skills of an employee. The vertical bar of the T represents skills in which the employee has deep expertise, and the horizontal bar represents skills in which the employee is less expert, but good enough to collaborate with other people. This concept applies in software development as well.

Successful developers have some expertise, even if it's just from working in a certain domain or being in charge of a particular aspect of the system (e.g. the database). But every successful developer also has to work with other people: use source control, test, read and write design documents, etc. As far as career planning goes, you want to think seriously about your expertise. If you're not careful you may find yourself a master of skills that nobody requires anymore. For example, early in my career I worked on a great tool called PowerBuilder. It was state-of-the-art back then and it still exists to this day, but I would be very surprised if more than a handful of you have ever heard of it. If I had defined myself as a PowerBuilder expert back then, and pursued a career path that focused on PowerBuilder, I would have limited my options significantly (maybe all the way to unemployment). Instead, I chose to focus on more general skills such as object-oriented design and architecture.

Programming language choice is also very important. Visual Basic was the most popular programming language when I started my career. It did evolve into VB.NET eventually, but I haven't heard much about any excitement in this area for quite some time. The same goes for operating systems. Windows was all the rage back then. Today, it's mostly Linux on the backend (along with Windows, of course) and either Web development or mobile apps on the frontend.

Consider the future of your area of current expertise carefully and be ready to switch if necessary. It's fine to develop several separate fields of expertise. If you're dedicated and have a strong software engineering foundation, you can be an expert on any topic within several months or a couple of years. All it takes is to read a little and do a couple of non-trivial projects. I have often delved into a new domain and within three months I could answer, or find out the answer to, most questions people asked on various public forums. My advice is to invest in timeless skills and core computer science and software engineering first, and then follow your passion and specialize in something about which you care deeply.


Posted by Sandeep Chanda on July 8, 2016

A lot of the focus in C# 6.0 has been on enhancing developer productivity. Attention has been paid to areas where the same functionality can be attained by writing fewer lines of code, making it look cleaner and much easier to maintain. Similarly, the team has also worked on improving the readability of string manipulation in code, in terms of how strings are formatted in output, conceiving the idea of interpolated strings. Then there are improvements in the area of exception handling, in particular the exception filters newly introduced in C# 6.0. In this post we will discuss each of these features.

First, let us explore the idea of expression-bodied members. Expression-bodied members are first-class citizens in C# 6.0 and use a syntax that combines the existing member syntax with the lambda expression syntax. The following code illustrates:

Let's say we have a method that returns the sum of two numbers. A standard syntax could look like:

public int Add(int a, int b)
{
    return a + b;
} 

The same method can be written with an expression body, which is much simpler:

public int Add(int a, int b) => a + b; 

It can be applied to asynchronous operations as well:

public async Task<string> Request() => await Response();

Expression body also makes it simpler to represent properties as illustrated in the following code example:

public DateTime CurrentDateTime => DateTime.Now;

This is the equivalent representation of:

public DateTime CurrentDateTime
{
    get
    {
        return DateTime.Now;
    }
}

Note that there are currently some limitations to leveraging the expression body syntax, especially for branching statements such as if-else and switch. Only the ternary operator works for conditional assignment.
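
For instance, a conditional assignment with the ternary operator still works in an expression-bodied member (a made-up Parity method, shown only to illustrate the shape):

public string Parity(int n) => n % 2 == 0 ? "Even" : "Odd";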

C# 6.0 introduces the idea of interpolated strings, which essentially allow developers to use named variables in strings instead of positional indexes.

The following code illustrates the comparison:

string name = string.Format("name: {0} {1}", person.FirstName, person.LastName);

In the interpolated format you can use:

string interpolatedName = $"name: {person.FirstName} {person.LastName}";

This is a much cleaner way to represent formatted strings in code.

Finally, C# 6.0 also introduces exception filters, which allow you to filter exceptions based on a particular condition using the "when" keyword. This is not only a syntactical improvement for productivity and cleaner code, it also preserves the stack trace, providing better traceability than catching the exception and using an if condition to represent the equivalent filter in regular syntax.

try
{
    // try body
}
catch (HttpException ex) when (ex.ErrorCode == 500)
{
    // handle exception
}


Posted by Sandeep Chanda on June 29, 2016

With most SharePoint development now focused on leveraging the Client Side Object Model (CSOM), guidance from the community on best practices for leveraging the model from JavaScript was long overdue. The Office 365 Developer Patterns and Practices team has recently announced the release of a JavaScript Core Library that packages some of the common practices and accelerates SharePoint development using client-side technologies.

The library provides fluent APIs to perform CSOM operations. In addition, it also supports the ES6 promise specification for chaining asynchronous operations. The library works perfectly within a SharePoint Script Editor Web Part as well as with a module loader like RequireJS.

To configure it, first add the Node.js package to your project using npm:

npm install sp-pnp-js --save-dev

Once you have configured the package, you can import the root object and start interacting with the API. You can also leverage the API from within a Visual Studio TypeScript project. First you need to add the requirejs NuGet package and then use the module loader to load the pnp library.

Here is the requirejs code illustrating the module dependencies:

require(["jquery", "pnp", "fetch", "es6-promise.min"], function ($, app) {

    $(function () {
        app.render($("#content"), {
            "jquery": $
        });
    });

});

You will notice that apart from the module dependencies for jquery and the app launcher, there are additional dependencies on the fetch and es6-promise modules. The fetch library supports cross-origin request/response against an API. The es6-promise library allows you to chain requests using the promise style of programming in JavaScript.

Here is a sample app code leveraging the pnp module:

import pnp from "pnp";

class App {

    render(element: HTMLElement, preloadedModules: any) {

        let $ = preloadedModules["jquery"];

        // get() returns a promise, so append the web title once it resolves
        pnp.sp.web.select("Title").get().then((web) => {
            $(element).append(web.Title);
        });
    }
}

You can also leverage the promise style as shown in the example below:

pnp.sp.crossDomainWeb().select("Title").get().then(function (result) {
    // perform further operations on result
});


Posted by Gigi Sayfan on June 28, 2016

Xamarin creates mobile app development tools that are built on top of the Mono Project. Xamarin has always provided, arguably, the most polished cross-platform development environment, but it was pretty pricey. Recently Microsoft acquired Xamarin, and in the new spirit of openness Microsoft has made Xamarin free. That means it costs nothing for developers, and you can also look at the code and even contribute if you're so inclined.

There are some services that you still need to pay for, such as Xamarin Test Cloud and training at Xamarin University. But those are extras most developers and organizations can do without. The organizations that do require them usually can afford to pay for them.

Why is it such a big deal? Xamarin provides a mature, well-thought-out and well-engineered solution for cross-platform app development.

With Xamarin, you develop in C# and have the power of the .NET framework behind you. Xamarin does the heavy lifting of translating your C# code to the native mobile OS. You can target iOS, Android and, of course, Windows Phone. Xamarin provides an interesting mix of approaches. You get cross-platform capability with Xamarin.Forms, which gives you the native look and feel, and you can also get full access to each target platform's capabilities using Xamarin.iOS and Xamarin.Android. The main benefit is that you can start prototyping, and even begin actual development quickly, for all supported platforms using Xamarin.Forms, knowing that if you do need to write low-level platform-specific code this route is always open to you and it will integrate cleanly with the cross-platform code.
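
To give a flavor of the shared-code approach, here is a bare-bones sketch (not a complete project) of a Xamarin.Forms page written once in C# and rendered with native controls on each platform:

using Xamarin.Forms;

// A single page defined in shared code; Xamarin.Forms maps it to native UI on each target platform.
public class HelloPage : ContentPage
{
    public HelloPage()
    {
        Content = new Label
        {
            Text = "Hello from shared C# code",
            HorizontalOptions = LayoutOptions.Center,
            VerticalOptions = LayoutOptions.Center
        };
    }
}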


Posted by Gigi Sayfan on June 24, 2016

The traditional view of productivity and how to improve it is completely backwards. Most people think of productivity as a personal attribute of themselves (or their subordinates). X does twice as much as Y, or yesterday I had a really good day and accomplished much more than usual. This is a very limited view and it doesn't touch on the real issue.

The real issue is organizational productivity. The bottom line. The level of unproductivity increases with scale. This is nothing new. It is one of the reasons that miniature startups can beat multi-billion-dollar corporations. But, most organizations look at the inefficiencies introduced with scale as a process or a communication problem, "If you improve the process or the communication between groups, then you'll improve your situation." There is some merit to this idea, but in the end, larger organizations still have a much greater amount of unproductivity compared to smaller organizations.

The individual employees at the bottom of an organizational hierarchy work approximately as hard as individual startup employees. The middle management does its thing, but is unable to eliminate or significantly reduce this large-organization unproductivity tax. In some cases, there is a business justification for the larger organization to move more slowly. For example, if you have a mature product used by a large, and mostly happy, user base then you don't want to change the UI every three months and frustrate your existing users with weird design experiments. You might want to do A-B testing on a small percentage, however. The current thinking is that this is unavoidable. Large companies just can't innovate quickly. Large companies either create internal autonomous "startups" or acquire startups and try to integrate them. But, both approaches miss out on important benefits.

I don't have an answer, just a nagging feeling that we shouldn't accept this unproductivity tax as an axiom. I look forward to some creative approaches that will let big companies innovate at startup-like speeds, while maintaining the advantages of scale.


Posted by Sandeep Chanda on June 22, 2016

While there are several scenarios that may require you to run .NET code from within Node.js, such as programming against a Windows-specific interface or running a T-SQL query, there are also scenarios where you might have to execute Node.js code from a .NET application. The most obvious one is where you have to return results from the .NET code to the calling Node script using a callback function, but there are other possibilities, such as hybrid teams working on processes that run both Node and .NET applications. With Node.js getting a fairly large share of server-side development in recent years, such hybrid development could become commonplace.

Edge.js really solves the problem of marshalling between .NET and Node.js (using the V8 engine and the .NET CLR), thereby allowing each of these server-side platforms to run in-process with one another on Windows, Linux and Mac. Edge can compile CLR code (primarily C#, but it could compile any CLR-supported language) and provides an asynchronous mechanism for interoperable scripts. Edge.js allows you to marshal not only data but also JavaScript proxies, which map on the .NET side to the Func<object, Task<object>> delegate.

To install Edge.js in your .NET application, you can use the NuGet package.
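
For example, from the Visual Studio Package Manager Console (assuming the package ID is Edge.js):

Install-Package Edge.js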

Once you have successfully installed the package, you will see the Edge folder appearing in your solution.

You can then reference the EdgeJs namespace in your class files. The following code illustrates:
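
A minimal sketch, modeled on the Edge.js documentation (the Edge.Func factory in the EdgeJs namespace compiles the embedded Node.js function into a Func<object, Task<object>> that .NET can await), looks roughly like this:

using System;
using System.Threading.Tasks;
using EdgeJs;

class Program
{
    public static async Task Start()
    {
        // Edge.Func compiles the embedded Node.js function and returns a
        // Func<object, Task<object>> proxy that .NET code can await.
        var greet = Edge.Func(@"
            return function (data, callback) {
                callback(null, 'Node.js welcomes ' + data);
            }");

        Console.WriteLine(await greet(".NET"));
    }

    static void Main(string[] args)
    {
        Start().Wait();
    }
}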

Note how the code uses the .NET CLR async/await mechanism to asynchronously call back into a JavaScript function using Node.js and Edge.js. This opens up several possibilities for calling server-side JavaScript from a .NET application using Edge.


Posted by Gigi Sayfan on June 16, 2016

In today's information-rich world, people read more than ever. We are constantly bombarded with text. Software developers, in particular, read a great deal. But, what part do books play in all this reading? Also, what is a book exactly these days?

I was always an avid reader. I read a lot in general, and software development books were my preferred channel for improving my knowledge and understanding. Back then, the Internet had barely started reaching the mainstream. Companies had libraries and developers had stacks of books on their desks with lots of post-it notes and highlighted sections. Browsing meant physically turning pages in a book. The equivalent of Stack Overflow was asking the department genius. Fast forward to the present, and developers have an overwhelming number of options for accessing information across all dimensions: programming languages, frameworks, databases and methodologies.

The pace of innovation in all of these areas seems to have increased as well. How can a developer make sense out of this abundance? Many developers give up and don't try to understand things in depth. They focus on getting the job done, following architectures and patterns designed by others, using frameworks that encapsulate many operational best practices and assembling loosely-coupled components. When they need to address a specific problem they look for a similar project on GitHub, a Stack Overflow answer or a blog. This is not necessarily a bad thing. A small number of people write the foundational frameworks and libraries and many other people reap the benefits. This shows maturity and advances in ergonomic design. The 90's holy grail of reuse is finally here. But, that leaves software development books in an awkward position. They are not a useful medium anymore, by and large, for the majority of developers.

There are some books that communicate general concepts well, but most software development books explain how to use a particular framework or tool. Paper books are disappearing fast. Even e-books don't seem to cover these needs. In the past, books tried to keep up-to-date by releasing new versions. But, there is a new trend of "live" books that are constantly updated. This may be the future of software books, but is it really a book anymore?

