Posted by Sandeep Chanda
on May 27, 2016
Ever since the introduction of ASP.NET MVC, and subsequently Web API, there has been some confusion brewing in the .NET Web development community about the versioning practices followed by the platform developers within the realm of ASP.NET. ASP.NET MVC and Web API maintained version numbers separate from ASP.NET's and continued to release on their own cadence despite being part of ASP.NET.
The popularity of ASP.NET Web Forms also took a beating, given the ever-increasing demand for ASP.NET MVC and Web API in building enterprise-grade Web applications. This resulted in ASP.NET MVC and Web API garnering more attention from the platform developers, and ultimately in more frequent releases. The release of ASP.NET 5 only added to the confusion, with vNext also being used interchangeably as a term.
Sometime during the beginning of this year, the ASP.NET platform development team decided to drop this nomenclature and completely rebrand ASP.NET 5 as ASP.NET Core 1.0. This came quickly on the heels of rebranding .NET as .NET Core. It is no longer a newer version of an existing Web development framework that is bigger and better than its predecessor. It is a completely new Web platform written from the ground up for .NET Core, and it is actually much more lightweight than ASP.NET 4.6.
While the Core 1.0 version is not as complete as 4.6, with the release of RC2 a few weeks back the framework is coming really close to general availability. There are still significant gaps between what 4.6 offers and what is available in Core 1.0, but it is nevertheless a unified platform, with MVC and Web API being part of it rather than branded as separate frameworks. This is very promising indeed!
The biggest change in RC2 is the new .NET CLI, which replaces DNX, the cross-platform execution environment for running applications on Windows, Mac, and Linux. RC2 has also updated the hosting mechanism to a console app, giving developers more flexibility in controlling the way their Core app runs and making the tool chain consistent for both .NET Core and ASP.NET Core. ASP.NET Core provides a WebHostBuilder class that gives you the power to configure your Web application the way you want it, including the ability to optionally host it on IIS. In addition to these groundbreaking changes, RC2 also gives you the ability to host your ASP.NET Core applications in Azure.
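To illustrate the console-app hosting model, here is a minimal sketch along the lines of the RC2 project templates, where Startup is the conventional class that configures the application's services and request pipeline:

using System.IO;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        // Compose the host: Kestrel as the Web server, with optional IIS integration
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseIISIntegration()
            .UseStartup<Startup>()
            .Build();

        // The app is just a console program that runs the Web host
        host.Run();
    }
}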
A paradigm shift in Web development is coming our way with ASP.NET Core, and at this point we are eagerly awaiting the RTM release!
Posted by Gigi Sayfan
on May 25, 2016
My favorite Agile methodology, Extreme Programming, has five main values: Simplicity, Communication, Feedback, Respect and Courage. The first four are relatively uncontested — everybody agrees on them. Courage is different. Courage ostensibly flies in the face of safety, security, stability and risk mitigation. But this is not what courage is about. Courage is actually about trust. Trust yourself, trust your team, trust your methods and trust your tools. Trusting doesn't mean being gullible or blindly believing things will somehow work out. Your trust must be based on solid facts, knowledge and experience. If you have never done something, you can't just trust that you'll succeed. If you join a new team, you can't trust them to pull through in difficult times. Once you gain trust, you can be courageous and push the envelope, knowing that if anything goes wrong you have a safety net.
When trying something new, always make sure you have a plan B or a fallback. Start by asking "what if it fails?" This is not being pessimistic. This is being realistic. Sometimes the downside is so minimal that there is no need to take any measures. If it fails, it fails and no harm done. Let's consider a concrete example: suppose you decide with your team that you need to switch to a new NoSQL database with a different API. What can go wrong? Plenty.
The new DB may have some serious problems you only detect after migration. The migration may take longer than anticipated. Major query logic may turn out to depend on special properties of the original DB. How can you even contemplate something like that? Well, if your data access is localized to a small number of modules, if you can test the new DB side-by-side, and if you have a pretty good understanding of how the new DB works, then you should be confident enough to give it a go. To summarize — being courageous is not taking unacceptable risks and being impervious to failure. It is taking well-measured risks based on sound analysis of the situation and full trust that you know what to do if things go south.
Posted by Gigi Sayfan
on May 18, 2016
Software is infamously hard. This notion dates back to the software crisis of the late 1960s and 1970s and to Fred Brooks' "No Silver Bullet" paper. The bigger the system, the more complicated it is to build software for it. The traditional trajectory is to build a system to spec and then watch it decay and rot over time until it is impossible to add new features or fix bugs due to the overwhelming complexity.
But, it doesn't have to be this way. Robust software (per my own definition) is a software system that gets better and better over time. Its architecture becomes simpler and more generic as it incorporates more real-world use cases. Its test suite gets more comprehensive as it checks more combinations of inputs and environments. Its real-world performance improves as insights into usage patterns allow for pragmatic optimizations. Its technology stack and third-party dependencies get upgraded to take advantage of their improvements. The team gets more familiar with the general architecture of the system and the business domain (working on such a system is a pleasure, so churn will be low). The operations team gathers experience, automates more and more processes, and builds or incorporates existing tools to manage the system.
The team develops APIs to expose the functionality in a loosely coupled way and integrates with external systems. This may sound like a pipe dream to some of you, but it is possible. It takes a lot of commitment and confidence, but the cool thing is that if you're able to follow this route you'll produce software that is not only high quality but also fast to develop and adapt to different business needs. It does take a significant amount of experience to balance the trade-offs between infrastructure and application needs, but if you can pull it off, you will be handsomely rewarded. The first step in your journey is to realize the status quo is broken.
Posted by Sandeep Chanda
on May 16, 2016
The Azure Internet of Things team has recently open sourced the gateway SDKs that can be used to build and deploy applications for Azure IoT.
Two classes of SDKs have been made available: the device SDK, which allows developers to connect client devices to the Azure IoT Hub, and the service SDK, which enables management of the IoT service instances in your hub. The device SDK supports a range of operating systems running on low-fidelity devices that typically support network communication, can establish a secure communication channel with the IoT Hub, are able to generate secure tokens for authentication, and have a memory footprint of at least 64 KB of RAM.
The device SDK is available in C, .NET, Java, Node.js, and Python, while the service SDK is currently available in .NET, Node.js, and Java. To register clients using the device SDK, you first create an IoT Hub instance in Azure using the management portal and then use the connection string of your IoT Hub to register a new device. If you reference the .NET SDK, you can use the Microsoft.Azure.Devices.Client library, which exposes various methods to interact with the gateway, such as the Create method to create a device client and SendEventAsync to send an event to the device hub. The Microsoft.Azure.Devices.Client library supports both the AMQP and HTTPS protocols. Messages can also be sent in batches using the SendEventBatchAsync method, which sends a collection of Message objects to the device hub.
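As a rough sketch of that device-side flow, assuming the standard Microsoft.Azure.Devices.Client API (the connection string and telemetry payload are placeholders):

using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;

class DeviceSample
{
    static async Task SendTelemetryAsync()
    {
        // Placeholder connection string obtained from the device registration
        var deviceClient = DeviceClient.CreateFromConnectionString(
            "HostName=<your-hub>.azure-devices.net;DeviceId=<device-id>;SharedAccessKey=<key>");

        // Wrap the payload in a Message and send it to the IoT Hub
        var message = new Message(Encoding.UTF8.GetBytes("{\"temperature\": 22.5}"));
        await deviceClient.SendEventAsync(message);
    }
}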
The service SDK is available as the Microsoft.Azure.Devices library. You can use the RegistryManager class to register a device.
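A minimal sketch of that registration, assuming the standard Microsoft.Azure.Devices API surface (the connection string and device ID are parameters you supply):

using System.Threading.Tasks;
using Microsoft.Azure.Devices;
using Microsoft.Azure.Devices.Common.Exceptions;

class RegistrySample
{
    static async Task<Device> RegisterDeviceAsync(string connectionString, string deviceId)
    {
        // The registry manager talks to the IoT Hub's device identity store
        var registryManager = RegistryManager.CreateFromConnectionString(connectionString);
        try
        {
            return await registryManager.AddDeviceAsync(new Device(deviceId));
        }
        catch (DeviceAlreadyExistsException)
        {
            // Fall back to the existing registration
            return await registryManager.GetDeviceAsync(deviceId);
        }
    }
}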
To receive device-to-cloud messages, you can create a receiver using the EventHubClient class and use the ReceiveAsync method to start receiving event data asynchronously.
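A sketch of the receiving side, assuming the Event Hub-compatible endpoint that IoT Hub exposes and the Microsoft.ServiceBus.Messaging client (the partition ID is illustrative; production code would read from all partitions):

using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

class ReceiverSample
{
    static async Task ReceiveOneAsync(string connectionString)
    {
        // "messages/events" is the Event Hub-compatible endpoint of the IoT Hub
        var eventHubClient = EventHubClient.CreateFromConnectionString(connectionString, "messages/events");

        // Read the next event from a single partition
        var receiver = eventHubClient.GetDefaultConsumerGroup().CreateReceiver("0");
        EventData eventData = await receiver.ReceiveAsync();
    }
}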
You can clone the IoT Gateway SDKs repository from GitHub and customize it for your own gateway solutions using Azure IoT.
The Azure IoT Gateway is promising because, while developers can connect many devices to IoT platforms directly, there are many scenarios that require edge intelligence, e.g., sensors that cannot connect to the cloud on their own. The IoT Gateway SDKs make it simple for developers to build custom on-premise computation wherever a standard solution doesn't work.
Posted by Gigi Sayfan
on May 12, 2016
"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." — Brian Kernighan (co-inventor of the C programming language).
You shouldn't take it literally, of course, but there is a grain of truth to both parts: debugging is harder than programming, and if you write your code too cleverly, you'll have trouble debugging it. One lesson to take away is that you should write clear and simple code that is easy to understand and troubleshoot when something goes wrong.
Another way to look at it is that when something does go wrong, you want all the help you can get to fix it. One of the main Agile practices is pair programming, where two programmers collaborate in front of a single screen and keyboard (probably the least popular practice in the Agile arsenal). Pair debugging takes all the goodness of pair programming into the debugging experience. You don't have to feel you're all alone against the inscrutable code. Your partner will help you through tight spots and see things you don't notice. Sometimes just explaining the problem to your partner may shed light on what's going on. I've had a very good track record where the combined efforts of a pair found the root cause after a single programmer had been stuck for a long time banging their head against the metaphorical wall.
My recommendation is that when you face a difficult problem, try to solve it on your own, but as soon as you run out of ideas or feel overwhelmed, call in the cavalry. Describe what you know and what you have done so far, and you'll be surprised how often that alone will get you unstuck. If not, your partner will probably have some ideas about how to move forward. Happy pair debugging.
Posted by Sandeep Chanda
on May 5, 2016
During the Build 2016 conference, Vittorio Bertocci, Principal Program Manager in the Microsoft Identity division, announced the availability of a new authentication library named MSAL (Microsoft Authentication Library). It is poised to become one unified library that provides a single programming model for different identity providers such as Microsoft Accounts and Azure Active Directory.
MSAL finds its origins in ADAL, which was tailored to work exclusively with Azure AD and ADFS. MSAL is better in that it supports apps agnostic of whether the authority is an MSA or any Azure AD tenant. It also provides better protocol compliance and overcomes some of ADAL's issues, such as working with the cache in multi-tenant applications. Another feature that makes it a universal identity library is its support for standards-defined scopes instead of resources proprietary to Active Directory. With MSAL you don't need to know the underlying protocols, such as OAuth and OpenID Connect, in depth. It provides the necessary wrappers for you to program against the library and perform identity-related operations at a high level without having to know many details of the native protocols. Notably, multi-factor authentication is supported out of the box. Overall, however, the most fascinating feature of this library is the ability for the app to ask for permissions incrementally, along with support for transparent refresh tokens.
The two primary client classes exposed by MSAL are:
- PublicClientApplication — used for desktop clients and mobile apps
- ConfidentialClientApplication — for server side apps and other web based resources
You can start using MSAL against the new authority endpoint. Note that you need to register your app first and get the client ID. The new endpoint supports both personal and work accounts. During the authentication process you receive both the sign-in info and an authorization code that can be used to obtain an access token. In a single sign-on scenario, that token can be used to access other secured resources that are part of the same sign-in. The following code illustrates how the ConfidentialClientApplication primitive is used to fetch the token and access the resource securely:
// clientId and appKey come from the app registration; MSALSessionCache is a
// sample token-cache implementation backed by the HTTP session
ConfidentialClientApplication clientApp = new ConfidentialClientApplication(
    clientId, null, new ClientCredential(appKey),
    new MSALSessionCache(userId, this.HttpContext));
You can then use the AcquireTokenSilentAsync method to get the token by asking for the scopes you need.
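For instance, a minimal sketch assuming the preview-era MSAL API surface (the scope value is purely illustrative):

// Scopes replace ADAL's resource identifiers; this value is illustrative
string[] scopes = { "https://outlook.office.com/mail.read" };

// Returns a token from the cache silently, refreshing it transparently if needed
AuthenticationResult result = await clientApp.AcquireTokenSilentAsync(scopes);
// result carries the access token to attach as a Bearer header on API calls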
Posted by Gigi Sayfan
on May 2, 2016
The ability to hire excellent software developers is a limiting factor. There is a real shortage, and as a result there is serious competition for every qualified candidate. As a manager on the lookout for talented developers, I experience it personally. There are two very clear trends here. First, candidates who put themselves on the market get queries from more companies than they can handle. One hiring platform that I use shows competing offers, and it is common to see a well-qualified candidate receiving 20+ invitations for initial interviews. The other trend is that the big companies out there (Google, Facebook, Uber, etc.) use their war chests to offer unheard-of levels of compensation. An engineer decides which company to join based on many factors. But if a big company offers $50K - $100K more than competing offers (in total compensation), it's very likely the engineer will pick the big company. How can a small startup compete? By using its relative strength: speed. This is where Agile hiring methods can give you the advantage.
What if you compress the entire hiring process so that you can get an actual job offer out to a candidate before they have even had a chance to set up phone interviews with most of the other companies that might be interested in them? In the best-case scenario, your competitors are so slow that they haven't even gotten around to sending the initial invitation. I will not discuss here how to find those candidates.
Let's start from the assumption that you have some source of candidates. With Agile hiring your focus is to minimize the time from zero to offer (or rejection). The typical hiring process consists of:
- Review of resume/application/cover letter
- Phone interview[s]
- Coding challenge
- Face-to-face interviews
- Reference checks
- Background check
- Offer letter
The goal is to avoid wasting valuable time by rejecting inadequate candidates early in the process. Let's see what we can get rid of. The phone interview can go away. I very rarely reject a person based on the phone interview. If the original application is solid, the candidate will make it to the next phase. The coding challenge can go away if the candidate has a decent GitHub account. The reference checks can go away. I never got any useful information whatsoever from a reference the candidate provided. So, here is the Agile process:
- Applications are processed by an experienced technical person who can look beyond buzzwords and evaluate coding competency based on public code.
- Immediate invitation to face-to-face interviews, or if problematic, video conference interviews (easier to schedule).
- Before the interview, exchange information with the candidate about expected compensation and role. (This may be a phone conversation, but not a gating review that delays the face-to-face interview that was already scheduled.)
- At the end of the face-to-face interview the candidate should have an offer in hand.
Negotiation may take place and background checks can be conducted later. That's it. You could have a new engineer within 2-3 days of first contact vs. 2-3 months. Think about it. If you're worried about wasting people's time with too many interviews, consider having fewer people interview each candidate. It is not a rule of nature that every team member must meet and approve every candidate. If you are confident in your filtering abilities, then maybe just the hiring manager and two other people need to speak with the candidate.
Posted by Sandeep Chanda
on April 28, 2016
Alberto Brandolini has been evangelizing the idea of EventStorming for a while now. It is a powerful workshop format for breaking down complex business problems pertaining to real-world scenarios. The idea took its shape from the Event Sourcing implementation style laid out by Domain-Driven Design (DDD). The workshop format produces a model perfectly aligned with the ideas of DDD and lets you identify aggregate and context boundaries fairly quickly. The approach also leverages easy-to-use notations and doesn't require UML, which in itself might be a deterrent for workshop participants who are not familiar with UML notation.
The core idea of EventStorming is to make the workshop more engaging and evoke thought-provoking responses from the participants. A lot of the time, discovery is superficial and figuring out the details is deferred until later. By contrast, EventStorming allows participants to ask some very deep questions about the business problem that were very likely already playing in their subconscious minds. It creates an atmosphere where the right questions can arise.
A core theme of this approach is unlimited modeling space. Modeling complex business problems is often constrained by space limitations (usually the whiteboard), but this approach allows anything to be leveraged as a surface on which the problem can be modeled. You may pick anything that comes in handy and helps you get rid of the space limitations.
Another core theme of this approach is the focus on Domain Events. Domain Events represent meaningful actions in the domain with suitable predecessors and successors. Placing events in a timeline on a surface allows people to visualize upstream and downstream activities and model the flow easily in their minds. Domain events are further annotated with user actions that are represented as Commands. You can also color code the representation to distinguish between user actions and system commands.
The next aspect to investigate is Aggregates. Aggregates here should represent a section of the system that receives the commands and decides on their execution. Aggregates produce Domain Events.
While Domain Events are key to this exploration technique, along the way you are also encouraged and motivated to explore Subdomains, Bounded Contexts, and User Personas. Subsequently, you also look at Acceptance Tests to remove any ambiguity arising from edge-case scenarios.
Posted by Gigi Sayfan
on April 27, 2016
In recent years the database scene has been boiling. New databases for special purposes seem to emerge every day, and for good reason. Data processing needs have become more and more specialized, and at scale you often need to store your data in a way that reflects its structure in order to support the right queries. The default of storing everything in a relational database is often not the right solution. Graphs, which are a superset of trees and hierarchies, are a very natural way to represent many real-world concepts.
Facebook and its social graph brought graphs to center stage, but graphs, trees and hierarchies have always been important. It is possible to model graphs pretty easily with a relational database, because a graph is really just vertices connected by edges, which map very well to the entity-relational model. But performance is a different story at scale and with deep hierarchies.
Many graph databases exist today at different levels of maturity. One of the most interesting developments in this domain is the TinkerPop Apache incubator project, a graph computing framework with active industry contribution. Check out the getting-started tutorial to get familiar with the terrain.
Other interesting projects include DataStax (of Cassandra fame) acquiring the team behind TitanDB and incorporating it, and of course Facebook's GraphQL. And there is always Neo4j, which has a mature, enterprise-ready solution and an impressive list of customers.
The exciting part is figuring out how to combine graph databases with other data systems and how to construct viable data-intensive systems that scale well to address new data challenges, such as the coming IoT era, in which sensors will generate unprecedented amounts of data.
Posted by Sandeep Chanda
on April 20, 2016
At the recently concluded Build conference, the Microsoft Azure team announced the preview of Functions, an app service that allows you to execute code on demand. Azure Functions is event-driven and allows serverless execution of code triggered by an event or a coordinated set of events. It allows you to extend the application platform capabilities of Azure without having to worry about compute or storage. The events don't have to occur only in Azure; they can come from virtually any environment, including on-premise systems. You can also connect Functions into a data processing or messaging pipeline, thereby executing code asynchronously as part of a workflow. In addition, Azure Function apps can scale on demand, allowing you to pay for only what you use.
Azure Functions offers support for a myriad of languages, such as PHP, C# and most of the scripting languages (Bash and PowerShell, for example). You can also upload and trigger pre-compiled code and support dependencies using NuGet and NPM. Azure Functions can also run in a secure setting by making them part of an app service environment configured for a private network. Out of the box, it supports OAuth providers such as Azure Active Directory, Facebook, Google, etc.
To create a Function app, log in to your Azure portal and search for Functions in the Marketplace search bar. Select the Function App service, provide a name, select a resource group, and create the app.
Once you create a Function app, an Application Insights monitoring app is also created and attached to monitor the health of the Function app. After the Function app is deployed, you can navigate to the Function App console from the dashboard. The console has two windows: the Code window (an editor with a code highlighter), where you put your script, and the Logs window, where you verify the outcome.
You can then integrate the Function into an event-driven workflow.
You can configure an input and an output for the trigger in the Integrate tab. In this example, a storage queue is set as the default trigger, but you can set it to be a Schedule, a Webhook, or Push Notifications, among others, as shown in the sketch below.
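For the storage-queue case, a queue-triggered function in the scripted C# model of the preview looks roughly like this minimal sketch (the function body and message handling are illustrative):

// run.csx: a queue-triggered C# script function
using System;

public static void Run(string myQueueItem, TraceWriter log)
{
    // myQueueItem is bound to the message pulled off the storage queue
    log.Info($"C# queue trigger processed message: {myQueueItem}");
}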
There are also a bunch of predefined templates you can use so that you don't have to create your function from scratch.
Additionally, there are several advanced settings available for you to configure, such as setting up authentication and authorization, integrating with source control for continuous integration, enabling CORS, and providing API definitions to allow clients to call the functions easily.
In a world heavily dominated by the likes of Amazon, Google, and IBM, releasing Azure Functions is a good move for Microsoft, providing more options for developers and enterprises to choose from as they extend their PaaS implementations.