Posted by Sandeep Chanda on September 15, 2014

NuGet has become a popular mechanism for publishing and distributing packaged components to be consumed by Visual Studio projects and solutions. Releases from the Microsoft product teams are increasingly being distributed as NuGet packages, and NuGet is officially the package manager for the Microsoft development platform, including .NET.

NuGet.org is the central package repository used by authors and consumers for global open distribution. One limitation of the central repository is that, in large-scale enterprise teams, it often results in package version mismatches across teams, solutions, and projects. If not managed early, this spirals into a significant application versioning problem for release managers during deployment.

One approach to solving this problem is to provision a local NuGet server for your enterprise. It mimics the central repository, but it remains under the control of your release managers, who decide which package versions to release to consumers. The idea is that your Visual Studio users point to the local NuGet server instead of the central repository, and the release management team controls which package versions the teams use, ensuring consistency. The following figure illustrates the process:

It is very easy to create a NuGet server. You can then use the nuget command-line tool to publish packages; you will need an API key and the host URL.
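
For example, publishing a package to your internal server comes down to a single nuget push command. The package file name, API key, and server URL below are placeholders for your own values:

nuget push MyCompany.Logging.1.0.0.nupkg -ApiKey <your-api-key> -Source http://nuget.internal.example.com/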

Developers using Visual Studio can go to Tools  →  Options  →  NuGet Package Manager → Package Sources and add the internal package server as a source.
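
The same source can also be registered from the command line, which is convenient for build machines; the source name and URL below are placeholders:

nuget sources Add -Name "Internal Packages" -Source http://nuget.internal.example.com/nuget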

While local NuGet servers are used today as a mechanism for distributing internal packages, they can also be extended into a gated process for distributing global packages, bringing consistency to the versions used across teams.


Posted by Jason Bloomberg on September 10, 2014

The new Apple Watch has many cool features to be sure, but I just don’t like the fact that Apple discriminates on the basis of handedness.

The Apple Watch comes in a right-handed configuration. Yes, there’s a left-handed setting, but you need to switch the band around, and then the button on the side is in an awkward lower position.

In other words, left-handed people either have to suck it up and use the watch in the right-handed configuration, or go through the hassle of reconfiguring it only to end up with an inferior design. Thanks a lot, Apple. But hey, we're only lefties, and we're only being inconvenienced.

We should be used to it, right? After all, user interfaces have been right-handed for years. To this day the arrow cursor is right-handed, and scrollbars are always on the right. And for software that does have a left-handed configuration, more often than not some aspect of the UI doesn’t work properly in left-handed mode.

If we were a legally protected minority then it wouldn't be a question of being inconvenienced, right? Were separate water fountains simply inconvenient?

10% of the population is left-handed. And all us lefties know that left-handedness correlates with intelligence, so I wouldn’t be surprised if the percentage is higher within the storied walls of Apple. So, why didn’t Apple release a left-handed version of the Apple Watch?

I think Apple is being offensive by paying lip service to handedness, but giving lefties a second-class experience nevertheless. But that's just me. Who cares what lefties think?


Posted by Jason Bloomberg on September 5, 2014

“Never believe anything you read on the Internet.” – Abraham Lincoln

Honest Abe never spoke a truer word – even though he didn’t say anything of the sort, of course. And while we can immediately tell this familiar Facebook saying is merely a joke, there are many documents on the Internet that have such a veneer of respectability that we’re tempted to take them at their word – even though they may be just as full of nonsense as the presidential proclamation above.

Among the worst offenders are survey reports, especially when they are surveys of professionals about emerging technologies or approaches. Fortunately, it’s possible to see through the bluster, if you know the difference between a good survey and a bad one. Forewarned is forearmed, as the saying goes – even though Lincoln probably didn’t say that.

The Basics of a Good Survey

The core notion of a survey is that a reputable firm asks questions of a group of people who represent a larger population. If the surveyed group accurately represents the larger population, the answers are truthful, and the questions are good, then the results are likely to be relatively accurate (although statistical error is always a factor). Unfortunately, all of these criteria present an opportunity for problems. Here are a few things to look for.

Does the sample group represent the larger population? The key here is that the sample group must be selected randomly from the population, and any deviation from randomness must be compensated for in the analysis. Ensuring randomness, however, is quite difficult, since respondents may or may not want to participate, or may or may not be easy to find or identify.

Here’s how reputable political pollsters handle deviations from randomness. First, they have existing demographic data about the population in question (say, voters in a county). Based on census data, they know what percent are male and female, what percent are registered Democrat or Republican, what the age distribution of the population is, etc. Then they select, say, 100 telephone numbers at random in the county, and call each of them. Some go to voicemail or don’t answer, and many people who do answer refuse to participate. For the ones that do participate, they ask demographic questions as well as the questions the survey is actually interested in. If they find, say, that 50% of voters in a county are female, but 65% of respondents were female, they have to adjust the results accordingly. Making such adjustments for all factors – including who has phones, which numbers are mobile, etc. – is complex and error-prone, but is the best they can do to get the most accurate result possible.
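
To make that adjustment concrete, here is a minimal sketch of the post-stratification arithmetic described above. The 50%/65% figures simply mirror the example in the paragraph, and a real pollster would weight across many factors at once:

using System;

class SurveyWeighting
{
    static void Main()
    {
        // Known population shares (e.g., from census data) vs. observed sample shares.
        double populationFemale = 0.50, sampleFemale = 0.65;
        double populationMale = 0.50, sampleMale = 0.35;

        // Each respondent's answers are weighted by (population share / sample share),
        // so an over-represented group counts less and an under-represented group counts more.
        double femaleWeight = populationFemale / sampleFemale; // roughly 0.77
        double maleWeight = populationMale / sampleMale;       // roughly 1.43

        Console.WriteLine("Female weight: {0:F2}, male weight: {1:F2}", femaleWeight, maleWeight);
    }
}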

Compare that political polling selection process to how, say, Digital Transformation, Big Data, or Cloud Computing adoption surveys assemble their populations. Perhaps the survey company emails their mailing list and asks for volunteers. Maybe it’s a Web page or a document handed out at a conference. Or worst of all, perhaps survey participants are hand-selected by the sponsor of the survey. None of these methods produces a sample that’s even close to being random. The result? The results of the survey cannot be expected to represent the opinions of any population other than the survey participants themselves.

Are the answers truthful? I’m willing to posit that people are generally honest folks, so the real question here is, what motivations would people have not to be completely honest on a survey? For emerging technologies and approaches the honesty question is especially important, because people like to think they’re adopting some new buzzword, even if they’re not. Furthermore, people like to think they understand a new buzzword, even if they don’t. People also tend to exaggerate their adoption: they may say they’re “advanced Cloud adopters” when they simply use online email, for example. Finally, executives may have different responses than people in the trenches. CIOs are more likely to say they’re doing DevOps than developers in the same organization, for example.

Are the questions good? This criterion is the most subtle, as the answer largely amounts to a matter of opinion. If the surveying company or the sponsor thinks the questions are good, then aren’t they? Perhaps, but the real question here is one of ulterior motives. Is the sponsor looking for the survey to achieve a particular result, and thus is influencing the questions accordingly? Were certain questions thrown out after responses were received, because those answers didn’t make the surveying company or sponsor happy? If scientific researchers were to exclude certain questions because they didn’t like the results, they’d get fired and blacklisted. Unfortunately, there are no such punishments in the world of business surveys.

So, How Do You Tell?

I always recommend taking surveys with a large grain of salt regardless, but the best way to get a sense of the quality of a survey is to look at the methodology section. The survey you’re wondering about doesn’t have a methodology section, you say? Well, it might be good for wrapping fish, but not much else, since every survey report should have one.

Even if it has one, take a look at it with a critical eye, not just for what it says, but for what it doesn’t say. Then, if some critical bit of information is missing, assume the worst. For example, here is the entire methodology section from a recent Forrester Research “Thought Leadership Paper” survey on Business Transformation commissioned by Tata Consultancy Services (TCS):

In this study, Forrester interviewed 10 business transformation leaders and conducted an online survey of 100 US and UK decision-makers with significant involvement in business transformation projects. Survey participants included Director+ decision-makers in IT and line of business. Questions provided to the participants asked about their goals, metrics, and best practices around business transformation projects. Respondents were offered an incentive as a thank you for time spent on the survey. The study began in February 2014 and was completed in May 2014.

How did Forrester ensure the randomness of their survey sample? They didn’t. Is there any reason to believe the survey sample accurately represents a larger population? Nope. How did they select the people they surveyed? It doesn’t say, except to point out they have significant involvement in business transformation projects. So if we assume the worst, we should assume the respondents were hand-selected by the sponsor. Does the report provide an analysis of the answers to every question asked? It doesn’t say. The methodology statement does point out respondents were offered an incentive for participating, however. This admission indicates Forrester is a reputable firm to be sure, but doesn’t say much for the accuracy or usefulness of the results of the report.

So, what should a business survey report methodology look like? Take a look at this one from the International Finance Corporation (IFC), a member of the World Bank Group. The difference is clear. Consider yourself forewarned!


Posted by Sandeep Chanda on September 3, 2014

Microsoft's recent entry into the world of NoSQL databases has been greeted with considerable fanfare and with mixed reviews from vendors of competing products. What is interesting is that Microsoft chose to build DocumentDB as a new Azure-only offering rather than enhancing its existing table storage capabilities.

DocumentDB is a JSON document-only database as a service. A significant feature of DocumentDB that is missing in its traditional rivals is support for rich queries (including LINQ support) and transactions. What is also interesting is that the new SQL syntax for querying JSON documents automatically recognizes native JavaScript constructs. It also supports programmability features such as user-defined functions, stored procedures, and triggers. Given that it is backed by Azure with high availability and scalability, the offering seems to hold an extremely promising future.

To start, create a new instance of DocumentDB in your Microsoft Azure Preview portal.

Click New in the preview portal and select DocumentDB. Specify a name and additional details like the capacity configuration and resource group. Go ahead and click the Create button to create an instance of DocumentDB. After creating the instance you can get the URI and keys by clicking on the Keys tile.

Done! You are now ready to start using DocumentDB to store and query JSON documents. In Visual Studio, run the following NuGet command in the Package Manager Console to install the prerequisites for programming with DocumentDB.

PM> Install-Package Microsoft.Azure.Documents.Client -Pre
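
With the package installed, a minimal C# sketch might look like the following. The endpoint and key come from the Keys tile in the portal, and the database, collection, and document used here are purely illustrative:

using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

class Program
{
    // Placeholders: copy the real endpoint URI and key from the Keys tile in the portal.
    private const string Endpoint = "https://your-account.documents.azure.com:443/";
    private const string AuthKey = "your-authorization-key";

    static void Main()
    {
        RunAsync().Wait();
    }

    static async Task RunAsync()
    {
        using (var client = new DocumentClient(new Uri(Endpoint), AuthKey))
        {
            // Create a database and a document collection (the names are arbitrary examples).
            var database = (await client.CreateDatabaseAsync(
                new Database { Id = "OrdersDb" })).Resource;
            var collection = (await client.CreateDocumentCollectionAsync(
                database.SelfLink, new DocumentCollection { Id = "Orders" })).Resource;

            // Store a JSON document; any serializable object works.
            await client.CreateDocumentAsync(collection.SelfLink,
                new { id = "1", customer = "Contoso", total = 42.50 });

            // Query it back using the SQL-like syntax over JSON.
            var orders = client.CreateDocumentQuery(collection.SelfLink,
                "SELECT * FROM Orders o WHERE o.customer = 'Contoso'").ToList();

            Console.WriteLine("Found {0} matching document(s).", orders.Count);
        }
    }
}

Because DocumentDB is schema-free, the anonymous object above is simply serialized to JSON and stored as-is.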

If you want to program against it in JavaScript, you can also install the JavaScript SDK from https://github.com/Azure/azure-documentdb-js and then leverage the REST interface to access DocumentDB using permission-based authorization. In a future post, we will look at some of the language constructs in programming with DocumentDB.


Posted by Jason Bloomberg on August 26, 2014

I attended Dataversity’s NoSQL Now! Conference last week, and among the many vendors I spoke with, one story caught my interest. This vendor (who alas must remain nameless) is a leader in the NoSQL database market, specializing in particular in supporting XML as a native file type.

In their upcoming release, however, they’re adding JavaScript support – native JSON as well as Server-Side JavaScript as a language for writing procedures. And while the addition of JavaScript/JSON may be newsworthy in itself, the interesting story here is why they decided to add such support to their database.

True, JavaScript/JSON support is a core feature of competing databases like MongoDB. And yes, customers are asking for this capability. But they don’t want JavaScript support because they think it will solve any business problems better than the XML support the database already offers.

The real reason they’re adding JavaScript support is because developers are demanding it – because they want JSON on their resumes, and because JSON is cool, whereas XML isn’t. So the people actually responsible for buying database technology are asking for JSON support as a recruitment and retention tool.

Will adding JavaScript/JSON support make their database more adept at solving real business problems? Perhaps. But if developers will bolt if your database isn’t cool, then coolness suddenly becomes your business driver, for better or worse. One can only wonder: how many other software features are simply the result of the developer coolness factor, independent of any other value to the businesses footing the bill?


Posted by Sandeep Chanda on August 25, 2014

Enterprise monitoring needs have, over the years, been addressed to a large extent by Microsoft System Center Operations Manager (SCOM). The problem, however, is that SCOM produces a lot of noise, and the data can very quickly become irrelevant for producing actionable information. IT teams easily fall into the trap of configuring SCOM for every possible scheme of alerts, but do not put effective mechanisms in place to improve the alert-to-noise ratio by building a usable knowledge base out of the alerts that SCOM generates. Splunk, together with Hunk (Splunk's analytics offering for Hadoop), can be very useful in the following respects:

  1. Providing actionable analytics using the alert log in the form of self-service dashboards
  2. Isolation of vertical and horizontal monitoring needs
  3. Generating context around alerts or a group of alerts
  4. Collaboration between IT administrators and business analysts
  5. Creating a consistent alerting scale for participating systems
  6. Providing a governance model for iteratively fine-tuning the system

In your enterprise, Splunk could be positioned in a layer above SCOM, where it gets the alert log as input for processing and analysis. This pair can be used to address the following enterprise monitoring needs of an organization:

  1. Global Service Monitoring - Provides information on the overall health of the infrastructure, including surfacing actionable information on disk and CPU usage. It could also be extended to include network performance and the impact specific software applications are having on the health of the system. Splunk will augment SCOM by creating dashboards from the collected data that help drive decisions. For example, looking at CPU usage trends on a timeline, IT owners can decide whether to increase or decrease the core fabric.
  2. Application Performance Monitoring - Splunk can be extremely useful for making business decisions from the instrumentation in your code and the trace logs it generates. You can identify the purchase patterns of your customers, for example. The application logs and alerts generated by custom applications and commercial off-the-shelf (COTS) software can be routed to Splunk via SCOM using the management packs. Splunk can then help you create management dashboards that, in turn, will help the executive team decide the future course of business.

Using Splunk in conjunction with SCOM gives you a very robust enterprise monitoring infrastructure. That said, the true benefit of this stack can be realized only with an appropriate architecture for alert design, process guidance on thresholds, and identification of key performance indicators to improve the signal-to-noise ratio.


Posted by Jason Bloomberg on August 21, 2014

In my latest Cortex newsletter I referred to “tone deaf” corporations who have flexible technology like corporate social media in place, but lack the organizational flexibility to use it properly. The result is a negative customer experience that defeats the entire purpose of interacting with customers.

Not all large corporations are tone deaf, however. So instead of finding an egregious example of tone deafness and lambasting it, I actually found an example of a corporation that uses social media in an exemplary way. Let’s see what Delta Airlines is doing right.

The screenshot above is from the Delta Facebook page. Delta regularly posts promotional and PR pieces to the page, and in this case, they are telling the story of a long-time employee. Giving a human face to the company is a good practice to be sure, but doesn’t leverage the social aspect of Facebook – how Delta handles the comments does.

As often happens, a disgruntled customer decided to post a grievance. Delta could have answered with a formulaic response (tone deaf) or chosen not to respond at all (even more tone deaf). But instead, a real person responded with an on-point apology. Furthermore, this real person signed the response with her name (I’ll assume Alex is female for the sake of simplicity) – so even though she is posting under the Delta corporate account, the customer, as well as everybody else viewing the interchange, knows a human being at Delta is responding.

If Alex’s response ended at a simple apology, however, such a response would still be tone deaf, because it wouldn’t have addressed the problem. But in this case, she also provided a link to the complaints page and actually recommended to the customer that she file a formal complaint. In other words, Delta uses social media to empower its customers – the one who complained, and of course, everyone else who happens to see the link.

It could be argued that Alex was simply handing off the customer to someone else, thus passing the buck. In this case, however, I believe the response was the best that could be expected, as the details of the customer’s complaint aren’t salient for a public forum like social media. Naturally, the complaints Web site might drop the ball, but as far as Delta’s handling of social media, they have shown a mastery of the medium.

So, who is Alex? Is she in customer service or public relations? The answer, of course, is both – which shows a customer-facing organizational strategy at Delta that many other companies struggle with. Where is your customer service? Likely in a call center, which you may have even outsourced. Where is your PR? Likely out of your marketing department, or yes, even outsourced to a PR firm.

How do these respective teams interact with customers? The call center rep follows a script, and if a problem deviates, the rep has to escalate to a manager. Any communications from the PR firm go through several approvals within the firm and at the client before they hit the wire. In other words, the power rests centrally with corporate management.

However, not only does a social media response team like Alex’s bring together customer service and PR, but whatever script she follows can only be a loose guideline, or responses would sound formulaic, and hence tone deaf. Instead, Delta has empowered Alex and her colleagues to take charge of the customer interaction, and in turn, Alex empowers customers to take control of their interactions with Delta.

The secret to corporate social media success? Empowerment. Trust the people on the front lines to interact with customers, and trust the customer as well. Loosen the ties to management. Social media are social, not hierarchical. After all, Digital Transformation is always about transforming people.


Posted by Sandeep Chanda on August 14, 2014

In Visual Studio 2013, the team unified the performance and diagnostics experience (memory profiling, etc.) under one umbrella and named it the Performance and Diagnostics Hub. Available under the Debug menu, this option reduces a lot of the clutter involved in profiling client- and server-side code during a debug operation. There was a lot of visual noise in the IDE in the 2012 version, and the hub is a significant addition for improving developer productivity.

In the Performance and Diagnostics hub, you select the target and specify the performance tools with which you want to run diagnostics. There are various tools that you can use to start capturing performance metrics like CPU usage and memory allocation. You can collect CPU utilization metrics for a Windows Forms or WPF application.

The latest release of Update 3 brings with it some key enhancements to the CPU and memory usage tools. In the CPU usage tool, you can now right-click on a function name that was captured as part of the diagnostics and click View Source. This will allow you to easily navigate to the code that is consuming CPU in your application. The memory usage tool now allows you to capture memory usage for Win32 and WPF applications.

The hub also allows you to identify hot paths in the application code that may be consuming the most CPU cycles and may need refactoring.

You can also look for the functions that are doing the most work, as illustrated in the figure below.

Overall, the Performance and Diagnostics hub has become a useful addition to the developer's arsenal, improving productivity and helping address the non-functional aspects of an application.


Posted by Jason Bloomberg on August 12, 2014

Two stories on the Internet of Things (IoT) caught my eye this week. First, IDC’s prediction that the IoT market will balloon from US$1.9 trillion in 2013 to $7.1 trillion in 2020. Second, the fact it took hackers 15 seconds to hack the Google Nest thermostat – the device Google wants to make the center of the IoT for the home.

These two stories aren’t atypical, either. Gartner has similarly overblown market growth predictions, although they do admit a measure of overhypedness in the IoT market (ya think?). And as far as whether Nest is an unusual instance, unfortunately, the IoT is rife with security problems.

What are we to make of these opposite, potentially contradictory trends? Here are some possibilities:

We simply don’t care that the IoT is insecure. We really don’t mind that everyone from Russian organized criminals to the script kiddie down the block can hack the IoT. We want it anyway. The benefits outweigh any drawbacks.

Vendors will sufficiently address the IoT’s security issues, so by 2020, we’ll all be able to live in a reasonably hacker-free (and government spying-free) world of connected things. After all, vendors have done such a splendid job making sure our everyday computers are hack and spy-free so far, right?

Perhaps one or both of the above possibilities will take place, but I’m skeptical. Why, then, all the big numbers? Perhaps it’s the analysts themselves? Here are two more possibilities:

Vendors pay analysts (directly or indirectly) to make overblown market size predictions, because such predictions convince customers, investors, and shareholders to open their wallets. Never mind the hacker behind the curtain, we’re the great and terrible Wizard of IoT!

Analysts simply ignore factors like the public perception of security when making their predictions. Analysts make their market predictions by asking vendors what their revenues were over the last few years, putting the numbers into a spreadsheet, and dragging the cells to the right. Voila! Market predictions. Only there’s no room in the spreadsheet for adverse influences like security perception issues.

Maybe the analysts are the problem. Or just as likely, I got out on the wrong side of bed this morning. Be that as it may, here’s a contrarian prediction for you:

Both consumers and executives will get fed up with the inability of vendors to secure their gear, and the IoT will wither on the vine.

The wheel is spinning, folks. Which will it be? Time to place your bets!


Posted by Jason Bloomberg on August 8, 2014

One of the most fascinating aspects of the Agile Architecture drum I’ve been beating for the last few years is how multifaceted the topic is. Sometimes the focus is on Enterprise Architecture. Other times I’m talking about APIs and Services. And then there is the data angle, as well as the difficult challenge of semantic interoperability. And finally, there’s the Digital Transformation angle, driven by marketing departments who want to tie mobile and social to the Web but struggle with the deeper technology issues.

As it happens, I’ll be presenting on each of these topics over the next few weeks. First up, a Webinar on Agile Architecture Challenges & Best Practices I’m running jointly with EITA Global on Tuesday August 19 at 10:00 PDT/1:00 EDT. I’ll provide a good amount of depth on Agile Architecture – both architecture for Agile development projects as well as architecture for achieving greater business agility. This Webinar lasts a full ninety minutes, and covers the central topics in Bloomberg Agile Architecture™. If you’re interested in my Bloomberg Agile Architecture Certification course, but don’t have the time or budget for a three-day course (or you simply don’t want to wait for the November launch), then this Webinar is for you.

Next up: my talk at the Dataversity Semantic Technology & Business Conference in San Jose CA, which is collocated with their NoSQL Now! Conference August 19 – 21. My talk is on Dynamic Coupling: The Pot of Gold under the Semantic Rainbow, and I’ll be speaking at 3:00 on Thursday August 21st. I’ll be doing a deep dive into the challenges of semantic integration at the API level, and how Agile Architectural approaches can resolve such challenges. If you’re in the Bay Area the week of August 18th and you’d like to get together, please drop me a line.

If you’re interested in lighter, more business-focused fare, come see me at The Innovation Enterprise’s Digital Strategy Innovation Summit in San Francisco CA September 25 – 26. I’ll be speaking the morning of Thursday September 25th on the topic Why Enterprise Digital Strategies Must Drive IT Modernization. Yes, I know – even for this marketing-centric Digital crowd, I’m still talking about IT, but you’ll get to see me talk about it from the business perspective: no deep dives into dynamic APIs or Agile development practices, promise! I’ll also be moderating a panel on Factoring Disruptive Tech into Business with top executives from Disney, Sabre, Sephora, and more.

I’m particularly excited about the Digital Strategy Innovation Summit because it’s a new crowd for me. I’ve always tried to place technology into the business context, but so far most of my audience has been technical. Hope you can make it to at least one of these events, if only to see my Digital Transformation debut!

