
Posted by Gigi Sayfan on November 17, 2015

Agile development, when executed well, is a thing of beauty. A sprint starts and stories/tasks are assigned. People work in a test-driven manner and the system comes to life piece by piece. With each sprint, new functionality is delivered. Refactoring is ongoing. The system architecture stays close to the conceptual ideal. Stakeholders are happy. Developers are happy. Everybody is happy.

And then there is the real world--pressure, mid-sprint changes, unclear requirements, tests without much coverage, mounting technical debt, production systems crashing, etc. With agile development that's not supposed to happen, but agile practices require dedication and discipline from all stakeholders--including top management--and that commitment is not always there. Even at the development-team level, some people are not fully on board or are simply not organized enough.

There are also some inglorious tasks, such as heavy documentation, verifying compliance with obscure standards and checking backward compatibility with Netscape Navigator 4 on Windows 95. If you reach a point of agile fatigue, where the basic practices are no longer being followed, you can try gamification. Gamification is all about incorporating game design elements into the work environment so that you get rewarded for performing your duties.

Whenever you squash a bug you get some experience points, the system keeps track of your not-being-late-for-meetings streak, and the whole team gets a gold star when the sprint is completed successfully. This may sound silly if you've never tried it, but movements such as the quantified self demonstrate that people enjoy it and will adapt their behavior to accomplish little game goals that just happen to correspond to real-life goals. It can add fun, as well as help keep track of the metrics that actually matter.
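As a minimal sketch of what such a tracker might look like (in Python, with made-up events and point values purely for illustration):

# Hypothetical point values -- tune these for your own team
POINTS = {"bug_squashed": 10, "on_time_for_meeting": 2, "sprint_completed": 50}

class ScoreBoard:
    def __init__(self):
        self.xp = {}        # developer -> accumulated experience points
        self.streaks = {}   # developer -> consecutive on-time meetings

    def record(self, developer, event):
        self.xp[developer] = self.xp.get(developer, 0) + POINTS.get(event, 0)
        if event == "on_time_for_meeting":
            self.streaks[developer] = self.streaks.get(developer, 0) + 1
        elif event == "late_for_meeting":
            self.streaks[developer] = 0

board = ScoreBoard()
board.record("alice", "bug_squashed")
board.record("alice", "on_time_for_meeting")

The mechanics are trivial; the point is that the game events map directly onto the practices you actually care about.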

Posted by Sandeep Chanda on November 13, 2015

Just a few days back, Google, in a not-so-surprising move, announced that it is open sourcing its machine learning platform, TensorFlow. TensorFlow employs the concept of numerical computation using data flow graphs. Data flow graphs are represented as nodes and edges, where nodes represent mathematical operations or endpoints for data ingestion, sending output, or reading and writing persistent variables. Edges, on the other hand, describe the input/output relationships between the nodes. Typically a node is mapped to a computational device and is executed asynchronously, in parallel with other nodes.

The primary reason Google cites for making this an open source platform is to inspire more research, not only in unique computational domains, but also in deep neural networks. One of the most important architectural aspects of TensorFlow is that it can run on a multitude of devices--desktops, servers, mobile, you name it--leveraging the computing power of all these different devices and, in turn, making it a truly portable machine learning platform.

Asynchronous computing, threads, and queues are first-class citizens in TensorFlow, allowing it to extract maximum performance from the hardware that runs the compute elements of the data flow graph. It also supports auto-differentiation out of the box: it can automatically compute derivatives from the available data combined with the objective function and the predictive model definition. It is also extremely flexible in terms of supporting custom libraries. If you can express a computation as a data flow graph, you can leverage TensorFlow's capabilities. As simple as that!

Today, it ships with Python and C++ interfaces, but Google is encouraging contributions of SWIG (Simplified Wrapper and Interface Generator) interfaces for JavaScript, R, and other popular machine learning and data science languages.
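For a flavor of the Python interface, here is a minimal sketch of a small data flow graph with automatic differentiation, written against the graph-and-session style API that shipped with the initial release (the names and values are purely illustrative):

import tensorflow as tf

# Nodes: a placeholder for input data, a variable, and a couple of math operations
x = tf.placeholder(tf.float32, name="x")
w = tf.Variable(3.0, name="w")
y = w * x + 1.0                 # edges carry tensors between the operation nodes

# Auto-differentiation: TensorFlow derives dy/dw from the graph definition itself
grad_w = tf.gradients(y, [w])[0]

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    print(sess.run([y, grad_w], feed_dict={x: 2.0}))   # [7.0, 2.0]

Nothing is executed until Session.run is called; until then you are only describing the graph, which is what lets TensorFlow schedule the nodes asynchronously across whatever devices are available.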

Posted by Gigi Sayfan on November 12, 2015

Data is king. Everyone knows that. Big Data has been on the buzzword most wanted list for a while now, along with machine learning, Hadoop and Spark.

Companies eagerly collect data and build massive pipelines to process that data in order to gain insight into what it all means. Guess what? Some companies are able to do that successfully for particular aspects of their operation. This typically happens when the company has:

  • the ability to collect data on a massive scale and over a long timeframe
  • data that is relatively stable
  • a talented team of data scientists combined with domain expertise that can sift through the enormous piles of data and make informed decisions about what's relevant and what's not
  • the know-how to configure and tune the huge big data/machine learning apparatus
  • the knowledge to interpret the results and tie the numbers to concrete steps or actions in the field

This is extremely difficult.

A few companies transcend the hype and can actually demonstrate measurable improvements. Many others live under the illusion that their fate and strategic direction are being optimized by hard data, when in practice they are following a false path. The problem is that there are no controls. No company can afford to split itself in two and compare its performance over time under two different approaches.

That said, big data, analysis and machine learning are amazing technologies that can really improve and scale a company. The trick is to do it right and that takes a lot of hard work. It gets easier and easier to hook up various data sources into big data pipelines, but the important parts are difficult to automate.

Posted by Gigi Sayfan on November 3, 2015

Test-Driven Development (TDD) is arguably the most impactful Agile practice. Nobody even talks about it anymore, but automated tests were revolutionary when they came on the scene. Fifteen years ago, the common practice was to have a big monolithic application. The developers would produce a build (usually an executable) once in a blue moon and throw it over the fence to the QA department, which would bang on it for a while and then open tens if not hundreds of bugs. Then the developers would engage in a bug-squashing period.

Luckily, that's no longer the case and everyone recognizes the importance of automated tests and rapid iterations. TDD means that your whole development process is driven by tests. When you think about problems, you phrase the discussion in terms of what tests you'll need. When you model things, you consider how testable they are. You might choose between design alternatives based on their impact on your test suite.

The ultimate in TDD is the test-first movement. This is truly putting tests on a pedestal and treating them as the most crucial building blocks of your system and its architecture. There are multiple advantages to TDD. The best one is that you get to say TDD all the time. Try it. It's fun: TDD, TDD, TDD. Almost as important is the fact that you will have automated tests for all your code, and those tests will even be kept up to date as you evolve and refactor your system. Don't underestimate TDD. Every development team starts with the best intentions regarding all kinds of best practices, but as the pressure mounts many of them go down the drain: test coverage, modularity, coding conventions, refactoring, clean dependencies, etc.

Having tests as your primary driver ensures they will not be neglected, which will give you a better chance at evolving your systems.
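To make the test-first rhythm concrete, here is a minimal sketch in Python using pytest; the slugify function and its expected behavior are hypothetical, not taken from any particular project. You write the tests first, watch them fail, and only then write the simplest implementation that makes them pass:

# test_slugify.py -- written before slugify() exists
from slugify import slugify

def test_spaces_become_hyphens():
    assert slugify("Hello World") == "hello-world"

def test_punctuation_is_dropped():
    assert slugify("Agile, done right!") == "agile-done-right"

# slugify.py -- the simplest implementation that makes the tests pass
import re

def slugify(text):
    # Keep only lowercase alphanumeric runs and join them with hyphens
    return "-".join(re.findall(r"[a-z0-9]+", text.lower()))

Running pytest before slugify.py exists gives you a failing (red) suite; writing the implementation turns it green, and any later refactoring is guarded by the same tests.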

Posted by Sandeep Chanda on November 2, 2015

While the famous trio of JavaScript frameworks (Ember.js, Angular.js and React.js) is lining up new releases, there is a new challenger on the block. While it is still early days, Aurelia is, in my honest opinion, promising. The framework is the latest from the stable of Durandal Inc., the group that also created DurandalJS.

Aurelia is built around simple conventions that save you from writing a ton of configuration code, and it doesn't require a lot of boilerplate to get you up and running. It is written to the ES2016 specification, and hence incorporates a bunch of the modern conventions provided in the standard. It also has robust routing capabilities, supporting dynamic routes, child routers and asynchronous screen activation patterns. Not only that, it also lets you extend your HTML: you can create custom HTML elements and control the template generation process.

Setting up Aurelia is easy. First you need to set up Gulp for build automation. Using your Node console, run the npm install -g gulp command to install Gulp. You also need to install the browser package manager jspm using npm. To set up Aurelia itself, you can either use Yeoman to generate a bare-bones Aurelia project in your target folder using the command yo generator-aurelia, or alternatively download the Aurelia skeleton navigation project and extract its contents to your target folder. Now navigate to the folder in your Node console, run the npm install command to configure the dependencies, and then run the jspm install -y command to install the Aurelia library and bootstrapper.

You are all set! Aurelia demonstrates pretty impressive browser compatibility and works well in all modern browsers. Not only that, you are not bound to the ES7 specification: you can program in ES5, ES6, TypeScript, CoffeeScript, and the list goes on.

Posted by Gigi Sayfan on October 29, 2015

Software is eating the world, as Marc Andreessen said. More and more functions that used to require people are controlled by software. The demand for software engineers is on the rise and the trend seems fairly stable. Many people speculate that at some point software will replace most software engineers as well. But we're not quite there yet.

There is no denying that code and software are becoming increasingly important in our society. The big question is whether or not everybody should learn to code. There are several aspects to the matter. One of the arguments in favor is that since code runs so much of our world, everyone should at least study enough to have a sense of what software is and what it takes to create and maintain it. I'm not sure I subscribe to this viewpoint. Most people are perfectly happy being ignorant of the inner workings of the many crucial things on which they depend. How many people know what electricity really is or how an internal combustion engine works? More specifically, how many people use apps on their phones without the slightest idea of how they were built?

People who are interested have more free resources than they could dream of for learning on their own: articles, YouTube videos, coding academies, sample code on GitHub, etc.

Another argument for the idea that everyone should code is that it exposes people (especially less privileged people) to a very lucrative opportunity to earn a decent living. Again, I'm not sure how effective that would be. It takes a certain aptitude to be a professional developer who actually gets paid. You need to be the type of person who can sit in front of a screen for hours, stay focused, tell an idiot machine precisely what to do, and not get frustrated when it gets it wrong.

If everyone learns to code, either a lot of people will fail or the content will be dialed down to total fluff. What about the people who have the aptitude and could be successful, but were never exposed to code? That is a different story. Schools should introduce code to everyone and let them experiment, but the purpose would be to pique the interest of the kids who might succeed, not to churn out hordes of inept software developers.

Posted by Gigi Sayfan on October 26, 2015

One of the most basic building blocks of programming is the function. You call some piece of code, passing in some parameters, and you get back a return value. If you grew up with any modern programming language you probably don't even think about it. But the function was a major innovation. In the guts of your computer there were no functions. There was just a program counter that marched forward, and you could tell it to jump somewhere else if you really wanted to (the dreaded goto).

I started programming in BASIC at 11 years old. Note, this was not Visual Basic, which is a full-fledged modern programming language. This was unstructured BASIC. For me, a program was a linear set of instructions to the computer that accepted input, did some processing and then printed or drew something on the screen (a TV back then, no monitors). Each line in the code had a line number, and a common control structure was "goto <line number>". So, the program was literally a sequence of instructions executed one after the other.

I still remember my awe when I learned about the "GOSUB" command (go subroutine), which lets you execute a block of commands and then return to the line after the call. Amazing, isn't it? No parameters, of course, and no local variables--just go somewhere and return automatically. Still, this was a major improvement over the "goto" statement, because with "GOSUB" you could call the same subroutine from different places in your program and it would return to the right place. If you wanted to implement that with "goto" you had to store the return line number in a global variable and "goto" it at the end of the code block. There was also the "DEF FN" pair of keywords, which confused me to no end. This is arguably the worst and most inconsistent syntactic structure of any programming language I have ever encountered. It lets you define a single-expression function, similar to Python's lambda functions, but you have to concatenate your function name to "FN", so every function carries a mandatory "FN" prefix, such as:

DEF FNSQUARE(x) = x * x

Now you can call your function:

PRINT FNSQUARE(4)

Nasty. Isn't it?
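For comparison, the modern equivalent is a throwaway one-liner; in Python, for example:

square = lambda x: x * x
print(square(4))   # 16

No mandatory prefixes, real parameters, and you can pass the function around like any other value.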

So, whenever you are frustrated with your programming language, or with the fact that Stack Overflow has the best answer buried in third place, remember that things have never been this good--and they will only get better.

Posted by Gigi Sayfan on October 20, 2015

Agile practices work best with small, cohesive, co-located teams. How do you scale them to a larger organization with multiple teams, possibly in different locations and even time zones? One approach would be to treat all these teams as one big agile team and try to work around the issues of remoteness and time zones. This approach is doomed for multiple reasons.

A better approach would be to break the development into vertical silos in which each team is completely responsible for a particular set of applications or services. This approach scales well as long as you can neatly break development into independent pieces that can be fully owned by one team. One downside of this approach is that knowledge sharing and opportunities for reuse are much more difficult.

A third approach would be to treat teams as users of each other. This allows collaboration and combining multiple teams to tackle big projects without losing the benefits of the original Agile team. This requires careful management to ensure teams' schedules are properly coordinated and dependencies don't halt development. This is nothing new, but in an agile context the planning game is typically done at the team level. When multiple teams work together (even loosely) there needs to be another layer of management that takes care of the cross-team planning.

Posted by Sandeep Chanda on October 19, 2015

While Amazon Web Services is pulling out all the stops in its drive to become a provider of end-to-end data services, Microsoft Azure is not too far behind. In my previous post, I talked about AWS launching QuickSight for business intelligence. Azure has now come up with its own offering in the form of Azure Data Lake. Azure already collects a lot of streaming data from different devices and lets users focus on the insights by using its managed Hadoop service, HDInsight.

Azure Data Lake brings two capabilities. The first is the Data Lake Store, a hyper-scale HDFS repository designed for analytics on big data workloads. It handles high-volume, high-speed data processing at any level of scale, supporting near-real-time data from sensors and devices. The Data Lake Store places no limitations on the type or size of the data; it supports processing data in any format and at any scale. It is HDFS-compatible and hence supports Hadoop out of the box. It also supports Azure Active Directory, allowing you to configure data streams from within your enterprise network.

The second capability is Azure Data Lake Analytics, which allows you to run queries on any storage in Azure (blobs, the Data Lake Store, SQL DB, etc.) and make sense of the large volumes of data stored there. It also comes with U-SQL, a query language designed along the lines of familiar SQL syntax. You can use it to declaratively create big data jobs that run against the stored datasets, and U-SQL jobs can be authored from your familiar Visual Studio environment. Data Lake Analytics jobs are not limited to U-SQL, either; you can also write your own code to create them.

Posted by Sandeep Chanda on October 12, 2015

A slew of recent product releases in the world of Amazon Web Services (AWS) indicates Amazon's hurry to complete its portfolio of offerings in the world of Big Data. Amazon has pretty much covered its ground when it comes to collecting, storing, and processing large volumes of data; platforms such as Amazon RDS, DynamoDB and Redshift were created for that purpose. What was really missing was a product that could deliver insights in real time, not just to technology experts, but also to business users, who could then make business decisions based on what they interpret from visually interactive dashboards.

QuickSight aims to complete that loop, making AWS a provider of low-cost, full-fledged, scalable business intelligence services that deliver insights from a wide range of data sources for a fraction of the cost of legacy BI solutions. According to Werner Vogels, CTO at Amazon.com, there is an inherent gap between the volumes of data that enterprise-scale applications generate, store, and process and the key decisions that business users make on a daily basis. QuickSight aims to bridge this gap.

QuickSight is a cloud-powered business intelligence service that tackles the speed, complexity and cost of generating insights from large volumes of data. It is also pretty easy to set up and use. Behind the scenes, QuickSight is powered by a superfast, parallel, in-memory calculation engine named SPICE, designed to answer queries over very large datasets in milliseconds. This allows QuickSight to scale quickly to thousands of users across a range of data sources available in AWS. In addition to SPICE, QuickSight also includes technology for automatically discovering data changes, curating the data for analysis, and making suggestions based on parameters such as the data source's metadata, query history, and so on.
