"You can't load the website? Try another browser!" This is the most common advice we receive when unable to load a website on the default browser. To prevent Internet Explorer users from switching to Firefox and vice versa, any website should be thoroughly checked with cross-browser testing before its release. It is important to understand the nature of this testing and the hidden obstacles you should be ready to solve when performing cross-browser testing.
First of all, no application works identically in all web browsers. Even different versions of the same browser sometimes behave differently. Discrepancies can be caused by differences between desktop and mobile versions, browser settings and browser-specific rendering quirks. All of those challenges define the goal of cross-browser testing: ensuring that the application behaves consistently across the targeted browsers.
The first stage of cross-browser testing is selecting the browsers. The list should include at least the world's top four browsers. According to gs.statcounter.com, the following browsers are the most popular:
- Google Chrome (the latest version is usually used for testing): 48 percent of users
- Internet Explorer: 19.6 percent of users
- Mozilla Firefox: 16.74 percent of users
- Safari: 10.63 percent of users
Nevertheless, each region should be analyzed in advance to ensure that no browsers of importance to that specific region are overlooked. For example, while the rest of the world may have forgotten Opera, this Norwegian browser remains popular in Eastern Europe and Central Asia, where it reaches up to 20 percent of users. China also has its own browser landscape, where the local Sogou Explorer and QQ Browser remain popular.
However, Chrome use is rising all over the world, and it remains the leading browser in most countries. This is why Chrome is often treated as the default browser for Web tests.
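The selection process above can be sketched as a small helper that derives a test matrix from market-share data. The global share figures come from the gs.statcounter.com numbers quoted earlier; the 10 percent cutoff, the Opera share figure and the regional overrides are illustrative assumptions, not a standard:

```python
# Global browser market share, per the article's gs.statcounter.com
# figures. Opera's global share is an assumed placeholder value.
GLOBAL_SHARE = {
    "Chrome": 48.0,
    "Internet Explorer": 19.6,
    "Firefox": 16.74,
    "Safari": 10.63,
    "Opera": 1.5,  # negligible globally, but see the regional overrides
}

# Browsers that matter in specific regions regardless of global share
# (examples taken from the article; list is not exhaustive).
REGIONAL_EXTRAS = {
    "Eastern Europe": ["Opera"],
    "Central Asia": ["Opera"],
    "China": ["Sogou Explorer", "QQ Browser"],
}

def browser_matrix(region=None, cutoff=10.0):
    """Return browsers above the global share cutoff, plus any
    browsers of special importance in the given region."""
    chosen = [b for b, share in GLOBAL_SHARE.items() if share >= cutoff]
    for extra in REGIONAL_EXTRAS.get(region, []):
        if extra not in chosen:
            chosen.append(extra)
    return chosen
```

For a global audience this yields the top four browsers; passing a region appends the locally significant ones.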
Choosing the Primary Browser
Testers are typically prepared to provide recommendations on browser selection, but the client product owner has the final word. Depending on the nature of the application and its intended market, the client will likely have in mind certain groups of users for whom the product is designed; this helps determine the list of browsers to be used in testing. However, sometimes clients overestimate their reach and don't really know what kind of audience will ultimately use the product. Hence, testers experienced in Web technologies are called in to advise on the list of browsers to select.
Several tools for cross-browser testing can be found online. For instance, testing on virtual machines is a convenient option in some cases, such as when testing must cover different versions of the same browser. However, testing this way may miss bugs that appear only when a user launches the application on a real device; element placement, for example, is hard to reproduce on virtual machines.
Let's imagine that you have chosen your primary browser. You should test deeply in this particular browser but can limit the depth of testing in secondary browsers, saving time and money. The user interface, however, requires thorough testing in every browser included in the testing process. Keep in mind that browser-specific behavior may be mistaken for bugs (e.g., Safari displays buttons in a different style, pop-up designs can vary by browser, etc.).
Why You Must Include an Old IE Version
Compatibility mode allows testers to emulate older versions of the browser. However, it's not the best option for testing older versions, since the results won't be 100 percent accurate. Creating several snapshots (system images) on a virtual machine is one approach that has proven effective for testing different versions of IE.
But why even test old versions of IE? The reason is that IE is incorporated in Windows OS, and a great number of conservative users might not even know about other browsers and simply run the old IE that came with their system. Another reason could be that the computer is connected to a corporate network where automatic updates are disabled because of security requirements. It can also simply come down to habit; some people are used to IE and see no reason to update to a new version or switch to another browser.
Unlike most browsers, each version of IE has its own unique bugs and its own way of displaying Web pages and elements, and older versions lack support for some CSS features. This is why checking for bugs in only the latest version of IE is not enough.
Therefore, for one-time testing, the best option is to roll the browser back to a previous version. It doesn't take much time and can be done in just a couple of clicks.
Tools Are Secondary
Cross-browser testing is usually performed at the positive-test level on one version of the application. The GUI should be scrutinized, since most cross-browser defects are interface bugs. Then, previously reported bugs should be re-verified.
Another important point to keep in mind: if a bug is found in all browsers and can be reproduced everywhere, it is not a cross-browser bug. This means you should start cross-browser testing only once the system is stable and functionally sound. Otherwise, the amount of work needed will increase dramatically, along with the project budget and the testing time.
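The triage rule above can be written down as a small check: a defect counts as cross-browser-specific only if it reproduces in some, but not all, of the browsers under test. This is a hypothetical helper for illustration, not part of any testing tool:

```python
def is_cross_browser_bug(reproduced_in, tested_browsers):
    """Return True if the defect reproduces in some, but not all,
    of the browsers under test. A bug that reproduces everywhere
    is an ordinary functional bug, not a cross-browser one."""
    affected = set(reproduced_in) & set(tested_browsers)
    return bool(affected) and affected != set(tested_browsers)
```

A Safari-only rendering glitch qualifies; a crash that occurs in every browser should go back to functional testing instead.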
Testers should avoid impossible scenarios, such as trying to check Safari performance on Windows or IE performance on Mac OS: Safari ships only with Mac OS and IE only with Windows, so those combinations don't exist for real users.
Last but not least, although there are many tools to make a tester's job easier (Saucelabs, Scout and CrossBrowserTesting, for instance), each has its weaknesses. For one thing, most of them come with a financial cost. But the most important limitation is that they often show only static output, such as screenshots, so functional testing is impossible. Therefore, these tools should be used only when no other options for cross-browser testing are available.
Understanding the common features of cross-browser testing is critical, particularly since the importance of cross-browser testing will only continue to grow.
Alexander Panchenko works as the Head of the Complex Web QA Department for A1QA. During his career with A1QA, Alexander has gained a breadth of experience in QA and quality control across various projects, from standalone backup-and-recovery applications to medical social networking. He has also participated in projects involving complex business logic (e.g., corporate portals based on SharePoint, banking systems, and government portals). He currently leads several teams of more than seven people each and manages a division of more than 50 engineers.