Sharpen Your Code with Benchmarking in PHP

Any software product must pass through an optimization step before it can hit the market and become a reference product. Finding memory leaks and improving a product’s performance is a delicate job that takes many hours of work and considerable manpower. Benchmarking is an important piece of the optimization puzzle, because it can test both individual chunks of code and the entire codebase, and it provides reports and statistics that reveal true runtime behavior and performance.

In PHP you can use the Benchmark package—a PEAR package for benchmarking PHP scripts or function calls. The latest released version is 1.2.7 (stable); after downloading the package, you can install it like this:

   pear install Benchmark-1.2.7

To illustrate the Benchmark package’s capabilities you’ll see two different solutions (iterative and recursive) to the classic problem of generating Fibonacci numbers. A complete discussion of the problem is beyond the scope of this article, but the idea is simple: each number in the sequence is the sum of the two preceding numbers.
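Before looking at the timing tools, here is a minimal sketch of the two solutions themselves (the function names are illustrative); the iterative version runs in linear time, while the naive recursive version recomputes subproblems and runs in exponential time:

```php
<?php
// Iterative solution: O(n) additions.
function fib_iterative($n)
{
    $a = 0; $b = 1;
    for ($i = 0; $i < $n; $i++) {
        $s = $a + $b;
        $a = $b;
        $b = $s;
    }
    return $b;
}

// Naive recursive solution: exponential time, because the
// two recursive calls repeat each other's work.
function fib_recursive($n)
{
    if ($n >= 0 && $n < 2) {
        return 1;
    }
    return fib_recursive($n - 1) + fib_recursive($n - 2);
}

echo fib_iterative(9)."\n"; // 55
echo fib_recursive(9)."\n"; // 55
?>
```

Both functions produce the same sequence, which is exactly what makes them a good benchmarking subject: identical output, very different cost.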

The Class Tree for the Benchmark PEAR Package
The package has a small set of flexible classes that you use to perform benchmark timing:

  • Benchmark_Timer
  • Benchmark_Iterate
  • Benchmark_Profiler

This article discusses each class, shows its commonly-used methods, and provides an example of using it.

The Benchmark_Timer Class
The Benchmark_Timer class provides a set of methods that return highly-accurate timing information. The prototypes for the most used methods of this class are:

  • void start(): Set the start time of the marker.
  • void stop(): Set the stop time of the marker.
  • void setMarker(string $name): Sets a marker. The $name parameter represents the name of the marker to set.
  • void display([boolean $showTotal = FALSE], [string $format = 'auto']): Outputs formatted timing information. If $showTotal is set to true, the output includes cumulative totals. The $format parameter controls the desired output format: auto (the default), plain, or html. When the value is auto, the PEAR implementation chooses between plain and html; in most cases it chooses plain.
  • array getProfiling(): Returns the profiler info as an associative array. The $profiling[x]['name'] value represents the name of marker x; the $profiling[x]['time'] value represents the time index of marker x; the $profiling[x]['diff'] value represents the execution time from marker x-1 to marker x; and the $profiling[x]['total'] value represents the total execution time up to marker x.

Benchmarking Fibonacci (Iterative Solution)
This example applies the Benchmark_Timer class to the iterative solution for the Fibonacci numbers, providing formatted timing information:

   <?php
   require_once 'Benchmark/Timer.php';

   function fibonacci()
   {
      $timer = new Benchmark_Timer();

      //Set "Start" marker
      $timer->start();

      $a = 0; $b = 1;

      for ($i = 0; $i < 10; $i++) {
         $s = $a + $b;

         //Set the fibonacci markers
         $timer->setMarker('fibonacci'.$i);

         $a = $b;
         $b = $s;
      }

      //Set "Stop" marker
      $timer->stop();

      //Output formatted information
      $timer->display();
      echo '<br/>';

      //Get the profiler info as an associative array
      $profiling = $timer->getProfiling();

      //Display all the information: name,
      //time, difference between two
      //consecutive markers, and total time
      print_r($profiling[1]);
      print_r($profiling[2]);
      print_r($profiling[3]);
      echo '<br/>';
   }

   fibonacci();
   ?>

The code sets a start marker, and then begins a loop that generates the first 10 numbers in the Fibonacci series. After each generation, it sets a marker named “fibonacci” plus the loop counter value (fibonacci0, fibonacci1, etc.).

Listing 1 contains similar code that benchmarks the recursive Fibonacci solution.

Benchmarking Output
Both the iterative and recursive Fibonacci programs output the results in two forms: a formatted table, and as a raw associative array.

The formatted table results show each marker name, the elapsed time when that marker was reached (time index), the time required to reach that marker from the previous marker (ex time), and the percentage of the total time required to reach that marker from the previous marker (%).

The iterative result is:

   [Formatted timer output: Time Index, Ex Time, and Percentage columns for each marker]

The associative array contains the same information in a machine-usable form.

   Array
   (
       [name] => fibonacci0
       [time] => 1209633208.32546500
       [diff] => 0.000070
       [total] => 0.000070
   )
   Array
   (
       [name] => fibonacci1
       [time] => 1209633208.32549400
       [diff] => 0.000029
       [total] => 0.000099
   )
   Array
   (
       [name] => fibonacci2
       [time] => 1209633208.32551600
       [diff] => 0.000022
       [total] => 0.000121
   )

The recursive result is:

   [Formatted timer output: Time Index, Ex Time, and Percentage columns for each marker]

Again, the associative array contains the same information in a machine-usable form.

   Array
   (
       [name] => fibonacci0
       [time] => 1209633188.10322500
       [diff] => 0.000165
       [total] => 0.000165
   )
   Array
   (
       [name] => fibonacci1
       [time] => 1209633188.10330500
       [diff] => 0.000080
       [total] => 0.000245
   )
   Array
   (
       [name] => fibonacci2
       [time] => 1209633188.10335800
       [diff] => 0.000053
       [total] => 0.000298
   )

As you can see, the timer provides fine-grained results that you can control using the marker capabilities. In most cases, however, you are likely to be less interested in benchmarking individual lines of code and more interested in benchmarking distinct functions, particularly when you can do that without modifying the function code itself. For that, you use the Benchmark_Iterate class.
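Because getProfiling() returns a plain array, its output is easy to post-process. The following sketch finds the marker with the largest per-step time; the data is illustrative, copied from the iterative output above:

```php
<?php
// Illustrative data, in the shape returned by
// Benchmark_Timer::getProfiling().
$profiling = array(
    array('name' => 'fibonacci0', 'diff' => 0.000070, 'total' => 0.000070),
    array('name' => 'fibonacci1', 'diff' => 0.000029, 'total' => 0.000099),
    array('name' => 'fibonacci2', 'diff' => 0.000022, 'total' => 0.000121),
);

// Find the marker with the largest "diff" (ex time) value.
$slowest = $profiling[0];
foreach ($profiling as $marker) {
    if ($marker['diff'] > $slowest['diff']) {
        $slowest = $marker;
    }
}

echo 'Slowest marker: '.$slowest['name']; // fibonacci0
?>
```

The same loop could just as easily feed the values into a report, a log, or a chart generator.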

The Benchmark_Iterate Class
The Benchmark_Iterate class provides two methods for benchmarking a function:

  • void run(integer $iterations, string $functionName [, mixed $args...]): Benchmarks a function by calling $functionName $iterations times, passing along any additional arguments.
  • array get([boolean $simple_output = false]): Returns the benchmark results. In the results, $result[x] represents the execution time of iteration x, $result['iterations'] represents the number of iterations, and $result['mean'] represents the mean execution time.

Before attempting to benchmark a complex function, here’s a simple example that should clarify the benchmarking process. This example defines a function, and then calls it four times using the run method. Finally, it outputs the results:

   <?php
   require_once 'Benchmark/Iterate.php';

   $benchmark = new Benchmark_Iterate();

   //The function to benchmark
   function example($string)
   {
      echo $string.'<br/>';
   }

   //Benchmarks the example function
   $benchmark->run(4, 'example', 'Octavia');

   //Returns benchmark result
   $result = $benchmark->get();

   echo 'The number of iterations is '.$result['iterations'].'<br/>';
   echo 'The mean is: '.$result['mean'];
   ?>

When you run this application, the output is:

   Octavia
   Octavia
   Octavia
   Octavia
   The number of iterations is 4
   The mean is: 0.000064

With that simple example in hand, Listing 2 shows a slightly more complex example that applies the Benchmark_Iterate class to the iterative Fibonacci solution, while Listing 3 applies it to the recursive Fibonacci solution. The programs return these results:

Iterative Result

   1 1 2 3 5 8 13 21 34 55 89
   The execution time of 1 iteration: 0.000223
   The number of iterations is: 1
   The mean is: 0.000223

Recursive Result

   1 1 2 3 5 8 13 21 34 55 89
   The execution time of 1 iteration is: 0.001135
   The number of iterations is: 1
   The mean is: 0.001135
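Under the hood, Benchmark_Iterate is essentially timing repeated calls and averaging them. A rough equivalent using only plain microtime() (no PEAR dependency; benchmark_mean is a hypothetical helper, not part of the package) looks like this:

```php
<?php
// Rough equivalent of Benchmark_Iterate::run() plus get():
// call a function $iterations times and return the mean
// elapsed time in seconds. (benchmark_mean is a hypothetical
// helper name, not part of the PEAR Benchmark package.)
function benchmark_mean(callable $func, $iterations)
{
    $total = 0.0;
    for ($i = 0; $i < $iterations; $i++) {
        $start = microtime(true);
        $func();
        $total += microtime(true) - $start;
    }
    return $total / $iterations;
}

// Example: average the cost of a small string operation.
$mean = benchmark_mean(function () {
    str_repeat('x', 1000);
}, 100);

printf("The mean is: %f\n", $mean);
?>
```

Averaging over many iterations matters in practice: a single run of a fast function is dominated by timer resolution and scheduling noise.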

The Benchmark_Profiler Class
The Benchmark_Profiler class provides a set of methods that return formatted profiling information. The prototypes of the most used methods of this class are:

  • void enterSection(string $name): This function enters a code section. The $name parameter represents the name of the code section.
  • void leaveSection(string $name): This function leaves a code section. The $name parameter represents the name of the code section.
  • void display([string $format = 'auto']): This function outputs formatted profiling information. The $format parameter represents the desired output format: auto (the default), plain, or html.

Here’s the code to apply the Benchmark_Profiler class to the iterative and recursive Fibonacci solutions:

   // Iterative version
   <?php
   require_once 'Benchmark/Profiler.php';

   $profiler = new Benchmark_Profiler(TRUE);

   function fibonacci()
   {
      global $profiler;

      $a = 0; $b = 1;

      for ($i = 0; $i < 10; $i++) {

         //Enters code section
         $profiler->enterSection('fibonacci'.$i);

         $s = $a + $b;
         $a = $b;
         $b = $s;

         //Leaves code section
         $profiler->leaveSection('fibonacci'.$i);
      }

      //Outputs formatted profiling information
      $profiler->display();
   }

   fibonacci();
   ?>

   // Recursive version
   <?php
   require_once 'Benchmark/Profiler.php';

   $profiler = new Benchmark_Profiler(TRUE);

   function fibonacci($n)
   {
      if (($n >= 0) and ($n < 2)) {
         return 1;
      } else {
         return fibonacci($n-1) + fibonacci($n-2);
      }
   }

   for ($i = 0; $i < 10; $i++) {

      //Enters code section
      $profiler->enterSection('fibonacci'.$i);

      echo fibonacci($i).'<br/>';

      //Leaves code section
      $profiler->leaveSection('fibonacci'.$i);
   }

   //Outputs formatted profiling information
   $profiler->display();
   ?>

Result Comparison
The iterative result is:

 Section    | Total Ex. Time       | Calls | Percentage | Callers
 fibonacci0 | 0.00025200843811035  | 1     | N/A        | Global (1)
 fibonacci1 | 4.6968460083008E-005 | 1     | N/A        | Global (1)
 fibonacci2 | 3.814697265625E-005  | 1     | N/A        | Global (1)
 fibonacci3 | 5.1021575927734E-005 | 1     | N/A        | Global (1)
 fibonacci4 | 5.2928924560547E-005 | 1     | N/A        | Global (1)
 fibonacci5 | 5.6028366088867E-005 | 1     | N/A        | Global (1)
 fibonacci6 | 3.7908554077148E-005 | 1     | N/A        | Global (1)
 fibonacci7 | 4.0054321289063E-005 | 1     | N/A        | Global (1)
 fibonacci8 | 3.6954879760742E-005 | 1     | N/A        | Global (1)
 fibonacci9 | 0.00029301643371582  | 1     | N/A        | Global (1)

The recursive result is:

 Section    | Total Ex. Time       | Calls | Percentage | Callers
 fibonacci0 | 0.0001368522644043   | 1     | N/A        | Global (1)
 fibonacci1 | 0.00031614303588867  | 1     | N/A        | Global (1)
 fibonacci2 | 8.4877014160156E-005 | 1     | N/A        | Global (1)
 fibonacci3 | 7.9870223999023E-005 | 1     | N/A        | Global (1)
 fibonacci4 | 0.00011897087097168  | 1     | N/A        | Global (1)
 fibonacci5 | 0.0001227855682373   | 1     | N/A        | Global (1)
 fibonacci6 | 0.00016498565673828  | 1     | N/A        | Global (1)
 fibonacci7 | 0.00038599967956543  | 1     | N/A        | Global (1)
 fibonacci8 | 0.00036907196044922  | 1     | N/A        | Global (1)
 fibonacci9 | 0.00057005882263184  | 1     | N/A        | Global (1)

Getting Benchmark Results Graphically
Getting raw numbers is useful, but often it’s more useful to view benchmarking information in a more palatable form, such as bar or pie charts. You can add graphical capabilities to the Timer tests by extending the fibonacci_iterative_timer.php application, using SVG to obtain a bar/pie chart image that visually shows the time required to run each iteration of the main loop. To do that, you plot the last column from the Timer Results table—giving a visual representation of the “ex time” column:

   <?php
   require_once 'Benchmark/Timer.php';

   function fibonacci()
   {
      $timer = new Benchmark_Timer();

      //Set "Start" marker
      $timer->start();

      $a = 0; $b = 1;

      for ($i = 0; $i < 15; $i++) {
         $s = $a + $b;

         //Set the markers
         $timer->setMarker('f'.$i);

         $a = $b;
         $b = $s;
      }

      //Set "Stop" marker
      $timer->stop();

      //Output the raw timing table in the left-hand
      //column of an HTML table
      echo '<table border="1"><tr><td>';
      $timer->display();
      echo '</td>';

      //Get the profiler info as an associative array
      $profiling = $timer->getProfiling();

      //Serialize the $profiling array so it can be passed
      //to the SVG chart generator (Listing 4)
      $ser = serialize($profiling);

      //Embed the generated SVG charts in the right-hand
      //column (the chart script name here is illustrative;
      //the original markup was lost in extraction)
      echo '<td><embed src="chart.php?data='.urlencode($ser).'"'.
           ' type="image/svg+xml" width="400" height="400"/></td>';
      echo '</tr></table>';
   }

   //start the process
   fibonacci();
   ?>
Figure 1. Graphical Representation of Benchmark Results: This view shows the raw data and superimposed pie and bar charts for 15 iterations of the Fibonacci method.

The preceding code outputs an HTML page containing a table with the raw profiling information you’ve already seen. It uses that to generate two SVG-formatted charts (see Listing 4), which it displays using the Adobe SVG player in the right column of the table.

Figure 1 shows the table the code returns, letting users view possible results for 15 Fibonacci iterations.
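The chart-generation step itself is the simplest part. Listing 4 builds the full SVG, but the core idea can be sketched in a few lines that scale each marker’s “ex time” value into a bar; the function name, scaling factor, and fill color below are illustrative:

```php
<?php
// Build a minimal SVG bar chart from the "diff" (ex time)
// values of a Benchmark_Timer profiling array. The $scale
// factor converts seconds to pixels and is illustrative.
function svg_bar_chart(array $profiling, $scale = 1000000)
{
    $barWidth = 20;
    $height   = 120;
    $svg = '<svg xmlns="http://www.w3.org/2000/svg" width="'
         . (count($profiling) * $barWidth).'" height="'.$height.'">';

    $x = 0;
    foreach ($profiling as $marker) {
        $h = (int) round($marker['diff'] * $scale); // seconds -> pixels
        $svg .= sprintf(
            '<rect x="%d" y="%d" width="%d" height="%d" fill="steelblue"/>',
            $x, $height - $h, $barWidth - 2, $h
        );
        $x += $barWidth;
    }

    return $svg.'</svg>';
}

$chart = svg_bar_chart(array(
    array('name' => 'f0', 'diff' => 0.000070),
    array('name' => 'f1', 'diff' => 0.000029),
));
echo $chart;
?>
```

Because SVG is just text, the chart can be echoed inline or served from a separate script, as the article’s example does.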

As you can see, adding benchmarking to PHP using this PEAR package is a simple, straightforward task. It’s not limited to new code, either; implementing benchmarking for your existing PHP code doesn’t require much time, and the results are easy to understand and process—leaving few excuses for not optimizing your PHP applications. What you might consider an unnecessary step now can make a big difference later.
