Profiling PHP Applications With xdebug


By: Stefan Priebsch

Other Articles in the Series

Part One: Introducing xdebug
Part Two: Tracing PHP Applications with xdebug
Part Three: Profiling PHP Applications With xdebug
Part Four: Debugging PHP applications with xdebug
Part Five: Creating Code Coverage Statistics with xdebug

Welcome back to the third installment of this five-part series of xdebug articles on Zend DevZone.
By now, you should at least have tried out xdebug on one of your development systems – if not, you should
really do so today. xdebug is a free and open source PHP extension that supports tracing, profiling, debugging, and creating code coverage statistics. I like to call it the PHP developer’s Swiss Army knife.

The first article described how to install and configure xdebug, and showcased some
basic features that are really useful, like beautifying the var_dump() output or displaying a stack trace with error messages.
In the second part, we explored xdebug’s tracing feature. A trace contains each function and method call in a script, the execution time, and optionally also the memory size, the parameter values passed to a function, and even the return values. A trace log can help you understand the control flow of a complex program, or even act as a replacement for adding explicit logging. Instead of trying to log all events, you can selectively turn trace logging on and off, and use tools like grep or a small PHP script to analyze the trace log.

In this week’s issue, we will have a look at profiling.
Profiling is a feature that seems similar to trace logging at first sight.
Profiling logs, however, are not meant for human readability and do not visualize the program’s
control flow; instead, they provide the data for a statistical analysis of the program run.

Creating a profiling log

Here is a short excerpt from a profiling log generated by xdebug:
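A cachegrind-format log, as xdebug writes it, looks roughly like the following sketch (the file name, function names, line numbers, and timings are invented for illustration):

```
version: 1
cmd: /var/www/test.php
part: 1

events: Time

fl=/var/www/test.php
fn=myFunction
5 310

fl=/var/www/test.php
fn={main}
1 23
cfn=myFunction
calls=1 5
2 310
```

Each fn= record names a function, and the number pairs below it give a line number and the time spent there; cfn= and calls= records attribute time to function calls.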

As you can see, a profiling log is not really human-readable. We will have to use additional tools to visualize and analyze the profiling data xdebug has provided us with.
Basically, profiling is about counting how many times a statement has been executed and how much execution time it took.

Creating a profiling log seriously degrades your PHP script’s performance, just like creating a trace log does, because several lines have to be written to a file for each function or method call. Therefore, you should never profile on a production system. In the rare cases when you absolutely need to profile on the live system, keep in mind that you should be careful about mixing xdebug with other Zend extensions that might be used on your production system, like loaders, optimizers, or caches.

To have xdebug write a profiling log, you must enable the profiler in php.ini. As you know by now, you must restart your web server for the modified php.ini to take effect.
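Assuming xdebug 2 (the version this series covers), the profiler is enabled with the following directive:

```ini
; profile every request
xdebug.profiler_enable = 1
```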

Please note that you cannot enable profiling at script runtime with ini_set(), because this would prevent any auto-prepend scripts from being profiled.

Since the profiling log is designed to be machine-readable, there are no additional
configuration settings for displaying additional information in the profiling log, as there are for the trace log.
But there are some general configuration settings that affect profiling. Most of them are similar to settings
we have already used together with trace logging.

First of all, xdebug writes profiling logs to /tmp by default, which causes problems
on Windows because this directory usually does not exist. You should therefore always specify an existing output directory in php.ini on Windows.
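For example, assuming xdebug 2 (the directory name below is just an illustration and must already exist):

```ini
xdebug.profiler_output_dir = "c:\traces"
```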

By default, xdebug overwrites an existing profiling log. You can configure xdebug to append to an existing profiling log instead of overwriting it by adding another directive to php.ini.

Sometimes, however, you don’t want to profile all files, yet turning on the profiler at runtime is problematic because of the auto_prepend files. Instead of permanently turning on the profiler, you can add a trigger directive to php.ini. You can then enable and disable profiling by passing the special GET or POST parameter XDEBUG_PROFILE to a PHP script. This will turn on profiling just for the one PHP script that receives the parameter. You need not set a value for XDEBUG_PROFILE; it is sufficient to append the parameter to the URL: test.php?XDEBUG_PROFILE.
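Assuming xdebug 2, the append and trigger behaviour described above corresponds to the following php.ini directives:

```ini
; append to an existing profiling log instead of overwriting it
xdebug.profiler_append = 1

; keep the profiler off by default, but let the XDEBUG_PROFILE
; GET/POST parameter turn it on for a single request
xdebug.profiler_enable = 0
xdebug.profiler_enable_trigger = 1
```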

Naming the profiling log

The default file name of an xdebug profiling log is cachegrind.out. followed by the process id.
Just like you can change the trace log name in php.ini, you can change the profiling log name with xdebug.profiler_output_name. This setting takes a string as argument that can contain various
format specifiers. The most important format specifiers are:

  • %p for the process id
  • %r for a random number
  • %u for a timestamp with microseconds
  • %H for the value of $_SERVER['HTTP_HOST']
  • %R for the value of $_SERVER['REQUEST_URI']
  • %s for the script name, including the full path, with slashes converted to underscores
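As an example, the following setting (assuming xdebug 2) names each profiling log after the host, the request URI, and a microsecond timestamp, so that logs from different requests do not overwrite each other:

```ini
xdebug.profiler_output_name = cachegrind.out.%H.%R.%u
```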

Please note that the %s modifier is only available for xdebug.profiler_output_name,
not for xdebug.trace_output_name.
If you want to know the current profiling log filename, you can call the function
xdebug_get_profiler_filename(), which returns the file name as a string.

Analyzing xdebug profiling logs

We have already mentioned above that we will need an additional tool to visualize the data from the
xdebug profiling logs. xdebug’s profiling logs are in a Cachegrind-compatible format. Cachegrind is a
cache profiler that is part of the Valgrind suite, a suite of tools
for debugging and profiling Linux software.
Cachegrind was originally used to locate cache misses, memory references and the instructions executed by
a program. Another tool of the Valgrind suite, Callgrind, records call graphs. A call graph shows how functions are
called by other functions. In the PHP context, we can use these tools to visualize and analyze profiling logs generated by xdebug.

The tool commonly used to work with xdebug profiling data is KCachegrind. KCachegrind is freely available under the GPL,
but runs only on Unix.
Luckily, there is a similar tool for Windows, WinCachegrind, which is also free software.
Let us have a look at the Windows version first.

WinCacheGrind: analyzing profiling logs on Windows

The current version of WinCachegrind dates back to 2005, which shows that WinCachegrind is not very actively maintained. If you look at WinCachegrind’s release notes, the authors themselves mention that the program is buggy and shows some strange behaviour.

I would therefore recommend using KCachegrind in a virtual machine running a recent Linux distribution like Ubuntu. There are various free virtualization products available for Windows, so there is no additional cost for setting up a dedicated virtual machine just for analyzing profiling data.
If you cannot use Unix or a virtual machine for whatever reason, you can still go with WinCachegrind, at least for a basic analysis of profiling data. WinCachegrind does not, however, support drawing call graphs like KCachegrind does.

The installation of WinCachegrind is extremely easy. Start the installer by double-clicking, accept the license, and you
are all set. Now you can start the program and open one of the cachegrind profiling logs created by xdebug.

By clicking on the clock or sigma icon, you can toggle between the display of absolute times in milliseconds and
a percentage view. The percentage view shows what percentage of the whole code block’s total execution time each function call has used. These percentages usually do not add up to 100%, because the code in the
code block itself also takes some time to execute.

Two very useful settings are Profiler -> Hide Fast Functions and Profiler -> Hide Library Functions.
The first switch hides functions that do not take long to execute. These are mostly built-in PHP functions,
as they do not account for a major part of the script runtime, but small and fast user functions are hidden as well.

The second setting, Profiler -> Hide Library Functions, hides all built-in PHP functions.
When you toggle one or both of these switches, you will see less unimportant data, which allows you to focus
on those parts of the code that are a better basis for optimizations.

The main window of WinCachegrind has two tabs, Line by line and Overall. Both tabs show the same information, but Overall aggregates the information for a better overview. Self time lists the execution time of the code in the current block, whereas Cumulative time (Cum.) also takes the execution time of all functions called from this block into account.

KCacheGrind: analyzing profiling logs on Unix

The Unix version, KCachegrind, offers much more functionality than WinCachegrind. KCachegrind visualizes the profiling data and draws call graphs.

First of all, we need to install KCachegrind.
The current release is 0.4.6. A newer version,
0.10.1, is available, but this version seems to be part of the full Valgrind package only.

If possible, use your system’s package manager to install the KCachegrind package.
KCachegrind uses GraphViz to draw the call graphs, so you must also install the GraphViz package if your
package manager does not automatically install it as a required dependency.

If you cannot find binary packages of KCachegrind, you can still compile KCachegrind by yourself.
After downloading the source code, run

./configure --prefix=/opt/kde3
make
make install

to configure, build, and install KCachegrind. As you can see, the configure script needs to know the path to the KDE libraries on your system. If you are not sure where the KDE libraries are located on your system, use

kde-config --prefix

to display the location of the KDE libraries.

After installation, you can start KCacheGrind from the command line with

/opt/kde3/bin/kcachegrind &

The tabular data displayed by KCachegrind is similar to WinCachegrind’s. You can also switch between absolute time
and percentage values. Some of the features KCachegrind offers do not apply to PHP code, though.
The following screenshot shows a call graph of phpMyAdmin in KCachegrind:

As you can see, the major part of the execution time in this example is taken up by a single require_once statement.
The following screenshot shows a visualization of the functions called from that code block.

In this code block, two more require_once statements are executed, which take about one half of the block’s execution time to complete.
By double-clicking on one of the boxes, you can drill down into the application’s profiling data.

Optimizing Your Code Based on Profiling Data

Always profile your application before you start optimizing. You might be tempted to start optimizing
a program where you think the optimization makes sense – but in most cases our guesses are just wrong.
Optimizations, in general, only make sense when a part of the program is responsible for a major part of the
overall execution time. There is no point in optimizing a program part that accounts for only a few milliseconds of execution time. Even if you speed up such a program part by more than 50%, the overall performance increase is not measurable.

Yet, if multiple instances of the program are run concurrently,
as is the case on a high-load web server with multiple Apache processes serving the requests,
you still might want to optimize parts of your program that do not account for a major part of the overall execution time.
In this case, optimizing will not make serving one individual request much faster, but will allow your server
to handle a higher load, since serving each individual request requires fewer resources.

When you look at the execution times in your profiling data, keep in mind that the absolute figures are less important than the relative ratio of the measured times. Measured on another system, the absolute times can be very different. But before you get into code optimization, there are a few things to keep in mind.

An important rule of thumb in optimization is that you should try to reduce the number of input/output operations first. I/O operations always take very long in comparison to computation, so reducing them can be a very efficient way of speeding up your programs. Removing one I/O call can instantly give you more speedup than spending a couple of hours optimizing code. So you should focus on I/O first, before you go for the code.

Still, before you invest time in optimizing your code, you should consider scaling the hardware. You can buy a bigger server, or benefit from PHP’s shared-nothing architecture, which allows you to easily increase performance by adding another server and splitting the load between the two.
Developer time is rather expensive compared to the price of a new server. And if you scale by hardware, you can be sure that you get an immediate speedup in return, without having to touch the existing PHP code. When a developer spends one or two days optimizing code, you can never tell how much speedup the optimization will gain. And, on top of that, you can’t really be sure that no bugs or security risks were introduced inadvertently.

Especially when dealing with web sites, statifying content is the best possible performance optimization. Imagine a high-traffic site where a PHP script creates the home page on every request, pulling information from a database or XML files. Instead of re-generating the page for every visitor, you can serve a static copy of it. If the page changes every couple of minutes, you can automatically re-create the static copy.
Even if statifying the whole page is not possible, because you need to include personalized content, you can still statify certain content blocks.
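As a minimal sketch of this idea (the file paths and the five-minute interval are just examples, and generate_home.php stands for whatever script builds the page dynamically):

```php
<?php
// serve a static copy of the home page, re-creating it
// only when the copy is older than five minutes
$cacheFile = '/var/www/static/home.html';
$maxAge    = 300; // seconds

if (!file_exists($cacheFile) || time() - filemtime($cacheFile) > $maxAge) {
    ob_start();                            // capture the dynamically built page
    include '/var/www/generate_home.php';  // the expensive, dynamic part
    file_put_contents($cacheFile, ob_get_clean());
}

readfile($cacheFile);                      // serve the static copy
```

Subsequent requests within the five-minute window are then served straight from the static file, without touching the database at all.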

Yet another level of performance optimization that does not require changing the PHP code is possible.
As we know, PHP is an interpreted language. This means that at runtime, every script is translated to an intermediate code that is then executed. This translation is repeated every time a script is called. This makes PHP – by design – slower in comparison to compiled languages like C or Java, where no code parsing occurs at runtime.
For PHP, you can use opcode caches that save and reuse the intermediate code, so the PHP code does not have to be parsed and translated every time a script is executed.

All this is not to say that there is not a time and place for optimizing PHP code. Some clever code optimizations can increase the performance a lot. Bear in mind, however, that changing the code always has a risk of introducing bugs and security issues. Even if you just change the behaviour in a harmless way, existing tests may fail, requiring care and attention. Also, do not forget that optimized code is more difficult to read.


Creating – and visualizing – a profiling log is the most important precondition for
PHP code optimization. You need to know where your application spends the most time,
otherwise you will potentially waste your time and energy by optimizing the program in the wrong spot.
So use profiling to find out where your application spends most of its time, and start optimizing
right there – if you have at least considered the other measures discussed above.

Next week, we will explore the debugging features of xdebug. xdebug can turn your PHP-enabled
server into a remote debugging server. Using a compatible debug client like Eclipse PDT, you can debug your script
without changing a single line of code, set breakpoints, step through the code, or watch how and when variable values change.

So, make sure you check back here next week for the fourth part in this series of xdebug articles.
Until then: Happy profiling with xdebug!

About the author:

Stefan Priebsch has been solving IT problems for over 25 years.
He is founder and CEO of e-novative GmbH, one of the first German IT
consultancies offering PHP-based solutions.
Stefan holds a university degree in computer science,
is an internationally recognized PHP expert, book author, trainer and consultant.
You can reach him at
