3.1 Getting Started
The information presented here provides a good and common basis for many profiling tasks, but there are more features that you may want to leverage.
These are covered in subsequent sections in this chapter.
3.1.1 The Example Program
Throughout this guide we use the same example C code that implements the multiplication of an m by n matrix by a vector of length n. The result is stored in a vector of length m. The algorithm has been parallelized using POSIX Threads, or Pthreads for short.
The code was built using the gcc compiler and the name of the executable is mxv-pthreads.
The matrix sizes can be set through the -m and -n options. The
number of threads is set with the -t option. These are additional
threads that are used in the multiplication. To increase the duration of
the run, the computations are executed repeatedly.
This is an example that multiplies an 8000 by 4000 matrix with a vector of length 4000. Although this is a multithreaded application, initially we will be using a single thread. Later on we will show examples using multiple threads.
$ ./mxv-pthreads -m 8000 -n 4000 -t 1
mxv: error check passed - rows = 8000 columns = 4000 threads = 1
$
The program performs an internal check to verify that the computed results are correct. The result of this check is printed, as well as the matrix sizes and the number of threads used.
3.1.2 A First Profile
The first step is to collect the performance data. It is important to remember that much more information is gathered than may be shown by default. Often a single data collection run is sufficient to get a lot of insight.
The gprofng collect app command is used for the data collection. Nothing needs to be
changed in the way the application is executed. The only difference is that it
is now run under control of the tool, as shown below:
$ gprofng collect app ./mxv-pthreads -m 8000 -n 4000 -t 1
This produces the following output:
Creating experiment directory test.1.er (Process ID: 2749878) ...
mxv: error check passed - rows = 8000 columns = 4000 threads = 1
We see a message that an experiment directory with the name test.1.er has been created. The process id is also echoed. The application completes as usual and we have our first experiment directory that can be analyzed.
The tool we use for this is called gprofng display text. It takes the name of
the experiment directory as an argument.
If invoked this way, the tool starts in the interactive interpreter mode. While in this environment, commands can be given and the tool responds. This is illustrated below:
$ gprofng display text test.1.er
Warning: History and command editing is not supported on this system.
(gprofng-display-text) quit
$
While useful in certain cases, we prefer to use this tool in command line mode by specifying the commands to be issued when invoking the tool. The way to do this is to prepend the command(s) with a hyphen (‘-’) if used on the command line.
Since this makes the command appear to be an option, they are also sometimes referred to as such, but technically they are commands. This is the terminology we will use in this user guide, but for convenience the commands are also listed as options in the index.
For example,
below we use the functions command to request a list of the functions
that have been executed, plus their respective CPU times:
$ gprofng display text -functions test.1.er
Functions sorted by metric: Exclusive Total CPU Time

Excl. Total   Incl. Total    Name
CPU           CPU
 sec.      %   sec.      %
9.367 100.00  9.367 100.00   <Total>
8.926  95.30  8.926  95.30   mxv_core
0.210   2.24  0.420   4.49   init_data
0.080   0.85  0.210   2.24   drand48
0.070   0.75  0.130   1.39   erand48_r
0.060   0.64  0.060   0.64   __drand48_iterate
0.010   0.11  0.020   0.21   _int_malloc
0.010   0.11  0.010   0.11   sysmalloc
0.      0.    8.926  95.30   <static>@0x47960 (<libgp-collector.so>)
0.      0.    0.440   4.70   __libc_start_main
0.      0.    0.020   0.21   allocate_data
0.      0.    8.926  95.30   driver_mxv
0.      0.    0.440   4.70   main
0.      0.    0.020   0.21   malloc
0.      0.    8.926  95.30   start_thread
As easy and simple as these steps are, we do have a first profile of our program!
There are five columns. The first four contain the “Total CPU Time”, which is the sum of the user and system time. See Inclusive and Exclusive Metrics for an explanation of “exclusive” and “inclusive” times.
The first line echoes the metric that is used to sort the output. By default,
this is the exclusive CPU time, but through the sort command, the
sort metric can be changed by the user.
Next, there are four columns with the exclusive and inclusive CPU times and the respective percentages. This is followed by the name of the function.
The function with the name <Total> is not a user function. It is a
pseudo function introduced by gprofng. It is used to display the
accumulated measured metric values. In this example, we see that the total
CPU time of this job was 9.367 seconds and it is scaled to 100%. All
other percentages in the same column are relative to this number.
With 8.926 seconds, function mxv_core takes 95.30% of the
total time and is by far the most time consuming function.
The exclusive and inclusive metrics are identical, which means that it
is a leaf function not calling any other functions.
The next function in the list is init_data. Although the CPU time spent in this part is modest at 4.49%, this is an interesting entry because
the inclusive CPU time of 0.420 seconds is twice the exclusive CPU time
of 0.210 seconds. Clearly this function calls at least one other function,
and collectively these calls take 0.210 seconds.
Below we show the call tree feature that provides more details on the call
structure of the application.
The function <static>@0x47960 (<libgp-collector.so>) looks odd and is
certainly not familiar. It is one of the internal functions used by
gprofng collect app and can be ignored. Also, while the inclusive time is high,
the exclusive time is zero. This means that no CPU time is spent in this
function itself and it does not affect the performance.
The question is how we know where this function originates from. There are several commands to dig deeper and get more details on a function. See Information on Load Objects.
3.1.3 The Source Code View
In general, the tuning efforts are best focused on the most time consuming
part(s) of an application. In this case that is easy, since over 95% of
the total CPU time is spent in function mxv_core.
It is now time to dig deeper and look
at the metrics distribution at the source code level. Since we measured
CPU times, these are the metrics shown.
The source command is used to accomplish this. It takes the name of the
function, not the source filename, as an argument. This is demonstrated
below, where the gprofng display text command is used to show the annotated
source listing of function mxv_core.
Be aware that when using the gcc compiler, the source code has to
be compiled with the -g option in order for the source code feature
to work. Otherwise, the locations cannot be determined. For other compilers,
we recommend checking the documentation for a similar option.
Below the command to display the source code of a function is shown. Since at
this point we are primarily interested in the timings only, we use the
metrics command to request the exclusive and inclusive total CPU
timings only. See Display and Define the Metrics for more information
on how to define the metrics to be displayed.
$ gprofng display text -metrics ei.totalcpu -source mxv_core test.1.er
The output is shown below. It has been somewhat modified to fit the formatting constraints and reduce the number of lines.
Current metrics: e.totalcpu:i.totalcpu:name
Current Sort Metric: Exclusive Total CPU Time ( e.totalcpu )
Source file: <apath>/mxv.c
Object file: mxv-pthreads (found as test.1.er/archives/...)
Load Object: mxv-pthreads (found as test.1.er/archives/...)
Excl. Incl.
Total Total
CPU sec. CPU sec.
<lines deleted>
<Function: mxv_core>
43. void __attribute__ ((noinline))
mxv_core (int64_t row_index_start,
44. int64_t row_index_end,
45. int64_t m,
46. int64_t n,
47. double **restrict A,
48. double *restrict b,
49. double *restrict c)
50. {
0. 0. 50. {
0. 0. 51. for (int64_t i=row_index_start;
i<=row_index_end; i++)
52. {
0. 0. 53. double row_sum = 0.0;
## 4.613 4.613 54. for (int64_t j=0; j<n; j++)
## 4.313 4.313 55. row_sum += A[i][j] * b[j];
0. 0. 56. c[i] = row_sum;
57. }
0. 0. 58. }
The first line echoes the metrics that have been selected. The second line is not very meaningful when looking at the source code listing, but it shows the metric that is used to sort the data.
The next three lines provide information on the location of the source file, the object file and the load object (See Load Objects and Functions).
Function mxv_core is part of a source file that has other functions
as well. These functions will be shown with the values for the metrics, but
for lay-out purposes they have been removed in the output shown above.
The header is followed by the annotated source code listing. The selected
metrics are shown first, followed by a source line number, and the source code.
The most time consuming line(s) are marked with the ## symbol. In
this way they are easier to identify and find with a search.
What we see is that all of the time is spent in lines 54-55.
A related command that sometimes comes in handy as well is called lines.
It displays a list of the source lines and their metrics, ordered according
to the current sort metric (See Sorting the Performance Data).
Below are the command and the output. For lay-out reasons, only the top 10 lines are shown here and the last part of the text on some lines has been replaced by dots. The full text is ‘instructions without line numbers’ and means that the line number information for that function was not found.
$ gprofng display text -lines test.1.er
Lines sorted by metric: Exclusive Total CPU Time

Excl. Total   Incl. Total    Name
CPU           CPU
 sec.      %   sec.      %
9.367 100.00  9.367 100.00   <Total>
4.613  49.25  4.613  49.25   mxv_core, line 54 in "mxv.c"
4.313  46.05  4.313  46.05   mxv_core, line 55 in "mxv.c"
0.160   1.71  0.370   3.95   init_data, line 118 in "manage_data.c"
0.080   0.85  0.210   2.24   <Function: drand48, instructions ...>
0.070   0.75  0.130   1.39   <Function: erand48_r, instructions ...>
0.060   0.64  0.060   0.64   <Function: __drand48_iterate, ...>
0.040   0.43  0.040   0.43   init_data, line 124 in "manage_data.c"
0.010   0.11  0.020   0.21   <Function: _int_malloc, instructions ...>
0.010   0.11  0.010   0.11   <Function: sysmalloc, instructions ...>
What this overview immediately highlights is that the third most time consuming source line takes only 0.370 seconds of inclusive time. This is a mere 3.95% of the total, so this branch of the code hardly impacts the performance.
3.1.4 The Disassembly View
The source view is very useful to obtain more insight into where the time is spent, but sometimes this is not sufficient. The disassembly view provides more details, since it shows the metrics at the instruction level.
This view is displayed with the
disasm
command and as with the source view, it displays an annotated listing. In this
case it shows the instructions with the metrics, interleaved with the
source lines. The
instructions have a reference in square brackets ([ and ])
to the source line they correspond to.
We again focus on the timings only and set the metrics accordingly:
$ gprofng display text -metrics ei.totalcpu -disasm mxv_core test.1.er
Current metrics: e.totalcpu:i.totalcpu:name
Current Sort Metric: Exclusive Total CPU Time ( e.totalcpu )
Source file: <apath>/src/mxv.c
Object file: mxv-pthreads (found as test.1.er/archives/...)
Load Object: mxv-pthreads (found as test.1.er/archives/...)
Excl. Incl.
Total Total
CPU sec. CPU sec.
<lines deleted>
43. void __attribute__ ((noinline))
mxv_core (int64_t row_index_start,
44. int64_t row_index_end,
45. int64_t m,
46. int64_t n,
47. double **restrict A,
48. double *restrict b,
49. double *restrict c)
50. {
<Function: mxv_core>
0. 0. [50] 401d56: mov 0x8(%rsp),%r10
51. for (int64_t i=row_index_start;
i<=row_index_end; i++)
0. 0. [51] 401d5b: cmp %rsi,%rdi
0. 0. [51] 401d5e: jg 0x47
0. 0. [51] 401d60: add $0x1,%rsi
0. 0. [51] 401d64: jmp 0x36
52. {
53. double row_sum = 0.0;
54. for (int64_t j=0; j<n; j++)
55. row_sum += A[i][j] * b[j];
0. 0. [55] 401d66: mov (%r8,%rdi,8),%rdx
0. 0. [54] 401d6a: mov $0x0,%eax
0. 0. [53] 401d6f: pxor %xmm1,%xmm1
0.110 0.110 [55] 401d73: movsd (%rdx,%rax,8),%xmm0
1.921 1.921 [55] 401d78: mulsd (%r9,%rax,8),%xmm0
2.282 2.282 [55] 401d7e: addsd %xmm0,%xmm1
## 4.613 4.613 [54] 401d82: add $0x1,%rax
0. 0. [54] 401d86: cmp %rax,%rcx
0. 0. [54] 401d89: jne 0xffffffffffffffea
56. c[i] = row_sum;
0. 0. [56] 401d8b: movsd %xmm1,(%r10,%rdi,8)
0. 0. [51] 401d91: add $0x1,%rdi
0. 0. [51] 401d95: cmp %rsi,%rdi
0. 0. [51] 401d98: je 0xd
0. 0. [53] 401d9a: pxor %xmm1,%xmm1
0. 0. [54] 401d9e: test %rcx,%rcx
0. 0. [54] 401da1: jg 0xffffffffffffffc5
0. 0. [54] 401da3: jmp 0xffffffffffffffe8
57. }
58. }
0. 0. [58] 401da5: ret
For each instruction, the timing values are given and we can immediately
identify the most expensive instructions. As with the source level view,
these are marked with the ## symbol.
It comes as no surprise that the time consuming instructions originate from the source code at lines 54-55. One thing to note is that the source line numbers no longer appear in sequential order. This is because the compiler has re-ordered the instructions as part of the code optimizations it has performed.
As illustrated below and similar to the lines command, we can get
an overview of the instructions executed by using the
pcs
command.
Below are the command and the output, which again has been restricted to 10 lines. As before, some lines have been shortened for lay-out purposes.
$ gprofng display text -pcs test.1.er
PCs sorted by metric: Exclusive Total CPU Time

Excl. Total   Incl. Total    Name
CPU           CPU
 sec.      %   sec.      %
9.367 100.00  9.367 100.00   <Total>
4.613  49.25  4.613  49.25   mxv_core + 0x0000002C, line 54 in "mxv.c"
2.282  24.36  2.282  24.36   mxv_core + 0x00000028, line 55 in "mxv.c"
1.921  20.51  1.921  20.51   mxv_core + 0x00000022, line 55 in "mxv.c"
0.150   1.60  0.150   1.60   init_data + 0x000000AC, line 118 in ...
0.110   1.18  0.110   1.18   mxv_core + 0x0000001D, line 55 in "mxv.c"
0.040   0.43  0.040   0.43   drand48 + 0x00000022
0.040   0.43  0.040   0.43   init_data + 0x000000F1, line 124 in ...
0.030   0.32  0.030   0.32   __drand48_iterate + 0x0000001E
0.020   0.21  0.020   0.21   __drand48_iterate + 0x00000038
What we see is that the top three instructions take 94% of the total CPU time and any optimizations should focus on this part of the code.
3.1.5 Display and Define the Metrics
The metrics shown by gprofng display text are useful, but there is more recorded
than displayed by default. We can customize the values shown by defining the
metrics ourselves.
There are two commands related to changing the metrics shown:
metric_list and
metrics.
The first command shows the currently selected metrics, plus all the metrics that have been stored as part of the experiment. The second command may be used to define the metric list.
This is the way to get the information about the metrics:
$ gprofng display text -metric_list test.1.er
This is the output:
Current metrics: e.%totalcpu:i.%totalcpu:name
Current Sort Metric: Exclusive Total CPU Time ( e.%totalcpu )
Available metrics:
Exclusive Total CPU Time: e.%totalcpu
Inclusive Total CPU Time: i.%totalcpu
Size: size
PC Address: address
Name: name
This shows the metrics that are currently used, the metric that is used to sort the data and all the metrics that have been recorded, but are not necessarily shown.
In this case, the current metrics are set to the exclusive and inclusive total CPU times, the respective percentages, and the name of the function, or load object.
The metrics command is used to define the metrics that need to be
displayed.
For example, to swap the exclusive and inclusive metrics, use the following
metric definition: i.%totalcpu:e.%totalcpu.
Since the metrics can be tailored for different views, there is also a way
to reset them to the default. This is done through the special keyword
default for the metrics definition (-metrics default).
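For example, the two illustrative commands below first display the function overview with the inclusive time shown before the exclusive time, and then restore the default metrics again:

$ gprofng display text -metrics i.%totalcpu:e.%totalcpu -functions test.1.er
$ gprofng display text -metrics default -functions test.1.er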
3.1.6 Customization of the Output
With the information just given, the function overview can be customized. For the sake of the example, we would like to display the name of the function first, followed only by the exclusive CPU time, given as an absolute number and a percentage.
Note that the commands are parsed in order of appearance. This is why we need to define the metrics before requesting the function overview:
$ gprofng display text -metrics name:e.%totalcpu -functions test.1.er
Current metrics: name:e.%totalcpu
Current Sort Metric: Exclusive Total CPU Time ( e.%totalcpu )
Functions sorted by metric: Exclusive Total CPU Time
Name Excl. Total
CPU
sec. %
<Total> 9.367 100.00
mxv_core 8.926 95.30
init_data 0.210 2.24
drand48 0.080 0.85
erand48_r 0.070 0.75
__drand48_iterate 0.060 0.64
_int_malloc 0.010 0.11
sysmalloc 0.010 0.11
<static>@0x47960 (<libgp-collector.so>) 0. 0.
__libc_start_main 0. 0.
allocate_data 0. 0.
driver_mxv 0. 0.
main 0. 0.
malloc 0. 0.
start_thread 0. 0.
This was a first and simple example of how to customize the output. Note that we did not rerun our profiling job and merely modified the display settings. Below we will show other, more advanced, examples of customization.
3.1.7 Name the Experiment Directory
When using gprofng collect app, the default names for experiments work fine, but
they are quite generic. It is often more convenient to select a more
descriptive name. For example, one that reflects conditions for the experiment
conducted, like the number of threads used.
For this, the mutually exclusive -o and -O options come in handy.
Both may be used to provide a name for the experiment directory, but the
behaviour of gprofng collect app is different.
With the ‘-o’ option, an existing experiment directory is not overwritten. Any directory with the same name either needs to be renamed, moved, or removed, before the experiment can be conducted.
This is in contrast with the behaviour for the ‘-O’ option. Any existing directory with the same name is silently overwritten.
Be aware that the name of the experiment directory has to end with .er.
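For illustration, the two sketches below show the difference; the directory name mxv.1.thr.er is the one also used in the examples that follow:

$ gprofng collect app -o mxv.1.thr.er ./mxv-pthreads -m 8000 -n 4000 -t 1
$ gprofng collect app -O mxv.1.thr.er ./mxv-pthreads -m 8000 -n 4000 -t 1

The first command refuses to run if a directory called mxv.1.thr.er already exists, while the second command silently overwrites such a directory.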
3.1.8 Control the Number of Lines in the Output
The limit <n> command can be used to control the number of
lines printed in various views. For example, it impacts the function view, but
it also takes effect for other display commands, such as lines.
The argument <n> should be a positive integer number. It sets the number of lines in the (function) view. A value of zero resets the limit to the default.
Be aware that the pseudo-function <Total> counts as a regular function.
For example limit 10 displays nine user level functions.
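As an illustration, the command below prints the pseudo-function <Total> plus the nine most time consuming user level functions of our first experiment:

$ gprofng display text -limit 10 -functions test.1.er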
3.1.9 Sorting the Performance Data
The sort <key> command sets the key to be used when sorting the
performance data.
The key is a valid metric definition, but the
visibility field
(See Metric Definitions)
in the metric
definition is ignored, since this does not affect the outcome of the sorting
operation.
For example if the sort key is set to e.totalcpu, the values
will be sorted in descending order with respect to the exclusive total
CPU time.
The data can be sorted in reverse order by prepending the metric definition
with a minus (‘-’) sign. For example sort -e.totalcpu.
A default metric for the sort operation has been defined and since this is
a persistent command, this default can be restored with default as
the key (sort default).
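For example, the illustrative command below sorts the function overview by the inclusive total CPU time instead; the reverse form (sort -e.totalcpu) is usually easiest to put in a script:

$ gprofng display text -sort i.totalcpu -functions test.1.er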
3.1.10 Scripting
The list with commands for gprofng display text can be very long. This is tedious
and also error prone. Luckily, there is an easier and elegant way to control
the output of this tool.
Through the script command, the name of a file with commands can be
passed in. These commands are parsed and executed as if they appeared on
the command line in the same order as encountered in the file. The commands
in this script file can actually be mixed with commands on the command line
and multiple script files may be used.
The difference between the commands in the script file and those used on the
command line is that the latter require a leading dash (‘-’) symbol.
Comment lines in a script file are supported. They need to start with the ‘#’ symbol.
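As a minimal sketch, assuming a script file called my-commands, a comment line and two commands could be combined as follows and passed to the tool:

$ cat my-commands
# Show the top of the function overview
limit 5
functions
$ gprofng display text -script my-commands test.1.er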
3.1.11 A More Elaborate Example
With the information presented so far, we can customize our data gathering and display commands.
As an example, we would like to use mxv.1.thr.er as the name for the experiment directory. In this way, the name of the algorithm and the number of threads that were used are included in the name. We also don’t mind overwriting an existing experiment directory with the same name.
All that needs to be done is to use the
‘-O’
option, followed by the directory name of choice when running gprofng collect app:
$ exe=mxv-pthreads
$ m=8000
$ n=4000
$ gprofng collect app -O mxv.1.thr.er ./$exe -m $m -n $n -t 1
Since we want to customize the profile and prefer to keep the command line short, the commands to generate the profile are put into a file with the name my-script:
$ cat my-script
# This is my first gprofng script
# Set the metrics
metrics i.%totalcpu:e.%totalcpu:name
# Use the exclusive time to sort
sort e.totalcpu
# Limit the function list to 5 lines
limit 5
# Show the function list
functions
This script file is specified as input to the gprofng display text command
that is used to display the performance information stored in experiment
directory mxv.1.thr.er:
$ gprofng display text -script my-script mxv.1.thr.er
This command produces the following output:
# This is my first gprofng script
# Set the metrics
Current metrics: i.%totalcpu:e.%totalcpu:name
Current Sort Metric: Exclusive Total CPU Time ( e.%totalcpu )
# Use the exclusive time to sort
Current Sort Metric: Exclusive Total CPU Time ( e.%totalcpu )
# Limit the function list to 5 lines
Print limit set to 5
# Show the function list
Functions sorted by metric: Exclusive Total CPU Time

Incl. Total   Excl. Total    Name
CPU           CPU
 sec.      %   sec.      %
9.703 100.00  9.703 100.00   <Total>
9.226  95.09  9.226  95.09   mxv_core
0.455   4.69  0.210   2.17   init_data
0.169   1.75  0.123   1.26   erand48_r
0.244   2.52  0.075   0.77   drand48
In the first part of the output the comment lines in the script file are echoed. These are interleaved with an acknowledgement message for the commands.
This is followed by a profile consisting of 5 lines only. For both metrics,
the percentages plus the timings are given. The numbers are sorted with
respect to the exclusive total CPU time. Although this is the default, for
demonstration purposes we use the sort command to explicitly define
the metric for the sort.
While we executed the same job as before and only changed the name of the
experiment directory, the results are somewhat different. This is sampling
in action. The numbers are not all that different though.
It is seen that function mxv_core is responsible for
95% of the CPU time and init_data takes just under 5%.
3.1.12 The Call Tree
The call tree shows the dynamic structure of the application by displaying the functions executed and their parent. The CPU time attributed to each function is shown as well. This view helps to find the most expensive execution path in the program.
This feature is enabled through the calltree command. For example,
this is how to get the call tree for our current experiment:
$ gprofng display text -calltree mxv.1.thr.er
This displays the following structure:
Functions Call Tree. Metric: Attributed Total CPU Time

Attr. Total    Name
CPU
 sec.      %
9.703 100.00   +-<Total>
9.226  95.09     +-start_thread
9.226  95.09     |  +-<static>@0x47960 (<libgp-collector.so>)
9.226  95.09     |    +-driver_mxv
9.226  95.09     |      +-mxv_core
0.477   4.91     +-__libc_start_main
0.477   4.91       +-main
0.455   4.69         +-init_data
0.244   2.52         |  +-drand48
0.169   1.75         |    +-erand48_r
0.047   0.48         |      +-__drand48_iterate
0.021   0.22         +-allocate_data
0.021   0.22         |  +-malloc
0.021   0.22         |    +-_int_malloc
0.006   0.06         |      +-sysmalloc
0.003   0.03         |        +-__default_morecore
0.003   0.03         |          +-sbrk
0.003   0.03         |            +-brk
0.001   0.01         +-pthread_create
0.001   0.01           +-__pthread_create_2_1
At first sight this may not be what is expected and some explanation is in order.
The top function is the pseudo-function <Total> that we have seen
before. It is introduced and shown here to provide the total value of the
metric(s).
We also see function <static>@0x47960 in the call tree and apparently
it is from libgp-collector.so, a library that is internal to
gprofng.
The <static> marker, followed by the program counter, is shown if the
name of the function cannot be found. This function is part of the
implementation of the data collection process and should be hidden from the
user. Hiding it is part of a planned future enhancement.
In general, if a view has a function that does not appear to be part of the
user code, or seems odd anyhow, the objects and fsingle
commands are very useful
to find out more about load objects in general, but also to help identify
an unknown entry in the function overview. See Load Objects and Functions.
Another thing to note is that there are two main branches. The one under
<static>@0x47960 and the second one under __libc_start_main.
This reflects the fact that this is a multithreaded program and the
threaded part shows up as a separate branch in the call tree.
The way to interpret this structure is as follows. The program starts
under control of __libc_start_main. This executes the main program
called main, which at the top level executes functions
init_data, allocate_data, and pthread_create.
The latter function creates and executes the additional thread(s).
For this multithreaded part of the code, we need to look at the branch
under function start_thread that calls the driver code for the
matrix-vector multiplication (driver_mxv), which executes the function
that performs the actual multiplication (mxv_core).
There are two things worth noting for the call tree feature:
- This is a dynamic tree and since sampling is used, it most likely looks slightly different across seemingly identical profile runs. In case the run times are short, it is worth considering using a higher sampling rate through the ‘-p’ option. For example use ‘-p hi’ to increase the sampling rate.
- In case hardware event counters have been enabled (See Profile Hardware Event Counters), these values are also displayed in the call tree view.
3.1.13 More Information on the Experiment
The experiment directory not only contains performance related data. Several system characteristics, the profiling command executed, plus some global performance statistics are stored and can be displayed.
The header command displays information about the experiment(s).
For example, this is how the command is used to extract this data from our
experiment directory:
$ gprofng display text -header mxv.1.thr.er
The above command prints the following information. Note that some of the
lay-out and the information has been modified. Directory paths have been
replaced by <apath>, for example. Textual changes are
marked with the ‘<’ and ‘>’ symbols.
Experiment: mxv.1.thr.er
No errors
No warnings
Archive command ` /usr/bin/gprofng-archive -n -a on --outfile
<apath>/archive.log <apath>/mxv.1.thr.er'
Target command (64-bit): './mxv-pthreads -m 8000 -n 4000 -t 1'
Process pid 2750071, ppid 2750069, pgrp 2749860, sid 2742080
Current working directory: <apath>
Collector version: `2.40.00'; experiment version 12.4 (64-bit)
Host `<the-host-name>', OS `Linux <version>', page size 4096,
architecture `x86_64'
4 CPUs, clock speed 2294 MHz.
Memory: 3506491 pages @ 4096 = 13697 MB.
Data collection parameters:
Clock-profiling, interval = 997 microsecs.
Periodic sampling, 1 secs.
Follow descendant processes from: fork|exec|combo
Experiment started <date and time>
Experiment Ended: 9.801216173
Data Collection Duration: 9.801216173
The output above may assist in troubleshooting, or in verifying some of the operational conditions, and we recommend including this command when generating a profile.
Related to this command there is a useful option to record comment(s) in
an experiment.
To this end, use the ‘-C’ option on the gprofng collect app tool to
specify a comment string. Up to ten comment lines can be included.
These comments are displayed with the header command on
the gprofng display text tool.
The overview command displays information on the experiment(s) and
also shows a summary of the values for the metric(s) used. This is an example
of how to use it on the newly created experiment directory:
$ gprofng display text -overview mxv.1.thr.er
Experiment(s):
Experiment :mxv.1.thr.er
Target : './mxv-pthreads -m 8000 -n 4000 -t 1'
Host : <hostname> (<ISA>, Linux <version>)
Start Time : <date and time>
Duration : 9.801 Seconds
Metrics:
Experiment Duration (Seconds): [9.801]
Clock Profiling
[X]Total CPU Time - totalcpu (Seconds): [*9.703]
Notes: '*' indicates hot metrics, '[X]' indicates currently enabled
metrics.
The metrics command can be used to change selections. The
metric_list command lists all available metrics.
This command provides a dashboard overview that helps to easily identify where the time is spent and in case hardware event counters are used, it shows their total values.
3.1.14 Control the Sampling Frequency
So far we did not go into details on the frequency of the sampling process, but in some cases it is useful to change the default of 10 milliseconds.
The advantage of increasing the sampling frequency is that functions that do not take much time per invocation are more accurately captured. The downside is that more data is gathered. This has an impact on the overhead of the collection process and more disk space is required.
In general this is not an immediate concern, but with heavily threaded applications that run for an extended period of time, increasing the frequency may have a more noticeable impact.
The -p option on the gprofng collect app tool is used to enable or disable
clock based profiling, or to explicitly set the sampling rate.
This option takes one of the following keywords:
off
Disable clock based profiling.

on
Enable clock based profiling with a per thread sampling interval of 10 ms. This is the default.

lo
Enable clock based profiling with a per thread sampling interval of 100 ms.

hi
Enable clock based profiling with a per thread sampling interval of 1 ms.

value
Enable clock based profiling with a per thread sampling interval of value.
It may seem unnecessary to have an option to disable clock based profiling,
but there is a good reason to support this.
By default, clock profiling is enabled when conducting hardware event counter
experiments (See Profile Hardware Event Counters).
With the -p off option, this can be disabled.
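For example, the sketch below (the experiment directory name is just an illustration) collects the default set of hardware event counters while clock based profiling is turned off:

$ gprofng collect app -O mxv.hwc.er -h auto -p off ./mxv-pthreads -m 8000 -n 4000 -t 1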
If an explicit value is set for the sampling interval, the number can be an integer or a floating-point number. A suffix of ‘u’ for microseconds, or ‘m’ for milliseconds is supported. If no suffix is used, the value is assumed to be in milliseconds. For example, the following command sets the sampling interval to 5123.4 microseconds:
$ gprofng collect app -p 5123.4u ./mxv-pthreads -m 8000 -n 4000 -t 1
If the value is smaller than the clock profiling minimum, a warning message is issued and it is set to the minimum. In case it is not a multiple of the clock profiling resolution, it is silently rounded down to the nearest multiple of the clock resolution. If the value exceeds the clock profiling maximum, is negative, or zero, an error is reported.
Note that the header command echoes the sampling rate used.
3.1.15 Information on Load Objects
It may happen that the function view shows a function that is not known to the user. This can easily happen with library functions for example. Luckily there are three commands that come in handy then.
These commands are objects, fsingle, and
fsummary. They provide details on
load objects (See Load Objects and Functions).
The objects command lists all load objects that have been referenced
during the performance experiment.
Below we show the command and the result for our profile job. Like before,
some path names in the output have been shortened and replaced by the
<apath> symbol that represents an absolute directory path.
$ gprofng display text -objects mxv.1.thr.er
The output includes the name and path of the target executable:
<Unknown> (<Unknown>)
<mxv-pthreads> (<apath>/mxv-pthreads)
<libdl-2.28.so> (/usr/lib64/libdl-2.28.so)
<librt-2.28.so> (/usr/lib64/librt-2.28.so)
<libc-2.28.so> (/usr/lib64/libc-2.28.so)
<libpthread-2.28.so> (/usr/lib64/libpthread-2.28.so)
<libm-2.28.so> (/usr/lib64/libm-2.28.so)
<libgp-collector.so> (/usr/lib64/gprofng/libgp-collector.so)
<ld-2.28.so> (/usr/lib64/ld-2.28.so)
<DYNAMIC_FUNCTIONS> (DYNAMIC_FUNCTIONS)
The fsingle command may be used to get more details on a specific
entry in, say, the function view. For example, the command below provides
additional information on the pthread_create function shown in the
function overview.
$ gprofng display text -fsingle pthread_create mxv.1.thr.er
Below is the output from this command. It has been somewhat modified to match the display requirements.
+ gprofng display text -fsingle pthread_create mxv.1.thr.er
pthread_create
Exclusive Total CPU Time: 0. ( 0. %)
Inclusive Total CPU Time: 0.001 ( 0.0%)
Size: 258
PC Address: 8:0x00049f60
Source File: (unknown)
Object File: (unknown)
Load Object: /usr/lib64/gprofng/libgp-collector.so
Mangled Name:
Aliases:
In this table we not only see how much time was spent in this function, we also see where it originates from. In addition to this, the size and start address are given as well. If the source code location is known it is also shown here.
The related fsummary command displays the same information as
fsingle, but for all functions in the function overview,
including <Total>:
$ gprofng display text -fsummary mxv.1.thr.er
Functions sorted by metric: Exclusive Total CPU Time
<Total>
Exclusive Total CPU Time: 9.703 (100.0%)
Inclusive Total CPU Time: 9.703 (100.0%)
Size: 0
PC Address: 1:0x00000000
Source File: (unknown)
Object File: (unknown)
Load Object: <Total>
Mangled Name:
Aliases:
mxv_core
Exclusive Total CPU Time: 9.226 ( 95.1%)
Inclusive Total CPU Time: 9.226 ( 95.1%)
Size: 80
PC Address: 2:0x00001d56
Source File: <apath>/src/mxv.c
Object File: mxv.1.thr.er/archives/mxv-pthreads_ss_pf53V__5
Load Object: <apath>/mxv-pthreads
Mangled Name:
Aliases:
... etc ...
3.2 Support for Multithreading
In this chapter the support for multithreading is introduced and discussed. As is shown below, nothing needs to be changed when collecting the performance data.
The difference is that additional commands are available to get more information on the multithreading details, plus that several filters allow the user to zoom in on specific threads.
3.2.1 Creating a Multithreading Experiment
We demonstrate the support for multithreading using the same code and settings as before, but this time 2 threads are used:
$ exe=mxv-pthreads
$ m=8000
$ n=4000
$ gprofng collect app -O mxv.2.thr.er ./$exe -m $m -n $n -t 2
First of all, in so far as gprofng is concerned, no changes are needed.
Nothing special is required to profile a multithreaded job when using
gprofng.
The same is true when displaying the performance results. The same commands that were used before work unmodified. For example, this is all that is needed to get a function overview:
$ gprofng display text -limit 5 -functions mxv.2.thr.er
This produces the following familiar looking output:
Print limit set to 5

Functions sorted by metric: Exclusive Total CPU Time

Excl. Total   Incl. Total    Name
CPU           CPU
 sec.      %   sec.      %
9.464 100.00  9.464 100.00   <Total>
8.961  94.69  8.961  94.69   mxv_core
0.224   2.37  0.469   4.95   init_data
0.105   1.11  0.177   1.88   erand48_r
0.073   0.77  0.073   0.77   __drand48_iterate
3.2.2 Commands Specific to Multithreading
The function overview shown above presents the results aggregated over all the threads. The interesting new element is that we can also look at the performance data for the individual threads.
The thread_list command displays how many threads have been used:
$ gprofng display text -thread_list mxv.2.thr.er
This produces the following output, showing that three threads have been used:
Exp Sel Total
=== === =====
  1 all     3
The output confirms there is one experiment and that by default all threads are selected.
It may seem surprising to see three threads here, since we used the
-t 2 option, but it is common for a Pthreads program to use one
additional thread.
Typically, there is one main thread that runs from start to finish.
It handles the sequential portions of the code, as well as thread
management related tasks.
It is no different in the example code. At some point, the main thread
creates and activates the two threads that perform the multiplication
of the matrix with the vector. Upon completion of this computation,
the main thread continues.
The threads command is simple, yet very powerful. It shows the
total value of the metrics for each thread.
$ gprofng display text -threads mxv.2.thr.er
The command above produces the following overview:
Objects sorted by metric: Exclusive Total CPU Time

Excl. Total    Name
CPU
 sec.      %
9.464 100.00   <Total>
4.547  48.05   Process 1, Thread 3
4.414  46.64   Process 1, Thread 2
0.502   5.31   Process 1, Thread 1
The first line gives the total CPU time accumulated over the threads selected. This is followed by the metric value(s) for each thread.
From this it is clear that the main thread is responsible for a little over 5% of the total CPU time, while the other two threads take 47-48% each.
This view is ideally suited to verify if there are any load balancing issues and also to find the most time consuming thread(s).
While useful, often more information than this is needed. This is
where the thread selection filter comes in. Through the
thread_select
command, one or more threads may be selected.
See The Selection List for how to define the selection list.
Since it is most common to use this command in a script, we do so here as well. Below is the script we are using:
# Define the metrics
metrics e.%totalcpu
# Limit the output to 5 lines
limit 5
# Get the function overview for thread 1
thread_select 1
functions
# Get the function overview for thread 2
thread_select 2
functions
# Get the function overview for thread 3
thread_select 3
functions
The definition of the metrics and the output limit have been shown and
explained earlier. The new command to focus on is thread_select.
This command takes a list (See The Selection List) to select specific
threads. In this case, the individual thread numbers that were
obtained earlier with the thread_list command are selected.
This restricts the output of the functions command to the thread
number(s) specified. This means that the script above shows which
function(s) each thread executes and how much CPU time they consumed.
Both the exclusive timings and their percentages are given.
Note that technically this command is a filter and persistent. The selection remains active until changed through another thread selection command, or when it is reset with the ‘all’ selection list.
This is the relevant part of the output for the first thread:
Exp Sel Total
=== === =====
  1   1     3

Functions sorted by metric: Exclusive Total CPU Time

Excl. Total    Name
CPU
 sec.      %
0.502 100.00   <Total>
0.224  44.64   init_data
0.105  20.83   erand48_r
0.073  14.48   __drand48_iterate
0.067  13.29   drand48
As usual, the comment lines are echoed. This is followed by a confirmation
of the selection. The first table shows that one experiment is loaded and
that thread 1 out of the three threads has been selected. What is
displayed next is the function overview for this particular thread. Due to
the limit 5 command, there are only five functions in this list.
Clearly, this thread handles the data initialization part and as we know
from the call tree output, function init_data executes the 3 other
functions shown in this profile.
Below are the overviews for threads 2 and 3 respectively. It is seen that all
of the CPU time is spent in function mxv_core and that this time
is approximately the same for both threads.
# Get the function overview for thread 2
Exp Sel Total
=== === =====
  1   2     3

Functions sorted by metric: Exclusive Total CPU Time

Excl. Total    Name
CPU
 sec.      %
4.414 100.00   <Total>
4.414 100.00   mxv_core
0.     0.      <static>@0x48630 (<libgp-collector.so>)
0.     0.      driver_mxv
0.     0.      start_thread

# Get the function overview for thread 3
Exp Sel Total
=== === =====
  1   3     3

Functions sorted by metric: Exclusive Total CPU Time

Excl. Total    Name
CPU
 sec.      %
4.547 100.00   <Total>
4.547 100.00   mxv_core
0.     0.      <static>@0x48630 (<libgp-collector.so>)
0.     0.      driver_mxv
0.     0.      start_thread
When analyzing the performance of a multithreaded application, it is sometimes useful to know whether threads have mostly executed on the same core, say, or if they have wandered across multiple cores. This sort of stickiness is usually referred to as thread affinity.
Similar to the commands for the threads, there are several commands related
to the usage of the cores, or CPUs as they are called in gprofng
(See The Concept of a CPU in gprofng).
Similar to the thread_list command, the cpu_list command
displays how many CPUs have been used.
The equivalent of the threads command is the cpus
command, which shows the numbers of the CPUs that were used and the metric
values for each one of them. Both commands are demonstrated below.
$ gprofng display text -cpu_list -cpus mxv.2.thr.er
This command produces the following output:
+ gprofng display text -cpu_list -cpus mxv.2.thr.er
Exp Sel Total
=== === =====
  1 all     4

Objects sorted by metric: Exclusive Total CPU Time

Excl. Total    Name
CPU
 sec.      %
9.464 100.00   <Total>
4.414  46.64   CPU 2
2.696  28.49   CPU 0
1.851  19.56   CPU 1
0.502   5.31   CPU 3
The first table shows that there is only one experiment and that all of the four CPUs have been selected. The second table shows the exclusive metrics for each of the CPUs that have been used.
As also echoed in the output, the data is sorted with respect to the
exclusive CPU time, but it is very easy to sort the data by the CPU id
by using the sort command:
$ gprofng display text -cpu_list -sort name -cpus mxv.2.thr.er
With the sort added, the output is as follows:
Exp Sel Total
=== === =====
  1 all     4

Current Sort Metric: Name ( name )

Objects sorted by metric: Name

Excl. Total    Name
CPU
 sec.      %
9.464 100.00   <Total>
2.696  28.49   CPU 0
1.851  19.56   CPU 1
4.414  46.64   CPU 2
0.502   5.31   CPU 3
While the table with thread times shown earlier may point at a load imbalance in the application, this overview has a different purpose.
For example, we see that 4 CPUs have been used, while we know that the application uses 3 threads only. This means that at least one thread has executed on more than one CPU. We will now demonstrate how filters can be used to find out which thread(s) this applies to.
Recall the thread level timings:
Excl. Total    Name
CPU
 sec.      %
9.464 100.00   <Total>
4.547  48.05   Process 1, Thread 3
4.414  46.64   Process 1, Thread 2
0.502   5.31   Process 1, Thread 1
Compared to the CPU timings above, it seems very likely that thread 3 has used more than one CPU, because the thread and CPU timings are the same for both other threads.
The command below selects thread number 3 and then requests the CPU utilization for this thread:
$ gprofng display text -thread_select 3 -sort name -cpus mxv.2.thr.er
The output shown below confirms that thread 3 is selected and then displays the CPU(s) that have been used by this thread:
Exp Sel Total
=== === =====
  1   3     3

Objects sorted by metric: Exclusive Total CPU Time

Excl. Total    Name
CPU
 sec.      %
4.547 100.00   <Total>
2.696  59.29   CPU 0
1.851  40.71   CPU 1
The results show that this thread has used CPU 0 nearly 60% of the time and CPU 1 for the remaining 40%.
To confirm that this is the only thread that has used more than one CPU, the same approach can be used for threads 1 and 2:
$ gprofng display text -thread_select 1 -cpus mxv.2.thr.er
Exp Sel Total
=== === =====
  1   1     3

Objects sorted by metric: Exclusive Total CPU Time

Excl. Total    Name
CPU
 sec.      %
0.502 100.00   <Total>
0.502 100.00   CPU 3

$ gprofng display text -thread_select 2 -cpus mxv.2.thr.er
Exp Sel Total
=== === =====
  1   2     3

Objects sorted by metric: Exclusive Total CPU Time

Excl. Total    Name
CPU
 sec.      %
4.414 100.00   <Total>
4.414 100.00   CPU 2
The output above shows that indeed threads 1 and 2 each have used a single CPU only.
3.3 View Multiple Experiments
One thing we did not cover so far is that gprofng fully supports the
analysis of multiple experiments. The gprofng display text tool accepts a list of
experiments. The data can either be aggregated across the experiments, or
used in a comparison.
The default is to aggregate the metric values across the experiments that have
been loaded. The compare command can be used to enable the
comparison of results.
In this section both modes are illustrated with an example.
3.3.1 Aggregation of Experiments
If the data for multiple experiments is aggregated, the gprofng display text tool
shows the combined results.
For example, below is the script to show the function view for the data
aggregated over two experiments, drop the first experiment and then show
the function view for the second experiment only.
We will call it my-script-agg.
# Define the metrics
metrics e.%totalcpu
# Limit the output to 5 lines
limit 5
# Get the list with experiments
experiment_list
# Get the function overview for all
functions
# Drop the first experiment
drop_exp mxv.2.thr.er
# Get the function overview for exp #2
functions
With the exception of the experiment_list command, all commands
used have been discussed earlier.
The experiment_list command provides a list of the experiments
that have been loaded. This may be used to get the experiment IDs and
to verify the correct experiments are loaded for the aggregation.
Below is an example that loads two experiments and uses the above script to display different function views.
$ gprofng display text -script my-script-agg mxv.2.thr.er mxv.4.thr.er
This produces the following output:
# Define the metrics
Current metrics: e.%totalcpu:name
Current Sort Metric: Exclusive Total CPU Time ( e.%totalcpu )
# Limit the output to 5 lines
Print limit set to 5
# Get the list with experiments
ID Sel   PID     Experiment
== === =======   ============
 1 yes 1339450   mxv.2.thr.er
 2 yes 3579561   mxv.4.thr.er
# Get the function overview for all
Functions sorted by metric: Exclusive Total CPU Time

Excl. Total     Name
CPU
  sec.      %
20.567 100.00   <Total>
19.553  95.07   mxv_core
 0.474   2.30   init_data
 0.198   0.96   erand48_r
 0.149   0.72   drand48
# Drop the first experiment
Experiment mxv.2.thr.er has been dropped
# Get the function overview for exp #2
Functions sorted by metric: Exclusive Total CPU Time

Excl. Total     Name
CPU
  sec.      %
11.104 100.00   <Total>
10.592  95.39   mxv_core
 0.249   2.24   init_data
 0.094   0.84   erand48_r
 0.082   0.74   drand48
The first five lines should look familiar. The five lines following echo the comment line in the script and show the overview of the experiments. This confirms two experiments have been loaded and that both are active. This is followed by the function overview. The timings have been summed up and the percentages are adjusted accordingly.
3.3.2 Comparison of Experiments
The support for multiple experiments really shines in comparison mode. In comparison mode, the data for the various experiments is shown side by side, as illustrated below where we compare the results for the multithreaded experiments using two and four threads respectively.
This
feature is controlled through the
compare command.
The comparison mode is enabled through compare on and with
compare off it is disabled again.
In addition to ‘on’, or ‘off’, this command also supports
the ‘delta’ and ‘ratio’ keywords.
This is the script that will be used in our example. It sets the comparison mode to ‘on’:
# Define the metrics
metrics e.%totalcpu
# Limit the output to 5 lines
limit 5
# Set the comparison mode to differences
compare on
# Get the function overview
functions
Assuming this script file is called my-script-comp, this is how it is used to display the differences:
$ gprofng display text -script my-script-comp mxv.2.thr.er mxv.4.thr.er
This produces the output shown below. In comparison mode, the metric values for the experiments are shown side by side, with the first experiment acting as the reference:
mxv.2.thr.er   mxv.4.thr.er
Excl. Total    Excl. Total      Name
CPU            CPU
 sec.      %     sec.      %
9.464 100.00   11.104 100.00    <Total>
8.961  94.69   10.592  95.39    mxv_core
0.224   2.37    0.249   2.24    init_data
0.105   1.11    0.094   0.84    erand48_r
0.073   0.77    0.060   0.54    __drand48_iterate
This table is already helpful to more easily compare (two) profiles, but there is more that we can do here.
By default, in comparison mode, all measured values are shown. Often profiling is about comparing performance data. It is therefore sometimes more useful to look at differences or ratios, using one experiment as a reference.
The values shown are then relative to this reference. For example, if a ratio is below one, it means the value for the reference experiment was higher.
In the example below, we use the same two experiments used in the comparison above. The script is also nearly identical. The only change is that we now use the ‘delta’ keyword.
As before, the number of lines is restricted to 5 and we focus on the exclusive timings plus percentages. For the comparison part we are interested in the differences.
This is the script that produces such an overview:
# Define the metrics
metrics e.%totalcpu
# Limit the output to 5 lines
limit 5
# Set the comparison mode to differences
compare delta
# Get the function overview
functions
Assuming this script file is called my-script-comp2, this is how we get the table displayed on our screen:
$ gprofng display text -script my-script-comp2 mxv.2.thr.er mxv.4.thr.er
Leaving out some of the lines that were printed, but which we have seen before, we get the following table:
mxv.2.thr.er   mxv.4.thr.er
Excl. Total    Excl. Total      Name
CPU            CPU
 sec.      %    delta      %
9.464 100.00   +1.640 100.00    <Total>
8.961  94.69   +1.631  95.39    mxv_core
0.224   2.37   +0.025   2.24    init_data
0.105   1.11   -0.011   0.84    erand48_r
0.073   0.77   -0.013   0.54    __drand48_iterate
It is now easier to see that the CPU times for the most time consuming functions in this code are practically the same.
It is also possible to show ratios through the compare ratio
command. The first column is used as a reference and the values for
the other columns with metrics are derived by dividing the value by
the reference. The result for such a comparison is shown below:
mxv.2.thr.er   mxv.4.thr.er
Excl. Total    Excl. Total CPU    Name
CPU
 sec.      %      ratio       %
9.464 100.00   x 1.173  100.00    <Total>
8.961  94.69   x 1.182   95.39    mxv_core
0.224   2.37   x 1.111    2.24    init_data
0.105   1.11   x 0.895    0.84    erand48_r
0.073   0.77   x 0.822    0.54    __drand48_iterate
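For reference, a table like this could also be requested directly from the command line, reusing the metric and limit settings from before; this invocation is shown for illustration only:

$ gprofng display text -metrics e.%totalcpu -limit 5 -compare ratio -functions mxv.2.thr.er mxv.4.thr.er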
Note that the comparison feature is supported at the function, source, and disassembly level. There is no practical limit on the number of experiments that can be used in a comparison.
3.4 Profile Hardware Event Counters
Many processors provide a set of hardware event counters and gprofng
provides support for this feature.
See Hardware Event Counters Explained for those readers that are not
familiar with such counters and would like to learn more.
In this section we explain how to get the details on the event counter support for the processor used in the experiment(s), and show several examples.
3.4.1 Getting Information on the Counters Supported
The first step is to check if the processor used for the experiments is
supported by gprofng.
The -h option on gprofng collect app will show the event counter
information:
In case the counters are supported, a list with the events is printed. Otherwise, a warning message will be issued.
For example, below we show this command and the output on an Intel Xeon Platinum 8167M (aka “Skylake”) processor. The output has been split into several sections and each section is commented upon separately.
Run "gprofng collect app --help" for a usage message.
Specifying HW counters on `Intel Arch PerfMon v2 on Family 6 Model 85'
(cpuver=2499):
-h {auto|lo|on|hi}
turn on default set of HW counters at the specified rate
-h <ctr_def> [-h <ctr_def>]...
-h <ctr_def>[,<ctr_def>]...
specify HW counter profiling for up to 4 HW counters
The first line shows how to get a usage overview. This is followed by
some information on the target processor.
The next five lines explain in what ways the -h option can be
used to define the events to be monitored.
The first version shown above enables a default set of counters. This
default depends on the processor this command is executed on. The
keyword following the -h option defines the sampling rate:
auto
Match the sample rate used by clock profiling. If the latter is disabled, use a per thread sampling rate of approximately 100 samples per second. This setting is the default and preferred.

on
Use a per thread sampling rate of approximately 100 samples per second.

lo
Use a per thread sampling rate of approximately 10 samples per second.

hi
Use a per thread sampling rate of approximately 1000 samples per second.
The second and third variants define the events to be monitored. Note that the number of simultaneously supported events is printed. In this case we can monitor up to four events in a single profiling job.
It is a matter of preference whether you like to use the -h
option for each event, or use it once, followed by a comma separated
list.
There is one slight catch though. The counter definition below has
a mandatory comma (,) between the event and the rate. While a
default can be used for the rate, the comma cannot be omitted.
This may result in a somewhat awkward counter definition in case
the default sampling rate is used.
For example, the following two commands are equivalent. Note the double comma in the second command. This is not a typo.
$ gprofng collect app -h cycles -h insts ...
$ gprofng collect app -h cycles,,insts ...
In the first command this comma is not needed, because a
comma (“,”) immediately followed by white space may
be omitted.
This is why we prefer this syntax, and in the remainder we will use the first version of this command.
The counter definition takes an event name, plus optionally one or more attributes, followed by a comma, and optionally the sampling rate. The output section below shows the formal definition.
<ctr_def> == <ctr>[[~<attr>=<val>]...],[<rate>]
The printed help then explains this syntax. Below we have summarized and expanded this output:
<ctr>
The counter name must be selected from the available counters listed as part of the output printed with the -h option. On most systems, if a counter is not listed, it may still be specified by its numeric value.

~<attr>=<val>
This is an optional attribute that depends on the processor. The list of supported attributes is printed in the output. Examples of attributes are “user”, or “system”. The value can be given in decimal or hexadecimal format. Multiple attributes may be specified, and each must be preceded by a ~.

<rate>
The sampling rate is one of the following:

auto
This is the default and matches the rate used by clock profiling. If clock profiling is disabled, use ‘on’.

on
Set the per thread maximum sampling rate to ~100 samples/second.

lo
Set the per thread maximum sampling rate to ~10 samples/second.

hi
Set the per thread maximum sampling rate to ~1000 samples/second.

<interval>
Define the sampling interval. See Control the Sampling Frequency for how to define this.
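For example, the sketch below combines these elements; whether the system attribute and the insts alias are actually available depends on the processor and is shown in the -h output:

$ gprofng collect app -h cycles~system=1 -h insts,hi ./mxv-pthreads -m 8000 -n 4000 -t 1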
After the section with the formal definition of events and counters, a processor specific list is displayed. This part starts with an overview of the default set of counters and the aliased names supported on this specific processor.
Default set of HW counters:
-h cycles,,insts,,llm
Aliases for most useful HW counters:
alias raw name type units regs description
cycles unhalted-core-cycles CPU-cycles 0123 CPU Cycles
insts instruction-retired events 0123 Instructions Executed
llm llc-misses events 0123 Last-Level Cache Misses
br_msp branch-misses-retired events 0123 Branch Mispredict
br_ins branch-instruction-retired events 0123 Branch Instructions
The definitions given above may or may not be available on other processors.
The table above shows the default set of counters defined for this processor, and the aliases. For each alias the full “raw” name is given, plus the unit of the number returned by the counter (CPU cycles, or a raw count), the hardware counter the event is allowed to be mapped onto, and a short description.
The last part of the output contains all the events that can be monitored:
Raw HW counters:
name type units regs description
unhalted-core-cycles CPU-cycles 0123
unhalted-reference-cycles events 0123
instruction-retired events 0123
llc-reference events 0123
llc-misses events 0123
branch-instruction-retired events 0123
branch-misses-retired events 0123
ld_blocks.store_forward events 0123
ld_blocks.no_sr events 0123
ld_blocks_partial.address_alias events 0123
dtlb_load_misses.miss_causes_a_walk events 0123
dtlb_load_misses.walk_completed_4k events 0123
<many lines deleted>
l2_lines_out.silent events 0123
l2_lines_out.non_silent events 0123
l2_lines_out.useless_hwpf events 0123
sq_misc.split_lock events 0123
As can be seen, these names are not always easy to correlate to a specific event of interest. The processor manual should provide more clarity on this.
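Although the raw names are less convenient, they can be used with the -h option in exactly the same way as the aliases. As an illustration, the command below profiles the llc-misses and branch-misses-retired events from the raw list directly; these names are taken from the listing above and may differ on other processors:

$ gprofng collect app -h llc-misses -h branch-misses-retired ./mxv-pthreads -m 8000 -n 4000 -t 1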
3.4.2 Examples Using Hardware Event Counters ¶
The previous section may give the impression that these counters are hard to use, but as we will show now, in practice it is quite simple.
With the information from the -h option, we can easily set up our first
event counter experiment.
We start by using the default set of counters defined for our processor and we use 2 threads:
$ exe=mxv-pthreads
$ m=8000
$ n=4000
$ exp=mxv.hwc.def.2.thr.er
$ gprofng collect app -O $exp -h auto ./$exe -m $m -n $n -t 2
The new option here is -h auto. The auto keyword enables
hardware event counter profiling and selects the default set of counters
defined for this processor.
As before, we can display the information, but there is one practical hurdle to overcome. Unless we want to view all of the recorded metrics, we need to know the names of the events that have been enabled. This is tedious, and it is also not portable if we would like to repeat this experiment on another processor.
This is where the special hwc metric comes in very handy. It
automatically expands to the active set of hardware event counters used
in the experiment(s).
With this, it is very easy to display the event counter values. Note that although regular clock-based profiling was enabled, we only want to see the counter values. We also request the percentages to be shown and limit the output to the first 5 lines:
$ exp=mxv.hwc.def.2.thr.er
$ gprofng display text -metrics e.%hwc -limit 5 -functions $exp
Current metrics: e.%cycles:e+%insts:e+%llm:name
Current Sort Metric: Exclusive CPU Cycles ( e.%cycles )
Print limit set to 5
Functions sorted by metric: Exclusive CPU Cycles
Excl. CPU        Excl. Instructions   Excl. Last-Level     Name
Cycles           Executed             Cache Misses
 sec.      %            %                    %
2.691  100.00    7906475309  100.00   122658983  100.00   <Total>
2.598   96.54    7432724378   94.01   121745696   99.26   mxv_core
0.035    1.31     188860269    2.39       70084    0.06   erand48_r
0.026    0.95      73623396    0.93      763116    0.62   init_data
0.018    0.66      76824434    0.97       40040    0.03   drand48
As we have seen before, the first few lines echo the settings. This includes a list with the hardware event counters used by default.
The table that follows makes it very easy to get an overview of where the time is spent and how many of the target events have occurred.
As before, we can drill down deeper and see the same metrics at the source
line and instruction level. Other than using hwc in the metrics
definitions, nothing has changed compared to the previous examples:
$ exp=mxv.hwc.def.2.thr.er
$ gprofng display text -metrics e.hwc -source mxv_core $exp
This is the relevant part of the output. Since the lines get very long, we have somewhat modified the lay-out:
Excl. CPU Excl. Excl.
Cycles Instructions Last-Level
sec. Executed Cache Misses
<Function: mxv_core>
0. 0 0 32. void __attribute__ ((noinline))
mxv_core(...)
0. 0 0 33. {
0. 0 0 34. for (uint64_t i=...) {
0. 0 0 35. double row_sum = 0.0;
## 1.872 7291879319 88150571 36. for (int64_t j=0; j<n; j++)
0.725 140845059 33595125 37. row_sum += A[i][j]*b[j];
0. 0 0 38. c[i] = row_sum;
39. }
0. 0 0 40. }
In a similar way we can display the event counter values at the instruction level. Again we have modified the lay-out due to page width limitations:
$ exp=mxv.hwc.def.2.thr.er
$ gprofng display text -metrics e.hwc -disasm mxv_core $exp
Excl. CPU Excl. Excl.
Cycles Instructions Last-Level
sec. Executed Cache Misses
<Function: mxv_core>
0. 0 0 [33] 4021ba: mov 0x8(%rsp),%r10
34. for (uint64_t i=...) {
0. 0 0 [34] 4021bf: cmp %rsi,%rdi
0. 0 0 [34] 4021c2: jbe 0x37
0. 0 0 [34] 4021c4: ret
35. double row_sum = 0.0;
36. for (int64_t j=0; j<n; j++)
37. row_sum += A[i][j]*b[j];
0. 0 0 [37] 4021c5: mov (%r8,%rdi,8),%rdx
0. 0 0 [36] 4021c9: mov $0x0,%eax
0. 0 0 [35] 4021ce: pxor %xmm1,%xmm1
0.002 12804230 321394 [37] 4021d2: movsd (%rdx,%rax,8),%xmm0
0.141 60819025 3866677 [37] 4021d7: mulsd (%r9,%rax,8),%xmm0
0.582 67221804 29407054 [37] 4021dd: addsd %xmm0,%xmm1
## 1.871 7279075109 87989870 [36] 4021e1: add $0x1,%rax
0.002 12804210 80351 [36] 4021e5: cmp %rax,%rcx
0. 0 0 [36] 4021e8: jne 0xffffffffffffffea
38. c[i] = row_sum;
0. 0 0 [38] 4021ea: movsd %xmm1,(%r10,%rdi,8)
0. 0 0 [34] 4021f0: add $0x1,%rdi
0. 0 0 [34] 4021f4: cmp %rdi,%rsi
0. 0 0 [34] 4021f7: jb 0xd
0. 0 0 [35] 4021f9: pxor %xmm1,%xmm1
0. 0 0 [36] 4021fd: test %rcx,%rcx
0. 0 80350 [36] 402200: jne 0xffffffffffffffc5
0. 0 0 [36] 402202: jmp 0xffffffffffffffe8
39. }
40. }
0. 0 0 [40] 402204: ret
So far we have used the default settings for the event counters. It is quite straightforward to select specific counters instead. For the sake of the example, let’s assume we would like to count how many branch instructions have been executed, and how many retired memory load instructions missed in the L1 cache. We also want to count these events with a high sampling rate.
This is the command to do so:
$ exe=./mxv-pthreads
$ m=8000
$ n=4000
$ exp=mxv.hwc.sel.2.thr.er
$ hwc1=br_ins,hi
$ hwc2=mem_load_retired.l1_miss,hi
$ gprofng collect app -O $exp -h $hwc1 -h $hwc2 $exe -m $m -n $n -t 2
As before, we get a table with the event counts. Due to the very long name for the second counter, we have somewhat modified the output.
$ gprofng display text -limit 10 -functions mxv.hwc.sel.2.thr.er
Functions sorted by metric: Exclusive Total CPU Time
Excl.      Incl.      Excl. Branch   Excl.                      Name
Total      Total      Instructions   mem_load_retired.l1_miss
CPU sec.   CPU sec.                  Events
2.597      2.597      1305305319         4021340               <Total>
2.481      2.481      1233233242         3982327               mxv_core
0.040      0.107        19019012            9003               init_data
0.028      0.052        23023048           15006               erand48_r
0.024      0.024        19019008            9004               __drand48_iterate
0.015      0.067        11011009            2998               drand48
0.008      0.010               0            3002               _int_malloc
0.001      0.001               0               0               brk
0.001      0.002               0               0               sysmalloc
0.         0.001               0               0               __default_morecore
When using event counters, the values can be very large, which makes it hard to compare the numbers. As we will show next, the ratio feature is very useful when comparing such profiles.
To demonstrate this, we have set up another event counter experiment in which we compare the number of last-level cache misses and the number of branch instructions executed when using a single thread versus two threads.
These are the commands used to generate the experiment directories:
$ exe=./mxv-pthreads
$ m=8000
$ n=4000
$ exp1=mxv.hwc.comp.1.thr.er
$ exp2=mxv.hwc.comp.2.thr.er
$ gprofng collect app -O $exp1 -h llm -h br_ins $exe -m $m -n $n -t 1
$ gprofng collect app -O $exp2 -h llm -h br_ins $exe -m $m -n $n -t 2
The following script has been used to get the tables. Due to lay-out restrictions, we have to create two tables, one for each counter.
# Limit the output to 5 lines
limit 5
# Define the metrics
metrics name:e.llm
# Set the comparison to ratio
compare ratio
functions
# Define the metrics
metrics name:e.br_ins
# Set the comparison to ratio
compare ratio
functions
Note that we print the name of the function first, followed by the counter
data.
The new element is that we set the comparison mode to ratio. This
divides the data in a column by its counterpart in the reference experiment.
This is the command using this script and the two experiment directories as input:
$ gprofng display text -script my-script-comp-counters \
    mxv.hwc.comp.1.thr.er \
    mxv.hwc.comp.2.thr.er
By design, we get two tables, one for each counter:
Functions sorted by metric: Exclusive Last-Level Cache Misses
mxv.hwc.comp.1.thr.er mxv.hwc.comp.2.thr.er
Name Excl. Last-Level Excl. Last-Level
Cache Misses Cache Misses
ratio
<Total> 122709276 x 0.788
mxv_core 121796001 x 0.787
init_data 723064 x 1.055
erand48_r 100111 x 0.500
drand48 60065 x 1.167
Functions sorted by metric: Exclusive Branch Instructions
mxv.hwc.comp.1.thr.er mxv.hwc.comp.2.thr.er
Name Excl. Branch Excl. Branch
Instructions Instructions
ratio
<Total> 1307307316 x 0.997
mxv_core 1235235239 x 0.997
erand48_r 23023033 x 0.957
drand48 20020009 x 0.600
__drand48_iterate 17017028 x 0.882
A ratio less than one in the second column means that this counter value was smaller than the value from the reference experiment shown in the first column.
This kind of presentation of the results makes it much easier to quickly interpret the data.
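For quick ad-hoc comparisons, a script is not strictly needed. Since the interpreter commands may also be given on the command line with a leading hyphen, the first of the two tables above should be reproducible with a single invocation along the following lines (a sketch; the second counter would be handled analogously):

$ gprofng display text -limit 5 -metrics name:e.llm -compare ratio -functions \
    mxv.hwc.comp.1.thr.er mxv.hwc.comp.2.thr.er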
We conclude this section with thread-level event counter overviews, but before we go into this, there is an important metric we need to mention.
If it is known how many instructions and CPU cycles have been executed, the value for the IPC (“Instructions Per Clock cycle”) can be computed. See Hardware Event Counters Explained. This is a derived metric that gives an indication of how well the processor is utilized. The inverse of the IPC is called the CPI.
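For reference, both derived metrics follow directly from the raw counts. A minimal worked formulation (the symbols below are ours, not gprofng output):

\[
  \mathrm{IPC} = \frac{N_{\text{insts}}}{N_{\text{cycles}}},
  \qquad
  \mathrm{CPI} = \frac{N_{\text{cycles}}}{N_{\text{insts}}} = \frac{1}{\mathrm{IPC}}
\]

As a purely illustrative example, code that retires 2,000,000,000 instructions in 1,000,000,000 cycles has an IPC of 2.0 and a CPI of 0.5.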
The gprofng display text command automatically computes the IPC and CPI values
if an experiment contains the event counter values for the instructions
and CPU cycles executed. These are part of the metric list and can be
displayed, just like any other metric.
This can be verified through the metric_list command. If we go
back to our earlier experiment with the default event counters, we get
the following result.
$ gprofng display text -metric_list mxv.hwc.def.2.thr.er
Current metrics: e.totalcpu:i.totalcpu:e.cycles:e+insts:e+llm:name
Current Sort Metric: Exclusive Total CPU Time ( e.totalcpu )
Available metrics:
Exclusive Total CPU Time: e.%totalcpu
Inclusive Total CPU Time: i.%totalcpu
Exclusive CPU Cycles: e.+%cycles
Inclusive CPU Cycles: i.+%cycles
Exclusive Instructions Executed: e+%insts
Inclusive Instructions Executed: i+%insts
Exclusive Last-Level Cache Misses: e+%llm
Inclusive Last-Level Cache Misses: i+%llm
Exclusive Instructions Per Cycle: e+IPC
Inclusive Instructions Per Cycle: i+IPC
Exclusive Cycles Per Instruction: e+CPI
Inclusive Cycles Per Instruction: i+CPI
Size: size
PC Address: address
Name: name
Among the other metrics, we see the new metrics for the IPC and CPI listed.
In the script below, we use this information and add the IPC and CPI to the metrics to be displayed. We also use the thread filter to display these values for the individual threads.
This is the complete script we have used. Other than a different selection of the metrics, there are no new features.
# Define the metrics
metrics e.insts:e.%cycles:e.IPC:e.CPI
# Sort with respect to cycles
sort e.cycles
# Limit the output to 5 lines
limit 5
# Get the function overview for all threads
functions
# Get the function overview for thread 1
thread_select 1
functions
# Get the function overview for thread 2
thread_select 2
functions
# Get the function overview for thread 3
thread_select 3
functions
In the metrics definition on the second line, we explicitly request the
counter values for the instructions (e.insts) and CPU cycles
(e.cycles) executed. These names can be found in output from the
metric_list command above.
In addition to these metrics, we also request the IPC and CPI to be shown.
As before, we used the limit command to control the number of
functions displayed. We then request an overview for all the threads,
followed by three sets of two commands to select a thread and display the
function overview.
The script above is used as follows:
$ gprofng display text -script my-script-ipc mxv.hwc.def.2.thr.er
This script produces four tables. We list them separately below, and have left out the additional output.
The first table shows the accumulated values across the three threads that have been active.
Functions sorted by metric: Exclusive CPU Cycles
Excl.          Excl. CPU        Excl.  Excl.   Name
Instructions   Cycles           IPC    CPI
Executed        sec.      %
7906475309     2.691  100.00    1.473  0.679   <Total>
7432724378     2.598   96.54    1.434  0.697   mxv_core
 188860269     0.035    1.31    2.682  0.373   erand48_r
  73623396     0.026    0.95    1.438  0.696   init_data
  76824434     0.018    0.66    2.182  0.458   drand48
This shows that the IPC of this program is completely dominated
by function mxv_core. It has a fairly low IPC value
of 1.43.
The next table is for thread 1 and shows the values for the main thread.
Exp Sel Total
=== === =====
  1   1     3
Functions sorted by metric: Exclusive CPU Cycles
Excl.          Excl. CPU        Excl.  Excl.   Name
Instructions   Cycles           IPC    CPI
Executed        sec.      %
 473750931     0.093  100.00    2.552  0.392   <Total>
 188860269     0.035   37.93    2.682  0.373   erand48_r
  73623396     0.026   27.59    1.438  0.696   init_data
  76824434     0.018   18.97    2.182  0.458   drand48
 134442832     0.013   13.79    5.250  0.190   __drand48_iterate
Although this thread hardly uses any CPU cycles, the overall IPC of 2.55 is not all that bad.
Last, we show the tables for threads 2 and 3:
Exp Sel Total
=== === =====
1 2 3
Functions sorted by metric: Exclusive CPU Cycles
Excl. Excl. CPU Excl. Excl. Name
Instructions Cycles IPC CPI
Executed sec. %
3716362189 1.298 100.00 1.435 0.697 <Total>
3716362189 1.298 100.00 1.435 0.697 mxv_core
0 0. 0. 0. 0. collector_root
0 0. 0. 0. 0. driver_mxv
Exp Sel Total
=== === =====
1 3 3
Functions sorted by metric: Exclusive CPU Cycles
Excl. Excl. CPU Excl. Excl. Name
Instructions Cycles IPC CPI
Executed sec. %
3716362189 1.300 100.00 1.433 0.698 <Total>
3716362189 1.300 100.00 1.433 0.698 mxv_core
0 0. 0. 0. 0. collector_root
0 0. 0. 0. 0. driver_mxv
It is seen that both threads execute the same number of instructions and take about the same number of CPU cycles. As a result, the IPC is the same for both threads.