Hypervolume Indicator


In [1]:
import numpy as np                   # for multi-dimensional containers
import pandas as pd                  # for DataFrames
import platypus as plat              # multi-objective optimisation framework
import pygmo as pg                   # multi-objective optimisation framework


In this section we're going to take a look at how to calculate the hypervolume indicator value for a population of solutions. The hypervolume indicator (or $s$-metric) is a performance metric for indicating the quality of a non-dominated approximation set, introduced by [1] where it is described as the "size of the space covered or size of dominated space". It can be defined as:

$$ HV\big(f ^{ref}, \mathrm{X}\big) = \Lambda \left( \bigcup_{\mathrm{X}_n \in \mathrm{X} } \Big[ f_{1}(\mathrm{X}_n), f_{1}^{ref} \Big] \times \dots \times \Big[ f_{m}(\mathrm{X}_n), f_{m}^{ref} \Big] \right) $$

where $HV\left(f ^{ref}, \mathrm{X}\right)$ resolves the size of the space covered by an approximation set $\mathrm{X}$, $f^{ref} \in \mathbb{R}^m$ refers to a chosen reference point, and $\Lambda \left(\cdot\right)$ refers to the Lebesgue measure. This can be illustrated in two-dimensional objective space (to allow for easy visualisation) with a population of three solutions.
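To make the definition concrete, here is a minimal sketch that computes the two-dimensional hypervolume for three hypothetical non-dominated points (these points and the reference point are made up for illustration, not taken from the examples below). In two dimensions the union of dominated boxes can be measured exactly with a simple sweep over the points sorted by the first objective:

```python
import numpy as np

# Three hypothetical non-dominated points in 2-D objective space (minimisation)
points = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0]])
ref = np.array([5.0, 5.0])  # chosen reference point

# Sort by f1, then sweep: each point contributes a rectangular slice
# bounded by the reference point and the previous point's f2 value
order = points[points[:, 0].argsort()]
hv, prev_f2 = 0.0, ref[1]
for f1, f2 in order:
    hv += (ref[0] - f1) * (prev_f2 - f2)
    prev_f2 = f2

print(hv)  # 4*1 + 3*2 + 1*1 = 11.0
```

This sweep only works in two dimensions; for higher dimensions, dedicated algorithms (such as those in Platypus and PyGMO, used below) are needed.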

The hypervolume indicator is appealing because it is compatible with any number of problem objectives and requires no prior knowledge of the true Pareto-optimal front. This is important when working with real-world problems that have not yet been solved. The hypervolume indicator is currently used in the field of multi-objective optimisation as both a proximity and diversity performance metric, as well as in the decision-making process.

Unlike dominance-based criteria, which require only two solutions to perform a comparison (and can operate on an ordinal scale), calculating the HV indicator value requires a reference vector (i.e. the objectives must be measured on an interval scale). When used for pairwise or multiple comparisons of optimisation algorithms, this reference vector must be the same throughout; otherwise, the resulting HV indicator values are not comparable. The reference vector can be approximated by choosing large values for each problem objective, so that all objective values in any approximation set fall within it.
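As a sketch of that last point, a shared reference vector can be derived from the data by taking the component-wise maximum over all approximation sets under comparison and adding a margin. The objective values below are made up purely for illustration:

```python
import numpy as np

# Hypothetical objective values from two approximation sets (minimisation)
set_a = np.array([[0.1, 3.2], [0.5, 2.0], [0.9, 1.1]])
set_b = np.array([[0.2, 2.8], [0.6, 1.9], [1.0, 0.9]])

# One shared reference point, dominated by no point in either set:
# the component-wise maximum over all sets, nudged upward by a 10% margin
combined = np.vstack([set_a, set_b])
ref = combined.max(axis=0) * 1.1
print(ref)
```

Because the same `ref` is used for both sets, their resulting HV indicator values are directly comparable.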

Calculating the Hypervolume Indicator with Platypus

Let's define some necessary variables before invoking the Platypus implementation of the hypervolume indicator algorithm. We will use the ZDT1 test function with the number of design variables $\mathrm{D}=30$ throughout this example, and with the population size $\mathrm{N}=100$.

In [2]:
problem = plat.ZDT1()
D = 30
N = 100

With these variables defined, we will now move onto generating our initial population. We will be using Platypus Solution objects for this, which we will initialise with random problem variables, evaluate, and then append to a list named solutions.

In [3]:
solutions = []

for i in range(N):
    solution = plat.Solution(problem)
    solution.variables = np.random.rand(D)
    solution.evaluate()
    solutions.append(solution)

Let's print out the variables and objectives for the first solution in this list to see what they look like.

In [4]:
print(f"Design variables:\n {solutions[0].variables}")
print(f"Objective values:\n {solutions[0].objectives}")
Design variables:
 [0.03319209 0.48451621 0.55483626 0.08112446 0.18948245 0.90540391
 0.41676484 0.65298363 0.59713341 0.89705621 0.17407802 0.00511218
 0.05250722 0.52101625 0.04062096 0.3296622  0.77899607 0.14547564
 0.18833536 0.71929345 0.65747571 0.77174683 0.01498789 0.77254087
 0.27025744 0.97433607 0.23659607 0.13750044 0.61372126 0.4816075 ]
Objective values:
 [0.033192094201290434, 4.526025533331595]

Now that we have a population of solutions stored in the solutions variable, we can prepare an instance of the platypus.indicators.Hypervolume() object with the desired reference point for the hypervolume indicator calculation. For ZDT1, the reference point is typically $\langle11,11\rangle$.

In [5]:
hyp = plat.indicators.Hypervolume(minimum=[0, 0], maximum=[11, 11])

We can now use this hyp object to calculate the hypervolume indicator for any population.


The Platypus implementation of the hypervolume indicator requires either a minimum and a maximum point, or a reference_set (not the same as the hypervolume reference point). Normally, a hypervolume indicator algorithm would only require a single vector that defines the reference point $f ^{ref}$. In the case of Platypus, $f ^{ref}$ actually corresponds to maximum, but Platypus also forces us to provide a vector for minimum, which we have set to $\langle0, 0\rangle$.

Let's calculate the hypervolume indicator value for the population of solutions we created above and named solutions.

In [6]:
print(f"Hypervolume indicator value: {hyp.calculate(solutions)}")
Hypervolume indicator value: 0.7779645859838822

We now have this single hypervolume indicator value that we can use to gauge and compare the performance of our population. The higher the value is, the "better" the hypervolume quality.

Calculating the Hypervolume Indicator with PyGMO

We can also use a different framework's implementation of the hypervolume indicator on our population. We should be expecting the same value, but this is a good exercise to learn how to use a different framework, and perhaps to check that they do indeed arrive at the same value.

This time we will use the PyGMO framework. PyGMO's hypervolume indicator function can work with a few different data-types, including numpy.array(). We will first move our Platypus solutions into a pandas.DataFrame (which can easily be output as a numpy.array()). Let's begin by creating a new DataFrame with the columns f1 and f2, which will be used to store our objective values for each solution.

In [7]:
solutions_df = pd.DataFrame(index=range(N),columns=['f1','f2']).astype(float)
f1 f2
0 NaN NaN
1 NaN NaN
2 NaN NaN
3 NaN NaN
4 NaN NaN

We can see that we've also defined an index range that covers the number of solutions in our population, $\mathrm{N}=100$. This means we have $100$ rows ready, but their values are initialised to NaN (Not A Number), which in this case simply indicates missing data.

Let's use a loop to iterate through our solutions list of $100$ solutions and assign the desired values to the corresponding row in our solutions_df DataFrame.

In [8]:
for i in range(N):
    solutions_df.loc[i, 'f1'] = solutions[i].objectives[0]
    solutions_df.loc[i, 'f2'] = solutions[i].objectives[1]
f1 f2
0 0.033192 4.526026
1 0.690335 3.981366
2 0.281927 3.885510
3 0.547213 3.484696
4 0.293064 4.689847

We can see our DataFrame now contains the desired values. We can now easily get this data as a numpy.array() to feed into PyGMO's hypervolume indicator object constructor.
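As an aside, the row-by-row assignment above can be replaced with a single DataFrame constructor call, which is both faster and harder to get wrong. This sketch uses made-up objective pairs standing in for `[s.objectives for s in solutions]`:

```python
import pandas as pd

# Hypothetical objective pairs, standing in for the Platypus solutions' objectives
objectives = [[0.033192, 4.526026], [0.690335, 3.981366], [0.281927, 3.885510]]

# Construct the DataFrame in one step instead of assigning row by row
solutions_df = pd.DataFrame(objectives, columns=['f1', 'f2'], dtype=float)
print(solutions_df)
```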

In [9]:
hyp = pg.hypervolume(solutions_df[['f1','f2']].values)

Now we can invoke the compute() method on our hypervolume object and pass in the reference point to calculate the value.

In [10]:
hyp.compute([11, 11])

Upon first inspection, this value seems to be different from the one calculated by Platypus, so we may suspect that we've either done something wrong, or that there is a mistake in at least one of the implementations. However, it is simply that one of them (Platypus) has normalised the output. We can do the same with the output from PyGMO by dividing the hypervolume indicator value by the product of the reference point.

In [11]:
hyp.compute([11, 11]) / np.prod([11, 11])

Now we can see that both frameworks have arrived at the same hypervolume indicator.
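For completeness, the normalisation step can be written out in isolation: the raw measure is divided by the volume of the box spanned by minimum and maximum, mapping it into $[0, 1]$. The raw value below is illustrative only, not taken from the cells above:

```python
import numpy as np

# Hypothetical raw hypervolume and bounding box (illustrative numbers only)
raw_hv = 94.13
minimum = np.array([0.0, 0.0])
maximum = np.array([11.0, 11.0])

# Normalise by the volume of the box spanned by minimum and maximum
normalised_hv = raw_hv / np.prod(maximum - minimum)
print(normalised_hv)
```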


In this section, we have introduced the hypervolume indicator as a criterion that can be used in the selection of a population. We also demonstrated the application of two implementations of the hypervolume indicator, one in Platypus, and one in PyGMO.


Experiment with the hypervolume indicator: try using different reference points, different population sizes, and different problems.

  1. E. Zitzler and S. Künzli, "Indicator-based selection in multiobjective search," in: Parallel Problem Solving from Nature - PPSN VIII, Springer, 2004, pp. 832–842.

Support this work

You can access this notebook and more by getting the e-book on Practical Evolutionary Algorithms.