Using a Framework to Compare Algorithm Performance
Preamble
# used to create block diagrams
%reload_ext xdiag_magic
%xdiag_output_format svg
import numpy as np # for multidimensional containers
import pandas as pd # for DataFrames
import plotly.graph_objects as go # for data visualisation
import plotly.io as pio # to set shahin plot layout
import platypus as plat # multiobjective optimisation framework
pio.templates['shahin'] = pio.to_templated(go.Figure().update_layout(legend=dict(orientation="h",y=1.1, x=.5, xanchor='center'),margin=dict(t=0,r=0,b=40,l=40))).layout.template
pio.templates.default = 'shahin'
Introduction
When preparing to implement multiobjective optimisation experiments, it's often more convenient to use a ready-made framework or library instead of programming everything from scratch. Many such libraries and frameworks exist across many different programming languages, but with our focus on multiobjective optimisation our choice is an easy one: we will use Platypus, which focuses on multiobjective problems and optimisation.
Platypus is a framework for evolutionary computing in Python with a focus on multiobjective evolutionary algorithms (MOEAs). It differs from existing optimization libraries, including PyGMO, Inspyred, DEAP, and Scipy, by providing optimization algorithms and analysis tools for multiobjective optimization.
In this section, we will use the Platypus framework to compare the performance of the Nondominated Sorting Genetic Algorithm II (NSGAII)^{1} and the Pareto Archived Evolution Strategy (PAES)^{2}. To do this, we will use them to generate solutions to four problems in the DTLZ test suite^{3}.
Because both of these algorithms are stochastic, meaning that they will produce different results every time they are executed, we will select a sufficient sample size of 30 per algorithm per test problem. We will also use the default configurations for all the test problems and algorithms employed in this comparison. We will use the Hypervolume Indicator (introduced in earlier sections) as our performance metric.
Executing an Experiment and Generating Results
In this section, we will use the Platypus implementations of NSGAII and PAES to generate solutions to the DTLZ1, DTLZ2, DTLZ3, and DTLZ4 test problems.
First, we will create a list named problems where each element is a DTLZ test problem that we want to use.
problems = [plat.DTLZ1, plat.DTLZ2, plat.DTLZ3, plat.DTLZ4]
Similarly, we will create a list named algorithms where each element is an algorithm that we want to compare.
algorithms = [plat.NSGAII, plat.PAES]
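Before launching the full experiment, it can be reassuring to confirm that a single algorithm runs on a single problem. The snippet below is a quick sanity check rather than part of the comparison itself; the budget of 1,000 function evaluations is an arbitrary small value chosen only to keep it fast.

# optional sanity check: run one algorithm on one problem with a small budget
algorithm = plat.NSGAII(plat.DTLZ2())
algorithm.run(1000)
print(len(algorithm.result))  # number of solutions in the final population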
Now we can execute an experiment, specifying the number of function evaluations, $nfe=10,000$, and the number of repeated executions per algorithm per problem, $seeds=30$. This may take some time to complete depending on your processor speed and the number of function evaluations.
Warning
Running the code below will take a long time to complete even if you have good hardware. To put things into perspective, you will be executing an optimisation process 30 times, per 4 test problems, per 2 algorithms. That's 240 executions of 10,000 function evaluations each, totalling 2,400,000 function evaluations. You may prefer to change the value of seeds below to something smaller, like 5, for now.
We are also using the ProcessPoolEvaluator in Platypus to speed things up.
with plat.ProcessPoolEvaluator(10) as evaluator:
    results = plat.experiment(algorithms, problems, nfe=10000, seeds=30, evaluator=evaluator)
Once the above execution has completed, we can initialise an instance of the hypervolume indicator provided by Platypus.
hyp = plat.Hypervolume(minimum=[0, 0, 0], maximum=[1, 1, 1])
Now we can use the calculate function provided by Platypus to compute all our hypervolume indicator measurements for the results from our above experiment.
hyp_result = plat.calculate(results, hyp)
Finally, we can display these results using the display function provided by Platypus.
plat.display(hyp_result, ndigits=3)
Statistical Comparison of the Hypervolume Results
Now that we have a data structure populated with results from each execution of the algorithms, we can do a quick statistical comparison to give us some indication as to which algorithm (NSGAII or PAES) performs better on each problem.
We can see in the output of display above that the data structure is organised as follows:
Algorithm name (e.g. NSGAII)
  Problem name (e.g. DTLZ1)
    Performance metric (e.g. Hypervolume)
      The score for each run (e.g. 30 individual scores)
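Put differently, hyp_result behaves like a nested dictionary keyed by algorithm name, then problem name, then performance metric, with one score per run stored in a list at the innermost level. As a rough sketch, we can walk this structure with ordinary loops to confirm how many runs were recorded for each combination:

# walk the nested structure: algorithm -> problem -> metric -> list of run scores
for algorithm_name, problems_dict in hyp_result.items():
    for problem_name, metrics in problems_dict.items():
        scores = metrics['Hypervolume']
        print(algorithm_name, problem_name, len(scores))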
As a quick test, let's try to get the hypervolume indicator score for the first execution of NSGAII on DTLZ1.
hyp_result['NSGAII']['DTLZ1']['Hypervolume'][0]
To further demonstrate how this works, let's also get the hypervolume indicator score for the sixth execution of NSGAII on DTLZ1.
hyp_result['NSGAII']['DTLZ1']['Hypervolume'][5]
Finally, let's get the hypervolume indicator scores for all executions of NSGAII on DTLZ1.
hyp_result['NSGAII']['DTLZ1']['Hypervolume']
Perfect. Now we can use numpy to calculate the mean hypervolume indicator value for all of our executions of NSGAII on DTLZ1.
np.mean(hyp_result['NSGAII']['DTLZ1']['Hypervolume'])
Let's do the same for PAES.
np.mean(hyp_result['PAES']['DTLZ1']['Hypervolume'])
We can see that the mean hypervolume indicator value for PAES on DTLZ1 is higher than that of NSGAII on DTLZ1. A higher hypervolume indicator value indicates better performance, so we can tentatively say that PAES outperforms NSGAII on our configuration of DTLZ1 according to the hypervolume indicator. Of course, we haven't determined if this result is statistically significant.
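If we did want to check significance, one common choice is a nonparametric test such as the Mann-Whitney U test. The sketch below uses scipy.stats for this, assuming SciPy is available in your environment; it is not part of the workflow above and is shown purely as an illustration.

# compare the two samples of hypervolume scores with a two-sided Mann-Whitney U test
from scipy.stats import mannwhitneyu

statistic, p_value = mannwhitneyu(hyp_result['NSGAII']['DTLZ1']['Hypervolume'],
                                  hyp_result['PAES']['DTLZ1']['Hypervolume'],
                                  alternative='two-sided')
print(p_value)  # a small p-value (e.g. below 0.05) suggests the difference is unlikely to be due to chance

For the remainder of this section, though, we will keep to the simple comparison of means.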
Let's create a DataFrame where each column refers to the mean hypervolume indicator values for the test problems DTLZ1, DTLZ2, DTLZ3, and DTLZ4, and each row represents the performance of an algorithm (in this case, PAES and NSGAII).
# one row per algorithm, one column per problem, each cell holding the mean hypervolume
df_hyp_results = pd.DataFrame(index=hyp_result.keys())

for key_algorithm, algorithm in hyp_result.items():
    for key_problem, problem in algorithm.items():
        df_hyp_results.loc[key_algorithm, key_problem] = np.mean(problem['Hypervolume'])
df_hyp_results
Now we have an overview of how our selected algorithms performed on the selected test problems according to the hypervolume indicator. Personally, I find it easier to compare algorithm performance when each column represents a different algorithm rather than a problem.
df_hyp_results.transpose()
Without consideration for statistical significance, which algorithm performs best on each test problem?
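One way to read this off programmatically (again, without any claim of statistical significance) is to ask pandas for the row label with the largest mean value in each column; a small sketch using the DataFrame built above:

# for each problem (column), report the algorithm (row) with the highest mean hypervolume
# (the cast guards against an object dtype from the cell-by-cell construction)
df_hyp_results.astype(float).idxmax()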
Conclusion
In this section we have demonstrated how to compare two popular multiobjective evolutionary algorithms on a selection of four test problems, using the hypervolume indicator to measure their performance. In this case, we simply compared the mean of a sample of executions per problem per algorithm without consideration for statistical significance; however, it is important to take this into account to ensure that any differences haven't occurred by chance.
Exercise
Create your own experiment, but this time include different algorithms and problems and determine which algorithm performs the best on each problem.

Deb, K., Pratap, A., Agarwal, S., & Meyarivan, T. A. M. T. (2002). A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6(2), 182-197.

Knowles, J., & Corne, D. (1999, July). The Pareto archived evolution strategy: A new baseline algorithm for Pareto multiobjective optimisation. In Proceedings of the 1999 Congress on Evolutionary Computation (CEC 99) (Vol. 1, pp. 98-105). IEEE.

Deb, K., Thiele, L., Laumanns, M., & Zitzler, E. (2002, May). Scalable multiobjective optimization test problems. In Proceedings of the 2002 Congress on Evolutionary Computation (CEC'02) (Vol. 1, pp. 825-830). IEEE.