```python
# used to create block diagrams
%reload_ext xdiag_magic
%xdiag_output_format svg

import numpy as np         # for multi-dimensional containers
import pandas as pd        # for DataFrames
import platypus as plat    # multi-objective optimisation framework
from scipy import stats    # statistical significance tests
```
When preparing to implement multi-objective optimisation experiments, it's often more convenient to use a ready-made framework or library instead of programming everything from scratch. Many such libraries exist across many programming languages, but with our focus on multi-objective optimisation, the choice is an easy one: we will use Platypus, which focuses specifically on multi-objective problems and optimisation.
Platypus is a framework for evolutionary computing in Python with a focus on multiobjective evolutionary algorithms (MOEAs). It differs from existing optimization libraries, including PyGMO, Inspyred, DEAP, and Scipy, by providing optimization algorithms and analysis tools for multiobjective optimization.
In this section, we will use the Platypus framework to compare the performance of the Non-dominated Sorting Genetic Algorithm II (NSGA-II)[^1] and the Pareto Archived Evolution Strategy (PAES)[^2]. To do this, we will use them to generate solutions to three problems in the ZDT test suite[^3].
Because both of these algorithms are stochastic, meaning that they will produce different results every time they are executed, we will use a sample size of 30 executions per algorithm per test problem. We will use the default configurations for all the test problems and algorithms employed in this comparison, and the hypervolume indicator (introduced in earlier sections) as our performance metric.
This time, we will also try to test the significance of our results.
Finally, let's test the significance of our pairwise comparison. The significance test you select depends on the nature of your data-set and other criteria; for example, some select non-parametric tests if their data-sets are not normally distributed. We will use the Wilcoxon signed-rank test, available in SciPy as `stats.wilcoxon`.
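As a minimal sketch of how it is called (the sample values below are placeholders for illustration, not our experiment data):

```python
from scipy import stats

# two related paired samples (placeholder values)
x = [0.97, 0.98, 0.99, 0.96, 0.97]
y = [0.95, 0.97, 0.98, 0.94, 0.96]

s, p = stats.wilcoxon(x, y)  # returns the test statistic and the p-value
```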
The Wilcoxon signed-rank test tests the null hypothesis that two related paired samples come from the same distribution. In particular, it tests whether the distribution of the differences $x - y$ is symmetric about zero. It is a non-parametric version of the paired t-test.
This will give us some idea as to whether the results from one algorithm are significantly different from those from another algorithm.
We will be using the Platypus implementations of NSGA-II and PAES to generate solutions for the ZDT1, ZDT2, and ZDT3 test problems.
First, we will create a list named `problems`, where each element is a ZDT test problem that we want to use.
```python
problems = [plat.ZDT1, plat.ZDT2, plat.ZDT3]
```
Similarly, we will create a list named `algorithms`, where each element is an algorithm that we want to compare.
```python
algorithms = [plat.NSGAII, plat.PAES]
```
Now we can execute an experiment, specifying the number of function evaluations, `nfe=5000`, and the number of executions per problem, `seeds=30`. This may take some time to complete depending on your processor speed and the number of function evaluations.
Running the code below will take a long time to complete even if you have good hardware.
```python
results = plat.experiment(algorithms, problems, nfe=5000, seeds=30)
```
Once the above execution has completed, we can initialise an instance of the hypervolume indicator provided by Platypus.
```python
# hypervolume indicator with bounds that enclose the objective space of the ZDT problems
hyp = plat.Hypervolume(minimum=[0, 0], maximum=[11, 11])
```
Now we can use the `calculate` function provided by Platypus to calculate all our hypervolume indicator measurements for the results from our above experiment.
```python
hyp_result = plat.calculate(results, hyp)
```
Finally, we can display these results using the `display` function provided by Platypus.
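The call would presumably look something like the following; `ndigits=3` is an assumption based on the three-decimal rounding in the output below:

```python
plat.display(hyp_result, ndigits=3)
```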
```
NSGAII
    ZDT1
        Hypervolume : [0.987, 0.989, 0.986, 0.992, 0.988, 0.992, 0.991, 0.97, 0.986, 0.975, 0.993, 0.991, 0.98, 0.983, 0.988, 0.992, 0.99, 0.99, 0.964, 0.987, 0.989, 0.993, 0.988, 0.989, 0.99, 0.992, 0.991, 0.99, 0.991, 0.991]
    ZDT2
        Hypervolume : [0.897, 0.9, 0.9, 0.91, 0.903, 0.896, 0.963, 0.904, 0.9, 0.899, 0.901, 0.898, 0.905, 0.902, 0.894, 0.897, 0.903, 0.901, 0.966, 0.918, 0.891, 0.902, 0.896, 0.897, 0.901, 0.968, 0.9, 0.891, 0.964, 0.96]
    ZDT3
        Hypervolume : [0.998, 0.998, 0.998, 0.998, 0.998, 0.998, 0.998, 0.998, 0.998, 0.998, 0.998, 0.998, 0.998, 0.998, 0.998, 0.998, 0.998, 0.998, 0.998, 0.998, 0.998, 0.998, 0.998, 0.997, 0.998, 0.998, 0.998, 0.998, 0.998, 0.998]
PAES
    ZDT1
        Hypervolume : [0.987, 0.968, 0.977, 0.964, 0.991, 0.967, 0.979, 0.996, 0.986, 0.968, 0.984, 0.981, 0.964, 0.992, 0.978, 0.995, 0.995, 0.966, 0.975, 0.973, 0.98, 0.969, 0.985, 0.946, 0.983, 0.983, 0.987, 0.967, 0.974, 0.968]
    ZDT2
        Hypervolume : [0.912, 0.955, 0.97, 0.982, 0.957, 0.958, 0.967, 0.92, 0.991, 0.951, 0.972, 0.973, 0.991, 0.931, 0.974, 0.961, 0.962, 0.979, 0.956, 0.952, 0.969, 0.948, 0.949, 0.94, 0.974, 0.986, 0.956, 0.979, 0.977, 0.939]
    ZDT3
        Hypervolume : [0.977, 0.97, 0.998, 0.998, 0.998, 0.998, 0.977, 0.998, 0.986, 0.998, 0.965, 0.964, 0.966, 0.977, 0.982, 0.98, 0.977, 0.976, 0.998, 0.998, 0.977, 0.998, 0.964, 0.985, 0.998, 0.976, 0.998, 0.972, 0.998, 0.976]
```
Now that we have a data structure that has been populated with results from each execution of the algorithms, we can do a quick statistical comparison to give us some indication as to which algorithm (NSGA-II or PAES) performs better on each problem.
We can see in the output of `display` above that the data structure is organised as follows:
- Algorithm name (e.g. `NSGAII`)
    - Problem name (e.g. `ZDT1`)
        - Performance metric (e.g. `Hypervolume`)
            - The score for each run (e.g. 30 individual scores)
As a quick test, let's try and get the hypervolume indicator score for the first execution of NSGA-II on ZDT1.
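Given the structure described above, this is presumably a matter of indexing by algorithm name, problem name, metric, and run number (run indices start at zero):

```python
hyp_result['NSGAII']['ZDT1']['Hypervolume'][0]
```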
To further demonstrate how this works, let's also get the hypervolume indicator score for the sixth execution of NSGA-II on ZDT1.
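The sixth execution therefore sits at index 5:

```python
hyp_result['NSGAII']['ZDT1']['Hypervolume'][5]
```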
Finally, let's get the hypervolume indicator scores for all executions of NSGA-II on ZDT1.
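Dropping the run index gives us the full list of 30 scores:

```python
hyp_result['NSGAII']['ZDT1']['Hypervolume']
```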
```
[0.9872243312078205, 0.9892022234402659, 0.9863997994578185, 0.9915982807287159, 0.9884485646500336, 0.9916433924371617, 0.9905961946534321, 0.9700801527670488, 0.9859146085825952, 0.9753290035692634, 0.992509590438185, 0.9910253417699272, 0.9800297153958085, 0.9827622660875194, 0.9882648895085886, 0.9920703577487152, 0.9898114989017828, 0.9897103491726977, 0.9639734197208751, 0.9874159126774974, 0.9891865757271636, 0.9929353403083236, 0.9882912077351352, 0.9887773379300173, 0.9904956428451894, 0.99191043929259, 0.9908456708330089, 0.989791822753701, 0.990863372425, 0.9911200056022934]
```
Perfect. Now we can use `numpy` to calculate the mean hypervolume indicator value for all of our executions of NSGA-II on ZDT1.
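A one-liner using `np.mean` on the list of scores:

```python
np.mean(hyp_result['NSGAII']['ZDT1']['Hypervolume'])
```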
Let's do the same for PAES.
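Swapping in the PAES key:

```python
np.mean(hyp_result['PAES']['ZDT1']['Hypervolume'])
```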
We can see that the mean hypervolume indicator value for NSGA-II on ZDT1 is higher than that of PAES on ZDT1. A higher hypervolume indicator value indicates better performance, so we can tentatively say that NSGA-II outperforms PAES on our configuration of ZDT1 according to the hypervolume indicator. Of course, we haven't yet determined whether this result is statistically significant.
Let's create a DataFrame where each column refers to the mean hypervolume indicator values for the test problems ZDT1, ZDT2, and ZDT3, and each row represents the performance of an algorithm (in this case, PAES and NSGA-II).
```python
df_hyp_results = pd.DataFrame(index=hyp_result.keys())

for key_algorithm, algorithm in hyp_result.items():
    for key_problem, problem in algorithm.items():
        # mean hypervolume over all 30 executions
        df_hyp_results.loc[key_algorithm, key_problem] = np.mean(problem['Hypervolume'])

df_hyp_results
```
Now we have an overview of how our selected algorithms performed on the selected test problems according to the hypervolume indicator. It can be easier to compare algorithm performance when each column represents a different algorithm rather than a problem.
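If that layout is preferred, we can simply transpose the DataFrame:

```python
df_hyp_results.transpose()
```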
Without consideration for statistical significance, which algorithm performs best on each test problem?
Now let's use the Wilcoxon signed-rank test that we introduced above to see whether our results are significant, or whether any observed difference occurred purely by chance.
Before using the test, we need to decide on a value for alpha, our significance level. This is essentially the “risk” of concluding a difference exists when it doesn’t, e.g. an alpha of $0.05$ indicates a 5% risk. We can consider alpha to be a kind of threshold. This will be covered in more detail in another article, but for now, we will set $0.05$ as our alpha. This means that if our p-value is less than $0.05$, we reject the null hypothesis, and the samples are likely not from the same distribution. Otherwise, we cannot reject the null hypothesis, and the samples are likely from the same distribution.
We will use NSGA-II as our benchmark algorithm, meaning we will compare every other algorithm that we're considering to NSGA-II to determine if the results were significant. Let's write some code to determine this for us:
```python
algorithms = ['NSGAII', 'PAES']
problems = ['ZDT1', 'ZDT2', 'ZDT3']

# compare every other algorithm against the NSGAII benchmark
df_hyp_wilcoxon = pd.DataFrame(index=algorithms[1:])

for key_algorithm in algorithms[1:]:
    for key_problem in problems:
        s, p = stats.wilcoxon(hyp_result['NSGAII'][key_problem]['Hypervolume'],
                              hyp_result[key_algorithm][key_problem]['Hypervolume'])
        df_hyp_wilcoxon.loc[key_algorithm, key_problem] = p

df_hyp_wilcoxon.transpose()
```
We can see that in every case, $p < 0.05$. Therefore, according to our configuration of $\alpha$, our results are statistically significant. Looking back at the mean hypervolume results:
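Presumably by redisplaying the DataFrame of mean values from earlier:

```python
df_hyp_results
```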
With respect to hypervolume indicator quality, we can now say that:
- NSGA-II outperforms PAES on ZDT1.
- PAES outperforms NSGA-II on ZDT2.
- NSGA-II outperforms PAES on ZDT3.
All of these results are statistically significant at our chosen significance level.
In this section, we have demonstrated how we can compare two popular multi-objective evolutionary algorithms on a selection of three test problems using the hypervolume indicator to measure their performance. In this case, we have also determined the significance of our results using the Wilcoxon signed-rank test.
Create your own experiment, but this time include different algorithms and problems and determine which algorithm performs the best on each problem.
[^1]: Deb, K., Pratap, A., Agarwal, S., & Meyarivan, T. A. M. T. (2002). A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6(2), 182-197.

[^2]: Knowles, J., & Corne, D. (1999, July). The Pareto archived evolution strategy: A new baseline algorithm for Pareto multiobjective optimisation. In Congress on Evolutionary Computation (CEC99) (Vol. 1, pp. 98-105). IEEE.

[^3]: Zitzler, E., Deb, K., & Thiele, L. (2000). Comparison of multiobjective evolutionary algorithms: Empirical results. Evolutionary Computation, 8(2), 173-195.