Objective Functions


In [1]:
# used to create block diagrams
%reload_ext xdiag_magic
%xdiag_output_format svg
import numpy as np                   # for multi-dimensional containers
import pandas as pd                  # for DataFrames
import plotly.graph_objects as go    # for data visualisation


Objective functions are perhaps the most important part of any Evolutionary Algorithm, whilst simultaneously being the least important. They are important because they encapsulate the problem the Evolutionary Algorithm is trying to solve, and they are unimportant because they play no algorithmic part in the operation of the Evolutionary Algorithm itself.

Put simply, objective functions expect some kind of solution as input, i.e. the problem variables, and they use this input to calculate some output, i.e. the objective values. These objective values can be considered a measure of how well a solution's problem variables perform with respect to the current problem. For example, the input could be variables that define the components of a vehicle, the objective function could be a simulation which tests the vehicle in some environment, and the objective values could be the average speed and ride comfort of the vehicle.
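As a sketch of this pattern, the hypothetical vehicle example might look something like the following in Python. Both objective calculations here are made-up placeholders standing in for a real simulation, included only to show the shape of an objective function: problem variables in, objective values out.

```python
import numpy as np

def objective_function(variables):
    # placeholder calculations standing in for e.g. a vehicle simulation;
    # both expressions are arbitrary illustrations, not a real model
    average_speed = 100 - np.sum(np.abs(variables))  # hypothetical first objective
    ride_comfort = 1 / (1 + np.var(variables))       # hypothetical second objective
    return np.array([average_speed, ride_comfort])

solution = np.random.rand(4)                     # some problem variables
objective_values = objective_function(solution)  # two objective values
print(objective_values)
```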

In [2]:
%%blockdiag
blockdiag {
    orientation = portrait
    "Problem Variables" -> "Objective Function" -> "Objective Values"
    "Objective Function" [color = '#ffffcc']
}

In the figure below, we have highlighted the stage at which the objective function is typically invoked: the evaluation stage. It is after this stage that we find out whether a potential solution to the problem performs well or not, and can form some idea of the trade-offs between multiple solutions using their objective values. The stage that typically follows is the termination stage, where we can use this information to determine whether we stop the optimisation process or continue.

In [3]:
%%blockdiag
blockdiag {
    orientation = portrait
    Initialisation -> Evaluation -> "Terminate?" -> Selection -> Variation -> Evaluation
    Evaluation [color = '#ffffcc']
}

Objective Functions in General

Let's have a quick look at what we mean by an objective function. We can express an objective function mathematically.

$$ f(x) = (f_{1}(x), f_{2}(x), \ldots, f_{\mathrm{M}}(x)) \tag{1} $$

Before we can talk about this, we need to explain what $x$ is. In this case, $x$ is a solution to the problem, and it's defined as a vector of $\mathrm{D}$ decision variables.

$$ x=\langle x_{1},x_{2},\ldots,x_{\mathrm{D}} \rangle \tag{2} $$

Let's assume that the number of decision variables for a problem is 8, so in this case $\mathrm{D}=8$. We can create such a solution using Python and initialise it with random numbers.

In [4]:
D = 8
x = np.random.rand(D)
print(x)
[0.21177976 0.15072328 0.69361583 0.27808369 0.42566027 0.10043317
 0.37540255 0.51079469]

Now we have a single solution consisting of randomly initialised values for $x_1$ through to $x_8$. It should be noted that this is a real-encoded solution, which is a distinction we make now as we will discuss solution encoding later in this book.
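To illustrate the distinction, here is what a real-encoded solution and a binary-encoded solution to the same eight-variable problem might look like side by side. The binary example is included only for contrast; throughout this section we work with real encoding. (Using `np.random.randint` for the binary case is our choice of illustration here.)

```python
import numpy as np

D = 8
real_encoded = np.random.rand(D)              # real values drawn from [0, 1)
binary_encoded = np.random.randint(0, 2, D)   # bits drawn from {0, 1}

print(real_encoded)
print(binary_encoded)
```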


When running this notebook for yourself, you should expect the numbers to be different because we are generating random numbers.

Let's have a look at $f(x)$ in Equation 1. This is a function which takes the solution $x$ as input and then uses it for some calculations before giving us some output. The subscript $\mathrm{M}$ indicates the number of objectives we can expect, so for a two-objective problem, we can say $\mathrm{M}=2$. For the sake of example, let's say that $f_{1}(x)$ will calculate the sum of all elements in $x$, and $f_{2}(x)$ will calculate the product of all elements in $x$.

$$ f_1(x) = \sum_{k=1}^{\mathrm{D}} x_k \tag{3.1} $$

$$ f_2(x) = \prod_{k=1}^{\mathrm{D}} x_k \tag{3.2} $$

We can implement such an objective function in Python quite easily.

In [5]:
def f(x):
    f1 = np.sum(x)    # Equation (3.1)
    f2 = np.prod(x)   # Equation (3.2)
    return np.array([f1, f2])

Now let's invoke this function and pass in the solution $x$ that we made earlier. We'll store the results in a variable named $y$, in line with Equation 4.

$$ y = f(x) \tag{4} $$

Translated to Python, this will look something like the following.

In [6]:
y = f(x)
print(y)
[2.74649324e+00 5.04711459e-05]

This has returned our two objective values which quantify the performance of the corresponding solution's problem variables. There is much more to an objective function than what we've covered here, and the objectives we have defined here are entirely arbitrary. Nonetheless, we have implemented a two-objective (or bi-objective) function which we may wish to minimise or maximise.
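As a quick sanity check on this implementation, we can evaluate it at two boundary solutions whose objective values are known exactly: a vector of all zeros and a vector of all ones. The function `f` is redefined below so the snippet is self-contained.

```python
import numpy as np

def f(x):
    f1 = np.sum(x)    # Equation (3.1)
    f2 = np.prod(x)   # Equation (3.2)
    return np.array([f1, f2])

print(f(np.zeros(8)))  # sum and product of all zeros: [0. 0.]
print(f(np.ones(8)))   # sum and product of all ones:  [8. 1.]
```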

Let's use Python to generate 50 more solutions $x$ with $\mathrm{D}=8$ variables and calculate their objective values according to Equations 3.1 and 3.2.

In [7]:
objective_values = np.empty((50, 2))

for i in range(50):
    x = np.random.rand(8)
    objective_values[i] = f(x)

# convert to DataFrame
objective_values = pd.DataFrame(objective_values, columns=['f1', 'f2'])

We won't output these 50 solutions in the interest of saving space, but let's instead visualise all 50 of them using a scatter plot.

In [8]:
fig = go.Figure()
fig.add_scatter(x=objective_values.f1, y=objective_values.f2, mode='markers')
fig.show()


In this section, we covered the very basics of what we mean by an objective function. We expressed the concept mathematically and then made a direct implementation using Python. We then generated a set of 50 solutions, calculated the objective values for each one, and plotted the objective space using a scatter plot.

In the next section, we will look at a popular synthetic objective function named ZDT1, following a similar approach where we implement a Python function from its mathematical form.

Support this work

You can access this notebook and more by getting the e-book on Practical Evolutionary Algorithms.