In this post, we’ll walk through a novel method of solving a famous mathematical problem from computer science, and in the process, I’ll describe how it can be implemented effectively in Scala.

All the code in this post is available on GitHub.

Framing the problem

Let’s consider a map of locations, identified by their numerical index for convenience.

Map of locations

A salesperson wants to visit each location exactly once to peddle his wares, and then return to his starting point. Because gas costs considerably more these days, he wants to reduce his travel costs as much as possible by minimizing the total distance he travels. What’s the optimal route?

This is a classic problem in computer science called the travelling salesman problem.

In general, if there are \(n\) locations, then the number of possible solutions is \(\mathcal{O}\left(n!\right)\). Even for relatively small location graphs, a so-called "brute-force" algorithm (one that enumerates all possible solutions) will take too long to find the optimal solution.

There are many alternative approaches to solving problems like this. A common theme to these approaches is to be smarter about where in the solution space (the set of possible solutions) they search based on heuristics.

Some more details

Before we start discussing solution methods, let’s pin down a few necessary details.

Let’s define a solution as a tour that visits every location exactly once and then returns to the origin.

If the set of locations that we would like to visit is \(\{1, 2, 3, 4\}\), then some valid solutions are \[ 2 \rightarrow 3 \rightarrow 4 \rightarrow 1 \rightarrow 2 \] and \[ 4 \rightarrow 2 \rightarrow 1 \rightarrow 3 \rightarrow 4. \]

I also said the word “cost” without really establishing what I meant. The cost of a solution is how we’ll differentiate a good solution from a bad one: a single quantity that summarizes the utility of a solution. The cost function of an optimization problem is often called the “objective function”.

In this problem, we’ll assume that our salesperson can travel in straight lines between all locations, so we’ll define the objective function as the sum of the Euclidean distances between consecutive stops on the tour.

As a refresher, given two points, \(p_1 = \left(x_1, y_1\right)\) and \(p_2 = \left(x_2, y_2\right)\), the distance between them (let's denote it \(d : \mathbb{R}^2 \times \mathbb{R}^2 \rightarrow \mathbb{R}\)) is \[ d{\left(p_1, p_2\right)} = \sqrt{\left(x_2 - x_1\right)^2 + \left(y_2 - y_1\right)^2} \]

Representing the problem domain

We can start by writing some code for our problem domain.

Each location is represented by a two-dimensional point:
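A minimal version might look like the following (a sketch of my own; the actual definition is in the linked repository):

```scala
// A location is a point in the plane, with the Euclidean distance
// defined directly on it.
case class Point(x: Double, y: Double) {
  def distanceTo(other: Point): Double =
    math.hypot(other.x - x, other.y - y)
}
```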

Connected, undirected graphs

The essence of our problem is that the locations form an undirected graph where the edges between nodes (locations) have a weight based on their distance. Furthermore, since every location is connected to every other location, this is a connected graph. We could use a simple associative container to store this structure:
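For instance, something like this hypothetical encoding, keyed by location index:

```scala
// A naive adjacency encoding: node index -> (neighbour index -> distance).
// Nothing in the type prevents asymmetric weights or missing nodes.
val naive: Map[Int, Map[Int, Double]] = Map(
  1 -> Map(2 -> 1.5, 3 -> 2.0),
  2 -> Map(1 -> 1.5, 3 -> 0.5),
  3 -> Map(1 -> 2.0, 2 -> 0.5)
)
```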

but Scala gives us the abstraction tools to ensure that the data always conforms to the invariants we expect. Consider the following code, which is a type-generic implementation of a connected graph with some useful utilities:

A ConnectedGraph[A, W] is immutable, and the edge weights (of type W) are not specified individually, but through a function of the edge’s nodes. This implementation also doesn’t do any verification that node arguments to functions like weight(source: A, dest: A): W (which is the weight on the edge between the source and dest nodes) refer to nodes that are present in the graph.
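The full implementation is in the repository; a stripped-down sketch of the idea (my reconstruction, not the original source) is:

```scala
// Weights are derived from a function of the two endpoint nodes, so they
// can never be inconsistent with the nodes by construction. Every node is
// implicitly a neighbour of every other node, so the graph is connected.
class ConnectedGraph[A, W](val nodes: Vector[A], weightFn: (A, A) => W) {
  def weight(source: A, dest: A): W = weightFn(source, dest)
  def neighbours(node: A): Vector[A] = nodes.filterNot(_ == node)
  def size: Int = nodes.size
}
```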


In this problem, the nodes are of type Int and the edge weights are of type Double. A solution is an ordered list of steps between locations. We also store the graph to which the solution belongs, and a method for calculating the solution’s cost. Finally, we add a function for generating random solutions to a graph.
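A sketch of these pieces (reconstructed; the field names are my assumptions, and locations are plain coordinate pairs here to keep the snippet self-contained):

```scala
import scala.util.Random

// A tour is an ordered list of node indices; its cost is the total distance
// travelled, including the closing edge back to the start.
case class Solution(steps: Vector[Int], locations: Vector[(Double, Double)]) {
  private def dist(a: Int, b: Int): Double = {
    val ((x1, y1), (x2, y2)) = (locations(a), locations(b))
    math.hypot(x2 - x1, y2 - y1)
  }
  def cost: Double =
    (steps :+ steps.head).sliding(2).map { case Seq(a, b) => dist(a, b) }.sum
}

// A random solution is just a shuffled ordering of the node indices.
def randomSolution(locations: Vector[(Double, Double)]): Solution =
  Solution(Random.shuffle(locations.indices.toVector), locations)
```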

Ant colony optimization

With all the preliminary code out of the way, let’s talk about our solver.

Ant colony optimization 1 is a nature-inspired optimization technique introduced by Marco Dorigo in 1992.

The basic idea is fairly straightforward. In nature (to a rough approximation), ants start out walking in random directions looking for food, depositing pheromones along their paths as they go. When an ant finds food, it walks the same path repeatedly, carrying as much as it can back to the colony and returning to the food source. When other ants are deciding where to travel, they prefer routes where more pheromones have been deposited. More and more ants thus start walking the same route, which attracts still more ants, until the whole colony joins in.

Let’s try to model this algorithmically to solve our problem 2.

To start, we’ll choose a random solution and iteratively run our algorithm many times, while keeping track of the best overall solution to date.

In each iteration, we’ll do the following:

  • Place \(k\) "ants" on random nodes of the graph.

  • Each ant constructs a tour of the graph. At each step of the tour, the choice of which location to visit next is weighted by both the distance to each candidate and the amount of pheromone on the connecting edge.

  • When all \(k\) ants have returned, increase the pheromone levels in the graph where the ants have travelled.

Additionally, on every iteration, we will decrease the pheromone levels on all edges of the graph. This step is important for the solver to be able to “explore” alternative solutions rather than focusing on a single one. This trade-off is typically called “exploitation versus exploration”.
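The iteration scheme above can be sketched end to end as a compact, self-contained toy (all names and constants here are illustrative, not the post's actual code; the probability weighting is simplified to alpha = beta = 1, which the next section refines):

```scala
import scala.util.Random

object AntSystemSketch {
  type Edge = (Int, Int)
  private def edge(a: Int, b: Int): Edge = (a min b, a max b)

  def dist(points: Vector[(Double, Double)])(a: Int, b: Int): Double = {
    val ((x1, y1), (x2, y2)) = (points(a), points(b))
    math.hypot(x2 - x1, y2 - y1)
  }

  def cost(points: Vector[(Double, Double)], tour: Vector[Int]): Double =
    (tour :+ tour.head).sliding(2).map { case Seq(a, b) => dist(points)(a, b) }.sum

  // One ant builds a tour, preferring short, pheromone-rich edges.
  def buildTour(points: Vector[(Double, Double)],
                pher: Map[Edge, Double], rng: Random): Vector[Int] = {
    val n = points.size
    var tour = Vector(rng.nextInt(n))
    while (tour.size < n) {
      val current = tour.last
      val candidates = (0 until n).filterNot(tour.contains)
      val weights = candidates.map(j => pher(edge(current, j)) / dist(points)(current, j))
      // roulette-wheel selection: pick a candidate proportionally to its weight
      var r = rng.nextDouble() * weights.sum
      var chosen = candidates.head
      for ((c, w) <- candidates.zip(weights); if r > 0) {
        r -= w
        if (r <= 0) chosen = c
      }
      tour = tour :+ chosen
    }
    tour
  }

  // Place k ants, collect their tours, evaporate, deposit, track the best.
  def solve(points: Vector[(Double, Double)], iterations: Int, k: Int, seed: Long): Vector[Int] = {
    val rng = new Random(seed)
    val n = points.size
    var pher = (for (i <- 0 until n; j <- i + 1 until n) yield edge(i, j) -> 1.0).toMap
    var best = buildTour(points, pher, rng)
    for (_ <- 1 to iterations) {
      val tours = Vector.fill(k)(buildTour(points, pher, rng))
      pher = pher.map { case (e, f) => e -> (f * 0.9) } // evaporation, g = 0.1
      for (t <- tours) {
        val c = cost(points, t)
        (t :+ t.head).sliding(2).foreach { case Seq(a, b) =>
          pher = pher.updated(edge(a, b), pher(edge(a, b)) + 1.0 / (c * dist(points)(a, b)))
        }
      }
      best = (tours :+ best).minBy(cost(points, _))
    }
    best
  }
}
```

On a tiny instance like the four corners of a square, this finds the perimeter tour almost immediately; the point of the sketch is the shape of the loop, not performance.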

More details

Let’s expand on two details of the algorithm that I alluded to above. These are the way that an ant chooses a new destination on the tour and how pheromones get updated after every iteration.

For each destination that an ant has not yet visited, we compute a weight that will be used for probabilistically choosing that destination. The weight expression is \[ w{\left(i, j\right)} = \frac{\frac{f^\alpha_{ij}}{d^\beta{\left(p_i, p_j\right)}}}{\sum_{u,v} \frac{f^\alpha_{uv}}{d^\beta{\left(p_u, p_v\right)}}} \]

  • \(f_{ij}\) is the current amount of pheromone deposited on the edge between \(i\) and \(j\).

  • \(\alpha\) is a configurable constant, usually less than 1.0. A higher value will tend to weight the pheromone value more significantly.

  • \(\beta\) is similar to \(\alpha\), except that it controls how heavily the distance term is weighted.

  • The denominator is a normalization factor that ensures that all the weights sum to one (and thus, that they are probabilities).
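In code, the unnormalized weights and the normalization fall directly out of the formula (a sketch; the names pher and dist stand in for \(f\) and \(d\), and are my own):

```scala
// Unnormalized weight of travelling from i to j, per the formula above:
// pheromone raised to alpha, divided by distance raised to beta.
def rawWeight(pher: (Int, Int) => Double, dist: (Int, Int) => Double,
              alpha: Double, beta: Double)(i: Int, j: Int): Double =
  math.pow(pher(i, j), alpha) / math.pow(dist(i, j), beta)

// Normalize over the candidate destinations so the weights sum to one.
def probabilities(i: Int, candidates: Seq[Int],
                  pher: (Int, Int) => Double, dist: (Int, Int) => Double,
                  alpha: Double, beta: Double): Seq[Double] = {
  val raw = candidates.map(j => rawWeight(pher, dist, alpha, beta)(i, j))
  val total = raw.sum
  raw.map(_ / total)
}
```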

On every iteration, we decrease the pheromone levels on every edge by a fixed factor \(g \in \left(0, 1\right)\): \[ f_{ij} \leftarrow \left(1 - g\right) \cdot f_{ij} \]

For every step \(\left(i, j\right)\) in an ant's solution, we increase the pheromones on that edge by \[ f_{ij} \leftarrow f_{ij} + \frac{F}{c \cdot d{\left(p_i, p_j\right)}} \] where \(F\) is a fixed constant, and \(c\) is the cost of the ant's solution. This way, ants will tend to deposit more pheromones on better solutions and prefer edges that are shorter.
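Both updates are essentially one-liners over a pheromone map (a sketch of mine, with the map keyed by undirected edges):

```scala
type Edge = (Int, Int)

// Evaporation: every edge loses a fixed fraction g of its pheromone.
def evaporate(pher: Map[Edge, Double], g: Double): Map[Edge, Double] =
  pher.map { case (e, f) => e -> ((1 - g) * f) }

// Deposit: each edge of an ant's tour gains F / (c * d), where c is the
// solution cost and d is the length of that edge.
def deposit(pher: Map[Edge, Double], steps: Seq[Edge], dist: Edge => Double,
            c: Double, bigF: Double): Map[Edge, Double] =
  steps.foldLeft(pher) { (p, e) =>
    p.updated(e, p(e) + bigF / (c * dist(e)))
  }
```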

How it looks in Scala

A lot of the implementation details of the formulae above are self-evident in Scala given the machinery we already defined, so I’ll focus this section on some key elements of the implementation that are worth mentioning.

There are some common themes:

  • Mutation is never exposed through an interface, but sometimes it’s used internally inside a referentially-transparent function. Structures are never mutated either; new structures are created instead.

  • A modular style that’s familiar to OCaml/SML fans, instead of object-orientation with implementation inheritance 3.


The code below shows the Ant class definition.

There are a couple of points worth mentioning here.

The Ant class is really a module which contains type and method definitions.

The constants alpha and beta which are used to control the travel probabilities are left abstract (and the overall class is abstract as a result). We defined Ant.Defaults so that creating a typical ant instance is as simple as new Ant with Ant.Defaults, but defining custom values for the constants is as easy as new Ant { def alpha = 0.3; def beta = 0.5 }, for example.
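The shape described above looks roughly like this (my reconstruction; the values in Defaults are illustrative, not the post's tuned constants):

```scala
// The class is abstract because alpha and beta are left abstract.
abstract class Ant {
  def alpha: Double
  def beta: Double
  // tour construction over a graph and pheromone levels would live here
}

object Ant {
  // Typical values, mixed in when no customization is needed.
  trait Defaults { this: Ant =>
    def alpha: Double = 0.5
    def beta: Double = 1.2
  }
}

val typical = new Ant with Ant.Defaults
val custom  = new Ant { def alpha = 0.3; def beta = 0.5 }
```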

Colonies of ants

A colony is parameterized on the kind of ant that it is composed of.

The most interesting aspect of this implementation is that, like an ML functor, the Colony “module” is a function of a particular Ant module.

We define a record for keeping track of the solver progress, Progress. Instead of logging our solver’s progress to stdout or stderr, the solver takes an optional listener which is invoked for every ant that returns with the latest progress.

Finally, we use Task from the Scalaz library. This is like a scala.concurrent.Future, except that a Task only executes when its result is explicitly requested, whereas a Future starts executing immediately. The main loop inside Colony#solve is a recursive function that makes use of the Task monad.

Main program

The main function sets up a logger as the solver listener which logs each iteration to a file for analysis, runs the solver for 3000 iterations, and then formats the best solution as a DOT graph.
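The DOT formatting step, for instance, can be as simple as this hypothetical helper (not the post's actual code):

```scala
// Format a tour as an undirected DOT graph, suitable for Graphviz.
def toDot(steps: Vector[Int]): String = {
  val edges = (steps :+ steps.head).sliding(2).collect {
    case Seq(a, b) => s"  $a -- $b;"
  }
  (Vector("graph tour {") ++ edges ++ Vector("}")).mkString("\n")
}
```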

Sample results

We can visualize the solution of one run of the program:

Best solution

and also see how the solution progressed as the solver ran:

Solver iterations

As a reminder, the code from this post is available in its entirety on GitHub.

Next steps

Where do we go from here? This particular version of ant colony optimization is called “Ant System”, and it’s the first version that Dorigo proposed, but there have since been many suggested improvements.

As far as this particular algorithm is concerned, there are lots of questions remaining:

  • What are good relative values of \(\alpha\), \(\beta\), \(F\), and \(k\)?

  • How many iterations should the algorithm run before we say “good enough”?

  • What if we were to run many colonies in parallel and choose the best solution that way? Since the solver doesn’t depend on any external state, this is an easy idea to investigate.


  • 13 October 2015: Thanks to predef on the Scala sub-reddit comments for pointing out the dangers of Map#mapValues.

  • 22 March 2016: Fixed an off-by-one error in the touring code.

  • 4 October 2016: Thanks to merraksh on HN for some helpful corrections.


  1. Ant colony optimization on Wikipedia 

  2. I highly recommend this free book for a fantastic overview of heuristic optimization algorithms (including ant colony optimization). 

  3. ML-style (OCaml, SML) modules just make sense to me, and Scala does a reasonable job of encoding these patterns. Perhaps this will be the subject of another blog post.