
How planning algorithms inch closer to the optimal solution

From our Spotify playlists to the routes proposed by our satnav and the spam filter in our mailbox: algorithms have become such a familiar part of our everyday lives that we sometimes take them for granted. Yet developing an algorithm is a hugely challenging task – especially when it has to tackle complex problems such as the automated fuel delivery planning that Bottomline’s BX platform performs.

Professor Hein Fleuren (Tilburg University) specializes in the use of data science for optimization purposes and has supported Bottomline in the development of its BX algorithm. We asked him to give us a look behind the scenes.

Let me start by answering the most basic of questions in this context: what is an algorithm? Essentially, it’s a calculation method designed by humans – so there’s an element of creativity involved – to be executed by a computer. It follows a step-by-step, recipe-like structure to solve a given problem. In the case of Bottomline: how to plan the most efficient fuel delivery routes.

Combinatorial explosions 

To solve such a problem, an algorithm essentially runs through as many scenarios as possible within the available time. And the reason I say ‘as many as possible’ rather than ‘all’ is that when using algorithms to optimize a plan, it is generally impossible to calculate every scenario, because of what mathematicians call a ‘combinatorial explosion’.

To explain this phenomenon to first-year students, I often ask them to imagine someone who has to deliver identical packages to 20 addresses and who has access to a computer that can run through 1 billion combinations per second. How long would it take this computer to test all possible routes? The typical answer I get is ‘less than a second’. In reality, it would take this computer around six months of non-stop calculation. That’s how quickly the number of possible permutations explodes.
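You can verify the order of magnitude yourself with a few lines of Python. This is a back-of-envelope sketch, not part of any production system; the exact figure depends on how you count distinct routes (fixed versus free starting point, whether a reversed route counts as new), but under every convention the answer is measured in months or years, never fractions of a second:

```python
import math

# Back-of-envelope check of the combinatorial explosion described above.
# Assumption from the example: the computer tests 1e9 routes per second.
RATE = 1e9

# The exact route count depends on the modelling convention:
route_counts = {
    "20! (free start, direction matters)": math.factorial(20),
    "19! (fixed start, direction matters)": math.factorial(19),
    "19!/2 (fixed start, reversals equal)": math.factorial(19) // 2,
}

for label, n in route_counts.items():
    days = n / RATE / 86_400  # seconds in a day
    print(f"{label}: {n:.2e} routes, about {days:,.0f} days of computing")
```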

And that’s for a relatively simple problem. Bottomline’s fuel delivery routing involves many more restrictions and variables. You may need to take into account the opening times of fuel stations, delivery costs, the optimal size of fuel drops based on sales forecasts, the capacity of tank trucks and the way they have been compartmentalized, fluctuating prices at depots, volume agreements with suppliers … the list goes on. Having worked in this field all my professional life, I can honestly say that I have rarely encountered such a challenging problem. The number of possible scenarios easily runs into the trillions, yet the computer has to make its decision in a matter of minutes.
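To give a feel for what a model like this has to keep track of, here is a purely hypothetical sketch. Every class and field name is invented for illustration; this is not Bottomline’s actual data model:

```python
from dataclasses import dataclass

# Hypothetical sketch of the kinds of data a fuel-routing model must track.
# All names and fields are illustrative, not Bottomline's actual schema.

@dataclass
class Station:
    name: str
    opening_hours: tuple          # e.g. (6, 22): open from 06:00 to 22:00
    forecast_drop_litres: float   # optimal drop size based on sales forecast

@dataclass
class Truck:
    compartments: list            # per-compartment capacities in litres

@dataclass
class Depot:
    name: str
    price_per_litre: float        # fluctuating depot prices
    contracted_volume: float      # volume agreement with the supplier
```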



How to train an algorithm

So, when building an algorithm such as the one we developed for Bottomline’s BX platform, the key question is: how can we narrow down this unworkable number of possible scenarios? As humans, we do this almost without thinking: we immediately and intuitively discard many technically possible but hopelessly inefficient routes. For an algorithm, however, all scenarios are in principle equally valid. So it needs to be trained.
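A classic textbook illustration of this narrowing-down is a greedy heuristic: instead of enumerating every permutation, it constructs one plausible route directly. The sketch below is that standard example only, not the BX algorithm itself:

```python
import math

# Nearest-neighbour heuristic: build one plausible route directly by
# always driving to the closest unvisited address. A textbook example
# of pruning the search space, not Bottomline's BX algorithm.

def nearest_neighbour_route(depot, addresses):
    """Greedily visit the closest unvisited address at each step."""
    route, current = [depot], depot
    remaining = list(addresses)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    return route

print(nearest_neighbour_route((0, 0), [(4, 1), (1, 5), (6, 3), (2, 2)]))
```

A route built this way is rarely optimal, but it is computed in a fraction of a second and discards the hopeless permutations a human planner would never consider either.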

A crucial part of developing an algorithm, therefore, is setting its parameters. These describe constraints and conditions that help the algorithm focus on a smaller set of promising scenarios rather than evaluating every possible combination. We can, for example, instruct an algorithm to prioritize one variable over another (say, time over cost, or the other way around). Setting these parameters is, in a sense, an optimization problem in its own right. So far, no data scientist has been able to predict beforehand how an algorithm will perform. There is only one way to find out, and that is to build it, code it and conduct large-scale test runs on a representative problem.
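Here is an invented example of what such a priority parameter might look like in practice: a pair of weights that tells the scoring function whether time or cost matters more. All numbers and names are made up for illustration:

```python
# Invented example of a priority parameter: weights w_time and w_cost
# steer the algorithm toward time-efficient or cost-efficient plans.

def plan_score(plan, w_time=0.7, w_cost=0.3, max_hours=11.0, max_cost=1200.0):
    """Lower is better; scales are normalized so the weights are comparable."""
    return w_time * plan["hours"] / max_hours + w_cost * plan["cost"] / max_cost

candidates = [
    {"name": "A", "hours": 9.0, "cost": 1200.0},   # faster but expensive
    {"name": "B", "hours": 11.0, "cost": 950.0},   # cheaper but slower
]
print(min(candidates, key=plan_score)["name"])  # 'A': time outweighs cost here
```

Swap the weights (w_time=0.3, w_cost=0.7) and plan B wins instead: the same data, a different answer, purely because of the parameters.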

The outcome of such test runs can be compared with what a human planner would have come up with. Alternatively, we can define criteria for what constitutes an efficient plan and use those to assess the algorithm’s output. In either case, a lot of fine-tuning is involved before we strike the right balance between the best possible solution and the maximum runtime we are willing to wait for the algorithm to come up with an answer.
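That trade-off between solution quality and runtime can be made concrete with a minimal ‘anytime’ search loop: keep the best plan found so far and stop when the agreed time budget expires. This is a sketch under invented assumptions; the random candidate generator merely stands in for real scenario construction:

```python
import random
import time

# Sketch of the runtime-versus-quality trade-off: an 'anytime' search keeps
# the best plan found so far and stops when the time budget runs out.

def search(evaluate, time_budget_s=1.0):
    best_plan, best_score = None, float("inf")
    deadline = time.monotonic() + time_budget_s
    while time.monotonic() < deadline:
        plan = random.random()        # placeholder: propose a candidate plan
        score = evaluate(plan)
        if score < best_score:
            best_plan, best_score = plan, score
    return best_plan, best_score

plan, score = search(evaluate=lambda p: abs(p - 0.5))
print(f"best score within the budget: {score:.5f}")
```

Give the loop more time and the answer improves; cap it sooner and you accept a slightly worse plan. Tuning is about finding where that curve flattens out.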

Monitoring and adjusting algorithms

And once you’ve got an algorithm that seems to perform well, you need to keep monitoring it. After all, new variables may arise. A customer may change its priorities, agree new contracted volumes with a supplier, or want to plan further ahead. In all these cases, the algorithm’s settings may need to be adjusted.

In short, although you could describe an algorithm as a form of artificial intelligence, it is in fact still very dependent on the hard work and problem-solving capacities of its engineers. Perhaps it would be more accurate to say that the current generation of algorithms is a blend of human intelligence and raw computing power. Yet that balance may be about to change. As I pointed out earlier, we humans have the capacity to intuitively recognize some scenarios as inefficient. That is why human planners, even without the aid of a computer program, can still draw up a relatively efficient plan.

Of course, that capacity has nothing to do with ‘intuition’. What actually happens is that trained planners recognize patterns, which (subconsciously) guide them towards the most feasible scenarios. And recognizing patterns is something that AI technology is increasingly good at. So it should be possible to train AI to think like a human planner; in other words, to automatically focus on a limited set of the most promising scenarios. The difference is that whereas humans can mentally grasp only a very limited number of scenarios, a computer is perfectly capable of processing thousands of complex scenarios in a matter of seconds. That’s an exciting prospect. It means that over the next few years, we can use AI to boost our algorithm’s performance, inching ever closer to the most efficient solution. Stay tuned!


Hein Fleuren is professor of the application of Business Analytics and Operations Research (BA/OR) at Tilburg University, specializing in data science and optimization. He has worked as a consultant with Bottomline for many years on the development of its routing algorithm. Hein is also the co-founder and director of the Zero Hunger Lab, a research group that uses data science to support the UN World Food Program, food banks and many NGOs in finding solutions to the world’s hunger problem.

