How to implement schedule optimisation

In a previous article, I wrote that Gartner still considers “Zero-touch Work Assignment” one of the future capabilities to look out for in field service management. It estimates that only 14% of FSM systems deployed today have this functionality, and that 51% are likely to implement it in the next two years.

Many FSM systems treat “Zero-touch Work Assignment” as simple request-based automation that selects a field agent based on a single request’s requirements, in isolation from everything else. This makes it easy to implement and can seem to reduce back-office cost, but it often creates other “collision” problems down the line.

On the other hand, implementing a schedule optimiser is often seen as difficult, and implementation failures occur more frequently than they should.

Here are 5 tips for those looking to automate their high-volume resource scheduling without causing more jeopardy situations down the “operations” line.

Consider Scheduling as a Global Solution

Scheduling is a global solution, not a “one record story”. We humans have trouble thinking of more than one thing at a time – at least I do. When it comes to allocating tasks to field agents, we often use a “one record story” approach:

1. Look at the current record’s requirements.

2. Find the best resource that is not already booked.

3. Assign the task to this resource.

4. Forget about this record.

When we get to the second record, we’ve already forgotten that the first one could also have been assigned to another, equally well-suited resource. Unfortunately, because of skill constraints, we must now give the second task to a less qualified resource or, worse, simply leave it unassigned.

When we get to the 100th task, we’ve forgotten all about the possible alternatives of the first few.

Visual scheduling boards definitely help us see the overall picture, but we still need to think through the entire logic of an assignment each time we move a task around.

Computers don’t think this way. They can construct a schedule of thousands of assignments hundreds of thousands of times per second, keeping track, each time, of the best possible solution. Adding a single task to the mix may mean shuffling 10 other tasks around to get to the new best solution, but for a computer, it’s a matter of a few milliseconds.

So, the first tip is to avoid asking your computer to schedule one task at a time. You should rather leverage your machine for what it’s good at: scheduling all tasks together, over and over, as the situation changes.
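To make the contrast concrete, here is a minimal sketch of the “one record story” approach against a search that considers all assignments together. The agents, tasks and skills are hypothetical, invented purely for illustration:

```python
from itertools import permutations

# Hypothetical data: two agents with skill sets, two tasks needing one skill each.
agents = {"alice": {"hvac", "plumbing"}, "bob": {"plumbing"}}
tasks = [("t1", "plumbing"), ("t2", "hvac")]

def greedy(tasks, agents):
    """One-record-story: take each task in turn, pick the first free, qualified agent."""
    free, plan = list(agents), {}
    for task, skill in tasks:
        match = next((a for a in free if skill in agents[a]), None)
        if match:
            plan[task] = match
            free.remove(match)
    return plan

def global_best(tasks, agents):
    """Consider all task/agent pairings together; keep the plan covering the most tasks."""
    best = {}
    for order in permutations(agents):
        plan = {task: a for (task, skill), a in zip(tasks, order) if skill in agents[a]}
        if len(plan) > len(best):
            best = plan
    return best

print(greedy(tasks, agents))       # {'t1': 'alice'}, and t2 is left unassigned
print(global_best(tasks, agents))  # {'t1': 'bob', 't2': 'alice'}, both covered
```

Brute-forcing permutations is only for illustration; real optimisers use heuristics and metaheuristics to search far larger spaces, but the point stands: evaluating the schedule as a whole finds the assignment that the one-record-at-a-time loop throws away.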

Overall Resource Match

There is a concept in auto-scheduling, or in any optimisation problem for that matter, called “over-constraining”. This is when the constraints make it impossible to match everything up properly: no solution can satisfy them all.

The simplest example of this is having more tasks to be done than resources to do them. You will never be able to allocate all the tasks, or you will need to allocate some in overtime.

The situation can also be a bit more subtle. Out of 200 field agents, you have 50 who are skilled in HVAC maintenance. And out of your 200 man-days’ worth of tasks to allocate, you have 60 man-days’ worth of HVAC maintenance. Some of the HVAC maintenance simply cannot be done.

A third example could be when you have enough HVAC technicians to cover the demand, but they are all based in one specific area because, for example, they all come from the same subcontractor. This means you’ll be able to deliver the work, but you’ll experience high travel costs for HVAC maintenance in other areas.

The second tip is to first do a rough resource fit. Using historical data, try to answer the question: do I have enough resources, with the right skills, in all areas and with the right stock, to cover all my demand?

Regardless of the schedule optimiser, this is just good practice for workforce management, and it will help you start understanding what your actual constraints are: the ones you will need to feed into the optimiser.
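As an illustration, the rough fit can be as simple as comparing man-days of demand against man-days of capacity per skill. The figures below reuse the HVAC example above; the data structure is an assumption for the sketch:

```python
# Hypothetical historical figures, in man-days, reusing the HVAC example above.
demand = {"hvac": 60, "electrical": 140}     # 200 man-days of tasks
capacity = {"hvac": 50, "electrical": 150}   # 200 man-days of agent availability

def fit_report(demand, capacity):
    """Report skills where demand exceeds capacity, i.e. where we are over-constrained."""
    return {skill: need - capacity.get(skill, 0)
            for skill, need in demand.items()
            if need > capacity.get(skill, 0)}

print(fit_report(demand, capacity))  # {'hvac': 10}: 10 man-days short on HVAC
```

The same check can be repeated per area or per stock item. Note that the headline totals balance (200 vs 200) yet still hide a 10 man-day shortfall in one skill, which is exactly why the rough fit is worth doing.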

Reduce Number of Constraints

When an organisation has been manually scheduling for a long time, it will have built up a lot of unwritten rules and idiosyncrasies that help it deal with complex scheduling manually.

A common example of these is patches, or delimited geographical areas. Field agents belong to a geographical area and can only be assigned work in their area. Even if a task needs to be done just over the delimiting border, the schedulers won’t consider it, as their scheduling process filters by patch first.

But it could also be rules like the following: prefer assigning to internal employees, and only give work to a contractor when all employees are busy; or send tasks requiring a specific skill directly to the contractor. These rules are typically established to help schedulers apply direct filters to their data, allowing them to work with smaller subsets of tasks. They are also often based on estimated values of commercial agreements, without regard to individual situations.

But really, it all comes down to 2 things: hard constraints – like skills, parts and permits, i.e., can this resource do the work – and cost.

Most legacy rules can be transformed into a cost value: the cost of assigning this contractor as opposed to an employee, the cost of assigning tasks out of hours, the cost of travelling to a point possibly outside the assigned patch. The computer can quickly calculate the cost of each individual situation (many hundreds of thousands of times per second) and come up with what is best with respect to cost alone, helping your bottom line instead of blindly following manual rules.

Third tip: reduce your scheduling constraints down to hard constraints (skills, parts and permits) and cost. Fewer constraints mean more chances of getting to a better solution, and getting there more quickly.
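As a sketch of this reduction, the legacy “prefer employees” and patch rules can collapse into a single cost function, while skills stay a hard gate. All the weights, field names and the distance helper below are assumptions made for illustration, not a real API:

```python
def distance_km(a, b):
    """Toy straight-line distance between (x, y) coordinates, in km."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

# Hypothetical cost weights standing in for legacy rules.
CONTRACTOR_PREMIUM = 80.0   # "prefer employees" becomes a cost, not a filter
OVERTIME_RATE = 40.0        # per hour beyond the agent's free hours
TRAVEL_RATE = 0.6           # per km, regardless of patch boundaries

def assignment_cost(agent, task):
    """Hard constraints gate the assignment; everything else is just cost."""
    if not task["skills"] <= agent["skills"]:   # hard constraint: skills
        return None                             # infeasible, not merely expensive
    cost = TRAVEL_RATE * distance_km(agent["base"], task["site"])
    if agent["contractor"]:
        cost += CONTRACTOR_PREMIUM
    cost += OVERTIME_RATE * max(0, task["hours"] - agent["hours_free"])
    return cost

employee = {"skills": {"hvac"}, "base": (0, 0), "contractor": False, "hours_free": 8}
contractor = {"skills": {"hvac"}, "base": (0, 0), "contractor": True, "hours_free": 8}
task = {"skills": {"hvac"}, "site": (3, 4), "hours": 2}

print(assignment_cost(employee, task))    # travel only: 3.0
print(assignment_cost(contractor, task))  # travel plus premium: 83.0
```

With this shape, the optimiser simply minimises total cost; the employee still wins here, but only because the numbers say so, and a nearby contractor can win when the travel saving outweighs the premium.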

Enable Parallel Operation

Organisations that have been doing manual scheduling for a long time are likely to carry a relatively large team of schedulers working with legacy, idiosyncratic rules in their minds. As we discussed, though, a computer doesn’t approach the scheduling problem the same way humans do. For this reason, implementing automated scheduling is often as much about people transformation as it is about technical build.

In the above context, if we were to simply “switch over” to the automated scheduler one morning, there is a very high risk that the human schedulers would only see the “problems” arising from the automation.

“Why is this task assigned to this person, it’s the wrong patch altogether?”

“Oh, we shouldn’t assign this contractor here, our employee is still on the bench”.

Typically, the more this happens, the more your organisation can benefit from automatic scheduling. It means that the schedulers’ quick tricks, or the “one record at a time” approach, were giving a sub-optimal solution for your bottom line.

One of the ways to ease this transition is to set up a separate staging area for the auto-scheduler’s proposed schedule. On the morning of “go-live”, the human schedulers can still prepare their schedule and commit it from their own staging area, but then they can switch over to the machine’s staging area to see what it is proposing.

This has two purposes. One is to validate the auto-scheduler’s performance before allowing it to commit the schedule. It is notoriously difficult to create true testing conditions for a global scheduler in a testing environment and so, often, true testing is done with real production data… in the production environment.

The second purpose is to enable human schedulers to do “introspection”. It allows them to compare their schedule to another, independent solution and ask themselves why they did what they did. It can start helping them realise how their scheduling habits may be restricting them from achieving non-obvious, better solutions.

Fourth tip: allow a soft transition by using parallel staging areas.
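One lightweight way to support that review is to diff the human staging against the machine staging, so reviewers only discuss the assignments that actually differ. The plan structure below, a simple task-to-agent mapping, is a hypothetical simplification:

```python
# Hypothetical stagings: a plan is just a task -> agent mapping here.
def compare_stagings(human, machine):
    """List the tasks the two stagings treat differently, for reviewer introspection."""
    return {t: (human.get(t, "unassigned"), machine.get(t, "unassigned"))
            for t in set(human) | set(machine)
            if human.get(t) != machine.get(t)}

human = {"t1": "alice", "t2": "bob"}
machine = {"t1": "alice", "t2": "carol", "t3": "bob"}
print(compare_stagings(human, machine))
# e.g. {'t2': ('bob', 'carol'), 't3': ('unassigned', 'bob')}
```

A short report like this keeps the go-live conversation focused: each line is a concrete “why did the machine do that?” question rather than a vague objection to automation.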

Keep Data Clean

I was working with an organisation implementing an auto-scheduler a few years ago. Many of their comments, after looking at the auto-scheduler’s staging area, were about tasks that shouldn’t have been scheduled at all. When I investigated, I found that the task, scheduled for the next day, was within its SLA window, assigned to the right skill and with minimal travel. Everything seemed fine.

When I asked the reviewers why they believed this task shouldn’t be scheduled, they explained that it was in an exception “bucket” and that they were waiting for the customer to advise.

“But the status of that task is still ‘pending dispatch’,” I answered. “Why are you saying it is in an exception status?”

“Ah!” they said, “we just know it. We have an email about it and some notes in this Excel sheet over here.”

So, I proceeded to explain that the auto-scheduler is not magical, nor does it have a mentalist module installed in it. It cannot guess what is not in the data, what is still in people’s minds.

Then, I drew a box on a whiteboard, an arrow leading into it, and another arrow coming out of the box. Over the inbound arrow was written “Garbage in” and over the outbound arrow, “Garbage out”.

This sounds obvious but is probably one of the most important tips for a successful auto-scheduler deployment: Keep your data clean.

The efficiency of scheduling drops very quickly when the quality of the data drops just a little. This is because poor scheduling creates ripple effects downstream: jeopardy situations compound, and many tasks may need to be rescheduled because of a single scheduling mismatch.

Fifth tip: ensure strong data governance.
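As a sketch of that governance in code, a simple gate can keep side-channel exceptions from leaking into the optimiser’s feed: anything incomplete, or not in a schedulable status, is pushed back to a human. The field names and statuses below are illustrative assumptions:

```python
# Hypothetical task records; field names and statuses are illustrative only.
REQUIRED = ("id", "skills", "site", "sla_end", "status")
SCHEDULABLE = {"pending dispatch"}

def validate(tasks):
    """Split tasks into a clean feed for the optimiser and a rejects list for humans."""
    feed, rejects = [], []
    for t in tasks:
        missing = [f for f in REQUIRED if not t.get(f)]
        if missing:
            rejects.append((t.get("id"), "missing " + ", ".join(missing)))
        elif t["status"] not in SCHEDULABLE:
            rejects.append((t["id"], "status '" + t["status"] + "' not schedulable"))
        else:
            feed.append(t)
    return feed, rejects

tasks = [
    {"id": 1, "skills": {"hvac"}, "site": (3, 4), "sla_end": "day 2", "status": "pending dispatch"},
    {"id": 2, "skills": {"hvac"}, "site": (1, 2), "sla_end": "day 2", "status": "on hold"},
    {"id": 3, "skills": {"hvac"}, "site": None, "sla_end": "day 3", "status": "pending dispatch"},
]
feed, rejects = validate(tasks)
print(len(feed), rejects)  # 1 clean task; tasks 2 and 3 are pushed back with reasons
```

The key point is that the exception “bucket” becomes a status in the system, not an email or a spreadsheet: if it isn’t in the data, the optimiser cannot respect it.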

What’s your end-to-end?

Contact us for a free consultation.

Start leveraging the power of simplicity.