Whilst I touched briefly on the topic of benchmarking in a recent post (https://www.linkedin.com/pulse/mysterious-world-benchmarking-services-tony-sykes/), I realize it may be helpful to dive a little deeper into the subject. It’s something most people rarely encounter, yet management frequently demands it when outsourcing deals turn sour or budgets need to be met.
Firstly, a few simple points that most people will probably either know, or could deduce with logical thinking, although they are still worth mentioning:
- If you’ve benchmarked or contracted a service recently (i.e. within the past two years), it’s very unlikely that the price has changed dramatically in the marketplace. There are only a few exceptions to this in the IT environment, such as storage-related services, where hardware prices continue to decline rapidly. If you need to reduce costs on such services, consider revising the parameters of the service rather than benchmarking.
- It’s really not worthwhile benchmarking your entire portfolio! A benchmark exercise costs time and money for both parties. Depending upon what your services portfolio looks like, it is probably only cost-effective to benchmark those services that constitute 80% of the spend. The potential price reduction on the remaining 20% is unlikely to deliver gains that exceed the cost of the benchmark.
- Take a few moments to consider what you should benchmark. Whilst some outsourcing contracts place the hardware, software and services components with a supplier, others only transfer the operations, retaining the hardware and software with the customer (often done to ensure that the service provider/supplier can be swapped without significant impact to the underlying service running on the assets). Benchmarking a service when 70% of the cost lies in hardware and software that may be retained is not going to deliver the savings that management is looking for.
- Is a benchmark really necessary? Such exercises can be painful and very draining on resources. There are suppliers in the market who will happily sit down and discuss your budget targets in preference to a benchmark.
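The 80/20 scoping point above can be illustrated with a rough cost model. All figures below are hypothetical assumptions, not market data; the point is simply that the long tail of small services rarely repays the advisor fee needed to benchmark it:

```python
# Hypothetical portfolio: annual spend per service (all figures illustrative).
portfolio = {
    "server_operations": 4_000_000,
    "network": 2_500_000,
    "service_desk": 1_500_000,
    "storage": 1_000_000,
    "backup": 500_000,
    "linux_admin": 300_000,
    "monitoring": 200_000,
}

def benchmark_scope(spend_by_service, coverage=0.80):
    """Select the largest services until ~`coverage` of total spend is covered."""
    total = sum(spend_by_service.values())
    scope, covered = [], 0
    for name, spend in sorted(spend_by_service.items(), key=lambda kv: -kv[1]):
        if covered >= coverage * total:
            break
        scope.append(name)
        covered += spend
    return scope, covered / total

scope, share = benchmark_scope(portfolio)

# Sanity check on the long tail: is a plausible saving worth the benchmark fee?
tail_spend = sum(portfolio.values()) - sum(portfolio[s] for s in scope)
expected_saving = 0.05 * tail_spend        # assume a 5% price correction at best
benchmark_fee = 25_000 * (len(portfolio) - len(scope))  # assumed fee per service
```

With these illustrative numbers, three services cover 80% of a 10M spend, and a 5% correction on the 2M tail (100k) barely covers the assumed advisor fees for the four extra services, before anyone’s internal effort is counted.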
If we focus first on the operations aspect (primarily labor costs) of services, most people consider the location of the delivery team to be a main cost driver which can be influenced. Businesses look internally to develop near/off-shore capabilities to reduce costs, so it’s logical that the same is expected when using external, outsourced services. A benchmark advisor will indicate that eastern European labor costs roughly 30% less than western European labor, with many Asian countries as much as 50% lower. Although supported by data, this is not the whole picture! You also need to consider:
- Skills/capabilities: Not all near-shore/off-shore locations have the same educational system or stimuli to develop people for specific tasks. Whilst labor may be cheaper, the quality of service may be lower, leading to an increased volume of resources and thus effectively higher costs that offset the potential labor-cost advantages. A blind comparison of locations should therefore be avoided. Be prepared to discuss the right mix of delivery locations to achieve the required quality of service, and ensure that your benchmarking party takes this into consideration.
- Transition and service dip: I often hear that service quality/performance dipped after outsourcing, which in turn has pushed the customer to demand a benchmark or the supplier to initiate a service improvement plan. Moving any service means accepting a learning curve! Even with best practices such as shadowing, knowledge-base automation and a structured hand-over, people need time to adjust. And unless the costs of the transition have been agreed separately, in addition to the service fees, there is a hidden cost that the supplier is planning to recover over time, potentially reducing the immediate impact of off-shoring on the pricing. Try to gain transparency of transition costs during the benchmark to make a fair comparison, and keep in mind that if the benchmark demonstrates that lower prices could be achieved through further right-shoring, your supplier will need to make a change that could itself cause a service dip.
- The total team: The operational aspect of a service does not stand alone. If you’re benchmarking a relatively simple service such as “Linux administration” or “Backup operations”, it’s tempting to just calculate the number of administrators required to do so. However, to make the service functional, you also need governance, security, ITIL processes, financial management (consumption reporting, billing, etc.) and audit support. It’s doubtful that these aspects are individually priced; many are financially engineered into the service fees. Considering the high level of customer interfacing required for aspects such as governance, this is also something that is not easily off-shored. Basically, make sure that you are prepared to identify, discuss and allocate such supporting functions in terms of right-shoring.
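The skills/quality offset described above is easy to quantify with a back-of-the-envelope model. The rates, headcounts and quality factor below are illustrative assumptions only: a ~30% lower hourly rate is wiped out if the lower-quality delivery needs ~50% more heads to hold service quality constant.

```python
# Illustrative only: compare annual team cost across locations when a lower
# labor rate comes with a larger headcount to keep service quality constant.
def annual_team_cost(headcount, rate_per_hour, hours_per_year=1_600):
    return headcount * rate_per_hour * hours_per_year

# Baseline western European team (assumed rate of 80/hour, 10 FTE).
western_eu = annual_team_cost(headcount=10, rate_per_hour=80)

# Same headcount at a ~30% lower rate: the headline benchmark saving.
eastern_eu = annual_team_cost(headcount=10, rate_per_hour=80 * 0.70)

# But assume 50% more heads are needed to deliver the same quality of service:
eastern_eu_adjusted = annual_team_cost(headcount=15, rate_per_hour=80 * 0.70)
```

Under these assumptions the headline saving is real (896k vs 1,280k per year), yet the quality-adjusted team actually costs more than the baseline (1,344k), which is exactly why a blind comparison of locations should be avoided.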
Service hours and support times are the next major theme to receive attention in any benchmark. Hopefully, your contract’s service hours are aligned to your business needs, but how this is achieved often differs greatly from supplier to supplier. Whilst one supplier has a shift-based team delivering 24*7 services across all levels (1st to 3rd line), another uses a follow-the-sun principle to achieve the same. A third supplier may simply use a full team during daytime support hours with on-call staff for the 2nd and 3rd line outside of office hours. It’s important to ensure that the benchmarking party is truly aware of the method being used to deliver support and that they have captured this for the peers against which the benchmark is making the comparison. From a customer perspective, this is also where it can be advantageous to get market information on variables that could be changed to reduce costs in subsequent negotiations. For example, do you need extended service hours if the resolution time is short? How much down-time can your business afford? What is critical, and does everything need 24*7 support? Maybe you invested in architecture which gives resilience at the application layer, thereby allowing service hours to be reduced. Many suppliers will argue that the customer requested a service and that how the support is delivered is not relevant. This may be contractually correct, but it’s worth checking, as these delivery methods have significantly different cost bases.
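The difference in cost base between those delivery methods can be sketched with two toy staffing models. All salary figures, team sizes and allowances below are hypothetical assumptions, chosen only to show why two suppliers can deliver the same contracted 24*7 cover at very different cost:

```python
# Rough comparison of two ways to deliver the same contracted 24*7 cover
# (all rates, team sizes and allowances are illustrative assumptions).

def shift_based_cost(fte_per_shift=3, shift_teams=4, loaded_cost_per_fte=90_000):
    """Full shift rota: ~4 rotating teams are needed to staff 24*7 on-site."""
    return fte_per_shift * shift_teams * loaded_cost_per_fte

def on_call_cost(day_team=6, loaded_cost_per_fte=90_000,
                 on_call_staff=2, on_call_allowance=15_000):
    """Day team plus paid on-call cover for 2nd/3rd line outside office hours."""
    return day_team * loaded_cost_per_fte + on_call_staff * on_call_allowance
```

With these assumptions the shift-based model costs roughly twice as much per year as the day-team-plus-on-call model, even though both can satisfy the same contracted service hours, which is why the benchmark peers must be matched on delivery method and not just on the service description.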
My personal favorite topic for discussion during benchmarks is support levels. Within infrastructure services, it has been common to deliver services at varying levels, such as bronze, silver, gold or platinum. This distinction originates from the business’ needs, assuming that some services do not need the same level of responsiveness or availability that others do. A classic example is the company HR system vs. the public-facing website, where it can be argued that a lower level of availability and slower response to incidents is permissible for the HR system. Such a difference may be reflected by using a ‘bronze’ service level vs. a ‘platinum’ service level where the availability of the environment is ‘guaranteed’ by the supplier. Ultimately, the same delivery team is often supporting both environments and the ‘service level’ simply gives the team guidance in terms of prioritization. Commercially, the prices for the service levels are often financially engineered, based on the expected volumes of each service in each category. You may even find that the incident response time for each service level remains the same and that it is only the ‘availability’ KPI which differs per level, ultimately meaning that you are simply playing a resource vs. risk game in terms of meeting the service level. More interestingly, in today’s world where cloud is widely adopted and applications are multi-tiered and load balanced, a customer should really only need the lowest possible response and resolution times, because the investment in hardware and software resilience (i.e. good architecture) ensures overall up-time of the application despite individual components becoming unavailable. Such service levels really only make sense if a supplier has ‘end-to-end’ responsibility for the application chain and influences the architecture.
In other words, don’t be surprised if your benchmark reveals the same prices irrespective of such service levels, and be prepared to discuss potentially changing the KPIs to focus on application availability rather than infrastructure.
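The architectural argument above follows directly from basic availability math: a load-balanced tier of n instances, each with availability a, is only down when all instances are down, giving tier availability 1 − (1 − a)^n. The 99%/99.95% figures below are illustrative, not taken from any real SLA:

```python
def tier_availability(component_availability, redundancy):
    """Availability of a load-balanced tier: up unless ALL instances are down."""
    return 1 - (1 - component_availability) ** redundancy

# A single 'bronze' server at an assumed 99% availability...
single = tier_availability(0.99, 1)

# ...load-balanced across three instances already exceeds an assumed
# 'platinum' infrastructure SLA of 99.95% at the application layer.
triple = tier_availability(0.99, 3)   # 1 - 0.01**3 = 0.999999
```

This is why, with good architecture, paying a premium for per-component infrastructure availability guarantees can deliver less than spending the same money on redundancy.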
Also, be prepared to discuss areas of conflict that may have been raised in governance meetings. One area I recently found interesting is automation: conflicting views regarding automation and robotics emerged during a benchmark. A supplier invests to reduce labor costs and improve KPI performance, giving faster response and resolution but also fewer incidents (automation can predict and prevent incidents), leading the customer to believe that prices should be lower due to the reduced volume of incidents being handled. Unless a metric has been introduced to track the prevented incidents, it is difficult to show how the investment has delivered the benefit.
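One minimal sketch of such a missing metric, assuming the supplier’s tooling can tag which events were auto-remediated before they became incidents (the event shape and field names here are hypothetical): count prevented incidents alongside worked ones, so the benchmark sees total demand and not just billed volume.

```python
# Sketch only: credit automated remediations that would otherwise have become
# incidents. Event records and field names are hypothetical illustrations.
from collections import Counter

events = [
    {"type": "incident", "resolved_by": "engineer"},
    {"type": "anomaly",  "resolved_by": "automation"},  # auto-remediated
    {"type": "anomaly",  "resolved_by": "automation"},  # auto-remediated
    {"type": "incident", "resolved_by": "engineer"},
]

counts = Counter(
    "prevented" if e["resolved_by"] == "automation" else "worked"
    for e in events
)
# Reporting both figures shows that billed incident volume fell because of the
# supplier's automation investment, not because demand disappeared.
```

Agreeing such a metric in governance before the benchmark makes the automation investment visible instead of letting it read as simple over-pricing.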
Remember that a benchmark exercise is most effective if you can truly compare apples with apples! Although many benchmark advisors will have databases with similar ‘apples’, they do not have the granularity to be able to compare everything. At this point, the customer and supplier need to step up and be willing and able to discuss details. Although a benchmark exercise is not the right forum to discuss and negotiate service delivery, it can provide invaluable insights to take forward into the right forum.
And please consider that a supplier will most likely undergo benchmarks regularly across its pool of customers, whilst a customer performs such an exercise once or twice during the lifespan of a contract. So, get help if needed!