Contents
1 Introduction
1.1 Background knowledge
1.2 How to use this book
2 Managing, metrics and perspective
2.1 Managing
2.2 Perspective
2.3 Full metric description
2.4 Goals, Critical Success Factors (CSFs) and Key Performance Indicators (KPIs)
3 Governance
3.1 Perspective
3.2 Metrics
3.3 Processes
4 Service Strategy
4.1 Perspective
4.2 Critical Success Factors
4.3 Metrics
4.4 Process metrics
5 Service Design
5.1 Perspective
5.2 Business Analysis or Requirements Engineering
5.3 Critical Success Factors - designing services
5.4 Metrics
5.5 Process metrics
6 Classifications of metrics
6.1 ITIL® metric structure
6.2 Six Sigma process metrics
6.3 COBIT capability, performance and control
6.4 Capability Maturity Model (CMMI)
6.5 Software process improvement and capability determination - SPICE ISO/IEC 15504
6.6 Goal, Question, Metrics (GQM)
6.7 Tudor's IT Process Assessment (TIPA) framework
7 Outsourcing and emerging technologies
7.1 Outsourcing
7.2 Outsourcing case study
7.3 Virtualization, clouds, data centers, and green computing
7.4 Service Orientated Architecture (SOA)
8 Cultural and technical considerations
8.1 Organizational culture
8.2 Replacing metrics - messy reality vs. beautiful statistics
9 Tools and tool selection
9.1 Checklists
9.2 Measuring communications and meetings - Document Management System
9.3 Meeting management
9.4 Measuring project milestones and process activities
9.5 Surveys
10 Service Transition
10.1 Perspective
10.2 Critical Success Factors
10.3 Metrics
10.4 Process metrics
11 Service Transition and the Management of Change
11.1 Staff development, satisfaction and morale
11.2 Employee development and training
11.3 Role definition SFIA
11.4 Professional recognition for IT Service Management (priSM®)
12 Service Operation
12.1 Perspective
12.2 Critical Success Factors
12.3 Metrics
12.4 Function metrics
12.5 Process metrics
13 Continual Service Improvement (CSI)
13.1 Critical Success Factors
13.2 Metrics
13.3 Process metrics
2 Managing, metrics and perspective
2.1 Managing
As with a lot of folklore, there are wise sayings on both sides of the question about how to use metrics as part of management:
'You can't manage what you can't measure' [attributed to Tom DeMarco, developer of Structured Analysis]
'A fool with a tool is still a fool' [attributed to Grady Booch, developer of the Unified Modeling Language]
Both of these are true. Managing requires good decision-making, and good decision-making requires good knowledge of what is to be decided. ITIL®'s concept of Knowledge Management is designed to avoid the pitfall of making decisions, or deploying tools, without that knowledge.
2.2 Perspective
Relying simply on numbers given by metrics, with no context or perspective, can be worse than having no information at all, apart from 'gut feel'. Metrics must be properly designed, properly understood and properly collected, otherwise they can be very dangerous. Metrics must always be interpreted in terms of the context in which they are measured in order to give a perspective on what they are likely to mean.
To give an example: a Service Manager might find that the proportion of emergency changes to normal changes has doubled. With just that information, most people would agree that something has gone wrong - why are there suddenly so many more emergency changes? This could be correct, but there are several alternative explanations for the increase:
- If the change process is new, the new figure may more accurately reflect the number of emergency changes that the organization actually requires. Previously these changes might have been handled as ordinary changes, without proper recognition of the risk.
- In a mature organization, a major economic crisis might have intensified the risk of a number of previously low-risk activities. The proper response from the Service Manager would be to handle changes related to these activities as emergency changes.
- The change management process might have been improved substantially in the current quarter: so many ordinary changes have been converted into standard changes that the number of normal changes has halved. The number of emergency changes has stayed exactly the same, but the ratio is higher because of the tremendous improvement in the change process (the worked figures after this list make the arithmetic concrete).
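To illustrate the last explanation, here is a minimal sketch with invented figures (they are assumptions purely for illustration, not data from the book) showing how the emergency-to-normal ratio can double even though the count of emergency changes is unchanged:

# Invented example figures: the emergency change count is constant,
# but many normal changes have been reclassified as standard changes.
emergency_q1, normal_q1 = 10, 200   # previous quarter
emergency_q2, normal_q2 = 10, 100   # current quarter: normal changes halved

ratio_q1 = emergency_q1 / normal_q1  # 0.05
ratio_q2 = emergency_q2 / normal_q2  # 0.10 - the ratio has doubled

print(f"Emergency/normal ratio: {ratio_q1:.2f} -> {ratio_q2:.2f}")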
Even a very simple and apparently uncontroversial metric can mean very different things. As with most management, there is no 'silver bullet'. Metrics must be properly understood, within context, in order to be useful tools. To ensure that they are understood, metrics must be designed. For best results, service metrics should be designed when the Service itself is designed, as part of the Service Design Package, which is why the 'Design' section in this book is the largest.
The metric template used in this book includes the field 'Context' specifically to allow each metric to be properly documented so that, when it is designed, the proper interpretation and possible problems with it can be taken into account. The design of a metric is not simply the measure and how it is taken; it must also make it clear how it will be reported and how management will be able to keep a perspective on what it means for the business - particularly its contribution to measuring value.
This is also a reason why the number of metrics deployed must be kept as small as possible (but not, as Einstein put it, 'smaller'!). Metrics must also be designed to complement each other. In the example above, the ratio between emergency and normal changes is an important and useful one to measure, but it could be balanced by measuring the number of standard changes, the business criticality of changes and, perhaps, the cost of changes.
These would all help to embed the metric into a context that allows proper interpretation.
2.2.1 Metrics for improvement and performance
Metrics are needed not only to identify areas needing improvement, but also to guide the improvement activities. For this reason, metrics in this book are often not single numbers, but allow discrimination between, for example, Mean Time To Repair (MTTR) for Services, Components, Individuals and third parties - while also distinguishing between low priority incidents and (high priority) critical incidents. The headline rate shows overall whether things are improving, but these component measures make it possible to produce specific, directed improvement targets based on where or what is causing the issue.
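As a sketch of this kind of breakdown, the following example computes MTTR from a list of incident records, split by affected component and by priority. The field names, priority labels and repair times are assumptions for illustration only, not taken from any particular tool:

from collections import defaultdict

# Hypothetical incident records; field names and values are illustrative only.
incidents = [
    {"component": "email service", "priority": "critical", "repair_hours": 4.0},
    {"component": "email service", "priority": "low",      "repair_hours": 1.5},
    {"component": "web portal",    "priority": "critical", "repair_hours": 6.0},
    {"component": "web portal",    "priority": "low",      "repair_hours": 2.0},
    {"component": "web portal",    "priority": "low",      "repair_hours": 1.0},
]

def mttr_by(records, key):
    """Mean Time To Repair, grouped by the given field."""
    totals = defaultdict(lambda: [0.0, 0])
    for inc in records:
        totals[inc[key]][0] += inc["repair_hours"]
        totals[inc[key]][1] += 1
    return {group: hours / count for group, (hours, count) in totals.items()}

print(mttr_by(incidents, "component"))  # per-component MTTR
print(mttr_by(incidents, "priority"))   # low vs. critical incidents

The headline MTTR is simply the same calculation over all records; the grouped figures show where a directed improvement target would do the most good.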
Metrics are often used to measure, manage and reward individual performance. This has to be handled with great care. Individual contributions that are significant to the organization may be difficult to measure. Some organizations use time sheets to try to understand where staff are spending their time, and thus understand how their work is contributing to the value delivered. These tend to be highly flawed sources of information. Very few individuals see much value in filling in timesheets accurately, and those that do see them as useful find them inadequate records for capturing busy multi-tasking days.
There is a less subjective method - that of capturing the contribution of individuals and teams as documents in the Service Knowledge Management System (SKMS). For this to work, a good document management system with a sound audit trail is required, along with software that will identify what type of documents have been read, used (as in used as a template or used as a Change Model), updated (as a Checklist will be updated after a project or change review) or created (as in a new Service Design Package (SDP) or entry in the Service Catalogue). Each type of document update can be given a weight, reflecting the value to the organization (a new SDP that moves to the state 'chartered' is a major contribution, while an existing Request for Change (RFC) that is updated to add more information on the risk of the change would be a minor contribution).
Properly managed, such a scheme can give a very accurate and detailed picture of where in the Service Lifecycle work is being done, so missing areas (for example, maybe there are not enough Change Models being created) can be highlighted and the increased weighting communicated to the organization. If these measures are properly audited they can be used as incentives for inter-team competition as well as for finding the individuals worth singling out for recognition and reward. Being an objective system this form of reward, based on the actual contribution to value delivered, can be highly valued, even by very technical and senior staff, as well as being an incentive (and measure of progress) for new or junior staff.
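A minimal sketch of how such weighted scoring might be implemented is shown below. The weights, action names and audit-trail records are assumptions chosen for the sketch, not values recommended by the book:

# Illustrative weights per (document type, action); higher = more value delivered.
WEIGHTS = {
    ("SDP", "created"): 10,          # e.g. a new Service Design Package that is chartered
    ("RFC", "updated"): 1,           # e.g. adding risk information to an existing RFC
    ("Change Model", "used"): 3,
    ("Checklist", "updated"): 2,
}

# Hypothetical audit-trail events from the document management system.
audit_trail = [
    {"person": "A. Author",  "doc_type": "SDP",          "action": "created"},
    {"person": "A. Author",  "doc_type": "RFC",          "action": "updated"},
    {"person": "B. Builder", "doc_type": "Change Model", "action": "used"},
    {"person": "B. Builder", "doc_type": "Checklist",    "action": "updated"},
]

def contribution_scores(events, weights):
    """Sum the weighted value of each person's documented contributions."""
    scores = {}
    for e in events:
        w = weights.get((e["doc_type"], e["action"]), 0)
        scores[e["person"]] = scores.get(e["person"], 0) + w
    return scores

print(contribution_scores(audit_trail, WEIGHTS))
# {'A. Author': 11, 'B. Builder': 5}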
In certain circumstances, particularly external contracts, penalty clauses may be required. Ideally these should be set so they are not triggered by minor deviations that can swiftly be remedied. Also, ideally, positive incentives should cover most of the relationship, with penalty clauses kept as a last resort. If penalty clauses are invoked frequently, the business relationship is likely eventually to break down; before this happens, it would be wise either to change supplier or to fundamentally re-evaluate and renegotiate the contract.
2.2.2 Metrics from the top downwards
Metrics can be understood to work from the top downwards. Business measures (such as profit, turnover, market share, share price, price/earnings ratio) are the ultimate measures of success, and all other metrics should ultimately contribute to them. Service Management identifies services; some deliver business results directly, some contribute indirectly. These can be measured by Service Metrics. Business services and internal services often depend on processes for their proper operation, and these can be measured by Process Metrics. Services and Processes in turn rely on underlying technologies, and these can be measured by Technology Metrics. Ideally, the sequence is Business Measure <- Service Measure <- Process Measure <- Technology Measure. Some metrics have value outside this direct relationship, but, where possible, metrics should be evaluated for how well they contribute to this value chain.
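As a sketch of how this chain might be recorded, the following example (the metric names are hypothetical) links each metric to the higher-level metric it supports, so that any technology measure can be traced up to the business measure it ultimately serves:

# Hypothetical metric register: each metric names the higher-level metric it
# supports, forming the Technology -> Process -> Service -> Business chain.
SUPPORTS = {
    "server availability (%)": "change success rate",     # technology -> process
    "change success rate": "online shop availability",    # process -> service
    "online shop availability": "online sales revenue",   # service -> business
    "online sales revenue": None,                          # business measure: top of chain
}

def value_chain(metric, supports):
    """Trace a metric up to the business measure it contributes to."""
    chain = [metric]
    while supports.get(metric):
        metric = supports[metric]
        chain.append(metric)
    return chain

print(" -> ".join(value_chain("server availability (%)", SUPPORTS)))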
For the above to work, metrics, of whatever sort, must be designed as an integrated part of the design of any Service, Process, or Technology.
2.3 Full metric description
Useful metrics are more than just measures. A well-defined metric should also have these attributes (Description, Dependencies, Data):
Be under Change Control in the Metrics Register
Have a name/ID
Have a unique reference
Have an owner
Have a version number
Have a category, e.g.:
- Business Metric
- Service Metric
- Process Metric
- Technology Metric
Show Status (with transition dates and times), e.g.:
- Created
- Design phase
- Approved/Rejected
- Chartered
- Testing
- Operational (Not Active, No Data, Green, Amber,...