LITERATURE WEEK 1
Van Veen-Berkx
Introduction
Healthcare has been intensively reformed, urging hospitals to deliver transparent, high-quality care within strict financial
budgets. This focus on performance sparked providers' interest in measuring their own performance and comparing it to that of others.
Porter & Teisberg indicated competition among healthcare providers should be focused on value (defined as “health care
results per unit of costs”) and supported by widely available outcome data. Obtaining such data, however, requires
appropriate management instruments that can disseminate business information and compare the performance of a single
provider to others. Benchmarking, defined as “a process of continuous measuring and comparing an organization’s business
against others”, is described as one of the approaches to obtain useful results.
To assess the application of benchmarking in hospitals, de Korne et al. (2010) have developed a “4P” conceptual
framework:
Key conditions from the literature are: purposes (learning from others, identifying performance gaps, implementing best
practices); performance indicators (SMART indicators, comparable indicator information, reliable data gathering and
sharing); participating organisation similarities (in structure, process, outcomes; no competition between participants,
voluntary and involved participation); and performance management system (cyclical, internal).
The literature identifies two main types of benchmarking: internal and external. While internal benchmarking focuses on
performance measurement and comparing within one organisation over time, external benchmarking can be categorised in
competitive, functional, generic and collaborative benchmarking. Competitive, functional and generic benchmarking are
commonly conducted independently, while the collaborative approach to traditional benchmarking is performed by groups
of organisations that work jointly to achieve the same goals. Collaborative benchmarking entails more than merely
comparing performance: organisations share their ideas, approaches, process designs and interventions.
Several studies have assessed the efficacy of performance reports in stimulating hospital quality improvement.
Hibbard et al. (2005) found in a large study in Wisconsin that disclosure of performance, in private and public reports,
resulted in improvement in the clinical area reported upon. Devers et al. (2004) indicated different mechanisms that drive
hospital quality improvement: regulation, professionalism and market forces; benchmarking and reporting performances is
thought to be a key strategy for influencing market forces and, to a lesser extent, professionalism. There is also evidence
of positive effects of performance reporting in more government-driven systems.
As early as 1994, however, Mosel and Gift as well as Gift et al. (1994) pointed to the need for healthcare providers to consider
an alternative to the individual method, which is found in the collaborative approach to benchmarking. Wolfram Cox et al.
(1997) clearly contrast the two approaches:
1. The collaborative one is characterised by “learning with and from others as aim”, “partnership as relationship
between participants”, “a joint action” and “the visual picture is horizontal and visiting (sharing knowledge from the
kitchen)”
2. The competitive one is characterised by “superiority or learning to gain position over the other organisation”, “a
relationship of rivalry”, “a unilateral action to gain position on the ladder of success” and “the visual picture is vertical
ranking”.
In the Netherlands, healthcare reform and the introduction of more competition have driven hospitals to compare
themselves to others in the challenge to deliver safe, high quality, transparent, accountable and efficient care.
Conclusion
The findings of this investigation show that collaborative benchmarking appears to offer benefits beyond performance
improvement and the identification of performance gaps. It is interesting to note that the OR benchmarking initiative,
started in 2004, still endures after ten years. All respondents in this study pointed out a key benefit, "networking",
in addition to the purposes recognised in the 4P model. The networking events organised by the collaborative were found
to make it easier for participants to contact and also visit one another.
Apparently, such informal contacts were helpful in spreading knowledge, sharing policy documents and initiating
improvement. One reason for this is that they could be used to discuss the tacit components of best practices, which are
hard to share through more formal communication media. Respondents were satisfied with the content of these meetings:
the informal exchange of knowledge and experiences, including sharing best practices and discussing worries and current
challenges in OR management, enables participants to understand and learn from each other.
These findings corroborate the idea of de Korne et al. (2010, 2012) that participating in benchmarking offers other
advantages, such as generating discussions about how to deliver services and increasing the interaction between
participants.
Discussion
The OR benchmarking collaborative saves the eight participating UMCs from reinventing the wheel regarding several issues
high on the agenda of OR departments. de Korne et al. (2010, 2012) have indicated that "taking part in an international
benchmarking initiative is in itself seen as a powerful signal to stakeholders that the organisation is actively working on
quality improvement”.
During the initiation phase of the benchmark collaborative, the steering committee invested considerable time (two years)
and effort in developing a collaboration agreement. As described in the findings, this agreement
created the foundation for trust and confidentiality between the eight participating partners, because confidentiality and
ownership of benchmarking data are two delicate and important parts of the agreement. These first years were also used
for the development and harmonisation of definitions of performance indicators. Common definitions are an essential base
for external benchmarking.
Benchmarking has often been approached as a competitive activity resulting in rankings, with a focus on creating
competition between participants as a driver for improvement. This study, however, clearly shows the advantages of a more
collaborative approach. An important difference between public reporting and the reporting arranged in this Dutch
benchmarking collaborative is that the performance figures and rankings are not available to anyone beyond the eight
participating UMCs.
From the very start, the initiators of the Dutch OR benchmarking collaborative described in this study have consistently
avoided "naming and shaming" through publication and vertical ranking of the eight UMCs on the measured performance
indicators. Much attention has been given to honest assessment and to avoiding comparing apples and oranges. The
physical and organisational characteristics and structure of the participating OR departments can be very
different from one another. Contingency theory claims there is not “one best way for organising” because this is subject to
the internal and external conditions of every organisation.
The Dutch OR benchmarking collaborative is a “self-led” and voluntary collaboration with its own budget (paid for by the
eight hospitals themselves). OR benchmark data are used only by the participants and not by policy makers, the
government or regulatory offices. Another foundation of the collaborative benchmark described in this study is the pursuit
of learning from the organisational differences in structure, process designs, methods and performance. These differences can
be a source of learning as they allow practitioners to compare relations between organisational characteristics and
performance, especially in informal settings and networking. These differences also offer every participating OR
department the opportunity to engage their own quality improvement pathway.
Benchmarking is defined as a “continuous process” (APQC, 2008) and encourages the use of a continuous quality
improvement model (the PDSA cycle). Although this OR benchmark initiative, as many benchmark initiatives (Askim et
al., 2008), started with a stated aim to improve, actual (measurable) quality or performance improvements are not
necessary for this initiative to endure. These findings further support the idea of de Korne et al. (2010, 2012) that
benchmarking relies on iterative and social processes in combination with a structured and rational process of
performance comparison. The relatively limited focus on OR utilisation in this benchmark seems to be a starting point for
exchanging a variety of information and experiences considering the structure, process and performance of OR
departments. More attention needs to be given to the relation between benchmarking as instrument and the actual
performance improvements realised through benchmarking in the local UMC’s. A collaborative approach in
benchmarking can be effective because participants use its knowledgesharing infrastructure which enables operational,
tactical and strategic learning. Organisational learning is to the advantage of overall OR management. Benchmarking
,seems a useful instrument in enabling hospitals to learn from each other, to initiate performance improvements and
catalyse knowledge-sharing
De Bruin
How many beds must be allocated to a specific clinical ward to meet production targets? When budgets get tight, what are
the effects of downsizing a nursing unit? These questions are often discussed by medical professionals, hospital consultants,
and managers. In these discussions the occupancy rate is of great importance and often used as an input parameter. Most
hospitals use the same target occupancy rate for all wards, often 85%. Sometimes an exception is made for critical care and
intensive care units. In this paper we demonstrate that this equity assumption is unrealistic and that it might result in
an excessive number of refused admissions, particularly for smaller units. Queuing theory is used to quantify this impact.
We developed a decision support system, based on the Erlang loss model, which can be used to evaluate the current size of
nursing units.
The main contribution of this paper is threefold. First, we provide a comprehensive data analysis for 24 clinical wards in a
university medical center. We analyze both the number of admissions and the length of stay distribution to obtain insight into
the key characteristics of in-patient flow. Moreover, we present occupancy rates for all wards to indicate the bed
utilization.
Our second goal is to demonstrate that in-patient flow can be described by a standard queuing model (Erlang loss model).
This queuing model may therefore be an important tool in supporting strategic and tactical managerial decisions
concerning the size of hospital wards. For instance, the model can be used to determine the number of required
operational beds and, hence, the corresponding annual budgets. The third goal is to illustrate the impact of in-patient flow
characteristics on ward sizes. The model provides additional insight into the specific situation at hospital wards, where
variation in demand is such an important characteristic. It reveals the non-linear relation between the size of a unit, the
probability of a refused admission and the occupancy rate. This matter is often not recognized by hospital professionals.
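This non-linear relation can be sketched with the Erlang loss (Erlang B) formula the paper builds on. The ward sizes and the 85% target occupancy below are illustrative, not taken from the paper's data:

```python
def erlang_b(beds: int, offered_load: float) -> float:
    """Blocking probability in the Erlang loss model, computed with the
    standard stable recursion B(s) = a*B(s-1) / (s + a*B(s-1))."""
    b = 1.0
    for s in range(1, beds + 1):
        b = offered_load * b / (s + offered_load * b)
    return b

# Same 85% target occupancy, different ward sizes (illustrative numbers):
# the smaller the unit, the larger the fraction of refused admissions.
for beds in (10, 30, 60):
    a = 0.85 * beds                        # offered load implied by the target
    p_refused = erlang_b(beds, a)
    realized = a * (1 - p_refused) / beds  # occupancy actually achieved
    print(f"{beds:2d} beds: P(refused) = {p_refused:.3f}, occupancy = {realized:.2f}")
```

The recursion avoids the factorials in the closed-form Erlang B expression and stays numerically stable for large wards.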
Structural model of patient flow through a clinical ward
Arrivals From the left in Fig. 1 patients arrive to the system and are admitted if there is a free bed available. An arriving
patient who finds all beds occupied is refused and leaves the system. We distinguish scheduled and unscheduled arrivals.
Refused admissions If all beds are occupied the patient is ‘blocked’ and is counted as a refused admission. In practice, a
refused admission can result in a diversion to another hospital or an admission to a non-preferable clinical ward. Many
wards have to deal with refused admissions due to unavailability of beds. The number of refused admissions can be
interpreted as a service level indicator and is important for the quality of care.
Admissions Hospitals keep record of the number of admissions. The total number of admissions is equal to the sum of a
number of production parameters such as admissions, day treatments, and transfers (patients coming from another
medical discipline). The patients of one medical discipline are often spread over several wards.
Length of stay The time spent in the ward is called the length of stay, often abbreviated as LOS, after which the patient is
discharged or transferred to another ward. The LOS is easily derived from the hospital information system as both time of
admission and time of discharge are logged on individual patient level. Length of stay is affected by congestion and delay in
the care chain.
Beds The capacity of a ward is measured in terms of operational beds. The number of operational beds, a management
decision, is used to determine the available personnel budget. This is done via a staffing ratio per operational bed (e.g. 1 full
time equivalent (fte) per normal care bed). The number of operational beds is generally fixed and evaluated on a yearly
basis.
From day to day, the actual number of open or staffed beds fluctuates slightly (due to illness, holidays and patient demand).
Note that the number of physical bed positions is not necessarily equal to the number of operational beds; for most wards
the number of physical beds is larger.
Data analysis
Scheduled admissions
The average number of scheduled arrivals per day (parameter λ) equals 4.664. Clearly, the Poisson distribution does not
give a good fit. A possible explanation is that elective patients are generally admitted during weekdays (Mon–Fri) and hardly
ever in the weekend (Sat–Sun). This explains the peak bar at "0", corresponding with the weekend days. Therefore we split
the scheduled admissions into weekdays and weekends. See Fig. 3 for the results. Since the average number of arrivals per
day during the weekend is very small for most clinical wards, we focus here on the number of arrivals during weekdays.
For a Poisson random variable, the mean and variance are equal. As a first quantitative indication for the variability in the
number of arrivals, we determined the ratio of the variance in the number of arrivals and the average number of arrivals
per weekday (this ratio is 1 for a Poisson random variable).
We would like to stress that for practical purposes it is not required that the number of admissions exactly follows the laws of the
Poisson distribution. The key point for practical modeling purposes is that the variability in the number of admissions is
generally well captured by the Poisson distribution, making this a reasonable assumption for the remaining wards as well.
This is also exemplified by the stationary peakedness approximation for G/G/s/s queues, where the variability in the arrival
process is approximated using the variance to mean ratio.
Poisson distribution: used to show how many times an event is likely to occur over a specific period.
Poisson process model: a series of discrete events where the average time between events is known, but the exact timing of
events is random. The arrival of an event is independent of the event before it.
A Poisson Process meets the following criteria (in reality many phenomena modeled as Poisson processes don’t meet these
exactly):
1. Events are independent of each other. The occurrence of one event does not affect the probability another event will
occur.
2. The average rate (events per time period) is constant.
3. Two events cannot occur at the same time.
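The variance-to-mean check described above can be sketched by simulating Poisson admission counts and verifying that the ratio is close to 1. The rate (chosen near the 4.664 average reported in the text) and the sample size are illustrative:

```python
import math
import random

random.seed(1)

def poisson_sample(lam: float) -> int:
    """Draw one Poisson variate with Knuth's multiplication method."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

# Simulated daily admission counts under the Poisson assumption.
lam, days = 4.7, 10_000
counts = [poisson_sample(lam) for _ in range(days)]
mean = sum(counts) / days
var = sum((c - mean) ** 2 for c in counts) / days
print(f"mean = {mean:.2f}, variance = {var:.2f}, ratio = {var / mean:.2f}")
```

For real admission data the same ratio can be computed directly from the daily counts; a value far from 1 signals over- or under-dispersion relative to the Poisson assumption.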
Unscheduled admissions
For unscheduled admissions, a Poisson arrival process is plausible. Intuitively, unscheduled arrivals are independent,
because one emergency admission does not have any effect on the next one. Therefore, the assumptions for a Poisson
process seem realistic.
Length of stay
The coefficient of variation, defined as the ratio of the standard deviation to the mean, is greater than 1 for all wards,
except for the special care cardiac surgery. This shows that the LOS at clinical wards is highly variable.
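The coefficient of variation is straightforward to compute; the LOS sample below is fabricated for illustration:

```python
import statistics

# Fabricated LOS sample in days, with a few long-stay patients.
los = [1, 2, 2, 3, 4, 8, 21]

# Coefficient of variation: standard deviation divided by the mean.
cv = statistics.pstdev(los) / statistics.mean(los)
print(f"coefficient of variation = {cv:.2f}")
```

A value above 1 indicates that the LOS is more variable than an exponential distribution, consistent with the highly variable LOS the text describes.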
The Lorenz curve is a graphical representation of the cumulative distribution function of a probability distribution. This
concept was introduced to represent the concentration of wealth (Lorenz 1905). We use it to illustrate that those patients
with prolonged hospital stay take a disproportional part of the available resources. The percentage of patients is plotted on
the x-axis, the percentage of resource consumption (in terms of hospitalized days) on the y-axis. Figure 5 gives the Lorenz
curves with the lowest and highest Gini-coefficient for the year 2006, respectively the special care cardiac surgery and
normal care hematology.
The Gini-coefficient (denoted as G) is a measure of the dispersion of the Lorenz curve (Gini 1912). It is defined as a ratio
with values between 0 and 1; the numerator is the area between the Lorenz curve of the distribution and the uniform
distribution line; the denominator is the area under the uniform distribution line. Thus, a low Gini-coefficient indicates
that the variability in LOS is low, while a high Gini-coefficient indicates a more variable distribution. The following formula
was used to calculate the Gini-coefficient for each clinical ward:

G = 2 · Σ_{i=1..n} (i · y_i) / (n · Σ_{i=1..n} y_i) − (n + 1) / n

where
n = the number of admitted patients to a ward
y_i = the observed LOS values in ascending order, with y_i ≤ y_{i+1}, i = 1, ..., n − 1
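A minimal sketch of this Gini computation on sorted LOS values (the two samples are fabricated for illustration):

```python
def gini(los_values):
    """Gini-coefficient for sorted data:
    G = 2 * sum(i * y_i) / (n * sum(y_i)) - (n + 1) / n."""
    y = sorted(los_values)
    n = len(y)
    weighted = sum(i * v for i, v in enumerate(y, start=1))
    return 2 * weighted / (n * sum(y)) - (n + 1) / n

# A homogeneous ward versus a ward dominated by a few long-stay patients.
print(gini([3, 3, 4, 4, 5]))   # low G: LOS values are similar
print(gini([1, 1, 2, 2, 30]))  # high G: one patient consumes most bed-days
```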
Lorenz curves are very useful in identifying those patients with prolonged hospital stay and their disproportional resource
consumption. They also illustrate the difference in LOS characteristics between clinical wards. We note that in general a model
based on fixed lengths of stay is not capable of describing the complexity and dynamics of in-patient flow and gives
misleading results, also known as the flaw of averages.
Occupancy rate
Note that this definition of occupancy, which is very common from the perspective of operations research and
management science, is not used in most Dutch hospitals. The current national definition is based on 'hospitalized days',
which is an administrative financial parameter.
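In the operations-research sense used here, occupancy is the average number of occupied beds divided by the number of operational beds; by Little's law this equals arrival rate × mean LOS / beds when refused admissions are ignored. The figures below are illustrative, not from the paper:

```python
# Illustrative figures, not taken from the paper's data.
arrival_rate = 4.7   # admissions per day
mean_los = 4.0       # average length of stay in days
beds = 28            # operational beds

# Little's law: average number of occupied beds = arrival rate * mean LOS.
occupancy = arrival_rate * mean_los / beds
print(f"occupancy = {occupancy:.0%}")
```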