Supply Chain Science
Chapter 0: Strategic Foundation
When a basis for making decisions is actively planned, it is called strategy. The link between strategy and
operations lies in an organization’s value proposition. Firms that offer products or services to customers
compete on the basis of some combination of performance aspects. These can be cost, quality, speed,
service and variety. There are different trade-offs that can be made among these five elements (e.g. cost vs. quality and cost vs. speed). These trade-offs can be depicted in figures (e.g. cost vs. quality). The efficient frontier represents the most efficient (lowest cost) system for achieving a given performance level. Points below the frontier are infeasible given the current technology. Points above it are inefficient. There are two levels of decision making in terms of these kinds of trade-offs:
→ Strategic problem: determining where on the efficient frontier to locate.
→ Operational problem: designing a system that achieves performance on the efficient frontier.
A supply chain is a goal-oriented network of processes and stock points used to deliver goods and services
to customers.
Processes: the individual activities involved in producing and distributing goods and services.
Stock points: locations in the supply chain where inventories are held.
o Decoupling: stock on purpose, in advance of demand (anticipation, cycle/seasonal,
safety stock). A customer order decoupling point is the point after which the customer
order is known.
o Queue: waiting demand (congestion, assembly, transport, batch/set-up). In a queue the
order matters.
Network: the various paths by which goods and services can flow through a supply chain.
Goal oriented: the fundamental objective of a supply chain is to support the strategy of an
organization.
At the lowest level, a single process fed by a single stock point is defined as a station, which consists of
one or more servers. The next level in size and complexity is the line (or routing/flow), which is a sequence
of stations used to generate a product or service. The highest level is a network, which involves multiple
lines producing multiple products and/or services.
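A minimal structural sketch in Python may help fix the station/line/network hierarchy; the class and field names below are my own choices, not part of the text.

```python
# Illustrative only: the station / line / network hierarchy as simple dataclasses.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Station:
    """A single process fed by a single stock point; one or more servers."""
    name: str
    servers: int = 1

@dataclass
class Line:
    """A routing/flow: a sequence of stations generating a product or service."""
    product: str
    stations: List[Station] = field(default_factory=list)

@dataclass
class Network:
    """Multiple lines producing multiple products and/or services."""
    lines: List[Line] = field(default_factory=list)

# Example: a two-station line inside a one-line network.
line = Line("widget", [Station("machining", servers=2), Station("assembly")])
network = Network([line])
```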
Chapter 1: Capacity
Over the long term, average throughput of a process is always strictly less than capacity.
The fundamental activity of any operations system centres around the flow of entities through processes
made up of servers. The flow typically follows routings that define the sequences of processes visited by
the entities. There are a few key performance measures:
Throughput (TH): rate at which entities are processed by the system (restricted by capacity)
Work in process (WIP): number of entities in the system, which can be measured in physical units
or financial units.
Cycle time (CT) or Throughput time or Lead time: time it takes an entity to traverse the system,
including any rework, restart or other disruptions (i.e. time to go through the system).
Inventory turns (= TH / WIP): measures how efficiently an operation converts inventory into output.
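As a quick illustration of the inventory turns measure (the numbers below are invented):

```python
# Illustrative numbers only: inventory turns = TH / WIP.
th = 120.0   # throughput: entities completed per week (assumed)
wip = 40.0   # average work in process: entities in the system (assumed)

inventory_turns = th / wip
print(f"Inventory turns: {inventory_turns:.1f} per week")  # 3.0 turns per week
```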
The capacity of a system is the maximum average rate at which entities can flow through the system. For
an individual station: capacity = base capacity − detractors. The base capacity is the rate of the
process under ideal conditions and detractors are anything that slows the output of the process. For
routings and networks, capacity is constrained by the constituent stations. The process that constrains the
capacity of a system is the bottleneck. The bottleneck of a system is the process with the highest
utilization. The utilization level is the fraction of time a station is not idle:
utilisation = rate into station / capacity of station
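The following sketch shows how the bottleneck could be identified as the station with the highest utilization; station names, rates and detractor values are made up for illustration.

```python
# Hypothetical routing: compute capacity and utilization per station,
# then identify the bottleneck as the station with the highest utilization.
rate_in = 9.0  # entities per hour released into the routing (assumed)

stations = {
    # name: (base capacity, detractors), in entities per hour (illustrative)
    "cutting":  (12.0, 1.0),
    "assembly": (11.0, 1.5),
    "packing":  (14.0, 0.5),
}

utilization = {}
for name, (base, detractors) in stations.items():
    capacity = base - detractors              # capacity = base capacity - detractors
    utilization[name] = rate_in / capacity    # utilisation = rate into station / capacity

bottleneck = max(utilization, key=utilization.get)
for name, u in utilization.items():
    print(f"{name}: utilization = {u:.0%}")
print(f"Bottleneck: {bottleneck}")            # station with the highest utilization
```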
Principle (Capacity): The output of a system cannot equal or exceed its capacity. This is due to variability.
If the release rate is set below capacity, the system stabilizes: although variability causes short-term
fluctuations, WIP remains bounded and relatively low.
Principle (Utilisation): Cycle time increases with utilization and does so sharply as utilization approaches
100%.
The over-time vicious cycle: because maximizing throughput is desirable and estimating true theoretical
capacity is difficult, managers tend to set releases into their system close to or even above theoretical
capacity. This causes cycle times to increase, which in turn causes late orders and excessive WIP. When the
situation becomes bad enough, management authorizes overtime, which changes the capacity of the
system and causes cycle times to come back down. But as soon as the system recovers, overtime is
discontinued, releases are aimed right back at theoretical capacity and the whole cycle begins again.
Chapter 2: Variability
Increasing variability always degrades the performance of a production system.
Principle (Little’s Law): Over the long-term, average work-in-progress (WIP), throughput (TH) and cycle
time (CT) for any stable process are related as follows:
WIP=TH × CT
There are some restrictions to Little’s Law:
→ It refers to long-term averages.
→ The process must be stable; stationary system with all distributions known.
→ You need to start with an empty system (no WIP).
Some common applications of Little’s Law include basic calculations, measurement of cycle time, and cycle time
reduction.
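A small numerical sketch of such calculations (all figures are invented):

```python
# Little's Law: given any two of WIP, TH and CT, the third follows from WIP = TH x CT.
th = 50.0          # throughput: orders per day (assumed)
ct = 4.0           # cycle time: days an order spends in the system (assumed)

wip = th * ct      # long-run average WIP = 200 orders
print(f"WIP = {wip:.0f} orders")

# Measuring cycle time indirectly from WIP and TH (a common practical use):
measured_wip = 180.0
measured_th = 45.0
print(f"CT = {measured_wip / measured_th:.1f} days")  # CT = WIP / TH = 4.0 days
```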
In practice, operations and supply chain systems can exhibit dramatic differences in efficiency, due to
variability. Understanding variability involves two basic steps:
1. Specification of consistent and appropriate measures of variability. The quantitative measure used is
a random variable. The set of all possible realizations of a random variable is its population;
most of the time a sample is used instead of the whole population. One way to describe a
population or sample is by means of summary statistics, such as the mean (average or central
tendency) and the standard deviation (the spread or dispersion).
An appropriate measure of variability is the coefficient of variation (CV), illustrated in the sketch after this list:
CV = standard deviation / mean
o Random variables with CV << 1 have low variability.
o Random variables with CV >> 1 have high variability.
o Random variables with CV around 1 have moderate variability (e.g. the exponential distribution).
At the level of a single process, there are two key sources of variability:
o Inter-arrival times: the times between the arrivals of entities to the process. In cases where
a large number of independent customers arrive at a server, the CV will usually be close to 1,
corresponding to a Poisson arrival process (between the high-variability and low-variability
cases). Sources of variability are customer decisions, scheduling, transportation delays,
quality problems and upstream processing.
o Effective process times: the time from when an entity is ready for processing until it is
completed (this includes detractors). Sources of variability are entity variety, operator
speed, failures, setups and quality problems.
2. Development of an understanding of the cause-and-effect roles of variability in operations systems. In an
operations system, entities queue up behind processes: cycle time = waiting time + process time. The
fundamental cause of queueing is a lack of coordination between arrivals and processing.
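The sketch referred to above computes the CV for hypothetical samples of inter-arrival and process times:

```python
# CV = standard deviation / mean, computed for invented sample data.
import statistics

interarrival_times = [0.5, 9.0, 2.0, 12.0, 1.0, 6.5, 0.8, 4.2]   # minutes (assumed)
process_times      = [3.0, 3.2, 2.9, 3.1, 3.0, 2.8, 3.1, 3.0]    # minutes (assumed)

def cv(sample):
    return statistics.stdev(sample) / statistics.mean(sample)

print(f"CV of inter-arrival times: {cv(interarrival_times):.2f}")  # close to 1: moderate variability
print(f"CV of process times:       {cv(process_times):.2f}")       # well below 1: low variability
```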
Impact of utilisation and variability on waiting time:
o Low variability allows throughput to be increased without increasing waiting times.
o Low variability allows waiting times to be decreased without lowering throughput.
Principle (Queueing): At a single station with no limit on the number of entities that can queue up, the
waiting time (WT) due to queueing is given by:
WT = V × U × T
where:
V = a variability factor: an increasing function of both the CV of inter-arrival times and the CV of
effective process times, proportional to the squared coefficient of variation (SCV) of each:
V ≈ (CV_a² + CV_e²) / 2, with CV_a the CV of inter-arrival times and CV_e the CV of effective process times
U = a utilization factor: an increasing function of utilization, which grows to infinity as utilization
approaches 100%: U = UTIL / (1 − UTIL)
T = average effective process time for an entity at the station
The relationship only holds for stationary systems with all distributions known and priorities which are
either FCFS or Random. So, often you cannot calculate an exact WT.
Because an entity at a station is either waiting or being processed, the cycle time can be written as:
CT = WT + T = VUT + T
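A numerical sketch of the VUT formula (parameter values are assumed, not from the text):

```python
# VUT approximation WT = V x U x T for a single station, with invented parameters.
cv_arrival = 1.0     # CV of inter-arrival times (assumed, Poisson-like arrivals)
cv_process = 0.5     # CV of effective process times (assumed)
util       = 0.90    # utilization of the station (assumed)
t          = 2.0     # average effective process time, hours (assumed)

v = (cv_arrival**2 + cv_process**2) / 2          # variability factor
u = util / (1 - util)                            # utilization factor
wt = v * u * t                                   # waiting time in queue
ct = wt + t                                      # cycle time = waiting + processing

print(f"WT = {wt:.1f} h, CT = {ct:.1f} h")       # WT = 11.25 h at 90% utilization

# The same station at 95% utilization: waiting time roughly doubles,
# showing how sharply queueing grows as utilization approaches 100%.
u95 = 0.95 / (1 - 0.95)
print(f"WT at 95% utilization = {v * u95 * t:.1f} h")
```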
Insights from the ‘VUT’-formula:
o Variability and utilization interact (high variability is most damaging at stations with high
utilization (i.e. at bottlenecks)).
o Unless WIP is capped, queueing becomes extremely sensitive to utilization as the station is loaded
close to its capacity.
o Reductions in utilization tend to have a much larger impact on waiting time than reductions in
variability. However, because capacity is costly, high utilization is usually desirable. Variability
reduction is often the key to achieving high efficiency operations systems.
Extra Lecture 1: OEE
Overall Equipment Effectiveness (OEE) = valuable operating time / loading time = a × p × q
where a is the availability rate, p the performance rate and q the quality rate.
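A small sketch of the OEE calculation with made-up shift data, using the availability/performance/quality decomposition noted above:

```python
# Invented shift data: OEE = valuable operating time / loading time = a x p x q.
loading_time       = 480.0   # minutes planned for production (assumed)
operating_time     = 420.0   # loading time minus downtime (assumed)
net_operating_time = 378.0   # operating time minus speed losses (assumed)
valuable_time      = 340.2   # net operating time minus quality losses (assumed)

a = operating_time / loading_time           # availability rate
p = net_operating_time / operating_time     # performance rate
q = valuable_time / net_operating_time      # quality rate

print(f"OEE = {a * p * q:.1%}")                       # via a x p x q
print(f"OEE = {valuable_time / loading_time:.1%}")    # same result directly
```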
Extra Lecture 1 & 2 & Article: Throughput Diagrams and
Order Process Diagrams
High delivery reliability is one of the order winning performance criteria for make-to-order (MTO)
companies. For controlling and improving delivery reliability, a good diagnosis is necessary. It is important
for the diagnosis phase that the supportive tool links performance indicators to those decisions that can
affect the performance.
Input and output control decisions can influence both the average lateness and the variance of lateness.
Lateness is defined as the conformity of a schedule to a given due date. It is measured by subtracting the
promised delivery time from the realised throughput time. The percentage of orders delivered late can be
decreased by reducing the average lateness and/or by reducing the variance of lateness.
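A brief sketch of these lateness measures for a handful of hypothetical orders:

```python
# Lateness = realised throughput time - promised throughput time; the share of late
# orders depends on both the average and the variance of lateness.
import statistics

# (promised, realised) throughput times in working days, per order (assumed)
orders = [(10, 12), (15, 14), (20, 25), (8, 8), (12, 16), (18, 17)]

lateness = [realised - promised for promised, realised in orders]

print(f"Average lateness: {statistics.mean(lateness):.1f} days")
print(f"Variance:         {statistics.variance(lateness):.1f}")
print(f"% delivered late: {sum(l > 0 for l in lateness) / len(lateness):.0%}")
```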
Input control decisions:
Order acceptance and delivery date promising; deals with customer enquiries.
Release of orders. Because capacity is often restrictive, it is important to select those orders for
release that provide capacity groups in the shop with a good load balance.
Priority dispatching; a relatively weak input control decision.
Output control decisions dedicate capacity to those capacity groups where orders are congesting. Capacity
changes are generally triggered by large sets of orders that tend to be delivered late.
Throughput diagrams are an important instrument to diagnose flow problems. A throughput diagram
shows the relation between cumulative input and cumulative output, and reflects Little’s Law (WIP = TH × CT).
The horizontal axis in a throughput diagram shows the cumulative time in working days. The vertical axis
shows work measured in processing time hours for a capacity group. Note: only for First Come First Served
discipline, the horizontal distances indicate the throughput times of individual orders.
In a throughput diagram, you can also see when there is an overloaded or under-loaded resource. In case of
an overloaded resource, the cumulative input and output lines diverge. When there is an under-loaded
resource, the lines converge and touch each other (the output catches up with the input).
The start of the throughput diagram is important: if you start with a wrong initial WIP, the WIP will be
wrong all the time.
A bottleneck is operating at full capacity. How do you recognize a bottleneck in a throughput diagram?
The operation completion (output) line does not follow the arrival (input) line closely; the output line of
a bottleneck is more or less straight and diagonal, without sharp edges.
What can you see from a throughput diagram?
- Analysing aggregate flow of orders
- Unravelling acceptance behaviour
- Comparing hours promised vs realised
- Determining (un)balance of loads
- Pointing out moments of change
- Detecting stages causing main delays
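A minimal plotting sketch of a throughput diagram (the cumulative figures are fabricated); it assumes matplotlib is available:

```python
# Cumulative input and output in processing-time hours against working days.
import matplotlib.pyplot as plt

days              = [0, 1, 2, 3, 4, 5, 6, 7, 8]
cumulative_input  = [20, 45, 80, 110, 150, 190, 220, 260, 300]  # hours arrived (assumed)
cumulative_output = [0, 20, 40, 60, 80, 100, 120, 140, 160]     # hours completed (assumed)

plt.step(days, cumulative_input, where="post", label="cumulative input")
plt.step(days, cumulative_output, where="post", label="cumulative output")
plt.xlabel("working days")
plt.ylabel("work (processing-time hours)")
plt.title("Throughput diagram: diverging lines indicate an overloaded resource")
plt.legend()
plt.show()

# Vertical distance between the lines = WIP (in hours); for FCFS, the horizontal
# distance at a given work level approximates an individual order's throughput time.
```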
While throughput diagrams show averages, order progress diagrams show variance. The order progress
diagram indicates the difference between the progress of an individual order and the average order
progress pattern. Thus, we can observe which orders are delayed and which are speeded up at each stage
of the order. In addition, the diagram signals the extent of the delay or speed-up and relates it to the
estimated lateness. For each stage of the order, the diagram shows the relationship between realised
individual throughput times and average throughput times.
The horizontal axis shows realised dates for the different stages of the process in terms of working days. The
vertical axis shows the (estimated) lateness at the different stages of the process. The estimated lateness after
each stage is defined as the difference between the realised completion date of that stage and the virtual
due date of that stage. The progress of an individual order is represented by a single curve; the dots mark
the different stages/departments of the order (engineering, production, processing, end/departure,
etc.). In an ideal situation, all dots lie on the horizontal axis. An order progress diagram gives insight into
the progress of orders and the link with control decisions.
Upward and downward sloping line segments can be caused by the applied dispatching rules or by output
control decisions in the considered stage (see the sketch after this list):
- An increasing (upward) segment means the order is slowing down (falling further behind).
- A decreasing (downward) segment means the order is speeding up (catching up).
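The sketch below computes the estimated lateness per stage for one invented order, mirroring the upward/downward interpretation above:

```python
# Estimated lateness per stage = realised completion date of the stage minus the
# virtual due date of that stage (all dates invented, in working days).
stages = ["engineering", "production", "processing", "departure"]

virtual_due = {"engineering": 5, "production": 12, "processing": 18, "departure": 20}
realised    = {"engineering": 6, "production": 16, "processing": 21, "departure": 22}

previous = 0
for stage in stages:
    est_lateness = realised[stage] - virtual_due[stage]
    trend = "slowing down" if est_lateness > previous else "speeding up or stable"
    print(f"{stage:12s} estimated lateness = {est_lateness:+d} days ({trend})")
    previous = est_lateness
```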