Supply Chain
Science
Chapter 0: Strategic Foundation
When a basis for making decisions is actively planned, it is called strategy. The link between strategy and
operations lies in an organization’s value proposition. Firms that offer products or services to customers
compete on the basis of some combination of cost, quality, speed, service and variety. There are different
trade-offs that can be made among these five elements (e.g. cost vs. quality and cost vs. speed). These
trade-offs can be depicted in a figure (e.g. cost vs. quality).
[Figure: cost vs. quality trade-off curve; the efficient frontier separates the infeasible region below it from the inefficient region above it]
The efficient frontier represents the most efficient (lowest cost) system for achieving a given
performance level. Points below these curves are infeasible given the current technology. Points above
them are inefficient. There are two levels of decision making in terms of this kind of trade-off:
→ Strategic problem: determining where on the efficient frontier to locate.
→ Operational problem: designing a system that achieves performance on the efficient frontier.
A supply chain is a goal-oriented network of processes and stock points used to deliver goods and services
to customers.
Processes: the individual activities involved in producing and distributing goods and services.
Stock points: locations in the supply chain where inventories are held.
o Decoupling: stock held on purpose, in advance of demand (anticipation, cycle/seasonal,
safety stock)
o Queue: waiting demand (congestion, assembly, transport, batch/set-up)
Network: the various paths by which goods and services can flow through a supply chain.
Goal oriented: the fundamental objective of a supply chain is to support the strategy of an
organization.
At the lowest level, a single process fed by a single stock point is defined as a station, which consists of
one or more servers. The next level in size and complexity is the line (or routing/flow), which is a sequence
of stations used to generate a product or service. The highest level is a network, which involves multiple
lines producing multiple products and/or services.
Chapter 1: Capacity
Over the long term, average throughput of a process is always strictly less than capacity.
The fundamental activity of any operations system centres around the flow of entities through processes
made up of servers. The flow typically follows routings that define the sequences of processes visited by
the entities. There are a few key performance measures:
Throughput (TH): rate at which entities are processed by the system.
Work in process (WIP): number of entities in the system, which can be measured in physical units
or financial units.
Cycle time (CT) or Throughput time or Lead time: time it takes an entity to traverse the system,
including any rework, restart or other disruptions.
Inventory turns = TH / WIP
Inventory turns measure how efficiently an operation converts inventory into output.
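As a minimal numerical sketch (the figures are hypothetical, not from the text): a plant that ships 200 units per week while carrying 50 units of WIP turns its inventory 4 times per week.

```python
# Inventory turns = TH / WIP (hypothetical figures for illustration).
th = 200.0   # throughput: units shipped per week
wip = 50.0   # average work in process, in units

inventory_turns = th / wip
print(inventory_turns)  # 4.0 turns per week
```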
The capacity of a system is the maximum average rate at which entities can flow through the system. For
an individual station: capacity = base capacity - detractors. The base capacity is the rate of the process
under ideal conditions and detractors are anything that slows the output of the process. For routings and
networks, capacity is constrained by the constituent stations. The process that constrains the capacity of
a system is the bottleneck. The bottleneck of a system is the process with the highest utilization. The
utilization level is the fraction of time a station is not idle:
utilisation = (rate into station) / (capacity of station)
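A small sketch of the bottleneck calculation, combining the effective-capacity formula and the utilisation ratio above (the station names and rates are hypothetical assumptions):

```python
# Find the bottleneck of a routing as the station with the highest
# utilisation. Station data are hypothetical.
stations = {
    "cut":   {"base_capacity": 12.0, "detractors": 2.0},  # jobs/hour
    "weld":  {"base_capacity": 10.0, "detractors": 1.0},
    "paint": {"base_capacity": 15.0, "detractors": 4.0},
}
release_rate = 8.0  # jobs/hour released into the routing

utilisation = {}
for name, s in stations.items():
    capacity = s["base_capacity"] - s["detractors"]  # effective capacity
    utilisation[name] = release_rate / capacity      # rate in / capacity

bottleneck = max(utilisation, key=utilisation.get)
print(bottleneck, round(utilisation[bottleneck], 3))  # weld 0.889
```

Note that "weld" is the bottleneck even though "cut" has a lower effective capacity margin in absolute terms; the bottleneck is defined by utilisation, not by capacity alone.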
Principle (Capacity): The output of a system cannot equal or exceed its capacity. This is due to variability.
If you set the release rate below capacity, the system stabilizes. Although variability causes short-term
fluctuations, WIP will remain consistently low.
Principle (Utilisation): Cycle time increases with utilization and does so sharply as utilization approaches
100%.
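The sharpness of this increase can be illustrated with the M/M/1 queueing model (an assumed model; the text states the principle without one), whose mean time in system is 1 / (capacity - arrival rate):

```python
# Utilisation principle illustrated with the M/M/1 queue (assumption):
# mean cycle time CT = 1 / (mu - lambda), which explodes as
# utilisation u = lambda / mu approaches 1.
mu = 10.0  # capacity: entities per hour
for u in (0.50, 0.80, 0.90, 0.95, 0.99):
    lam = u * mu              # arrival (release) rate
    ct = 1.0 / (mu - lam)     # mean time in system, hours
    print(f"u = {u:.2f}  CT = {ct:.2f} h")
```

Moving utilisation from 50% to 99% multiplies cycle time fifty-fold in this model, which is the quantitative content behind "increases sharply as utilization approaches 100%".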
The over-time vicious cycle: because maximizing throughput is desirable and estimating true theoretical
capacity is difficult, managers tend to set releases into their system close to or even above theoretical
capacity. This causes cycle times to increase, which in turn causes late orders and excessive WIP. When the
situation becomes bad enough, management authorizes overtime, which changes the capacity of the
system and causes cycle times to come back down. But as soon as the system recovers, overtime is
discontinued, releases are aimed right back at theoretical capacity, and the whole cycle begins again.
Chapter 2: Variability
Increasing variability always degrades the performance of a production system.
Principle (Little’s Law): Over the long term, average work in process (WIP), throughput (TH) and cycle
time (CT) for any stable process are related as follows:
WIP = TH × CT
There are two restrictions to Little’s Law:
→ It refers to long-term averages.
→ The process must be stable.
Some common applications of Little’s Law include basic calculations, measure of cycle time, and cycle time
reductions.
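One such basic calculation can be sketched as follows (the numbers are hypothetical): cycle time is often hard to clock directly, but WIP and TH are easy to count, and Little's Law then gives CT = WIP / TH.

```python
# Little's Law: WIP = TH * CT, so CT = WIP / TH.
# Hypothetical counts: 120 orders in process, 60 orders completed per day.
wip = 120.0  # orders currently in the system
th = 60.0    # orders completed per day

ct = wip / th
print(ct)  # 2.0 days of average cycle time
```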
In practice, operations and supply chain systems can exhibit dramatic differences in efficiency, due to
variability. Understanding variability involves two basic steps:
1. Specification of consistent and appropriate measures of variability. Such a quantitative measure is
called a random variable. The set of all possible realizations of a random variable is its population;
most of the time a sample is used instead of the whole population. One way to describe a
population or sample is by means of summary statistics, like the mean (the average or central
tendency) and the standard deviation (the spread or dispersion).
1. An appropriate measure of variability is the coefficient of variation (CV):
CV = standard deviation / mean
1. Random variables with CV << 1 have low variability.
2. Random variables with CV >> 1 have high variability.
3. Random variables with CV around 1 have moderate variability (e.g. the exponential distribution).
2. At the level of a single process, there are two key sources of variability:
1. Inter-arrival times: times between the arrivals of entities in the process. In cases where
we have a large number of independent customers arriving to a server, the CV will usually
be close to 1, according to a Poisson distribution (between the high-variability and low-
variability cases). Sources of variability are customer decisions, scheduling, transportation
delays, quality problems and upstream processing.
2. Effective process times: the time from when an entity is ready for processing to when it
is completed (this includes detractors). Sources of variability are entity variety, operator
speed, failures, setups and quality problems.
2. Development of an understanding of the cause-and-effect roles of variability in operations
systems. In an operations system, entities queue up behind processes: cycle time = waiting time + process time. The
fundamental cause of queueing is a lack of coordination between arrivals and processing.
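To make the coefficient of variation concrete, a simulation sketch using Python's standard library (the distributions and sample sizes are illustrative assumptions, not from the text):

```python
import math
import random

def cv(samples):
    """Coefficient of variation: standard deviation / mean."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return math.sqrt(var) / mean

random.seed(42)  # reproducible illustration
# Exponential inter-arrival times have CV close to 1 ("moderate").
exp_times = [random.expovariate(1.0) for _ in range(10_000)]
# A near-constant process time has CV << 1 ("low").
steady_times = [random.gauss(10.0, 0.5) for _ in range(10_000)]

print(round(cv(exp_times), 2))     # close to 1
print(round(cv(steady_times), 2))  # close to 0.05
```

This matches the note above that a large number of independent arrivals behaves like a Poisson process, whose exponential inter-arrival times sit at the CV ≈ 1 boundary between the low- and high-variability cases.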
Impact of utilisation and variability on Waiting Time: