Smart Industries | University of Twente | MSc. Business administration, Digital Business | Arian Merzaie
Week 1 | Introduction to Smart Industry
Figure 1 - Model by Daniel Bell
Pre-industrial: In the pre-industrial era, production was extractive and the resources consisted of raw
materials, which were transformed by means of natural power (wind, water, and human or animal muscle).
The laborers were predominantly farmers and artisans, and the time orientation was toward the past.
Industrial: The industrial age can be characterized by fabrication. Factories transformed resources using
created energy (coal, gas, electricity) and machines; the resources now consisted not only of raw materials
but also of money, in the form of financing for projects and machines. This required engineers and laborers,
and the orientation was on ad-hoc adaptiveness.
Post-industrial: In the post-industrial era, production consisted of processing and recycling. The key
resources were information systems (ICT) and codified knowledge, for which scientists were needed. The
time perspective was to plan for the future and control output.
Smart: In the current age, production is slowly moving toward a customized approach in which batches-of-one
are produced. The tools are now the internet (the web), machine learning, and big data. This is made
possible by digital collective intelligence, and the orientation is on predictions and forecasts.
1.1.1 Digital twins
All smart, connected products, from home appliances to industrial equipment, share three core elements:
physical components (such as mechanical and electrical parts); smart components (sensors,
microprocessors, data storage, controls, software, an embedded operating system, and a digital user
interface); and connectivity components (ports, antennae, protocols, and networks that enable
communication between the product and the product cloud, which runs on remote servers and contains the
product’s external operating system). Smart, connected products require a whole new supporting technology
infrastructure. This “technology stack” provides a gateway for data exchange between the product and the
user and integrates data from business systems, external sources, and other related products. The technology
stack also serves as the platform for data storage and analytics, runs applications, and safeguards access to
products and the data flowing to and from them.
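To make the three component groups concrete, the sketch below models them as plain data structures. This
is purely illustrative; all class and field names are assumptions, not part of the source's model.

    from dataclasses import dataclass

    # Illustrative sketch of the three core elements of a smart, connected
    # product. All class and field names are assumptions for illustration.

    @dataclass
    class PhysicalComponents:
        mechanical_parts: list[str]
        electrical_parts: list[str]

    @dataclass
    class SmartComponents:
        sensors: list[str]
        software: list[str]
        has_embedded_os: bool = True   # embedded operating system
        has_digital_ui: bool = True    # digital user interface

    @dataclass
    class ConnectivityComponents:
        ports: list[str]
        protocols: list[str]
        product_cloud_url: str         # product cloud runs on remote servers

    @dataclass
    class SmartConnectedProduct:
        physical: PhysicalComponents
        smart: SmartComponents
        connectivity: ConnectivityComponents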
To better understand the rich data generated by smart, connected products, companies are also
beginning to deploy a tool called a “digital twin.” Originally conceived by the Defense Advanced Research
Projects Agency (DARPA), a digital twin is a 3-D virtual-reality replica of a physical product. As data streams
in, the twin evolves to reflect how the physical product has been altered and used and the environmental
conditions to which it has been exposed. Like an avatar for the actual product, the digital twin allows the
company to visualize the status and condition of a product that may be thousands of miles away. Digital
twins may also provide new insights into how products can be better designed, manufactured, operated, and
serviced.
Systems can be categorized into three types: simple, complicated, and complex. Simple systems are just that:
a small number of components whose behavior is completely predictable. Complicated systems are also
completely predictable; the system follows well-defined patterns. The difference between simple systems
and complicated systems is the component count. Complex systems are characterized by a large network of
components, many-to-many communication channels, and sophisticated information processing that makes
prediction of system states difficult.
Four Categories of System Behavior
Four final categories can be defined: Predicted Desirable (PD), Predicted Undesirable (PU), Unpredicted
Desirable (UD), and Unpredicted Undesirable (UU).
The PD category is obviously the desired behavior of our system. This is the intentional design and
realization of our system. In systems engineering terms, it is the requirements our system is designed to
meet.
The PU category contains problems we know about and will do something about eliminating. PU
problems are management issues. If we know about the PU problems and do nothing about them, that is
engineering malpractice. This is the category that expensive lawsuits are made of.
The unpredicted category is our “surprise” category. The UD category contains pleasant surprises.
While ignorance is never bliss, this category only hurts our pride: we did not understand our system well
enough to predict these beneficial behaviors.
The UU category holds the potential for serious, catastrophic problems. It is this category of
emergent behavior that we need to focus on. We need capabilities and methodologies to mitigate and/or
eliminate any serious problems and even reduce unforeseen problems that are merely annoying. This is the
purpose of the Digital Twin model and methodology.
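The four categories form a simple 2x2 grid (predicted/unpredicted by desirable/undesirable), which can be
sketched in code. The enum and function below are illustrative, not from the source.

    from enum import Enum

    class BehaviorCategory(Enum):
        PD = "Predicted Desirable"      # intended, designed-for behavior
        PU = "Predicted Undesirable"    # known problems to be managed
        UD = "Unpredicted Desirable"    # pleasant surprises
        UU = "Unpredicted Undesirable"  # potentially catastrophic; the focus
                                        # of the Digital Twin methodology

    def classify(predicted: bool, desirable: bool) -> BehaviorCategory:
        # Place an observed system behavior in the 2x2 grid above.
        if predicted:
            return BehaviorCategory.PD if desirable else BehaviorCategory.PU
        return BehaviorCategory.UD if desirable else BehaviorCategory.UU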
The Digital Twin comprises the following elements: real space, virtual space, a link for data flow from real
space to virtual space, a link for information flow from virtual space to real space, and virtual sub-spaces. The
premise driving the model was that each system consisted of two systems, the physical system that has
always existed and a new virtual system that contained all of the information about the physical system. This
meant that there was a mirroring or twinning of systems between what existed in real space to what existed
in virtual space and vice versa. The PLM or Product Lifecycle Management in the title meant that this was
not a static representation, but that the two systems would be linked throughout the entire lifecycle of the
system. The virtual and real systems would be connected as the system went through the four phases of
creation, production (manufacture), operation (sustainment/support), and disposal.
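The mirroring between real and virtual space can be illustrated with a toy loop: sensor data flows up to the
virtual system, and decisions (information) flow back down. Every name and the trivial fan rule below are
invented for illustration only.

    # Toy sketch of the Digital Twin data/information loop.

    class PhysicalSystem:
        def __init__(self) -> None:
            self.temperature = 35.0
            self.fan_on = False

        def read_sensors(self) -> dict:
            # data flow: real space -> virtual space
            return {"temperature": self.temperature}

        def apply(self, command: dict) -> None:
            # information flow: virtual space -> real space
            self.fan_on = command.get("fan_on", self.fan_on)

    class VirtualSystem:
        def __init__(self) -> None:
            self.state: dict = {}   # mirror of the physical system

        def update(self, sensor_data: dict) -> None:
            self.state.update(sensor_data)

        def decide(self) -> dict:
            # trivial stand-in for simulation/analytics in virtual space
            return {"fan_on": self.state.get("temperature", 0.0) > 30.0}

    physical, virtual = PhysicalSystem(), VirtualSystem()
    for _ in range(3):              # one iteration per "tick"
        virtual.update(physical.read_sensors())
        physical.apply(virtual.decide())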
The lifecycle of the digital twin
The Digital Twin implementation model attempts to convey a sense of being iterative and simultaneous in
the development process. Unlike the Waterfall or even Spiral Models, the downstream functional areas as
conventionally thought of are brought upstream into the create phase. The “ilities” (manufacturability,
maintainability, supportability, and the like) are part of the considerations of system design.
In fact, these areas can and should influence the design. For example, being able to manufacture a
honeycombed part through additive manufacturing would result in a part meeting its performance
requirement at a significant savings in weight. Without that consideration, designers would specify a more
expensive material in order to meet the weight and performance requirements.
What makes this new model feasible is the ability to work in the virtual space of the Digital
Twin. The classic sequential models of Systems Engineering were necessitated by the need to work with
physical objects. Designs had to be translated into expensive physical prototypes in order to do the
downstream work of say, manufacturing. This meant that only a subset of designs could be considered,
because the cost of getting it wrong and having to go back and redesign was expensive and time consuming.
The Digital Twin changes that with its ability to model and simulate digitally. As indicated in the
figure below, downstream functional areas can influence design because working with digital models in
the create phase is much cheaper and faster and will continue to move in that direction.
While this new model is advantageous with traditional subtractive manufacturing methods, it will
be required as additive manufacturing technologies advance and become mainstream production
capabilities. The design-to-production process for additive manufacturing is much more integrative than
subtractive manufacturing. Integration is a major hallmark of the Digital Twin Implementation Model.
Additionally, the authors describe Digital Twin-like modeling and simulation as having high potential for
additive manufacturing part qualification.
While the Digital Twin Implementation Model needs more detail and maturation, the concepts behind
it, addressing the “ilities” as early as possible, integration across the lifecycle, and fast iterations, address the
shortcomings of the current Systems Engineering models. The maturation and adoption of additive
manufacturing will only serve to highlight this need.
Testing of the digital twin
Visual testing = the digital design should cover the real thing 100%, as is needed in CAD designs of car
engines.
Performance testing = if we knew all relevant natural laws, we could predict perfectly; the outcome of an
F1 race would be 100% predictable.
Reflectivity testing = learning by the machine and, at a meta-design level, by the designer.
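In the spirit of performance testing, one could check that the twin's predictions match measurements from
the real system within a tolerance, as in the sketch below. The values, tolerance, and function name are
invented for illustration.

    # Illustrative performance test: does the virtual model predict the
    # real system closely enough? Values and tolerance are made up.

    def within_tolerance(predicted, measured, tol=0.05):
        # Pass if every prediction is within `tol` (relative) of reality.
        return all(abs(p - m) <= tol * abs(m)
                   for p, m in zip(predicted, measured))

    twin_prediction = [101.2, 98.7, 102.4]   # e.g. engine temperatures (deg C)
    real_measurement = [100.8, 99.0, 103.1]
    assert within_tolerance(twin_prediction, real_measurement)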
1.1.2 Digital Stack for new products (infrastructure)
The connectivity of smart products goes both ways. On the cloud side, it enables smart product applications,
an analytics/rules engine, an application platform, and a product database containing real-time product
information. On the product side, it connects the product's hardware and software (connectivity port, user
interface, and control components). Both the cloud and the product draw on external information sources.
Integrating the products and the cloud with enterprise software ties into business systems such as CRM,
ERP, and PLM. All components are secured by measures such as cloud security layers, authentication, and
system access controls.
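Read as a layered architecture, the stack can be summarized as plain data. The layer names follow the
description above; grouping them this way (and the example external sources) is an assumption.

    # The technology stack sketched as plain data; grouping is illustrative.
    TECHNOLOGY_STACK = {
        "product": ["product hardware", "product software",
                    "connectivity port / user interface", "control components"],
        "connectivity": ["network communication protocols"],
        "product_cloud": ["smart product applications", "analytics/rules engine",
                          "application platform",
                          "product data database (real-time)"],
        "external_sources": ["weather", "energy prices"],   # illustrative examples
        "business_systems": ["CRM", "ERP", "PLM"],
        "security": ["cloud security layers", "authentication",
                     "system access controls"],
    }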
This infrastructure enables extraordinary new product capabilities. First, products can monitor and
report on their own condition and environment, helping to generate insights into their performance and
use. Second, complex product operations can be controlled by the users, through numerous remote-access
options. Third, the combination of monitoring data and remote-control capability creates new opportunities
for optimization (smart farms/buildings). Fourth, the combination of monitoring data, remote control, and
optimization algorithms allows autonomy. Products can learn, adapt to the environment and to user
preferences, service themselves, and operate on their own.
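The four capabilities build on one another, which the sketch below makes concrete for the smart-farm case
mentioned above. The class, the moisture rule, and all numbers are illustrative assumptions.

    # Illustrative capability ladder: monitoring -> control -> optimization
    # -> autonomy. Each level uses the ones below it.

    class SmartIrrigator:
        def __init__(self) -> None:
            self.soil_moisture = 0.20   # monitored state
            self.valve_open = False

        # 1. Monitoring: report own condition and environment
        def monitor(self) -> float:
            return self.soil_moisture

        # 2. Control: (remote) operation of product functions
        def set_valve(self, open_valve: bool) -> None:
            self.valve_open = open_valve

        # 3. Optimization: use monitoring data to improve operation
        def optimize(self, target: float = 0.30) -> None:
            self.set_valve(self.monitor() < target)

        # 4. Autonomy: run the optimization loop without an operator
        def run_autonomously(self, ticks: int) -> None:
            for _ in range(ticks):
                self.optimize()
                if self.valve_open:     # crude environment model
                    self.soil_moisture += 0.05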
1.1.3 Creating new value with data
Data from smart, connected products is generating insights that help businesses, customers, and partners
optimize product performance. Simple analytics, applied by individual products to their own data, reveal
basic insights; more-sophisticated analytics, applied to product data that has been pooled into a “lake” with
data from external and enterprise sources, unearth deeper insights that can also optimize production. For example, a
production machine can detect a potentially dangerous malfunction, shut down other equipment that could
be damaged, and direct maintenance staff to the problem. GE’s Brilliant Factories initiative uses sensors
(retrofitted on existing equipment or designed into new equipment) to stream information into a data lake,
where it can be analyzed for insights on cutting downtime and improving efficiency. In one plant, this
approach doubled the production of defect-free units.
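In the same spirit as the production-machine example above, a minimal sketch of such a rule might look as
follows; the threshold, machine names, and actions are all invented for illustration.

    # Minimal sketch of the malfunction scenario: a dangerous sensor reading
    # stops downstream equipment and notifies maintenance. All values and
    # names are invented.

    VIBRATION_LIMIT = 8.0   # arbitrary units

    def check_machine(readings: dict, downstream: list) -> list:
        # Return the actions taken for one batch of sensor readings.
        actions = []
        if readings.get("vibration", 0.0) > VIBRATION_LIMIT:
            actions.append("shut down source machine")
            actions += [f"stop {m}" for m in downstream]
            actions.append("dispatch maintenance to source machine")
        return actions

    print(check_machine({"vibration": 9.3}, ["conveyor-2", "press-4"]))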