Your Challenges in the Choice of a Quality Assurance Strategy

On average, 20% of annual revenue in the manufacturing industry is lost due to poor product quality [1]. This accumulated loss is referred to as the cost of poor quality (CoPQ), which can be broken down into two categories:

  • Internal costs of nonconformance
  • External costs of nonconformance

Internal costs of nonconformance comprise the costs incurred for faulty products before delivery to the customer. They include, for example, all costs for producing unusable products (rejects) whose defects are so serious that they cannot be repaired, and, in the case of minor defects, the additional personnel costs for reworking the faulty products.

If defective products are nevertheless delivered to customers, the consequences can be severe; the resulting costs are referred to as external costs of nonconformance.

Examples of this are the high costs of replacing or repairing products in cases of warranty claims. Furthermore, repeatedly poor quality poses a serious risk to your reputation, a risk amplified by networking and social media. The consequences are declining customer satisfaction, loss of customers, shrinking market share, and the corresponding loss of sales. Depending on the severity of the problem, there can be more drastic consequences: they range from a complete product recall to extensive legal liability with penalties if it can be proven that people have been harmed by defective products.

In view of the risks and costs of nonconformance, guaranteeing product quality on an ongoing basis is both indispensable and difficult. The goal of a quality strategy is to maximize product quality while minimizing quality costs. The following diagram shows a maturity model for quality assurance strategies, which we will examine more closely below.

Maturity model for quality assurance strategies.
Source: Novatec internal

In most companies, reactive quality controls are performed on the end products: employees randomly check the quality at the end of the production process. Because the product is inspected only at the end, high costs are incurred for rejects and rework, and defects often originate much earlier, in the intermediate products. Added to this are the personnel costs for the experienced employees who perform such inspections.

In condition-based quality control, the quality is not examined at the end of the production process but during the individual process steps. Built-in sensors record the current quality, and defects in intermediate products can be identified with simple rules, such as the exceeding or undershooting of threshold values. Only in this way can countermeasures be taken early on.
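
Such rule-based monitoring can be expressed in a few lines. The following minimal sketch illustrates the idea; the sensor names and threshold values are illustrative assumptions, not values from a real process:

```python
# Rule-based, condition-based quality check: each reading must stay
# inside its tolerance band. Sensors and thresholds are assumptions.
THRESHOLDS = {
    "melt_temperature_c": (180.0, 220.0),  # (lower, upper) bound
    "vacuum_pressure_bar": (0.3, 0.8),
}

def within_tolerance(sensor: str, value: float) -> bool:
    """Return True if the reading lies within its threshold band."""
    lower, upper = THRESHOLDS[sensor]
    return lower <= value <= upper

# An undershot threshold on an intermediate product triggers an early countermeasure.
if not within_tolerance("melt_temperature_c", 172.5):
    print("Warning: melt temperature out of tolerance - intervene early")
```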

Predictive quality assurance constitutes a proactive quality control that uses modern analytical methods such as machine learning.

Quality-related and traceable measurands are predicted at the machine level in all subprocesses in order to initiate measures for anticipated quality defects preemptively and thus increase productivity and quality on an ongoing basis. This is done based on historical data, such as sensor data from machines, environmental conditions, process parameters, or machine data from control units. The basis for this is an IIoT platform and built-in sensors in every machine to record the necessary data.

The last stage in the maturity model is prescriptive quality assurance. Here, recommendations for optimal process parameters are provided at the machine level with the help of predictive quality, root-cause analyses, and other diagnostic procedures.

The Advantages of Using Predictive Quality

The goal of predictive quality assurance is to prevent defects in order to save the time and costs caused by avoidable rejects, retesting, and rework. Here, artificial intelligence is used during production to predict the quality of products as they are manufactured. This allows a decision to be made right away as to whether a product is a quality piece that can be used in further production steps, or a reject that must be recycled. Based on current process parameters and contexts learned from the past, the quality of a product is predicted as it is being manufactured. When a product is assessed as a reject, changes to the process parameters are recommended so that the defect is as minor as possible, ideally avoided entirely. Defect prevention through predictive quality assurance is especially significant when the quality of a manufactured product can be examined only after some time. In plastic extrusion, for example, the quality can be examined only after the cooling phase.

The complex relationships and interactions within modern, high-tech production processes pose ever greater challenges, even for experienced process experts. Even though today's manufacturing processes are set up to be very stable, defects can still occur while all parameters operate within the applicable tolerances. If a defect is discovered by an inspection at the end of the production chain, process parameters must be adjusted. Here, not only the relationships between process parameters and product quality are crucial, but also the time between detection and action (the quality control loop). For this reason, predictive quality assurance informs employees in production whether the product currently being manufactured is a quality piece or a reject.

What are the Advantages of Using Predictive Quality?

  • Prediction of the product quality (OK/NOK statement) while the product is being manufactured
  • Time and cost saving by preventing the reprocessing of rejects
  • Detection of relationships between process parameters and product quality

Predictive quality assurance uses sensor data (e.g. temperature, pressure, or vibrations), event data from IT systems, process parameters from machine control units, and machine learning to predict the probability of the product being a reject or a quality piece. In the case of plastic extrusion, this includes the injection pressures and temperatures of the molten material, the moisture content and storage temperature of the granulate, the ambient temperature, the usage history of the tool, and quality reports from downstream processes.

The Industrial Internet of Things (IIoT) allows process parameters to be captured from equipment and machines. The large volumes of data produced here are frequently stored and processed in cloud environments. Recorded process parameters, sensor data, and the results of quality assurance are used as learning examples. Machine learning is then used to learn patterns in the process parameters and the associated evaluations as a quality piece or reject. Next, the process parameters recorded by sensors are continuously monitored in production. If a learned pattern is detected in the process parameters, production workers are informed of imminent quality problems. Quality problems that cannot be attributed to a familiar pattern in the data are used to improve the prediction.
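
The core of this pattern learning can be sketched with a simple classifier. The following snippet is a minimal illustration, not a production model; the feature columns, example values, and decision threshold are assumptions:

```python
# Minimal sketch: learn OK/NOK patterns from historical process data
# and score a live reading. Features and values are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: melt temperature (°C), vacuum pressure (bar), screw speed (rpm)
X_hist = np.array([[205.0, 0.55, 80.0],
                   [198.0, 0.62, 75.0],
                   [231.0, 0.41, 90.0],
                   [235.0, 0.38, 92.0]])
y_hist = np.array([1, 1, 0, 0])  # 1 = quality piece (OK), 0 = reject (NOK)

model = LogisticRegression().fit(X_hist, y_hist)

# Continuously monitored live reading: probability of a quality piece
p_ok = model.predict_proba([[228.0, 0.44, 88.0]])[0, 1]
if p_ok < 0.5:
    print(f"Imminent quality problem (P(OK) = {p_ok:.2f}) - inform production")
```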

How Predictive Quality Works

Monitoring the condition of the equipment and machines is essential for predictive quality. The convergence of operational technology (OT) and information technology (IT), as well as the availability of bandwidth, computing capacity, and memory, make it possible to collect, store, and analyze large volumes of data. To this end, different data is collected by the appropriate sensors and read out from machine controllers and other IT systems.

The following data, among other information, is used to evaluate the condition of the equipment and machines:

  • Vibrations (e.g. deflection, speed, acceleration, or ultrasound)
  • Temperature (e.g. component temperature, ambient temperature, infrared radiation)
  • Tribology (e.g. wear particles)
  • Event data (e.g. state of production, error messages)
  • Process parameters (e.g. rotational speed, processing time)

IIoT platforms (e.g. Amazon AWS IoT, Microsoft Azure IoT, Siemens MindSphere, Cumulocity, ADAMOS, Echolo, or Crosser) frequently collect the data from the sensors via OPC UA or special IIoT connectors and transmit it to a cloud environment for storage and analysis. In the example of manufacturing plastic pipes by pipe extrusion, predictive quality can be implemented based on the following structure:
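
To illustrate the OPC UA path, the following minimal sketch reads a single process parameter with the python-opcua client; the endpoint URL and node ID are illustrative assumptions for a pipe extruder:

```python
# Minimal sketch: read one process parameter from a machine via OPC UA.
# Endpoint and node ID are placeholders, not a real configuration.
from opcua import Client

client = Client("opc.tcp://extruder-01.example.local:4840")
client.connect()
try:
    # Assumed node address of the melt temperature on the controller
    melt_temp_node = client.get_node("ns=2;s=Extruder.MeltTemperature")
    value = melt_temp_node.get_value()
    print(f"Current melt temperature: {value} °C")
finally:
    client.disconnect()
```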

Predictive quality for pipe extrusion.
Source: Novatec internal

The architecture shown provides an overview of the components and technologies used in predictive quality for the manufacturing of plastic pipes. The architecture and technologies are based on the cloud native open source software approach. This allows for operation in your own data center or with a cloud provider such as AWS or Azure. Individual technologies can also be substituted by managed services such as Amazon SageMaker or Azure Machine Learning Studio. The concrete implementation of the architecture is always based on the circumstances and requirements of the respective project.

Production (shop floor)

Machine tool for pipe extrusion.
Source: Novatec internal

A screw extruder is used in the manufacturing of plastic pipes (e.g. cable conduits for electrical installations). At the beginning of the pipe extrusion, granulate is fed into the filling hopper. The screw conveys the granulate through the cylinder, around which heating equipment is attached. The heating elements and friction cause the material to melt. On the left in the illustration, the tool that presses the molten mass into the shape of a pipe is attached. Next, the material is drawn through a cooling and vacuum system that solidifies it. Finally, the long plastic pipes are cut to the desired length.

Elementary measurands for the desired quality of a plastic pipe are the inner and outer diameters, which serve as target metrics for the machine learning prediction model. Additional quality features, such as the color quality or the particle size distribution of the plastic, are also possible.

Among other things, the following process parameters have an impact on the quality of the pipe product:

  • Hopper temperature
  • Rotational speed of the screw
  • Melting temperature
  • Temperatures of the heating elements
  • Temperature of the nozzle head
  • Temperature of the cooling
  • Vacuum pressure
  • Pulling force of the pulling mechanism

To record these parameters, temperature sensors must be attached at many places, such as the hopper and the heating elements. The rotational speed can be obtained from the extruder's machine control unit, while additional sensors are required to measure the vacuum pressure. This way, all the relevant sensor data is collected in an IIoT platform and transmitted to a database in a cloud environment. Depending on the circumstances, the sensor data can be transmitted to one or more consumers. For security reasons, data transmission is mostly initiated from production (push principle).
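
A minimal sketch of this push principle, here with MQTT as an assumed transport; the broker address, topic, and payload layout are illustrative:

```python
# Minimal sketch: the shop floor publishes sensor readings to the IIoT
# platform via MQTT (push principle). All connection details are assumptions.
import json
import time

import paho.mqtt.client as mqtt

reading = {
    "machine": "extruder-01",
    "timestamp": time.time(),
    "hopper_temperature_c": 62.3,
    "screw_speed_rpm": 82,
    "vacuum_pressure_bar": 0.55,
}

# paho-mqtt 1.x style; with paho-mqtt >= 2.0 pass mqtt.CallbackAPIVersion.VERSION2
client = mqtt.Client()
client.connect("iiot-broker.example.local", 1883)  # connection initiated by production
client.publish("plant1/extruder-01/process", json.dumps(reading))
client.disconnect()
```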

The Cloud

The data transmitted from the various extruders is stored centrally in a database, which allows for a comprehensive analysis across the individual extruders (and machines in general). In addition, process data or information on warranty claims from an ERP or PPS system can be stored in the database, for example the replacement rate of a product, utilization, or the processing time of a workpiece. The data store is selected based on the requirements of the IoT strategy with regard to processing large volumes of data (e.g. scalability, speed, resource allocation, in-memory processing). The data is collected over a set time period in preparation for the training phase. During this time, visualizations (e.g. dashboards) are used to derive an initial benefit from the data.
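
As an illustration, the following sketch writes one reading to a time-series database; InfluxDB is an assumed choice here, and the URL, token, bucket, and field names are placeholders:

```python
# Minimal sketch: store readings from several extruders centrally in a
# time-series database (InfluxDB assumed). Connection details are placeholders.
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

with InfluxDBClient(url="http://localhost:8086", token="my-token", org="plant1") as db:
    write_api = db.write_api(write_options=SYNCHRONOUS)
    point = (
        Point("extrusion")
        .tag("machine", "extruder-01")
        .field("melt_temperature_c", 203.7)
        .field("outer_diameter_mm", 31.9)
    )
    write_api.write(bucket="process-data", record=point)
```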

Training

Once there is enough data in the database to train a machine learning (ML) model, model development begins. From the training data, the model learns to predict the inner and outer diameters of the plastic pipe during the extrusion process; the process parameters described above are used as input values. The data in the database first undergoes preprocessing, which extracts the relevant sensor data, among other things, and normalizes or standardizes the features. Then the data is split into three sets:

  • Training data consists of examples from which a machine learning model learns patterns in the data and derives regularities.
  • Validation data is used to fine-tune the model's parameters (e.g. hyperparameters) for an optimal prediction of the quality.
  • Test data is used to evaluate the quality of the model’s predictions.

Based on the training data, a machine learning model is trained to predict the metrics relevant to quality assurance, such as the pipe diameters. The trained model is evaluated on the validation and test data to ensure its quality, and is then deployed for use in production with current extruder data. The training process is a cycle that is repeated again and again over time. Retraining is needed especially when circumstances change, for example when new equipment or sensors are commissioned or new data is collected. For this reason, a high degree of automation is important: an accurate prediction can be made only if the machine learning model is familiar with all the patterns in the data and is therefore always up to date.
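
The following minimal sketch condenses this cycle (preprocessing, train/validation/test split, training, evaluation). The synthetic data stands in for real extruder records; the split ratios and model choice are assumptions:

```python
# Minimal sketch of the training cycle: preprocessing, data splits,
# model fitting, and evaluation. Data and model choice are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X = np.random.rand(1000, 8)  # 8 process parameters (see list above)
y = np.random.rand(1000, 2)  # targets: inner and outer pipe diameter

# Split off test data first, then validation data from the remainder.
X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.2)
X_train, X_val, y_train, y_val = train_test_split(X_tmp, y_tmp, test_size=0.25)

# Preprocessing: standardize features based on the training data only.
scaler = StandardScaler().fit(X_train)

model = RandomForestRegressor(n_estimators=200)
model.fit(scaler.transform(X_train), y_train)

# Validation guides fine-tuning; test data gives the final evaluation.
print("val MAE:", mean_absolute_error(y_val, model.predict(scaler.transform(X_val))))
print("test MAE:", mean_absolute_error(y_test, model.predict(scaler.transform(X_test))))
```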

Operation

The model deployed from the training process predicts the inner and outer pipe diameters based on current data so that a production worker can take immediate measures if the expected quality is inadequate. Current extruder data also undergoes preprocessing when the model is in operation; this ensures that the ML model can process it correctly. With the current data and the patterns learned during training, the machine learning model makes an accurate prediction, which is saved back in the database as an output. This way, all relevant sensor data, process data, and the prediction can be visualized and monitored in a dashboard such as Grafana, and customer service portals can consume, integrate, and display the data. The optimal process parameters can thus be adjusted quickly to ensure the high quality of the plastic pipe product and to produce fewer rejects by taking preemptive action. For example, recommendations for action based on root-cause analyses at the machine and component level can also be given to production workers. Further analyses, such as forecasts of warranty claims, are also possible.
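
In code, the operation phase might look like the following minimal sketch; the serialized model and scaler files, the feature vector, and the tolerance band are illustrative assumptions:

```python
# Minimal sketch of the operation phase: preprocess a current reading,
# predict the diameters, and react. File names and limits are assumptions.
import joblib
import numpy as np

model = joblib.load("diameter_model.joblib")   # trained in the training cycle
scaler = joblib.load("scaler.joblib")          # fitted preprocessing step

# Current process parameters from the extruder (8 assumed features)
current = np.array([[61.8, 82.0, 204.1, 215.0, 210.3, 18.5, 0.54, 1250.0]])
inner_mm, outer_mm = model.predict(scaler.transform(current))[0]

if not (31.5 <= outer_mm <= 32.5):  # assumed tolerance band
    print(f"Predicted outer diameter {outer_mm:.2f} mm out of tolerance - alert worker")

# The prediction would then be written back to the database so that
# dashboards (e.g. Grafana) and service portals can display it.
```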

The technical implementation of both the training and the prediction pipeline consists of decoupled, scalable, and interchangeable microservices that can be implemented platform-independently with Docker containers, for example. This allows individual services to be written in different programming languages, because uniform communication via HTTP-based interfaces (e.g. REST) ensures their cooperation. The microservices are integrated into an existing or new cluster. A machine learning framework such as TensorFlow or Keras is used to build and train deep neural networks as required, evaluate the results, and deploy models. A tool such as DVC (Data Version Control) is used for the versioning of data and models, as well as for pipelining, to ensure reproducibility, maintainability, and traceability across the training cycle.
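
As an example of such a decoupled service, the following sketch exposes the prediction behind an HTTP endpoint; FastAPI is an assumed framework choice, and the fixed response merely illustrates the interface:

```python
# Minimal sketch of a prediction microservice communicating via HTTP/REST.
# FastAPI is an assumed choice; field names and the response are illustrative.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ProcessReading(BaseModel):
    melt_temperature_c: float
    vacuum_pressure_bar: float
    screw_speed_rpm: float

@app.post("/predict")
def predict(reading: ProcessReading) -> dict:
    # In a real service, the trained model would be loaded at startup;
    # the fixed response here only illustrates the interface contract.
    return {"inner_diameter_mm": 26.1, "outer_diameter_mm": 32.0}

# Run with: uvicorn service:app  (typically packaged in a Docker container)
```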

Our Predictive Quality Services

If data is already collected by an IIoT platform, we usually proceed in four steps for a predictive quality project:

  1. As a first step, a joint workshop is held to record the current status. We want to understand which data and data sources exist, how large the volume of data produced daily is, and how often the process parameter data is updated. Above all, we clarify whether usable example data exists at all. This information allows us to make an initial estimate of the quality of the data and of how frequently a new version of the model must be trained. We are also interested in which existing (IIoT) platform is used, how the architecture is structured, which peripheral systems (e.g. ERP systems, BPM systems) are present, and which systems must be integrated. A target architecture is then defined together with the technology strategy to be followed, e.g. an open-source-first or a managed-cloud-services-first strategy. Based on these principles, we develop the objectives together with you.
  2. As a second step, we evaluate the quality of the data by means of an exploratory data analysis with regard to its suitability and its power to predict the quality (see the sketch after this list). We use a representative data set provided by you for this purpose. This is followed by a prototypical implementation of one or more candidate machine learning (ML) models as well as their evaluation and documentation as part of a feasibility study (proof of value). This produces a qualified decision recommendation, including opportunities and risks. In addition, findings and insights from the existing data can be revealed, and relationships and correlations can be visualized.
  3. As a third step, the predictive quality solution is implemented based on the architecture developed and the objectives. The solution is implemented depending on whether the company uses a cloud native application or managed services of a cloud provider (e.g. AWS or Azure). A holistic machine learning solution that comprises both a training phase and an inference phase is created. This includes a machine learning pipeline for the lifecycle of the machine learning model and the associated versioning of data and models. We attach great importance to automation and scaling in our predictive quality solution. In order to guarantee the availability of the production system and ensure a smooth operation, the services are monitored – in terms of the operational aspects (e.g. response time, number of calls, use of memory, or CPU utilization) as well as the qualitative aspects of the prediction (e.g. accuracy). From this information a conclusion can be made as to when a new model should be trained and how the application should scale automatically with the load.
  4. In the fourth step, the enablement of the DevOps engineers, the specialist areas, and the data scientists takes place. We familiarize your employees with the technologies and methods in workshops or training sessions. Here it doesn’t matter whether it is just a technical enablement or if we are teaching your employees the basics of machine learning. What is important to us is that you receive the greatest advantage possible for yourself and your customers from the predictive quality solution.
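
For step 2, such an exploratory analysis often starts with simple correlation checks. A minimal sketch, assuming a representative CSV export with the column names shown:

```python
# Minimal sketch of the exploratory data analysis from step 2: inspect
# correlations between process parameters and a quality metric.
# File name and column names are illustrative assumptions.
import pandas as pd

df = pd.read_csv("representative_extruder_data.csv")

# Correlation of each process parameter with the measured outer diameter
print(df.corr(numeric_only=True)["outer_diameter_mm"].sort_values())
```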

If connectivity has not yet been established and an IIoT platform is not yet available, we will gladly take care of that and expand the scope of the project accordingly to include additional steps. Have a look at our IoT program.

Your direct contact

Dr. Harald Bosch

Senior Consultant