Get an overview
Our portfolio of services for data evaluation is comprehensive and applicable in IoT and Industry 4.0 projects as well as in the Industrial Internet of Things. A highly trained team of experts can help you choose the right methods, tools, and platforms. Our general consulting offering is extremely flexible and includes individual solution ideas in the fields of data intelligence, digital experience, and value engineering.
For a brief overview of the topics we deal with in the data evaluation area, see the article below.
Data evaluation essentials
The IoT and Industry 4.0 – everyone’s talking about networking, about linking “things” and devices that talk to each other, exchange information, and even negotiate contracts with each other. Does data evaluation play a role at all here? Well of course – the evaluation of data is what breathes life into the Internet of Things. Without data evaluation, the IoT and Industry 4.0 remain lifeless and ineffective.
Let’s indulge in a comparison between the IoT/Industry 4.0 and our (yes, the human) body. Without a doubt, you’re aware of these things:
- You use your head to conjure up complex thoughts, draw conclusions, and apply them to the rest of your body.
- You have reflexes in your arms, legs, and respiratory organs.
- It’s not enough to master an instrument with your head – your hands are required, too.
This is what we’re getting at: Depending on what you want to achieve, it must be possible to evaluate data in various different places in an IoT solution (there’s no need to take the bodily analogy any further here):
- Coherent interrelations must be created – and naturally, this can only be done in a higher-level component that has access to the different data sources (or that actually stores the data itself).
- Data needs to be evaluated quickly and directly in the local setting. The best way to do this is close to where the data is generated – that is, near the sensors.
We don’t want to bore you with technical details here, but to widen your horizons – what’s possible, what’s important, and what you need to think about. And you do need to think about it in order to make the most of what you have!
It might seem trivial, but it’s often underestimated. There’s a well-known idiom: “A fool with a tool is still a fool”. Applied to data transformation, we might say “Data in the wrong format cannot be analyzed”. Is that too harsh? Our experience shows that it’s exactly right. A simple example: You want to find a specific employee in a list of all of your employees. If the list is not sorted and your company happens to be large, this can take an extremely long time. Of course, there are far more complex examples, but basically the data must meet the needs of your (analysis) purpose and be properly prepared so that the value it contains can be extracted.
Data transformation takes place at various different levels:
- Conversion of analog signals into digital signals
- Transformation of digital values of sensors into data packages with values, timestamps, and the sensor ID
- Transformation of protocols
- Transformation as preparation for the use of algorithms, machine learning procedures, data analytics methods and so on
There are different options for the transformation of the data depending on the available hardware. At top level (so in the cloud, for example), you find products and frameworks that permit an abstract, possibly visual definition of the transformation. At the level near the sensors, you must use simpler options.
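As a small sketch of the second transformation level listed above – turning a raw digital sensor value into a data package with value, timestamp, and sensor ID – the following snippet shows one simple option near the sensor. The field names and the scaling factor are purely illustrative assumptions, not a fixed format:

```python
import json
import time

def to_packet(sensor_id: str, raw_value: int, scale: float = 0.01) -> dict:
    """Wrap a raw digital reading (e.g. ADC counts) in a data package
    with value, timestamp, and sensor ID (hypothetical packet layout)."""
    return {
        "sensor_id": sensor_id,
        "value": raw_value * scale,   # convert raw counts to a physical unit
        "timestamp": time.time(),     # seconds since epoch
    }

# A raw reading of 2350 counts becomes a 23.5 (e.g. degrees C) value
packet = to_packet("temp-01", 2350)
print(json.dumps(packet))
```

Higher levels would then only need to parse such self-describing packages instead of interpreting raw signals.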
It might seem obvious, but we find ourselves needing to repeat it: We benefit if we restrict the set of data to be processed to a sensible size. Keeping the amount of data manageable means that you’ll always be able to speed up the system that you’re setting up or using.
Some sensors such as induction-based position sensors generate 1000 data records or more per second. Does all of this information need to pass through all levels of your application? Probably not – so you filter out and aggregate data in the appropriate place, as early on in the process as possible.
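To make the filtering and aggregation idea concrete, here is a minimal sketch: it reduces a high-frequency stream (say, 1000 readings per second) to one summary record per second before anything is forwarded to the next level. The window size and the chosen aggregates are assumptions for illustration:

```python
from statistics import mean

def aggregate(readings, window=1000):
    """Reduce a high-frequency stream to one summary record per window
    (min, max, mean), filtering the flood of raw values as early as possible."""
    out = []
    for i in range(0, len(readings), window):
        chunk = readings[i:i + window]
        out.append({"min": min(chunk), "max": max(chunk), "mean": mean(chunk)})
    return out

# 3000 simulated position readings -> 3 aggregate records
raw = [i % 10 for i in range(3000)]
summary = aggregate(raw)
print(len(summary))  # 3
```

Only three small records travel upward instead of 3000 raw values – exactly the early reduction the text describes.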
Source: © 2018 Industrial Internet Consortium, a program of Object Management Group, Inc. (“OMG”), Introduction to Edge Computing in IIoT
IoT applications generally save data at all levels. Because – as can be seen in the next section – data analyses are carried out at all levels, too. The reasons for this are as follows:
- Data filters: The early filtering of data helps to keep the flood of data to a minimum.
- Security: It might make sense to forward only the result of an analysis to the next level. This can be illustrated using the example of a fingerprint sensor, which should only pass on information as to whether the finger in question has been recognized. The actual fingerprint data should not leave the device!
- Performance and efficiency: An on-the-spot analysis often enables a distributed workload and a fast reaction. Frequently, historical data is required, too.
- Failure safety: If data were only saved (and analyzed) in one place (e.g. on-premise), this would create a dramatic SPOF (single point of failure).
During the creation of IoT and Industry 4.0 solutions, the strategies for saving data must be clearly defined and documented. As shown, various criteria play a part here.
Like data storage, data analysis in IoT applications takes place at all levels. This is easy to grasp if you think about the photo app on your smartphone, for example:
- A photo is taken with your smartphone (the smartphone is the device and the image sensor is the sensor). Today, the integrated image sensor has a whole host of analysis tools such as an auto focus function.
- Your photo app might have further gimmicks such as face distortion or the addition of a virtual pair of glasses or a crown etc. And all of this is done locally on your smartphone.
- Once the photo has been taken, it is uploaded to the cloud. There, person/face recognition functions may be used to link the photo with stored persons.
As you can see, in the example, data is saved at different levels and is analyzed at different levels, too. The reasons for carrying out analyses at different levels are manifold, ranging from security to performance and network capacity.
Data analytics level
Starting from a really simple example (smartphone) and the specific question of where data is analyzed, we can take a look at data analysis from an entirely different angle: What added value does data analysis offer us? Take a look at the following chart:
Image: Data analytics level, based on Gartner.
On the basis of the Gartner Data Analytics Maturity Model, we can assign added value to different analysis methods. The further our statements reach into the future, the greater the added value. This makes perfect sense: If we can shape the future (top right, prescriptive analytics), the anticipated added value is greater than if we are merely able to understand the past (bottom left, descriptive analytics).
IoT applications and Industry 4.0 scenarios benefit from all analyses. The further into the future we want to go and the more we want to influence the future, the more likely we are to come across complex analysis models. Naturally, the most added value is created at the higher levels (so at application level and not at sensor level). The reasons for this are as follows:
- The data volume and breadth of data increases.
- The available computing power increases.
- The possibilities for influencing the entire scenario increase.
Image: © Novatec
The term “machine learning” has for some time been used to denote methods that primarily use neural networks to analyze data, make predictions, detect anomalies, classify topics, and make decisions. Even if machine learning is often seen as a complex process, machine learning methods are increasingly found at lower levels, right next to sensors and actuators. This is where machine learning can really play to its strengths – the highly efficient interpretation of abstract data. In addition, machine learning models can be well adjusted – or adjust themselves – in line with changing circumstances.
Further aspects of data analysis include the following:
- How is the absence of data interpreted?
- Where are plausibility checks implemented? What for?
- How are errors in data determined? (Here, too, machine learning can be suitable for defining a “corridor” of normal data.)
- Which correlations can offer important information?
- How can customers be provided with the best possible support?
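The “corridor” of normal data mentioned above can be sketched very simply even without a machine learning library: a mean plus/minus a few standard deviations of recent normal readings serves as a stand-in for a learned model. The threshold factor k and the sample values are illustrative assumptions:

```python
from statistics import mean, stdev

def corridor_check(history, new_value, k=3.0):
    """Return True if new_value lies inside a 'corridor' of
    mean +/- k standard deviations of recent normal data.
    (A simple statistical stand-in for a learned model.)"""
    m, s = mean(history), stdev(history)
    lower, upper = m - k * s, m + k * s
    return lower <= new_value <= upper

normal = [20.0, 20.5, 19.8, 20.2, 20.1, 19.9, 20.3, 20.0]
print(corridor_check(normal, 20.4))   # inside the corridor -> True
print(corridor_check(normal, 35.0))   # far outside -> False
```

A trained model could replace the statistics here, but the principle – flagging values that leave the corridor of normality – stays the same.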
Streaming and static analyses
Image: © Novatec
You’ll notice that data can be evaluated retrospectively or on an ad-hoc basis (streaming). Previously, retrospective data evaluation dominated: Once a month, a report or evaluation was produced using batch processing. These evaluation processes are sometimes still used today. But increasingly, we’re using real-time analysis or stream processing, since we want to be able to respond immediately if, for example, share prices start to fall – not several hours later. We need to increase production volumes as soon as demand rises, not a month after the fact. The computing-intensive analyses based on past data have not been eliminated; they’re carried out in addition to this new kind of evaluation. Don’t be surprised if data analysis becomes a major part of your IoT and Industry 4.0 project!
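The difference between the two styles can be sketched in a few lines: a batch evaluation runs once over the complete history, while a streaming evaluation updates its result with every incoming event and could trigger a reaction immediately. The demand figures below are invented for illustration:

```python
class StreamingMean:
    """Incrementally updated mean: each new value yields a fresh result,
    so a reaction can be triggered immediately instead of waiting for a
    monthly batch run."""
    def __init__(self):
        self.count = 0
        self.total = 0.0

    def update(self, value):
        self.count += 1
        self.total += value
        return self.total / self.count   # current mean after each event

def batch_mean(values):
    """Classic retrospective evaluation over the full data set."""
    return sum(values) / len(values)

demand = [100, 102, 98, 140, 150]
s = StreamingMean()
running = [s.update(v) for v in demand]       # one result per event
print(running[-1], batch_mean(demand))        # both end at the same mean
```

The streaming variant would have noticed the jump to 140 the moment it arrived; the batch variant only sees it when the whole period is evaluated.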
The bottom line
Some think that data is today’s oil. In these days of energy revolution, we’re not so sure. However, what is certain is that data – when combined with analysis possibilities – can be extremely powerful. It can even allow us to change the future. It’s no coincidence that the biggest data collectors are also among the most valuable companies, Google and Facebook being two of them.
Collect data and use it to:
- Get to know your customers better and thus provide them with better support
- Make your production even more efficient
- Further improve the quality of your products