Glean important information from your data
Our portfolio of data evaluation services covers all relevant topics and is particularly applicable to the fields of IoT, Industry 4.0, and the Industrial Internet of Things.
A highly trained team of experts can help you choose the right methods, tools, and platforms. Our flexible consulting offering includes individual, sometimes surprising, solution ideas in the fields of data intelligence, digital experience, and quality engineering.
Here, we’ll give you a brief overview of the topics that we tackle in the data evaluation area.
Data evaluation essentials
The IoT and Industry 4.0 – everyone’s talking about networking, about linking “things” and devices that talk to each other, exchange information, and even negotiate contracts with each other. Does data evaluation play a role at all here? The answer: The evaluation of data is what breathes life into the Internet of Things.
We’d like to make a comparison: Let’s compare the IoT and Industry 4.0 with our body (yes, the actual human body). Without a doubt, you’re aware of these things:
- You use your head to conjure up complex thoughts, draw conclusions, and make transfers to other parts of your body.
- You have reflexes in your arms, legs, and respiratory organs.
- It’s not enough to master an instrument with your head – your hands are required, too.
This is what we’re getting at: It must be possible for data to be analyzed in various different places in IoT solutions depending on what you want to achieve (there’s no need to go into further detail about the bodily analogy here):
- Coherent interrelations must be established. Naturally, this can only be done in a higher-level component that has access to the different data sources or that actually stores the data itself.
- Data needs to be analyzed quickly and directly in the local setting. The best place to do this is close to where the data is generated – i.e. near the sensors.
We don’t wish to worry you with the technical details here. Instead, we just want to open your eyes to everything that’s possible, what’s important, and the direction in which you might turn your thoughts in order to get the best out of the options available to you!
It might seem trivial, but it’s often underestimated. The idiom “A fool with a tool is still a fool” is well known. Applied to data transformation, we might say: “Data in the wrong format cannot be analyzed.” Is that too harsh? Our experience shows it’s exactly right.
A simple example: You want to find a specific employee in a list of all employees. If this list is not sorted, and your company happens to be large, this might take an extremely long time. There are much more complex examples than this, but the general philosophy is always the same: The data must meet the needs of your (analysis) purpose and be properly prepared so that the contained value can be extracted.
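To make the employee example concrete, here’s a minimal Python sketch (the names and list are purely illustrative): searching an unsorted list means checking entry after entry, while a simple preparation step – sorting – allows a binary search that narrows the result down in a handful of steps.

```python
from bisect import bisect_left

# Hypothetical employee list -- names are illustrative only.
employees = ["Miller", "Adams", "Zhou", "Keller", "Ortiz"]

def find_unsorted(names, target):
    """Linear scan: in the worst case, every entry must be checked (O(n))."""
    for i, name in enumerate(names):
        if name == target:
            return i
    return -1

def find_sorted(sorted_names, target):
    """Binary search on pre-sorted data: O(log n) comparisons."""
    i = bisect_left(sorted_names, target)
    if i < len(sorted_names) and sorted_names[i] == target:
        return i
    return -1

prepared = sorted(employees)               # the "transformation" step
print(find_unsorted(employees, "Keller"))  # works, but may scan the whole list
print(find_sorted(prepared, "Keller"))     # homes in via binary search
```

With five entries the difference is invisible; with a hundred thousand employees, the prepared list is the only practical option – which is exactly the point about preparing data for its analysis purpose.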
Data transformation takes place at various levels
- Conversion of analog signals into digital signals
- Transformation of digital values of sensors into data packages with values, timestamps, and the sensor ID
- Transformation of protocols
- Transformation as preparation for the use of algorithms, machine learning procedures, data analytics methods and so on
There are different options for transforming the data depending on the available hardware. At the highest level (in the cloud, for example), you’ll find products and frameworks that permit an abstract, possibly visual, definition of the transformation. At the level near the sensors, you must rely on simpler options.
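One of the transformation levels mentioned above – wrapping raw digital sensor values into data packages with a value, timestamp, and sensor ID – can be sketched in a few lines. The field names, scale factor, and JSON layout here are illustrative assumptions, not a specific product’s format:

```python
import json
import time

def to_packet(sensor_id, raw_value, scale=0.01):
    """Wrap a raw digital sensor reading into a self-describing data package.
    The scale factor converts raw ADC counts into engineering units;
    both the factor and the field names are illustrative assumptions."""
    return {
        "sensorId": sensor_id,
        "value": raw_value * scale,   # e.g. raw counts -> degrees Celsius
        "timestamp": time.time(),     # seconds since the Unix epoch
    }

packet = to_packet("temp-07", 2315)   # raw count 2315 -> 23.15 in target units
print(json.dumps(packet))
```

Near the sensor, this might run on a microcontroller in C; in the cloud, the same mapping could be defined visually in a transformation framework – the principle is identical.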
For you, it might be the most obvious thing in the world, but we maintain that it can’t be repeated too often: everyone benefits from keeping the volume of data to be processed to a sensible size. A manageable amount of data means you’ll always be able to speed up the system that you’re setting up or using.
Some sensors, such as induction-based position sensors, generate 1000 data records or more per second. Does all of this information need to pass through all levels of your application? Unlikely. So you filter out and aggregate data in the appropriate place – as early on in the process as possible.
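Early filtering and aggregation can be sketched very simply: instead of forwarding every single reading, emit one summary record per window. The window size and field names below are illustrative assumptions; a real deployment would pick them to match the sensor and the application.

```python
# Minimal sketch of early aggregation near the sensor: reduce a stream of
# raw readings to one min/max/mean summary per window, so higher levels
# only see a fraction of the original data volume.

def aggregate(readings, window=1000):
    """Yield one summary record per `window` raw readings."""
    for start in range(0, len(readings), window):
        chunk = readings[start:start + window]
        yield {
            "min": min(chunk),
            "max": max(chunk),
            "mean": sum(chunk) / len(chunk),
            "count": len(chunk),
        }

# Three seconds of simulated data from a sensor producing 1000 readings/s.
raw = [float(i % 50) for i in range(3000)]
summaries = list(aggregate(raw))
print(len(summaries))   # 3 summary records instead of 3000 raw readings
```

Whether a min/max/mean summary is sufficient – or whether, say, outliers must be preserved – depends entirely on what the higher levels need; that decision is part of the data strategy, not an afterthought.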
IoT applications generally save data at all levels. Because – as can be seen in the next section – data analyses are carried out at all levels, too.
- Data filters: The early filtering of data helps to keep the flood of data to a minimum.
- Security: It might make sense to forward only the result of an analysis to the next level. This can be illustrated using the example of a fingerprint sensor, which should only pass on information as to whether the finger in question has been recognized. The actual fingerprint data should not leave the device if it can be avoided.
- Performance and efficiency: An on-the-spot analysis is often required to enable distributed processing and a fast reaction. Frequently, historical data is needed as well.
- Failure safety: If data were only saved (and analyzed) in one place (e.g. on-premise), this would create a dramatic SPOF (single point of failure).
During the creation of IoT and Industry 4.0 solutions, the strategies for saving data must be clearly defined and documented. As shown, various criteria play a part here.
Like data storage, data analysis in IoT applications takes place at all levels. This is easy to grasp if you think about the photo app on your smartphone, for example:
- A photo is taken with your smartphone (the smartphone is the device and the image sensor is the sensor). Today, the integrated image sensor has a whole host of analysis tools such as an auto focus function.
- Your photo app might have further gimmicks such as face distortion or the addition of a virtual pair of glasses or a crown etc. And all of this is done locally on your smartphone.
- Once the photo has been taken, it is uploaded to the cloud. There, person/face recognition functions may be used to link the photo with stored persons.
As you can see, in the given example, data is saved and analyzed at different levels. The reasons for carrying out analyses at these levels are manifold, ranging from security to performance and network capacity.
What added value does data analysis offer you?
Starting from a really simple example (smartphone) and the specific question of where data is analyzed, we can take a look at data analysis from an entirely different angle:
On the basis of the Gartner Data Analytics Maturity Model, we can assign added value to different analysis methods. The further into the future we’re able to make statements, the more added value we gain. This is easy to understand: If we can shape the future (top right, prescriptive analytics), the added value to be anticipated is greater than if we are merely able to understand the past (bottom left, descriptive analytics).
IoT applications and Industry 4.0 scenarios benefit from all analyses. The further into the future we want to go and the more we want to influence it, the more likely we are to come across complex analysis models. Naturally, the most value is created at the higher levels (e.g. at application level rather than sensor level), because at these levels:
- The data volume and breadth of data increase.
- The available computing power increases.
- The possibilities to influence the entire scenario increase.
Analyzing data with machine learning
In order to analyze collected data, make predictions, detect anomalies, classify topics, and also make decisions, various different statistical methods based on machine learning are used. These include different cluster analysis and classification procedures, for example. In the latter case, neural networks are primarily used for data records that require high-quality classification. Even if machine learning is often seen as a complex process, machine learning methods are increasingly found at lower levels, too, right next to the sensors and actuators. This is precisely where machine learning can show its strengths and interpret abstract data really efficiently. In addition, machine learning models can be easily adjusted in line with changing circumstances – or can even adjust themselves!
Further aspects of data analysis
- How is the absence of data interpreted?
- Where are plausibility checks implemented? What for?
- How are errors in data determined? Here, too, machine learning can be suitable for defining a “corridor” of normal data.
- Which correlations can offer important information?
- How can customers be provided with the best possible support?
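The “corridor” of normal data mentioned above doesn’t always require deep learning: a minimal statistical sketch already conveys the idea. Here we define the corridor as the mean plus/minus k standard deviations of historical readings; k = 3 is a common rule of thumb and, like the sample data, an illustrative assumption.

```python
import statistics

def corridor(history, k=3.0):
    """Define a 'corridor' of normal values as mean +/- k standard deviations.
    k=3 is a common rule of thumb, chosen here purely for illustration."""
    mu = statistics.fmean(history)
    sigma = statistics.stdev(history)
    return mu - k * sigma, mu + k * sigma

# Hypothetical historical temperature readings.
history = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2, 20.1, 19.7]
low, high = corridor(history)

def is_anomaly(value):
    """Flag any reading that falls outside the learned corridor."""
    return not (low <= value <= high)

print(is_anomaly(20.0))   # a normal reading
print(is_anomaly(35.0))   # far outside the corridor
```

A machine learning model generalizes this idea: instead of a fixed mean and deviation, the corridor can adapt to seasonality, load patterns, or drifting sensors – and, as noted above, can adjust itself as circumstances change.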
Data streaming and static analyses
You’ll notice that data can be analyzed retrospectively (batch) or continuously as it arrives (streaming). In the past, retrospective evaluation was much more common: Once a month, a report would be generated using batch processing. These evaluation processes are sometimes still used today.
But increasingly, we’re using real-time analysis or stream processing, since we want to be able to respond immediately if, for example, share prices start to fall, not several hours later. We need to increase production volumes as soon as demand rises, and not a month after the fact. The computing-intensive analyses based on past data have not been eliminated, but they’re carried out in addition to this new kind of evaluation.
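The contrast between the two styles can be sketched in a few lines. The order figures and the alert threshold below are invented for illustration; a real system would use a stream processing framework rather than a plain loop.

```python
# Sketch contrasting batch and stream processing on simulated demand data.
orders_per_hour = [12, 15, 11, 40, 45, 42, 13]   # illustrative values

def batch_report(data):
    """Batch style: one report over the whole period, available only afterwards."""
    return {"total": sum(data), "peak": max(data)}

def stream_alerts(data, threshold=30):
    """Streaming style: inspect each event as it arrives and react immediately."""
    alerts = []
    for hour, count in enumerate(data):
        if count > threshold:          # demand is rising -- react now, not next month
            alerts.append((hour, count))
    return alerts

print(batch_report(orders_per_hour))   # the monthly-report view of the data
print(stream_alerts(orders_per_hour))  # immediate reactions during hours 3-5
```

Both views coexist in practice: the streaming path drives immediate reactions, while the computing-intensive batch analyses over historical data continue to run alongside it.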
Don’t be surprised if data analysis becomes a major part of your IoT and Industry 4.0 project!
Using collected data correctly
Some think that data is today’s equivalent of oil. In these days of energy revolution, we’re not so sure. However, what is certain is that data – when combined with analysis possibilities – can be extremely powerful. It can even allow us to change the future. It’s no coincidence that the biggest data collectors are also the most valuable companies: Google and Facebook are two of these.
Collect data and use it to:
- Get to know your customers better and thus provide them with better support
- Make your production even more efficient
- Further improve the quality of your products and make it more visible
What are you waiting for?