Get an overview
Today, new concepts, new products, and new technologies continue to be researched, tested, and brought onto the market. The spirit of invention is active at all levels – and so is a state of constant revolution.
We can help you to achieve an overview – for your application, for your challenge, for your changes, and for your transformation.
Contact us right now – or first get an overview on the next few pages!
Data transport essentials
Data transport. In the age of gigabit connections and 5G, it might seem a complex subject – and sometimes it really is. But it can also be very simple, e.g. if you want to send data to the cloud using MQTT (Message Queuing Telemetry Transport). As you'll see, it also makes sense to distinguish between the IoT and Industry 4.0 when considering data transport.
Protocols at application layer
Generally, initial thoughts on the topic of data transport turn to protocols (more precisely: communication protocols). There are protocols at different layers; the ISO OSI model gives a useful map here. In our environment, we generally work at the application layer, but if necessary we can also work with protocols from the other layers.
First, let's briefly deal with a question we're often asked: all of these protocols support secure transmission, generally using TLS or a related protocol (such as DTLS). Greatly simplified, TLS is what turns HTTP into HTTPS – the secure variant of HTTP that has been used for years to deliver Web pages (like this one!).
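The same TLS that protects Web pages can be layered under most of the protocols below. As a minimal sketch, this is how a client-side TLS context with sensible defaults can be prepared in Python's standard library (no connection is made here; the broker or server endpoint would be supplied later):

```python
import ssl

# Create a client-side TLS context with sane defaults:
# certificate verification and hostname checking are both enabled.
context = ssl.create_default_context()

# TLS 1.2 is a reasonable floor for IoT deployments today.
context.minimum_version = ssl.TLSVersion.TLSv1_2

print(context.verify_mode == ssl.CERT_REQUIRED)  # → True
print(context.check_hostname)                    # → True
```

The same context object could then be handed to an MQTT or HTTP client library; the transport protocol on top does not change how the TLS layer is configured.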
To start with, familiarize yourself with a small selection of IoT protocols:
- MQTT (Message Queuing Telemetry Transport) is an extremely popular binary protocol that supports publish-subscribe.
- Binary means that, unlike plain text, the data is not easily readable by humans at the protocol layer.
- Publish-subscribe is a communication mechanism akin to sending a newsletter to all interested parties: a sender (publisher) provides the information, a broker accepts delivery of it, and receivers (subscribers) get it because they have deliberately registered for the topic in question (e.g. the temperature in Berlin).
- HTTP (Hypertext Transfer Protocol) is the best-known protocol – everyone who has ever loaded a Web page in a browser has used it. It is text-based (so readable by humans) and supports request-response.
- Request-response as a communication mechanism means that a response is returned, or at least expected, for each request. Generally, the requester actively waits for the response, which arrives quickly. However, it is also possible for the requester to wait only passively for an answer (as in an e-mail conversation).
- AMQP (Advanced Message Queuing Protocol) is a binary publish-subscribe protocol that competes with MQTT. However, the higher overheads should be taken into account here.
- CoAP (Constrained Application Protocol) is a binary protocol that supports request-response and, via its Observe extension, publish-subscribe-style communication. It can be seen as a restricted, really slim version of HTTP for machine-to-machine communication.
- XMPP (Extensible Messaging and Presence Protocol) is a text-based publish-subscribe and request-response protocol that was originally designed for message exchange in chat programs.
- DDS (Data Distribution Service) is a binary publish-subscribe protocol that is trying to establish itself in the IoT and in Industry 4.0. It is particularly suitable for machine-to-machine communication.
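The publish-subscribe mechanism that most of these protocols share can be made concrete in a few lines. The following is not a real MQTT client – just a minimal in-memory sketch so the three roles (publisher, broker, subscriber) and the topic registration become tangible; the topic name is a made-up example:

```python
from collections import defaultdict

class Broker:
    """Minimal in-memory broker: accepts publications and forwards
    them to every subscriber registered on the topic."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        for callback in self.subscribers[topic]:
            callback(topic, payload)

broker = Broker()
received = []

# Two receivers deliberately register for the topic in question.
broker.subscribe("sensors/berlin/temperature", lambda t, p: received.append(("display", p)))
broker.subscribe("sensors/berlin/temperature", lambda t, p: received.append(("logger", p)))

# The sender publishes without knowing who is listening.
broker.publish("sensors/berlin/temperature", "21.5 °C")

print(received)  # → [('display', '21.5 °C'), ('logger', '21.5 °C')]
```

Note the decoupling: the publisher never addresses a receiver directly, which is exactly what makes this mechanism attractive for sensors that should not care who consumes their data.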
Industry 4.0 protocols
Among other things, Industry 4.0 describes communication between production plants, so it makes sense that the direct connection to the cloud is not at the forefront here (at least from the point of view of the machine, which wants to communicate with other machines). Here is a selection of protocols that is by no means exhaustive:
- Once you start learning about Industry 4.0, you’re bound to come across OPC UA (Open Platform Communications Unified Architecture). And you’ll quickly realize that OPC UA is much more than just a protocol. OPC UA has the potential to become the de facto standard for industrial plants. It’s a good idea to understand the concepts a little better. OPC UA is based on Internet technology (IP). In the next section, we’ll go into a little more detail on OPC UA.
- ProfiNet has a long history, and is one of the traditional industrial protocols. It has been subject to constant further development and is therefore one of the most widespread specifications. In addition, it works well with OPC UA. ProfiNet is the Industrial Ethernet variant of the traditional Profibus (Fieldbus).
- Modbus TCP is the Ethernet variant of the traditional Modbus (Fieldbus) and is particularly suited to smaller systems. As a result, Modbus TCP is frequently found in this segment, too.
- Ethernet – or, to be more precise, Industrial Ethernet(s) – denotes the modern variants of machine-to-machine communication in industrial plants. Fieldbus can be seen as the predecessor.
- EtherNet/IP is a further protocol specification that is based on Industrial Ethernet. Today, it is one of the most widespread Industrial Ethernet variants.
Data transport in Industry 4.0 applications
In the context of Industry 4.0, protocols can be subdivided into three categories:
- Industrial Ethernet protocols (ProfiNet, Modbus TCP, EtherNet/IP etc.)
- Fieldbus protocols (Profibus, Modbus RTU etc.)
- Wireless protocols (BLE etc.)
Fieldbus is the classic technology and can still be found in almost all Industry 4.0 applications; it found its way into factories before the days of Ethernet. Industrial Ethernet is enjoying higher growth rates, but at present the market is roughly evenly divided between the two technologies. Wireless protocols do not yet play a major part in the industrial sector – quite unlike IoT applications, where we often depend on technologies without a wired connection.
OPC UA occupies a special position among the Industry 4.0 protocols: it is based on the Internet Protocol (IP) and calls for a corresponding network as its basis.
Source: © Plattform Industrie 4.0, Industry 4.0 Communication with OPC UA – Guide for Introduction in Small and Medium-Sized Businesses
OPC UA supports both of the common communication types: synchronous client/server interaction and asynchronous messaging (PubSub). For messaging, OPC UA can use common IoT protocols such as MQTT. In addition, the JSON format familiar from modern software applications is supported as an encoding.
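As an illustration, a JSON-encoded telemetry message published via MQTT might look like the sketch below. The field names are loosely inspired by the OPC UA PubSub JSON mapping and should be treated as illustrative assumptions, not as the normative message format:

```python
import json

# Illustrative, simplified telemetry message as a machine might publish
# it to an MQTT topic. Field names are placeholders loosely modeled on
# the OPC UA PubSub JSON mapping - not the normative format.
message = {
    "MessageId": "42",
    "PublisherId": "plant-1/press-7",
    "Messages": [
        {
            "DataSetWriterId": 1,
            "Payload": {"Temperature": 73.2, "Pressure": 1.8},
        }
    ],
}

# Serialize for transport, then parse again on the receiving side.
encoded = json.dumps(message)
decoded = json.loads(encoded)
print(decoded["Messages"][0]["Payload"]["Temperature"])  # → 73.2
```

The point is that the payload is plain JSON: any cloud application that can parse JSON can consume it, without needing a full OPC UA stack on the receiving side.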
Source: © OPC Foundation, OPC UA – the Heart, Soul, and Mind of Secure Networking
Information models can be defined in detail – including with vendor-specific properties – on the basis of the Information Model Layer of the OPC UA specification (or protocol):
Source: © OPC Foundation, OPC UA – the Heart, Soul, and Mind of Secure Networking
To sum up: OPC UA standardizes communication for Industry 4.0 applications to an exceptional degree. As a kind of super-protocol, it can therefore solve interoperability problems right up to cloud level. OPC UA is backed by a huge number of OPC Foundation members (see https://opcfoundation.org/members), including large software companies such as Microsoft and IBM as well as German small and medium-sized businesses, small consulting firms, and – of course – Siemens and other industrial groups.
Data transport in IoT applications
With regard to the protocols used at application level, Industry 4.0 and the IoT show considerable differences. In the IoT area, for example, fieldbuses are rarely used; most IoT applications build on Ethernet or on a wireless technology such as Bluetooth or WiFi (which we won't discuss further here). In the IoT area, communication paths are also less clearly structured and prescribed, which means there can be multiple (IoT) gateways between a sensor and the data sink in the cloud.
Security then becomes a central issue, particularly in the case of critical applications. The generated data is transmitted securely using symmetric and asymmetric encryption. But what about the generation of data? How do we know that we can trust the source of the data?
Trusted platform modules in brief
The Trusted Computing Group (TCG) can help here with the Trusted Platform Module 2.0 Library Specification (TPM 2.0). It describes how systems can be protected against undesirable outside changes using a hardware chip – covering changes to both hardware and software. In addition, the TPM specification describes how devices can be uniquely identified (for example via an RSA endorsement key).
Many people will have heard of BitLocker, which encrypts the data on the hard drives of laptops used for business purposes. BitLocker uses the TPM chip (if installed) to check whether the hardware or firmware has been changed, and can deny access to the hard drive if it has – for example if the drive has been moved into a new hardware environment.
As you can see, TPM 2.0 can help protect systems against external attacks – both attacks on software components (such as firmware manipulated to enable hacking) and attacks on the hardware. Nevertheless, we should not forget that merely fitting a TPM 2.0 module does not by itself provide adequate security, since:
- The hardware must be checked using the TPM module.
- The software must be checked using the TPM module.
If you’re thinking about how to protect your products, you should get to grips with TPM 2.0. A hardware security module (HSM) might also be relevant if you develop applications that involve the (extremely) frequent creation of security keys.
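The "checking" in the list above boils down to measurements: hashes of firmware and software components are accumulated into the TPM's platform configuration registers (PCRs) via an extend operation, so that any changed component yields a different final value. A pure-Python sketch of the extend idea (illustrative only – real PCR handling goes through the TPM's own interfaces, and the component names here are made up):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: the new PCR value is the hash of the old
    value concatenated with the hash of the new measurement."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

# PCRs start out as all zeroes.
pcr = bytes(32)
for component in [b"bootloader-image", b"kernel-image", b"app-firmware"]:
    pcr = pcr_extend(pcr, component)

# Change any one component and the final value differs,
# so the manipulation is detectable.
tampered = bytes(32)
for component in [b"bootloader-image", b"EVIL-kernel", b"app-firmware"]:
    tampered = pcr_extend(tampered, component)

print(pcr != tampered)  # → True
```

Because each step hashes in the previous value, the order of measurements matters too – an attacker cannot reorder or swap components without changing the result.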
Blockchain and distributed ledger technology
You’ve surely heard of Bitcoin, and you know that a digital currency is ideal for machine-to-machine payment. You might know that when machines need to negotiate contracts, a blockchain could be used. Why? With blockchains, we combine trusted communication, the decentralized protection of data, and data immutability. Isn’t that exactly what we need?
Let's get started: Bitcoin is a specific implementation of blockchain technology, and blockchain is one of the best-known incarnations of distributed ledger technology (DLT). At its core, it's all about saving data securely in a distributed environment without an intermediary (such as a bank).
Image: Distributed ledger technologies
The technologies differ in their properties, such as the number of achievable transactions per second (TPS) and their underlying data structure. Note that distributed ledger technologies are still in their infancy – even if Bitcoin already has a long journey behind it, with a large amount of capital.
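The immutability mentioned above comes directly from the underlying data structure: each block carries the hash of its predecessor, so changing old data breaks the chain. A minimal sketch (not a real blockchain – no consensus, mining, or distribution; the transaction texts are made-up examples):

```python
import hashlib
import json

def block_hash(data, prev_hash):
    """Hash of the block contents; any change alters this value."""
    raw = json.dumps({"data": data, "prev_hash": prev_hash}, sort_keys=True)
    return hashlib.sha256(raw.encode()).hexdigest()

def make_block(data, prev_hash):
    return {"data": data, "prev_hash": prev_hash, "hash": block_hash(data, prev_hash)}

def valid(chain):
    """The chain holds only if every block's hash is intact and each
    block really points at its predecessor."""
    for prev, block in zip(chain, chain[1:]):
        if block["prev_hash"] != prev["hash"]:
            return False
        if block["hash"] != block_hash(block["data"], block["prev_hash"]):
            return False
    return True

chain = [make_block("genesis", "0")]
chain.append(make_block("machine A orders a part from machine B", chain[-1]["hash"]))
chain.append(make_block("machine B confirms delivery", chain[-1]["hash"]))
print(valid(chain))  # → True

chain[1]["data"] = "machine A ordered nothing"  # tamper with history
print(valid(chain))  # → False
```

In a real DLT, this structure is replicated across many nodes, which is what removes the need for a single trusted intermediary: a tampered copy simply fails validation everywhere else.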
There are various different implementations of DLTs, and the applications are manifold. For example, in the IoT environment, the following specific distributed ledger technologies are of interest to us:
- IOTA offers possibilities for the efficient use of micropayments. In the industrial environment, machines will be able to make purchases on the basis of sensor data, for example.
- Ethereum enables the use of smart contracts. Machines can enter into contracts that are associated with actions in certain (negotiated) conditions.
At present, good advice is a rarity – and it’s hard to separate the hype from the reality. Successful projects have indeed already been realized with DLTs. We believe that DLTs will find their niche in future (however large that niche might turn out to be). In any case, it makes sense to take a look at this aspect, and perhaps even investigate it further.
Data transport in projects
Image: © Novatec
Greenfield projects – projects with practically no dependencies from the past – are rare. Some startups enjoy this luxury, but even they are quickly caught up by the past if, for example, their new products and ideas also need to be made available within an existing framework.
Brownfield approaches are the norm – in all areas of software development. And this is one of our strengths as an independent service provider. Because often, tailored solutions that can also deal with unexpected situations are needed.
The choice of the right protocol is sometimes a really easy task – and it’s certainly easy if there’s no choice at all! Often, you’re faced with a decision as to how to connect up new sensors. Or you need to think about how you can now transport data to your cloud application. Various criteria play a role here, including these:
- Existing infrastructure, devices, and procedures
- Existing application landscape
- Preferred architecture/interaction patterns
- Enhancement possibilities and integration capability
- Resource consumption
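A selection like this can be made explicit with a simple weighted scoring sketch. The weights and scores below are made-up placeholders – the point is the method (weigh the criteria, score each candidate, compare), not the numbers:

```python
# Hypothetical weighted decision matrix for protocol selection.
# Weights and scores (1-5) are illustrative placeholders only.
criteria_weights = {
    "fits existing infrastructure": 3,
    "fits application landscape": 2,
    "matches preferred interaction pattern": 2,
    "integration capability": 2,
    "low resource consumption": 1,
}

# One score per criterion, in the order listed above.
candidate_scores = {
    "MQTT": [4, 4, 5, 4, 5],
    "HTTP": [5, 5, 3, 4, 2],
    "AMQP": [3, 4, 5, 4, 3],
}

def total(scores):
    """Weighted sum of a candidate's per-criterion scores."""
    return sum(w * s for w, s in zip(criteria_weights.values(), scores))

ranked = sorted(candidate_scores, key=lambda p: total(candidate_scores[p]), reverse=True)
print(ranked[0])  # → MQTT
```

In practice, the scoring is the easy part; agreeing on the weights with all stakeholders is where the real decision is made.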
The bottom line
Whether it's Industry 4.0 applications or the IoT, protocols are right at the crux of the matter in the era of networked devices. Sometimes the protocol is a given, dictated by the existing plant or devices; sometimes it is an open choice. Either way, you need to think about which road you want to go down, and why. Don't leave the choice to chance – make a deliberate, informed decision instead.
Security plays an important part in the networked world. Unsecured communication can quickly become public, causing your customers’ trust in you to disappear. It’s important to pay attention to data security and data protection right from the start, and doing so will save you hassle and negative publicity later on.
As an independent service provider, we're familiar with all commonly used protocols: from REST-based protocols through binary, point-to-point, and publish-subscribe protocols to machine-to-machine and sensor-to-cloud protocols, in both synchronous and asynchronous varieties. When selecting protocols for our customers, we first let our imagination run free and then make a sensible decision – generally, there's only a small set of options with minor differences between them. Naturally, security plays a central part. So involve us in your projects right from the start, and we'll make sure everything's done properly.