What are Artificial Intelligence & Machine Learning?
As with all new terminology, there are different definitions and interpretations of the meaning of artificial intelligence (AI). A distinction is frequently made between “weak AI” and “strong AI”. The vision behind strong AI is to create intelligence that is versatile, generically applicable, and on a par with humans. Contrary to what robot enthusiasts in the media and advertising material may suggest, strong AI will not have any practical relevance for companies in the foreseeable future. Weak AI, on the other hand, which specializes in clearly defined tasks, is increasingly dominating working life and solving task after task. An AI that translates texts fluently does not also have to be able to understand the content of images. These drastic reductions in complexity compared to strong AI make it possible to create weak AI with today’s computing capacity.
In many fields, AI is equated with deep learning – the training of deep neural networks – while the term “machine learning” (ML) is reserved for classic approaches such as decision trees and support vector machines. Because neural networks are not better suited for every problem, we prefer the following definitions of the terms:
- AI is a system that acts intelligently, viewed from the outside, because it can perform complex tasks automatically. Basically, a complex, manually programmed set of instructions can also appear to be intelligent – think of your GPS.
- ML consists of algorithms that allow complex rules to be learned from data instead of being programmed manually. An algorithm can adapt to different tasks automatically – with a suitable dataset in each case.
- In turn, deep learning (DL) is a part of machine learning. It follows the same procedures as all other ML methods, but uses a multilayered architecture. This allows it to build abstract intermediate representations and, generally speaking, to adapt better both to the data and to the solution – a characteristic, however, that requires a greater amount of training data and computing power.
Let us explain this using an example: Let’s assume that we have to build toy spaceships and want to automate this task. The analogy for programmed AI would be to create a completely defined mold. This allows a machine to cast a lot of spaceships independently. But if we want to change something about the result, we have to create a new mold manually. In classic machine learning, we would define only the individual components, such as windows, power units, and antennas, and the machine would have to learn to assemble a spaceship based on specifications. This approach is considerably more generic: if we change the specifications, the machine can learn to build other spaceships as well. The deep learning analogy in this example would be to give the machine only Lego building blocks and images of spaceships. Thanks to the multilayered structure, the machine would first create intermediate products, such as windows, and then build spaceships with them.
Unlike classic programming, in which a suitable program is written for each new task, ML processes always use the same computing specifications, referred to as “models”, which transform inputs to outputs. In the case of deep learning, these models consist – in simple terms – of several layers of weighted sums with a nonlinear activation function between each of them. Depending on how the weights of the sums are chosen, the same model performs a different task!
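In simplified terms, such a layered model can be sketched in a few lines of Python. This is a toy illustration with arbitrary, made-up weights, not a real trained network:

```python
import math

def forward(x, layers):
    """Pass an input vector through layers of weighted sums,
    applying a nonlinear activation (tanh) after each layer."""
    for weights, biases in layers:
        # Weighted sum per unit: dot product of the weight row with x, plus bias.
        x = [sum(w * xi for w, xi in zip(row, x)) + b
             for row, b in zip(weights, biases)]
        # Nonlinear activation between the layers.
        x = [math.tanh(v) for v in x]
    return x

# A tiny two-layer model: 2 inputs -> 3 hidden units -> 1 output.
layers = [
    ([[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]], [0.0, 0.1, -0.1]),
    ([[0.7, -0.5, 0.2]], [0.05]),
]
output = forward([1.0, 2.0], layers)
```

With different weights, the very same `forward` function computes a completely different input-to-output mapping, which is exactly the point made above.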
Here is where the large amount of data comes into play: No human can adjust the millions of weights of a larger neural network manually in such a way that a task is solved reliably. But if we have many examples with a known solution (the training data), it is an optimization problem that the computer can solve on its own: Find the weights that transform the given examples into known solutions.
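The optimization the computer performs can be illustrated with the simplest possible case: fitting a single weight by gradient descent on examples with known solutions. This is a toy sketch, not a production training loop:

```python
# Training data: examples with known solutions (here the hidden rule is y = 3 * x).
examples = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

w = 0.0              # start with an arbitrary weight
learning_rate = 0.01

for _ in range(1000):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in examples) / len(examples)
    w -= learning_rate * grad  # step toward lower error
```

After the loop, `w` has converged close to 3.0: the computer has found the weight that transforms the given examples into the known solutions, without anyone setting it by hand.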
So ML shifts the effort from the programming to the modeling of the optimization task and the maintenance of the database. This has some advantages:
- Beyond a certain complexity, programming is no longer feasible.
- Especially unstructured data such as audio and video data cannot be processed with rules. For example, how would you start programming speech recognition?
- Large numbers of exceptions can be compensated for easily by more data.
- If the data changes over time, only retraining is necessary – without any manual effort.
- Feedback cycles can be built in to optimize the system independently.
The software development that previously implemented the entire solution is now required to integrate the AI solution and to develop the actual product. Nothing in this basic procedure changes, which is why we have grayed it out in the diagram. The actual technical solution, though, is broken down into two areas. In the modeling, the raw data is collected, cleaned, structured, and analyzed (data science). The processed input data is then transferred to the optimization pipeline. At the same time, the task is translated into an optimization problem: Which target variable is to be optimized? How do I evaluate an output of the system in terms of my task? With these specifications, a learning algorithm can automatically adapt its model to the data and the task. The costly optimization of a solution thus becomes merely a question of the invested computing capacity.
There are some areas of activity and problems for which ML-based solutions tend to be more suitable than others. To roughly classify your problem, you can use the following structure as a guide:
You want to transform an input to an output and are able to determine the correct output for a set of data from the past. In this case, you can use supervised learning methods to train the model: you “supervise” the learning progress with the correct answers, as it were. Depending on whether the expected answer is an estimated value or an assignment to a category, this is referred to as a regression or a classification task. Examples:
- To estimate the remaining time from the sensor data of a machine (regression)
- To determine the responsible department from the text of an email (classification)
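A minimal sketch of supervised classification, with invented training data and a deliberately simple 1-nearest-neighbor rule instead of a full ML library (the features and departments are purely illustrative):

```python
# Labeled training data: (feature vector, known correct answer).
# Here: (message length, number of exclamation marks) -> responsible department.
training = [
    ((120, 0), "support"),
    ((300, 1), "support"),
    ((40, 4), "sales"),
    ((60, 5), "sales"),
]

def classify(features):
    """Assign the label of the closest training example (1-nearest-neighbor)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(training, key=lambda ex: dist(ex[0], features))
    return label

print(classify((50, 3)))   # close to the "sales" examples
print(classify((200, 0)))  # close to the "support" examples
```

The known answers in `training` are the “supervision”: the quality of the model stands and falls with them.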
You have a set of data whose structure is complex or unknown to you. There is no actual “correct” solution, and therefore no “supervision” either. In unsupervised learning, the model extracts or learns the structure of the data and makes it available to you for the actual solution of the problem. Examples:
- To extract the most frequent topics from many user comments (topic modeling)
- To group customers based on their responses to newsletters (clustering)
- To detect unknown problems in production (anomaly detection)
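A minimal unsupervised-learning sketch, assuming two-dimensional customer data (all values invented) and a hand-rolled k-means instead of a full library:

```python
import random

def kmeans(points, k, steps=20, seed=0):
    """Group 2D points into k clusters by alternating between
    assigning points to the nearest center and updating the centers."""
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(steps):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center (squared distance).
            j = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[j].append(p)
        # Move each center to the mean of its assigned points.
        centers = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers, clusters

# Two obvious groups of "customers": (newsletter opens, clicks per month).
points = [(1, 0), (2, 1), (1, 1), (9, 8), (10, 9), (9, 9)]
centers, clusters = kmeans(points, k=2)
```

No one told the algorithm which customer belongs to which group; the grouping emerges from the structure of the data alone.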
In many cases, semi-supervised learning – a combination of the two approaches above – achieves the objective. In language processing, for example, the “structure” of a language is first learned without supervision, because much data can be collected easily here. Based on this learned structure, texts can then be classified under supervision much more easily.
Especially in situations where not all the facts are known, one cannot immediately survey all possible decisions and take the best action. AI faces the same problem. With reinforcement learning, a strategy for making decisions under uncertainty can be learned. It has to balance realizing short-term gains against opening the door to possibly greater gains in the future. Some examples are:
- To assign computation tasks to existing computers so that on average nobody waits for long (optimization)
- To use one robot arm to catch several falling objects (control)
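The trade-off between short-term gains and exploration can be sketched with the simplest reinforcement-learning setting, a multi-armed bandit with an epsilon-greedy strategy. The reward probabilities here are invented:

```python
import random

random.seed(1)

# Three possible actions with unknown average rewards (hidden from the learner).
true_rewards = [0.3, 0.5, 0.8]

estimates = [0.0, 0.0, 0.0]  # learned value estimate per action
counts = [0, 0, 0]
epsilon = 0.1                # fraction of the time we explore

for step in range(5000):
    if random.random() < epsilon:
        action = random.randrange(3)              # explore: try something random
    else:
        action = estimates.index(max(estimates))  # exploit: take the best known action
    # Observe a noisy reward from the environment.
    reward = 1.0 if random.random() < true_rewards[action] else 0.0
    counts[action] += 1
    # Incremental average: shift the estimate toward the observed reward.
    estimates[action] += (reward - estimates[action]) / counts[action]

best = estimates.index(max(estimates))
```

Without the occasional exploration step, the learner could lock onto a mediocre action early and never discover that the third action pays off best in the long run.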
You can find these terms and additional examples in the following topic map:
Architecture of an AI solution
The creation and provision of AI solutions make high demands on the architecture and computing infrastructure. The data flow example shown in the following image explains the relationships between memory requirements, computing requirements, and a highly available application with an AI core.
In each outlined area of the diagram, different demands on the infrastructure come to the fore. Common to all of them, however, is that they can be implemented either with common cloud providers or on your own servers – the elements below the dashed line. When selecting a mix of infrastructures, you should be aware that machine learning benefits strongly from large amounts of data – in the form of higher-quality results – and thus also from a fast connection between data storage and computation. In the diagram, this is visualized by the arrows between the memory-intensive and the compute-intensive areas. Only the AI service at the end of the data flow is decoupled and can also be deployed and periodically updated on edge devices such as smartphones, tablets, and smart sensors.
In contrast to the conventional, programmatic solution, the result is dependent not only on the (easily versionable) program code, but heavily on the data and the infrastructure. For each step – from the raw data to the AI-based product – there are suitable frameworks, libraries, and providers, some examples of which are shown in the diagram. Furthermore, cloud providers also offer integrated solutions.
The following examples from our product range also show this freedom of design.
The individuality of your task is shown on the horizontal axis. Tasks that can presumably be used in a similar form by many players are located in the left-hand area (low individuality). The conversion of text to speech (text-to-speech) as a partial functionality of a chatbot is one example from this area. Here it makes more sense to take advantage of synergy effects and to rely on provider solutions instead of training one’s own models. Great improvements in quality can be achieved by bundling the data from many users of many services. In the right-hand area (high individuality) are very individual tasks or data structures, such as a suggestion system for your online shop. Certainly, existing algorithms can be used for this purpose, because others have the same problem too. But you have to train the model on your own data: ultimately, you need the similarities between your products and customer groups, not Amazon’s.
The abstraction level of the solution is shown on the vertical axis. Staying with the example of a suggestion system: you could write the learning process yourself, build your own solution with existing libraries, use existing cloud services and model your solution there, or even just feed your data into ready-made APIs.
We are happy to help you select the best solutions based on a combination of abstraction level and individuality.
Why Getting into Artificial Intelligence & Machine Learning is Worth the Effort
More and more data is becoming available. For example, data is collected when services are used on the Internet (shops, streaming, customer service, etc.). But devices too – both in the hands of end users and in industrial environments (IoT) – produce increasingly valuable data that you can use.
These enormous amounts of data can no longer be managed by humans alone. Big data is no longer conceivable without AI processes. Fortunately, advances in computing power and data storage have enabled us to process large volumes of data and to use complex AI algorithms. This allows us not only to optimize existing processes, but also to develop completely new areas of business.
AI is worth the effort!
Take advantage of the new opportunities now and bring your technology up to the latest state of the art. There are very many fields of application where solutions with machine learning methods and artificial intelligence offer real added value.
Possible applications of Artificial Intelligence & Machine Learning
The application possibilities of AI and machine learning methods are manifold. In our portfolio we have among others:
Maximize availability and minimize downtime of your machines and systems. Use machine learning and data science methods to find out which maintenance strategy makes the most sense and optimize your plant efficiency through predictive maintenance.
How do you get maintenance that meets your needs? We are happy to help you with this.
No matter whether you want to capture the content of documents as part of process automation or digitize your old stock: with the help of artificial intelligence, the information in the documents can be converted into a digital format, which makes (partially) automated further processing much easier and thus offers high savings potential. In addition, you can use the data obtained in this way much more easily – for example, to develop new offers.
Start your AI-supported document analysis with us!
A chatbot is a resource-saving alternative or supplement to a call center that can handle customer inquiries around the clock. In addition to greater availability, this allows you to save costs and increase service quality. You can also benefit from interfaces that allow a chatbot to automatically perform tasks such as creating a reservation in your system or triggering a product order.
Set up your individual digital assistant with our help.
Depending on the industry, different methods are used for quality monitoring, all of which produce data. From simple sensor data and images to 3D scans: With the data collected, quality can be monitored and predicted using artificial intelligence. In complex parametric relationships, the influence of individual small changes can develop into a major production error. The use of artificial intelligence enables timely countermeasures to be taken and thus ensures the desired quality requirements.
We help you with quality monitoring using artificial intelligence!
With the various methods of artificial intelligence and machine learning, you can automate the processing of claims to a considerable extent. This shortens the processing time and relieves your employees of monotonous work. They then remain available for complex cases where human interaction is unavoidable. The result is increased customer and employee satisfaction.
The mentioned examples in our portfolio are only a very small part of what is possible.
You are also welcome to contact us if you are wondering whether your own idea is feasible.
A Glimpse into the Future
Even if strong AI – possibly enabled one day by quantum computers – is still on the distant horizon, now is the right time to create and implement AI-based products and processes. This allows for new services, more efficient processes, and unexpected findings from your volumes of data. You will benefit quickly from the results and for a long time from the experience gathered.
We are happy to help you with your next steps. For example, with a workshop during the early phase to identify and evaluate the most promising targets within a day. With training that conveys both theoretical principles and useful best practices. Or at any point along your journey.
Just call or write us at email@example.com.