The increasing availability of machine learning and artificial intelligence (AI) applications has made it possible to develop new kinds of automated manufacturing systems. As Professor for Cognitive Production Systems in the Faculty of Engineering Design, Production Engineering and Automotive Engineering at the University of Stuttgart and Head of the Center for Cyber Cognitive Intelligence at the Fraunhofer Institute for Manufacturing Engineering and Automation (Fraunhofer IPA), Professor Marco Huber is investigating innovative applications of machine learning that could make these systems more flexible.
May 11, 2021
This combination of roles places Huber in an ideal position to facilitate productive interactions between research and industry. As a university professor, he investigates state-of-the-art topics not only from a theoretical perspective, but also with a focus on practical requirements in the manufacturing industry. At Fraunhofer IPA, he advises companies interested in using AI to improve their productivity. "The work at Fraunhofer is less about developing new methods," Huber explains, "than about applying methods developed for basic research in ways that will help industry to address the problems it faces."
In November 2020, Fraunhofer IPA and HLRS signed a cooperation agreement that will enable Huber and his colleagues to access resources for high-performance computing (HPC) and simulation at HLRS. Together, the two organizations also plan to undertake research on issues of shared interest — specifically, quantum computing and artificial intelligence. In this interview, Professor Huber describes how the partnership came into being and how HLRS's computing resources will enhance Fraunhofer IPA's research capabilities.
In industry today, you need to have a certain level of expertise to program a robot. This is a situation we want to move beyond. Programming a robot should not be accessible only to experts; everyone should be able to do it when needed.
Instead of programming a robot directly, our approach involves formulating a task that it needs to complete — for example, grasping an object and removing it from a box. In a simulation, a robot can teach itself the best way to do this. At first, it does a pretty bad job, but because this is only a simulation, that's ok — it can't break anything. With the help of a paradigm from machine learning called reinforcement learning, it gradually improves until it reaches a point where we can determine that it is good enough for the application. The nice thing is that as soon as we reach this point, we also automatically have a robot program that we can implement on the factory floor. In a sense, the robot programs itself through this process.
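To make the principle concrete, here is a minimal sketch of reinforcement learning in simulation, written in Python. It is not Fraunhofer IPA's actual setup: the toy one-dimensional "bin," the reward values, and the Q-learning hyperparameters are illustrative assumptions, and a real system would learn against a full physics simulation of the robot and the workpiece.

```python
# Illustrative sketch only: a gripper moves along a discretized line over a
# box and must grasp a hidden item. Failed grasps are penalized, successful
# ones rewarded, and tabular Q-learning gradually improves the policy.
import random
from collections import defaultdict

N_CELLS = 10                      # discretized gripper positions over the box
ACTIONS = ["left", "right", "grasp"]

def run_episode(q, item_pos, epsilon=0.1, alpha=0.5, gamma=0.95, max_steps=50):
    """One simulated attempt; returns True if the item was grasped."""
    pos = random.randrange(N_CELLS)
    for _ in range(max_steps):
        state = (pos, item_pos)
        # epsilon-greedy action selection
        if random.random() < epsilon:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: q[state][i])
        # simulate the outcome of the chosen action
        if ACTIONS[a] == "grasp":
            reward, done, nxt = (1.0, True, pos) if pos == item_pos else (-0.2, False, pos)
        else:
            nxt = max(0, pos - 1) if ACTIONS[a] == "left" else min(N_CELLS - 1, pos + 1)
            reward, done = -0.01, False        # small cost for every movement
        # standard Q-learning update
        q[state][a] += alpha * (reward + gamma * max(q[(nxt, item_pos)]) - q[state][a])
        pos = nxt
        if done:
            return True
    return False

q_table = defaultdict(lambda: [0.0] * len(ACTIONS))
for _ in range(5000):             # bad at first, gradually improves
    run_episode(q_table, item_pos=random.randrange(N_CELLS))

# The learned greedy policy is, in effect, the "robot program".
successes = sum(run_episode(q_table, random.randrange(N_CELLS), epsilon=0.0)
                for _ in range(100))
print(f"grasp success rate after training: {successes}/100")
```

In an industrial setting the tabular Q-function would be replaced by a deep neural network and the toy environment by a physics-based simulator, but the training loop follows the same pattern: act, observe a reward, update, repeat.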
Through simulation, robots can learn to improve their ability to perform tasks such as grasping an item in a box. Image: Fraunhofer IPA.
The advantage of using simulation in robotics is that you don't need access to a physical robot. Normally, a robot must be taken out of production for the entire time it is being programmed. This isn't just a matter of a few hours, but can require a significant amount of time until everything functions correctly. By using simulation, we can avoid such interruptions in production.
We believe that while a machine is manufacturing one product, it should also be able to learn through simulation how it will need to manufacture the next one. This is the vision that motivates our research. It isn't a goal we will be able to implement in the real world today or tomorrow, but we think it should be possible in the coming years.
The vision that we're pursuing is to facilitate the transformation of production to so-called "lot-size-1" manufacturing. There is growing interest in moving away from mass production towards a model that people in the industrial community around Stuttgart refer to as "mass personalization." Companies would like to be able to produce high-quality, individualized products, while at the same time keeping costs low. To make this possible, production machinery must be able to adapt itself efficiently to every new product.
You can see one example of this need in the manufacture of electrical circuit boxes, which are usually assembled in a highly customized way. Nowadays, nearly every circuit box is unique and built by hand. They contain, for example, so-called "top-hat rails" (also called DIN rails), on which components need to be mounted and cabled together. Because of how difficult and expensive this process is, we would like to use simulation to get to a point where a robot could learn how to manufacture customized circuit boxes efficiently.
Reinforcement learning and other kinds of machine learning rely on data that often doesn't exist in the amount or in the form that is needed. Therefore, we use highly precise simulation to generate the vast majority of the data that is needed to train the algorithms. This way, you only need a very small amount of real-world data in order to fine-tune them.
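As a sketch of this idea in Python (using PyTorch), the example below pretrains a small network on abundant simulated data and then fine-tunes it on a handful of "real" measurements. The toy regression task, the simulated domain gap, the network size, and the learning rates are all assumptions for illustration, not Fraunhofer IPA's pipeline.

```python
# Illustrative sim-to-real sketch: pretrain on cheap simulated data, then
# fine-tune the same network on a small, noisy "real-world" dataset.
import torch
from torch import nn

def simulated_batch(n):
    """Cheap, abundant data from an (idealized) simulator."""
    x = torch.rand(n, 1) * 6.0
    return x, torch.sin(x)                        # simulator's model of the process

def real_batch(n):
    """Scarce, expensive real measurements: slightly shifted and noisy."""
    x = torch.rand(n, 1) * 6.0
    return x, torch.sin(x + 0.3) + 0.05 * torch.randn(n, 1)

model = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
loss_fn = nn.MSELoss()

# Stage 1: pretrain on a large amount of simulated data.
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(2000):
    x, y = simulated_batch(256)
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

# Stage 2: fine-tune on a small real dataset with a lower learning rate.
x_real, y_real = real_batch(32)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for _ in range(200):
    opt.zero_grad()
    loss_fn(model(x_real), y_real).backward()
    opt.step()

x_test, y_test = real_batch(200)
print("real-world test MSE:", loss_fn(model(x_test), y_test).item())
```

The key point is the two-stage split: the simulator supplies the bulk of the training signal, and the scarce real data only nudges the model across the gap between simulation and reality.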
At Fraunhofer IPA, we are already doing this to a small degree, but we don't have the necessary computing capacity to carry out large-scale simulations. Right now, it isn't a problem for us to program an individual robot, but as soon as we begin thinking in larger scales — such as several robots or a production process involving multiple interrelated steps — we get to a point where our computing capacity isn't sufficient. This is why having access to the supercomputer at HLRS is a cornerstone of our partnership. The collaboration offers us new opportunities for simulating complex systems, having machines learn through simulation, and transferring the resulting programs to industry, all on a large scale.
One other topic that we would like to address as part of this collaboration is the verification of neural networks. Today, researchers often face the problem that when they want to use neural networks in safety-critical applications, it is difficult to guarantee that the network will do what it is supposed to do. This is where verification comes into the picture. You formulate a set of requirements that the network should satisfy and then mathematically test how well the network meets them. We have been investigating this in a case study with a company, and our initial test run showed that such verification should be possible.
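One widely used family of verification methods propagates bounds through the network to certify a requirement over a whole region of inputs. The sketch below implements the simplest such technique, interval bound propagation, in Python; the toy network, the input box, and the requirement are purely illustrative assumptions and are unrelated to the case study mentioned above.

```python
# Illustrative sketch of interval bound propagation for a small ReLU network:
# given a box of possible inputs, compute a guaranteed upper bound on every
# output and check it against a stated requirement.
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate an input box [lo, hi] through the affine map y = W x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def certify_upper_bound(layers, x_lo, x_hi, y_max):
    """Check that every output stays below y_max for all inputs in the box."""
    lo, hi = np.asarray(x_lo, float), np.asarray(x_hi, float)
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:                 # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return bool(np.all(hi <= y_max)), hi

# Toy 2-2-1 ReLU network standing in for a trained model (random weights).
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(2, 2)), np.zeros(2)),
          (rng.normal(size=(1, 2)), np.zeros(1))]

ok, upper = certify_upper_bound(layers, x_lo=[-0.1, -0.1], x_hi=[0.1, 0.1], y_max=1.0)
print("requirement certified:", ok, "worst-case upper bound:", upper)
```

Tighter verification methods exist, but their cost grows rapidly with network size, which is exactly the computational bottleneck described next.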
Nevertheless, this verification step is extremely computationally expensive, and at Fraunhofer IPA we unfortunately don't have the computing capacity to solve this problem in an acceptable amount of time. In our conversations with the company and with HLRS, we agreed that we will undertake a demonstration project together in the first quarter of 2021. HLRS will make the necessary computing resources available to complete this verification exercise.
Fraunhofer IPA does have a GPU computing cluster available for research using artificial intelligence, but because researchers from many different parts of the organization are now using it, the demands are continually growing and we often run into bottlenecks. Having access to HLRS's computing resources should also help to address this problem.
Fraunhofer IPA, together with HLRS and several other partners, was successful in a proposal to start a new project called SEQUOIA (Software Engineering for Industrial Hybrid Quantum Applications and Algorithms), which will conduct research to understand the future opportunities that quantum computing offers. This kind of computing is still quite new, and so we need to be asking some important questions: What kinds of practical problems could quantum computing solve faster than a classical computer can? And which kinds of problems will, for scientific reasons, not benefit from so-called quantum supremacy?
Through this project, we will gain access to the quantum computer that IBM is making available to the Fraunhofer Society beginning in 2021. The system offers only a relatively small number of qubits, and so although it will not necessarily enable us to solve real-world problems, it will allow us to develop experience and undertake small-scale case studies so that we can gain a better understanding of what might be possible.
Currently, quantum computing — much like the programming of robots — is still something that only experts can do; worldwide, only a small number of people are able to program quantum computers. For this reason, we will also be looking at ways to simplify software development for such systems. By doing so, we hope to make quantum computing more accessible to both scientists and industry.
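To give a sense of what programming a quantum computer involves today, here is a minimal sketch using Qiskit, IBM's open-source quantum SDK (whether SEQUOIA uses exactly this toolchain is an assumption here). The circuit prepares a two-qubit entangled Bell state and runs on a local simulator rather than on real hardware; in current Qiskit versions the simulator comes from the separate qiskit-aer package.

```python
# Minimal illustrative circuit, not a SEQUOIA deliverable: create a Bell state
# and measure it, which should yield roughly half '00' and half '11'.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator            # local simulator backend

circuit = QuantumCircuit(2, 2)
circuit.h(0)                                   # put qubit 0 into superposition
circuit.cx(0, 1)                               # entangle qubit 0 with qubit 1
circuit.measure([0, 1], [0, 1])

counts = AerSimulator().run(circuit, shots=1024).result().get_counts()
print(counts)
```

Even this small example has to be expressed gate by gate, which illustrates why higher-level software engineering for quantum applications is a research topic in its own right.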
— Interview: Christopher Williams