In the field of Artificial Intelligence, artificial neural networks (ANNs) are a class of machine learning models whose design is inspired by the human brain. These models are increasingly deployed within intelligent technical systems, where they make autonomous, data-driven decisions. Since ANNs are black-box models that do not inherently reveal the basis of their decisions, trust in these models is critical for their use in applications outside of laboratory environments. However, despite some similarities, the human brain and ANNs work in distinct ways, which causes ANNs, in contrast to humans, to be vulnerable to so-called adversarial examples.
Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake. To enhance the robustness of models against this type of attack, adversarial training is applied alongside other defensive strategies: the ANN is exposed to generated adversarial examples during training to increase its resilience. However, the increased robustness often comes at the cost of decreased classification accuracy on the original data. To counteract this effect, the focus shifts to more natural examples with semantic modifications that are more likely to occur in real-world applications. These types of adversarial examples are referred to as semantic adversarials.
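As an illustration, a minimal sketch of adversarial training with the fast gradient sign method (FGSM) could look as follows; this is one common instantiation rather than the specific method of the project, and the model, data batch, and perturbation budget epsilon are placeholders (assuming PyTorch):

    import torch
    import torch.nn as nn

    def fgsm_example(model, x, y, epsilon, loss_fn=nn.CrossEntropyLoss()):
        # Craft an adversarial example by stepping along the sign of the input gradient.
        x_adv = x.clone().detach().requires_grad_(True)
        loss_fn(model(x_adv), y).backward()
        return (x_adv + epsilon * x_adv.grad.sign()).detach()

    def adversarial_training_step(model, optimizer, x, y, epsilon,
                                  loss_fn=nn.CrossEntropyLoss()):
        # One training step on a mix of clean and FGSM-perturbed inputs,
        # trading off accuracy on original data against robustness.
        x_adv = fgsm_example(model, x, y, epsilon)
        optimizer.zero_grad()
        loss = 0.5 * loss_fn(model(x), y) + 0.5 * loss_fn(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()

Semantic adversarials replace the purely gradient-driven perturbation above with modifications that carry meaning in the application domain, which is why they are more plausible in real-world operation.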
Within the Ph.D. project, semantic adversarials will be investigated in more depth and adapted to the industrial context. For this purpose, the use case of assessing machine health on the basis of one-dimensional, multimodal signals acquired by a control or an I/O system is suitable. However, machine data sets are often imbalanced, since fault cases are more difficult to collect than healthy states. With the help of generative models, such as Generative Adversarial Nets, semantically meaningful additional data is intended to remedy this circumstance. In particular, the adaptivity of machine learners will be examined, since, e.g., wear and tear or domain adaptations can trigger a drift in the distributions of the machine health data.
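By way of illustration, a generator/discriminator pair for augmenting one-dimensional machine-health signals could be sketched roughly as below; the layer sizes, signal length, and latent dimension are hypothetical and would have to be tuned to the actual signal modalities (assuming PyTorch):

    import torch.nn as nn

    SIGNAL_LEN = 1024   # assumed length of a one-dimensional sensor window
    LATENT_DIM = 64     # assumed size of the generator's noise input

    # Generator: maps random noise to a synthetic (e.g. fault-case) signal window.
    generator = nn.Sequential(
        nn.Linear(LATENT_DIM, 256), nn.ReLU(),
        nn.Linear(256, 512), nn.ReLU(),
        nn.Linear(512, SIGNAL_LEN), nn.Tanh(),
    )

    # Discriminator: scores whether a signal window looks real or generated.
    discriminator = nn.Sequential(
        nn.Linear(SIGNAL_LEN, 512), nn.LeakyReLU(0.2),
        nn.Linear(512, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1), nn.Sigmoid(),
    )

Synthetic fault-case windows sampled from such a generator could then be added to the underrepresented classes of the training set, while the drift induced by wear and tear or domain adaptations would require the generator and classifier to be re-validated over time.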