Publications

Below you find a selection of my publications. You can find the complete list on my Google Scholar profile.


IALE: Imitating Active Learner Ensembles
Christoffer Löffler, Christopher Mutschler
preprint
We use imitation learning to learn a policy for pool-based active learning (AL). We let a set of given AL heuristics work on a dataset and train a policy that imitates the best heuristic at the current stage of training. We apply the policy to a similar domain and show that we successfully learn to imitate a set of hard-coded experts.
| bibtex | code | arxiv | pdf |
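To make the imitation idea concrete, here is a minimal sketch of one training round: every heuristic proposes a query from the pool, the query that improves held-out accuracy the most defines the expert action, and the resulting state/action pairs are what a policy would be trained to imitate. The heuristics, dataset, and bookkeeping are illustrative stand-ins, not the paper's implementation.

```python
# Illustrative sketch only: collect expert demonstrations by letting each AL
# heuristic propose a query and keeping the one that helps validation accuracy most.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_lab, X_pool, y_lab, y_pool = train_test_split(X, y, train_size=20, stratify=y, random_state=0)
X_pool, X_val, y_pool, y_val = train_test_split(X_pool, y_pool, test_size=100, random_state=0)

def uncertainty(clf, X):                       # least-confidence heuristic
    return 1.0 - clf.predict_proba(X).max(axis=1)

def margin(clf, X):                            # (negative) margin heuristic
    p = np.sort(clf.predict_proba(X), axis=1)
    return -(p[:, -1] - p[:, -2])

heuristics = [uncertainty, margin]
demonstrations = []                            # (pool sample, expert choice) pairs for imitation

for _ in range(10):                            # a few AL rounds
    clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    candidates = [int(np.argmax(h(clf, X_pool))) for h in heuristics]
    scores = []                                # validation accuracy after adding each candidate
    for idx in candidates:
        clf_i = LogisticRegression(max_iter=1000).fit(
            np.vstack([X_lab, X_pool[idx]]), np.append(y_lab, y_pool[idx]))
        scores.append(clf_i.score(X_val, y_val))
    best = candidates[int(np.argmax(scores))]  # the "expert" action for this round
    demonstrations.append((X_pool[best].copy(), best))
    X_lab = np.vstack([X_lab, X_pool[best]]); y_lab = np.append(y_lab, y_pool[best])
    X_pool = np.delete(X_pool, best, axis=0);  y_pool = np.delete(y_pool, best)
```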


Policy Adaptation via Self-Supervised State Translation
Christopher Mutschler, Sebastian Pokutta
preprint
We show how to adapt an existing policy when the environment representation changes. We transfer the original policy by translating the environment representation back into its original encoding by sampling observations from both the environment and a dynamics model trained from prior experience. This allows us to bootstrap a neural network model for state translation without using extrinsic rewards.
| bibtex | code |


Learning Continuous-State Markov Chains using Autoregressive Conditional Variational Autoencoders
Ramiz H. Siddiqui, Christopher Mutschler
preprint
We propose autoregressive Conditional Variational Autoencoders to learn Markov sequences of arbitrary order. When conditioning on continuous variables (especially in the case of multi-modal distributions such as Gaussian mixtures) we outperform state-of-the-art baselines in generating synthetic trajectory sequences.
| bibtex |
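As a rough illustration of the model class, the sketch below implements a first-order conditional VAE in PyTorch that encodes a state given its predecessor and rolls out synthetic sequences by feeding generated samples back as the condition. Dimensions, architecture, and losses are assumptions for the sketch, not the paper's configuration.

```python
# Minimal first-order conditional VAE sketch (Gaussian decoder assumed).
import torch
import torch.nn as nn

class ConditionalVAE(nn.Module):
    def __init__(self, state_dim=2, latent_dim=4, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(2 * state_dim, hidden), nn.ReLU())
        self.mu, self.logvar = nn.Linear(hidden, latent_dim), nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim + state_dim, hidden), nn.ReLU(), nn.Linear(hidden, state_dim))

    def forward(self, x_t, x_prev):
        h = self.enc(torch.cat([x_t, x_prev], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        recon = self.dec(torch.cat([z, x_prev], dim=-1))
        return recon, mu, logvar

def loss_fn(recon, x_t, mu, logvar):
    rec = ((recon - x_t) ** 2).sum(dim=-1).mean()                 # Gaussian NLL up to constants
    kld = (-0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1)).mean()
    return rec + kld

@torch.no_grad()
def sample_chain(model, x0, steps=50):
    """Roll out a synthetic trajectory by feeding each sample back as the condition.
    x0: tensor of shape (batch, state_dim)."""
    xs, x_prev = [x0], x0
    for _ in range(steps):
        z = torch.randn(x_prev.shape[0], model.mu.out_features)
        x_prev = model.dec(torch.cat([z, x_prev], dim=-1))
        xs.append(x_prev)
    return torch.stack(xs, dim=1)                                 # (batch, steps + 1, state_dim)
```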


Real-Time Gait Reconstruction for Virtual Reality Using a Single Sensor
Tobias Feigl, Lisa Gruner, Christopher Mutschler, Daniel Roth
Proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR) – Adjunct
We propose an approach to reconstruct gait motions from a single head-mounted accelerometer. We train our models to map head motions to corresponding ground truth gait phases. To reconstruct leg motion, the models predict gait phases to trigger equivalent synthetic animations.
| bibtex |


A Sense of Quality for Augmented Reality Assisted Process Guidance
Anes Redzepagic, Christoffer Löffler, Tobias Feigl, Christopher Mutschler
Proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR) – Adjunct
We show how to combine inertial sensors, mounted on work tools, with AR headsets to enrich modern assistance systems with a sense of process quality. An ML classifier predicts quality metrics from a 9-DoF inertial measurement unit, while we simultaneously guide and track the work processes with a HoloLens AR system.
| bibtex |


Don’t Dive for Labels in a Dirty Pool: Handling Outliers in Active Learning
Christoffer Löffler, Karthik Ayyalasomayajula, Björn Eskofier, Christopher Mutschler
preprint
We propose an active learning (AL) method that works well on datasets that contain Out-of-Distribution (OoD) samples. Since anomalies are a minority, we train a generative model along with the final classifier and use it to filter the data pool prior to the actual AL process. We show that uncertainty-based active learning methods query anomalies, while our filter prevents the classifier from (wrongly) working on OoD data samples.
| bibtex |
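The filtering step can be illustrated as follows; the sketch uses a Gaussian mixture as a stand-in density model (the paper trains a generative model alongside the classifier) and simply drops the lowest-likelihood fraction of the pool before active learning starts.

```python
# Illustrative pool-filtering step only; the density model and threshold are assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

def filter_pool(X_labeled, X_pool, drop_quantile=0.05):
    """Drop pool samples whose likelihood under the density model falls into the
    lowest `drop_quantile` fraction, so uncertainty-based AL does not query them."""
    density = GaussianMixture(n_components=5, random_state=0).fit(X_labeled)
    scores = density.score_samples(X_pool)                 # log-likelihood per pool sample
    threshold = np.quantile(scores, drop_quantile)
    return X_pool[scores >= threshold]                     # filtered pool for the AL loop
```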


The OnHW Dataset: Online Handwriting Recognition from IMU-Enhanced Ballpoint Pens with Machine Learning
Felix Ott, Mohamad Wehbi, Tim Hamann, Jens Barth, Björn Eskofier, Christopher Mutschler
Proceedings of the ACM on Interactive, Wearable and Ubiquitous Technologies
We release a novel dataset for online handwriting recognition with 31,275 upper- and lower-case English letters (52 classes) from 119 participants together with CNN, LSTM, and BLSTM baseline deep learning models for the writer-dependent and writer-independent recognition tasks. 
| bibtex | data |


RNN-aided Human Velocity Estimation from a Single IMU
Tobias Feigl, Sebastian Kram, Philipp Woller, Ramiz Siddiqui, Michael Philippsen, Christopher Mutschler
Sensors 
We propose a hybrid filter that combines a CNN/BLSTM with a Bayesian filter to estimate a person’s velocity from rotation-invariant signals. Our experiments show robustness against different movement states and changes in orientation, even under high dynamics. With a single IMU we outperform the state of the art in terms of velocity and traveled distance, and generalize well to other movement speeds.
| bibtex |
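A rough sketch of the hybrid structure: a small CNN/BLSTM regresses a speed per IMU window and a simple 1-D Kalman filter smooths the resulting sequence. Input shapes, layer sizes, and filter parameters are placeholders, not the values used in the paper.

```python
# Structural sketch of the hybrid CNN/BLSTM + Bayesian-filter idea (assumed shapes:
# IMU windows of 100 samples x 6 channels, one scalar speed per window).
import torch
import torch.nn as nn

class BLSTMSpeed(nn.Module):
    def __init__(self, channels=6, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv1d(channels, 32, 5, padding=2), nn.ReLU())
        self.lstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):                        # x: (batch, time, channels)
        h = self.cnn(x.transpose(1, 2)).transpose(1, 2)
        h, _ = self.lstm(h)
        return self.head(h[:, -1]).squeeze(-1)   # one speed estimate per window

def kalman_smooth(speeds, q=0.01, r=0.25):
    """Simple 1-D Kalman filter over a sequence of per-window speed estimates."""
    est, p, out = speeds[0], 1.0, []
    for z in speeds:
        p += q                                   # predict step (random-walk speed model)
        k = p / (p + r)                          # Kalman gain
        est += k * (z - est)                     # update with the network's measurement
        p *= (1 - k)
        out.append(est)
    return out
```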


Automated Quality Assurance for Hand-held Tools via Embedded Classification and AutoML
Christoffer Löffler, Christian Nickel, Christopher Sibel, Daniel Dzibela, Jonathan Braat, Benjamin Gruhler, Philipp Woller, Nicolas Witt, Christopher Mutschler
European Conference on Machine Learning and Practice of Knowledge Discovery in Databases (ECML-PKDD)
We propose a process monitoring system that uses inertial, magnetic field, and audio sensors that we attach as add-ons to hand-held tools. Embedded classifiers analyse the sensor data and provide direct feedback to workers during the execution of work processes. We show how our system automatically trains and deploys new machine learning models based on new user data.
| bibtex | project | youtube |


High-Speed Collision Avoidance using Deep Reinforcement Learning and Domain Randomization for Autonomous Vehicles
Georgios Kontes, Daniel Scherer, Tim Nisslbeck, Janina Fischer, Christopher Mutschler
IEEE International Conference on Intelligent Transportation Systems (ITSC)
We use domain randomization to study simulation-to-reality transfer for a high-speed collision avoidance scenario. We train the policy not on a single version of the setup but on several variations. Our experiments show that the resulting policy generalizes much better to different vehicle speeds and obstacle distances than policies trained on the non-randomized version of the setup.
| bibtex | pdf | slides | video |
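The randomization itself boils down to resampling scenario parameters at the start of every training episode; the sketch below shows that loop with hypothetical parameter names and ranges (the actual simulator, parameters, and RL algorithm are not reproduced here).

```python
# Illustrative domain-randomization loop; make_env, collect_rollout, and the
# parameter names/ranges are hypothetical placeholders.
import random

def make_randomized_env(make_env):
    """Sample scenario parameters per episode so the policy never overfits
    to a single obstacle distance or approach speed."""
    params = {
        "approach_speed_kmh": random.uniform(60.0, 130.0),
        "obstacle_distance_m": random.uniform(40.0, 120.0),
        "road_friction": random.uniform(0.6, 1.0),
    }
    return make_env(**params)

# Training loop outline (pseudocode, kept as comments):
# for episode in range(num_episodes):
#     env = make_randomized_env(make_env)   # fresh randomization each episode
#     rollout = collect_rollout(policy, env)
#     policy.update(rollout)
```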


ViPR: Visual-Odometry-aided Pose Regression for 6DoF Camera Localization
Felix Ott, Tobias Feigl, Christoffer Löffler, Christopher Mutschler
CVPR Workshops (Joint Workshop on Long-Term Visual Localization, Visual Odometry and Geometric and Learning-based SLAM)
We propose an architecture for long-term 6DoF visual odometry that leverages synergies between absolute pose estimates (from PoseNet-like architectures) and relative pose estimates (from FlowNet-like architectures) by combining both through recurrent layers. Experiments on public datasets and on our own Industry dataset show that our novel design outperforms existing techniques in long-term navigation tasks.
| bibtex |
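Structurally, the fusion can be pictured as two per-frame feature streams that are projected, concatenated, and passed through a recurrent layer before regressing the 6DoF pose; the sketch below uses placeholder feature dimensions instead of the PoseNet-/FlowNet-style backbones.

```python
# Structural sketch of recurrent fusion of absolute- and relative-pose features
# (placeholder feature extractors and dimensions; not the paper's architecture).
import torch
import torch.nn as nn

class PoseFusion(nn.Module):
    def __init__(self, abs_dim=2048, rel_dim=1024, hidden=256):
        super().__init__()
        self.proj_abs = nn.Linear(abs_dim, hidden)    # absolute-pose features per frame
        self.proj_rel = nn.Linear(rel_dim, hidden)    # relative-pose (optical-flow) features
        self.rnn = nn.LSTM(2 * hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 7)              # 3-D position + orientation quaternion

    def forward(self, f_abs, f_rel):                  # each: (batch, time, feat_dim)
        h = torch.cat([self.proj_abs(f_abs), self.proj_rel(f_rel)], dim=-1)
        h, _ = self.rnn(h)                            # temporal fusion across the sequence
        return self.head(h)                           # 6DoF pose per time step
```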


NLOS Detection using UWB Channel Impulse Responses and Convolutional Neural Networks
Maximilian Stahlke, Sebastian Kram, Christopher Mutschler, Thomas Mahr
International Conference on Localization and GNSS (ICL-GNSS)
With a realistic measurement campaign we evaluate different convolutional neural network architectures on the NLOS detection task. We show that most models clearly outperform machine-learning baselines even at low network complexities, and that they also generalize to unseen receivers and environments.
| bibtex | AI-based Positioning |
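A minimal example of what such a classifier can look like: a small 1-D CNN over the CIR taps with two output logits (LOS vs. NLOS). The number of taps and the layer sizes are assumptions; the architectures evaluated in the paper differ.

```python
# Minimal CIR-based NLOS classifier sketch (assumed input: 1-channel CIR magnitude, 152 taps).
import torch
import torch.nn as nn

nlos_classifier = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
    nn.Flatten(),
    nn.Linear(32 * 38, 64), nn.ReLU(),
    nn.Linear(64, 2),                       # LOS vs. NLOS logits
)

cir = torch.randn(8, 1, 152)                # batch of 8 CIR magnitude vectors
logits = nlos_classifier(cir)               # shape: (8, 2)
```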


Localization Limitations of ARCore, ARKit, and Hololens in Dynamic Large-Scale Industry Environments
Tobias Feigl, Andreas Porada, Steve Steiner, Christoffer Löffler, Christopher Mutschler, Michael Philippsen
15th International Conference on Computer Graphics Theory and Applications (GRAPP)
We study the applicability of popular AR systems (Apple ARKit, Google ARCore, and Microsoft HoloLens) in industrial contexts. With an elaborate measurement campaign we show that for such contexts, i.e., when reliable and accurate tracking of a user matters, the Simultaneous Localization and Mapping (SLAM) techniques of these AR systems are a showstopper. While added natural features help, the tracking reliability often cannot be improved sufficiently.
| bibtex |


2019

Deep Reinforcement Learning for Motion Planning of Mobile Robots
Leonid Butyrev, Thorsten Edelhäußer, Christopher Mutschler
arXiv
This paper presents a novel motion and trajectory planning algorithm for nonholonomic mobile robots that uses recent advances in deep reinforcement learning. Starting from a random initial state, i.e., position, velocity, and orientation, the robot reaches an arbitrary target state while taking both kinematic and dynamic constraints into account. Our deep reinforcement learning agent not only processes a continuous state space but also executes continuous actions, i.e., the acceleration of wheels and the adaptation of the steering angle. We evaluate our motion and trajectory planning on a mobile robot with a differential drive in a simulation environment.
| bibtex |


Sick Moves! Motion Parameters as Indicators of Simulator Sickness
Tobias Feigl, Daniel Roth, Stefan Gradl, Markus Wirth, Marc Erich Latoschik, Bjoern M. Eskofier, Michael Philippsen, Christopher Mutschler
IEEE Transactions on Visualization and Computer Graphics (TVCG)
We explore motion parameters, more specifically gait parameters, as an objective indicator to assess simulator sickness in Virtual Reality (VR). We used two different pose estimation methods for the evaluation of motion tasks in a large-scale VR environment: a simple model and an optimised model that allows for a more accurate and natural mapping of human senses. The results show that both models affect the gait parameters and simulator sickness. We further trained a classifier that exploits the non-linear correlation of gait parameters to assess simulator sickness from gait alone.
| bibtex |


A Framework for Location-Based VR Applications
Jean-Luc Lugrin, Constantin Kleinbeck, Daniel Roth, Christian Daxer, Tobias Feigl, Christopher Mutschler, Marc Erich Latoschik
16th Workshop of the GI Special Interest Group VR/AR
This paper presents a framework to develop and investigate location-based Virtual Reality (VR) applications. We demonstrate our framework by introducing a novel type of VR museum, designed to support a large number of simultaneous co-located users. These visitors walk in a hangar-scale tracking zone (600 m²) while sharing a virtual space that is ten times bigger (7,000 m²). Co-located VR applications like this one open novel VR perspectives. However, sharing a limitless virtual world using a large, but limited, tracking space also raises numerous challenges: from financial considerations and technical implementation to interactions and evaluations (e.g., user representation, navigation, health & safety, monitoring). How to design, develop, and evaluate such a VR system is still an open question. Here, we describe a fully implemented framework with its specific features and performance optimizations. We also illustrate our framework’s viability with a first VR application and discuss its potential benefits for education and future evaluation.
| bibtex | pdf |


A Bidirectional LSTM for Estimating Dynamic Human Velocities from a Single IMU
Tobias Feigl, Sebastian Kram, Philipp Woller, Ramiz H. Siddiqui, Michael Philippsen, Christopher Mutschler
9th International Conference on Indoor Positioning and Indoor Navigation (IPIN)

We use machine learning (ML) and deep learning (DL) to estimate a human’s velocity under varying dynamics (as they are present, for instance, in sports applications) such as abrupt and unpredictable changes. Our approach is robust to varying motion states and orientation changes in dynamic situations. On data from a single uncalibrated IMU, our novel recurrent model not only outperforms the state of the art on instantaneous velocity (<0.1 m/s) and traveled distance (<29 m/km) but also generalises to different and varying rates of motion and provides accurate and precise velocity estimates.
| bibtex |


A Deep Learning Approach to Position Estimation from Channel Impulse Responses
Arne Niitsoo, Thorsten Edelhäußer, Ernst Eberlein, Niels Hadaschik, Christopher Mutschler
Sensors 2019, 19(5), 1064
Radio-based locating systems allow for a robust and continuous tracking in industrial environments and are a key enabler for the digitalization of processes in many areas such as production, manufacturing, and warehouse management. Time difference of arrival (TDoA) systems estimate the time-of-flight (ToF) of radio burst signals with a set of synchronized antennas from which they trilaterate accurate position estimates of mobile tags. However, in industrial environments where multipath propagation is predominant, it is difficult to extract the correct ToF of the signal. This article shows how deep learning (DL) can be used to estimate the position of mobile objects directly from the raw channel impulse responses (CIR) extracted at the receivers. Our experiments show that our DL-based position estimation not only works well under harsh multipath propagation but also outperforms state-of-the-art approaches in line-of-sight situations.
| bibtex | AI-based Positioning |
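Conceptually, the regression variant stacks the CIRs of all synchronized receivers as input channels and maps them directly to a 2-D position; the sketch below assumes 12 receivers and 152 taps purely for illustration and is not the network from the article.

```python
# Illustrative CIR-to-position regression sketch (assumed setup: 12 synchronized
# receivers, 152-tap CIRs stacked as input channels).
import torch
import torch.nn as nn

position_net = nn.Sequential(
    nn.Conv1d(12, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
    nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
    nn.Flatten(),
    nn.Linear(64 * 38, 128), nn.ReLU(),
    nn.Linear(128, 2),                      # (x, y) position estimate
)

cirs = torch.randn(4, 12, 152)              # one CIR per receiver, stacked as channels
xy = position_net(cirs)                     # shape: (4, 2)
```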


2018

Evaluation Criteria for Inside-Out Indoor Positioning Systems based on Machine Learning
Christoffer Löffler, Sascha Riechel, Janina Fischer, Christopher Mutschler
8th International Conference on Indoor Positioning and Indoor Navigation (IPIN)

This paper proposes evaluation criteria that consider algorithmic properties of ML-based positioning schemes and introduces a dataset from an indoor warehouse scenario to evaluate them. Our dataset consists of images labeled with millimeter-accurate positions and thus allows for better development and performance evaluation of learning algorithms. For the first time, this enables an evaluation of machine learning algorithms for monocular optical positioning in a realistic indoor positioning application. We also show the feasibility of ML-based positioning schemes for an industrial deployment.

| bibtex | project page |


Recurrent Neural Networks on Drifting Time-of-Flight Measurements
Tobias Feigl, Thorsten Nowak, Michael Philippsen, Thorsten Edelhäußer, Christopher Mutschler
8th International Conference on Indoor Positioning and Indoor Navigation (IPIN)

We train recurrent neural networks on ToFs, both in simulation and reality, that successfully process noisy measurements that are not zero-mean Gaussian distributed. We show that our method outperforms conventional methods based on Kalman filters that are currently considered to be state of the art.

| bibtex | project page |


Convolutional Neural Networks for Position Estimation in TDoA-based Locating Systems (Best Paper Award)
Arne Niitsoo, Thorsten Edelhäußer, Christopher Mutschler
8th International Conference on Indoor Positioning and Indoor Navigation (IPIN)

| bibtex | AI-based Positioning |


Supervised Learning for Yaw Orientation Estimation
Tobias Feigl, Christopher Mutschler, Michael Philippsen
8th International Conference on Indoor Positioning and Indoor Navigation (IPIN)

With free movement and multi-user capabilities, there is demand to open up Virtual Reality (VR) for large spaces. However, the cost of accurate camera-based tracking grows with the size of the space and the number of users. No-pose (NP) tracking is cheaper, but so far it cannot accurately and stably estimate the yaw orientation of the user’s head in the long run.

Our novel yaw orientation estimation combines a single inertial sensor located at the human’s head with inaccurate positional tracking. We exploit that humans tend to walk in their viewing direction and that they also tolerate some orientation drift. We classify head and body motion and estimate heading drift to enable low-cost, long-term stable head orientation in NP tracking over a 100 m × 100 m area. Our evaluation shows that we estimate heading reasonably well.

| bibtex |


Beyond Replication: Augmenting Social Behaviors in Multi-User Virtual Realities
Daniel Roth, Constantin Kleinbeck, Tobias Feigl, Christopher Mutschler, Marc Erich Latoschik
IEEE VR

This paper presents a novel approach for the augmentation of social behaviors in virtual reality (VR). We designed three visual transformations for behavioral phenomena crucial to everyday social interactions: eye contact, joint attention, and grouping. To evaluate the approach, we let users interact socially in a virtual museum using a large-scale multi-user tracking environment. Using a between-subject design (N = 125) we formed groups of five participants. Participants were represented as simplified avatars and experienced the virtual museum simultaneously, either with or without the augmentations. Our results indicate that our approach can significantly increase social presence in multi-user environments and that the augmented experience appears more thought-provoking. Furthermore, the augmentations also seem to affect the actual behavior of participants with regard to more eye contact and more focus on avatars/objects in the scene. We interpret these findings as first indicators of the potential of social augmentations to impact social perception and behavior in VR.

| bibtex |


Head-to-Body-Pose Classification in No-Pose VR Tracking Systems
Tobias Feigl, Christopher Mutschler, Michael Philippsen
IEEE VR

Pose tracking does not yet work reliably in large-scale interactive multi-user VR. Our novel head orientation estimation combines a single inertial sensor located at the user’s head with inaccurate positional tracking. We exploit that users tend to walk in their viewing direction and classify head and body motion to estimate heading drift. This enables low-cost, long-term stable head orientation. We evaluate our method and show that we sustain immersion.

| bibtex | project page |