Scientific Conference & DGR Days

Timezone: Europe/Berlin
Description

In recent years, machine learning has proved to be a technological game changer in domains ranging from medicine to robotics. Despite many advances and impressive demonstrations of systems endowed with artificial intelligence (AI), current approaches often lack flexibility, adaptability, robustness, and explainability: in contrast to humans, these systems can neither adapt a priori knowledge to master new tasks nor reflect on their own experience to actively explain and improve their performance. The KIT Science Week 2021 focuses on bringing the human back into the center of learning systems by discussing the development of human-inspired, trustworthy, understandable, and adaptable AI technologies.

Registration
    • 9:50 AM – 10:00 AM
      Welcome and Introduction 10m

      Speaker: Tamim Asfour (Karlsruhe Institute of Technology (KIT))

    • 10:00 AM – 11:00 AM
      Keynotes: Personal Digital-Twin and Data Science
      • 10:00 AM
        Personal Digital-Twin and Data Science 1h

        Computer vision, AI, and robotics extend and deepen basic research on the human through motion measurement, motion analysis, biomechanical analysis, motion semiotics, and their data science. In 2020 we started the Corporate Sponsored Research Program "Human-Motion Data Science" as a three-year research program at the University of Tokyo, supported by five industrial partners. Informatics of the human body and motion illuminates a unique scientific domain, yet it remains unsystematized and not fully developed. We study human-motion data science with a view toward social implementation in sports training, rehabilitation, health monitoring, and related areas. The uniqueness of our approach lies in computational algorithms and system designs that originated in robotics: 3D pose and motion reconstruction from computer vision, biomechanical analysis of whole-body motion, and semantic interpretation of motion are all based on our original robotics studies of kinematics, dynamics, statistics, and high-dimensional optimization. This talk will discuss monitoring changes in body functions and skills by accumulating personal body and motion data as a personal digital twin, and the horizon of its data science.
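
        As a rough, generic illustration of the kind of kinematics-plus-optimization computation underlying such motion reconstruction (not the speaker's actual pipeline; the planar two-link model, segment lengths, and marker noise below are invented for this sketch), joint angles can be recovered from observed marker positions by nonlinear least squares:

        # Toy sketch: fit the joint angles of a planar two-link "limb" to noisy
        # observed marker positions by minimising the marker residuals.
        import numpy as np
        from scipy.optimize import least_squares

        L1, L2 = 0.4, 0.4  # assumed segment lengths in metres

        def forward_kinematics(q):
            """Planar positions of the elbow and wrist markers for angles q = (q1, q2)."""
            q1, q2 = q
            elbow = np.array([L1 * np.cos(q1), L1 * np.sin(q1)])
            wrist = elbow + np.array([L2 * np.cos(q1 + q2), L2 * np.sin(q1 + q2)])
            return np.concatenate([elbow, wrist])

        # Synthetic "measurement": markers of a true pose plus observation noise.
        rng = np.random.default_rng(0)
        q_true = np.array([0.6, 0.9])
        observed = forward_kinematics(q_true) + rng.normal(0, 0.005, size=4)

        # Pose reconstruction = minimise the marker error over the joint angles.
        result = least_squares(lambda q: forward_kinematics(q) - observed, x0=[0.0, 0.0])
        print("estimated joint angles:", result.x)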

        Bio

        Portrait Nakamura

        Yoshihiko Nakamura is Senior Researcher with the Corporate Sponsored Research Program "Human Motion Data Science" at the Research into Artifacts, Center for Engineering, Graduate School of Engineering, University of Tokyo. He received his PhD in 1985 from Kyoto University and held faculty positions at Kyoto University, the University of California Santa Barbara, and the University of Tokyo. His fields of research include humanoid robotics, cognitive robotics, neuro-musculoskeletal human modeling, and human-motion data science. He is a recipient of the King-Sun Fu Memorial Best Transactions Paper Award of the IEEE Transactions on Robotics and Automation in 2001 and 2002, and of the IEEE-RAS Pioneer Award in 2021. He was President of IFToMM (International Federation for the Promotion of Mechanism and Machine Science) from 2011 to 2015. Dr. Nakamura is a Foreign Member of the Academy of Engineering Sciences of Serbia, TUM Distinguished Affiliated Professor of Technische Universität München, Fellow of the JSME, the RSJ, and the World Academy of Art and Science, Life Fellow of the IEEE, and Professor Emeritus of the University of Tokyo.

        www.roboticsynl.com/hmds/

        Speaker: Yoshihiko Nakamura (University of Tokyo)
    • 10:00 AM – 6:00 PM
      Networking area: wonder.me
    • 11:00 AM – 12:00 PM
      Keynotes: From AI in Robotics to AI in Finance: Examples and Discussion
      • 11:00 AM
        From AI in Robotics to AI in Finance: Examples and Discussion 1h

        After many years of research in academia on AI and autonomous robots, I have spent the last three years as the head of AI research at J.P. Morgan. Throughout this time, I have looked at many challenges with an AI approach, addressing knowledge, the representation of states, actions, behaviors, planning, multi-agent interactions, and learning. In this talk, I will share several interesting problems that we have encountered and solutions that we devised in AI in Robotics and AI in Finance. I will focus on examples and will discuss AI planning, execution, teamwork, and learning, and their potential for great applicability and impact in real domains.

        Manuela M. Veloso, PhD, Managing Director
        Head, J.P. Morgan Chase AI Research
        Herbert A. Simon University Professor Emerita
        School of Computer Science
        Carnegie Mellon University


        Bio

        Portrait Veloso

        Manuela Veloso is the Head of J.P. Morgan AI Research and the Herbert A. Simon University Professor in the School of Computer Science at Carnegie Mellon University (CMU), where she was previously Assistant and Associate Professor before being appointed Head of the Machine Learning Department in 2016. She was President of AAAI (2013-2014) and is co-founder, Trustee, and Past President of the RoboCup research initiative. Prof. Veloso is internationally recognized for her pioneering work on robot autonomy and multi-agent systems. She has published extensively in both fields and received multiple awards and honors, including being a Fellow of the four most prestigious associations in computer science and engineering (AAAI, AAAS, ACM, and IEEE). She is also the recipient of several best paper awards, the Einstein Chair of the Chinese Academy of Sciences, the ACM/SIGART Autonomous Agents Research Award, an NSF CAREER Award, and the Allen Newell Medal for Excellence in Research. She was Program Chair of the International Joint Conference on Artificial Intelligence (IJCAI) in 2007 and of the AAAI Conference on Artificial Intelligence in 2005.

        www.cs.cmu.edu/~mmv/

        Speaker: Manuela Veloso (J.P. Morgan & Carnegie Mellon University)
    • 12:00 PM – 1:00 PM
      Keynotes: Robots: Perceiving, Interacting, Collaborating
      • 12:00 PM
        Robots: Perceiving, Interacting, Collaborating 1h

        An integral ability of any robot is to act in the environment and to interact and collaborate with people and other robots. Interaction between two agents builds on the ability to engage in mutual prediction and signaling. Thus, human-robot interaction requires a system that can interpret and make use of human signaling strategies in a social context. In such scenarios, there is a need for an interplay between processes such as attention, segmentation, object detection, recognition, and categorization in order to interact with the environment. In addition, the parameterization of these processes is inevitably guided by the task or the goal the robot is supposed to achieve. In this talk, I will present the current state of the art in robot perception and interaction and discuss open problems in the area. I will also show how visual input can be integrated with proprioception, tactile, and force-torque feedback in order to plan, guide, and assess a robot's actions and interaction with the environment.
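
        As a minimal, generic sketch of such multimodal integration (invented numbers, not the speaker's method), two independent noisy estimates of the same quantity, e.g. a contact position seen by vision and inferred from force-torque sensing, can be fused by weighting each with its inverse variance:

        # Toy sketch: precision-weighted fusion of a visual and a force-torque
        # estimate of the same quantity (here a contact position along one axis).
        def fuse(mean_a, var_a, mean_b, var_b):
            """Fusion of two independent Gaussian estimates of the same quantity."""
            w_a, w_b = 1.0 / var_a, 1.0 / var_b
            mean = (w_a * mean_a + w_b * mean_b) / (w_a + w_b)
            return mean, 1.0 / (w_a + w_b)

        vision_mean, vision_var = 0.52, 0.02 ** 2   # metres, fairly precise
        ft_mean, ft_var = 0.49, 0.05 ** 2           # metres, noisier
        fused_mean, fused_var = fuse(vision_mean, vision_var, ft_mean, ft_var)
        print(f"fused estimate: {fused_mean:.3f} m (std {fused_var ** 0.5:.3f} m)")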


        Bio

        Portrait Kragic

        Danica Kragic is a Professor at the School of Computer Science and Communication at the Royal Institute of Technology (KTH). She received her M.Sc. in Mechanical Engineering from the Technical University of Rijeka, Croatia, in 1995 and her PhD in Computer Science from KTH in 2001. She has been a visiting researcher at Columbia University, Johns Hopkins University, and INRIA Rennes. She is the Director of the Centre for Autonomous Systems. Danica received the 2007 IEEE Robotics and Automation Society Early Academic Career Award. She is a member of the Royal Swedish Academy of Sciences and the Royal Swedish Academy of Engineering Sciences, and a founding member of the Young Academy of Sweden. She holds an Honorary Doctorate from the Lappeenranta University of Technology. She chaired the IEEE RAS Technical Committee on Computer and Robot Vision and served as an IEEE RAS AdCom member. Her research is in the area of robotics, computer vision, and machine learning. She received an ERC Starting Grant in 2012, and a Distinguished Professor Grant from the Swedish Research Council and an ERC Advanced Grant in 2019. Her research is supported by the Knut and Alice Wallenberg Foundation, the Swedish Foundation for Strategic Research, the EU, and the Swedish Research Council.

        www.csc.kth.se/~danik/

        Speaker: Danica Kragic (Royal Institute of Technology (KTH))
    • 1:00 PM – 2:00 PM
      Break 1h
    • 2:00 PM – 3:00 PM
      Keynotes: Breaking the Wall to Collective Learning: How AI and Networked Robotics can Kickstart Machine Evolution
      • 2:00 PM
        Breaking the Wall to Collective Learning: How AI and Networked Robotics can Kickstart Machine Evolution 1h

        Smart robotic systems have taken giant leaps in recent years. Important technological breakthroughs have led to the introduction of intelligent machines that meet human needs not only in factories but also in healthcare and the service industry. Moreover, with the help of artificial intelligence, robots are now capable of learning and continuously developing new skills. Robots connected to a remote memory that stores large volumes of training data have become more affordable and user-friendly. Even more importantly, cloud-networked robots are now also able to share data, skills, and knowledge with each other, creating a ripple effect or even an entire system of ‘collective learning’. A specific skill learned by one individual robot will be instantly available to all other robots in the network. I will talk about my vision for a theory of collective intelligence and about applying the knowledge pyramid in my continuing efforts to strengthen the relationship between robots and humans, with the aim of bringing ever safer, more intuitive, and more reliable robots into the real world.
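
        The sharing mechanism can be sketched very compactly (all names and classes below are hypothetical and for illustration only; this is not an actual cloud-robotics API): a skill published once by one robot becomes retrievable by every robot that queries the same registry.

        # Hypothetical sketch of "collective learning" via a shared skill store.
        class SkillRegistry:
            def __init__(self):
                self._skills = {}                    # skill name -> learned policy

            def publish(self, name, policy):
                self._skills[name] = policy

            def fetch(self, name):
                return self._skills.get(name)

        class Robot:
            def __init__(self, robot_id, registry):
                self.robot_id = robot_id
                self.registry = registry

            def learn_skill(self, name):
                policy = {"trained_by": self.robot_id, "weights": [0.1, 0.4, 0.7]}
                self.registry.publish(name, policy)  # share with the collective

            def use_skill(self, name):
                policy = self.registry.fetch(name)
                return f"{self.robot_id} executes '{name}' learned by {policy['trained_by']}"

        registry = SkillRegistry()
        robot_a, robot_b = Robot("robot_a", registry), Robot("robot_b", registry)
        robot_a.learn_skill("open_drawer")
        print(robot_b.use_skill("open_drawer"))      # robot_b reuses robot_a's skill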

        Bio

        Portrait Haddadin

        Sami Haddadin is the Founding Director of the Munich School of Robotics and Machine Intelligence (MSRM) at the Technical University of Munich (TUM), where he holds the Chair of Robotics Science and Systems Intelligence. He received his PhD from RWTH Aachen University in 2011. From 2014 to 2018, he was Full Professor and Director of the Institute of Automatic Control at Gottfried Wilhelm Leibniz Universität Hannover, Germany. Prior to that, he held various positions as a research associate at the German Aerospace Center (DLR). His research interests include intelligent robot design, robot learning, collective intelligence, human-robot interaction, nonlinear control, real-time planning, optimal control, human neuromechanics, and robot safety. His work has found its way into numerous commercial robotics and AI products. Prof. Haddadin has written more than 200 scientific articles and received numerous prestigious international scientific awards, including the George Giralt PhD Award, the IEEE/RAS Early Career Award, the RSS Early Career Spotlight, the Alfried Krupp Award, the German Future Prize of the German Federal President, and the Gottfried Wilhelm Leibniz Prize.

        www.msrm.tum.de/en/rsi/chair-of-robotics-and-systems-intelligence

        Speaker: Sami Haddadin (Technical University of Munich (TUM))
    • 3:00 PM – 4:00 PM
      Break 1h
    • 4:00 PM – 5:00 PM
      Keynotes: Probabilistic and Deep Learning Approaches for Robot Navigation and Autonomous Driving
      • 4:00 PM
        Probabilistic and Deep Learning Approaches for Robot Navigation and Autonomous Driving 1h

        For autonomous robots and automated driving, the capability to robustly perceive the environment and execute actions is the ultimate goal. The key challenge is that no sensor or actuator is perfect, which means that robots and cars need the ability to properly deal with the resulting uncertainty. In this presentation, I will introduce the probabilistic approach to robotics, which provides a rigorous statistical methodology for solving the state estimation problem. I will furthermore discuss how this approach can be extended with state-of-the-art machine learning technology to bring us closer to the development of truly robust systems able to serve us in our everyday lives.
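
        As a compact, textbook-style illustration of this probabilistic approach (a generic sketch, not code from the talk), a discrete Bayes filter maintains a belief over positions and updates it with noisy motion and measurement models:

        # Generic 1D discrete Bayes filter: a robot on a ring of cells moves one
        # cell per step with some slip and senses whether its cell has a landmark.
        import numpy as np

        landmarks = np.array([0, 0, 1, 0, 0, 0, 1, 0, 0, 0])    # map: 1 = landmark
        belief = np.full(len(landmarks), 1.0 / len(landmarks))  # uniform prior
        p_move_ok, p_hit = 0.8, 0.9                              # motion / sensor noise

        def predict(belief):
            """Motion update: mix the shifted belief with the unmoved belief (slip)."""
            return p_move_ok * np.roll(belief, 1) + (1 - p_move_ok) * belief

        def correct(belief, z):
            """Measurement update: reweight by the likelihood of observation z."""
            likelihood = np.where(landmarks == z, p_hit, 1 - p_hit)
            posterior = likelihood * belief
            return posterior / posterior.sum()

        for z in [1, 0, 0, 0, 1]:                    # synthetic observation sequence
            belief = correct(predict(belief), z)
        print("posterior belief over cells:", np.round(belief, 3))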

        Bio

        Portrait Burgard

        Wolfram Burgard is a Professor of Computer Science and Head of the Autonomous Intelligent Systems Research Laboratory at the University of Freiburg. He received his PhD from the University of Bonn in 1991, where he subsequently became Head of the research lab for Autonomous Mobile Systems. His areas of interest lie in artificial intelligence and mobile robots, for which he developed pioneering techniques for localization, simultaneous localization and mapping (SLAM), robot navigation, control, and path planning, among others. Prof. Burgard has co-authored over 350 papers and articles as well as two books. He has received numerous awards, including the Gottfried Wilhelm Leibniz Prize, 14 best paper awards, an ERC Advanced Grant in 2010, and the IEEE Robotics and Automation Technical Field Award in 2021. He is a member of the German Academy of Sciences Leopoldina and the Heidelberg Academy of Sciences, as well as a Fellow of the IEEE, AAAI, and EurAI. He was Editor-in-Chief of the IEEE/RSJ International Conference on Intelligent Robots and Systems from 2014 to 2016 and served as President of the IEEE Robotics and Automation Society from 2018 to 2019.

        ais.informatik.uni-freiburg.de

        Speaker: Wolfram Burgard (University of Freiburg)
    • 5:00 PM – 6:00 PM
      Keynotes: Gaul-Lecture
      • 5:00 PM
        Gaul-Lecture - Causality and Autoencoders in the Light of Drug Repurposing for COVID-19 1h

        Massive data collection holds the promise of a better understanding of complex phenomena and ultimately, of better decisions. An exciting opportunity in this regard stems from the growing availability of perturbation / intervention data (for example in genomics, advertisement, policy making, education, etc.). In order to obtain mechanistic insights from such data, a major challenge is the development of a framework that integrates observational and interventional data and allows causal transportability, i.e., predicting the effect of yet unseen interventions or transporting the effect of interventions observed in one context to another. I will propose an autoencoder framework for this problem. In particular, I will characterize the implicit bias of overparameterized autoencoders and show how this links to causal transportability and can be applied for drug repurposing in the current COVID-19 crisis.
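
        As a very loose, toy-level illustration of the autoencoder idea (invented data and a plain linear autoencoder; this is not the speaker's framework and omits the causal analysis entirely), one can train an autoencoder on observational samples and then inspect how it reconstructs a perturbed ("intervened") sample:

        # Toy sketch: train a small linear autoencoder on observational data,
        # then feed it a sample with one coordinate clamped ("intervened").
        import numpy as np

        rng = np.random.default_rng(1)
        z = rng.normal(size=(500, 2))                 # hidden low-dimensional factors
        A = rng.normal(size=(2, 5))
        X = z @ A + 0.05 * rng.normal(size=(500, 5))  # observational samples

        d, k, lr = 5, 2, 0.02
        W_enc = 0.1 * rng.normal(size=(d, k))         # encoder weights
        W_dec = 0.1 * rng.normal(size=(k, d))         # decoder weights

        for _ in range(3000):                         # plain gradient descent on MSE
            H = X @ W_enc
            err = H @ W_dec - X
            grad_dec = (H.T @ err) / len(X)
            grad_enc = (X.T @ (err @ W_dec.T)) / len(X)
            W_dec -= lr * grad_dec
            W_enc -= lr * grad_enc

        x_intervened = (z[:1] @ A).copy()
        x_intervened[0, 0] = 3.0                      # crude "intervention" on one coordinate
        print("reconstruction:", (x_intervened @ W_enc @ W_dec).round(2))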

        Bio

        Portrait Uhler

        Caroline Uhler is the Henry L. and Grace Doherty Associate Professor in EECS (Electrical Engineering & Computer Science) and IDSS (Institute for Data, Systems and Society) at Massachusetts Institute of Technology (MIT). She is also the Co-Director of the newly founded Eric and Wendy Schmidt Center at the Broad Institute, an associate member of LIDS (Laboratory for Information and Decision Systems), the Center for Statistics, Machine Learning and the Operations Research Center (ORC) at MIT. Her research focuses on machine learning, statistics and computational biology, in particular on causal inference, generative modeling and applications to genomics, for example on linking the spatial organization of the DNA with gene regulation. She is an elected member of the International Statistical Institute and a recipient of a Simons Investigator Award, a Sloan Research Fellowship, an NSF Career Award, a Sofja Kovalevskaja Award from the Humboldt Foundation, and a START Award from the Austrian Science Foundation. She received her PhD in statistics from UC Berkeley and was an Assistant Professor at IST Austria before moving to MIT in 2015. She held visiting positions at ETH Zurich, the Simons Institute at UC Berkeley and the Institute of Mathematics and its Applications at the University of Minnesota.

        www.carolineuhler.com

        Speaker: Caroline Uhler (Massachusetts Institute of Technology (MIT))
    • 10:00 AM – 11:00 AM
      Keynotes: Robotics and Phenotyping for Sustainable Crop Production
      • 10:00 AM
        Robotics and Phenotyping for Sustainable Crop Production 1h

        Crop farming plays an essential role in our society, providing food, feed, fiber, and fuel. We heavily rely on agricultural production, but at the same time we need to reduce its footprint: less input of chemicals such as herbicides and fertilizer, and of other limited resources. Agricultural robots and other new technologies offer promising directions for addressing key management challenges in agricultural fields. To achieve this, autonomous field robots need the ability to perceive and model their environment, to predict possible future developments, and to make appropriate decisions in complex and changing situations. This talk will showcase recent developments towards robot-driven sustainable crop production. I will illustrate how management tasks can be automated using UAVs and UGVs and which new possibilities this technology can offer.

        Bio

        Portrait Stachniss

        Cyrill Stachniss is a Full Professor at the University of Bonn, where he heads the Photogrammetry and Robotics Lab. Before his appointment in Bonn, he was with the University of Freiburg and the Swiss Federal Institute of Technology. He has been a Microsoft Research Faculty Fellow since 2010 and received the IEEE RAS Early Career Award in 2013. From 2015 to 2019, he was a Senior Editor of the IEEE Robotics and Automation Letters. Together with his colleague Heiner Kuhlmann, he is a spokesperson of the DFG Cluster of Excellence "PhenoRob" at the University of Bonn. In his research, he focuses on probabilistic techniques for mobile robotics, perception, and navigation. The main application areas of his research are autonomous service robots, agricultural robotics, and self-driving cars. He has co-authored over 250 publications, has won several best paper awards, and has coordinated multiple large research projects at the national and European level.

        www.ipb.uni-bonn.de/people/cyrill-stachniss

        Speaker: Cyrill Stachniss (University of Bonn)
    • 10:01 AM – 5:31 PM
      Networking area: wonder.me
    • 11:00 AM – 12:00 PM
      Keynotes: Developing AI for B2B Applications
      • 11:00 AM
        Developing AI for B2B Applications 1h

        AI has made significant progress and is being used in many commercial applications today. The bulk of AI adoption so far has been in B2C consumer applications, but B2B applications offer an equally exciting opportunity for AI. Developing AI for B2B applications, however, comes with its own constraints and challenges. In particular, the availability of high-quality data and a solid understanding of the business process are crucial for success. This talk will give an overview of developing B2B AI applications and how to avoid pitfalls on the way to productizing AI models. As an example, we will look at natural language processing, an active sub-field of AI research that can be applied to many B2B use cases.
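
        As a small, self-contained example of the kind of B2B NLP use case alluded to here (hypothetical tickets and labels; not SAP's implementation), support tickets can be routed to business categories with a simple bag-of-words classifier:

        # Hypothetical example: route B2B support tickets to business categories.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        tickets = [
            "invoice amount does not match the purchase order",
            "payment run failed with an authorization error",
            "new employee needs access to the procurement system",
            "please reset the password for the warehouse account",
        ]
        labels = ["finance", "finance", "it_access", "it_access"]

        model = make_pipeline(TfidfVectorizer(), LogisticRegression())
        model.fit(tickets, labels)
        print(model.predict(["supplier invoice was posted twice"]))  # likely 'finance'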

        Bio

        Portrait Dahlmeier

        Daniel Dahlmeier is Chief Data Scientist in the SAP Artificial Intelligence team. In this role, he is responsible for the data science strategy and for orchestrating data science teams and initiatives across SAP. During his professional career, he has been involved in building AI products, from research and early-stage innovation to productization and operating AI at scale. Daniel's academic research background lies in natural language processing. He holds a PhD from the National University of Singapore, an executive MBA from Mannheim Business School, and a diploma in computer science from the Karlsruhe Institute of Technology (KIT). SAP's technologies for machine learning, the Internet of Things, and advanced analysis methods help its customers on their way to becoming intelligent enterprises. Artificial intelligence is already included in the core of SAP's corporate software. SAP's strategy is to bring AI into applications and business processes for the benefit of its customers and partners.

        www.sap.com

        Speaker: Daniel Dahlmeier (SAP)
    • 12:00 PM – 1:30 PM
      DGR Days: Session
      • 12:00 PM
        DGR Days Session - Walking & Wearable Robotics 1h 30m

        12:00
        Bio-Inspired Compliant Motion Control in the Context of Bipedal Locomotion
        Patrick Vonwirth and Karsten Berns

        12:12
        Effective Viscous Damping for Legged Robots
        An Mo, Fabio Izzi, Daniel Haeufle and Alexander Badri-Spröwitz

        12:24
        Questions around the Catapult Mechanism in Human Legged Locomotion
        Bernadett Kiss, Alexandra Buchmann, Daniel Renjewski and Alexander Badri-Spröwitz

        12:36
        Learning of Walking Gait Controllers for Magnetic Soft Millirobots
        Sinan Özgün Demir, Utku Culha, Sebastian Trimpe and Metin Sitti

        12:48
        Rigid, Soft, Passive, and Active: a Hybrid Occupational Exoskeleton
        Francesco Missiroli, Nicola Lotti, Enrica Tricomi, Casimir Bokranz, Ryan Alicea and Lorenzo Masia

        13:00
        Underactuated Soft Hip Exosuit Based on Adaptive Oscillators
        Enrica Tricomi, Nicola Lotti, Francesco Missiroli, Xiaohui Zhang and Lorenzo Masia

        13:12
        The Benefit of Muscle-Actuated Motion in Optimization and Learning
        Isabell Wochner and Syn Schmitt

    • 1:30 PM – 2:00 PM
      Break 30m
    • 2:00 PM – 3:30 PM
      DGR Days: Session
      • 2:00 PM
        DGR Days Session - Robot Learning 1h 30m

        14:00
        Extracting Strong Policies for Robotics Tasks from Zero-Order Trajectory Optimizers
        Sebastian Blaes, Cristina Pinneri and Georg Martius

        14:12
        Specializing Versatile Skill Libraries using Local Mixture of Experts
        Mevlüt Onur Celik, Dongzhuoran Zhou, Ge Li, Philipp Becker and Gerhard Neumann

        14:24
        Combining Manipulation Primitive Nets and Policy Gradient Methods for Learning Robotic Assembly Tasks
        Marco Braun and Sebastian Wrede

        14:36
        Robot Dynamics Learning with Action Conditional Recurrent Kalman Networks
        Vaisakh Shaj, Philipp Becker and Gerhard Neumann

        14:48
        Seamless Sequencing of Skills via Differentiable Optimization
        Noémie Jaquier, Julia Starke, You Zhou and Tamim Asfour

        15:00
        Learning Control Policies from Optimal Trajectories
        Christoph Zelch, Jan Peters and Oskar von Stryk

    • 3:30 PM – 4:00 PM
      Break 30m
    • 4:00 PM – 4:45 PM
      Keynotes: Keynote Fischer (more information will follow)
    • 4:45 PM – 5:30 PM
      Keynotes: Moral Emotions and the Promises and Risks of Artificial Intelligence
      • 4:45 PM
        Moral Emotions and the Promises and Risks of Artificial Intelligence 45m

        New and potentially risky technological developments, such as those related to artificial intelligence and machine learning systems, can trigger emotions and public concerns. Emotions have often been met with suspicion in debates about risk, because they are seen as contrary to rational decision making. Indeed, emotions can cloud our understanding of quantitative information about risks. However, various emotion researchers in psychology and philosophy have argued for the importance of emotions for ethical reasoning. In my presentation I will argue that moral emotions can make a major contribution to assessing the multifaceted ethical aspects of risks, such as justice, fairness, dignity, responsibility, and autonomy. Furthermore, when it comes to artificial intelligence, human emotions presumably outperform artificial intelligence by definition with respect to uniquely human capacities such as ethical sensitivity and imagination, because of the embodied and embedded nature of these capacities. We should critically reflect on which tasks we should delegate to machines and which we should reserve for humans. Hence, for both reasons, decision making about the promises and risks of artificial intelligence should include attention to emotions, in order to facilitate ‘emotional-moral deliberation’ concerning which role we want artificial intelligence to play in society.

        Bio

        Portrait Roeser

        Sabine Roeser is Professor of Ethics and Head of the Department of Values, Technology, and Innovation at TU Delft. Roeser's research covers theoretical, foundational topics concerning the nature of moral knowledge, intuitions, emotions, art, and evaluative aspects of risk, but also urgent and hotly debated public issues on which her theoretical research can shed new light, such as nuclear energy, climate change, and public health issues. She has given numerous academic and public talks. Roeser regularly serves on policy advisory boards concerning risky technologies, for example on decision making about genetic modification, nuclear energy, and nuclear waste, and she is a member of the Health Council of the Netherlands. Roeser is (co-)leader of various large research projects, including a ten-year multi-university project on the Ethics of Socially Disruptive Technologies (ESDiT). Her most recent book is Risk, Technology, and Moral Emotions (Routledge, 2018).

        www.tbm.tudelft.nl/sroeser

        Speaker: Sabine Roeser (TU Delft)
    • 10:00 AM – 11:00 AM
      Keynotes: Machine Learning is Not Intelligence
      • 10:00 AM
        Machine Learning is Not Intelligence 1h

        What’s Missing? And How We Might Create a Science of Intelligence

        No other community has laid a stronger claim to the term Artificial Intelligence than the machine learning community. But truth be told, we don’t really know the mechanisms underlying natural intelligence—and therefore we cannot really know what underlies its artificial analogue either. What do we know, then, about intelligence? We know how to measure intelligence in humans, we know that intelligence is predictive of many real-world capabilities, and we can list properties we attribute to intelligent behavior, but we remain without a clear constructive understanding of the computational underpinnings of intelligence. Probably everybody agrees that what happens in a human body and brain is fundamentally different from any artificial system we have created thus far—and that the resulting behavior is also fundamentally different.
        In this talk, I will present the Cluster of Excellence “Science of Intelligence”, which seeks to find constructive explanations of intelligence. It brings together researchers from the study of artificial intelligence (robotics, computer vision, machine learning, AI, control) and natural intelligence (psychology, behavioral biology, neuroscience, philosophy, educational science). It is based on the assumption that only by merging the perspectives of the relevant disciplines can we obtain a complete and valid understanding of intelligence. Some of the ongoing research of “Science of Intelligence” provides evidence for why a recipe for “true” artificial intelligence will include more than one ingredient. I will talk about what some of these ingredients might be and present research in support of their relevance to intelligent behavior.
        Spoiler alert! These ingredients will include things other than machine learning (but, of course, machine learning is probably one of the ingredients).

        Bio

        Portrait Brock

        Oliver Brock is the Alexander-von-Humboldt Professor of Robotics in the School of Electrical Engineering and Computer Science at the Technische Universität Berlin, a German "University of Excellence". He received his PhD from Stanford University in 2000 and held postdoctoral positions at Rice University and Stanford University. He was an Assistant and Associate Professor in the Department of Computer Science at the University of Massachusetts Amherst before moving back to Berlin in 2009. The research of Brock's lab, the Robotics and Biology Laboratory, focuses on robot intelligence, mobile manipulation, interactive perception, grasping, manipulation, soft material robotics, interactive machine learning, deep learning, motion generation, and the application of algorithms and concepts from robotics to computational problems in structural molecular biology. Oliver Brock directs the Research Center of Excellence "Science of Intelligence". He is an IEEE Fellow and was president of the Robotics: Science and Systems Foundation from 2012 until 2019.

        www.robotics.tu-berlin.de

        Speaker: Oliver Brock (Technische Universität Berlin)
    • 10:01 AM – 5:31 PM
      Networking area: wonder.me
    • 11:00 AM – 12:00 PM
      Keynotes: xAI – Is This the Future of AI-Software Testing?
      • 11:00 AM
        xAI – Is This the Future of AI-Software Testing? 1h

        AI is leaving its mark everywhere in industry. One important question is how to integrate AI-based software in safety-critical environments such as autonomous driving and modern medical applications. The AI must perform robustly and safely. To ensure this, software that contains AI components must be tested thoroughly, which is not a trivial task: on the one hand, classic software testing methods cannot be applied due to the “black-box” nature of most AI algorithms; on the other hand, in many cases regulations concerning AI testing have yet to be defined. One cornerstone of AI testing could be explainable AI (xAI). It helps software developers understand the decision-making process of AI algorithms, which is fundamental for trustworthy AI. In this talk, we give a summary of visual xAI methods. We then show how we implemented these methods in a specific use case.
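
        One widely used family of visual xAI techniques is gradient-based saliency; as a generic illustration (untrained toy network and random input, not necessarily one of the methods or the use case covered in the talk), the absolute input gradient shows which pixels most influence the network's output:

        # Generic gradient-based saliency sketch with a toy CNN.
        import torch
        import torch.nn as nn

        model = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, 10),
        )
        model.eval()

        image = torch.rand(1, 3, 64, 64, requires_grad=True)  # stand-in input image
        scores = model(image)
        scores[0, scores.argmax()].backward()                  # gradient of the top class score

        saliency = image.grad.abs().max(dim=1).values[0]       # per-pixel importance map
        print("saliency map shape:", saliency.shape)           # torch.Size([64, 64])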

        Bio

        Portrait Chung

        Khanlian Chung is a Senior Software Development Engineer at Vector. He is responsible for integrating AI features into Vector's software testing tools. The focus of his work is how AI-based software can be tested and secured. Khanlian Chung graduated in physics from the Technical University of Kaiserslautern. As a PhD student and postdoc at Heidelberg University, he investigated how AI can improve diagnostic and interventional imaging for cancer patients. Vector Informatik is the leading manufacturer of software tools and embedded components for the development of electronic systems and their networking, covering many different systems from CAN (Controller Area Network) to Automotive Ethernet. Vector tools and services provide engineers with the decisive advantage of making a challenging and highly complex subject area as simple and manageable as possible. Worldwide, customers in the automotive, commercial vehicle, aerospace, transportation, and control technology industries rely on the solutions and products of the independent Vector Group for the development of technologies for future mobility.

        www.vector.com

        Speaker: Khanlian Chung (Vector)
    • 12:00 PM – 1:30 PM
      DGR Days: Session
      • 12:00 PM
        DGR Days Session - Learning for Grasping & Manipulation 1h 30m

        12:00
        Residual Feedback Learning for Contact-Rich Manipulation Tasks with Uncertainty
        Alireza Ranjbar, Ngo Anh Vien, Hanna Ziesche, Joschka Boedecker and Gerhard Neumann

        12:12
        Learning and Teaching Multimodal Neural Policies for Dexterous Manipulation
        Philipp Ruppel, Norman Hendrich and Jianwei Zhang

        12:24
        Robot Hand Dexterous Manipulation by Teleoperation with Adaptive Force Control
        Chao Zeng and Jianwei Zhang

        12:36
        Learning Robust Mobile Manipulation for Household Tasks
        Snehal Jauhri, Jan Peters and Georgia Chalvatzaki

        12:48
        Achieving Robustness in a Drawer Manipulation Task by using High-level Feedback instead of Planning
        Manuel Baum and Oliver Brock

        13:00
        A Dataset for Learning Bimanual Task Models from Human Observation
        Franziska Krebs, André Meixner, Isabel Patzer and Tamim Asfour

        13:12
        EMG-driven Machine Learning Control of a Soft Glove for Grasping Assistance and Rehabilitation
        Marek Sierotowicz, Nicola Lotti, Francesco Missiroli, Ryan Alicea, Michele Xiloyannis, Claudio Castellini and Lorenzo Masia

    • 1:30 PM – 2:00 PM
      Break 30m
    • 2:00 PM – 3:30 PM
      DGR Days: Session
      • 2:00 PM
        DGR Days Session - Perception 1h 30m

        14:00
        Interconnected Recursive Filters in Artificial and Biological Vision
        Aravind Battaje and Oliver Brock

        14:12
        Distributed Semantic Mapping for Heterogeneous Robotic Teams
        Yunis Fanger, Tim Bodenmüller and Rudolph Triebel

        14:24
        A Dexterous Hand-Arm Teleoperation System based on Hand Pose Estimation and Active Vision
        Shuang Li, Norman Hendrich and Jianwei Zhang

        14:36
        Multimodal Perception for Robotic Pouring
        Hongzhuo Liang, Norman Hendrich and Jianwei Zhang

        14:48
        Physically Plausible Tracking and Reconstruction of Dynamic Objects
        Michael Strecke and Joerg Stueckler

        15:00
        Detecting Robotic Failures Using Visual Anomaly Detection
        Santosh Thoduka, Juergen Gall and Paul G. Plöger

        15:12
        Skill Generalisation and Experience Acquisition for Predicting and Avoiding Execution Failures
        Alex Mitrevski, Paul G. Plöger and Gerhard Lakemeyer

    • 3:30 PM – 4:00 PM
      Break 30m
    • 4:00 PM – 5:30 PM
      DGR Days: Session
      • 4:00 PM
        DGR Days Session - Human-Robot-Interaction & Production 1h 30m

        16:00
        Improving HRI through Robot Architecture Transparency
        Lukas Hindemith, Anna-Lisa Vollmer and Britta Wrede

        16:12
        Improving Safety in Human-Robot Collaboration by using Brain-Computer Interface Technology
        Jianzhi Lyu, Alexander Maye, Jianwei Zhang, Norman Hendrich and Andreas K. Engel

        16:24
        Flexibility in Human-Robot Teams
        Dominik Riedelbauch

        16:36
        Towards Active Visual SLAM
        Elia Bonetto and Aamir Ahmad

        16:48
        Smart Interaction System for Autonomous Bus in Pedestrian Zone
        Qazi Hamza Jan and Karsten Berns

        17:00
        On the Principle of Transference and its Impact on Robotic Innovation
        Bertold Bongardt

        17:12
        Sustainable Production Enabled by Remanufacturing
        Constantin Hofmann, Jan-Philipp Kaiser and Niclas Eschner