Description
The success and rapid pace of Machine Learning (ML) over the past decade were also
enabled by modern gradient descent optimizers embedded in ML frameworks such as
TensorFlow. In the context of a doctoral research project, we investigate how
these optimizers can be used directly, outside the scope of neural networks.
This approach holds the potential of optimizing explainable models with only a
few model parameters, from which properties for direct physical explanation and
interpretation, such as velocity, acceleration, or jerk, can be derived. This is
highly beneficial for use in the field of mechatronics. However, while modern
gradient descent optimizers shipped with ML frameworks perform well in neural
networks, results show that most optimizers have limited capabilities when
applied directly to piecewise polynomial (PP) models. Domain-specific model
requirements such as C^k-continuity, acceleration or jerk limitation, and
spectral or energy optimization call for appropriate loss functions, novel
algorithms, and regularization techniques to improve optimizer performance.
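To make the setting concrete, the following minimal sketch (not the project's actual implementation) illustrates how a TensorFlow optimizer such as Adam can be applied directly to a handful of explicit model parameters instead of neural-network weights. The cubic trajectory model, the sinusoidal target, and all hyperparameters are illustrative assumptions.

```python
import tensorflow as tf

# Illustrative explainable model: cubic trajectory s(t) = c0 + c1*t + c2*t^2 + c3*t^3,
# whose analytic derivatives directly yield velocity, acceleration and jerk.
coeffs = tf.Variable(tf.zeros(4))

t = tf.linspace(0.0, 1.0, 100)     # sample points on a normalized time axis
target = tf.sin(2.0 * t)           # hypothetical target trajectory

optimizer = tf.keras.optimizers.Adam(learning_rate=0.05)

def train_step():
    with tf.GradientTape() as tape:
        basis = tf.stack([tf.ones_like(t), t, t**2, t**3], axis=1)  # monomial basis
        s = tf.linalg.matvec(basis, coeffs)                         # model evaluation
        loss = tf.reduce_mean((s - target) ** 2)                    # plain L2 fitting loss
    grads = tape.gradient(loss, [coeffs])
    optimizer.apply_gradients(zip(grads, [coeffs]))
    return loss

for _ in range(500):
    train_step()
```

The optimizer only requires gradients of a scalar loss with respect to a list of `tf.Variable`s, so it is agnostic to whether those variables are network weights or the four coefficients of an explicit polynomial.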
In this context, we investigate PP models as they occur (and are required) in
1D trajectory planning tasks in mechatronics. Using TensorFlow optimizers, we
optimize our PP model towards multi-target loss functions suitable for fitting
C^k-continuous PP functions that can be deployed in an electronic cam
approximation setting. We enhance the capabilities of our PP base model by using
an orthogonal Chebyshev basis together with a novel regularization method that
improves convergence towards the approximation and continuity optimization
targets. We see a possible application of this approach in Deep Reinforcement
Learning applied to Control Theory: by replacing the black box of a neural
network with an explainable PP model, we foster the utility of Reinforcement
Learning in the design of cyber-physical control systems.
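As a rough illustration of the multi-target loss idea (an assumption, not the abstract's actual loss formulation or regularization method), the sketch below combines a fitting term with a C^1-continuity penalty at the knot of a two-segment PP model expressed in a Chebyshev basis; the target function and weights are hypothetical.

```python
import tensorflow as tf

# Two PP segments on t in [-1, 0] and [0, 1], each parameterized by 4 Chebyshev
# coefficients on the segment-local coordinate x in [-1, 1].
c_left = tf.Variable(tf.zeros(4))
c_right = tf.Variable(tf.zeros(4))
params = [c_left, c_right]

def cheb_eval(c, x):
    # Chebyshev basis T_0..T_3 evaluated at x.
    T = tf.stack([tf.ones_like(x), x, 2*x**2 - 1, 4*x**3 - 3*x], axis=-1)
    return tf.linalg.matvec(T, c)

def cheb_deriv(c, x):
    # First derivatives of T_0..T_3 with respect to x.
    dT = tf.stack([tf.zeros_like(x), tf.ones_like(x), 4*x, 12*x**2 - 3], axis=-1)
    return tf.linalg.matvec(dT, c)

# Hypothetical cam-like target, sampled per segment (t = (x -/+ 1) / 2).
x = tf.linspace(-1.0, 1.0, 50)
target_left = tf.sin(0.5 * (x - 1.0))
target_right = tf.sin(0.5 * (x + 1.0))

optimizer = tf.keras.optimizers.Adam(learning_rate=0.05)
w_cont = 10.0   # illustrative weight of the continuity targets

def train_step():
    with tf.GradientTape() as tape:
        fit = tf.reduce_mean((cheb_eval(c_left, x) - target_left) ** 2) \
            + tf.reduce_mean((cheb_eval(c_right, x) - target_right) ** 2)
        # C^1 continuity at the knot: the right end of the left segment (x = +1)
        # must match the left end of the right segment (x = -1) in value and slope.
        xl, xr = tf.constant([1.0]), tf.constant([-1.0])
        c0 = (cheb_eval(c_left, xl) - cheb_eval(c_right, xr)) ** 2
        c1 = (cheb_deriv(c_left, xl) - cheb_deriv(c_right, xr)) ** 2
        loss = fit + w_cont * tf.reduce_sum(c0 + c1)
    grads = tape.gradient(loss, params)
    optimizer.apply_gradients(zip(grads, params))
    return loss

for _ in range(1000):
    train_step()
```

Higher-order continuity (C^2, C^3 for acceleration and jerk), limit constraints, or spectral terms would enter the same way, as additional weighted penalty terms in the scalar loss.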
| Possible contributed talk | Yes |
| --- | --- |
| Are you a student? | Yes |