Selected article for: "accuracy evaluate and compare accuracy evaluate"

Author: Kandukuri, Rama; Achterhold, Jan; Moeller, Michael; Stueckler, Joerg
Title: Learning to Identify Physical Parameters from Video Using Differentiable Physics
  • Cord-id: 4ad17uua
  • Document date: 2021-03-17
    Snippet: Video representation learning has recently attracted attention in computer vision due to its applications for activity and scene forecasting or vision-based planning and control. Video prediction models often learn a latent representation of video which is encoded from input frames and decoded back into images. Even when conditioned on actions, purely deep learning based architectures typically lack a physically interpretable latent space. In this study, we use a differentiable physics engine within an action-conditional video representation network to learn a physical latent representation.
    Document: Video representation learning has recently attracted attention in computer vision due to its applications for activity and scene forecasting or vision-based planning and control. Video prediction models often learn a latent representation of video which is encoded from input frames and decoded back into images. Even when conditioned on actions, purely deep learning based architectures typically lack a physically interpretable latent space. In this study, we use a differentiable physics engine within an action-conditional video representation network to learn a physical latent representation. We propose supervised and self-supervised learning methods to train our network and identify physical properties. The latter uses spatial transformers to decode physical states back into images. The simulation scenarios in our experiments comprise pushing, sliding and colliding objects, for which we also analyze the observability of the physical properties. In experiments we demonstrate that our network can learn to encode images and identify physical properties like mass and friction from videos and action sequences in the simulated scenarios. We evaluate the accuracy of our supervised and self-supervised methods and compare it with a system identification baseline which directly learns from state trajectories. We also demonstrate the ability of our method to predict future video frames from input images and actions. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this chapter (10.1007/978-3-030-71278-5_4) contains supplementary material, which is available to authorized users.
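    The abstract's core idea — identifying physical parameters such as friction by backpropagating through a physics simulation — can be illustrated with a minimal sketch. This is not the authors' implementation: the sliding-block dynamics, all parameter values, and the use of central finite differences in place of a true differentiable physics engine are illustrative assumptions.

    ```python
    # Minimal sketch of system identification through a physics rollout,
    # in the spirit of the paper's sliding-object scenario. All names and
    # constants here are illustrative assumptions; a differentiable physics
    # engine would supply exact gradients where we use finite differences.

    def simulate(mu, steps=50, dt=0.05, mass=1.0, force=2.0, g=9.81):
        """Roll out positions of a block pushed by a constant force
        against kinetic friction mu (velocity clamped non-negative)."""
        x, v, xs = 0.0, 0.0, []
        for _ in range(steps):
            a = force / mass - mu * g * (1.0 if v > 0 else 0.0)
            v = max(0.0, v + dt * a)
            x += dt * v
            xs.append(x)
        return xs

    def loss(mu, target):
        # Squared error between the rollout and the observed trajectory.
        return sum((a - b) ** 2 for a, b in zip(simulate(mu), target))

    # "Observed" trajectory generated with the true friction coefficient.
    true_mu = 0.12
    target = simulate(true_mu)

    # Identify mu by gradient descent; the gradient of the loss w.r.t. mu
    # is approximated by central finite differences.
    mu, lr, eps = 0.05, 5e-5, 1e-5
    for _ in range(200):
        grad = (loss(mu + eps, target) - loss(mu - eps, target)) / (2 * eps)
        mu -= lr * grad

    print(round(mu, 3))  # converges toward true_mu = 0.12
    ```

    In the paper this inner simulation is a differentiable physics engine embedded in a video network, so gradients flow from a pixel-space loss (via spatial-transformer decoding) rather than from observed states as in this sketch.
    
    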

    Related documents (co-phrase search):
    • accuracy degree and long short term memory: 1
    • additional force and long short term: 1
    • long short term and loss function: 1, 2, 3, 4, 5, 6, 7, 8, 9
    • long short term and low dimensional: 1
    • long short term and lstm convolution: 1, 2, 3
    • long short term and lstm latent representation: 1
    • long short term memory and loss function: 1, 2, 3, 4
    • long short term memory and low dimensional: 1
    • long short term memory and lstm convolution: 1, 2, 3
    • long short term memory and lstm latent representation: 1
    • loss function and low dimensional: 1
    • low dimensional and lstm latent representation: 1