Author: Laschowski, Brokoslaw; McNally, William; Wong, Alexander; McPhee, John
Title: Environment Classification for Robotic Leg Prostheses and Exoskeletons using Deep Convolutional Neural Networks
Cord-id: xkhn3t81
Document date: 2021-06-25
                    
Document: Robotic leg prostheses and exoskeletons can provide powered locomotor assistance to older adults and/or persons with physical disabilities. However, the current locomotion mode recognition systems being developed for intelligent high-level control and decision-making use mechanical, inertial, and/or neuromuscular data, which inherently have limited prediction horizons (i.e., analogous to walking blindfolded). Inspired by the human vision-locomotor control system, we designed and evaluated an advanced environment classification system that uses computer vision and deep learning to predict the oncoming walking environment prior to physical interaction, thereby allowing for more accurate and robust locomotion mode transitions. In this study, we first reviewed the development of the ExoNet database – the largest and most diverse open-source dataset of wearable camera images of indoor and outdoor real-world walking environments, which were annotated using a hierarchical labelling architecture. We then trained and tested over a dozen state-of-the-art deep convolutional neural networks (CNNs) on the ExoNet database for large-scale image classification of the walking environments, including: EfficientNetB0, InceptionV3, MobileNet, MobileNetV2, VGG16, VGG19, Xception, ResNet50, ResNet101, ResNet152, DenseNet121, DenseNet169, and DenseNet201. Lastly, we quantitatively compared the benchmarked CNN architectures and their environment classification predictions using an operational metric called NetScore, which balances image classification accuracy against computational and memory storage requirements (i.e., important for onboard real-time inference).
Although we designed this environment classification system to support the development of next-generation environment-adaptive locomotor control systems for robotic prostheses and exoskeletons, applications could extend to humanoids, autonomous legged robots, powered wheelchairs, and assistive devices for persons with visual impairments.
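The abstract describes a hierarchical labelling architecture for the ExoNet annotations. A minimal sketch of the idea is given below; the parent/child class names are placeholders chosen for illustration, not the actual ExoNet taxonomy.

```python
# Hypothetical two-level hierarchy illustrating hierarchical labelling;
# the class names here are placeholders, not the real ExoNet labels.
HIERARCHY = {
    "level-ground": ["terrain-only", "door", "obstacle"],
    "stairs": ["incline", "decline"],
}

def flat_labels(hierarchy):
    """Flatten parent/child pairs into 'parent/child' class strings,
    which a standard image classifier can then treat as flat classes."""
    return [
        f"{parent}/{child}"
        for parent, children in hierarchy.items()
        for child in children
    ]
```

Flattening the hierarchy this way lets an ordinary softmax classifier be trained on the leaf classes while the parent level remains recoverable by splitting the label string.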
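The NetScore metric mentioned above can be sketched as a single formula. Assuming the commonly published form Ω = 20·log10(a^α / (p^β · m^γ)), with a the top-1 accuracy in percent, p the parameter count in millions, m the multiply-accumulate count in millions, and default exponents α = 2, β = 0.5, γ = 0.5 (these defaults are an assumption, not taken from this paper):

```python
import math

def netscore(accuracy, params_m, macs_m, alpha=2.0, beta=0.5, gamma=0.5):
    """NetScore: Omega = 20 * log10(a^alpha / (p^beta * m^gamma)).

    accuracy : top-1 accuracy in percent
    params_m : parameter count, in millions
    macs_m   : multiply-accumulate operations, in millions
    Higher is better: accuracy is rewarded, while parameter and
    compute costs are penalized.
    """
    return 20.0 * math.log10(
        accuracy**alpha / (params_m**beta * macs_m**gamma)
    )
```

For example, with roughly published ImageNet figures (illustrative values only), a compact network like MobileNetV2 (~71.8%, ~3.5M parameters, ~300M MACs) scores far higher than VGG16 (~71.5%, ~138M parameters, ~15,500M MACs) despite near-identical accuracy, which is exactly the trade-off the metric is designed to expose for onboard real-time inference.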