Smart wheelchairs: Visual perception pipeline to improve prediction of user intention

Research output: Chapter in Book/Report/Conference proceeding › Conference Proceeding (Non-Journal item)

Abstract

Wheelchairs aid people with physical disabilities by assisting with mobility, thus improving their independence. Autonomous assistance on wheelchairs is limited to prototypes that provide ‘smart functionality’ by completing tasks such as docking or terrain adaptation. The biggest constraint is navigating within dynamic environments, such as the home.

This paper describes a data pipeline that automates the wheelchair navigation process: classifying objects, estimating the user’s intention from a verbal command (e.g. “take me to the fridge”), and navigating towards a goal. Object locations are registered within a map whilst contextual metadata is calculated. A combination of object classification confidence and object instance counts is used to calculate the uniqueness of all identifiable objects, thus assisting in predicting the user’s intention (a sketch of one such scoring scheme follows below). For example, if a “go to the fridge” request is received, the wheelchair knows that the fridge is located within the kitchen, and therefore drives to the kitchen and then to the fridge.
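
The abstract does not give the uniqueness formula; the following is a minimal Python sketch of one plausible scheme, assuming (as a labelled assumption, not the authors’ method) that uniqueness rises with mean classification confidence and falls with the number of registered instances of the same class:

```python
from collections import defaultdict

def uniqueness_scores(detections):
    """Score how uniquely each object class identifies a navigation goal.

    `detections` is a list of (class_label, confidence) pairs drawn from
    the registered map. A class seen once with high confidence (e.g.
    'fridge') scores higher than a frequently seen class (e.g. 'chair').
    """
    by_class = defaultdict(list)
    for label, confidence in detections:
        by_class[label].append(confidence)

    scores = {}
    for label, confs in by_class.items():
        mean_conf = sum(confs) / len(confs)
        # Hypothetical combination: mean confidence divided by the
        # instance count, so duplicated objects become less unique.
        scores[label] = mean_conf / len(confs)
    return scores

registered = [("fridge", 0.92), ("chair", 0.88), ("chair", 0.85),
              ("chair", 0.80), ("sink", 0.75)]
print(uniqueness_scores(registered))
# The single high-confidence 'fridge' outranks the three 'chair' instances.
```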

Results show that utilising contextual data reduces the likelihood of false-positive object detections being registered by the navigation pipeline, and thus the system is more likely to interpret the user’s intention accurately.
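
The abstract does not detail the contextual check; one way such filtering might work is sketched below, assuming each detection carries an estimated 2D map position and each room is a labelled axis-aligned region (the room table, class-to-room prior, and confidence threshold are all illustrative assumptions, not taken from the paper):

```python
# Axis-aligned room regions on the 2D map: (x_min, y_min, x_max, y_max).
ROOMS = {
    "kitchen": (0.0, 0.0, 4.0, 3.0),
    "lounge": (4.0, 0.0, 9.0, 5.0),
}

# Illustrative prior: rooms in which each object class is plausible.
EXPECTED_ROOMS = {"fridge": {"kitchen"}, "sofa": {"lounge"}}

def room_of(x, y):
    """Return the name of the room containing point (x, y), if any."""
    for name, (x0, y0, x1, y1) in ROOMS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def accept_detection(label, confidence, x, y, threshold=0.6):
    """Register a detection only if it is confident enough and lies in
    a room where that object class is plausible."""
    if confidence < threshold:
        return False
    expected = EXPECTED_ROOMS.get(label)
    return expected is None or room_of(x, y) in expected

# A 'fridge' detected in the lounge is rejected as a likely false positive.
print(accept_detection("fridge", 0.9, 1.5, 1.0))  # True  (kitchen)
print(accept_detection("fridge", 0.9, 6.0, 2.0))  # False (lounge)
```
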
Original language: English
Title of host publication: UKRAS22 Conference “Robotics for Unconstrained Environments” Proceedings
Publisher: UK-RAS Network
Pages: 18-19
Number of pages: 2
DOIs
Publication status: Published - 26 Aug 2022

Keywords

  • Robotics
  • semantic mapping
  • wheelchair
  • ROS
