Enabling intuitive control of lower-limb assistive devices and their assessment through transverse sonomyographic sensing and machine learning
Assistive device technology has improved significantly within the last decade, primarily due to the introduction and advancement of microprocessor-controlled and powered (i.e., robotic) prostheses and orthoses. However, a gap remains between the mechanical capabilities of these devices and the biomechanical abilities of their wearers. There is a critical need for robust human-machine interfaces that accurately sense various forms of locomotion and interact intuitively with the human wearer during the range of activities that comprise daily life. Such interfaces rely on continuous estimation of user intent, derived from wearable or embedded sensing. Sonomyography, the evaluation of real-time dynamic ultrasound (US) imaging of skeletal muscle, has recently been proposed as a new sensing modality for assistive device control. Compared to surface electromyography (EMG), a more conventional muscle-based sensing modality that detects changes in the neural excitation of superficial muscle, sonomyography can track deformation of skeletal muscle from superficial to deep tissue, and can sense these changes in multiple muscle groups with a single sensor. The overall objective of this research is to benchmark and demonstrate the feasibility of using transverse sonomyographic sensing as an input to various machine learning models for intuitive control systems and assessment methods of wearable lower-limb assistive devices. Following a discussion of the relevant prior work that led to this research in Chapter 1, in Chapter 2 we benchmark sonomyography against surface EMG for the continuous estimation of discrete ambulation tasks and analyze the contribution of superficial and deep US features. Next, in Chapters 3 and 4, sonomyographic sensing is compared to surface EMG sensing (as well as their fusion) for continuous estimation of joint kinematics and kinetics during various ambulation tasks.
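To make the continuous-estimation setup concrete, the sketch below fits a joint-angle regressor from frame-wise sonomyographic features using closed-form ridge regression. The feature dimensions, regularization strength, and synthetic data are illustrative assumptions, not values or data from this dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 16 per-frame US features (e.g., tissue-deformation
# descriptors) mapped to a continuous joint angle. All values are synthetic.
n_samples, n_features = 500, 16
X = rng.normal(size=(n_samples, n_features))       # simulated US feature frames
true_w = rng.normal(size=n_features)
y = X @ true_w + 0.05 * rng.normal(size=n_samples)  # simulated joint-angle trace

# Closed-form ridge regression: w = (X^T X + lam * I)^{-1} X^T y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

y_hat = X @ w
rmse = np.sqrt(np.mean((y - y_hat) ** 2))
print(f"training RMSE: {rmse:.3f}")
```

In practice, any continuous regressor (linear, Gaussian process, neural network) could sit in this role; ridge regression is used here only because it is compact and deterministic.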
In Chapter 5, sonomyographic sensing is used as an input to forward models that predict future trajectories of limb motion and torque during ambulation. Subsequently, in Chapter 6, we compare sonomyography, mechanical-based sensing, and their fusion as tools to recognize how individuals with and without lower-limb amputation use their lower extremities to complete widely varying ambulation tasks. Finally, the overall conclusions of this work are discussed in Chapter 7. This research can improve the quality of life of individuals with mobility limitations by developing and testing new human-machine interfaces that are robust and agnostic to specific assistive devices and ambulation tasks. Our findings can therefore help translate robotic assistive technologies from lab-based settings into the daily lives of individuals.
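The forward-model idea in Chapter 5 can be illustrated with a minimal sketch: predict a joint-angle value several frames ahead from a short history window via least squares. The sinusoidal signal, window length, and prediction horizon below are synthetic assumptions standing in for a cyclic gait trace, not the dissertation's models or data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic cyclic "gait" signal: one stride per 100 frames, plus noise.
t = np.arange(2000)
angle = np.sin(2 * np.pi * t / 100) + 0.01 * rng.normal(size=t.size)

window, horizon = 10, 5  # use 10 past frames to predict 5 frames ahead
n = len(angle) - window - horizon + 1

# Each row holds a history window; the target is the value `horizon` frames
# beyond the end of that window.
X = np.stack([angle[i : i + window] for i in range(n)])
y = angle[window + horizon - 1 :]

w, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ w
rmse = np.sqrt(np.mean((y - pred) ** 2))
print(f"5-frames-ahead RMSE: {rmse:.4f}")
```

A linear autoregressive predictor recovers a noisy sinusoid almost exactly; real limb trajectories are less regular, which is what motivates richer forward models.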