
J.M. Keller - Recognition Technology in Eldercare

Recognition Technology, as defined by L. Zadeh, refers to current or future systems that have the potential to provide a "quantum jump in the capabilities of today's recognition systems", and includes systems that incorporate new sensors, novel signal processing, and soft computing. This talk concerns eldercare. Older adults are living longer and more fulfilled lives, and they desire to live as independently as possible in the home of their choice. However, independent lifestyles come with risks that are complicated by chronic illness and impairments in mobility, cognition, and the senses. In response to this trend, the University of Missouri has been investigating new approaches to caring for the elderly. This research focus has resulted in TigerPlace, an apartment complex for seniors that opened in Columbia, Missouri, in 2004. A joint venture between MU's Sinclair School of Nursing and Americare Systems Inc., TigerPlace is one of four projects granted state approval to operate under the "aging in place" model of caregiving. Under that model, residents who would otherwise be required by state law to live in nursing homes may have health services brought to them in their apartments instead.

Technology that can help seniors "age in place" has been spotlighted in recent years, spurred by the aging population. One focus of our research is the creation of intelligent systems that use sensors to uncover patterns of activity helpful to caregivers, especially targeting mobility and cognitive impairment. Details can be found at http://eldertech.missouri.edu. This is a large interdisciplinary effort including faculty, students and pre- and post-doctoral fellows from Engineering, Nursing, Health Management and Informatics, Medicine, Social Work, and Physical Therapy.

After a brief motivation, I will discuss how the engineering aspects of eldercare research fit squarely within the recognition technology framework. Sensors for eldercare have become more powerful and have decreased in price. I will describe a variety of instruments that we use, including very simple motion and temperature sensors, a bed restlessness/qualitative pulse and respiration unit, acoustic sensors, and low-cost video sensors. While the sensors themselves have evolved, our ability to collect and process their data has leapt forward. Wireless communication makes placing and recording from many sensors very easy. High-speed, low-cost, large-memory computers, together with advances in low-cost graphics processing units (GPUs), allow sophisticated algorithms to be developed for real-time deployment.
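As a small illustration of the kind of event stream such a wirelessly networked sensor suite produces, the sketch below logs time-stamped readings to a file for later mining. The record fields, sensor names, and file format are hypothetical assumptions for illustration, not the actual TigerPlace data format.

    import json
    import time
    from dataclasses import dataclass, asdict

    @dataclass
    class SensorEvent:
        """One reading from a simple in-home sensor (motion, temperature, bed unit)."""
        sensor_id: str      # e.g. "motion-kitchen" (hypothetical naming scheme)
        kind: str           # "motion", "temperature", "bed_restlessness", ...
        value: float        # 1.0 for a motion firing, degrees C, restlessness score, ...
        timestamp: float    # seconds since the epoch

    def log_events(event_source, path="events.jsonl"):
        """Append each incoming event to a JSON-lines file for later activity mining."""
        with open(path, "a") as out:
            for event in event_source:
                out.write(json.dumps(asdict(event)) + "\n")

    # Example: a toy event source standing in for the wireless sensor network.
    if __name__ == "__main__":
        demo = [SensorEvent("motion-kitchen", "motion", 1.0, time.time()),
                SensorEvent("temp-bedroom", "temperature", 21.5, time.time())]
        log_events(iter(demo))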

I will show applications of computational intelligence techniques, among others, to this sensing environment. The wealth of simple sensor data is just beginning to be mined for activity information on residents. Both feature display and feature processing will be highlighted.

A major effort of our group involves silhouette extraction and tracking with video sensors. Privacy is a crucial issue for the use of video in the sensor suite; hence, we have chosen to capture only silhouettes for subsequent processing, for example, to detect falls. Our methodology uses histograms of "color texture features" to segment the human and remove shadows while dynamically updating the background. A fuzzy logic system is used to "detach" objects held by the person, since those objects can distort the silhouette features used for activity analysis. Using two cameras, a 3-D "voxel person" is constructed (made possible by our high-speed GPU construction of silhouettes from image sequences), and sophisticated position and shape features are calculated. A hierarchical fuzzy logic system determines memberships in various states, and a second set of rules classifies activities, with an early focus on fall detection. Extensions to a fuzzy voxel person and to multiple stereo camera systems will be discussed.

A second major video activity deals with markerless motion capture for monitoring physical activity in elders; the goal is to provide information and feedback on exercise regimens. Two approaches will be shown: matching silhouettes from two cameras to 3-D body models, and extracting relevant features (spine and shoulder angles) directly from 2-D silhouettes. This markerless motion capture is the enabling technology for many of our monitoring projects.
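To make the hierarchical rule idea concrete, here is a minimal sketch of fuzzy state memberships and a fall rule driven by a single voxel-person feature (centroid height). The membership breakpoints, the two-layer rule, and the function names are illustrative assumptions, not the parameters or rule base of our deployed system.

    def trapmf(x, a, b, c, d):
        """Trapezoidal membership function: rises on [a, b], flat on [b, c], falls on [c, d]."""
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        if x < b:
            return (x - a) / (b - a)
        return (d - x) / (d - c)

    def state_memberships(centroid_height_m):
        """Memberships of the voxel person in three coarse postural states
        (illustrative breakpoints in metres, not the published parameters)."""
        return {
            "on_ground": trapmf(centroid_height_m, -0.1, 0.0, 0.3, 0.5),
            "in_between": trapmf(centroid_height_m, 0.3, 0.5, 0.8, 1.0),
            "upright": trapmf(centroid_height_m, 0.8, 1.0, 2.0, 2.2),
        }

    def fall_confidence(height_series_m):
        """Second rule layer: a fall is 'was upright recently AND is on the ground now'.
        min acts as fuzzy AND; max over the earlier window acts as fuzzy OR."""
        if len(height_series_m) < 2:
            return 0.0
        was_upright = max(state_memberships(h)["upright"] for h in height_series_m[:-1])
        now_down = state_memberships(height_series_m[-1])["on_ground"]
        return min(was_upright, now_down)

    # Example: centroid height dropping from ~1.6 m to ~0.2 m over a few frames.
    print(fall_confidence([1.6, 1.5, 0.9, 0.4, 0.2]))  # high confidence, ~1.0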
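Similarly, a rough illustration of pulling one such physical feature directly from a 2-D silhouette: a torso-lean (spine) angle estimated from the principal axis of the foreground pixels. This is a simplified stand-in under my own assumptions, not the feature extraction actually used in the motion-capture work.

    import numpy as np

    def spine_angle_degrees(silhouette):
        """Torso lean estimated from a binary silhouette (2-D array of 0/1),
        taken as the angle between the silhouette's principal axis and vertical."""
        rows, cols = np.nonzero(silhouette)
        if rows.size < 2:
            return 0.0
        # Principal axis of the foreground pixels via the 2x2 covariance matrix.
        coords = np.stack([cols, rows], axis=0).astype(float)
        cov = np.cov(coords)
        eigvals, eigvecs = np.linalg.eigh(cov)
        major = eigvecs[:, np.argmax(eigvals)]        # (dx, dy) of the longest axis
        angle_from_vertical = np.degrees(np.arctan2(abs(major[0]), abs(major[1])))
        return float(angle_from_vertical)

    # Example: a tall, thin, upright "person" should give an angle near 0 degrees.
    sil = np.zeros((60, 40), dtype=np.uint8)
    sil[10:55, 18:22] = 1
    print(spine_angle_degrees(sil))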