Real-world Use Cases Normative Requirements Repository

DressAssist (Assisted-Care Dressing)



A carebot is an assistive and supportive robotic tool used to care for the elderly, children, and people with disabilities (typically physical or cognitive). The carebot is usually deployed in the user’s home or at a care home, either working alongside human caregivers or on its own. Its primary role is to aid the user in dressing and to provide routine care and support functions, such as reminding the user to take their medication. It may also be a source of companionship and comfort to the user, and is expected to engage in social interaction with the user by communicating, listening, responding, and reacting, and to make certain normatively relevant decisions and judgements. In our use case the primary role of the agent is to dress the user, with a secondary function of monitoring the user’s well-being. The instantiation of SLEEC requirements allows the agent to be, in part, SLEEC-sensitive and, in certain crucial instances, legally compliant.

Developments in machine learning and control engineering promise a world in which autonomous agents can provide care and support for individuals in their daily lives (Zhang et al., 2019; Coşar et al., 2020). Jevtić et al. (2019) describe the development of such a carebot: a personalised robot with a wide range of physical characteristics and abilities that can perform assistive dressing functions in close physical interaction with users. Although a human carer may still be required, the autonomous agent could allow increased reach, enhance existing activities, and enable greater multitasking (Townsend et al., 2022). Robots of this type also demonstrate a degree of sociability and emotional perception, such as engaging in high-level interactive dialogue, responsiveness, gesturing, and voice recognition, which serve to 'lubricate' the human-robot interface (Breazeal, 2003). This requires the agent to execute decisions, expressed as SLEEC rules, derived from an array of reasoned and justifiable alternatives.

The agent is equipped with moving actuators that enable it to pick up and manipulate items of clothing, and with multiple cameras that capture video imagery to determine user pose and limb trajectory. The agent has a voice-synthesis and emotion-recognition system to interpret verbal and non-verbal commands and to communicate with the user. Interaction with the user is also possible by means of a touch screen. The audio-visual components may also be leveraged to monitor user well-being through machine-learning components that detect distress in speech patterns as well as facial expressions. The user wears a smartwatch that provides biometric information and enables the detection of falls.
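As an illustration of how these sensing capabilities could feed the normative layer, a requirement for the dressing task might be sketched in a SLEEC-style rule notation. This is a minimal, hypothetical sketch: the event and measure names below are illustrative, not taken from the repository's actual rule set.

```
def_start
    event DressingStarted
    event PauseDressing
    event CallSupport
    measure userDistressed : boolean   // from speech/facial-expression monitoring
    measure userFallen : boolean       // from smartwatch fall detection
def_end

rule_start
    // If distress is detected during dressing, pause the task.
    R1 when DressingStarted and {userDistressed} then PauseDressing

    // If a fall is detected, summon human support.
    R2 when DressingStarted and {userFallen} then CallSupport
rule_end
```

Rules of this kind make the link between the agent's sensors (distress detection, fall detection) and its normatively relevant responses explicit and checkable.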

Breazeal, C. (2003). Emotion and sociable humanoid robots. International Journal of Human-Computer Studies, 59(1-2), 119-155.

Coşar, S., Fernandez-Carmona, M., Agrigoroaie, R., et al. (2020). ENRICHME: Perception and interaction of an assistive robot for the elderly at home. International Journal of Social Robotics, 12(3), 779-805.

Jevtić, A., Flores Valle, A., Alenyà, G., et al. (2019). Personalized robot assistant for support in dressing. IEEE Transactions on Cognitive and Developmental Systems, 11(3), 363-374.

Townsend, B. A., Paterson, C., Arvind, T. T., et al. (2022). From Pluralistic Normative Principles to Autonomous-Agent Rules. Minds and Machines, 1-33.

Zhang, F., Cully, A., & Demiris, Y. (2019). Probabilistic real-time user posture tracking for personalized robot-assisted dressing. IEEE Transactions on Robotics, 35(4), 873–888.