Professor Emeritus of Computational Linguistics

University of Toronto, Department of Computer Science

Research

Research on detecting Alzheimer’s disease, aphasia, and cognitive decline in writing and speech

Longitudinal detection of dementia through lexical and syntactic changes in writing: A case study of three British novelists: Le, Lancashire, Hirst, and Jokel (2011) carried out a large-scale longitudinal study of lexical and syntactic changes in language in Alzheimer's disease, applying a large number of measures to complete, fully parsed texts. The subjects were the British novelists Iris Murdoch (who died with Alzheimer's disease), Agatha Christie (who was suspected of it), and P.D. James (who aged healthily). The study avoided the limitations and deficiencies of Garrard et al.'s earlier study of Iris Murdoch. The results supported the hypothesis that signs of dementia can be found in diachronic analyses of patients' writings, and moreover led to new understanding of the work of the individual authors studied. In particular, it is probable that Agatha Christie indeed suffered from the onset of Alzheimer's while writing her last novels, and that Iris Murdoch exhibited a "trough" of relatively impoverished vocabulary and syntax in her writing in her late 40s and 50s that presaged her later dementia. (This work was carried out in collaboration with Ian Lancashire of the Department of English and Regina Jokel of the Department of Speech-Language Pathology and the Kunin-Lunenfeld Applied Research Unit, Baycrest Hospital.)
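
To illustrate the kind of diachronic lexical measure involved (though not the study's actual measures or code), the following sketch tracks a moving-average type-token ratio across a novelist's works ordered by publication date; the file names are hypothetical.

    # Illustrative sketch only: the study used many more lexical and syntactic
    # measures, computed over fully parsed texts.  File names are hypothetical.
    import re

    def moving_average_ttr(text, window=500):
        """Mean type-token ratio over consecutive fixed-size windows of tokens."""
        tokens = re.findall(r"[a-z']+", text.lower())
        ratios = [len(set(tokens[i:i + window])) / window
                  for i in range(0, len(tokens) - window + 1, window)]
        return sum(ratios) / len(ratios) if ratios else 0.0

    # Hypothetical files, one per novel, ordered by publication year.
    novels = [("novel_1950.txt", 1950), ("novel_1970.txt", 1970), ("novel_1995.txt", 1995)]
    for path, year in novels:
        with open(path, encoding="utf-8") as f:
            print(year, round(moving_average_ttr(f.read()), 3))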

Does cognitive decline attenuate an author's individual style? As part of their work on detecting the individual style of an author, Hirst and Feng (2012) considered the question of whether cognitive decline, while causing simplification of a writer's language, also leads to the decline of their individual style. The results were equivocal, as different frameworks yielded contrary results, but an SVM classifier was able to make age discriminations, or nearly so, for all three of the authors studied, thereby casting doubt on the underlying axiom that an author's essential style is invariant in the absence of cognitive decline.

Automated classification of primary progressive aphasia subtypes from narrative speech transcripts: Fraser et al. (2014a) presented a method for evaluating and classifying connected speech in primary progressive aphasia using computational techniques. Syntactic and semantic features were automatically extracted from transcriptions of narrative speech for three groups: semantic dementia (SD), progressive nonfluent aphasia (PNFA), and healthy controls. Features that varied significantly between the groups were used to train machine learning classifiers, achieving accuracies well above baseline on the three binary classification tasks. An analysis of the influential features showed that, in contrast to controls, both patient groups tended to use words that were higher in frequency (especially nouns for SD, and verbs for PNFA). The SD patients also tended to use words (especially nouns) that were higher in familiarity, and they produced fewer nouns, but more demonstratives and adverbs, than controls. The speech of the PNFA group tended to be slower and to contain shorter words than that of controls. The patient groups were distinguished from each other by the SD patients' relatively increased use of words that are high in frequency and/or familiarity.
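
A minimal sketch of this general pipeline (not the authors' code) is shown below: features that differ significantly between two groups are retained, and a classifier is evaluated on one binary task. The feature matrix and labels are placeholders assumed to have been extracted elsewhere.

    # Placeholder feature matrix and labels; in practice these would be the
    # syntactic and semantic features extracted from each speaker's transcript.
    import numpy as np
    from scipy.stats import ttest_ind
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 20))        # 40 speakers, 20 features (placeholder)
    y = np.array([0] * 20 + [1] * 20)    # e.g., 0 = control, 1 = SD

    # Keep only features that differ significantly between the two groups.
    _, p = ttest_ind(X[y == 0], X[y == 1], axis=0)
    X_sel = X[:, p < 0.05]

    if X_sel.shape[1] > 0:
        scores = cross_val_score(SVC(kernel="linear"), X_sel, y, cv=5)
        print("mean cross-validated accuracy:", round(scores.mean(), 3))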

Using statistical parsing to detect agrammatic aphasia: Fraser et al. (2014b) presented an automatic method for analyzing aphasic speech using surface-level parse features and context-free grammar production rules. Examining these features individually, they showed that the features can uncover many of the same characteristics of agrammatic language that have been reported in studies using manual analysis. Taken together, the parse features can be used to train a classifier to predict accurately whether or not an individual has aphasia. Furthermore, they found that the parse features can lead to higher classification accuracies than traditional measures of syntactic complexity. Finally, they found that a minimal amount of pre-processing leads to better results than using either the raw data or highly processed data.
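
The following sketch illustrates one such parse feature, assuming bracketed parse trees are already available from a statistical parser: context-free grammar production rules are counted so that their frequencies can serve as classifier features. The example trees are invented.

    # Assumes parses are available in bracketed form from a statistical parser.
    from collections import Counter
    from nltk import Tree

    parses = [
        "(S (NP (PRP She)) (VP (VBD went) (PP (IN to) (NP (DT the) (NN store)))))",
        "(S (NP (DT The) (NN boy)) (VP (VBZ runs)))",
    ]

    rule_counts = Counter()
    for bracketed in parses:
        tree = Tree.fromstring(bracketed)
        # Count only phrase-structure rules (e.g., NP -> DT NN), not lexical ones.
        rule_counts.update(str(prod) for prod in tree.productions()
                           if prod.is_nonlexical())

    for rule, count in rule_counts.most_common():
        print(count, rule)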

Comparison of different feature sets for identification of variants in progressive aphasia: Fraser et al. (2014c) used computational techniques to extract a large number of different features from the narrative speech of individuals with primary progressive aphasia (PPA). They examined several different types of features, including part-of-speech, complexity, context-free grammar, fluency, psycholinguistic, vocabulary-richness, and acoustic features, and discussed the circumstances under which each can be extracted. They considered the task of training a machine learning classifier to determine whether a participant is a control, or has the fluent or nonfluent variant of PPA. They first evaluated the individual feature sets on their classification accuracy, and then performed an ablation study to determine the optimal combination of feature sets. Finally, they ranked the features under four practical scenarios: given audio data only, given unsegmented transcripts only, given segmented transcripts only, and given both audio and segmented transcripts. They found that psycholinguistic features are highly discriminative in most cases, and that acoustic, context-free grammar, and part-of-speech features can also be important in some circumstances.
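
As a rough illustration of an ablation study over feature sets (not the study's actual features, data, or classifier), the sketch below measures the change in cross-validated accuracy when each hypothetical feature set is removed in turn.

    # Hypothetical feature sets and labels; the study used more sets, more data,
    # and different classifiers.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n = 60
    feature_sets = {
        "part_of_speech": rng.normal(size=(n, 10)),
        "psycholinguistic": rng.normal(size=(n, 5)),
        "acoustic": rng.normal(size=(n, 8)),
    }
    y = rng.integers(0, 3, size=n)       # control / fluent PPA / nonfluent PPA

    def accuracy(names):
        X = np.hstack([feature_sets[s] for s in names])
        clf = LogisticRegression(max_iter=1000)
        return cross_val_score(clf, X, y, cv=5).mean()

    full = accuracy(list(feature_sets))
    print(f"all feature sets: {full:.3f}")
    for name in feature_sets:
        reduced = accuracy([s for s in feature_sets if s != name])
        print(f"without {name}: {reduced:.3f} (change {reduced - full:+.3f})")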

Sentence segmentation of aphasic speech: Automatic analysis of impaired speech for screening or diagnosis is a growing research field; however, there are still many barriers to a fully automated approach. When automatic speech recognition is used to obtain the speech transcripts, sentence boundaries must be inserted before most measures of syntactic complexity can be computed. Fraser et al. (2015) considered how language impairments can affect segmentation methods, and compared the results of computing syntactic complexity metrics on automatically and manually segmented transcripts. They found that the important boundary indicators and the resulting segmentation accuracy can vary depending on the type of impairment observed, but that results on patient data are generally similar to those on control data. They also found that a number of syntactic complexity metrics are robust to the types of segmentation errors that are typically made.
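
The following toy sketch shows why some complexity metrics are sensitive to segmentation errors: a measure such as mean sentence length changes when an automatic segmenter misses a boundary. The transcripts are invented examples, not data from the study.

    # Invented transcripts: the same words, segmented manually versus by a
    # hypothetical automatic segmenter that misses one boundary.
    manual = ["the boy went to the store", "he bought some bread and milk"]
    automatic = ["the boy went to the store he bought some bread and milk"]

    def mean_sentence_length(sentences):
        lengths = [len(s.split()) for s in sentences]
        return sum(lengths) / len(lengths)

    print("manual segmentation:   ", mean_sentence_length(manual))     # 6.0
    print("automatic segmentation:", mean_sentence_length(automatic))  # 12.0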

Detecting semantic changes in Alzheimer’s disease with vector space models: Numerous studies have shown that language impairments, particularly semantic deficits, are evident in the narrative speech of people with Alzheimer’s disease from the earliest stages of the disease. Fraser and Hirst (2016) presented a novel technique for capturing those changes by comparing distributed word representations constructed separately from the speech of healthy controls and of Alzheimer’s patients. They investigated examples of words with different representations in the two spaces, and linked the semantic and contextual differences to findings from the Alzheimer’s disease literature.
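
A simplified, count-based sketch of the underlying idea is given below (the study itself used trained distributed representations): co-occurrence vectors for the same target word are built from two corpora over a shared context vocabulary and then compared. The sentences are invented.

    # Invented sentences standing in for the two groups' transcripts.
    import numpy as np
    from collections import Counter

    def context_vector(sentences, target, vocab, window=2):
        """Count words occurring within `window` tokens of `target`."""
        counts = Counter()
        for sentence in sentences:
            tokens = sentence.split()
            for i, token in enumerate(tokens):
                if token == target:
                    for c in tokens[max(0, i - window):i + window + 1]:
                        if c != target:
                            counts[c] += 1
        return np.array([counts[w] for w in vocab], dtype=float)

    controls = ["the dog chased the ball in the park",
                "she threw the ball to the dog"]
    patients = ["the dog went after the thing in the park",
                "she threw the thing to the dog"]

    vocab = sorted({w for s in controls + patients for w in s.split()})
    v_control = context_vector(controls, "dog", vocab)
    v_patient = context_vector(patients, "dog", vocab)
    cosine = v_control @ v_patient / (np.linalg.norm(v_control) * np.linalg.norm(v_patient))
    print("similarity of 'dog' contexts across the two groups:", round(cosine, 3))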

Detecting late-life depression in Alzheimer’s disease through analysis of speech and language: Alzheimer’s disease and depression share a number of symptoms and commonly occur together. Being able to differentiate between the two conditions is critical, as depression is generally treatable. Fraser, Rudzicz, and Hirst (2016) used linguistic analysis and machine learning to determine whether automated screening algorithms for Alzheimer's disease are affected by depression, and to detect when individuals diagnosed with Alzheimer's are also showing signs of depression. In the first case, they found that their automated Alzheimer's screening procedure did not produce false positives for individuals who have depression but are otherwise healthy. In the second case, they had moderate success in detecting signs of depression in Alzheimer's disease (accuracy = 0.658), but were not able to draw a strong conclusion about the features that are most informative for the classification.
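
The first check can be sketched roughly as follows, with placeholder data and a generic classifier standing in for the authors' screening procedure: an Alzheimer's-versus-control model is trained, then applied to depressed but otherwise healthy speakers to estimate its false-positive rate.

    # Placeholder data and a generic classifier; not the authors' screening system.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    X_train = rng.normal(size=(50, 12))      # AD and control speakers (placeholder features)
    y_train = rng.integers(0, 2, size=50)    # 1 = Alzheimer's, 0 = healthy control
    screen = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Speakers with depression but no Alzheimer's: how often are they flagged?
    X_depressed = rng.normal(size=(15, 12))
    false_positive_rate = screen.predict(X_depressed).mean()
    print("false-positive rate on depressed, non-AD speakers:", round(false_positive_rate, 3))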

Rhetorical structure and Alzheimer's disease: Abdalla, Rudzicz, and Hirst (2018) identified the effects of Alzheimer's disease on the structure of discourse, both in spontaneous speech and in literature. They used two data sets, DementiaBank and the Carolina Conversations Collection, to explore how Alzheimer's disease manifests itself in spontaneous speech by automatically extracting discourse relations according to Rhetorical Structure Theory. They also studied written novels, comparing authors with and without dementia using the same tools. They found that several discourse relations, especially those involving elaboration and attribution, are significant indicators of Alzheimer's disease in speech. Indicators of the disease in written text, by contrast, involve relations of logical contingency.
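
Assuming discourse relations have already been extracted by an RST parser, the following sketch compares the relative frequency of each relation type across two groups; the relation labels shown are invented examples, not study data.

    # Invented relation labels standing in for the output of an RST parser.
    from collections import Counter

    control_relations = ["Elaboration", "Attribution", "Joint", "Elaboration", "Contrast"]
    patient_relations = ["Joint", "Joint", "Elaboration", "Condition", "Joint"]

    def proportions(labels):
        counts = Counter(labels)
        total = sum(counts.values())
        return {relation: counts[relation] / total for relation in counts}

    p_control = proportions(control_relations)
    p_patient = proportions(patient_relations)
    for relation in sorted(set(p_control) | set(p_patient)):
        print(f"{relation:<12} control {p_control.get(relation, 0):.2f}   "
              f"patient {p_patient.get(relation, 0):.2f}")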

References

Abdalla, Mohamed; Rudzicz, Frank; and Hirst, Graeme. “Rhetorical structure and Alzheimer's disease.” Aphasiology, 32(1), 2018, 41–60. [PDF]

Fraser, Kathleen C.; Meltzer, Jed A.; Graham, Naida L.; Leonard, Carol; Hirst, Graeme; Black, Sandra E.; and Rochon, Elizabeth. “Automated classification of primary progressive aphasia subtypes from narrative speech transcripts.” Cortex, 55, June 2014a, 43–60. [PDF]

Fraser, Kathleen C.; Hirst, Graeme; Meltzer, Jed A.; Mack, Jennifer E.; and Thompson, Cynthia K. “Using statistical parsing to detect agrammatic aphasia.” Proceedings, BioNLP 2014 Workshop, Baltimore, June 2014b, 134–142. [PDF]

Fraser, Kathleen C.; Hirst, Graeme; Graham, Naida L.; Meltzer, Jed A.; Black, Sandra E.; and Rochon, Elizabeth. “Comparison of different feature sets for identification of variants in progressive aphasia.” Proceedings, Workshop on Computational Linguistics and Clinical Psychology, Baltimore, June 2014c, 17–26. [PDF]

Fraser, Kathleen C.; Ben-David, Naama; Hirst, Graeme; Graham, Naida L.; and Rochon, Elizabeth. “Sentence segmentation of aphasic speech.” Proceedings, 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Denver, June 2015, 862–871. [PDF]

Fraser, Kathleen C. and Hirst, Graeme. “Detecting semantic changes in Alzheimer's disease with vector space models.” Proceedings, Workshop on Resources and Processing of Linguistic and Extra-Linguistic Data from People with Various Forms of Cognitive/Psychiatric Impairments (Linköping Electronic Conference Proceedings vol. 128), Portorož, May 2016, 1–8. [PDF]

Fraser, Kathleen C.; Rudzicz, Frank; and Hirst, Graeme. “Detecting late-life depression in Alzheimer's disease through analysis of speech and language.” Proceedings, 3rd Workshop on Computational Linguistics and Clinical Psychology, San Diego, June 2016, 1–11. [PDF]

Hirst, Graeme and Feng, Vanessa Wei. “Changes in style in authors with Alzheimer's disease.” English Studies (special issue on stylometry and authorship attribution), 93(3), May 2012, 357–370. [PDF]

Le, Xuan; Lancashire, Ian; Hirst, Graeme; and Jokel, Regina. “Longitudinal detection of dementia through lexical and syntactic changes in writing: A case study of three British novelists.” Literary and Linguistic Computing, 26(4), December 2011, 435–461. [PDF]