Investigating Actual and Perceived Videotext Complexity in Second Language Video Comprehension
The overarching aim of my research is to understand what makes a video complex for (language) learners. Video is a dynamic, interactive, and complex multimodal artefact in which meanings are constructed through the interaction of different (semiotic) communicative modes (linguistic, visual, and acoustic) across spatial-temporal dimensions. Language is the most expressive mode, yet it is only one among many communicative modes in video. Visual features (e.g., images, diagrams, and the speaker's facial expressions and gestures) and acoustic features (e.g., pitch, rhythm, and the speaker's accent) also contribute to meaning construction in videos.
The complex interactions of these modes can be better understood through systematic, multilayered analyses of how meanings are constructed within and across modes. Using various analytic tools (advanced Natural Language Processing, data mining models, and learner-centric visual complexity measures), I will first investigate which video features (e.g., linguistic, visual, and acoustic features) contribute to the complexity of meaning comprehension for language learners. I will also develop an interactive online tool for video data visualization that can be used for educational and research purposes.
A secondary aim of my research is to develop machine learning models that can predict the complexity level of a video from the linguistic, visual, and acoustic features found to be difficult for language learners.
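To illustrate the general idea (this is a minimal sketch, not the actual models under development), a video's multimodal features could be normalized and combined into a single complexity score that is then mapped to a coarse complexity band. All feature names and weights below are hypothetical placeholders for whatever features the research ultimately identifies as predictive.

```python
# Hypothetical sketch: scoring video complexity from a few normalized
# multimodal features. Feature names and weights are illustrative only,
# not the features or models used in the actual research.

def complexity_score(features, weights=None):
    """Weighted linear combination of feature values assumed in [0, 1]."""
    if weights is None:
        weights = {
            "speech_rate": 0.4,          # linguistic: words per second, normalized
            "lexical_diversity": 0.3,    # linguistic: type-token ratio of transcript
            "visual_change_rate": 0.2,   # visual: scene changes per minute, normalized
            "accent_unfamiliarity": 0.1, # acoustic: placeholder accent measure
        }
    # Missing features default to 0.0 (treated as contributing no difficulty).
    return sum(w * features.get(name, 0.0) for name, w in weights.items())

def complexity_band(score, low=0.33, high=0.66):
    """Map a continuous score to a coarse complexity label."""
    if score < low:
        return "low"
    if score < high:
        return "medium"
    return "high"

example = {
    "speech_rate": 0.8,
    "lexical_diversity": 0.6,
    "visual_change_rate": 0.5,
    "accent_unfamiliarity": 0.2,
}
score = complexity_score(example)  # 0.4*0.8 + 0.3*0.6 + 0.2*0.5 + 0.1*0.2 = 0.62
band = complexity_band(score)      # "medium"
```

In practice, such hand-set weights would be replaced by parameters learned from learner comprehension data, and the linear combination by a trained classifier or regressor; the sketch only shows the feature-to-score pipeline shape.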
Emad Alghamdi, PhD Candidate, School of Languages and Linguistics