Digital Technologies and Artificial Intelligence in The Multimodal Design of Sign Language Education


Diyana Georgieva

Nikolay Tsankov

Trakia University

https://doi.org/10.53656/str2025-2-1-dig

Abstract. In recent decades, technological advances have inspired tangible changes in the development of applications and algorithms in response to the communicative needs of D/deaf people, the removal of social and communicative barriers with hearing people, and the blurring of linguistic boundaries between the two populations. The ubiquitous presence of multimodal forms of cognitive, communicative, and information input in the digital space renders these resources crucial to education. The article presents a meta-analysis of a large pool of publications with the aim of proposing a systematic, empirically based operationalization of sign language and of its teaching and study by D/deaf and hearing children, pupils, and students in a multimodal educational environment, designed and shaped through the implementation of digital infrastructure, of which digital technologies and artificial intelligence are an integral part. The collected and systematically analyzed data reveal the attributes of a wide range of innovations for operating with the unique code of natural human language, realized through the precise combination of visual, kinetic, and spatial modalities.

Keywords: multimodality; sign language; digital infrastructure; digital technologies; artificial intelligence

 

Introduction

The doctrine of multimodality postulates that discourse is composed of smaller parts called modes. Therefore, multimodal education involves combining different modes into harmonious ensembles to achieve educational goals (Kress 2000).

The digital revolution at the beginning of the 21st century has brought about a profound transformation in primary, secondary, and higher education institutions worldwide. Although these changes are influenced by a variety of geopolitical and economic factors, the emerging phenomenon carries the characteristics of a paradigmatic shift that has irreversibly impacted all subjects within the global educational system (Peters & Jandric 2015).

Following a document of international significance that set out activities for the development of digital education in the period 2018 – 2020, covering the priority areas of (1) better use of digital technologies for teaching and learning; (2) development of digital competencies and skills; and (3) improving education through better data analysis and forecasting, a Digital Education Action Plan for the period 2021 – 2027 has been developed[1]. This plan focuses on adapting education to the digital age and outlines the European Commission’s vision for high-quality, inclusive, and accessible digital education in Europe. It promotes the development of a highly effective digital education ecosystem, taking into account the need for: (1) infrastructure, connectivity, and digital technical equipment; (2) effective planning and development of digital capacity, including modern organizational capabilities; (3) teachers who are competent and confident in the use of digital technologies, along with all stakeholders involved in providing high-quality education and training; and (4) high-quality learning content, user-friendly tools, and secure platforms that respect privacy and ethical standards. In the educational space, the concept of the central position of multimodality in the intensified digital model for the education of D/deaf people is gaining momentum. Although multimodality supports a broader understanding of educational discourse, one that includes the ecological connections between ethics and pedagogy, people and empiricism (Skyer 2022), the education of D/deaf people and multimodal research intersect in a shared interest in sign languages.

The need to introduce technologies for the learning and use of sign language (SL) in order to support the communication and social inclusion of D/deaf people is an issue that has attracted sustained interest from the research community. While the development of such technologies remains a real challenge, given the existence of multiple sign languages and the lack of large corpora of annotated data, rapid advances in artificial intelligence (AI) and machine learning have played a significant role in automating and streamlining them.

The operationalization of SL is a multidisciplinary research area whose perimeter includes image recognition, computer vision, natural language processing, and sign and computational linguistics (Aran et al. 2009). The multifaceted nature of the problem is driven by the complexity of the visual analysis of spatial-kinetic signs on the one hand, and by their unique multimodal nature on the other. Despite their fully developed structure of subsystems – phonology, vocabulary, morphology, and syntax – visual languages differ from spoken ones in their mechanisms of expression and perception: the human hand simultaneously assumes a linguistically significant configuration and performs a certain movement towards a fixed location, which is diametrically different from the sequential, linear manner in which speech sounds appear in spoken words. The linguistic characteristics of sign languages also contrast with those of spoken languages in their specific linguistic elements (forming the prosodic component) that influence the context of utterances, such as head movements and facial expressions, which complement the lexical meanings conveyed by the movements of the articulators – the hands (Liddell 2003; Stokoe 2005).

Even a cursory look at the visual-spatial sign system reveals the challenges faced by the process of its automatic parameterization: the phonology of the language operates on hand shape, the location of sign articulation, movement, and palm orientation; morphology uses directionality, aspect (verb type), and numerical incorporation; and syntax uses spatial localization and coherence as well as facial expression. The entire message is contained in a precisely synchronized exposition of manual segments (shapes, locations, hand movements) and non-manual markers (facial expression, head/shoulder/forearm movements). Woven into a rich amalgam, manual and non-manual elements give rise to an internal multimodality of language. These facts lead to the generalization that specifying the parameters of SL is a complex task that involves the detailed identification of all the building blocks comprising it: signs as lexical units functioning through specific segments; gesticulations, facial areas, and body parts; and facial expressive features (Ong & Ranganath 2005).
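To make the parameterization described above concrete, the following minimal sketch (in Python; all class and field names are our illustrative assumptions, not a scheme from the cited literature) shows one way the manual and non-manual building blocks of a sign could be encoded as a data structure:

```python
# Illustrative encoding of a sign's parameters (all names are hypothetical).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ManualSegment:
    handshape: str          # linguistically significant hand configuration
    location: str           # place of articulation, e.g. "chin", "neutral space"
    movement: str           # movement type, e.g. "arc", "straight"
    palm_orientation: str   # direction the palm faces

@dataclass
class NonManualMarkers:
    facial_expression: Optional[str] = None   # e.g. "raised eyebrows"
    head_movement: Optional[str] = None       # e.g. "nod", "shake"
    shoulder_movement: Optional[str] = None

@dataclass
class Sign:
    gloss: str                                # lexical label, e.g. "BOOK"
    manual: ManualSegment                     # parameters produced simultaneously
    non_manual: NonManualMarkers = field(default_factory=NonManualMarkers)

# Unlike a spoken word's linear string of phonemes, a sign is a simultaneous
# bundle of parameters:
book = Sign("BOOK", ManualSegment("flat-B", "neutral space", "opening", "up"))
```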

The fragmented presence of systematic scientific analyses of the issue discussed here, especially on a national scale, and the persistent interest in the capacity of computer systems for teaching, learning, and communication through visual language motivated the undertaking of a theoretical study. This study involved a scientific dissection of some of the most impressive achievements (from a subjective perspective) in this field.

Methodology

The research priority is oriented towards multimodal sign language learning by a heterogeneous population of subjects. The subject of the theoretical study is the digital infrastructure, of which the resources of digital technologies and artificial intelligence form part, in the multimodal design of sign language learning. Consonant with the stated research intentions is the defined goal: the scientific observation of the achievements, challenges, and perspectives in the evolution of technologies equipped with multimodal interfaces for more efficient access to a linguistic reality realized in multimodal code dimensions.

The scientific inquiry involved the study of library documents and periodicals in relevant databases, shedding light on the status of the issue of creating a multimodal design in sign language (SL) education, with digital infrastructure as its foundation. The data were extracted from 34 scientific publications between April and June 2024. The collection and summarization of scientific facts were carried out with respect to the main linguistic operations and activities (recognition, representation, translation, teaching, learning) and the corresponding technological solutions aimed at optimizing SL education in a digital environment.

The triad of conventional research methods (general: analysis, systematization, synthesis; theoretical: generalization, systematic analysis; empirical: source/document review, description, comparison) formed the methodological backbone of the theoretical study.

Sign language identification and translation

Meta-analyses combining results from studies of hand gesture recognition and sign language identification have examined strategies for using instrumented gloves that provide accurate hand position and finger configuration data (Moher et al. 2009). To be operationalized, the systems designed this way require their users to wear special equipment; people, however, prefer systems they can operate in natural conditions. The mid-1990s were marked by improvements in camera hardware that enabled the recognition of manual signs in real time (Pradhan et al. 2008). Instrumented gloves have since been replaced by vision-based systems limited to one or more cameras connected to the computing device. In addition to a user-friendly working environment, these inventions provide the ability to detect and segment the configuration of the hand and fingers and to deal with occlusions[2]. Initial studies (Starner & Pentland 1996) report systems with a limited recognition lexicon – about 40 – 50 visual-spatial symbols and associated sentences – with limited sign composition (personal pronoun, verb, noun, and adjective). Ongoing research (Vogler & Metaxas 1998; Wu & Huang 2001) is moving towards identifying visual phonemes – the structural segments of a sign – which implies codifying a larger number of linguistic facts and generating a richer vocabulary of signs for recognition.
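Since the early vision-based recognizers cited above (Starner & Pentland 1996) relied on hidden Markov models, a minimal sketch of that idea may help: one HMM is trained per sign on per-frame hand features, and an observed sequence is assigned to the sign whose model scores it highest. The feature extraction step is assumed to happen elsewhere; the glosses and data below are placeholders, and the hmmlearn library stands in for the authors’ implementations.

```python
# Minimal HMM-based isolated-sign recognition sketch (not the cited systems).
import numpy as np
from hmmlearn import hmm  # pip install hmmlearn

def train_sign_models(training_data):
    """training_data: dict mapping gloss -> list of (T_i, D) feature sequences."""
    models = {}
    for gloss, sequences in training_data.items():
        X = np.vstack(sequences)               # all frames stacked together
        lengths = [len(s) for s in sequences]  # frame count per example
        m = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[gloss] = m
    return models

def recognize(models, sequence):
    """Return the gloss whose HMM gives the observation sequence the highest score."""
    return max(models, key=lambda g: models[g].score(sequence))

# Toy usage with random 8-dimensional frame features.
rng = np.random.default_rng(0)
data = {g: [rng.normal(loc=i, size=(20, 8)) for _ in range(5)]
        for i, g in enumerate(["HELLO", "THANK-YOU", "BOOK"])}
models = train_sign_models(data)
print(recognize(models, rng.normal(loc=1.0, size=(20, 8))))  # expected: THANK-YOU
```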

Another challenge in sign language that requires creative solutions in the scientific discourse is the incorporation of non-manual components, which are an integral part of the structure of signs. It is important to note that systems equipped with instrumented gloves, although they have a solid database of 5,119 signs and an average recognition accuracy of 91.9% (Fang, Gao & Zhao 2007), focus solely on manual sign analysis; without incorporating non-manual markers, however, it is impossible to fully interpret the meanings of these signs. A review of the literature reveals a limited number of studies focusing on the integration of manual and non-manual signals for visual recognition (Ong & Ranganath 2005). Modern multimodal systems are capable of integrating signs with lip movements, facial expressions (Ming & Ranganath 2002), and head movements (Dreuw et al. 2008), although recognition of the sign in its entirety remains imperfect.

This points to the search for the ideal solution: an automatic sign language recognition system that meets the following requirements (Aran et al. 2009): (1) to identify with a high level of accuracy a wide range of lexical items included in an unbounded set of sentences; (2) to operate in real time; (3) to be robust to different environmental conditions (lighting, acoustic components, etc.); (4) to process in parallel manual and non-manual signals as well as segments of the morphological and syntactic subsystems of the SL.

Contemporary digital practice offers numerous potential areas of application for sign language recognition systems, with a primary focus on converting signs into written messages. The technologies discussed contain projections that enable human-computer interaction and access to public information through translation and dialogue devices, in order to achieve equal communication opportunities for all people.

Some of the publications analyzed provide data on various types of smartphone applications and their efficiency. In this context, Zhou et al. (2022) turned their idea of a platform for recognizing Hong Kong Sign Language (HKSL) into reality. For the project, they generated a set of linguistic units for HKSL. The front end of the platform is a mobile application that pre-processes video with sign encoding, after which a Jetson Nano[3] (Cass 2020) translates the visual language into spoken language based on a pre-trained deep learning[4] model. The method clearly requires further refinement, particularly as the translation currently operates only at the word level and involves a limited number of lexical units. Another research team (Ku et al. 2019) conducted a study using a smartphone camera with two photo sensors to capture the sign production of participants in the experiment. The built-in OpenPose system evaluates and extracts data on skeletal hand movement, while a convolutional neural network (CNN) model decodes and interprets the meaning of the visual language data. One of the drawbacks of the application is the lack of real-time operation. In turn, another team of researchers (Oyeniran et al. 2020), whose focus falls on Indian Sign Language (ISL), proposes a smartphone application built from three modules. The sound classification module detects and categorizes the incoming acoustic signal while simultaneously alerting the user through vibration stimuli. The second module identifies ISL’s video-recorded linguistic data and converts it into spoken linguistic code, and the third module converts the sign message into a spoken one in various Indian regional languages or converts spoken speech into sign text. The high sensitivity of the sound classification module in extremely noisy environments is pointed out as a limitation of the invention. Also noteworthy are research texts on taxonomic models of various technologies (Lee & Lee 2014) that can be installed as smartphone applications and facilitate communication between hearing and D/deaf people: Augmentative and Alternative Communication (AAC), Text-To-Speech (TTS), Speech-To-Text (STT), and Human Motion Recognition (HMR) systems.
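As an illustration of the pose-plus-CNN pipeline reported by Ku et al. (2019), the hedged sketch below classifies a sequence of hand-skeleton keypoints (of the kind a pose estimator such as OpenPose produces) with a small convolutional network. The shapes and hyperparameters are our assumptions, not values from the study.

```python
# Sketch: classify a keypoint sequence with a small CNN (PyTorch).
import torch
import torch.nn as nn

class KeypointSignCNN(nn.Module):
    def __init__(self, n_keypoints=21, n_frames=32, n_classes=50):
        super().__init__()
        # Input: (batch, 2, n_frames, n_keypoints) -- x/y coordinates per
        # keypoint per frame, as extracted by an external pose estimator.
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # After two 2x poolings the (frames, keypoints) grid shrinks by 4 each way.
        self.classifier = nn.Linear(64 * (n_frames // 4) * (n_keypoints // 4),
                                    n_classes)

    def forward(self, x):
        h = self.features(x)
        return self.classifier(h.flatten(1))   # logits over sign classes

# Toy usage: a batch of 4 clips, 32 frames, 21 hand keypoints each.
model = KeypointSignCNN()
logits = model(torch.randn(4, 2, 32, 21))
print(logits.shape)  # torch.Size([4, 50])
```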

Teaching and learning sign language

The practice of teaching and learning natural language can be greatly improved through verification and the availability of feedback (Aran et al. 2009). The claim is valid for both vocal and visual languages. With respect to spoken languages, students are able to evaluate their articulation and make adjustments based on auditory control of their own speech. By analogy, SL teachers offer their students the mirror as a natural means of visual feedback. The boom of digital innovations, however, has opened a new way for students to check and evaluate the visual-kinetic signs they perform, through a multimodal system its creators call Sign Tutor (Aran et al. 2009). The team of specialists identified it as a reliable tool to promote the learning of SL, predominantly by hearing students studying it as a second language (L2). Sign Tutor is an interactive system for teaching learners the basics of the language. Its main advantage is considered to be its programmed capability for automatic evaluation of the produced sign through the visual feedback it generates and the information it provides about the quality and accuracy of the presented sign. Observing and learning new signs, practicing them, and exercising control over their presentation is another positive aspect resulting from the interactive nature of the system. The option to communicate the result through different feedback modalities (text message, recorded video of the user, video of the segmented hands, and/or animation of an avatar) makes it a technological product with high added value. The uniqueness of the system lies in the embedded electronic elements for integrating the individual spatial-kinetic characteristics of the sign, including the non-manual marker of head movements – a constituent unit of the prosodic system of the SL. Linguistic head movements produced in synchrony with signs are an attribute of visual languages that presents a particular challenge for most students. In a study by Aran et al. (2009), the usability of the multimodal system was evaluated with the participation of students in an entry-level Turkish Sign Language (TSL) course. The performance test data showed 99% recognition of signs involving only a manual component and 85% recognition of signs with an incorporated non-manual component (head movements, facial expression). The researchers conclude that the multimodal application secures a flexible and easily accessible environment that promotes sign language learning.
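The kind of automatic evaluation Sign Tutor performs can be illustrated with a simple, hypothetical scheme: align the learner’s feature trajectory against a teacher reference with dynamic time warping and turn the alignment cost into accept/repeat feedback. This is not the authors’ algorithm, only a sketch of the feedback idea.

```python
# Hypothetical sign-evaluation feedback via dynamic time warping (DTW).
import numpy as np

def dtw_distance(a, b):
    """Length-normalized DTW between two (T, D) sequences, Euclidean frame cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)

def feedback(learner_seq, reference_seq, threshold=0.5):
    # The threshold is an arbitrary illustrative value.
    score = dtw_distance(learner_seq, reference_seq)
    return "sign accepted" if score < threshold else "please repeat the movement"

rng = np.random.default_rng(1)
ref = rng.normal(size=(30, 4))                       # teacher reference features
print(feedback(ref + 0.05 * rng.normal(size=(30, 4)), ref))  # close -> accepted
```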

Sign language recognition is at the core of many applications developed with a focus on optimizing sign language education. Representatives of the scientific and cultural community of the Hellenic Republic share the results of the successful SL-ReDu project, whose scientific goal concerns the teaching and learning of Greek Sign Language (GSL) as an L2 (Papadimitriou et al. 2023). The pedagogical experiment was accompanied by self-monitoring and by subjective and objective evaluation of the participating students, who formed a random sample of 150 participants. Also notable were the module introduced in the prototype system (based on deep learning) for visual detection of isolated GSL signs, and the HRNet framework for skeleton detection of the communicator’s body, hands, and face in 2D and 3D format. Seeking creative solutions to overcome learning difficulties for deaf children, Joy et al. (2019) propose SiLearn, a mobile application that functions as a visual dictionary. The embedded modules identify physical objects and their written labels and transform them into signs. Quantitative analyses obtained from tests with 28 deaf students indicate a high rate of acquisition of visual vocabulary units. Improvements remain to be made in expanding the vocabulary, currently limited to 950 signs, and in the capacity to store the animated signing videos on the mobile device so as to avoid delays in loading them.

The subject of dissertation research by de Villiers (2014) is a vision-based South African Sign Language (SASL) learning system capable of generating detailed, context-sensitive feedback for the user. The developed software stands in contrast to existing SL learning systems, which are unable to provide such a service. The feedback, designed with the user’s experience in mind, automatically guides corrections, requiring minimal effort on the user’s part. Additionally, a feature has been introduced that allows the feedback to take the form of a task list (de Villiers 2014).

Another research project (Ackovska, Kostoska & Gjuroski 2012) is dedicated to an interactive e-learning platform for Macedonian Sign Language (MSL): a collection of games and modules designed to optimize language learning and improve mental capacity and memory characteristics in deaf children. The central part of the application consists of 3D animations of a child presenting a fingerspelled sign, a word, or the sign for an object chosen by the user. The user can rotate the animation in all directions, allowing different perspectives for a complete perception of the manual symbols.

Academic advances in the virtual representation of sign language continue to reflect the multidisciplinary nature of the applied field, combining advances in the theory and practice of graphical representation of virtual humans with language processing in a visual modality. This finds expression in a publication (Kennaway & Glauert 2008) whose data, from a large-scale research project approbating multimodal technology for modelling and processing linguistic material and for avatar-based representation of sign language (involving representatives from three countries: Germany, the UK, and the Netherlands), constitute an important reference for: (1) uncovering objective possibilities for using purpose-built avatars to generate a decodable real-time presentation of sign language driven by phonetic-level descriptions via HamNoSys or SiGML; (2) confirming that high-level linguistic analysis and HPSG-based sign language modelling techniques are foundational for the semi-automatic generation of high-quality translations from English (or another spoken language) into sign language – German Sign Language (DSL), British Sign Language (BSL), Dutch Sign Language (NSL) – while creating phonetic-level presentations that can be performed by a sign language avatar; (3) finding evidence that a phonetic-level interface to a sign language avatar is also reliable support for creating sign language content for websites and other applications through the use of more accessible sign lexicons and grammatical structures.

Many studies (Kipp et al. 2011; Smith et al. 2010; Smith 2014; Naert et al. 2020) have focused on the application of modern avatars to represent sign language in order to give D/deaf people easier access to written texts on websites. Their creation faces multiple challenges, ranging from content exposition (as sign languages have no universal written system) to the implementation of easy-to-interpret animations. A sign language is essentially a multi-channel language in which the hands, shoulders, face, and body must act in synchrony at different levels. Modern avatars with state-of-the-art features reach intelligibility levels of 58 – 71%. Future research will undoubtedly address the prosodic component in order to achieve optimal levels of decoding and interpretation by humans operating with a visual-spatial-kinetic code.
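The multi-channel synchrony just described can be made tangible with a small illustrative structure (all names are hypothetical; real systems use richer notations such as SiGML): a signed utterance as parallel, time-aligned tracks that an avatar renderer could consume.

```python
# Illustrative multi-channel timeline for a signed utterance (names hypothetical).
from dataclasses import dataclass

@dataclass
class TrackEvent:
    start: float   # seconds from utterance onset
    end: float
    value: str     # symbolic posture/expression label

utterance = {
    "right_hand": [TrackEvent(0.0, 0.6, "flat-hand at chin"),
                   TrackEvent(0.6, 1.2, "index-hand forward arc")],
    "face":       [TrackEvent(0.0, 1.2, "raised eyebrows")],  # question marking
    "head":       [TrackEvent(0.8, 1.2, "forward tilt")],
}

def channels_active_at(t, utt):
    """List which channels articulate something at time t (a synchrony check)."""
    return {ch: ev.value for ch, evs in utt.items()
            for ev in evs if ev.start <= t < ev.end}

print(channels_active_at(0.9, utterance))  # hands, face, and head act together
```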

The collection of examples showcasing the use of ICT and AI resources is expanded by a study conducted by an American team (Paudyal et al. 2019), which tested a smartphone application providing feedback to users about the parameters (location, movement, orientation, hand configuration) of the signs they produced. The experiment generated a database of codified linguistic data collected from 100 trained individuals for 25 visual symbols belonging to American Sign Language (ASL). Of particular interest is the virtual environment created by specialists for learning ASL through a headset equipped with a Leap Motion[5] sensor (Shioppo et al. 2019). The development, testing, and evaluation of the system were conducted on the 26 signs of the ASL fingerspelling alphabet. The use of the Sanbot Elf robot to facilitate the learning of sign language by deaf individuals is also among the technological solutions explored by researchers (Luccio et al. 2020). They constructed two smartphone and tablet applications: one to control the robot’s movements and the other to receive spoken or written words/sentences as input, translate them into a spatial-kinetic code, and present them in video format.

Attention is also warranted by a research project (Vijitkunsawat et al. 2023) introducing an innovation aimed at facilitating information exchange between deaf and hearing individuals, which also serves as an excellent tool for creating a learning environment for mastering Thai Sign Language (TSL). The experiment encouraged students to independently select their preferred lexical units and perform exercises using animation.

The team effort of researchers (Bansal et al. 2021) led to the design of the remarkable game CopyCat, intended for deaf children raised in hearing families, who lack consistent access to ASL and as a result have a lower working memory capacity than hearing children and than deaf peers raised by deaf parents. The technology is equipped with a high-resolution camera and pose estimation software. Data from its testing support the thesis that it successfully stimulates language development and increases working memory capacity.

The advancement of computing power has encouraged research (Quandt et al. 2020) on the design of an avatar functioning as a teacher in a virtual environment, teaching the basics of ASL to beginners. Users have visual access to a digital presentation of their hands via Leap Motion. However, the system is unable to capture signs that involve touching specific parts of the body, which is identified as its weak point. Nevertheless, this does not diminish the authors’ contribution to enriching the scientific kaleidoscope.

Synergistic Nexus: Digital Technologies and Artificial Intelligence – Sign Language Education – Digital Competence

It is evident that sign language education and the related pedagogical achievements can no longer revolve solely around a single-mode domain existing in analogue form. Consequently, the recontextualization of literacy and competence involves considering multiple modes of meaning-making through the use of digital tools.

Research on digital competence, recognized as a key skill of the early 21st century, has primarily focused on creating theoretical models, competence frameworks, and research tools for competence assessment (Martin 2005; Martin 2006; Helsper & Eynon 2013).

Despite numerous studies on digital literacy, digital skills, information literacy, computer literacy, ICT literacy, media literacy, e-literacy, ICT competence, and digital competence, a systematic analysis of the scientific literature on the topic (in the Scopus, Web of Science, and ERIC databases) conducted before 2020 shows no consensus on the definitions. A long path remains to agreement on their content and structure, which necessitates exploring ways to clarify the issue at a conceptual level.

As a starting point for revealing the content of digital competence – a crucial component of the multi-layered portfolio of sign language specialists – the published guidelines on teacher competences in sign language education (Teacher Competences for Sign Languages in Education) serve as a valuable resource. These guidelines are the result of the joint work of an international team of linguists, sign language teachers, and members of various European deaf communities. The document, an outcome of the ProSign (Promoting Excellence in Sign Language Instruction) project successfully completed in 2019, presents a taxonomic model of competences in eight domains, among which digital competence is assigned a proper place (Bleichenbacher et al. 2019).

A review of the research indicates that most documented data concern studies assessing digital competence (the level achieved by students and pupils) rather than its development and design within an educational context (Sánchez-Caballé et al. 2020). Analogously to linguistic and communicative competence, digital competence in the context of sign languages is considered transversal – transferable across activities, ages, and subjects (Bleichenbacher et al. 2019). Therefore, knowledge, skills, and attitudes/behaviors in the sphere of digital technologies and artificial intelligence, which interact with the new (alternative) linguistic reality, play a dominant role in expanding the range of pedagogical competence; enhancing collaboration with other educators and stakeholders; and achieving higher educational standards and personal prosperity for teachers. The interaction of different discourses within the multimodal discourse, based on the semiotic foundation of sign languages, generates opportunities for multifaceted representation of events. Generating multimodal messages with various semiotic tools requires teachers to possess skills in creating and editing online platforms and websites accessible to students; producing still and dynamic images; discussing hidden and explicit messages; and filming and editing materials. These characteristics represent only a small part of the spectrum of digital means of expression.

The continuous refocusing of research that began in the past decade has enhanced digital competence and filled the concept with new content (Ala-Mutka 2011). This new layer comprises a broader and more complex set of knowledge, skills, attitudes, and relationships, and includes a specific way of critical thinking about coding, decoding, and recoding information, tied to continuous adaptation to expanding technological capabilities and their projections, ultimately reaching (Janssen et al. 2013) the level of expert digital competence, or at least a vision of it. The conceptual understandings of this group of authors fit appropriately into the contemporary scientific view of sign languages, which is undergoing a radical transformation expressed in a multidisciplinary approach to their functioning and operationalization.

In a classical definition that bridges to the future of sign language development, digital competence is understood as “the combination of knowledge, skills, attitudes (thus including abilities, strategies, values, and awareness) required when using ICT and digital media to perform tasks and solve problems; for communication; information management; collaboration; content creation and sharing” (Ferrari 2012).

Other research examines the degree of correlation between digital competence and concepts such as computational thinking (Juškeviciene & Dagiene 2018) or security awareness (Nyikes 2018). Some studies explore teaching practices related to digital competence (Napal Fraile et al. 2018; Rolf et al. 2019; Morellato 2014) or develop tools for assessing teachers’ digital technology competencies (Cantabrana et al. 2019). The transversal competencies related to digital technologies and artificial intelligence in these dimensions are also relevant to individuals operating with the semiotic code of sign language and can be illustrated with examples such as: (1) integrating applications for digital sign language learning (learning software, textbooks, digital online dictionaries, encyclopedias, etc.); (2) considering aspects of Internet security (promoting the effective, informed, and critical use of tools such as digital forums; websites that allow users to jointly edit their content and structure; co-creation of written texts; and sharing of texts, audio, and video files via the Internet or a computer network, email, videophone, etc.).

A variety of research also addresses the assessment and self-assessment of digital competencies by students and teachers (Lasić-Lazić et al. 2017; Kuzminska et al. 2018). Often, in studies of digital competence and its decomposition, such as Instefjord (2015), the emphasis is placed on critical thinking skills as key, with continuous stress on the need for critical and reflective use of technology in building new knowledge. As a result of expanding research pursuits, new dimensions have been added to the classical definitions of digital competence, giving it a broader focus that seeks the added value and significance of digital knowledge and skills for social engagement in society as a whole (Instefjord 2015) and specifically for the linguistic and cultural community of Deaf people (Skyer 2022).

Conclusion

The idea for the theoretical research conducted was inspired by the enduring interest in the unlimited potential of digital technologies and artificial intelligence to structure a design for sign language education that combines various modes of interaction. The primary goal of the research was to present identified, classified, and generalized data on the wide range of systems, algorithms, and operations in the ever-evolving digital reality within the field of sign language. The proposed written evidence highlights significant technological solutions for recognizing, presenting, and translating sign language codes and for building a comprehensive AI-based linguistic system. In this whirlwind of information, a special place is reserved for innovative sign language applications that optimize the process of teaching and learning, facilitate communication between hearing and deaf people, and promote their social inclusion by blurring the boundaries between populations.

All the inventions, rationalizations, and discoveries shaping the current discourse have demonstrated interesting and compelling ways to advance applied research ethically, verifying the utility of digital technologies and artificial intelligence in the multimodal education of deaf and hearing people. Given their exponentially increasing role in human life, the study of these ways is not only justified but also a necessary condition for the well-being of D/deaf people and their prosperity in life.

 

Acknowledgments & Funding

This study is financed by the European Union – NextGenerationEU, within the framework of the National Recovery and Resilience Plan of the Republic of Bulgaria, first pillar “Innovative Bulgaria”, through the Bulgarian Ministry of Education and Science (MES), Project № BG-RRP-2.004-0006-C02 “Development of research and innovation at Trakia University in service of health and sustainable well-being”, subproject “Digital technologies and artificial intelligence for multimodal learning – a transgressive educational perspective for pedagogical specialists”, № Н001-2023.47/23.01.2024.

 

NOTES

[1]. Digital Education Action Plan (2021 – 2027). Resetting education and training for the digital age. Education and Training. European Union, 2020. https://ec.europa.eu/education/sites/education/files/document-library-docs/deap-communication-sept2020_en.pdf

[2]. The term “occlusion” is used to refer to the obstruction of the visibility of one hand element by another during the composition of a gesture.

[3]. Jetson Nano is a small AI computer that provides the performance and energy efficiency needed for AI workloads, enabling the management of multiple neural networks in parallel and the simultaneous processing of data from several high-resolution sensors.

[4]. The term Deep Learning refers to a subset of machine learning methods based on neural networks. The term “Deep” relates to the use of multiple layers within the network. These methods can be supervised, semi-supervised, or unsupervised.

[5]. Leap Motion is an optical hand-tracking module that captures hand movements with high precision, making interaction with digital content natural and easy.

 

REFERENCES
ACKOVSKA, N.; KOSTOSKA, M. & GJUROSKI, M., 2012. Sign Language Tutor – Digital improvement for people who are deaf and hard of hearing. ICT Innovations Conference Web Proceedings, pp. 103 – 112. https://proceedings.ictinnovations.org/2012/paper/42/sign-language-tutor–digital-improvement-for-people-who-are-deaf-and-hard-of-hearing
ALA-MUTKA, K., 2011. Mapping Digital Competence: Towards a Conceptual Understanding. Luxembourg: European Commission. Joint Research Centre, Institute for Prospective Technological Studies.
ARAN, O.; ARI, I.; BENOIT, A.; CAMPR, P. et al., 2009. Sign Tutor: An Interactive System for Sign Language Tutoring. IEEE MultiMedia, vol. 16, no. 1, pp. 81 – 93. https://academics.boun.edu.tr/bulent.sankur/sites/bulent.sankur/files/inline-files/Jour_Aran_Sign%20Tutor_IEEE-MM.pdf
BANSAL, D., RAVI, P., SO, M., AGRAWAL, P., CHANDHA, I., MURUGAPPAN, G., DUKE, C., 2021. CopyCat: Using Sign Language Recognition to Help Deaf Children Acquire Language Skills. In Proceedings of the Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, Virtual, 8 – 13 May 2021, pp. 1 – 10. https://www.researchgate.net/publication/351426391.
BLEICHENBACHER, L.; GOULLIER, F. et al., 2019. Teacher competences for languages in education: Conclusions of the project. European Centre for Modern Languages (ECML), Council of Europe https://www.ecml.at/Portals/1/5MTP/Bleichenbacher/CEFRLT-conclusions-EN.pdf?ver=2019-11-29-150323-533
LÁZARO-CANTABRANA, J.; USART-RODRÍGUEZ, M. & GISBERT-CERVERA, M., 2019. Assessing Teacher Digital Competence: the Construction of an Instrument for Measuring the Knowledge of Pre-Service Teachers. Journal of New Approaches in Educational Research (NAER Journal), vol. 8, no.1, pp.73 – 78. https://naerjournal.com/rt/captureCite/370
CASS, S., 2020. Nvidia makes it easy to embed AI: The Jetson nano packs a lot of machine-learning power into DIY projects-[Hands on]. IEEE Spectrum, vol. 57, no. 7, pp. 14 – 16. https://ieeexplore.ieee.org/document/9126102
DE VILLIERS, H. A. C., 2014. A Vision-based South African Sign Language Tutor. Dissertation: PhD. of Philosophy in the Faculty of Engineering at Stellenbosch University. https://core.ac.uk/download/pdf/37421343.pdf
DREUW, P.; NEIDLE, C.; ATHITSOS, V.; SCLAROFF, S. & NEY, H., 2008. Benchmark databases for video-based automatic sign language recognition. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC’08), Marrakech, Morocco. European Language Resources Association (ELRA), pp. 1115 – 1120. http://www.lrec-conf.org/proceedings/lrec2008/pdf/287_paper.pdf
FANG, G., GAO, W. & ZHAO, D., 2007. Large-vocabulary continuous sign language recognition based on transition-movement models. IEEE Transactions on Systems, Man and Cybernetics, Part A, vol. 37, no. 1, pp. 1 – 9. https://www.researchgate.net/publication/3412599
FERRARI, A., 2012. DIGCOMP: A Framework for Developing and Understanding Digital Competence in Europe. Luxembourg: Publications Office of the European Union.
HELSPER, E., EYNON. R., 2013. Pathways to digital literacy and engagement. European journal of communication, vol. 28, no. 6, pp. 696 – 713.
INSTEFJORD, E., 2015. Appropriation of Digital Competence in Teacher Education. Nordic Journal of Digital Literacy, vol. 9, no. 04, pp. 313 – 329.
JANSSEN, J., STOYANOV, S., FERRARI, A., PUNIE, Y., PANNEKEET, K., SLOEP, P., 2013. Experts’ views on digital competence: Commonalities and differences. Computers & Education, vol. 68, pp. 473 – 481. DOI: https://doi.org/10.1016/j.compedu.2013.06.008
JOY, J., KANNAN, B., & SREERAJ, M., 2019. SiLearn: an intelligent sign vocabulary learning tool. Journal of Enabling Technologies, vol. 13, no. 3, pp. 173 – 187. https://doi.org/10.1108/JET-03-2019-0014
JUŠKEVICIENE, A., DAGIENE. V., 2018. Computational Thinking Relationship with Digital Competence. Informatics in Education, vol. 17, no. 2, pp. 265 – 284.
KENNAWAY, R. & GLAUERT, J., 2008. Linguistic modelling and language-processing technologies for Avatar-based sign language presentation. Universal Access in the Information Society, vol. 6, no. 4, pp. 375 – 391. https://www.researchgate.net/publication/220606812_Linguistic_modelling_and_language-processing_technologies_for_Avatar-based_sign_language_presentation
KIPP, M.; HÉLOIR, A. & NGUYEN, Q., 2011. Sign Language Avatars: Animation and Comprehensibility. 11th International Conference, IVA 2011, Reykjavik, Iceland, September 15 – 17, 2011, Proceedings. https://link.springer.com/chapter/10.1007/978-3-642-23974-8_13
KRESS, G. R., 2000. Multimodality: Challenges to Thinking About Language. TESOL Quarterly, vol. 34, no. 2, pp. 337 – 340. https://www.academia.edu/38569648/Multimodality_Challenges_to_Thinking_about_Language
KU, Y. J.; CHEN, M. J. & KING, C. T., 2019. A Virtual Sign Language Translator on Smartphones. In Proceedings of the IEEE 2019 Seventh International Symposium on Computing and Networking Workshops (CANDARW), pp. 445 – 449. https://www.semanticscholar.org/paper/
KUZMINSKA, O.; MAZORCHUK, M.; MORZE, N.; PAVLENKO, V. & PROKHOROV, A., 2018. Study of Digital Competence of the Students and Teachers in Ukraine. International Conference on Information and Communication Technologies in Education, Research, and Industrial Applications, pp. 148 – 169. Springer, Cham.
LASIĆ-LAZIĆ, J., PAVLINA, K. & PAVLINA, A. P., 2017. Digital Competence of Future Teachers. European Conference on Information Literacy, pp. 340 – 347. Springer, Cham.
LEE, J. & LEE, H., 2014. Developing and validating a citizen-centric typology for smart city services. Government Information Quarterly, vol. 31, pp. 93 – 105. DOI: https://doi.org/10.1016/j.giq.2014.01.010
LIDDELL, S. K., 2003. Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge University Press. Cited in: ARAN, O. et al., 2009. Sign Tutor: An Interactive System for Sign Language Tutoring. IEEE MultiMedia, vol. 16, no. 1, pp. 81 – 93. https://academics.boun.edu.tr/bulent.sankur/sites/bulent.sankur/files/inline-files/Jour_Aran_Sign%20Tutor_IEEE-MM.pdf
LUCCIO, F. L., GASPARI, D., 2020. Learning Sign Language from a Sanbot Robot. In Proceedings of the 6th EAI International Conference on Smart Objects and Technologies for Social Good, pp. 138–143. https://doi.org/10.1145/3411170.3411252
MARTIN, A., 2005. DigEuLit – a European framework for digital literacy: a progress report. Journal of eLiteracy, vol. 2, no. 2, pp. 130 – 136.
MARTIN, A., 2006. A European framework for digital literacy. Nordic Journal of Digital Literacy, vol. 1, no. 2, pp. 151 – 159.
MING, K. W. & RANGANATH, S., 2002. Representations for facial expressions. Cited in: ONG, S. C. W. & RANGANATH, S., 2005. Automatic sign language analysis: A survey and the future beyond lexical meaning. IEEE Transactions on PAMI, vol. 27, no. 6, pp. 873 – 891. https://www.researchgate.net/publication/7799837.
MOHER, D.; LIBERATI, A.; TETZLAFF, J. & ALTMAN, D. G., 2009. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Open Med, vol. 3, no. 2, pp. 123 – 130. https://www.researchgate.net/publication/51156625
MORELLATO, M., 2014. Digital competence in tourism education: Cooperative-experiential learning. Journal of Teaching in Travel & Tourism, vol.14, no. 2, pp.184 –  209.
NAERT, L.; LARBOULETTE, C. & GIBET, S., 2020. A survey on the animation of signing avatars: From sign representation to utterance synthesis. Computers and Graphics, vol. 92, pp. 76 – 98. https://hal.science/hal-03005762/document.
NAPAL FRAILE, M.; PEÑALVA-VÉLEZ, A. & MENDIÓROZ LACAMBRA, A. M., 2018. Development of Digital Competence in Secondary Education Teachers’ Training. Education Sciences, vol. 8, no. 3, pp. 104 – 116.
NYIKES, Z., 2018. Digital competence and the safety awareness base on the assessments results of the Middle East-European generations. Procedia Manufacturing, vol. 22, pp. 916 – 922.
ONG, S. C. W. & RANGANATH, S., 2005. Automatic sign language analysis: A survey and the future beyond lexical meaning, IEEE Transactions on PAMI, vol. 27, no. 6, pp. 873 – 891. https://www.researchgate.net/publication/7799837
OYENIRAN, A.; OYENIYI, O. et al., 2020. Review of the application of artificial intelligence in Sign language recognition system. International Journal of Engineering and Artificial Intelligence, vol. 1, no. 4, pp. 29 – 34. https://www.researchgate.net/publication/343917663.
PAPADIMITRIOU, K.; POTAMIANOS, G.; SAPOUNTZAKI, G. et al. 2023. Greek sign language recognition for an education platform. Univ Access Inf Soc. ISSN 1615-5297. https://doi.org/10.1007/s10209-023-01017-7.
PAUDYAL, P., LEE, J., KAMZIN, A., SOUDKI, M., BANERJEE, A., & GUPTA, S., 2019. Learn2Sign: Explainable AI for sign language learning. CEUR Workshop Proceedings, 2327. https://ceur-ws.org/Vol-2327/IUI19WS-ExSS2019-13.pdf
PETERS, M. & JANDRIC, P., 2015. Philosophy of education in the age of digital reason. Review of Contemporary Philosophy, vol. 14, pp. 162 – 181. https://www.researchgate.net/publication/285207018
PRADHAN, G., PRABHAKARAN, B., LI, C., 2008. Hand-gesture computing for the hearing and speech impaired. IEEE MultiMed, vol. 15, no. 2, pp. 20 – 27. https://www.researchgate.net/publication/3339054
QUANDT, L. C.; LAMBERTON, J.; WILLIS, A. S.; WANG, J.; WEEKS, K.; KUBICEK, E. & MALZKUHN, M., 2020. Teaching ASL Signs using Signing Avatars and Immersive Learning in Virtual Reality. In Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, pp. 1 – 4. https://www.researchgate.net/publication/345391930
ROLF, E.; KNUTSSON, O.; RAMBERG, R., 2019. An analysis of digital competence as expressed in design patterns for technology use in teaching. British Journal of Educational Technology, vol.50, no. 6, pp. 3361–3375. https://www.researchgate.net/publication/330704280_An_analysis_of_digital_competence_as_expressed_in_design_patterns_for_technology_use_in_teaching
SÁNCHEZ-CABALLÉ, A., GISBERT-CERVERA, M., ESTEVE-MON, FR., 2020. The digital competence of university students: a systematic literature review. Aloma: Revista de Psicologia. vol. 38, no. 1, pp. 63 – 74, http://www.revistaaloma.net/index.php/aloma/article/view/388.
SHIOPPO, J., MEYER, Z., FABIANO, D., CANAVAN, S., 2019. Sign Language Recognition: Learning American Sign Language in a Virtual Environment. In Proceedings of the Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4 – 9 May 2019; pp. 1 – 6. https://www.researchgate.net/publication/332770587.
SKYER, M., 2022. Power in Deaf Pedagogy and Curriculum Design: Multimodality in the Digital Environments of Deaf Education (DE2). Arts & Cultural Studies Review, vol. 3, no. 53, pp. 345 – 381. https://ejournals.eu/en/journal/przeglad-kulturoznawczy/article/power-in-deaf-pedagogy-and-curriculum-design-multimodality-in-the-digital-environments-of-deaf-education-de2.
SMITH, R.; MORRISSEY, S. & SOMERS, H., 2010. HCI for the Deaf community: Developing human-like avatars for sign language synthesis. iHCI 4th Irish Human Computer Interaction Conference Proceedings, pp. 1 – 4. https://www.researchgate.net/publication/48418375
SMITH, R., 2014. The role of emotion and facial expression in synthesised sign language avatars. Dissertation for the award of Master of Science in Computing. https://www.researchgate.net/publication/299549995.
STARNER, T. & PENTLAND, A., 1996. Realtime American sign language recognition from video using hidden Markov models. International Symposium on Computer Vision Proceedings, pp. 109 – 116. https://cdn.aaai.org/Symposia/Fall/1996/FS-96-05/FS96-05-017.pdf.
STOKOE, W. C., 2005. Sign Language Structure: An outline of the visual communication systems of the American deaf. The Journal of Deaf Studies and Deaf Education, vol. 10, no. 1, pp. 3 – 37. https://doi.org/10.1093/deafed/eni001.
VIJITKUNSAWAT, W.; RACHARAK, T.; NGUYEN, C. & MINH, N., 2023. Video-Based Sign Language Digit Recognition for the Thai Language: A New Dataset and Method Comparisons. Proceedings of the 12th International Conference on Pattern Recognition Applications and Methods, pp. 775 – 782. https://www.scitepress.org/Papers/2023/116437/116437.pdf.
VOGLER, C. & METAXAS, D., 1998. ASL recognition based on a coupling between HMMs and 3D motion analysis, the Sixth International Conference on Computer Vision Proceedings, pp. 363 – 369. https://www.researchgate.net/publication/262395332.
WU Y. & HUANG, T. S., 2001. Hand modeling, analysis, and recognition for vision based human computer interaction. IEEE Signal Processing Magazine, vol. 18, no. 3, pp. 51 – 60. https://www.researchgate.net/publication/220610449.
ZHOU, Z.; TAM, V. & LAM, E., 2022. A Portable Sign Language Collection and Translation Platform with Smart Watches Using a BLSTM-Based Multi-Feature Framework. Micromachines, vol.13, no. 2, pp. 1 – 15. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8877205/.

 

Dr. Diyana Georgieva, Assoc. Prof.
ORCID ID: 0000-0001-5324-930X

Prof. Dr. Nikolay Tsankov, DSc.
ORCID ID: 0000-0002-3206-8144

Trakia University
Stara Zagora, Bulgaria

E-mail: diana.georgieva@trakia-uni.bg

nikolay.tsankov@trakia-uni.bg
