Original Article

International Journal of Fuzzy Logic and Intelligent Systems 2022; 22(4): 414-421

Published online December 25, 2022

https://doi.org/10.5391/IJFIS.2022.22.4.414

© The Korean Institute of Intelligent Systems

Authoring System of Mobile Tutorial Modules Based on Auditory Multimedia

Karim Q. Hussein

Department of Computer Science, College of Science, Mustansiriyah University, Baghdad, Iraq

Correspondence to:
Karim Q. Hussein (Karim.q.h@uomustansiriyah.edu.iq)

Received: March 25, 2022; Revised: August 27, 2022; Accepted: October 12, 2022

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted noncommercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

In this study, we consider auditory multimedia technologies as a mode of communication between a mobile device and blind students regarding instructional material. We propose an authoring system designed to create mobile tutorial modules (MTMs) for use by blind students. The system includes two major phases. In the first phase, the instructor submits the contents of a tutorial module in natural language to a server. The server saves the contents of the tutorial modules in a database and then creates a website based on PHP with MySQL; thus, the system produces a website containing all the material entered by the instructor. The second phase considers an Android environment with XML for the graphical user interface (GUI) and Java source code for the activities. An API library on the server organizes the tutorial content into a typical tutorial module. Because the application is designed for totally blind students, all the contents of the GUI are converted into corresponding text-to-speech data (audio files) using Android. Finally, the blind student receives the MTMs as sound output from the device. We invited several instructors to experimentally evaluate some sample MTMs, and the results show that the proposed approach was successful in this application. Additionally, an algorithm was applied to reduce the time required for multiple users to interact with the application, and text-to-speech was implemented on Android. The proposed approach was received positively by the participants and users.

Keywords: Authoring system, Mobile tutorial modules, Auditory multimedia, Text-to-speech

1. Aim of Research

In this study, we considered the following research aims.

  • We developed an authoring system to generate typical mobile tutorial modules (MTMs) for blind students. The system contains two phases. First, the instructor submits the desired instructional material using the system. Then, MTMs are constructed and presented to blind students via sound output by converting all GUI contents into corresponding audio data using Android.

  • Moreover, we conducted an experimental evaluation of sample MTMs with real instructors (experts) at a school for blind students to obtain feedback regarding the generated MTMs.

2. Related Work

We considered five selected related works.

i. Karim Q. Hussein, “Authoring System of Drill & Practice E-Learning Modules for Hearing Impaired Students” [1]

People with hearing impairments (HI) typically need suitable accommodations in educational settings. Teaching methodologies for HI students differ from those for hearing students. HI students should practice visual communication methods such as sign language to develop healthy communication skills. Teaching methodologies for HI students recommend demonstration and repetition with slow presentation of instructional material. The teacher should present lessons directly, face to face, without visual distractions. More reinforcement and encouragement should be provided to HI students, and fun and enjoyment should be strongly emphasized in e-lessons, in addition to continuous interaction between the teacher and HI students. Drill and practice (D&P) e-learning modules (eLMs) have been developed for selected topics such as mathematics. That study considered teaching mathematics to students with HI using eLMs, and the feedback from experts who tried the learning material was positive. The author used visual programming techniques for both the teacher interface project and the application for students with HI, both strongly enhanced with multimedia components.

ii. Drigas, Athanasios; Klukiankis, Layteris; Papayerasimay, Yunnis [11]. The authors considered blind students and how information and communication technology (ICT) could compensate for sight disability. The paper presented a multi-purpose e-environment offering educational and informative services in engineering education for students with visual impairments.

Although this e-environment can be used for activities that support online classes as well as continuing training and remote learning, it was developed to help visually impaired persons enhance their skills and to address their special personal and communication needs, mainly by redirecting information through other sensory routes. Special “assistive technology” was used, and the principles of “design for all” and “universal accessibility” were followed to produce a user-friendly and easily navigable environment for this group of students in engineering education.

The authors stressed the need to fully understand the requirements of people with disabilities or physical handicaps, as well as the most appropriate and suitable technologies. Such approaches should provide greater accessibility to various services, which is considered a significant social benefit.

iii. Jeong, Sang-Mok; Song, Ki-Sang [12], “A Community-Based Intelligent e-Learning System.”

With the widespread popularity of remote or online learning, the demand for e-learning based on information technology (IT) is gradually increasing, and effective e-learning systems can be comparable to in-person educational experiences. The authors developed a community-based intelligent e-learning system in which a teacher character equipped with face-to-face communication functions interacts with learners to recreate the experience of attending a class in person through a remote system. The system is expected to improve learners’ performance owing to its built-in function for precise diagnosis of and feedback on students’ performance. Intelligent learning is a promising approach owing to its ability to extend the role of computational systems. Because these methods allow computers not only to store and retrieve information but also to simulate the character, activities, and emotions of an instructor, their implementation poses a considerable challenge.

iv. Monika Podsiadło, Shweta Chahar [13]. The authors presented research on individuals with vision loss who use text-to-speech (TTS) technology in most of their interactions with devices. Their focus was on an Android screen reader designed as an accessibility support for visually impaired users. They reported a study of the TTS user experience provided by Google among users of screen readers for people with visual disabilities, along with perceptual experiments showing how different voice qualities may affect preference and likeability.

They concluded that lower-pitched, mature-sounding voices of either gender with neutral emotional expression were strongly preferred by visually impaired users. Further, their experimental evaluation indicated that different voices were needed for different applications.

v. Krzysztof Dobosz [14]. This work considered both visual and hearing disabilities, focusing on the needs of users with low vision on mobile devices. The rapid development of information technology has created barriers to mobile applications for people with visual impairments, and until a few years ago many authors considered this problem of digital exclusion. The work presented many technologies related to web applications based on the Web Accessibility Initiative (WAI), which provides web design principles for people who may have limited access to content. Users with visual or auditory impairments were considered the target users. Many of the guidelines are organized according to internationally fixed standards, and many of these standards can be transferred easily to the field of mobile applications. The work also focused on mobile devices for better accessibility of web technology.

3. Algorithms Used

Two algorithms have been used:

3.1 Load-Balancing Algorithm

  • The client issues a request from the UserBase, which represents a group of diverse clients in a specific region.

  • The Datacenter Controller receives the request and consults the load balancer, which stores status information for each virtual machine (VM) and checks whether a VM is available for the incoming Cloudlet.

  • If an available VM is found, the Datacenter Controller receives the VMid of that specific VM, assigns the Cloudlet to it, and changes its status to BUSY.

  • If no VM is available, the Cloudlet waits in the Datacenter Controller’s queue until a VM completes processing its current Cloudlet and becomes available again.

Figure 1 illustrates this algorithm, which is used to serve multiple concurrent users of the MTMs.
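To make the steps above concrete, the following is a minimal sketch of the availability check and queueing behavior described in the list. It is illustrative only; the class and method names (VmPool, assign, release) are assumptions rather than the implementation used in this work.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Minimal sketch of the load-balancing idea described above: assign each
// incoming request (Cloudlet) to an AVAILABLE VM, mark it BUSY, and queue
// requests when no VM is free.
public class VmPool {
    enum Status { AVAILABLE, BUSY }

    private final Map<Integer, Status> vmStatus = new HashMap<>();
    private final Queue<String> waitingCloudlets = new ArrayDeque<>();

    public VmPool(int vmCount) {
        for (int id = 0; id < vmCount; id++) {
            vmStatus.put(id, Status.AVAILABLE);
        }
    }

    // Returns the id of the VM the Cloudlet was assigned to, or -1 if it was queued.
    public synchronized int assign(String cloudletId) {
        for (Map.Entry<Integer, Status> e : vmStatus.entrySet()) {
            if (e.getValue() == Status.AVAILABLE) {
                e.setValue(Status.BUSY);       // mark the chosen VM busy
                return e.getKey();
            }
        }
        waitingCloudlets.add(cloudletId);       // no VM free: wait in the queue
        return -1;
    }

    // Called when a VM finishes its Cloudlet; hands it the next waiting request, if any.
    public synchronized void release(int vmId) {
        if (waitingCloudlets.isEmpty()) {
            vmStatus.put(vmId, Status.AVAILABLE);
        } else {
            String next = waitingCloudlets.poll();
            System.out.println("Cloudlet " + next + " dispatched to VM " + vmId);
        }
    }
}
```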

3.2 Text-To-Speech Algorithm

Text-to-speech (TTS) technology converts written input into spoken output by generating synthetic speech. There are several ways of performing speech synthesis.

  • Simple voice recording and playback.

  • Splitting of speech into 30–50 phonemes (basic linguistic units) and their re-assembly in a fluent speech pattern.

  • The use of approximately 400 diphones (splitting phrases at the center of the phonemes rather than at the transitions). Figure 2 shows the major technical stages of TTS [15].
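In the proposed system, the conversion of GUI text into audio relies on the Android text-to-speech engine. The following minimal sketch shows standard Android TextToSpeech API usage for speaking one piece of tutorial text; it illustrates the approach rather than reproducing the system’s exact code, and the spoken sentence is only an example.

```java
import android.app.Activity;
import android.os.Bundle;
import android.speech.tts.TextToSpeech;
import java.util.Locale;

// Minimal sketch: speak a piece of tutorial text once the TTS engine is ready.
public class SpeakActivity extends Activity implements TextToSpeech.OnInitListener {
    private TextToSpeech tts;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        tts = new TextToSpeech(this, this);   // the engine initializes asynchronously
    }

    @Override
    public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS) {
            tts.setLanguage(Locale.US);       // the MTMs in this study are in English
            tts.speak("Matter has three phases: solid, liquid, and gas.",
                      TextToSpeech.QUEUE_FLUSH, null, "mtm-intro");
        }
    }

    @Override
    protected void onDestroy() {
        if (tts != null) {
            tts.shutdown();                   // release the engine when the activity ends
        }
        super.onDestroy();
    }
}
```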

4. Authoring System

The proposed system includes two major projects, referred to as the instructor project and the student project, together with a server side that creates a database of the entered material to generate typical MTMs on the student’s mobile device.

Figure 3 explains the framework of the tutorial method. Tutorial lessons can be used when there is a need to present new concepts and information in a lesson.

Tutorial teaching has recently attracted considerable attention. Tutorials are generally provided to students after a regular lecture; this is a remedial teaching method that is individualized or given to a specific group of students.

The aim of tutorial or remedial teaching is to help students improve their cognitive and other academic abilities. The strategies of tutorial teaching are based on the following principles:

  • Remedial teaching.

  • Specific differences.

  • Presenting new instructional material or supporting instructional skills.

The proposed tutorial methodology is summarized in Figure 3.

Purpose: to present new instructional material or to promote the current instructional skills of the candidate.

Steps: introduction, tutorial section (presentation of information), question and answer (Q&A) session, and feedback.
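A tutorial module that follows these steps can be held in a small data structure on the client. The sketch below is an assumed in-memory representation; the class and field names are illustrative, not the schema actually used by the system.

```java
import java.util.List;

// Assumed in-memory form of one mobile tutorial module (MTM), mirroring the
// steps above: introduction, information presentation, Q&A, and feedback.
public class TutorialModule {
    public final String title;
    public final String introduction;
    public final List<String> informationSections;
    public final List<Question> questions;

    public TutorialModule(String title, String introduction,
                          List<String> informationSections, List<Question> questions) {
        this.title = title;
        this.introduction = introduction;
        this.informationSections = informationSections;
        this.questions = questions;
    }

    public static class Question {
        public final String prompt;
        public final List<String> choices;
        public final int correctIndex;
        public final String feedback;     // read aloud after the student answers

        public Question(String prompt, List<String> choices,
                        int correctIndex, String feedback) {
            this.prompt = prompt;
            this.choices = choices;
            this.correctIndex = correctIndex;
            this.feedback = feedback;
        }
    }
}
```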

Figure 4 presents the major block diagram of the proposed authoring system.

5. Major Parts of the Authoring System

The proposed system consists of the following parts.

  • i. The first user (the instructor) submits the information via a laptop.

  • ii. Server side: the instructor uses the website, which creates a database to manage and save the information submitted by the instructor.

  • iii. After submission is finished and the database is created, the reformatted information is sent to the client side (the blind student) in the form of an Android application.

  • iv. In the mobile application, three tasks are performed. First, the information is reconstructed according to a typical tutorial-module template to build the MTMs in the application. Second, the text-to-speech algorithm is applied to convert all contents of the application into auditory form. Third, the load-balancing algorithm is applied to manage time among multiple blind students (multiple clients).

  • v. Accordingly, two algorithms were applied to implement the authoring system: text-to-speech and load balancing. PHP and MySQL were also used to create the website, and the concepts and fundamentals of e-learning were applied with respect to the tutorial method.

Figure 4, which is the core of the paper, illustrates all of the previous items and their three major parts: the instructor side, the website side, and the blind-student side.
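On the client side, the content prepared by the instructor is retrieved from the server’s API library over HTTP. The sketch below shows one plausible way to fetch a module; the endpoint path and the use of JSON are assumptions for illustration, since the paper only states that a PHP/MySQL website with an API library serves the MTM content.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch of the client side fetching one tutorial module from the server API.
// The endpoint path and response format are assumptions, not the paper's API.
public class MtmClient {
    public static String fetchModule(String baseUrl, int moduleId) throws Exception {
        URL url = new URL(baseUrl + "/api/modules.php?id=" + moduleId);  // assumed endpoint
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");

        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
        } finally {
            conn.disconnect();
        }
        return body.toString();   // e.g., JSON to be mapped onto the tutorial template
    }
}
```

On Android, such a request would be issued off the main thread (for example, from a background executor) before the response is mapped onto the tutorial-module template described above.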

6. Summary of Authoring System

To summarize, this study developed an authoring system of mobile tutorial modules based on auditory multimedia, using auditory multimedia to connect a mobile device and a blind student as an assistive technology for blind pupils. The teacher enters a typical English-language tutorial module into a server, where the module content is stored. The server creates a PHP and MySQL website with the full content, and the server’s API library manages the instructional module material. Android converts the mobile GUI components into text-to-speech output (audio files) for the blind user, so the blind student hears the MTMs. Instructors tried the new MTMs, a time-balancing method was used to serve several users, and text-to-speech for Android was built. The experts who tried the MTMs were mostly pleased.

Key outcomes of the paper:

  • 1- The use of touch and double-click by the student makes the MTMs easy to use and navigate (see the sketch after this list).

  • 2- The time-balancing algorithm was applied successfully, reducing the waiting time when multiple users access the system.

  • 3- The text-to-speech algorithm was successfully implemented in the Android environment.
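Regarding the first outcome, a common way to implement touch-plus-double-click interaction on Android is a GestureDetector that reads an item aloud on a single tap and activates it on a double tap. The following sketch is a hedged illustration of that idea, not the paper’s actual code.

```java
import android.content.Context;
import android.speech.tts.TextToSpeech;
import android.view.GestureDetector;
import android.view.MotionEvent;
import android.view.View;
import android.widget.TextView;

// Sketch: a single tap reads a TextView aloud, a double tap selects/activates it.
public class SpokenItemTouchListener implements View.OnTouchListener {
    private final GestureDetector detector;

    public SpokenItemTouchListener(Context context, TextView item, TextToSpeech tts,
                                   Runnable onSelected) {
        detector = new GestureDetector(context, new GestureDetector.SimpleOnGestureListener() {
            @Override
            public boolean onSingleTapConfirmed(MotionEvent e) {
                // Read the item so the blind student knows what is under the finger.
                tts.speak(item.getText(), TextToSpeech.QUEUE_FLUSH, null, "read-item");
                return true;
            }

            @Override
            public boolean onDoubleTap(MotionEvent e) {
                onSelected.run();   // a double tap activates the item
                return true;
            }
        });
    }

    @Override
    public boolean onTouch(View v, MotionEvent event) {
        return detector.onTouchEvent(event);
    }
}
```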

7. Novelty Aspects

Novelty could be represented by the following items.

  • Flexibility: the instructor can introduce any desired MTMs even without experience in techniques for creating MTMs or in text-to-speech. The instructor simply submits the contents of the instructional material according to the instructions of the system.

  • MTMs allow the blind student to hear the material clearly and at his or her own listening pace. The blind person can listen to and navigate the desired MTM easily without needing help or a guide.

  • MTMs represent an active method of e-learning that lets the blind student take the major role in learning. This active role supports strong achievement and meaningful learning because the student listens, understands, navigates, and answers the questions without the help of another person.

  • The paper combines two major algorithms, text-to-speech and load balancing, with a major e-learning method, the tutorial. The results cover two aspects: successfully creating MTMs and successfully applying the aforementioned algorithms, in addition to creating the website. Further, testing sample MTMs with experts reinforces the novelty of this work. Papers of this type are practical and map computer science concepts onto corresponding real-world cases.

8. Case Study

An experimental case study was tested with four different end users on the client side (treated as blind students). Several screenshots are shown for the instructor along with the corresponding screenshots of the generated MTMs for blind students. Figure 5 shows three screenshots for each side: each instructor screen is paired with the corresponding screen of the blind student’s mobile device. The screenshots on the left belong to the instructor’s laptop, and the screenshots on the right belong to the client’s mobile device (the blind student).

9. Discussion of the Six Screenshots

Two sub-items are presented in this section to explain in detail the mechanism of the GUI for both the instructor project and the mobile UI for the blind user.

9.1 Presentation of the Six Images in the Case Study

The images on the left of Figure 5 represent screenshots of the instructor side, where the instructor submits the desired instructional material according to the instructions of the authoring system, following the methodology of the tutorial module.

The images on the right of the figure represent the corresponding screenshots of the blind student’s mobile device. All contents of the mobile user interface are translated into corresponding sound (“auditory multimedia”) using text-to-speech. Figure 5 displays each screenshot of the instructor side directly next to the screenshot of the corresponding blind-student side; therefore, there are three pairs of screenshots, all shown in Figure 5. The six images in Figure 5 represent a case study on a basic physics topic.

9.2 Explanation of the User Interface (UI) Images in the Case Study

This sub-section is divided into two further sub-items:

9.2.1 Explanation of the UI for the instructor project

As explained previously, the images on the left represent the instructor-side interface. The instructor is asked to submit the instructional material item by item according to the instructions. Thus, in the first instructor-side UI, the instructor submitted the three phases of matter: solid, liquid, and gas.

In the next screen, the instructor submitted a text explanation about liquids, such as water.

In the third screen, the instructor submitted a multiple-choice question with three options related to the previous screen (about liquids).

9.2.2 Explanation of the UI for the blind student project

Based on the information submitted in the three instructor-side screens, the first mobile UI, which corresponds to the first instructor-side screen, presents the three physical states of matter (liquid, solid, and gas); the second and third mobile UIs correspond similarly to the second and third instructor screens. The mobile UI appears without colors because the target is not a visual interface; instead, the visual text in the mobile UI is converted into its corresponding voice output.

For more information about the mechanism of the UI for both the instructor project and the mobile project, please refer to Section 6, the summary of the mechanism of the authoring system for blind persons.

10. Results

The results are divided into two sections.

10.1 Results Regarding Technical Issues of the Authoring System and MTMs

  • 1- The authoring system supports creating an API library stored on the server, so there is no need to download the database to the mobile device, and the mobile device’s memory is not loaded with the MTM database.

  • 2- The text-to-speech algorithm, coded in Java in the Android environment, was successfully implemented. Blind students are guided accordingly through each visual text in the activity. Further, few icons were used so that blind students can concentrate on the information of the MTMs.

  • 3- The load-balancing algorithm was successfully implemented to distribute time; it reduced the server load and allowed data to be retrieved faster.

10.2 Results Regarding the Instructional Feedback by the Experts

The MTMs were tried and tested by sample instructors (experts), and interviews were conducted to obtain valuable feedback on both the technical and educational aspects of the system.

The author tried to test the MTMs with real blind students, but the COVID-19 pandemic prevented students from attending school. Thus, the experiment was tested only with teachers who attended school part-time.

The comments of experts could be summarized as shown below.

  • 1- The system was successfully used by multiple students at the same time.

  • 2- The speed of presenting the auditory content is suitable.

  • 3- No errors or technical problems occurred when using the MTMs.

  • 4- The user can operate an MTM easily without needing help.

  • 5- The user can easily repeat an MTM and select the desired item within it.

We also collected some limitations, disadvantages, and suggestions noted by the experts.

  • 1- They suggested converting the contents into the Arabic language and other local languages.

  • 2- They suggested generating multiple kinds of mobile learning modules such as exercises, examinations, problem solving, and so forth.

  • 3- Still, MTMs are an enhancement and reinforcement technique for the instructor, not a complete replacement.

  • 4- They suggested adding aspects of fun and enjoyment to the MTMs, and emphasized that MTMs must be accessible to blind people.

11. Conclusion

The conclusions of this work are summarized as follows:

  • 1- The use of touch and double-click by the student makes the MTMs easy to use and navigate.

  • 2- The time-balancing algorithm was applied successfully, reducing the waiting time when multiple users access the system.

  • 3- The text-to-speech algorithm was successfully implemented in the Android environment.

  • 4- According to the comments of the experts, the instructor can easily generate the desired MTMs for blind students, and those MTMs can be used without much help: the student can use them individually at his or her own pace of learning, select any section of an MTM, and repeat any section. Overall, the MTMs of the system can be used easily and flexibly.

12. Future Work Suggestions

Based on the domain of this work, some possible directions for future work are provided below.

  • 1- Design and implement a similar system using a public cloud to store the generated modules, which would allow the modules to be used by all blind persons inexpensively.

  • 2- Use more effective techniques for greater flexibility, such as stopping, repeating, and jumping to another MTM.

  • 3- Control the speed of the text-to-speech translation; technical improvements could allow the speed to be adjusted.

  • 4- Go beyond English text-to-speech to other languages. The question here is whether a specific component exists to convert text to speech for a given language; this is an important matter.

  • 5- Design and implement a real-time sharing system for blind students who use similar modules, so that they can share experience and knowledge.

  • 6- Such MTMs require evaluation not only by instructors but also by blind students, to study their feedback on using the MTMs and to evaluate the achievement of students using the corresponding instructional tools.

  • 7- Researchers and professional companies could produce specific portable devices that translate any text in a book or journal into its corresponding voice using text-to-speech techniques.

  • 8- Researchers could compare the proposed system with current or past systems that address the same task of producing mobile learning modules for blind students.

The author would like to thank the experts who tried the MTMs and offered their valuable comments, as well as Mustansiriyah University for its encouragement and support in accomplishing this applied research.
Fig. 1.

Block diagram of the load-balancing algorithm [7].


Fig. 2.

Diagram of text-to-speech techniques [15].


Fig. 3.

Diagram of tutorial module.


Fig. 4.

Block diagram of the proposed authoring system.


Fig. 5.

Three screenshots of the instructor project (images on the left) with the corresponding three screenshots of the blind student’s mobile device (images on the right).


References

  1. Hussein, KQ (2015). Authoring system of drill & practice e-learning modules for hearing impaired students. International Journal of Computer Science & Information Technology (IJCSIT). 7.
  2. Jabbar, QAZ (2012). Evaluating model for e-learning modules according to selected criteria: an object-oriented approach. Computer and Information Science. 5. Canadian Center of Science and Education.
  3. Sierra, JS, and De Togores, JSR (2012). Designing mobile apps for visually impaired and blind users: using touch screen based mobile devices: iPhone/iPad. ACHI 2012 - 5th Int. Conf. Adv. Comput. Interact. 7, 47-52.
  4. Sanjana, B, and Rejinaparvin, J (2016). Voice assisted text reading system for visually impaired persons using TTS method. 6, 15-23.
  5. Podsiadło, M, and Chahar, S (2016). Text-to-speech for individuals with vision loss: a user study, 347-351.
  6. Ashraf, MM, Hasan, N, Lewis, L, Hasan, MR, and Ray, P (2018). A systematic literature review of the application of information communication technology for visually impaired people. Int J Disabil Manag. 11, 2016.
  7. Agarwal, AK (2015). A new static load balancing algorithm in cloud computing, 2016.
  8. Paul Bills, TP, Bozarth, J, Davis, K, Everhart, W, Huggett, C, Katz, J, and Mercier, S (2020). Learning Solutions. [Online]. Available: https://learningsolutionsmag.com/articles/10-steps-for-creating-a-voice-user-interface-for-learning
  9. Dokhe, S, Dube, M, Gade, S, and Nemade, V (2018). Survey paper: image reader for blind person. Int Res J Engin Technol. 5, 1738-1740.
  10. Hussein, KQ (2007). Instructional computer system for hearing impaired persons. PhD thesis. Faculty of Computer Studies, Symbiosis International University, Pune, India.
  11. Romano, M (2017). Understanding touch and motion gestures for blind people on mobile devices. HAL Id: hal-01599659.
  12. Jeong, S-M, and Song, K-S. The community-based intelligent e-learning system. Advanced Learning Technologies, 769-771. http://ieeexplore.ieee.org/Xplore/login.jsp?url=/iel5/10084/32317/01508811.pdf?arnumber=1508811
  13. Podsiadło, M, and Chahar, S (2016). Text-to-speech for individuals with vision loss: a user study, 347-351.
  14. Dobosz, K (2017). Designing mobile applications for visually impaired people.
  15. Jaber, RA (2020). Generic synthesis system of mobile cloud learning modules for blind persons. MSc thesis. Informatics Institute for Postgraduate Studies, Iraqi Commission for Computers and Informatics.
  16. The future of voice user interfaces (VUIs). Digital Doughnut. [Online]. Available: https://www.digitaldoughnut.com/articles/2018/october/the-future-of-voice-user-interfaces-(vuis)
  17. Siebra, C, Silva, F, and Santos, A (2015). Usability requirements for mobile accessibility: a study on the vision impairment. MUM, 384-389.
  18. Akkoyunlu Buket, MB, Allegra, M, Arrigo, M, Buzzi, MC, La Sait Çelik, D, Diamantini, D, Hüllen, J, Kukulska-Hulme, A, Guardia, GT, Leporini, B, and Pieri, M (2012). Mobile learning for visually impaired people. 81.
  19. Antunes, A, and Silva, C (2020). Designing for blind users: guidelines for developing mobile apps for supporting navigation of blind people on public transports, 1-25.
  20. Poobrasert, O, and Mguine, B (2005). Knowledge engineering in multimedia design and computer assisted learning for special needs training: effectiveness. The 9th World Multi-Conference on Systemics, Cybernetics and Informatics, July 10-13, 2005, Orlando, FL, USA. http://www.iiisci.org/sci2005/proceedingssci/vol8-2001.asp
  21. Dharanidharan, J, Puviarasi, R, and Boselin Prabhu, SR (2019). Object detection system for blind people. International Journal of Recent Technology and Engineering. 8, 1675-1676. https://doi.org/10.35940/ijrte.B1129.0882S819
  22. Manoufali, M, Aladwani, A, Alseraidy, S, and Alabdouli, A (2011). Smart guide for blind people. Proceedings of the 2011 International Conference and Workshop on the Current Trends in Information Technology (CTIT’11), 61-63. https://doi.org/10.1109/CTIT.2011.6107935
  23. Lotterbach, S, and Peissner, M. Voice user interfaces in industrial environments, 592-596.
  24. Ferrett, LJ. Authoring tools, Authorware, what are e-learning tools? http://iiit.bloomu.edu/spring2006-eBook-files/chapter4.htm
  25. Van Marcke, K (1995). Learner adaptivity in generic instructional strategies. Proc. of AIED95, 323-333.
  26. Dyson, LE, Raban, R, Litchfield, A, and Lawrence, E (2008). Embedding mobile learning into mainstream educational practice: overcoming the cost barrier. IMCL 2008 Conference, April 16-18, 2008, Amman, Jordan.

Karim Q. Hussein (Ph.D. in Computer Science). Ph.D. thesis title: “Instructional Computer System for Hearing Impaired Persons.”

Since October 2016: Assistant Professor, Department of Computer Science, Faculty of Science, Mustansiriyah University, Baghdad, Iraq, teaching postgraduate and undergraduate courses. He has supervised many M.Sc. research projects in their fields of interest, as well as Ph.D. activities including teaching, guidance, and examination.

Research and teaching areas: mainly e-learning, particularly multimedia and e-learning for handicapped persons (deaf and blind), 3D animation using Maya, Internet programming, cloud computing, mobile computing, mobile cloud computing, data mining, deep learning, and authoring systems.

Dr. Hussein has published more than 55 scientific papers in scientific journals and conferences in the field of computer science.
