M.Sc. Ignacio Díaz Oreiro

Student: No

Projects

Publications

Standardized Questionnaires for User Experience Evaluation: A Systematic Literature Review

Description:

Standardized questionnaires are one of the methods used to evaluate User Experience (UX). They are composed of an invariable group of questions that users answer themselves after using a product or system, and they are considered reliable and economical to apply. The standardized questionnaires most recognized for UX evaluation are AttrakDiff, UEQ, and meCUE. Although the structure, format, and content of each of these questionnaires are known in detail, there is no systematic literature review (SLR) that categorizes the uses of these questionnaires in primary studies. This SLR presents the eligibility protocol and the results obtained by reviewing 946 papers from four digital databases, of which 553 primary studies were analyzed in detail. Different characteristics of use were obtained, such as which questionnaire is used most extensively, in which geographical contexts, and the size of the sample used in each study, among others.

Publication type: Journal Article

Published in: 13th International Conference on Ubiquitous Computing and Ambient Intelligence UCAmI 2019

Smart Meeting Room Management System Based on Real-Time Occupancy

Description:

This paper proposes the creation of a smart meeting room through the incorporation of a PIR sensor and an AWS IoT button that allow the booking system to reflect more precisely the availability of meeting rooms according to their actual occupancy status. The Internet of Things (IoT) devices are controlled using a Wi-Fi module that allows them to connect to the REST web service and integrate with the open-source Meeting Room Booking System (MRBS). In order to evaluate the system, a storyboard evaluation was conducted with 47 participants. All participants filled out the User Experience Questionnaire (UEQ), described the product using three words, and expressed their opinion through open comments. Finally, 19 participants took part in a real-life simulation of the smart meeting room and evaluated the system using the UEQ. Based on the positive acceptance reflected in the evaluations, the results show that the proposed system is considered very attractive and useful by the participants.
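The sensor-to-booking-system integration described above can be sketched as follows. The endpoint URL and payload field names are hypothetical illustrations, not the actual MRBS REST interface:

```python
import json

# Hypothetical endpoint; the real MRBS integration in the paper may differ.
OCCUPANCY_ENDPOINT = "https://example.org/api/rooms/{room_id}/occupancy"

def build_occupancy_update(room_id: str, pir_motion: bool, button_pressed: bool) -> dict:
    """Combine PIR motion and AWS IoT button state into one REST update."""
    occupied = pir_motion or button_pressed
    return {
        "url": OCCUPANCY_ENDPOINT.format(room_id=room_id),
        "body": json.dumps({"occupied": occupied, "source": "pir+button"}),
    }

update = build_occupancy_update("B-201", pir_motion=True, button_pressed=False)
print(update["url"])
print(update["body"])
```

The booking system would mark the room as free only when neither device reports presence, which is what lets availability track real occupancy rather than reservations alone.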

Publication type: Conference Paper

Published in: 2019 IV Jornadas Costarricenses de Investigación en Computación e Informática (JoCICI)

User Experience Comparison of Intelligent Personal Assistants: Alexa, Google Assistant, Siri and Cortana

Description:

Natural user interfaces are becoming popular. Among the most common natural user interfaces nowadays are voice-activated interfaces, particularly intelligent personal assistants such as Google Assistant, Alexa, Cortana, and Siri. This paper presents the results of an evaluation of these four assistants in two dimensions: the correctness of their answers and how natural their responses feel to users. Ninety-two participants conducted the evaluation. Results show that Alexa and Google Assistant are significantly better than Siri and Cortana. However, there is no statistically significant difference between Alexa and Google Assistant.

Publication type: Journal Article

Published in: 13th International Conference on Ubiquitous Computing and Ambient Intelligence UCAmI 2019

UX Evaluation with Standardized Questionnaires in Ubiquitous Computing and Ambient Intelligence: A Systematic Literature Review

Description:

Standardized questionnaires are well-known, reliable, and inexpensive instruments to evaluate user experience (UX). Although the structure, content, and application procedure of the three most recognized questionnaires (AttrakDiff, UEQ, and meCUE) are known, there is no systematic literature review (SLR) that classifies how these questionnaires have been used in primary studies reported academically. This SLR seeks to answer five research questions (RQs), starting with identifying the uses of each questionnaire over the years and by geographic region (RQ1) and the median number of participants per study (how many participants is considered enough when evaluating UX?) (RQ2). This work also aims to establish whether these questionnaires are combined with other evaluation instruments and which complementary instruments they are used with most frequently (RQ3). In addition, this review intends to determine how the three questionnaires have been applied in the fields of ubiquitous computing and ambient intelligence (RQ4), as well as in studies that incorporate nontraditional interfaces, such as haptic, gesture, or speech interfaces, to name a few (RQ5).

Methods. A systematic literature review was conducted starting from 946 studies retrieved from four digital databases. The main inclusion criterion was that the study describes a primary study reported academically in which the standardized questionnaire is used as a UX evaluation instrument in its original and complete form. In the first phase, 189 studies were discarded by screening the title, abstract, and keyword list. In the second phase, 757 studies were reviewed in full text, and 209 were discarded due to the inclusion/exclusion criteria. The 548 resulting studies were analyzed in detail.

Results. AttrakDiff is the questionnaire with the most uses since 2006, when the first studies appeared. However, since 2017, UEQ has far surpassed AttrakDiff in uses per year. The contribution of meCUE is still minimal. Europe is the region with the most widespread use, followed by Asia. Within Europe, Germany far exceeds the rest of the countries (RQ1). The median number of participants per study is 20, considering the aggregated data from the three questionnaires. However, this median rises to 30 participants in journal studies, while it stays at 20 in conference studies (RQ2). Almost 4 in 10 studies apply the questionnaire as the only evaluation instrument. The remaining studies used between one and five complementary instruments, among which the System Usability Scale (SUS) stands out (RQ3). About 1 in 4 of the studies analyzed belong to the fields of ubiquitous computing and ambient intelligence, in which UEQ increases its percentage of uses compared to its general percentage, particularly in topics such as IoT and wearable interfaces. However, AttrakDiff remains the predominant questionnaire for studies on smart cities and homes and in-vehicle information systems (RQ4). Around 1 in 3 studies include nontraditional interfaces, with virtual reality and gesture interfaces being the most numerous. The percentages of UEQ and meCUE uses in these studies are higher than their respective global percentages, particularly in studies using virtual reality and eye-tracking interfaces. AttrakDiff maintains its overall percentage in studies with tangible and gesture interfaces and exceeds it in studies with nontraditional visual interfaces, such as displays in windshields or motorcycle helmets (RQ5).

Publication type: Journal Article

Published in: Advances in Human-Computer Interaction

A Mobile Application for Improving the Delivery Process of Notifications

Description:

At present, there are systems in charge of classifying and sending notifications to smart devices at different times. However, there are not many studies that demonstrate the effectiveness of these systems in real-world settings. We propose a method that classifies and prioritizes notifications by analyzing only the content of the notification and the sender of the message. We also developed a system implementing this method. User diaries were used to analyze the behavior of the system in real-world situations, and the results showed that the implemented system significantly reduces interruptions to users. Additionally, the user experience of the system was evaluated through the standardized User Experience Questionnaire (UEQ). The results obtained were positive on most of the scales of this instrument, above average according to the UEQ benchmarks. However, aspects such as stimulation and creativity can be improved in the future to motivate users to use the system.

Publication type: Book Chapter

Published in: Advances in Intelligent Systems and Computing

Conversational Design Patterns for a UX Evaluation Instrument Implemented by Voice

Description:

In recent years there has been an increase in voice interfaces, driven by developments in Artificial Intelligence and the expansion of commercial devices that use them, such as smart assistants present on phones or smart speakers. One field that could take advantage of the potential of voice interaction is the collection of data through self-administered surveys, such as standardized UX evaluation questionnaires. This work proposes a set of conversational design patterns for standardized UX evaluation questionnaires that use semantic differential scales as a means of collecting quantitative information on user experience, as is the case with AttrakDiff and UEQ (User Experience Questionnaire). The presented design patterns seek to establish a natural conversation adapted to the user, the preservation of context between subsequent questions, the minimization of statements, and repair mechanisms for statements not completely understood by the user or the voice agent, such as eliciting the explanation of a concept or its repetition.

Publication type: Book Chapter

Published in: Lecture Notes in Networks and Systems

Comparing Written and Voice Captured Responses of the User Experience Questionnaire (UEQ)

Description:

Standardized questionnaires are widely used instruments to evaluate UX and their capture mechanism has been implemented in written form, either on paper or in digital format. This study aims to determine if the UX evaluations obtained in the standardized UEQ questionnaire (User Experience Questionnaire) are equivalent if the response capture mechanism is implemented using the traditional written form (digitally) or if a conversational voice interface is used. Having a UX evaluation questionnaire whose capture mechanism is implemented by voice could provide an alternative to collect user responses, preserving the advantages present in standardized questionnaires (quantitative results, statistically validated, self-reported by users) and adding the ease of use and growing adoption of conversational voice interfaces. The results of the case study described in this paper show that, with an adequate number of participants, there are no significant differences in the results of the six scales that make up UEQ when using either of the two response capture mechanisms.
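As background on how UEQ yields its quantitative results regardless of the capture mechanism, the sketch below shows the usual aggregation: items answered on a 7-point scale are rescaled to the -3..+3 range, and each scale score is the mean of its items. The real questionnaire's item assignment and reversed-item handling are omitted for brevity:

```python
from statistics import mean

def rescale(answer: int) -> int:
    """Map a 7-point item answer (1..7) to the -3..+3 range used by UEQ."""
    return answer - 4

def scale_score(item_answers: list) -> float:
    """Mean of the rescaled items belonging to one scale."""
    return mean(rescale(a) for a in item_answers)

# Example: four item answers belonging to one (illustrative) scale.
print(scale_score([6, 7, 5, 6]))
```

Because the score is computed from item answers alone, written and voice-captured responses can be compared scale by scale, which is what the study's equivalence analysis does.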

Publication type: Book Chapter

Published in: Lecture Notes in Networks and Systems

And How Enjoyable? Converting a User Experience Evaluation Questionnaire into a Voice Conversation

Description:

Voice interfaces have gained popularity, driven by the proliferation of intelligent assistants and the development of natural language processing. Given their progress, they could be used to implement self-reported instruments such as standardized user experience (UX) evaluation questionnaires, particularly in the response capture mechanism. This novel use of voice interfaces would make sense if the evaluations using the voice mechanism did not differ significantly from the evaluations carried out with the traditional written questionnaire. In addition, given different possible implementations of the voice mechanism, it must be analyzed whether the user experience is affected by the format of the instrument used. This paper compares two implementations of the User Experience Questionnaire in which the capture mechanism is implemented by voice. Results show that in both implementations the evaluations retain their validity, and that the use of conversational patterns improves the user experience and increases the quality of the answers, which present fewer inconsistencies.

Publication type: Conference Paper

Published in: Proceedings of the 15th International Conference on Ubiquitous Computing & Ambient Intelligence (UCAmI 2023)

Translation and Validation of the AttrakDiff User Experience Questionnaire to Spanish

Description:

The AttrakDiff questionnaire is a widely used instrument for measuring User Experience. However, a Spanish version of the questionnaire has yet to be validated. This represents a significant limitation, given the importance of the Spanish-speaking community. This study aims to translate and validate AttrakDiff into Spanish. Several translation techniques were used, and the results were merged into a translation proposal. The translated version was evaluated in two scenarios: first, an evaluation with 200+ participants to assess the translation proposal; second, an evaluation of three systems to perform a factorial analysis and determine the correlations between questions of the same dimension. The results of this study will contribute to the advancement of UX research and practice in the Spanish-speaking context and provide a valuable tool for practitioners and researchers who work with Spanish-speaking users.
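To illustrate the kind of internal-consistency check involved in validating a translated questionnaire, the sketch below computes Cronbach's alpha, a standard reliability coefficient. It is an illustration of the general technique, not the specific factorial analysis performed in the study:

```python
from statistics import variance

def cronbach_alpha(item_scores: list) -> float:
    """Cronbach's alpha; item_scores[i][p] is participant p's score on item i."""
    k = len(item_scores)
    # Each participant's total across the k items of one dimension.
    totals = [sum(answers) for answers in zip(*item_scores)]
    item_var = sum(variance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Three perfectly correlated items give the maximum alpha of 1.0.
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]))
```

Values close to 1 indicate that the items of a dimension measure the same construct, which is what a successful translation should preserve.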

Publication type: Book Chapter

Published in: Proceedings of the 15th International Conference on Ubiquitous Computing & Ambient Intelligence (UCAmI 2023)

Improving the user experience of standardized questionnaires by incorporating gamified mechanisms

Description:

User experience (UX) evaluation is essential for organizations since it allows them to generate competitive advantage and ensure that the software created is in accordance with the target users. There are several UX evaluation techniques and tools for information systems, among which the most widely used are standardized written questionnaires. However, in recent years, more interactive techniques have begun to take center stage, such as gamified methodologies that seek to include gaming techniques to attract greater interest from evaluators. This research aims to analyze the use of gamified mechanisms in combination with standardized UX evaluation questionnaires, taking advantage of the already demonstrated statistical capabilities of the questionnaires and at the same time providing the novelty of gamification techniques so that the collection of evaluation responses is attractive to participants.

Publication type: Conference Paper

Published in: 2024 IEEE VII Congreso Internacional en Inteligencia Ambiental, Ingeniería de Software y Salud Electrónica y Móvil (AmITIC)

Evaluation of Usability, User Experience and Accessibility of Applications: a Tertiary Review

Description:

At present, there is a growing need to understand how users interact with digital applications to ensure the success of new products and a good experience for users. Three main concepts are central to this subject: usability, user experience, and accessibility. As such, research efforts have been undertaken to create evaluation instruments for each and apply them to multiple tools. This study aims to compile the research gathered in multiple secondary studies on the evaluation of these concepts. A systematic search process was performed to answer research questions related to how evaluations in the fields of usability, user experience, and accessibility are performed. From this search, 45 secondary studies were obtained and further analyzed. We were able to identify the most common methodologies and tools highlighted across the secondary studies for each concept. The main results were the following: questionnaires are ubiquitous in user experience evaluation and, to a lesser extent, in usability evaluation, while accessibility evaluation is mostly performed with the aid of automated tools; there is a tendency for authors to develop and use their own questionnaires without validating them; and 24 main categories of evaluated characteristics were found, of which Satisfaction, Efficiency, Effectiveness, and Attractiveness were the most common.

Publication type: Conference Paper

Published in: 2024 IEEE VII Congreso Internacional en Inteligencia Ambiental, Ingeniería de Software y Salud Electrónica y Móvil (AmITIC)

Current Practices in Accessibility Evaluation: A Literature Review of Over 100 Studies

Description:

This article aims to fill the gap in accessibility assessment practices through a comprehensive review of over 100 research articles, identifying the most commonly used accessibility assessment methods. The review includes both studies assessing the accessibility of systems and those promoting accessibility or assistive technologies. By reviewing existing research, we seek to provide a clear overview of the predominant methodologies in the field, highlighting current practices and pointing out areas for methodological improvement and innovation. The significance of this study lies in its potential to inform future research and practice, providing a basis for developing more effective and comprehensive assessment frameworks that improve accessibility for users with disabilities.

Publication type: Conference Paper

Published in: 2024 IEEE VII Congreso Internacional en Inteligencia Ambiental, Ingeniería de Software y Salud Electrónica y Móvil (AmITIC)

Supporting UX Evaluation with Open Text Word Clustering

Description:

This research work proposes using word clustering to identify concepts associated with the user experience (UX) evaluation of a version of the standardized User Experience Questionnaire (UEQ) in which the usual written capture of responses is replaced by input through a voice interface. The clusters were built from a Word2vec word-embedding model trained on sentences contributed by participants who evaluated the voice-interface implementation using traditional quantitative questionnaires, complemented with open-text comments. The results show clusters around keywords such as 'assistant', 'understand', 'response', and 'survey', which allow the identification of words associated with both the voice implementation and the UEQ questionnaire itself, and provide information about attitudes toward the implemented assistant that researchers could investigate in more detail, for example in a subsequent evaluation carried out through a focus group or semi-structured interviews.
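A minimal sketch of the clustering idea follows. The tiny 3-dimensional vectors and the similarity threshold are invented for illustration, whereas the study builds real Word2vec embeddings from participant comments:

```python
from math import sqrt

# Invented toy embeddings; a real Word2vec model would supply these vectors.
EMBEDDINGS = {
    "assistant": [0.9, 0.1, 0.0],
    "alexa":     [0.8, 0.2, 0.1],
    "survey":    [0.1, 0.9, 0.0],
    "question":  [0.2, 0.8, 0.1],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def cluster_around(keyword: str, threshold: float = 0.9) -> list:
    """Words whose vectors are within `threshold` cosine similarity of keyword."""
    kv = EMBEDDINGS[keyword]
    return [w for w, v in EMBEDDINGS.items()
            if w != keyword and cosine(kv, v) >= threshold]

print(cluster_around("assistant"))
```

Grouping words this way around seed keywords is what lets open-text comments surface themes (e.g. about the assistant versus about the questionnaire) that quantitative scales alone would miss.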

Publication type: Conference Paper

Published in: 2024 IEEE VII Congreso Internacional en Inteligencia Ambiental, Ingeniería de Software y Salud Electrónica y Móvil (AmITIC)

Smart Speakers and Intelligent Personal Assistants: Experiences, Perspectives, and UX Evaluation

Description:

This paper consolidates the efforts of our research group in designing and developing voice applications for smart speakers and intelligent personal assistants (such as Amazon Alexa, Google Assistant, Siri, and Cortana) over recent years. We present six case studies: a career guidance system, an application for educational fairs, an appointment booking system for a dentist's office, a set of games for seniors, a notification system for smart farming, and the use of smart speakers as mechanisms to evaluate user experience. These case studies encompass various contexts, each requiring unique approaches. Each voice application was individually evaluated using standardized questionnaires and customized surveys. Additionally, we conducted two parallel evaluations to address key questions about human interaction with intelligent personal assistants. These evaluations involved both expert groups and end-users of the voice applications. Finally, we synthesize the knowledge, experiences, and perspectives gathered over these years to present a list of lessons learned, aimed at guiding researchers and developers in enhancing the user experience (UX) of voice applications.

Publication type: Conference Paper

Published in: 2024 IEEE VII Congreso Internacional en Inteligencia Ambiental, Ingeniería de Software y Salud Electrónica y Móvil (AmITIC)