M.Sc. Ignacio Díaz Oreiro

Is a student: 
No
Program of study: 

Projects

Publications

Standardized Questionnaires for User Experience Evaluation: A Systematic Literature Review

Description:

Standardized questionnaires are one of the methods used to evaluate User Experience (UX). They are composed of an invariable group of questions that users answer themselves after using a product or system, and they are considered reliable and economical to apply. The standardized questionnaires most recognized for UX evaluation are AttrakDiff, UEQ, and meCUE. Although the structure, format, and content of each of these questionnaires are known in detail, there is no systematic literature review (SLR) that categorizes how they have been used in primary studies. This SLR presents the eligibility protocol and the results obtained by reviewing 946 papers from four digital databases, of which 553 primary studies were analyzed in detail. Different characteristics of use were obtained, such as which questionnaire is used most extensively, in which geographical contexts, and the size of the sample used in each study, among others.

Publication type: Journal Article

Published in: 13th International Conference on Ubiquitous Computing and Ambient Intelligence (UCAmI 2019)

Smart Meeting Room Management System Based on Real-Time Occupancy

Description:

This paper proposes the creation of a smart meeting room through the incorporation of a PIR sensor and an AWS IoT button, which allow the booking system to reflect the availability of meeting rooms more precisely according to their actual occupancy status. The Internet of Things (IoT) devices are controlled using a Wi-Fi module that allows them to connect to a REST web service and to integrate with the open-source Meeting Room Booking System (MRBS). In order to evaluate the system, a storyboard evaluation was conducted with 47 participants. All participants filled out the User Experience Questionnaire (UEQ), described the product using three words, and expressed their opinion through open comments. Finally, 19 participants took part in a real-life simulation of the smart meeting room and evaluated the system using the UEQ. Based on the positive acceptance reflected in the evaluations, the results show that the proposed system is considered very attractive and useful by the participants.
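
As a rough illustration of the kind of sensor-to-service integration described above, the following Python sketch posts an occupancy event to a REST endpoint so that a booking system could reflect real occupancy. The endpoint URL, payload fields, and room identifier are hypothetical placeholders, not the actual MRBS integration from the paper.

```python
# Minimal sketch (hypothetical endpoint and fields), assuming a REST service
# that accepts occupancy updates produced from PIR sensor readings.
import time
import requests

SERVICE_URL = "https://example.org/api/rooms/occupancy"  # placeholder, not the real MRBS endpoint

def report_occupancy(room_id: str, occupied: bool) -> None:
    """Send the current occupancy state so the booking system can reflect
    actual room usage instead of reservations alone."""
    payload = {
        "room_id": room_id,
        "occupied": occupied,
        "timestamp": int(time.time()),
    }
    response = requests.post(SERVICE_URL, json=payload, timeout=5)
    response.raise_for_status()

# Example: a PIR reading of "motion detected" marks the room as occupied.
report_occupancy("meeting-room-1", occupied=True)
```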

Publication type: Conference Paper

Published in: 2019 IV Jornadas Costarricenses de Investigación en Computación e Informática (JoCICI)

User Experience Comparison of Intelligent Personal Assistants: Alexa, Google Assistant, Siri and Cortana

Description:

Natural user interfaces are becoming popular. Among the most common natural user interfaces nowadays are voice-activated interfaces, particularly smart personal assistants such as Google Assistant, Alexa, Cortana, and Siri. This paper presents the results of an evaluation of these four smart personal assistants in two dimensions: the correctness of their answers and how natural the responses feel to users. Ninety-two participants took part in the evaluation. Results show that Alexa and Google Assistant are significantly better than Siri and Cortana, while there is no statistically significant difference between Alexa and Google Assistant.
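
The abstract above reports statistical significance but does not name the test used. Purely as a generic illustration with made-up scores, a nonparametric pairwise comparison of two assistants could be sketched as follows.

```python
# Illustrative only: invented per-participant correctness scores and a generic
# Mann-Whitney U test; the paper's actual data and statistical test may differ.
from scipy.stats import mannwhitneyu

alexa_scores = [8, 9, 7, 8, 9, 8, 7, 9]   # hypothetical ratings (0-10)
siri_scores = [6, 5, 7, 6, 5, 6, 7, 5]

stat, p_value = mannwhitneyu(alexa_scores, siri_scores, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
# A p-value below 0.05 would be read as a statistically significant difference.
```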

Publication type: Journal Article

Published in: 13th International Conference on Ubiquitous Computing and Ambient Intelligence (UCAmI 2019)

UX Evaluation with Standardized Questionnaires in Ubiquitous Computing and Ambient Intelligence: A Systematic Literature Review

Description:

Standardized questionnaires are well-known, reliable, and inexpensive instruments to evaluate user experience (UX). Although the structure, content, and application procedure of the three most recognized questionnaires (AttrakDiff, UEQ, and meCUE) are known, there is no systematic literature review (SLR) that classifies how these questionnaires have been used in primary studies reported academically. This SLR seeks to answer five research questions (RQs), starting with identifying the uses of each questionnaire over the years and by geographic region (RQ1) and the median number of participants per study (how many participants are considered enough when evaluating UX?) (RQ2). This work also aims to establish whether these questionnaires are combined with other evaluation instruments and with which complementary instruments they are used most frequently (RQ3). In addition, this review intends to determine how the three questionnaires have been applied in the fields of ubiquitous computing and ambient intelligence (RQ4) and in studies that incorporate nontraditional interfaces, such as haptic, gesture, or speech interfaces, to name a few (RQ5).

Methods. A systematic literature review was conducted starting from 946 studies retrieved from four digital databases. The main inclusion criterion was that the study describes a primary study reported academically in which the standardized questionnaire is used as a UX evaluation instrument in its original and complete form. In the first phase, 189 studies were discarded by screening the title, abstract, and keyword list. In the second phase, 757 studies were reviewed in full text, and 209 were discarded according to the inclusion/exclusion criteria. The 548 resulting studies were analyzed in detail.

Results. AttrakDiff is the questionnaire with the most uses since 2006, when the first studies appeared. However, since 2017, UEQ has far surpassed AttrakDiff in uses per year. The contribution of meCUE is still minimal. Europe is the region with the most extended use, followed by Asia; within Europe, Germany greatly exceeds the rest of the countries (RQ1). The median number of participants per study is 20, considering the aggregated data from the three questionnaires. However, this median rises to 30 participants in journal studies, while it stays at 20 in conference studies (RQ2). Almost 4 in 10 studies apply the questionnaire as the only evaluation instrument; the remaining studies used between one and five complementary instruments, among which the System Usability Scale (SUS) stands out (RQ3). About 1 in 4 of the studies analyzed belong to the fields of ubiquitous computing and ambient intelligence, in which UEQ increases its percentage of uses compared to its general percentage, particularly in topics such as IoT and wearable interfaces; however, AttrakDiff remains the predominant questionnaire in studies on smart cities and homes and in-vehicle information systems (RQ4). Around 1 in 3 studies include nontraditional interfaces, with virtual reality and gesture interfaces being the most numerous. The percentages of UEQ and meCUE uses in these studies are higher than their respective global percentages, particularly in studies using virtual reality and eye-tracking interfaces. AttrakDiff maintains its overall percentage in studies with tangible and gesture interfaces and exceeds it in studies with nontraditional visual interfaces, such as displays in windshields or motorcycle helmets (RQ5).

Publication type: Journal Article

Published in: Advances in Human-Computer Interaction

A Mobile Application for Improving the Delivery Process of Notifications

Description:

At present, there are systems in charge of classifying and sending notifications to smart devices at different times. However, there are not many studies that demonstrate the effectiveness of these systems in real-world settings. We propose a method that classifies and prioritizes notifications by analyzing only the content of the notification and the sender of the message, and we developed a system implementing this method. User diaries were used to analyze the behavior of the system in real-world situations, and the results showed that the implemented system significantly reduces interruptions to users. Additionally, the user experience of the system was evaluated through the standardized User Experience Questionnaire (UEQ). The results obtained were positive on most of the scales of this instrument, above the average according to the UEQ benchmarks. However, aspects such as stimulation and creativity can be improved in the future to motivate users to use the system.
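
As a minimal sketch of the idea of prioritizing by sender and content alone (not the authors' classifier, whose rules are not given in the abstract), a rule-based version might look like this; the keyword list, sender list, and priority labels are hypothetical.

```python
# Hypothetical rules: prioritize a notification using only its sender and content.
URGENT_KEYWORDS = {"urgent", "asap", "deadline", "now"}
PRIORITY_SENDERS = {"boss@example.com", "family"}

def prioritize(sender: str, content: str) -> str:
    """Return a delivery priority derived from sender and message content only."""
    words = set(content.lower().split())
    if sender in PRIORITY_SENDERS or URGENT_KEYWORDS & words:
        return "deliver-now"      # interrupt the user immediately
    if "?" in content:
        return "deliver-soon"     # questions get a short deferral
    return "deliver-later"        # batch the rest to reduce interruptions

print(prioritize("newsletter@example.com", "Weekly digest"))    # deliver-later
print(prioritize("boss@example.com", "Need the report ASAP"))   # deliver-now
```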

Publication type: Book Chapter

Published in: Advances in Intelligent Systems and Computing

Conversational Design Patterns for a UX Evaluation Instrument Implemented by Voice

Description:

In recent years there has been an increase in voice interfaces, driven by developments in Artificial Intelligence and the expansion of commercial devices that use them, such as the smart assistants present on phones and smart speakers. One field that could take advantage of the potential of voice interaction is data collection through self-administered surveys, such as standardized UX evaluation questionnaires. This work proposes a set of conversational design patterns for standardized UX evaluation questionnaires that use semantic differential scales as a means of collecting quantitative information on user experience, as is the case of AttrakDiff and the UEQ (User Experience Questionnaire). The presented design patterns seek to establish a natural conversation adapted to the user, the preservation of context between subsequent questions, the minimization of statements, and mechanisms to repair statements not completely understood by the user or the voice agent, such as eliciting the explanation of a concept or a repetition.
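
The following is a hypothetical sketch of two of the patterns mentioned above: asking one semantic-differential item as a conversational turn, and repairing turns that were not understood (repetition or an explanation on request). The wording, scale anchors, and helper names are illustrative, not the published patterns.

```python
# Sketch, assuming a speech front end wrapped as a `listen(prompt) -> str` callable.
def ask_item(left: str, right: str, listen, explain: str) -> int:
    """Ask one 7-point semantic-differential item (AttrakDiff/UEQ style) by voice."""
    prompt = f"On a scale from 1 ({left}) to 7 ({right}), how would you rate the product?"
    while True:
        answer = listen(prompt).strip().lower()
        if answer in {"repeat", "say that again"}:        # repair: repeat the question
            continue
        if answer in {"explain", "what do you mean"}:     # repair: elicit an explanation
            answer = listen(explain + " " + prompt).strip()
        try:
            value = int(answer)
            if 1 <= value <= 7:
                return value
        except ValueError:
            pass
        prompt = "Sorry, I need a number between 1 and 7. " + prompt  # agent-side repair

# Scripted "listener" standing in for speech recognition, for illustration:
replies = iter(["what do you mean", "6"])
score = ask_item("confusing", "clear", lambda _prompt: next(replies),
                 explain="1 means very confusing and 7 means very clear.")
print(score)  # 6
```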

Publication type: Book Chapter

Published in: Lecture Notes in Networks and Systems

Comparing Written and Voice Captured Responses of the User Experience Questionnaire (UEQ)

Description:

Standardized questionnaires are widely used instruments to evaluate UX, and their response capture mechanism has traditionally been implemented in written form, either on paper or in digital format. This study aims to determine whether the UX evaluations obtained with the standardized UEQ (User Experience Questionnaire) are equivalent when the response capture mechanism is implemented using the traditional written form (digitally) and when a conversational voice interface is used. Having a UX evaluation questionnaire whose capture mechanism is implemented by voice could provide an alternative way to collect user responses, preserving the advantages of standardized questionnaires (quantitative, statistically validated, self-reported results) and adding the ease of use and growing adoption of conversational voice interfaces. The results of the case study described in this paper show that, with an adequate number of participants, there are no significant differences in the results of the six scales that make up the UEQ when using either of the two response capture mechanisms.

Publication type: Book Chapter

Published in: Lecture Notes in Networks and Systems

And How Enjoyable? Converting a User Experience Evaluation Questionnaire into a Voice Conversation

Description:

Voice interfaces have gained popularity, driven by the proliferation of intelligent assistants and the development of natural language processing. Given their progress, they could be used to implement self-reported instruments such as standardized user experience (UX) evaluation questionnaires, particularly in the response capture mechanism. This novel use of voice interfaces would make sense if the evaluations using the voice mechanism did not differ significantly from the evaluations carried out with the traditional written questionnaire. In addition, given different possible implementations of the voice mechanism, it must be analyzed whether the user experience is affected by the format of the instrument used. This paper compares two implementations of the User Experience Questionnaire in which the capture mechanism is implemented by voice. Results show that in both implementations the evaluations retain their validity, and that the use of conversational patterns improves the user experience and increases the quality of the answers, which present fewer inconsistencies.

Publication type: Conference Paper

Published in: Proceedings of the 15th International Conference on Ubiquitous Computing & Ambient Intelligence (UCAmI 2023)

Translation and Validation of the AttrakDiff User Experience Questionnaire to Spanish

Description:

The AttrakDiff questionnaire is a widely used instrument for measuring User Experience. However, a Spanish version of the questionnaire has yet to be validated. This represents a significant limitation, given the importance of the Spanish-speaking community. This study aims to translate AttrakDiff to Spanish and validate the translation. Several translation techniques were used, and the results were combined into a translation proposal. The translated version was evaluated in two scenarios: first, an evaluation with 200+ participants to assess the translation proposal; second, an evaluation of three systems to perform a factor analysis and determine the correlations between questions of the same dimension. The results of this study will contribute to the advancement of UX research and practice in the Spanish-speaking context and provide a valuable tool for practitioners and researchers who work with Spanish-speaking users.
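
As a generic illustration of checking that items of the same dimension correlate (not the authors' analysis pipeline, and using made-up responses), the inter-item correlation matrix and Cronbach's alpha for one dimension could be computed like this.

```python
# Made-up data: rows are participants, columns are four items of one
# hypothetical AttrakDiff dimension on a 7-point scale.
import numpy as np

responses = np.array([
    [6, 5, 6, 5],
    [4, 4, 5, 4],
    [7, 6, 7, 6],
    [3, 3, 2, 3],
    [5, 5, 6, 5],
])

# Pairwise correlations between items of the same dimension.
print(np.corrcoef(responses, rowvar=False).round(2))

# Cronbach's alpha for the dimension (higher values suggest internal consistency).
k = responses.shape[1]
item_variances = responses.var(axis=0, ddof=1).sum()
total_variance = responses.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")
```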

Publication type: Conference Paper

Published in: Proceedings of the 15th International Conference on Ubiquitous Computing & Ambient Intelligence (UCAmI 2023)