
FLUBBY
An AI companion that supports children in exploring their emotions
USER EXPERIENCE | USER INTERFACE | CHATBOT | AI
GOAL
Explore the emotional and ethical dimensions of AI by developing a child-friendly AI companion and analyzing how embodiment influences AI-human relationships.
MY ROLE
UX Researcher & Designer
Workshops, Persona, Storyboarding, Wireframes, Prototyping, User testing
TOOLS
Figma, Miro, Replika
Above | An overview of the project’s methodology
Empathize
During our workshop discussions, we addressed some important questions regarding AI's ability to express emotions and interact with users.
We debated whether AI should express emotions and, if so, how it could be achieved. We explored various modalities such as color, movement, facial expressions, and tone of voice.
We also discussed the possibility of assigning emotions to objects, considering the concept of embodiment and anthropomorphism and their impact on user interaction.
In addition, we explored the ethical considerations surrounding AI's ability to ask questions and what levels of autonomy are considered appropriate. We identified scenarios where this behavior could enhance the user experience or facilitate learning.
These sessions provided valuable insights into the ethical and practical considerations of AI emotional expression and interaction.
Workshops: AI expressions and embodiment
Exercises
Replika as AI companion
We used an AI app called Replika as a companion for several weeks to explore the daily use of AI. The app allows users to choose their companion's human appearance, including hair, eyes, and skin details, and the type of relationship they want with it (such as romantic, friendship, or mentorship).
However, it was difficult to discuss abstract concepts such as emotions, values, and character traits with Replika, as it tended to avoid these topics. On the other hand, talking about its body was interesting, as it described having humanoid features.
Above | Conversations with Replika AI companion
Finding emotion in objects
We explored how different objects can evoke emotions. During this exercise, we took pictures of objects that triggered certain emotions and compiled our findings on a board. In the second step of the exercise, we searched for objects that evoked opposite emotions to those we had already discovered.
After discussing and reflecting on these exercises in the workshops, we decided it was important to narrow down the topic and focus on the ideation phase of creating an AI-powered Chatbot.
Ideation
Devising a concept
To prompt the brainstorming process, the “How might we…?” method helped us explore different conceptual possibilities. Some of the questions that caught our attention were:
How might we support the user in understanding certain emotions with the help of a Chatbot?
How might we design a Chatbot that can assess and provide a proper emotional response depending on the situation?
Additionally, after considering various audiences, we found it interesting to focus on children aged 5 to 7, a group that could greatly benefit from further exploring their emotions. We then created a proto-persona to delve into the potential characteristics, goals, and experiences of this audience. One of the major challenges with this age group is that they are just starting to read and write, so it's important to use simple language supported by nonverbal communication, such as voice and graphic features.
Taking the proto-persona into account, we described the characteristics we considered essential for our Chatbot concept to interact successfully with users.
Based on the concept description, we created storyboards to illustrate potential user-AI interactions in different scenarios where our user could be experiencing various emotions (e.g., sad, happy, angry, scared). This exercise allowed us to visualize possible interactions and highlighted the importance of establishing a trusting relationship with the chatbot for its success.
Above | Proto-persona and the elements of the concept
DESIGN & TESTING
In the final stage of our project, we created a user flow for our chatbot interface. We named our chatbot FLUBBY and designed the first prototype using Figma. We used the Wizard of Oz method to test the prototype, first with three colleagues and then with two children between 5 and 7 years old. The latter was particularly difficult because we needed consent from parents, who tend to have concerns about the impact of technology and AI on their children's lives. Based on the feedback, we made some tweaks before presenting the final prototype.
Sneak peek at the testing using Wizard of Oz
EVALUATE
While creating an AI chatbot for children, we gained some valuable insights. It is crucial to consider various factors such as their age, abilities, and attention span, as well as whether they can read or write. Additionally, the AI should appear warm and friendly to help children build a relationship with it more easily, leading to better adoption.
FLUBBY is an example of an unembodied chatbot concept that can benefit children. The chatbot aims to actively learn about the user's emotions and preferences, tailoring the experience and suggesting activities suited not only to the target age group but to the individual user. Additionally, it can provide companionship for children who spend much time alone and might support their emotional needs.
However, using a chatbot companion presents challenges and raises questions about establishing boundaries between the AI and users. What are the possible negative effects of this interaction, and can they be mitigated? Which topics are appropriate for FLUBBY to teach kids, and are there any inappropriate ones? Finally, how can parents monitor these interactions between AI and children?
As concepts and tools related to AI, such as FLUBBY, continue to emerge and advance, we hope they will not only hold promise for enhancing people's emotional well-being but also spur important discussions about the implications and safeguards surrounding interactions between users and AI.