UX/UI + Research
Intel's Bleep is an end-user Windows application that puts users in control of offensive content they may encounter during online gaming by using AI to detect and redact audio based on user preferences. The app combines AI models from Intel and Spirit AI with the Windows audio architecture. From the UI, users can select which toxicity filters to enable and review the conversation transcript and its associated analysis.
Goals
Create a proof-of-concept alpha to test the artificial intelligence (AI) product with real users.
Reduce toxic behavior in online gaming
Opportunity
Online gaming is known for toxic behavior: hateful and abusive language that can ruin the gaming experience. Research has shown that toxic language is a pain point for many gamers, and Intel customers have cited toxicity as a top-three pain point.
74% of online gamers in the U.S. experience harassment
Research
In 2020, I joined a small design and research team working through the alpha testing phase. A large part of my involvement was facilitating the user testing, research, and analysis.
Ideation
I wasn't able to ideate on this project as much as I would have liked. My main role and key contributions came from the research side; however, I did work with the Design Director to ideate around the overall UX and UI of Bleep.
Execution
I created a user testing program, documented findings, and delivered them to the broader team. Both the Spirit AI and Intel teams then built improvements based on that feedback.
Results
Bleep was originally announced at GDC 2019, when it was a very early prototype. Intel gave the first preview of Bleep's UI during its GDC 2021 Showcase. The product is currently in open beta and available for testing.
Although Intel recognizes this is not a holistic solution, it believes it can provide tools to help improve gamers' experiences by leveraging machine-learning hardware accelerators like the GNA (Gaussian Neural Accelerator).
Reflection
What I'd do differently:
Looking back, I'd like to have spent more time evaluating the ethics of choosing to turn off or keep on specific words or types of toxicity.
While we were asked to produce an MVP for the UI, I would have liked to spend more time on it.