How might we empower people to discuss, critique, and reflect on ways in which AI programs find their way into unexpected areas of our lives? FutureShift is a prototyping workshop for exploring alternative future scenarios and values in emerging technologies.
For my Master’s capstone project in Human Centered Design & Engineering, I created a speculative design workshop to explore the implications of machine learning algorithms in the near future. Participants explored Black Mirror-esque scenarios and created boundary-pushing prototypes that might exist in those scenarios.
Skip to our Process Book where we documented our workshop and learnings.
- Paul Roberts: Design Strategist & Project Manager
- Ariel Duncan: UX Researcher
Role: Visual Design Lead
- Responsible for creative direction, writing, and editing for the workshop and process book.
- Led literature review and partnered with Research Lead on expert interviews.
- Collaborated on all other aspects of this project.
Timeline: 10 weeks
Is frictionless, seamless design good for society?
From fake news on Facebook to predictive policing software, we realized we were blind to how machine learning algorithms were dictating many aspects of our lives. As algorithms increasingly make decisions for us, they gain agency without us noticing it.
We wanted to shift the agency back towards people, and let them decide how much power AI programs should have over their day-to-day decisions.
To understand how we want to shape and be shaped by AI programs, we need to create space to challenge presumed virtues and dominant values inscribed in them.
Speculative design gives us an opportunity to challenge our own biases (our “givens”) and critically reflect on what we want and don’t want to see in our future as a society.
- Enable participants to explore the social and ethical implications of technological predictions by making a “speculative object” that expressed one of their desired relationships with AI programs or machine learning algorithms.
- Display these objects at a “yard sale of the future” where the fictitious products would be sold alongside items one might find at a real yard sale. The yard sale provided a real context to start conversations with visitors around the values embedded in fictitious products.
We created a workshop called FutureShift
We conducted our workshop with 4 participants, all from the University of Washington. Diversity of participants was critical; the university environment allowed us to recruit people from different majors, ranging from juniors to Ph.D. students.
- Meet My Algorithms Icons
- Algorithmic Self Drawing Sheets
- Future Signals
- FutureShift Card Deck
- Yard Sale Price Cards
- Workshop Evaluation Form
- Workshop Protocol
While we didn’t expect to cultivate agency in 3 hours, we saw hints of agency emerge during our discussions and post-workshop evaluation.
2) Yard Sale
We had a total of 19 visitors and engaged 10 of them in conversations about the workshop participants’ speculative objects. We filmed some of the visitors’ responses.
Since conducting this workshop, we’ve discovered that other companies have used a similar process.
Understanding the issue at scale
We conducted 6 interviews with experts in the field and read over 50 pieces of academic literature and industry reports.
- Unpack the interplay between algorithms and culture & society, and uncover methods for revealing how algorithms work (e.g., reverse-engineering an algorithm’s outputs).
- Explore strategies from the field to inform how we could inspire awareness, discussion, and reflection around algorithms and people’s desired relationship with them.
Findings from discovery research
1) People were conflicted about how machine learning algorithms “see” them because they didn’t trust the intentions of the companies behind them (e.g., Facebook, Google).
But at the same time, they were frustrated when these systems didn’t know enough about them to personalize the experience.
2) People make up stories about how algorithmic systems work, and then try to train those systems to better align with their preferences.
People see the data and content that are served up but not how they’re being pulled together or weighted.
3) We risk creating algorithms that exclude and discriminate if we don’t actively challenge our narrow view of the world.
- It’s hard to challenge a decision made by algorithms when we only see the output – much less hold the system accountable.
- It’s easy to accept a result as a given when algorithms are perceived to be objective, even when they’re not. They encode human biases in the form of rules, categories, and criteria.
The ability to change how algorithms work rests with developers, designers, and researchers. Without challenging the personal assumptions that go into designing algorithms, we recreate current inequalities in algorithmic systems.
Reframing the Challenge
As designers, developers, and researchers, we have an opportunity to shift agency back to “non-experts” through a process that highlights people, perspectives, and values that might have been forgotten during product development.
Empowering non-experts to voice their concerns and tensions about how machine learning algorithms are currently designed.
How might we empower people to discuss, critique, and reflect on ways in which AI programs might find their way into unexpected areas of our lives?
Two immediate audiences:
- Non-experts: We want to hear from people who aren’t usually included in developing machine learning algorithms or thinking about the long-term future.
- Design, development, and research teams: These people have direct impact on creating algorithmic systems. We want to empower them to challenge their own perspectives about “what’s desirable.”
Why it matters:
By working backwards from preferred visions, we can enable tangible actions today while including alternative values and considering the ethical implications of rules, categories, and criteria we embed into algorithms.
Arriving at Our Concept
Push into the “ridiculous” to provoke reflection on what we think is a “desirable future”
Our “solution” will seek to:
- Empower people to challenge the status quo of how the future is currently being imagined and to add their own personal critiques and desires into consideration.
- Make algorithms and their impact more visible and tangible.
- Include a more diverse audience when we discuss future implications of algorithms.
How can we help everyday people touch, see, smell, and imagine “the future” today?
We decided to modify a participatory workshop framework from the Extrapolation Factory. Participants made speculative, future-facing objects and then placed them in everyday contexts to engage the public in spontaneous discussions about possible futures.
Iterating the Workshop Concept
1) From present solutions to future implications
We initially included STEEP lenses (Social, Technological, Environmental, Economic, Political), but eventually took them out because participants were getting stuck categorizing the future signals instead of articulating their desired scenarios.
We simplified the Futures Wheel to keep up participants’ momentum as they generated implications for society from their chosen signals.
We curated a list of future signals from organizations like Gartner and Intel to provoke discussions and probe for people’s desired relationships with AI programs.
To embed their own personal critiques and desires, participants created narratives to articulate their point of view about the implications of their chosen signals.
Even though we removed the STEEP lenses, I brainstormed provocative questions about each lens and created a deck of “STEEP” cards to help participants articulate their thoughts.
2) From barely noticeable to visceral
Participants made tangible objects to playfully materialize their hopes and concerns about AI programs. By using materials that people could see, feel, and smell, we were able to talk about other possible versions of the future in a more concrete way.
One challenge was overcoming the points of view of professional designers and engineers already encoded into finished products.
We broke apart finished products to make them more modular and “playable.” We also carefully selected items that loosely communicated how they should be used.
An appropriate context encourages people to consider near-future products side by side with everyday and “outdated” products. This sort of comparison brought out visceral, emotional connections to the potential scenarios embodied in the future products, and we used those reactions to generate discussions with the public.
We tested our assumptions around how “familiar” a context would be to ensure that it grounded both the public and workshop participants.
We landed on the “yard sale of the future” concept based on four criteria: accessibility, familiarity, natural evaluative-ness, and feasibility.
3) From deductive thinking to inductive thinking
Future predictions are often dictated by research firms and academic think tanks. To make futures thinking more relatable to everyday life, we tested two warm-up activities: Meet My Algorithms and the Algorithmic Self Drawing Exercise.
We sparked spontaneous 1-on-1 conversations with participants to help them articulate their hopes and concerns about a future prediction.
We also worked 1-on-1 with participants during the prototyping activity to help them “think with” the materials.
In the post-workshop survey, participants said that our facilitation made the workshop more enjoyable and helped them materially express the narrative they wrote.
Walkthrough of the FutureShift Workshop
Our workshop consisted of eight phases. Here is a step-by-step walkthrough, illustrated with an actual example from the workshop.
1) Meet My Algorithms
As they entered, participants were invited to chart their relationship with common algorithms (e.g. Facebook, Amazon, Google, Netflix) on a wall. They mapped these algorithms on a 2×2 diagram using two scales: “trustworthy” to “creepy” and “gets me” to “doesn’t know me at all.”
2) Algorithmic Self Drawing Exercise
To reveal the invisible work of algorithmic systems, participants sketched and shared versions of themselves from the perspective of three popular algorithms: Facebook, Netflix, and Amazon.
3) Choosing a signal
Participants were invited to review 35 predictions about the impact of algorithms on culture and select one or two that they’d like to explore further. These “signals” challenge stereotypical ideas about what counts as “the future” by exposing participants to a variety of scenarios about how algorithms might influence unexpected areas of their life.
4) Sorting futures
Participants chose whether their signal about the future belonged in a “probable”, “plausible”, or “possible” future. Using a pink thread, they mapped a route through the appropriate section of the Future Cone and into the first arrow of the next phase of the diagram. The Future Cone helped participants understand “futures” in the plural.
5) Exploring implications
Participants fleshed out the “world” of their signal, creating multiple narratives that explored the implications for other aspects of life in that future scenario. A deck of provocative questions encouraged participants to consider different perspectives. We created a two-level chevron as a visual cue to push them to explore the secondary and tertiary consequences of their signal’s scenario.
6) Making speculative products
Participants were invited to choose materials that were of interest to them and physically render any aspects of the narratives they created. We placed the materials close to the Future Cone area to help them make connections between the materials’ qualities and the scenarios they were considering.
7) Putting the products up for sale
Participants were invited to imagine their object existing in the context of a yard sale of the future. They created narratives about how their object ended up for sale and the price they would ask for it. Doing this helped them think about how their object might be used, become obsolete, or be discarded.
8) Yard Sale of the Future
We hosted an actual yard sale where participants’ objects were displayed amongst “normal” items one would find at a typical yard sale. Participants’ objects were used to inspire spontaneous discussions about the possible futures they represented as visitors browsed items for sale. We were able to engage passersby in brief but emotionally-charged conversations about the role algorithms might play in the near future.
As designers, we focus on the narrow goals of the user and the business as they exist today, and we underestimate the long-term impact of the things we design.
Without changing our work processes, we won’t be able to change our outcomes. If we never ask “what if?” and “is this a good idea?”, it’s harder to parse out which possibilities are actually preferable and which are terrifying, as Black Mirror invites us to consider.