How might we empower people to discuss, critique, and reflect on ways in which AI programs find their way into unexpected areas of our lives? FutureShift is a prototyping workshop for exploring alternative future scenarios and values in emerging technologies.
For my Master’s capstone project in Human Centered Design & Engineering, I created a speculative design workshop to explore the implications of machine learning algorithms in the near future. Participants explored Black Mirror-esque scenarios and created boundary-pushing prototypes that might exist in those scenarios.
Skip to our Process Book where we documented our workshop and learnings.
- Paul Roberts: Design Strategist & Project Manager
- Ariel Duncan: UX Researcher
Role: Visual Design Lead
- Responsible for creative direction and for writing & editing the workshop and process book.
- Led literature review and partnered with Research Lead on expert interviews.
- Collaborated on all other aspects of this project.
Timeline: 10 weeks
Is frictionless, seamless design good for society?
From fake news on Facebook to predictive policing software, we realized we were blind to how machine learning algorithms were dictating many aspects of our lives. As algorithms increasingly make decisions for us, they gain agency without us noticing it.
We wanted to shift the agency back towards people, and let them decide how much power AI programs should have over their day-to-day decisions.
To understand how we want to shape and be shaped by AI programs, we need to create space to challenge presumed virtues and dominant values inscribed in them.
Speculative design gives us an opportunity to challenge our own biases (our “givens”) and critically reflect on what we want and don’t want to see in our future as a society.
We created a workshop called FutureShift
We led participants through an interactive, hands-on workshop to explore the social and ethical implications of technological predictions. They made “speculative objects” that expressed one of many possible desired relationships with AI-enabled products and services.
We staged a “yard sale of the future” where we sold these fictitious products alongside real items. (Yes, people actually bought stuff and we made a tiny amount of money.) We started conversations with visitors about the fictitious products and tried to understand how they thought about the unexpected ways that technology might personalize and predict in the near future.
- Meet My Algorithms Icons
- Algorithmic Self Drawing Sheets
- Future Signals
- FutureShift Card Deck
- Yard Sale Price Cards
- Workshop Evaluation Form
- Workshop Protocol
Some reactions we gathered
1) Workshop
While we didn’t expect to cultivate agency in 3 hours, we saw hints of agency emerge during our discussions and in the post-workshop evaluation.
2) Yard Sale
We had a total of 19 visitors and engaged in conversations about workshop participants’ speculative objects with 10 visitors. We filmed some of the visitors’ responses.
Since conducting this workshop, we discovered other companies have used a similar process.
Understanding the issue at scale
We conducted 6 interviews with experts in the field and read over 50 pieces of academic literature and industry reports.
Highlights from discovery research
1) People felt conflicted about how machine learning algorithms “see” them: while they didn’t trust the intentions of some companies (e.g., Facebook, Google), they became frustrated when digital services didn’t “know” them.
2) People see the data and content that are served up but not how they’re being pulled together or weighted. Some people tried to manipulate algorithms through guesswork.
3) It’s hard to challenge a decision made by an algorithm, much less hold the system accountable, when we only see the output. Algorithms encode human biases in the form of rules, categories, and criteria.
The ability to change how algorithms work rests with developers, designers, and researchers. Without challenging the personal assumptions that go into designing algorithms, we recreate current inequalities in algorithmic systems.
Reframing the challenge
We have an opportunity to shift agency back to “non-experts” through a process that highlights people, perspectives, and values that might have been overlooked during product development.
Empowering non-experts to voice their concerns and tensions about how machine learning algorithms are currently designed.
How might we empower people to discuss, critique, and reflect on ways in which AI programs might find their way into unexpected areas of our lives?
By working backwards from preferred visions, we can enable tangible actions today while including alternative values and considering the ethical implications of rules, categories, and criteria we embed into algorithms.
Arriving at our concept
Push into the “ridiculous” to challenge what we think is a “desirable future.”
How can we empower everyday people to touch, see, smell, and imagine “the future” today?
We want to…
- Empower people to add their own personal critiques and desires into how the future is currently being imagined.
- Make algorithms and their impact more visible and tangible.
- Include more diverse voices when we discuss the future implications of algorithms.
We decided to modify a participatory workshop framework from the Extrapolation Factory. Participants made speculative, future-facing objects and then placed them in everyday contexts to engage the public in spontaneous discussions about possible futures.
Iterating the workshop concept
1) From present solutions to future implications
We curated a list of future signals from research firms like Gartner and Intel to provoke discussions and probe for people’s desired relationships with AI programs.
After choosing a future signal, participants wrote a narrative to articulate their critiques and desires related to the prediction. I created a deck of cards with provocative questions to help participants articulate their thoughts.
2) From barely noticeable to visceral
Participants made speculative objects to playfully materialize their hopes and concerns about AI programs. By using materials that people can see, feel, and smell, we were able to talk about other possible versions of the future in a more concrete way.
One challenge was overcoming the points of view of professional designers and engineers already encoded into finished products. So we broke finished products apart to make them more modular and “playable.” We also carefully selected items that loosely communicated how they should be used.
An appropriate context encourages people to consider near-future products side by side with everyday and “outdated” products. This sort of comparison brought out visceral, emotional connections to the potential scenarios embodied in future products, and we used those reactions to generate discussions with the public.
We tested our assumptions around how “familiar” a context should be to ensure it grounded both the public and workshop participants.
We landed on the yard sale of the future concept because it was accessible and familiar to people and we could set it up within our time constraint.
3) From deductive thinking to inductive thinking
Future predictions are often dictated by research firms and academic think tanks. To make them more relatable to everyday life, we tested two warm-up activities: Meet My Algorithms and the Algorithmic Self Drawing Exercise.
We sparked spontaneous 1-on-1 conversations with participants to help them articulate their hopes and concerns about a future prediction.
We also worked 1-on-1 with participants during the prototyping activity to help them “think with” the materials.
In the post-workshop survey, participants said that our facilitation made the workshop more enjoyable and helped them materially express the narrative they wrote.