Curious about the gaps in algorithmic personalization, I talked to Apurva Shah, my professor at CCA, who is also the CTO and cofounder of Duality, a startup that develops a QA and prediction platform for autonomous robotics. I was inspired by his thoughts on the different relationships between customers and providers of algorithmic personalization. Below are two types and their characteristics.
- The first type: personalization the customer understands and agrees to
  - May be initiated by the customer or the program, but the customer is well informed and has consented
  - An established, trusted agreement on data collection
  - Clear explanation of intentions and process from the program
- The second type: personalization imposed on the customer
  - The program makes decisions for the customer
  - Unsolicited advice
  - Pretending to be the “magic machine”
I started to reflect on the moments when I feel overwhelmed, especially overloaded with information. Did I ask for it? If not, why is it there? These questions struck me. Their answers are important for differentiating which digital experiences take advantage of people’s lack of consent.
In my short exploratory interviews with friends, YouTube came up as a well-understood example of a platform that recommends deeply personalized content. It uses Google’s cutting-edge deep learning technology. This effectively designed algorithm boosts engagement by exploiting human desires. YouTube has been condemned for pushing inappropriate videos to children and accelerating political radicalization. These consequences are partly the result of people gaming YouTube’s algorithm.
Coming back to the project, what can be done here? After letting my thoughts flow onto sticky notes, I wrote down the molecules (people/problem/solution statements). Wow, so fast! Who knew?
It seems easier to decide which problem I want to tackle when the project is only two weeks long. I’m more comfortable rolling with assumptions and experimenting, because there is less commitment to the idea and thus less fear of failure.
The personal challenge is to find something worth making for the rest of the week. I can’t change YouTube’s algorithm, but I can find more impactful ways to address the problem. Below are two facets of the problem/solution with different levels of fidelity.
- Freedom of control: Giving people options to turn elements of the YouTube website on or off. Feasible with a Chrome extension or userscript. Might be shippable.
- Transparency of algorithm: Displaying the most influential decision factor next to each recommended video. There might be ways to undercut this, but within a week it will stay at the prototype stage.
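The first facet could start as something like the minimal userscript sketch below: a helper that builds a CSS rule hiding chosen page elements, plus a function that injects it into the page. The selectors (`#related`, `#comments`, `.ytp-endscreen-content`) are assumptions about YouTube’s markup at the time of writing and would need to be re-verified, since YouTube’s frontend changes frequently.

```javascript
// Hypothetical "declutter YouTube" userscript sketch.
// Selector values below are assumptions and may be outdated.
const HIDEABLE = {
  sidebar: "#related",                 // recommended-videos sidebar (assumed)
  comments: "#comments",               // comment section (assumed)
  endScreen: ".ytp-endscreen-content", // end-of-video suggestions (assumed)
};

// Build one CSS rule that hides all of the requested elements.
function buildHideCss(keys) {
  const selectors = keys
    .filter((key) => key in HIDEABLE)
    .map((key) => HIDEABLE[key]);
  return selectors.length
    ? selectors.join(", ") + " { display: none !important; }"
    : "";
}

// In a userscript/content-script context, inject (or update) the rule.
function applyHide(keys) {
  let style = document.getElementById("yt-declutter-style");
  if (!style) {
    style = document.createElement("style");
    style.id = "yt-declutter-style";
    document.head.appendChild(style);
  }
  style.textContent = buildHideCss(keys);
}
```

A real extension would wire `applyHide` to a small options UI, but even this much shows why the feature feels shippable: it’s just toggling CSS.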
Wow, I just gave myself so much work.