Secure by design: UX case study on online extremism and harassment

Techno-utopian thinkers held the faith that technology and its abilities would enable a perfect society and future. Digital conversational spaces were built on this ideology. The technologies designed to connect us and help us communicate were never imagined as tools for catalyzing hate speech and harassment. Yet the growing threat of online hate expresses itself in memes and other user-generated content, ranging from direct threats of violence, organized campaigns, and personal attacks on women and minorities to coordinated mobs. The design architecture of social media platforms, and the recommendation algorithms originally built to drive revenue, now make it easier than ever to promote this kind of extreme, polarizing content and antagonistic behavior.

Pew Research's report on online harassment revealed that 41% of Americans had experienced online harassment. Respondents also held the platforms responsible for addressing their safety, and felt they were falling short. Over the years, this has put mounting pressure on social media companies to address the issue.

Two main approaches have been taken to combat this issue. First, companies expanded their content moderation teams to address users' trust and safety issues; having to meet targets while reviewing high volumes of hate speech and graphic content takes a toll on the mental health of the people doing this work. Second, companies invested in developing and improving automated systems to detect problematic content. Relying on a purely technical understanding of content will always prove inadequate, because these automated systems are being asked to understand human culture: racial histories, geographical contexts, gender relations, power dynamics, and so on. This problem will require more than a technical fix.
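To make that inadequacy concrete, here is a minimal sketch, with hypothetical placeholder terms and not any platform's real system, of keyword-based flagging, the simplest form of a purely technical understanding of content:

```python
# A minimal sketch (not any platform's real system) of keyword-based
# content flagging. "slur_a" and "slur_b" are hypothetical placeholders.
BLOCKLIST = {"slur_a", "slur_b"}

def flag_post(text: str) -> bool:
    """Flag a post if it contains any blocklisted term."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not BLOCKLIST.isdisjoint(words)

# The classifier has no notion of speaker, target, history, or intent:
# it treats a victim quoting their harasser exactly like the original
# attack, and it misses coded language entirely.
print(flag_post("They called me a slur_a in my mentions"))  # True: the victim's post is flagged
print(flag_post("unalive yourself"))                        # False: coded harassment passes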


Design Affordances

It is a common belief that user safety online is regulated by the technical capabilities of an organization. In reality, these social and technological infrastructures are bound by affordances created on the platforms through design and policy. Here, an affordance can both trigger learning about a platform's security, privacy, and anti-abuse policies and enable action on them.

The architectures common to social media platforms fall into two broad categories:

1. The algorithm-driven architecture that the platform uses to mitigate threats online.
2. The user-driven architecture, in which platform users' input is essential to identifying and mitigating threats online.

The algorithm-driven architecture is an invisible and inaccessible model to users. The second architecture, however, has affordances that make reporting possible.
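As an illustration only, the sketch below (hypothetical names throughout, not any platform's actual pipeline) contrasts the two architectures: an algorithmic pass whose model and threshold are invisible to users, and a user-report affordance that can carry the context the model lacks.

```python
# A minimal sketch of the two architectures feeding one moderation queue.
from dataclasses import dataclass

@dataclass
class ModerationCase:
    post_id: str
    source: str                 # "algorithm" or "user_report"
    reason: str
    reporter_context: str = ""  # only user reports can carry context

queue: list[ModerationCase] = []

def algorithmic_pass(post_id: str, score: float, threshold: float = 0.9) -> None:
    # Users never see this path; the model and its threshold are
    # inaccessible to them.
    if score >= threshold:
        queue.append(ModerationCase(post_id, "algorithm", f"model score {score:.2f}"))

def user_report(post_id: str, category: str, context: str) -> None:
    # The visible affordance: a user supplies the category and the
    # context the algorithm lacks.
    queue.append(ModerationCase(post_id, "user_report", category, context))

algorithmic_pass("post-1", score=0.95)
user_report("post-2", "harassment", "coded regional slur aimed at my handle")
print(queue)
```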

Affordance Evaluation

It should be noted that the affordances popular social media platforms create for understanding policy violations and taking action on them satisfy all the requirements of an affordance in the framework provided by Michael Hammond:

1. An affordance offers the perception of possibilities of, and constraints on, action.
2. These perceived possibilities and constraints are provided by the properties of the instrument and shaped by past experiences and context.
3. They eventually become habitual.
4. They arise from the symbolic properties of an instrument.
5. They offer actual opportunities for, and constraints on, action to their target users.
6. They relate to other features of the environment, including incentives and desirable goals.
7. They are often sequential in time and nested.

Heuristic Evaluation

Each heuristic below, from Nielsen's ten usability heuristics, was evaluated against the reporting flows of four platforms: Twitter, Facebook, Instagram, and TikTok.

Visibility of system status:
The system should always keep users informed about what is going on, through appropriate feedback within a reasonable time.


Match between system and the real world:
Follow real-world conventions, making information appear in a natural and logical order.


User control and freedom:
Users often choose system functions by mistake and need a clearly marked exit to leave the unwanted state without going through an extended dialogue.


Consistency and standards:
Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.


Error prevention:
Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.


Recognition rather than recall:
Minimize the user's memory load by making objects, actions, and options visible.


Flexibility and efficiency of use:
Speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users.


Aesthetic and minimalist design:
Dialogues should not contain information which is irrelevant or rarely needed.


Help users recognize, diagnose, and recover from errors:
Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.


Help and documentation:
Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation.


Do the users feel safe?

The reporting models in the above section for the most part pass the design conventions of user experience. However, the ADL's Online Hate and Harassment: The American Experience 2021 report found, among other insights, that:

1. 41% of respondents who experienced a physical threat stated that the platform took no action on the threatening post.
2. LGBTQ+ respondents reported higher rates of overall harassment than all other demographics.

While these designed affordances are novel in some ways, utility alone does not address users' safety concerns. Technologies must provide more agency to their users.

The existing modalities point to the need for alternative ways of prioritizing user safety. How might design contribute to a safer, more inclusive environment?

Redesign for user security

A series of recommendations and considerations that provide alternate ways of designing for user safety and well-being on communication platforms.

Control

Reporting
- Choice: Give users the control to remain anonymous while reporting.
- Easy Access: Give users the option to fill out a custom complaint when the listed categories don't fit, when severity is high, or when contextual understanding is needed to evaluate the problem.

Reporting Form
- Enable dialogue in moderation: Give users the agency to provide accurate context, so that codified, regional, linguistic, and other malicious intent can be evaluated.
- Manage urgency: User safety issues vary in severity. Incorporating severity into evaluation helps mitigate malicious content efficiently.

Notifications
- Error Recovery: Let users withdraw a reported problem once submitted, in case of an error.
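Taken together, these controls suggest a report object along the lines of the following sketch; the field names and severity routing are assumptions for illustration, not drawn from any real platform:

```python
# A minimal sketch of a report object carrying the controls above:
# anonymity, a custom complaint, severity triage, and withdrawal.
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3  # e.g. physical threats, routed ahead of the queue

@dataclass
class AbuseReport:
    post_id: str
    category: str                      # one of the platform's listed categories
    anonymous: bool = True             # Choice: reporter identity hidden by default
    custom_complaint: str = ""         # Easy Access: free text when categories don't fit
    context: str = ""                  # Enable dialogue: codified/regional/linguistic intent
    severity: Severity = Severity.LOW  # Manage urgency
    withdrawn: bool = False

    def withdraw(self) -> None:
        # Error Recovery: let the reporter retract a mistaken report.
        self.withdrawn = True

report = AbuseReport("post-42", "hate speech",
                     custom_complaint="the phrase is a local slur",
                     severity=Severity.HIGH)
report.withdraw()  # undo a report filed in error
```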

Transparency

Notifications
- Algorithmic Transparency: Inform users which system was used to evaluate a reported issue.
- Process Transparency I: A large number of users feel that platforms today do not act on the harmful content they are subjected to. Being transparent about the process, and allowing feedback on outcomes, can change that.
- Process Transparency II: Posts sometimes get taken down on the basis of falsified abuse reports. An appeal path matters primarily for journalists, people of color, LGBTQ+ users, and other at-risk groups whose posts get removed without context.
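As a sketch only, with assumed fields and placeholder URLs, a notification carrying this transparency information might look like the following:

```python
# A minimal sketch of a report-outcome notification: it names the system
# that evaluated the report, invites feedback, and keeps an appeal path
# open for posts removed on the basis of falsified reports.
from dataclasses import dataclass

@dataclass
class ReportOutcomeNotice:
    report_id: str
    evaluated_by: str   # "automated classifier" or "human moderator"
    decision: str       # e.g. "content removed", "no violation found"
    feedback_url: str   # Process Transparency I: let users respond
    appeal_url: str     # Process Transparency II: contest wrongful takedowns

notice = ReportOutcomeNotice(
    report_id="r-1017",
    evaluated_by="automated classifier",
    decision="no violation found",
    feedback_url="https://example.com/feedback/r-1017",  # placeholder URL
    appeal_url="https://example.com/appeal/r-1017",      # placeholder URL
)
print(f"Your report {notice.report_id} was evaluated by "
      f"{notice.evaluated_by}: {notice.decision}.")
```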

Technological Considerations

Data collection must be inclusive, taking into account the systemic biases against women, people of color, the LGBTQ+ community, and other minorities.

[Participatory workshops can help in understanding the narratives of communities that have been historically left out.]

- Assess how secure the collected data is, including inferences made by the machine at output time.
- Store sensitive data securely to maintain the privacy of users.
- Collect behavioral data about the performance of an algorithm under specific situations.
- Use the algorithm's behavioral data to assess whether it negatively impacts the interests of specific groups of people (a sketch of such an audit follows this list).
- Assess how secure the algorithm is, and who within the organization is allowed to use it.
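The group-level behavioral assessment above could be sketched as follows; the records here are fabricated toy labels for illustration only, not real data:

```python
# A minimal sketch of a per-group audit: compare the classifier's
# false-positive rate across groups to see whether it disproportionately
# flags one community's benign posts.
from collections import defaultdict

# (group, model_flagged, actually_violating) — toy records, not real data
records = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]

fp = defaultdict(int)      # benign posts wrongly flagged, per group
benign = defaultdict(int)  # all benign posts, per group

for group, flagged, violating in records:
    if not violating:
        benign[group] += 1
        if flagged:
            fp[group] += 1

for group in benign:
    print(f"{group}: false-positive rate {fp[group] / benign[group]:.0%}")

# A large gap between groups signals that the algorithm harms one
# community's ability to speak, even if overall accuracy looks acceptable.
```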

Other Considerations

- Ensure the design components are accessible to all user groups.
- Design platforms with fallbacks for institutional and governmental malice (e.g., censorship, internet shutdowns).
- Design by understanding the problems of the different user groups experiencing online harassment and extremism (e.g., women of color, LGBTQIA+ users).
- The dynamic nature of the spread of malicious content demands that technology, policy, and design work symbiotically at all times.
- Design context-aware systems.

Some of the research material collected for this project can be found here.
Created for Major Studio I at Parsons School of Design.