Techno-utopian thinkers hold the faith that technology and its capabilities will enable a perfect society and future. Digital conversational spaces were built on this ideology. The ability of these technologies to connect us and help us communicate was never imagined to become a tool for catalyzing hate speech and harassment. The growing threat of online hate expresses itself in memes and other user-generated content, ranging from threats of violence, organized campaigns, and personal attacks on women and minorities to coordinated mobs. The design architecture of social media platforms and their recommendation algorithms now makes it easier to promote this kind of extreme, polarizing content and antagonistic behavior.
Pew Research Center's report on online harassment revealed that 41% of Americans had experienced online harassment. Users also said that platforms were responsible for addressing their safety. Over the years, pressure has mounted on social media companies to address this issue.
Two main approaches have been taken to combat this issue. First, companies expanded their content moderation teams to address users' trust and safety concerns. Having to meet targets while reviewing high volumes of hate speech and graphic content takes a toll on the mental health of the people doing this work. Second, companies such as Facebook, Twitter, and TikTok invested in developing and improving automated systems to detect problematic content. Relying on a technical understanding of content will always prove inadequate, because automated systems are being asked to understand human culture: racial histories, geographical contexts, gender relations, power dynamics, and so on. This problem will require more than a technical fix.
It is a common belief that user safety online is determined by an organization's technical capabilities. Contrary to this belief, these social and technological infrastructures are bound by affordances created on the platforms through design and policy. Here, an affordance can both prompt users to learn about a platform's security, privacy, and anti-abuse policies and enable them to take action on those policies.
The architectures common to social media platforms appear to fall into two broad categories: the algorithm-driven architecture, which is an invisible and inaccessible model to users, and a second architecture whose affordances make reporting possible.
It should be noted that these affordances, created so that users can understand and take action on violations of popular social media platforms' policies, satisfy the requirements of the ten usability heuristics provided by Jakob Nielsen:
Visibility of system status:
The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
Match between system and the real world:
Follow real-world conventions, making information appear in a natural and logical order.
User control and freedom:
Users often choose system functions by mistake and will need to leave the unwanted state without having to go through an extended dialogue.
Consistency and standards:
Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
Error prevention:
Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
Recognition rather than recall:
Minimize the user's memory load by making objects, actions, and options visible.
Flexibility and efficiency of use:
Speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users.
Aesthetic and minimalist design:
Dialogues should not contain information which is irrelevant or rarely needed.
Help users recognize, diagnose, and recover from errors:
Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
Help and documentation:
Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation.
The reporting models described above for the most part pass the design conventions of user experience. However, the findings of the ADL's report on online harassment, The American Experience 2021, suggest otherwise: while these designed affordances are novel in some ways, utility alone does not address users' safety concerns. Technologies must provide more agency to their users.
The shortcomings of existing modalities point to the need for alternative ways of prioritizing user safety. How might design contribute to a safer, more inclusive environment?
Participatory design workshops can help us understand the narratives of communities that have historically been left out.