Eleven prototype apps focused on delivering a safer online experience for women were featured at a series of policy-making workshops organized by the World Wide Web Foundation. The apps, which address gender-based violence on the internet, were created in workshops of 20-25 participants, including 2-3 representatives from the product and policy teams of each technology company, the non-profit organization explained in a report published June 28.

The participants’ prototype apps were built around fictional platforms and based on a series of personas. These personas “aim to reflect the experiences of highly visible women on the Internet from around the world while realizing that no group of personas can fully grasp the complexities of those experiences, specific identities or contexts,” the report added.

According to the Web Foundation, the prototypes revolved around two main themes:

  • Curation: focused on giving women more control and choice over what they see online, when they see it, and how they see it.
  • Reporting: focused on improving the processes that women use to report abuse.

Here is a list of all 11 prototype apps as described by the Web Foundation:

Calm the Crowd

Type of abuse: Calm The Crowd is primarily designed for users who have been the target of an online mob.

How is it addressed?: When the prototype detects an increase in abuse, it prompts users to review their granular control settings, which let them control who can see, share, comment on or reply to their posts. It also gives users the option to create their own keyword filters for replies and comments they do not want to see.
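As a rough illustration of how a user-defined keyword filter like this might work, the sketch below hides any comment containing a muted keyword. The function and keyword list are hypothetical, not taken from the report:

```python
def is_hidden(comment: str, muted_keywords: list[str]) -> bool:
    """Return True if the comment contains any user-muted keyword (case-insensitive)."""
    text = comment.lower()
    return any(keyword.lower() in text for keyword in muted_keywords)

# Example: a user mutes certain words; matching comments are filtered out of their feed.
muted = ["insult", "slur"]
comments = ["Great post!", "What an insult this is"]
visible = [c for c in comments if not is_hidden(c, muted)]
# visible now contains only "Great post!"
```

A real platform would apply such a filter server-side across replies and comments, but the core matching logic could be this simple.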

Viral Notification

Type of abuse: It is designed for users who receive hurtful and offensive messages because their posts are going viral.

How is it addressed?: Primarily intended for video-sharing sites, the prototype lets users turn on a viral mode if their posts are getting too much attention from other accounts. The mode provides options to disable comments and downloads, and includes a toggle button for a “cool down” phase.

Com Mod

Type of abuse: The prototype is designed for users who experience a variety of hurtful and offensive messages in their comments, but are unable to report them all on their own.

How is it addressed?: With Com Mod, a person can delegate the responsibility for reviewing and flagging abusive posts to trusted contacts or communities. It also offers more detailed user control settings that can be customized for each post or for a period of time. This way, the user facing abuse is not overwhelmed by having to constantly ban and mute accounts.

Image Shield

Type of abuse: Image Shield is aimed at users who fear being identified in videos or images posted or shared by other accounts without their knowledge.

How is it addressed?: When Image Shield detects the user in a video or picture posted by an external account, it notifies them and offers three options: review the content, ask a friend to review it, or dismiss the notification. Users can also collect and archive all flagged content, with filters for date stamp, platform, name and flag.

Reporting 2.0

Type of abuse: The prototype app is intended for users who feel exhausted from reporting online abuse or who find that the options they are given when reporting it do not usually reflect their experience.

How is it addressed?: In Reporting 2.0, the user can hover over a specific category of abuse, such as hate speech, and a brief explanation of that category and the related community guidelines appears on screen. This helps the user report the content in line with the company’s guidelines. Users can also report the abuse in its original language.

Report Hub

Type of abuse: An app designed for users who are not familiar with how to properly label content or who do not receive feedback from the platform after reporting the content.

How is it addressed?: Via Report Hub, the user can track the status of all of their reports on a dashboard or timeline with key milestones such as “Report Created”, “Report In Progress”, “Review Complete” and “Decision Contested”.
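The milestone timeline Report Hub describes could be modeled as a simple ordered status enum. The milestone names come from the report’s examples; the class and progress function are hypothetical illustrations:

```python
from enum import Enum

class ReportStatus(Enum):
    """Ordered milestones for a report, as named in the prototype description."""
    CREATED = 1             # "Report Created"
    IN_PROGRESS = 2         # "Report In Progress"
    REVIEW_COMPLETE = 3     # "Review Complete"
    DECISION_CONTESTED = 4  # "Decision Contested"

def progress(status: ReportStatus) -> float:
    """Fraction of milestones reached, e.g. to drive a dashboard progress bar."""
    return status.value / len(ReportStatus)

print(progress(ReportStatus.REVIEW_COMPLETE))  # 0.75
```

A dashboard or timeline view would then simply render each report’s current `ReportStatus` alongside its timestamp.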


Type of abuse: The prototype helps users who believe that their reports of abusive content must be accompanied by local cultural and political context.

How is it addressed?: This reporting dashboard prompts users with questions specific to the category of abuse, so they can provide the context and information the platform needs to respond more effectively. Users can also indicate whether the report is being submitted in the same language as the abusive post. In addition, a toggle button gives users control over whether or not they see the content of the reported post.

One Click

Type of abuse: One Click is intended for users who anticipate being attacked in a social media pile-on and want to stay one step ahead.

How is it addressed?: With the prototype, users can set a time-limited security mode that can be toggled easily with one click. Security mode features include disabling comments or enabling a “delay time” for comments, enabling a profanity or keyword filter, marking keywords, and disabling tags.


GateWay

Type of abuse: GateWay addresses online abuse in the form of defamatory or gender- and identity-based attacks. It is aimed at users who are struggling to balance their safety with their commitment to the causes they champion.

How is it addressed?: Attacked users can send alerts to platforms via GateWay, and frequently attacked users can apply for protected status. The prototype also provides easy access to trusted, verified civil society organizations for help in dealing with online abuse.


iMatter

Type of abuse: This prototype is intended for users who receive a lot of hateful comments but don’t know how to respond.

How is it addressed?: iMatter takes a different approach, hosting a chat interface with chatbots that guide users through the reporting process. The chatbots also offer users community support and check-ins with a psychologist. After the abuse is reported, iMatter follows up to ask about the user’s health and well-being.

Health Check

Type of abuse: An app specially designed for users who feel at risk because the abuse they experience online targets their personal traits and perceived lack of competency.

How is it addressed?: Users can perform a risk assessment of the threats they face by answering a few short multiple-choice pop-up questions. When the assessment is complete, the results show the risk level of the user’s profile at that point in time, based on indicators set by the user.
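One simple way such a multiple-choice assessment could be scored is to sum weighted answers and map the total to a risk level using user-set thresholds. The report does not specify a scoring method, so this is purely an illustrative sketch:

```python
def risk_level(answers: dict[str, int], thresholds: tuple[int, int] = (3, 6)) -> str:
    """Sum the point values of multiple-choice answers and map the total
    to a risk level using user-configurable thresholds."""
    low_max, medium_max = thresholds
    score = sum(answers.values())
    if score <= low_max:
        return "low"
    if score <= medium_max:
        return "medium"
    return "high"

# Example: each indicator's answer carries 0-3 points chosen by the user.
answers = {"visibility": 2, "recent_threats": 3, "doxxing_risk": 2}
print(risk_level(answers))  # total 7 -> "high"
```

Letting the user tune `thresholds` mirrors the report’s point that the risk indicators are set by the user rather than by the platform.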
