March 25, 2022
Everyone agrees we need a safer and more trustworthy environment in which technology companies operate, but few agree on what that means or how to do it.
Most tech companies are opaque about the processes and technologies they use to address harms. While many large tech companies do have best practices and assessments for risk management and cyber security, we need more transparency as well as rules and processes to address the trust and safety concerns of users. We also need to determine how to remove or reduce the spread of harmful content without suppressing online innovation and expression.
At System, we’ve decided to speak openly about the risks we face in developing our public resource, and the types of processes we’ve put in place to mitigate them. Our mission at System is to relate everything, to help the world see and solve anything, as a system.
To achieve this mission, the company we are building is as important as our technology. For example, we’re incorporated as a Public Benefit Corporation because we think it is critical to be as concerned about our impact on society as we are about our financial sustainability. We are driven by a charter which outlines a strong set of values.
As we build System, we frequently sit down as a team to discuss any potential unintended consequences of new features we release on system.com. We rank the risks we identify in terms of severity and likelihood, and we devise and coordinate actions to mitigate those risks. These actions are centered around our values:
Here are the top risk-related questions we’ve identified for v1.0 beta and the actions we’ve taken to mitigate them to the best of our abilities and resources.
It is our responsibility to:
How we are mitigating this risk:
Reproducibility We use a reproducibility score to qualify the strength of a relationship. Each relationship on System is labeled for its potential to be reproduced, based on the completeness of the information and material provided by the author. Read more about our methodology here.
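As a rough illustration of how such a label could be derived, here is a minimal sketch. The field names and thresholds are assumptions for the example, not System's actual scoring methodology:

```python
from dataclasses import dataclass

@dataclass
class Contribution:
    """Hypothetical completeness signals a contributor might attach
    to a relationship (illustrative fields, not System's schema)."""
    has_dataset_link: bool
    has_code_link: bool
    has_methodology: bool
    is_peer_reviewed: bool

def reproducibility_label(c: Contribution) -> str:
    """Map the completeness of provided material to a coarse label."""
    score = sum([c.has_dataset_link, c.has_code_link,
                 c.has_methodology, c.is_peer_reviewed])
    if score >= 3:
        return "high"
    if score == 2:
        return "medium"
    return "low"

# A contribution with dataset, code, and methodology but no peer review:
print(reproducibility_label(Contribution(True, True, True, False)))  # high
```

The key design point is that the label reflects only the completeness of what the author provides, so contributors can raise it directly by supplying more material.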
External peer review Contributions from a scientific paper carry information about the paper, including the journal it was published in. Papers published in high-impact-factor journals go through rigorous peer review, and we prioritize this information. Though peer review is the most widely adopted standard of quality, it is not perfect by any means.
Risk level identified as low. Only a few trusted subject matter experts at System can contribute content today. For this first public release (v1.0-beta), the determination of which datasets, models, and papers statistics are retrieved from falls to members of our team and to users who are beta testing the tools we’ve built to contribute to System.
Eventually, a wider community will be able to contribute, and we will offer pipelines through which this content travels for verification and, when necessary, removal.
It is our responsibility to:
How we are mitigating this risk:
We are in the early stages of identifying and forming these guidelines, frameworks, and workflows.
Topic and Metric Naming We respect and are aware that language use varies across our communities. We are developing a set of guidelines to guide the choice of topic and metric labels. The guidelines will be based on known best practices and will be informed, vetted, and extended by our community. They will be regularly revisited to address new areas of concern and sensitive topics.
Topic and Metric Review Workflow We are developing a framework and workflow by which topics and metrics can be reviewed for publication. We intend to rank topic and metric assignments so that we can prioritize areas of critical concern.
Combating algorithmic bias We ask questions like “How was this data collected?” and “What assumptions are we making?” We actively try to keep a lens on inclusive data practices. We promote a transparent culture that establishes ethical guidelines and empowers the community to speak up if they see something problematic. Whenever possible, we encourage contributors to include a link to the data used in a model or study.
Risk level identified as low. For this first public release (v1.0-beta), the determination of which datasets, models, and papers statistics are retrieved from falls to members of our team and to users who are beta testing the tools we’ve built to contribute to System.
It is our responsibility to:
How we are mitigating this risk:
Bot and fraud detection Both our community of users and our software can flag fraudulent or bot activity and alert us immediately. Our team acts right away to address or block any fraudulent behavior.
Usage auditing System audits how users interact with system.com to help us improve the product. This data can also be useful to help detect behavior that might be suspicious.
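One common way to turn usage data into a suspicious-behavior signal is a sliding-window rate check. The sketch below is purely illustrative (the class name, threshold, and window are assumptions, not a description of System's detection pipeline):

```python
import time
from collections import defaultdict, deque

# Hypothetical threshold: flag any client exceeding this many
# requests within the rolling window.
MAX_REQUESTS_PER_WINDOW = 120
WINDOW_SECONDS = 60.0

class RateFlagger:
    """Flag clients whose request rate suggests bot-like activity."""

    def __init__(self, limit=MAX_REQUESTS_PER_WINDOW, window=WINDOW_SECONDS):
        self.limit = limit
        self.window = window
        self.events = defaultdict(deque)  # client_id -> recent timestamps

    def record(self, client_id, now=None):
        """Record one request; return True if the client should be flagged."""
        now = time.monotonic() if now is None else now
        q = self.events[client_id]
        q.append(now)
        # Drop timestamps that have fallen outside the rolling window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.limit
```

For example, `RateFlagger(limit=3).record(...)` returns `True` on the fourth request a client makes inside one window; a real system would combine a signal like this with human review before blocking anyone.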
Risk level identified as medium. We expose a limited set of APIs on v1.0-beta.
It is our responsibility to:
How we are mitigating this risk:
Avoiding addictive design At System, when introducing new features or services, we ask “who does this benefit?” and “how can we better safeguard a user’s health?” User manipulation and exploitation strategies are strictly in violation of our charter.
Impact mapping We use this planning technique to identify the human behavioral changes that must occur or not occur for a product to be successful. More about impact mapping here.
Risk level identified as low. We will produce tools and guardrails to help the community better protect against this risk.
It is our responsibility to:
How we are mitigating this risk:
Extensive measures to protect PII We treat the privacy and security of user data as our first priority; it is of the utmost importance to us. We will not release product features that compromise or exploit user data in any way. We embrace and promote all standards to protect PII, and we empower our team to proactively take all necessary precautions. More about our data security policies here.
Risk level identified as low. We minimally collect and do not share any PII with external services.
We will regularly share with you the short- and medium-term risks we see and the actions we’re taking, and we will update our risk framework with each new version of our product. However, we will not communicate an action when we believe its dissemination would likely increase the risk we seek to mitigate (for example, by revealing how to game one of our algorithms or processes).
If you are not already a member, we invite you to connect with us and the community on Slack. We are excited to hear all of your thoughts and feedback, particularly around any additional questions we should be posing, or other strategies to employ while on our mission.
Join our community and together we can help the world see the whole system.
Filed Under:
Release Notes