The conference aims to make trust and safety a hot topic in computer science
A woman with dyed hair and a TikTok-branded jacket chatted with a man dressed like an academic in the Alumni Center pavilion on Friday morning. Stanford’s first annual Trust and Safety Research Conference was an eclectic gathering.
Online trust and safety is an interdisciplinary field, and the two-day conference brought together experts in computer science, law and the social sciences to unify their work on online harm and mistrust – an unprecedented effort for the field, participants said.
On Thursday and Friday, attendees took in panels and research presentations on the first floor of the Alumni Center and networked outside in the courtyard. Popular presentation topics included improved tools for online moderation, the spread of misinformation, and how organizations and businesses can design and implement bespoke policies for online safety.
The conference was organized by the Stanford Internet Observatory and the Trust and Safety Foundation. Early bird tickets cost $100 for participants from universities and civil society, with registration fees rising to $500 for participants from industry.
Evelyn Douek, a content moderation expert and assistant professor of law, described the conference’s goal as connecting those working on online trust and safety in academia, industry and policy for the first time.
“Community development is really important,” Douek said. “Actually bringing people from many different disciplines together in one room, meeting and building these bridges.”
Introducing Thursday’s research presentations from the Journal of Online Trust and Safety, communications professor Jeff Hancock described how he co-founded the publication with other Stanford researchers in the field to “bridge that gap” between those who study online trust and safety in different disciplines. Along with the Stanford Internet Observatory (SIO), the journal’s researchers aim to understand and prevent potential harm online.
Alex Stamos, director of SIO and a cybersecurity expert, added in an interview: “One of our goals at SIO is to make [online] trust and safety a legitimate academic subject.”
Over the past couple of years, the threat of online harm and public mistrust has become hard to ignore. Several mass shootings have been preceded by hateful screeds posted on the 8chan online forum. Online misinformation has been linked to COVID vaccine hesitancy, and conspiracy theories fueled the organization of last year’s Capitol insurrection on forums and social media sites.
“Security was not seen by CS academics as a real field,” Stamos said. “But these days, security is seen as one of the hottest areas of computer science. We need to have the same kind of transition in trust and safety, but we don’t have fifteen years.”
Panelists emphasized that a single framework for online safety simply cannot exist; the internet is too big, and it is managed and used by too many people.
It would be impossible to create a single governing force to regulate online content and behavior, Del Harvey, vice president of trust and safety at Twitter, said during a panel.
“I keep hearing this: ‘What we have to do is make sure that it’s not the companies that make the decisions, but rather this benevolent entity that we are creating, which will have all the information, informed by all things that are good and just in the world, and will [enforce online safety],’” Harvey said. However, Harvey added: “We are far from the utopian world where that can exist.”
For panelist Mike Masnick, a blogger and tech policy expert, the recent deplatforming of the hate forum Kiwi Farms by infrastructure provider Cloudflare demonstrated how important decisions about online safety are often left in the hands of a few private companies.
“The reality was that the situation was up to [Cloudflare],” Masnick said. “And a decision to do nothing meant people were going to get hurt.”
Some participants said that while there may not be a single system that can prevent online harm, they expressed hope that actors across the internet ecosystem can take steps to prevent harm and maintain public trust.
“The thing is, there is no perfect decision,” Douek said. “Every decision will always result in harm. You have to trust that you have thought about these decisions.”