Julia Wilson, Principal Consultant

AI, Our Thinking Fri 8th December, 2023

2023 Trust & Safety Hackathon: Key Insights from Julia Wilson

Julia Wilson, a Principal Consultant here at Equal Experts, recently attended the Trust and Safety (T&S) Hackathon in Paris, an event designed to bring together professionals, engineers, policymakers, regulators, press, students, and other key audiences to devise potential solutions that protect users from illegal content. We caught up with Julia to find out more.

Hi Julia! First, can you tell me more about the Trust & Safety Hackathon?

It was great. Trust & Safety is a really complex area, and it’s also evolving at a rapid rate. People working in this space are usually trying to figure out how to address fairly similar challenges, but they’re often a little isolated from one another – so events like these are a great way to break down silos and for everyone to benefit from sharing experiences and ideas.

The hackathon was specifically scoped to address stopping the spread of illegal content, with a real emphasis on cooperation between organisations. This is a very timely topic, because of the increasing threats platforms are facing in this area, and the introduction of new legislation like the UK’s Online Safety Act and the EU’s Digital Services Act.

It was great to see so much collaboration from people with different perspectives on the topic, and the innovative ideas each team came up with.

What were the key challenges discussed on the day?

The topic of the hackathon was deliberately scoped to allow for a broad range of interpretations – with discussion on everything from the generation of illegal images to the challenges platform policy makers face in interpreting nuanced laws appropriately in different countries.

A number of teams focused specifically on tackling online scams. This is a huge and growing threat to consumers, with billions lost each year. Because the financial incentive is so high, the criminals who perpetrate these kinds of scams are persistent and sophisticated, and it can be very difficult for platforms to drive them away. It’s not just the big social media companies that are targeted – any platform with user-generated content is vulnerable to abuse from scammers – and smaller and mid-size organisations don’t always have the expertise and bandwidth to deal with it. A lot of the discussion was around how to help these smaller organisations benefit from shared insights and technologies.

What advice would you give to organisations facing these kinds of challenges?

Not to fight it alone! There is a large community of Trust & Safety professionals out there who have made a lot of information available online.

It can be extremely difficult to chart a successful strategy for dealing with these kinds of persistent threats – from knowing which specific technologies to choose, to tailoring product features so they are safe by design. It requires organisations to take a joined-up approach across many different disciplines and departments. So, connecting with the communities of people working in this space can really help in getting fresh insights and building on each other’s experiences of what works and what doesn’t.

How do you view AI’s involvement in these issues?

It’s something of a double-edged sword. On the one hand, AI and machine learning technologies are playing an increasingly essential role in keeping platforms safe in areas such as content moderation, detection of fake accounts, and payment fraud. For example, AI can adapt to emerging trends in language and content in a way that traditional automated techniques cannot, and at a scale that is not feasible for manual moderators.

But on the other hand, organisations are facing increasingly sophisticated threats, some of which are themselves powered by AI – for example, to scale highly personalised scams, spread misinformation, and generate and distribute abusive imagery. So it’s almost an arms race: major new threats are emerging because of AI, but it’s AI that we’re turning to in order to counter them.

What was your biggest takeaway from the hackathon?

Trust & Safety is a phenomenally complex and important area, which is facing new challenges all the time – so it’s incredibly valuable to have spaces in which to share experiences and ideas between diverse groups of people. I think everyone learned a lot, and hopefully there will be even more of this kind of collaboration in the future.