Commit d7fc9bb

Fixing toxicity page

1 parent f22f52c, commit d7fc9bb
16 files changed: +147 -27 lines changed

_data/authors.yml

Lines changed: 11 additions & 0 deletions

@@ -694,3 +694,14 @@ Hao Yu:
     - label: LinkedIn
       url: https://www.linkedin.com/in/haoy/

+
+Ubisoft:
+  name: Ubisoft
+  avatar: /assets/images/bio/ubisoft.svg
+  auto_update_publications: false
+  current_role:
+    type: Collaborator
+    title: Professor
+    affiliation:
+  research_directions:
+    - online-toxicity
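
The research_directions list is what ties this authors.yml entry to the toxicity page: that page renders its team with {% include team-gallery.html authors=site.data.authors research_direction="online-toxicity" render_current_role=true %} (see the _research_directions/online-toxicity.md diff below). team-gallery.html itself is not touched by this commit, so the following is only an assumed sketch of how such an entry can be matched against a research direction:

{% comment %} Assumed sketch only; the real team-gallery.html is not shown in this diff. {% endcomment %}
{% for author in include.authors %}
  {% assign entry = author[1] %}  {% comment %} site.data.authors is a map, so each item is [key, value] {% endcomment %}
  {% if entry.research_directions contains include.research_direction %}
    {{ entry.name }} ({{ entry.current_role.type }})
  {% endif %}
{% endfor %}
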
Lines changed: 26 additions & 0 deletions

@@ -0,0 +1,26 @@
+<div class="{{ include.type | default: 'list' }}__item">
+  <article class="highlighted-publication-card" itemscope itemtype="https://schema.org/CreativeWork">
+
+    <h2 class="highlighted-publication-title" itemprop="headline">
+      <a href="{{ post.link }}">{{ post.title }}</a>
+    </h2>
+
+    {% if post.excerpt %}
+      <div class="highlighted-publication-excerpt" itemprop="description">
+        {{ post.excerpt | markdownify }}
+      </div>
+    {% endif %}
+
+    <div class="highlighted-publication-venue">
+      {% if post.venue_full %}
+        {{ post.venue_full }}
+      {% elsif post.venue %}
+        {{ post.venue }}
+      {% endif %}
+      ({{ post.date | date: "%Y" }})
+    </div>
+
+    {% include display-publication-links.html pub=post%}
+  </article>
+</div>
+<div class="highlighted-publications-spacing"></div>

_includes/display-publications.html

Lines changed: 10 additions & 9 deletions

@@ -1,14 +1,15 @@
 <article class="publication-card" itemscope itemtype="https://schema.org/CreativeWork">

   {% if post.thumbnail %}
-  {% assign currentYear = 'now' | date: "%Y" %}
-  {% assign postYear = post.date | date: "%Y" %}
-  {% if postYear == currentYear %}
-  {% assign loading = "eager" %}
-  {% else %}
-  {% assign loading = "lazy" %}
-  {% endif %}
-  <img src="{{ post.thumbnail | relative_url }}" class="publication-thumbnail" loading="{{ loading }}" alt="A thumbnail that highlights the contribution of the paper">
+    {% assign currentYear = 'now' | date: "%Y" %}
+    {% assign postYear = post.date | date: "%Y" %}
+    {% if postYear == currentYear %}
+      {% assign loading = "eager" %}
+    {% else %}
+      {% assign loading = "lazy" %}
+    {% endif %}
+    <img src="{{ post.thumbnail | relative_url }}" class="publication-thumbnail" loading="{{ loading }}"
+      alt="A thumbnail that highlights the contribution of the paper">
   {% endif %}

   <h2 class="publication-title" itemprop="headline">

@@ -47,4 +48,4 @@ <h2 class="publication-title" itemprop="headline">
   {% endif %}

   {% include display-publication-links.html pub=post%}
-</article>
+</article>
Lines changed: 15 additions & 0 deletions

@@ -0,0 +1,15 @@
+<!-- Overrides and modified from: https://github.com/mmistakes/minimal-mistakes/blob/master/_includes/posts-category.html -->
+
+<div>
+  {%- for post in site.categories[include.taxonomy] -%}
+    {%- unless post.hidden -%}
+      {% if include.author %}
+        {% if post.author == include.author %}
+          {% include display-highlighted-publications.html %}
+        {%- endif %}
+      {%- else %}
+        {% include display-highlighted-publications.html %}
+      {%- endif %}
+    {%- endunless -%}
+  {%- endfor -%}
+</div>
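
For reference, a minimal usage sketch of this taxonomy loop, mirroring the call this commit adds to _research_directions/online-toxicity.md further down; the optional author filter in the second call is an assumption based on the include.author check above:

{% comment %} Render every non-hidden post in the "online-toxicity" category {% endcomment %}
{% include posts-highlighted-publications.html taxonomy="online-toxicity" %}

{% comment %} Hypothetical variant: additionally restrict the list to a single author {% endcomment %}
{% include posts-highlighted-publications.html taxonomy="online-toxicity" author="Zachary Yang" %}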

_posts/papers/2023-01-01-10.18653-v1-2023.emnlp-industry.26.md

Lines changed: 4 additions & 1 deletion

@@ -7,7 +7,10 @@ tags:
   - Conference on Empirical Methods in Natural Language Processing
 link: https://doi.org/10.18653/v1/2023.emnlp-industry.26
 author: Zachary Yang
-categories: Publications
+categories:
+  - Publications
+  - online-toxicity
+excerpt: "Identity biases arise commonly from annotated datasets, can be propagated in language models and can cause further harm to marginal groups. Existing bias benchmarking datasets are mainly focused on gender or racial biases and are made to pinpoint which class the model is biased towards. They also are not designed for the gaming industry, a concern for models built for toxicity detection in videogames’ chat."

 ---

_posts/papers/2023-10-20-2310.18330.md

Lines changed: 4 additions & 1 deletion

@@ -6,7 +6,10 @@ tags:
   - Conference on Empirical Methods in Natural Language Processing
 link: https://arxiv.org/abs/2310.18330
 author: Zachary Yang
-categories: Publications
+categories:
+  - Publications
+  - online-toxicity
+excerpt: "A simple and scalable model that reliably detects toxic content in real-time for a line of chat by including chat history and metadata. ToxBuster consistently outperforms conventional toxicity models across popular multiplayer games, including Rainbow Six Siege, For Honor, and DOTA 2. We conduct an ablation study to assess the importance of each model component and explore ToxBuster’s transferability across the datasets. Furthermore, we showcase ToxBuster’s efficacy in post-game moderation, successfully flagging 82.1% of chat-reported players at a precision level of 90.0%. Additionally, we show how an additional 6% of unreported toxic players can be proactively moderated."

 ---

_posts/papers/2024-06-28-10.1145-3675805.md

Lines changed: 4 additions & 1 deletion

@@ -6,7 +6,10 @@ tags:
   - Games Res. Pract.
 link: https://doi.org/10.1145/3675805
 author: Zachary Yang
-categories: Publications
+categories:
+  - Publications
+  - online-toxicity
+excerpt: "While game companies are addressing the call to reduce toxicity and promote player health, the need to understand toxicity trends across time is important. With a reliable toxicity detection model (average precision of 0.95), we apply our model to eight months’ worth of in-game chat data, offering visual insights into toxicity trends for Rainbow Six Siege and For Honor, two games developed by Ubisoft. Ultimately, this study serves as a foundation for future research in creating more inclusive and enjoyable online gaming experiences."

 ---
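
Adding online-toxicity under categories in these three posts is what surfaces them on the research-direction page: Jekyll groups them into site.categories["online-toxicity"], which the new taxonomy loop above iterates. A hypothetical front-matter sketch for tagging a future paper post the same way (title and link are placeholders):

---
title: "Example toxicity paper"   # placeholder
link: https://example.org/paper   # placeholder
author: Zachary Yang
categories:
  - Publications
  - online-toxicity   # places the post in site.categories["online-toxicity"]
---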

_research_directions/online-crime.md

Lines changed: 0 additions & 2 deletions

@@ -13,8 +13,6 @@ one-liner: "Developing responsible AI solutions to analyze suspicious patterns i

 ---

-# Why should we care?
-
 Sex trafficking impacts 4.8 million people globally and is a $99 billion USD industry that often operates undetected, including in Canada. Technology has become a critical tool for traffickers, enabling recruitment and exploitation while making these crimes harder to trace. However, innovative analytics can uncover hidden patterns, identify victims, and provide much-needed support to those impacted. Our interdisciplinary team of AI and criminology experts is dedicated to developing context-aware, human-centered solutions to tackle this issue responsibly. Through advanced techniques like data mining and anomaly detection, we are working to bring a data-driven approach to the fight against human trafficking in Canada.

 {% include sub_research-directions.html category="online-crime" %}

_research_directions/online-toxicity.md

Lines changed: 6 additions & 8 deletions

@@ -9,14 +9,13 @@ header:
 excerpt: "Our team collaborates with game companies like Ubisoft to develop responsible, real-time, human-in-the-loop AI systems for chat toxicity detection, creating safer online gaming communities."
 logo_image_path: /assets/images/home/TOG_logo-light.png
 logo_dark_image_path: /assets/images/home/TOG_logo-dark.png
-one-liner: "Building real-time, human-in-the-loop systems to foster healthier gaming communities, partnering with industry leaders to deploy scalable solutions that adapt to emerging challenges."
+one-liner: "Building real-time, human-in-the-loop systems to foster healthier gaming communities, partnering with industry leaders."

 projects:
   - title: "Game on, Hate off"
-    alt: "Game on, Hate off"
-    image_path: /assets/images/research_directions/online-toxicity/game-on-hate-off.jpg
     excerpt: "While game companies are addressing the call to reduce toxicity and promote player health, the need to understand toxicity trends across time is important. With a reliable toxicity detection model (average precision of 0.95), we apply our model to eight months’ worth of in-game chat data, offering visual insights into toxicity trends for Rainbow Six Siege and For Honor, two games developed by Ubisoft. Ultimately, this study serves as a foundation for future research in creating more inclusive and enjoyable online gaming experiences."
     url: https://dl.acm.org/doi/10.1145/3675805
+
   - title: "ToxBuster"
     alt: "ToxBuster"
     image_path: /assets/images/research_directions/online-toxicity/toxbuster.jpg

@@ -39,21 +38,20 @@ logos:
     name: Mitacs
 ---

+Toxic and harmful speech online is more than just unpleasant; it has widespread social and economic repercussions, particularly as it permeates social media and gaming platforms. In gaming, where toxicity affects 75% of young players, this behavior harms mental health, alienates communities, and even reduces player engagement and spending, which impacts the industry’s bottom line. Beyond financial losses, unchecked toxicity risks fostering real-world violence and inciting harmful social behaviors. Despite advances in detection methods, including AI-driven moderation, the ever-evolving nature of toxic language poses significant challenges to companies and communities alike. Addressing this problem isn’t just about improving user experience—it’s essential for maintaining safe, inclusive, and healthy online spaces.

-# Why should we care?"

-Toxic and harmful speech online is more than just unpleasant; it has widespread social and economic repercussions, particularly as it permeates social media and gaming platforms. In gaming, where toxicity affects 75% of young players, this behavior harms mental health, alienates communities, and even reduces player engagement and spending, which impacts the industry’s bottom line. Beyond financial losses, unchecked toxicity risks fostering real-world violence and inciting harmful social behaviors. Despite advances in detection methods, including AI-driven moderation, the ever-evolving nature of toxic language poses significant challenges to companies and communities alike. Addressing this problem isn’t just about improving user experience—it’s essential for maintaining safe, inclusive, and healthy online spaces.
+# Selected Publications

+<!-- {% include feature_row id="projects"%} -->

-# Higlighted Publications
+{% include posts-highlighted-publications.html taxonomy="online-toxicity" %}

-{% include feature_row id="projects"%}

 # Core Team Members

 {% include team-gallery.html authors=site.data.authors research_direction="online-toxicity" render_current_role=true %}

-
 # Funding

 We acknowledge funding from Ubisoft, the Canadian Institute for Advanced Research (CIFAR AI Chair Program), Natural Sciences and Engineering Research Council of Canada (NSERC) Postgraduate Scholarship-Doctoral (PGS D) Award. Funding from Ubisoft is governed through the Mitacs Accelerate Program.

_research_directions/poli-sci.md

Lines changed: 2 additions & 2 deletions

@@ -13,10 +13,10 @@ one-liner: "Investigating how digital platforms and emerging AI technologies sha

 ---

-# Why should we care?
-
 The current digital age fundamentally changes how we as individuals and a society collect and distribute information. With these changes come powerful new approaches to influence the public agenda. However, the quality of shared information varies widely, and can have wide-reaching consequences. As the sophistication and accessibility of AI tools continue to expand, the authenticity of digital content has been increasingly called into question. Our team stands at the forefront of the challenge to safeguard digital space, as we assess the nature of content and how it proliferates online.

+# Topics
+
 {% include sub_research-directions.html category="poli-sci" %}

 # Core Team Members