
Conversation

harlanenciso112 commented Mar 21, 2025

Hi @Roshanjossey,

I read the issue where you mentioned your interest in adding a restriction to control offensive or NSFW content, so I decided to help with that.

How I implemented it:
- Used a pre-trained model from Hugging Face specialized in detecting offensive language.
- Implemented a function that analyzes the input content and assigns a probability score to potentially inappropriate messages (see the sketch below).
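
For reference, here is a minimal sketch of what such a function could look like using the Hugging Face `transformers` pipeline API. The model ID (`unitary/toxic-bert`) and the threshold are illustrative assumptions, not necessarily what this PR ships:

```python
# Illustrative sketch of the scoring function described above; the model
# ID ("unitary/toxic-bert") and the 0.8 threshold are assumptions.
from transformers import pipeline

# Pre-trained offensive-language classifier from the Hugging Face Hub.
classifier = pipeline("text-classification", model="unitary/toxic-bert")

def toxicity_score(text: str) -> float:
    """Return the classifier's probability for its top label on `text`."""
    result = classifier(text)[0]  # e.g. {"label": "toxic", "score": 0.98}
    return result["score"]

def is_inappropriate(text: str, threshold: float = 0.8) -> bool:
    """Flag content whose toxicity probability exceeds the threshold."""
    return toxicity_score(text) >= threshold
```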

Fixes #27: "One additional constraint I'd like for this is to have checks for offensive or NSFW content."

xarical added a commit to xarical/code-contributions that referenced this pull request Jun 21, 2025
- Add toxicity-check.yml workflow with check_toxicity.py script for automatic content moderation from Roshanjossey#108 (resolves Roshanjossey#27)
- Refactor toxicity-check.yml and check_toxicity.py to use the GitHub CLI and use gemma-9b-it served by Groq (per Roshanjossey#27 (comment)) respectively
- Rename toxicity-check.yml and check_toxicity.py to auto-pr-merge.yml and check_pr.py respectively

Co-authored-by: harlanenciso112 <harsanenciso@gmail.com>
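
For readers following along, a hedged sketch of what the refactored script described in the commit above might look like. The function names, prompt, and exact Groq model ID are assumptions (the commit references "gemma-9b-it"); it assumes Groq's OpenAI-compatible endpoint, a GROQ_API_KEY environment variable, and the GitHub CLI (`gh`) on the workflow runner:

```python
# Illustrative sketch only, not the actual check_toxicity.py/check_pr.py.
import os
import subprocess
from openai import OpenAI

# Groq exposes an OpenAI-compatible API endpoint.
client = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key=os.environ["GROQ_API_KEY"],
)

def pr_body(pr_number: str) -> str:
    """Fetch the pull request body via the GitHub CLI."""
    out = subprocess.run(
        ["gh", "pr", "view", pr_number, "--json", "body", "-q", ".body"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout

def is_toxic(text: str) -> bool:
    """Ask the Groq-served Gemma model whether the content is offensive/NSFW."""
    response = client.chat.completions.create(
        model="gemma2-9b-it",  # exact Groq model ID is an assumption
        messages=[
            {"role": "system",
             "content": "Reply YES or NO: is the following text offensive or NSFW?"},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")
```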
xarical mentioned this pull request Jun 21, 2025