Understanding DistrictZero's Proactive Support System
Explore how DistrictZero actively monitors student well-being using AI-driven tools to provide early alerts, enhance mentorship, and proactively support students. Learn about the process, the roles of mentors and administrators, and our commitment to fostering a supportive environment before situations escalate.
DistrictZero actively monitors student well-being, freeing mentors and advisors to focus on meaningful interactions rather than constant manual oversight.
Here's how our system operates:
Intelligent Detection:
Our advanced AI moderation system, powered by OpenAI's Omni Moderation Model, continuously analyzes student communications for indicators of harassment, threats, hate speech, illicit content, self-harm intent, or violent expressions. (A sketch of this detection call appears after this list.)
Student-Focused Notifications:
If a potential safety or well-being concern is detected, students receive a supportive notification indicating that authorized mentors or administrators may provide additional assistance.
Authorized Mentor and Owner Alerts:
Only designated mentors and organization administrators are notified via email and within the application dashboard, ensuring timely and focused responses without unnecessary disruptions.
Detailed Alert Management:
Authorized personnel can review comprehensive details, update alert statuses, and manage responses effectively through the dashboard, ensuring swift and sensitive handling of student needs.
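For readers who want to see what the detection step looks like in code, below is a minimal sketch using OpenAI's official Python library. The check_message helper and its wiring are illustrative assumptions, not DistrictZero's actual implementation:

# Minimal sketch of the detection step, assuming the official
# `openai` Python library. `check_message` is a hypothetical helper,
# not part of DistrictZero's codebase.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def check_message(text: str) -> bool:
    """Screen a student message and report whether any category flagged."""
    response = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    )
    # The endpoint returns one result per input; `flagged` is True if
    # any moderation category was detected.
    return response.results[0].flagged

In practice, a flag like this would feed the notification and alert steps described above rather than block the message outright.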
Important Disclaimer:
DistrictZero is not a medical or triage service. Our role is to proactively detect concerning patterns or signals in student interactions, providing early insight into potential issues. Students always retain access to their campus wellness and support services, available 24/7 through the navigation menu.
Our goal is not disciplinary but supportive—helping mentors and administrators respond to student needs before situations escalate.
More Information from the OpenAI API Technical Documentation:
Example:
{
"id": "modr-970d409ef3bef3b70c73d8232df86e7d",
"model": "omni-moderation-latest",
"results": [
{
"flagged": true,
"categories": {
"sexual": false,
"sexual/minors": false,
"harassment": false,
"harassment/threatening": false,
"hate": false,
"hate/threatening": false,
"illicit": false,
"illicit/violent": false,
"self-harm": false,
"self-harm/intent": false,
"self-harm/instructions": false,
"violence": true,
"violence/graphic": false
},
"category_scores": {
"sexual": 2.34135824776394e-7,
"sexual/minors": 1.6346470245419304e-7,
"harassment": 0.0011643905680426018,
"harassment/threatening": 0.0022121340080906377,
"hate": 3.1999824407395835e-7,
"hate/threatening": 2.4923252458203563e-7,
"illicit": 0.0005227032493135171,
"illicit/violent": 3.682979260160596e-7,
"self-harm": 0.0011175734280627694,
"self-harm/intent": 0.0006264858507989037,
"self-harm/instructions": 7.368592981140821e-8,
"violence": 0.8599265510337075,
"violence/graphic": 0.37701736389561064
},
"category_applied_input_types": {
"sexual": [
"image"
],
"sexual/minors": [],
"harassment": [],
"harassment/threatening": [],
"hate": [],
"hate/threatening": [],
"illicit": [],
"illicit/violent": [],
"self-harm": [
"image"
],
"self-harm/intent": [
"image"
],
"self-harm/instructions": [
"image"
],
"violence": [
"image"
],
"violence/graphic": [
"image"
]
}
}
]
}
The output has several categories in the JSON response, which tell you which (if any) categories of content are present in the inputs, and to what degree the model believes them to be present.
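As an illustration, a response like the one above can be reduced to just the flagged categories and their scores. The summarize helper below is a hypothetical sketch operating on the parsed JSON, not part of any official SDK:

# Hypothetical helper: extract the flagged categories and their
# confidence scores from a parsed moderation result (field names
# match the JSON example above).
def summarize(result: dict) -> list[tuple[str, float]]:
    return [
        (category, result["category_scores"][category])
        for category, is_flagged in result["categories"].items()
        if is_flagged
    ]

# Applied to the example result above, this yields:
# [("violence", 0.8599265510337075)]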
OpenAI plans to continuously upgrade the moderation endpoint's underlying model, so custom policies that rely on category_scores may need recalibration over time.
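One common way to keep such a policy easy to recalibrate is to hold all thresholds in a single configuration object. The values below are illustrative assumptions, not recommended settings:

# Sketch of a score-based alert policy. ALERT_THRESHOLDS is an
# illustrative assumption; values should be tuned (and re-tuned after
# model upgrades) against real data.
ALERT_THRESHOLDS = {
    "violence": 0.5,
    "self-harm/intent": 0.3,
    "harassment/threatening": 0.4,
}

def needs_alert(category_scores: dict) -> bool:
    """Return True if any category's score meets or exceeds its threshold."""
    return any(
        category_scores.get(category, 0.0) >= threshold
        for category, threshold in ALERT_THRESHOLDS.items()
    )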
Content classifications
The table below describes the types of content that can be detected in the moderation API, along with which models and input types are supported for each category.