A new report from the Center for Democracy and Technology (CDT) found that computer technology — like artificial intelligence (AI), content filtering, and student activity monitoring — poses risks to the rights and privacy of LGBTQ+ students.
The report surveyed 1,029 grade nine to 12 students, 1,018 parents of grade six to 12 students, and 1,005 grade six to 12 teachers from July to August. It found that school computer blocks on “explicit adult content” often stop students from accessing useful information on LGBTQ+ and race-related content. Software that monitors student computer activity — including typing and web browsing — has been used to out students to teachers and parents or subject them to disciplinary action, including reporting students to the police.
In one case, the software outed a student; in another, it mistakenly alerted a transgender teen’s parents after the teen wrote about past suicidal thoughts.
An estimated 75% of students said that school filtering and blocking technology has made it more difficult for them to complete school assignments. Approximately 33% of teachers said that the technology is used to block LGBTQ+ web content.
Schools are legally required to restrict some online content from children, but this technology often blocks far more than the law demands. Nearly 50% of teachers said such technology is blocking students from seeing web content that would “help them learn as a student” or “grow as a person.”
Approximately 88% of all teachers also said that schools monitor students’ online activity, and 40% said they’ve witnessed an increase in schools monitoring students’ personal devices.
This software is meant to predict whether individual students are at risk of dropping out or are adequately prepared for college, to track students’ physical location through their phones or school-provided devices like laptops, and to determine if a student is cheating on an exam. The software can also share student data such as grades, attendance, and discipline information with law enforcement; monitor what students post publicly on their personal social media accounts; and analyze student data to predict which students might bully others, commit a crime, or engage in violence or self-harm.
Among students, 19% said that such monitoring software has “outed” either themselves or someone they know – a six-percentage-point increase from the 2021–2022 school year. Nearly 66% of teachers said that such monitoring software had resulted in students being disciplined, and 38% of teachers said that software alerts resulted in students being contacted by law enforcement officials.
While generative AI has increasingly been used to assist with written work, only 43% of teachers said that they’d received substantive training on AI technology. Despite this, 50% of teachers said they’d witnessed students being accused of using AI to write their assignments.
“One of the main risks of AI is that it will exacerbate existing inequities and limit educational opportunities for students, especially the most vulnerable,” the report’s authors wrote, noting that disparities seemed to increase even more for disabled students in special educational settings. “Fortunately, robust and well-established civil rights frameworks can offer clarity (and enforcement) to ensure that these risks do not become even more common.”
While such software is ostensibly used to “supervise students online, maintain campus safety, shape educational experiences, and meet other student needs,” only 31% of parents and 38% of students said that their school asked for their input on how to responsibly use student data and technology.
“What’s disheartening is that another year has gone by, and students at Title I schools, students with disabilities, and LGBTQ+ students continue to bear the brunt of irresponsible data and technology use and policies in the classroom and at home,” said Elizabeth Laird, Director of the Equity in Civic Technology Project at CDT. “This is alarming given that schools say they use technologies to keep all students safe and enhance their learning experience. As students enter the age of AI, they need better from their schools.”
In response to the report, several civil rights organizations — including the American Civil Liberties Union (ACLU) and the LGBTQ+ student advocacy organization GLSEN — sent a letter urging the U.S. Department of Education to issue guidance on how schools can identify and prevent tech-based discrimination against protected classes of students.