The School District of Palm Beach County contracted with Austin, Texas-based Lightspeed Systems, instituting a pilot program of Lightspeed Alert, one of the company’s products, at the start of the 2024–2025 school year. The district is testing the program in ten schools, including Dreyfoos. Lightspeed Alert monitors student device use around the clock and scans for possible threats with the help of artificial intelligence (AI) filters.
According to the company’s official website, “Lightspeed Alert is an at-risk student identification solution that monitors and analyzes online activity for signs of self-harm, violence, and bullying.”
Students and staff were briefed on the concept of AI monitoring their digital activity in a single slide of a school-wide presentation. The slide stated that “EVERYTHING (a student does) on a school district computer and/or using (their) school district login is monitored,” outlining web browsing history, Google Suite applications, Google search history, and website and social media activity as some of the areas monitored. Personal devices that are logged into district portal accounts are also monitored, whether they are used at school or at home.
In a statement given to The Muse via email, Lightspeed Systems said that its monitoring complies with a federal law intended to prevent minors’ exposure to and engagement with explicit content online, noting that “(monitoring activity on district devices is) required by federal law under the Children’s Internet Protection Act (CIPA).”
Lightspeed Alert does not track online activity through school network use, but rather through access to the student portal on any device or network.
“When you go on your portal at home and you access your student account, it’s your school district account,” technology coordinator Edward Maniaci said. “The district has the right to monitor that internet traffic, even if it’s at a home on a home device.”
Text identified as a potential threat to the school or students is flagged and reported to school administration. Being flagged does not always mean a student will face consequences, though. Strings sophomore Isabella Fernandez, who believes she was flagged for mentioning guns in a history assignment, was not taken out of class, presumably because the document was identified as an assignment.
“All of a sudden, I saw (someone’s Google icon) come into my Google Doc,” Fernandez said. “I thought it was my teacher, but then I hovered my cursor over it, and it was Officer (O’Sullivan). It scared me, but I told myself ‘Nothing’s going to happen, I’ll be fine.’”
According to the Lightspeed Systems website, “Lightspeed Alert (can) significantly reduce false positives” by avoiding flagging references to harmful chemicals, suicide, or violence within school assignments and required reading by understanding the full context of the concerning content. Assistant Principal William Clark said that the system looks at potential alerts contextually before flagging them, even for art-related assignments.
“The system does its best to take (into account) the context of what’s going on as well,” Mr. Clark said. “So a lot of them are automatically able to weed out immediately because a student’s doing an assignment. Sometimes students don’t even see or hear what we’re doing in the background. When we don’t have that context of something that’s typed in, we have to do something more intensive to find out what’s taking place.”
While the program uses AI technology to identify the context of the alert, situations such as Fernandez’s are inevitable. Mr. Clark said that the administrative team takes “every single one” of the reports seriously until the investigatory process can prove that the threat is invalid.
“It’s flagging certain keywords that are associated with potential harm and then the actual company itself, the human vetter, looks at the context,” Keith Oswald, the district administrator in charge of instituting Lightspeed Alert, said. “What we have to be careful of is if people are writing something that could cause harm to themselves or somebody else.”
During the school’s use of the pilot program, Lightspeed has already detected and reported potential threats. In English teacher Peggy Mellon’s class, two anonymous students typed what they thought to be a joke on one of their laptops and then quickly deleted it. The “joke” was perceived as a threat by Lightspeed and was then confirmed as a valid report by an administrator.
“Within 90 seconds, (administration) came here and took them out of my class with all their stuff,” Ms. Mellon said. “In one way it’s very comforting, because they were here in 90 seconds, and while that student had only been in my class for 20 minutes, and I’m pretty sure didn’t mean anything towards me personally, if it were to ever happen where it was students seriously making a threat, they would know immediately and be able to lock it down. I feel safer with it in the classroom, but it feels very ‘Big Brother.’”
While some flagged material may have been written with humorous intent, any report is potentially punishable. Mr. Oswald said that he believes the district needs to “tweak” the program to recognize frequently used student language that may be perceived as a threat but is more likely a product of generational culture.
“Even if it’s a joke without intention, you can’t know (just by reading it), so the student may be prosecuted criminally (as well as face) school consequences,” Mr. Oswald said. “The world’s different now.”
According to a study by the US Secret Service National Threat Assessment Center, about three-quarters of school attackers displayed “concerning behaviors” online prior to their attacks. Lightspeed Systems’ website states that these online indicators could have been identified by its system. While some of these behaviors may theoretically have been caught by such software, the claim cannot be verified for this specific product.
“In the past, it seemed like there were warning signs for certain things — certain situations that had happened — that they could have been able to stop or neutralize before things got to that extreme level,” Mr. Maniaci said. “They just don’t want anything to fall through the cracks anymore.”
The school has its own system to divide the identification and clarification of alerts, with a counselor and an assistant principal assigned to each arts area. Principal Blake Bennett issued Lightspeed Alert training to administrators so they could understand the program being piloted on campus. Administrators watch for alerts on their Lightspeed Alert dashboards both during school and after work hours end.
“It might take a lot of extra work to be able to do this, but (it’s) extra work to help support and keep students safe,” Mr. Clark said. “I think that’s why a lot of us are in the business that we’re in working as administrators. We want to educate. We want to be able to provide you guys with the best possible high school experience, and we want to keep you all safe.”
Concerning alerts can require parental involvement and even a behavioral analysis, a series of interviews conducted by administrators with the student, their parents, and their teachers to discuss the student’s situation. In some cases, the situation may warrant a referral for additional counseling. Sessions with co-located mental health counselor Madison Zeigler, who handles individual counseling by referral, or school behavioral health professional Tawanna Pollock, who handles day-to-day check-ins, may be recommended.
“I just want to make sure everybody has the mental health support that they need,” Principal Bennett said. “If anybody needs anything, and this (Lightspeed) is how we find out about it, then we are able to give a student support, and that’s what matters.”
Work in arts area courses has also been mistakenly flagged. Flaggable topics could be the focus of creative writing pieces, speeches, monologues, character analyses, or visual and digital art pieces. Research on these topics, as well as the inclusion of these themes in the works themselves, could cause Lightspeed Alert to falsely flag text.
“I have spoken with my department members about the potential impact that these programs could have on the creative writing program because I know that it will be flagging things in this class,” communications teacher Brittany Rigdon said. “I’ve been teaching creative writing for over a decade, so I know the types of subjects that come up. As a creative writing teacher, I don’t believe it’s my job to censor students’ creativity. People write, especially fiction, about many different topics, and they have to be able to do that.”
Speech and debate, research-heavy by nature, may also trigger alerts. Speech and debate competitor and communications senior Matix Parker said that debated topics such as weapons or military budgets are often controversial and may cause alerts.
“When you’re looking up specific words like guns or bombs or anything like that that would be deemed a threat to campus and would have to be investigated, (the investigation) ultimately feels like a waste of time because it’s for a debate class,” Parker said. “It’s not a manifesto. It’s just research.”
Debaters in the National Speech and Debate Association vote on Resolutions, or Public Forum debate prompts, every two months. Schools without programs like Lightspeed may vote for topics that the system would perceive as concerning content, leaving debaters like Parker to research those topics knowing their work may be flagged.
“It is that lack of transparency with how the system works that makes it really confusing because I would rather not have to walk on eggshells when I’m looking up something about immigration or crimes,” Parker said.
Another program that tracks student activity, Gaggle, is similar to Lightspeed Alert in that it uses AI to detect “concerning” language. Gaggle identified 235,000 potential threats in one year, and approximately 1,500 schools use the program, according to Gaggle spokesperson Shelby Goldman.
According to former Lawrence High School journalism students Natasha Torkzaban, Morgan Salisbury, Jack Tell, and Maya Smith, writing in the Lawrence Times in Kansas, Gaggle interfered with their “privacy, free speech, and intellectual property rights.” In an investigative article, the former editors claimed “students often are confused” about the program and were misled to believe that Gaggle can monitor everything.
With Lightspeed, some students, like vocal senior Kamari Dew, have similar concerns about how much information students have been given regarding the level of surveillance.
“I think it’s a good thing that they can look at our browsers and stuff simply thinking about the safety aspect, but I just wish they let everyone know just how much they had access to,” Dew said. “It just seems like a misunderstanding waiting to happen.”
Parker shares similar thoughts, calling the lack of knowledge “scary.”
“I don’t like how much (the system is) seeing, but I would have less of an issue with it if we just knew exactly what was being collected,” Parker said. “Can the system now see everything you’re doing on your phone? Is it just (when you are on) a school computer? Is it just on a certain WiFi? Could a VPN bypass it? It’s the fact that we know literally nothing as students, and the safest option is to just assume that it sees everything, which I think creates a lot more problems than it solves.”