The Watched Classroom
Surveillance Capitalism and the Neoliberal University
Amazon is coming to your campus. It is arriving not as a vendor but as the operating system on which the university now runs.
In 2018, several U.S. universities began piloting Alexa-enabled classrooms, allowing faculty to control projectors, temperature settings, and room configurations through voice command. Students could ask Alexa for office hours, financial aid information, and library schedules—the promotional materials framed this as convenience.
What the materials did not say is that every voice command, every query, every pause was being logged, processed, and stored on Amazon’s servers, governed not by FERPA but by institutional policy and Amazon’s terms of service. That arrangement was the opening bid. What has followed makes the Alexa classroom look modest: by 2026, Amazon’s campus presence had grown from a smart-speaker novelty into Amazon Web Services, the invisible cloud infrastructure powering everything from financial-aid portals to AI-driven student advising.
The Fiscal Architecture of Modern Surveillance
Between 2008 and 2018, state legislatures cut per-student funding to public universities by an average of 16 percent in inflation-adjusted terms. The apparent recovery since then is largely an accounting artifact: enrollment declines during the pandemic reduced the per-student denominator, and federal stimulus funds temporarily inflated state budgets. As of 2024, twenty-two states have not yet restored per-student funding to 2008 levels, with Arizona remaining more than 40 percent below its pre-recession baseline.
Public universities did not respond to this sustained fiscal strangulation by lobbying aggressively for restoration. They responded by adopting the vocabulary and the logic of their adversaries: efficiency, accountability, market discipline, and revenue diversification. Adjunct labor replaced tenure lines. International and out-of-state students, who pay premium tuition, were recruited to subsidize domestic programs. Online learning platforms (Blackboard, Canvas, Desire2Learn) were adopted not because they produced better learning outcomes, but because they reduced the cost per credit hour.
This same rationale extended to surveillance platforms: proctoring software, AI advising chatbots, and predictive analytics tools were purchased not to improve learning but to automate functions previously performed by full-time faculty and professional advisers, displacing academic labor while generating data as a secondary revenue stream. Each of these moves indexed the university to market imperatives and opened the door to the next vendor, the next platform, the next surveillance tool sold as an educational service.
The pandemic accelerated what was already underway. When campuses closed in 2020, the data-collection apparatus that had been building for a decade suddenly operated at full scale, on every student, in every course. When campuses reopened, the apparatus remained in place. The emergency measure became permanent architecture.
Surveillance Capitalism Enters the Classroom
Shoshana Zuboff’s concept of surveillance capitalism provides the framework for understanding the modern university. In her 2019 analysis, Zuboff argues that the dominant mechanism of platform capitalism is not the sale of products but the extraction of behavioral data from human experience: voice recordings, search queries, purchase patterns, location data, all processed into predictive products and sold to businesses that wish to shape future behavior. Google, Facebook, and Amazon built their fortunes on this model. Universities adopted it, often without examining the cost of adoption.
Zuboff’s original account describes behavioral data extracted to sell targeted advertising. Universities are not in the advertising business. They extract and hoard behavioral data for distinct purposes: risk management, retention metrics, credentialing power, and institutional control. The institution and its vendors accumulate records of student behavior, learning patterns, academic engagement, and personal circumstances. The student accumulates course credit. One party to this transaction gains the knowledge to predict, intervene in, and shape the other’s future; the other is awarded a degree.
Ivan Manokha’s application of Foucauldian analysis extends this framework. Manokha argues that digital platforms have created a condition of permanent visibility, a digital panopticon, in which individuals are not coerced into surveillance but induced into it. Users surrender personal data in exchange for access to services—a transaction that, for university students, carries no genuine option to refuse: declining the LMS, the proctoring platform, or the AI advising tool means forfeiting access to education itself.
The transaction masquerades as consent. Haggerty and Ericson’s concept of the surveillant assemblage further describes how this extraction operates: not through a single observer monitoring physical bodies, but through distributed, decentralized data collection across internet searches, credit card transactions, smartphone signals, and facial recognition cameras. No single observer is necessary. The system surveils by aggregation.
Universities have embraced this model completely. What the current generation of artificial intelligence tools has done is accelerate the extraction and deepen its reach.
Proctored, Profiled, and Presumed Guilty
The most aggressive surveillance platforms in universities are those produced by the online proctoring industry. Companies such as ProctorU, Examity, and Proctorio grew substantially during the pandemic years and have retained significant market share.
These platforms require students to grant access to their computer screens, webcams, and microphones before taking examinations off campus. The intrusion is extensive. Students must display their student identification and pan their webcams across their physical workspace before an examination begins. During the exam, the software monitors browser activity and flags copy-and-paste actions or the opening of additional tabs. Eye-tracking algorithms log instances of off-screen gaze, treating sustained off-screen focus as a behavioral indicator of potential cheating. Facial recognition matches the student’s face to identification photographs and performs random identity checks throughout. Keystroke dynamics (the speed and rhythm of typing) are recorded at the start of the semester and compared against examination behavior to verify identity.
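The mechanics of that last check are simple enough to sketch. What follows is a minimal illustration of the idea behind keystroke-dynamics matching, not any vendor’s actual method: typing rhythm is reduced to the gaps between keystrokes, and a session whose rhythm drifts too far from the baseline profile is flagged. The timestamps, threshold, and function names are all hypothetical.

from statistics import mean, stdev

def interval_profile(key_times):
    # Summarize typing rhythm as the mean and spread of inter-key gaps (seconds).
    gaps = [later - earlier for earlier, later in zip(key_times, key_times[1:])]
    return mean(gaps), stdev(gaps)

def rhythm_distance(baseline_times, exam_times):
    # Crude distance between two rhythm profiles, in baseline standard deviations.
    base_mean, base_spread = interval_profile(baseline_times)
    exam_mean, _ = interval_profile(exam_times)
    gap = abs(exam_mean - base_mean)
    return gap / base_spread if base_spread else gap

# Hypothetical keystroke timestamps (seconds from the start of typing).
baseline = [0.00, 0.18, 0.33, 0.52, 0.70, 0.91, 1.05, 1.26]
exam = [0.00, 0.40, 0.95, 1.10, 1.80, 2.30, 2.55, 3.20]

FLAG_THRESHOLD = 2.0  # arbitrary illustrative cutoff

if rhythm_distance(baseline, exam) > FLAG_THRESHOLD:
    print("session flagged for identity review")

The crudeness is the point: a rhythm that drifts because of fatigue, an injury, an unfamiliar keyboard, or a disability looks, to a heuristic like this, the same as an impostor.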
The racial bias in these systems has been documented. The University of California, Los Angeles, abandoned its plans for a campus facial recognition program after a test of Amazon’s Rekognition software misidentified 58 of 400 photographs, with the false positives concentrated among students and faculty of color. The surveillance apparatus operating in the online examination is not race-neutral. Neither are its consequences. A flagged exam triggers further scrutiny, and further scrutiny in a system built on algorithmic misfires does not fall equally on all students.
The disability justice dimension compounds this record. Proctoring algorithms are programmed to flag behavior that deviates from a narrowly defined norm, such as extended off-screen gaze, irregular typing rhythms, and audible vocalizations. Students with autism, ADHD, physical disabilities requiring mobility adjustments, or conditions that involve reading aloud are flagged by these criteria as a matter of course. The accommodation letter that grants a student extended time in an in-person examination offers no protection against an algorithm that treats the student’s disability as evidence of dishonesty. Several institutions have faced ADA compliance challenges as a direct result of deploying proctoring software, a consequence that vendors’ promotional materials do not mention.
Automated Accusation
The arrival of ChatGPT in late 2022 introduced a new front in the surveillance of student work, and universities responded with a familiar reflex: purchase a technology product to manage a technology problem. Turnitin, already deployed at more than 16,000 academic institutions globally to detect plagiarism, launched an AI writing detection tool in April 2023. The deployment was swift and executed without institutional oversight.
Turnitin activated the feature with less than 24 hours’ advance notice to institutional clients, with no option to deactivate it and no disclosure of how the detection algorithm functioned. Vanderbilt University, after months of testing, turned off the tool entirely, citing unresolved concerns about accuracy and false positives. Several University of California campuses declined to adopt it. Even OpenAI, the company that produces ChatGPT, shut down its own AI detection product after it correctly identified only 26 percent of AI-written texts while falsely flagging 9 percent of human writing as AI-generated.
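Those two figures are enough for a back-of-the-envelope calculation of what a flag actually means. Suppose, purely for illustration, that one submission in five really is AI-written; that prevalence is an assumption made for the arithmetic, not a reported number. Bayes’ rule then gives the probability that a flagged paper is in fact AI-generated:

\[
P(\text{AI} \mid \text{flagged}) = \frac{0.26 \times 0.20}{0.26 \times 0.20 + 0.09 \times 0.80} = \frac{0.052}{0.124} \approx 0.42
\]

Under that assumption, nearly three of every five flagged submissions would be honest human writing, and the lower the true rate of AI use, the worse the ratio becomes.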
Research compiled by UCLA’s Humanities and Technology program indicates that submissions by students writing in English as a second language are up to 30 percent more likely to be falsely flagged than those of native speakers. This bias is not an anomaly. Stanford researchers found that AI detection tools flagged writing by non-native English speakers as AI-generated 61 percent of the time, while flagging papers by native English speakers at near-zero rates.
Neurodivergent students face the same exposure: researchers at the University of Nebraska’s Center for Transformative Teaching have documented that students with autism, ADHD, and dyslexia face elevated false-positive rates because their writing patterns, characterized by consistent terminology and repeated phrasing, superficially resemble the statistical signatures that detection algorithms associate with artificial intelligence. Turnitin itself acknowledges a margin of error of plus or minus 15 percentage points in its detection scores, which places a reported result of 50 percent AI-generated anywhere between 35 and 65 percent.
Universities and colleges are nonetheless deploying this instrument in academic misconduct proceedings, even as a growing number of their peers retreat from it. The University of Kansas, MIT Sloan, and other institutions have concluded that AI detection scores cannot serve as stand-alone evidence of academic dishonesty: the error rates are too high and the biases too well documented. The institutions that continue using these tools without safeguards are not protecting academic integrity. They are automating accusation.
Administrators who deploy these tools invoke academic integrity as justification: without surveillance, the value of the degree is debased by unchecked cheating. The argument deserves a direct answer. The evidence does not support the premise. Studies of remote examination conditions have not established that cheating rates are significantly higher than in proctored in-person settings. What the integrity argument accomplishes is to shift the burden of proof onto students, presuming dishonesty until the algorithm clears them, while insulating the institution from accountability for the racial and disability biases the tools demonstrably produce. Integrity, in this framing, is a justification for surveillance rather than an educational value.
Algorithmic Triage: Who Gets Flagged Before They Fail
Predictive analytics platforms extend the surveillance apparatus into academic advising, encoding racial inequities into automated triage decisions. Platforms such as EAB Navigate and Civitas Learning, currently deployed at hundreds of institutions, use machine learning to identify students deemed “at risk” of dropping out and to direct advising interventions accordingly.
Investigative reporting by The Markup, drawing on documents obtained through public records requests, found that at least four of seven universities examined had incorporated students’ race as a variable in EAB Navigate’s predictive models. Two of those institutions designated race as a “high-impact predictor,” meaning it accounted for more than 5 percent of the variance in a student’s predicted risk score.
Researchers have documented that predictive models in higher education consistently overestimate failure rates for racially minoritized students. The algorithms are trained on historical data that reflects existing institutional inequities; encoding those patterns into automated risk scores perpetuates and systematizes them. The premises underlying these platforms frame historically underrepresented students as individually deficient rather than as underserved by their institutions. A student flagged for infrequent library visits or irregular LMS logins receives an automated intervention message. What the algorithm does not account for is whether that student is working two jobs, caring for a family member, or navigating a campus climate that drives disengagement. The student bears the blame, and the institution remains unexamined.
Generative AI and the Division of Learning
Zuboff’s concept of the division of learning is fully realized in the era of generative artificial intelligence. Large language models (ChatGPT, Microsoft Copilot, Google Gemini) are now being integrated into learning management systems, advising platforms, and academic support tools at a pace that has outrun any serious institutional deliberation about their implications.
When a generative AI system produces course content, summarizes readings, drafts feedback on student work, or simulates advising conversations, the knowledge being produced is not neutral. It reflects the training data on which the model was built, the priorities of the company that built it, and the contractual terms under which the university licensed it. Students and faculty receive outputs whose provenance they cannot audit and whose parameters they did not set. They have no meaningful access to the inputs, the model architecture, or the decisions that shaped what the system will and will not say.
The extraction does not stop at the point of interaction. When universities license generative AI tools embedded in their learning environments, the student work processed by those systems (essays, discussion posts, advising conversations, assessment responses) feeds back into the model’s training data. The student’s intellectual labor serves as raw material for the next version of the product that the university purchases. Zuboff identified this circuit in its early form: behavioral data extracted from users is processed into products sold back to shape those same users. In the university, the circuit is complete. The student writes. The platform learns. The institution pays for the upgraded model.
The promised labor savings are equally illusory. Silicon Valley has lately rediscovered a complication from Victorian economics: the Jevons paradox. Writing in 1865, the economist William Stanley Jevons observed that efficiency improvements in steam engines did not reduce coal consumption. They lowered production costs, expanded the industry’s reach, and drove demand higher.
The same pattern governs AI surveillance in the university. Detection tools sold as labor-saving devices do not reduce the burden of academic integrity review. They flag more students, generate more cases, and create more demand for the human oversight they were marketed to replace. Each efficiency gain becomes a justification for broader deployment, deeper data collection, and tighter vendor contracts. Data collection and market saturation are the goals of surveillance capitalism, and the Jevons paradox ensures that both will intensify. The result is Zuboff’s division of learning at institutional scale: students and faculty work with systems they can neither audit nor configure, while the platform relentlessly accumulates the data.
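The paradox has a compact textbook formalization, offered here as a stylized illustration rather than an empirical estimate for higher education. Let c be the resource cost per reviewed case (staff hours, data collected) and q(c) the number of cases processed at that cost, so that total resource use is R = c·q(c). An efficiency gain lowers c, and whether R falls or rises depends on the elasticity of demand for review:

\[
R = c\,q(c), \qquad \frac{dR}{dc} = q(c)\,\bigl(1 - \varepsilon\bigr), \qquad \varepsilon \equiv -\frac{dq}{dc}\,\frac{c}{q}
\]

When the elasticity exceeds one, lowering the per-case cost raises total resource use. Jevons’ coal industry sat on that elastic side of the inequality; the argument here is that the university’s appetite for flags, cases, and data does too.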
Because this system is designed to perpetually expand, it will not contract on its own. The cycle of surveillance must be broken from the outside; the only available remedies are regulatory and fiscal.
Defunding Created This. Reinvestment Can Dismantle It
The Family Educational Rights and Privacy Act (FERPA) requires meaningful strengthening. FERPA, enacted in 1974, was not designed to govern the extraction of behavioral data by machine learning systems, biometric proctoring platforms, or generative AI tools embedded in learning environments. Its protections for student records do not map cleanly onto the data flows generated by these systems. Updating FERPA to cover behavioral and biometric data, to require affirmative consent for third-party data sharing, to mandate algorithmic transparency from vendors, and to establish enforceable penalties for violations would impose real accountability on an industry that has operated in a regulatory vacuum.
Students at CUNY, UC San Diego, and other institutions have already organized successfully against Proctorio contracts, demonstrating that the surveillance apparatus is neither inevitable nor irreversible when students and faculty act collectively to contest it.
Several states have moved further than federal law on biometric data protection. Illinois’ Biometric Information Privacy Act (BIPA), enacted in 2008, requires affirmative written consent before any private entity collects biometric identifiers such as fingerprints, facial geometry, and voiceprints. Plaintiffs have argued that proctoring platforms collecting this data without consent are liable under BIPA, and class-action suits against ProctorU and Proctorio have been working through Illinois courts. FERPA reform modeled on BIPA’s consent-and-liability framework would extend these protections nationwide.
However, regulatory reform treats only the symptoms. The more fundamental intervention is fiscal. State legislatures defunded public higher education and forced institutions into a market logic that made surveillance technology attractive. Administrators adopted proctoring software because remote instruction at scale, under enrollment pressure and cost-reduction imperatives, made automated monitoring look like a market solution to a staffing problem. Reversing this trajectory requires restoring state investment so that institutions are no longer structurally dependent on data-extracting vendors to manage their operations. The watched classroom is the product of political choices, and political choices that harm students can be reversed.
This essay is a substantially revised and expanded adaptation of a paper I originally presented at the 2020 SASE conference: “Neoliberalism, Surveillance Pedagogy, and the Corporatization of Higher Education,” Society for the Advancement of Socio-Economics (SASE), University of Amsterdam, The Netherlands, July 18-20, 2020 (held virtually).

