Honor Rush

The hidden agenda of edtech: rethinking AI in education and learning

As digital technologies reshape classrooms across the globe, the use of education platforms powered by algorithms and data-driven systems has become routine. Behind the promises of personalised learning and student success, however, lies a growing web of surveillance and corporate control—especially in charter schools, where public accountability is often minimal and private interests dominate.

The Lovepost spoke to Adrienne Williams, Research Fellow at the Distributed AI Research Institute and a former charter school educator, about breaches of students' personal data through education technology, AI, and the many dilemmas of computer-based personalised learning.

Chan Zuckerberg Initiative (CZI) (a private philanthropy founded by Mark Zuckerberg and Priscilla Chan) has embedded itself in education through its funding and development of the Summit Learning platform, now known as Gradient Learning. Though operated by an independent nonprofit, Gradient Learning continues to receive major financial and technical support from CZI, with Priscilla Chan serving as chair of its board. The platform is widely used in charter schools and marketed as a tool to improve student outcomes through mentorship and personalised education. However, Williams has seen first-hand how such programs often serve corporate data interests over students' needs.

Williams warns that the issue extends far beyond charter schools, as edtech products, children’s games and even medical records are all vulnerable to data collection and breaches. “Almost everything [collects data]. Because you have the apps themselves, but then you also have data brokers, and you have no legislation pretty much anywhere in the world that guarantees data protection,” she explains. Even for those who actively try to avoid digital surveillance, she says, data collection is inescapable.

She further argues that governments have not only allowed but actively enabled mass data extraction, creating an environment where tech companies can exploit legal loopholes without consequence. At the heart of the problem, she suggests, is a political system in which lawmakers and elected officials are more invested in corporate interests than in public safety.

According to OpenSecrets, education technology and services companies have spent millions lobbying federal lawmakers to oppose regulations and help draft the very laws that govern student data and digital learning. At the state level, for-profit charter school operators and edtech firms often wield even greater influence; for instance, K12 Inc. has spent nearly 1 million USD lobbying Indiana legislators, helping to create policies that expand charter access and shield corporate operations from scrutiny. 

As Williams sees it, politicians either don’t understand the technology, or they’re too financially entangled with the industry to regulate it effectively. Without leaders willing to challenge that influence, she sees little hope for reform: “I want to say there are solutions, I know there has to be, but none of them will work [with] greed . . . being in the driver’s seat.”

Over the years, instead of strengthening privacy protections, policymakers have gradually dismantled the very guardrails meant to keep these industries in check. “Since 2000, all they’ve been doing is weakening those guardrails,” Williams says.

Those guardrails include laws like HIPAA and FERPA, which were originally designed to protect sensitive health and education data. But both have been steadily eroded or become obsolete in the face of digital surveillance. HIPAA, for example, only applies to traditional healthcare providers—not the mental health screeners or wellness apps now in classrooms. FERPA, once a stronghold of student privacy, was significantly weakened in 2011 when a regulatory change allowed schools to share student records with private companies for vaguely defined “educational purposes,” without requiring parental consent. Williams points to these kinds of loopholes as proof that tech companies are not just exploiting weak laws, they’re benefiting from a system that has been deliberately structured to serve corporate interests.

The consequences are already visible. In recent years, data breaches have exposed the personal information of millions of students (including names, grades, disciplinary histories and learning disabilities), often from platforms that were never properly vetted by families or schools. A shadow industry of data brokers trades in education-related profiles, using them for marketing, credit scoring or political targeting.

Edtech is a lucrative industry, with the global market projected to surpass 348 billion USD by 2030. Williams says much of its growth is driven by data, not improved learning. As more platforms enter classrooms, she urges a closer look at how that data is being used, and who stands to benefit.


Thank you for meeting today, Adrienne. I’d like to start with your time as an educator, where you worked at a charter school. Can you tell me more about that experience and what you see as the issues?

I was teaching at a [charter] school. Instead of the state and the government being in charge of them and running them, corporations are in charge of them and running them. And so, especially in California, every dumbass celebrity, every tech CEO, every VC firm and banker has their own charter school, and they're also all invested in the edtech.

It's always been kind of the known rule that there's no money to be made in education. Now, all of a sudden, there's a ton of money to be made in education, and they haven't done anything different. You have to ask, why? Where's the money coming from all of a sudden? And it's the data, because data is the new currency [of] these industries. 

The school I worked at was called Summit Public School. It was [founded by Diane Tavenner and later backed] by Mark Zuckerberg and his wife through CZI, along with the CEO [at the time], Diane Tavenner. I did not know about any of this when I started. And their whole thing is, we want to give this away to everybody. We want to get the platform [for] online learning. If you hear that, if you hear the idea of personalised learning . . . that's a red flag. There's no such thing as personalised learning. The only thing that creates personalised learning is people: teachers. Teachers that know their children. A computer programme cannot personalise learning, and ours didn't. My kids did the exact same projects and the exact same lessons.

When I was working there, what really blew me away was that they had that specific charter network. There's 11 Summit Public Schools up and down the West Coast, and [what] set them apart, and what's scary is their whole system is being used now as the gold standard in the US. Their whole [point of difference] is that not only do they have this personalised online learning platform, but they also have mentors.

When my seventh graders first came in, I was their mentor, [and] you were supposed to have these [one-on-one meetings] with your kids once a week, with every child, and when you did that, you were supposed to type all those notes in your [work-issued] computer.

If you listen to Priscilla Chan, she says mentors are for socio-emotional learning, which is a . . . buzzword that nobody can actually define for you . . . kind of like AI—you say the word AI, and people think of 100 different things. 100 different people think about 100 different things. And the same with socio-emotional learning. You tell a group of educators that and everybody's going to come up with a different idea of what that means.

The former CEO, Diane Tavenner, now she was the one that actually founded the school. She would say, "No, mentors are just for college readiness". But I had said, 12-year-olds don't [care] about college. That's not where their head is at.

Not at all, not at all.

The thing is, they didn't really expect us to talk to seventh graders about [college]. I don’t think they really cared about college. They fudged their numbers and forced every 12th grader to apply to certain colleges. [For students who didn’t plan to attend, the school would pay for an application to a university with a 100% acceptance rate.] Then they would go and say our Summit programme has a 100% acceptance rate. But they didn’t follow them through college to see how many graduated, even though CZI says we follow them through school, through college, out of college and through their work career. If they’re 50, they’re still going to be following them?

Adrienne Williams, Research Fellow at Distributed AI Research Institute and a former charter school educator.

So if it wasn't to do with college, where did this idea of mentorships in charter schools come from? 

I think [the purpose] was for two things. One, it's a way to get around hiring actual therapists, actual counsellors, actual school nurses, those types of people. My kids were talking about eating disorders, self-harm, sexual abuse, abortions, relationship issues, parents getting deported. Like in any other setting, I should have been a therapist. These were things that were way above my pay grade.

[Two: data collection]. I'm supposed to be typing that so and so has an eating disorder or so and so was raped, into my work-issued computer that goes directly to Meta engineers. So I know I'm not the only one, [but] I refuse[d] to write my notes in that computer, because it just killed me that in any other setting, those would be therapy notes, and they'd be protected by HIPAA laws, but they found this loophole where we can get all your info. And so I think people have this sense of, well, if you're not doing anything wrong, what does it matter?

Except now in the US, we have a situation where, literally, it may be illegal to the point of prison time if you get an abortion. And here I am a teacher, not even thinking about it, writing in my computer every single girl that's told me that she's gotten an abortion. It's really scary to me what they're allowing these tech companies to get away with. 

[Editorial note: In most school settings, personal student information is protected under FERPA; however, FERPA offers weaker protections than HIPAA and allows schools to share data with outside vendors or platforms under broad exceptions. Adrienne is suggesting that by having teachers, not licensed therapists, record sensitive disclosures into edtech platforms, charter schools may be bypassing the stronger privacy safeguards normally required in medical or counselling contexts.]

Can you walk us through exactly how a company would extract and use this student data? What happens to that data from the moment it’s recorded to when it influences an AI-driven system?

I can't tell you what happens to the student data from the minute it is recorded to when it influences an AI-driven system. All I know is that Mark Zuckerberg has admitted to using student data to build LLMs [Large Language Models]. Part of the issue is that these companies believe in "Open Source", which means they want public data to be free, easy and available to them at no cost. But they are notoriously secretive about the data their companies utilise and what is created from it.

When I still worked at Summit Public Schools, one of the corporate employees said that they were collecting more student data than she had ever seen a district collect, and she had worked for other districts. She said they were storing what the company was calling "Quality Proxy" files. This was student data that they had no use for in the present moment, but felt that it could be useful in the future, so they kept it. The problem is that we don't know what they are doing with our children’s info, or how much they have been able to enrich themselves from it.

Why do they care about somebody on such a personal, individual level?

Because they make psychological profiles on everyone, and that's how they make these algorithms work for you. The more they know about you, the more they know who to send [to] you for advertisements, the more they know who to send [to] you for new programs and systems, the more they know how to suggest a TV show or a movie. It also helps them build these LLMs, because they need data, and they pretty much exhausted all the data on the internet.

Who stands to gain the most from this system, and who do you think is being exploited the most? What are the power dynamics at play here? 

Large corporations, specifically tech companies, benefit most. Citizens of poor countries, refugees, prisoners' families (and prisoners), the disabled, activists and the poor are being exploited the most. The power dynamic is that governments have handed over the responsibility of governing to tech bros and corporations. We have very few grown-ups in the room. Most of our "leaders" so badly want to be seen as cool that they are trusting corporations to put health, safety and truth over profit. That is just never going to happen, and is a true dereliction of duty to the citizens of the world.

If this level of AI-driven profiling continues unchecked, where do you see it leading?

Workers will lose jobs and quality jobs will be harder and harder to find. Our public education systems will be dismantled. Misinformation will be so prevalent that it will be difficult to know what is real. This fear that AI is going to rise up like the Terminator and destroy us, feels silly to me. What I foresee is that AI will continue to drive a wedge between groups based on nationality, race, gender, class, etc., and we as a human race will take the bait and willingly destroy ourselves.

November 2019 | Co-founders and co-CEOs of the Chan Zuckerberg Initiative (CZI) Priscilla Chan and Mark Zuckerberg speaking to a CZI employee. | Photograph by CZI

How do you see the future of AI in schools, and what do you believe needs to be done to protect student data? Would you say charter schools should take a similar approach in teaching methodology? 

I honestly think charter schools should be eliminated. In the US, they’re one of the major reasons why our public school system is crumbling. Corporations use public funds to resource their schools. We’ve allowed them to take over health care, prisons, and education, leaving us sicker, in more legal trouble and less educated. Which is why the health insurance CEO, Brian Thompson, who was . . . murdered, [received] little to no sympathy from the American people.

I believe the path to protecting student data is in allowing everyone to own their own data. We also need new leadership. Our current leaders only enforce laws on low and middle class Americans. There are laws on our books that protect student data, but they’re not enforced. It actually feels like a pretty hopeless situation.


There is no simple solution for the ethical crisis at the intersection of AI, education and data collection, but ignoring it is no longer an option. What Adrienne Williams makes clear is that charter schools, especially those backed by tech interests, are using student data as a key source of value, not necessarily to improve learning, but to serve corporate interests, including surveillance, profiling and potentially feeding AI systems. This data isn’t limited to academic performance—it includes intimate, often traumatic details about students’ lives, gathered through programs marketed as mentorship or personalisation.

Families often don’t know what’s being collected, where it’s stored or who it’s being shared with. And as Williams points out, existing laws offer little protection in these digital ecosystems, especially when corporations control the platforms and the pipelines. The result is a system where surveillance is disguised as support, and where profit takes precedence over privacy.

To move forward, we must start by enforcing the laws we already have, demanding new ones that reflect the realities of AI and eliminating the legal grey zones that allow sensitive student data to flow unchecked. Charter schools, and the companies behind them, must be held to the same standards of transparency, consent and accountability as any public institution. Educators must be empowered to resist systems that ask them to become data collectors. Because if AI is to have a place in education, it cannot come at the cost of children’s dignity, safety or autonomy. What’s at stake is not just how we teach—but what we teach students about their own worth in a world run by algorithms.

Editorial notes and clarifications:

The following notes offer additional context and fact-checking related to topics discussed in the interview with Adrienne Williams.

1.  On Meta, CZI and the use of student data for AI development

While there is no public evidence confirming that Meta (formerly Facebook) engineers currently access sensitive student data, Facebook staff did work on the Summit Learning platform (now Gradient Learning) between 2014 and 2017. After Facebook’s formal exit, the Chan Zuckerberg Initiative (CZI) continued developing and supporting the platform.

CZI retains full access to all student, teacher and parent data entered into the system. As Gradient’s own Help Centre states, CZI works as a technical service provider to the Summit Learning Platform and needs access to student, teacher and parent data as part of its role. Mark Zuckerberg has not publicly stated that student data has been used to train large language models (LLMs), and CZI publicly claims to safeguard student data, pledging on its site to “never sell or rent student personal information” and to “never share any student information with Facebook”. These promises, however, apply only to personally identifiable data. The same policies clarify that CZI may retain and use de-identified student information to develop and improve its products. Critics say this amounts to a privacy loophole: “The public hears ‘we don’t share your personal info,’ but the reality is, they’re still sitting on a massive pool of student data stripped of names and IDs—and that’s often all a company needs,” says a spokesperson for Student Privacy Matters.

Because US privacy laws like FERPA do not regulate de-identified data, CZI is not required to notify families how that information is stored, used, or shared with third-party developers. With de-identification offering far less protection than most people assume, advocates warn that the line between research and exploitation is growing dangerously thin.

2.  On data collection practices described as “Quality Proxy” files

The term “Quality Proxy” does not appear in official Summit or CZI documentation. However, internal industry practices often involve storing large volumes of behavioural or educational data for undefined future use. Contracts associated with Summit Learning confirm that de-identified data may be retained indefinitely and used for analytics or the development of new products.

3. On CZI’s and Meta’s attitudes toward open data

CZI promotes an “open science” philosophy, supporting the free exchange of research data. However, critics argue this openness disproportionately benefits large tech companies that mine publicly accessible data while keeping their own models and datasets proprietary. This asymmetry raises ethical concerns, especially in education, where student data may be freely given under the guise of philanthropy or personalisation.

4. On the scope and transparency of student data collection

Summit Learning collects a significant volume of student data—including demographics, academic performance, disciplinary records, behavioural indicators and personal mentorship notes. A review of Summit contracts by the advocacy group Student Privacy Matters found that this information often extends far beyond what parents were told, and includes race, disability status, college test scores and even students’ personal narratives and communications with teachers.

The platform’s privacy policy allows for this data, once de-identified, to be retained indefinitely and used for purposes beyond immediate classroom needs, such as training algorithms or developing commercial tools. Some contracts also contain confidentiality clauses that limit what school officials can disclose to families, raising concerns about informed consent, overreach and the lack of transparency surrounding how student data is handled.

5. Misuse and risks of de-identified student data in education platforms and the way forward

While true anonymisation of student records sounds reassuring, it’s often brittle. In practice, “de‑identified” education data can be re‑identified by cross‑referencing with other data sets or using sophisticated algorithms. Privacy experts warn that it’s now “easier and easier” to re-link students to records once thought anonymous. Worse, third parties can still build detailed “profiles” of students (without names) and use them in decision‑making. For example, lenders or insurers might plug a student’s school, grades or even online learning behaviours into AI underwriting models, a practice critics call “educational redlining.” In fact, a 2020 report showed that using school or major as criteria can systematically disadvantage Black and Latino borrowers, producing interest‑rate spreads of up to 6 percent simply based on the college attended. In short, even without names, aggregated education data can produce biased predictions that echo old patterns of geographic or racial redlining.
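To make the re-identification risk concrete, the sketch below (hypothetical data; the names, values and column labels are invented) shows the basic linkage technique privacy researchers describe: a "de-identified" education dataset is joined to a separate public dataset on shared quasi-identifiers such as ZIP code, birth year and gender, which re-attaches names to sensitive records. It is illustrative only, not a description of any specific platform's data.

```python
# Illustrative linkage attack on "de-identified" student records.
# All data, names and column labels are hypothetical.
import pandas as pd

# "De-identified" education records: names removed, quasi-identifiers kept.
edtech_records = pd.DataFrame({
    "zip_code":      ["46012", "46012", "90210"],
    "birth_year":    [2007, 2008, 2007],
    "gender":        ["F", "M", "F"],
    "reading_level": ["below grade", "at grade", "above grade"],
    "mentor_note":   ["self-harm disclosed", "", ""],
})

# A public dataset (e.g. a sports roster or voter file) that carries names
# alongside the same quasi-identifiers.
public_roster = pd.DataFrame({
    "name":       ["A. Smith", "B. Jones", "C. Lee"],
    "zip_code":   ["46012", "46012", "90210"],
    "birth_year": [2007, 2008, 2007],
    "gender":     ["F", "M", "F"],
})

# Joining on the quasi-identifiers re-attaches names to sensitive records.
reidentified = public_roster.merge(
    edtech_records, on=["zip_code", "birth_year", "gender"], how="inner"
)
print(reidentified[["name", "reading_level", "mentor_note"]])
```

When a combination of quasi-identifiers is unique in both datasets (as ZIP code, birth date and gender often are), a single join like this is enough to undo the "anonymisation".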

Multiple lawsuits and investigations show edtech firms mining student information (often without consent) for profit. A recent class-action lawsuit charges major platforms with harvesting pupils’ work into commercial “dossiers” sold to third parties. For example, PowerSchool (used by 80 percent of US districts) is being sued for secretly collecting student and parent data and selling it on. Google, which dominates US K-12 via Chromebooks and Google Workspace, is also under fire: a proposed class action (filed April 2025) alleges Google embeds hidden trackers in Chrome that build each child’s unique fingerprint, letting the company track students across apps without parental consent.

In the UK, privacy watchdogs likewise flagged common tools like Google Classroom and ClassDojo: a recent report claims these free platforms collect children’s academic work and usage patterns in ways that likely violate data‑protection laws.

At present, US law treats truly de‑identified student information, data stripped of direct identifying details, as outside FERPA’s protections. However, scrutiny is growing. California’s Student Online Personal Information Protection Act (SOPIPA), passed in 2014, prohibits educational apps from creating student profiles or targeting students with ads and also bars selling or renting student data. Meanwhile, over 40 states have enacted K-12 privacy laws that require vendors to sign strict data-protection agreements with schools. At the federal level, lawmakers are debating “COPPA 2.0,” which would expand protections to teens under 17 and forbid targeted advertising to minors, though this is still in development.

Globally, regulators are pushing back too. In Europe, authorities have repeatedly raised concerns about school use of Google’s services under the General Data Protection Regulation (GDPR). For example, the Dutch Data Protection Authority warned schools in 2021 to stop using Google Workspace for Education unless data‑protection issues were addressed. In Denmark, the data protection authority went further, barring a municipality from using Chromebooks and Google Workspace in July 2022 over data-export risks. In France, CNIL fined Google 50 million EUR in 2019 for GDPR violations related to data transparency and consent; though this fine wasn't specific to education, it shows regulators' broader scrutiny.

Honor Rush
Honor values learning from others’ perspectives and life experiences. She believes each person has a story to tell and writing connects a global community.