California community colleges spend millions on AI chatbots that give students wrong answers
California community college districts are spending millions of taxpayer dollars on AI chatbots from vendors like Gravyty and Gecko - ostensibly to help students navigate admissions, financial aid, and campus services. A CalMatters investigation found the bots routinely serve up inaccurate or flat-out wrong answers instead. Three districts reported annual chatbot costs ranging from $151,000 to nearly half a million dollars. At Fresno City College, the student government vice president said her school's mascot-branded chatbot repeatedly botched basic campus questions. The OECD found it noteworthy enough to log in its AI Incidents and Hazards Monitor.
Incident Details
When the Bot Can't Answer the Question
California's community college system is enormous - 116 colleges serving roughly 1.8 million students, many of them first-generation college students navigating a complicated admissions and financial aid process for the first time. The promise of AI chatbots was straightforward enough: give students instant answers to the questions they'd otherwise have to wait in line or on hold to ask a human. Questions like "When is the FAFSA deadline?" or "Where is the financial aid office?" or "How do I register for classes?"
The reality, according to a CalMatters investigation published in March 2026, is that the chatbots California's community colleges are paying for frequently can't answer those questions correctly - or at all.
Millions In, Wrong Answers Out
Multiple California community college districts contracted with AI chatbot vendors Gravyty and Gecko to deploy student-facing chatbots across their campuses. District officials told CalMatters these systems handle thousands of conversations each month, many outside regular business hours, and were intended to reduce the volume of phone calls and in-person visits students need to make.
Three community college districts that responded to CalMatters' survey reported annual costs ranging from approximately $151,000 to nearly half a million dollars per district. To put that in context: these are public institutions funded largely by taxpayer dollars, and the students they serve are disproportionately from low-income backgrounds. Every dollar spent on a chatbot that gives wrong answers is a dollar not spent on a human advisor who could have given the right one.
The Verge summarized the situation neatly: the chatbots answered "general questions correctly but struggled with more specific ones." In educational advising, the specific questions are the ones that actually matter. A student doesn't need an AI to confirm that the college exists. They need help figuring out whether they qualify for a fee waiver or how to transfer their credits to a four-year university.
"The Chatbot Is Outdated"
The CalMatters investigation found that these chatbots frequently provide inaccurate, outdated, or outright incorrect information to students. The problems weren't edge cases or exotic queries - they were the bread-and-butter questions that any student services office should be able to handle in its sleep.
At Fresno City College, student government vice president Reanna Carlson described her experience with the college's chatbot, branded "Sam the Ram" after the school mascot. Carlson said Sam the Ram repeatedly gave her unclear or incorrect answers to basic questions about campus services. When the person who holds an elected student leadership position can't get straight answers from the school's own chatbot, it's a strong signal that rank-and-file students are having an even rougher time.
At East Los Angeles College, computer science major Pablo Aguirre - who also works as an IT intern at the Los Angeles community college district office - told CalMatters that improvements to the chatbot couldn't come soon enough. When your own IT staff are publicly acknowledging the system needs fixing, the system needs fixing.
A Systemic Problem, Not a Glitch
It would be tempting to write this off as a few buggy bots that just need a software patch. But the pattern CalMatters uncovered suggests something more fundamental: the chatbot vendors appear to have shipped products that weren't adequately trained on the specific, frequently changing information that community college students actually need.
One reason AI chatbots struggle in higher education - and in many other industries - is that institutional information changes constantly. Course catalogs, financial aid deadlines, enrollment policies, campus service hours - none of these are static. Often the newest information contradicts what came before, or conflicts with other apparently current sources. A human advisor who fields the same questions week after week notices those conflicts immediately; a chatbot will confidently assert either version, unpredictably, depending on what it happens to retrieve. A chatbot trained on last semester's data is already outdated on day one of the new term, and if the vendor isn't continuously updating and validating its knowledge base against current institutional data, the bot is a confidently wrong encyclopedia from six months ago.
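There's no public detail on how Gravyty's or Gecko's pipelines actually work, but the staleness problem itself is mechanical enough to sketch. What follows is a minimal, hypothetical illustration - the entry structure, field names, and 90-day freshness window are all assumptions, not anything from the reporting - of the kind of audit a district could run against a chatbot's knowledge base before each term:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical knowledge-base entry: one answer the bot can give,
# tagged with the topic it covers and when a human last verified it.
@dataclass
class KBEntry:
    topic: str
    answer: str
    last_verified: date

MAX_AGE = timedelta(days=90)  # assumed freshness window: roughly one term

def audit(entries: list[KBEntry], today: date) -> dict[str, list[str]]:
    """Flag entries that have gone stale and topics with conflicting answers."""
    stale, by_topic = [], {}
    for e in entries:
        if today - e.last_verified > MAX_AGE:
            stale.append(e.topic)
        by_topic.setdefault(e.topic, set()).add(e.answer)
    # Two "current" answers for one topic means the bot may assert either.
    conflicting = [t for t, answers in by_topic.items() if len(answers) > 1]
    return {"stale": stale, "conflicting": conflicting}

# Example: a deadline entered once per cycle and never reconciled.
kb = [
    KBEntry("fafsa_deadline", "March 2, 2025", date(2024, 10, 1)),
    KBEntry("fafsa_deadline", "March 2, 2026", date(2026, 1, 10)),
    KBEntry("aid_office_hours", "M-F 9am-4pm", date(2024, 1, 15)),
]
print(audit(kb, date(2026, 3, 1)))
# {'stale': ['fafsa_deadline', 'aid_office_hours'], 'conflicting': ['fafsa_deadline']}
```

None of this is sophisticated. It's the sort of check that takes an afternoon to write and catches exactly the "two conflicting deadlines, both presented confidently" failure students were running into.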
This is a well-documented failure mode for AI chatbots deployed in customer-facing roles. The bot handles the easy queries just fine ("What's the campus address?"), building an illusion of competence. Then it falls apart on the questions where accuracy actually matters ("Am I eligible for the California College Promise Grant?" or "Can I still add a class after the drop deadline?"), delivering outdated or fabricated answers with the same confident tone it used for the easy ones.
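There is a standard guardrail against this failure mode, even if these deployments apparently lacked it: score how well the bot's retrieved source material actually matches the question, and hand off to a human below a threshold. A minimal sketch, assuming a hypothetical `retrieve` step that returns a relevance score - the threshold and function signatures are invented here, not the vendors' actual design:

```python
# Hypothetical confidence gate: answer only when retrieval is strong,
# otherwise route the student to a human instead of guessing.
HANDOFF = "I'm not sure - connecting you with a staff advisor."
MIN_SCORE = 0.8  # assumed threshold, tuned against real student queries

def answer(question: str, retrieve, generate) -> str:
    """`retrieve` returns (source_text, relevance_score); `generate`
    writes an answer grounded in that source. Both are stand-ins for
    whatever a vendor's pipeline actually does."""
    source, score = retrieve(question)
    if score < MIN_SCORE:
        return HANDOFF  # escalate rather than fabricate
    return generate(question, source)
```

The trade-off is obvious and is presumably why it gets skipped: a gated bot says "I don't know" more often, which looks worse in a sales demo even though it's far better for the student.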
For students who don't know any better - particularly first-generation students who lack a built-in network of people who've navigated the system before - a wrong answer from an official-looking campus chatbot is indistinguishable from a right one. If the bot tells them the wrong financial aid deadline, they miss the actual deadline. If it gives them incorrect transfer credit information, they take the wrong classes. These aren't hypothetical harms. They're the kind of real consequences that compound into students taking longer to graduate, paying more than they should, or giving up entirely.
The Accountability Gap
One of the more revealing aspects of this story is the accountability gap. The chatbot vendors - Gravyty and Gecko - sold these districts on the promise of automated student support. District administrators signed off on the purchases, presumably on the basis of demonstrations that showed the chatbots performing well under controlled conditions. But once deployed in the wild, who's actually responsible for the wrong answers?
When a human advisor gives a student incorrect information, there's a clear chain of accountability: the advisor, their supervisor, the institution. When a chatbot does it, the blame gets diffused across the vendor, the institution's IT department, whoever was supposed to keep the training data current, and the AI model itself. This diffusion of responsibility isn't accidental - it's one of the features of AI deployment that makes it attractive to budget-conscious bureaucracies. The cost of failure gets socialized across a dozen parties while the cost savings get claimed by the administrators who approved the contract.
The OECD logged this incident in its AI Incidents and Hazards Monitor, which suggests the international AI governance community views it as more than a local IT hiccup. When an intergovernmental organization representing 38 member countries is cataloguing your chatbot failures, the problem has graduated from "customer service issue" to "public policy concern."
The Bigger Picture
California's community college chatbot debacle is a microcosm of a pattern unfolding across the public sector: institutions under budget pressure adopt AI tools as a way to do more with less, vendors oversell what their products can reliably do, and the people who bear the consequences are the very ones the technology was supposed to help.
The CalMatters investigation was syndicated widely - US News, The Verge, LAist, and several other outlets picked it up - generating the kind of attention that tends to produce reactive fixes rather than systemic change. Some districts have reportedly signaled they plan upgrades, which is encouraging language that could mean anything from "we're renegotiating with the vendor" to "we've added it to a five-year strategic plan."
What would actually fix the problem is straightforward, if unglamorous: continuous human oversight of chatbot accuracy, regular testing with real student queries, clear escalation paths to human advisors when the bot doesn't know the answer, and contractual requirements that hold vendors accountable for answer accuracy rather than just conversation volume. In other words, the same boring quality-control work that makes any information system reliable - the exact work that buying an AI chatbot was supposed to eliminate in the first place.
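As one hedged illustration of how unglamorous that work is - the query set, the `ask` hook, and the 95 percent floor are all invented for this sketch, not drawn from any district's contract - a nightly accuracy check amounts to a few dozen lines:

```python
# Hypothetical regression harness: replay staff-verified student
# questions against the deployed bot and fail loudly when accuracy drops.
GOLDEN_SET = [
    # (question, substring a correct answer must contain)
    ("When is the FAFSA deadline?", "march 2"),
    ("Where is the financial aid office?", "student services"),
    ("Can I still add a class after the drop deadline?", "petition"),
]

PASS_FLOOR = 0.95  # assumed contractual accuracy floor

def nightly_check(ask) -> float:
    """`ask` is a stand-in for whatever API the vendor exposes."""
    hits = 0
    for question, must_contain in GOLDEN_SET:
        response = ask(question).lower()
        if must_contain in response:
            hits += 1
        else:
            # Misses feed the escalation path: a human advisor reviews
            # them, and the vendor fixes them before students see them.
            print(f"MISS: {question!r} -> {response!r}")
    accuracy = hits / len(GOLDEN_SET)
    if accuracy < PASS_FLOOR:
        raise SystemExit(f"accuracy {accuracy:.0%} below floor {PASS_FLOOR:.0%}")
    return accuracy
```

The point isn't this particular code. It's that answer accuracy becomes something measured nightly and enforceable in a contract, rather than something assumed from a vendor demo.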
For the millions of students in California's community college system who are just trying to figure out how to enroll, pay for school, and graduate, the lesson is one that AI's biggest boosters keep having to relearn: a bot that answers quickly is only useful if it also answers correctly.