
It is said that in ancient China, when someone cheated, both examinees and examiners could face death if found guilty. Yet, academic dishonesty has persisted for thousands of years.
Once seen as a threat to academic integrity, AI tools like ChatGPT are now being embraced by Long Beach State and the CSU system.
On March 28, the CSULB AI Steering Committee emailed the Long Beach State community to announce that students, faculty and staff would have free access to the ChatGPT Edu program starting April 2.
A variant of the company’s standard chatbot, the education-focused AI is meant to give students an ethical way to use non-human tutoring, reviewing and brainstorming.
The program is now available on the CSULB apps dashboard.
With nearly 30% of students already using these programs in their coursework and a new tech partnership to integrate artificial intelligence into education, CSULB now faces the challenge of fostering innovation while preserving academic honesty in an increasingly AI-driven world.
Where will classroom AI go as it grows?
On Feb. 4, the California State University system announced a “first-of-its-kind” partnership with leading technology and artificial intelligence companies. The collaboration aims to better educate and train students and employees on ethical AI use.
An email sent by CSU Chancellor Mildred García and addressed to Cal State “community members” said the aim is to provide a dedicated, education-based AI platform at no cost to students and employees across all 23 campuses.
García said the initiative is focused on positioning CSU as a leader in AI use.
Partner companies and organizations include OpenAI, Adobe, Microsoft, Google’s parent company Alphabet, IBM, Intel and the office of Gov. Gavin Newsom.
OpenAI is the company behind the generative AI chatbot ChatGPT.
The announcement surprised students, faculty and staff at CSULB, including Jennifer Fleming, a professor and chair of the Journalism and Public Relations Department.
Fleming is concerned the CSU “rolled something so large in such a wide scale” without consulting with faculty or staff.
Due to how quickly technology develops, she worries that the AI platforms California spent money on could soon become outdated.
Will Murray, chair of the Mathematics and Statistics Department at Long Beach State, said tools like PhotoMath and ChatGPT have been able to solve lower-level math problems for a while.
He believes these programs have advanced over the past couple of years, with some generating valid mathematical “proofs.”
“I think the challenge is sort of making students aware that it is not a substitute for human thinking, and it is not a substitute for sorting their own development of their quantitative reasoning skills and their logical reasoning skills,” Murray said.
How does the university use AI?
Faculty across campus can opt in to Turnitin’s “originality checker,” a built-in plug-in on Canvas that checks student-written coursework for plagiarism.
Turnitin’s FAQ website states the program is designed to detect syntactic differences between human writing and that of popular AI models like ChatGPT. The detection tool promises a false positive rate of less than 1%.
Associate Dean Trace Camacho works for the Office of Student Conduct and Ethical Development at CSULB and said he does not think the new initiative should affect the university’s current rules regarding ethical AI use.
“While this policy encourages use of AI, it’s still up to the discretion of the faculty how they choose to incorporate it or not incorporate it,” Camacho said. “So while this initiative encourages the use, it doesn’t really change our policy because our policy comes down to what is or isn’t allowed per the individual faculty.”
In 2025, the CSU required all applied doctoral and professional programs to have an official AI policy.
What qualifies as wrongful use of generative AI in coursework?
Currently, it’s up to individual professors and departments at Long Beach State to establish regulations on proper and improper use of AI, per the Feb. 4 announcement.
First introduced in September 2023 and periodically updated, CSULB’s Student Guidelines on Generative AI require students to understand and follow their instructor’s stance on generative AI before submitting work.
If permitted, students must cite all AI usage through the Beach’s suggested citation formats; if the policy is misunderstood or if citations are incorrect, unauthorized usage can lead to a failing grade, discipline or even expulsion, per CSULB’s academic integrity policy.
If the Standards for Student Conduct are breached, “academic dishonesty cases that occur in the classroom shall be handled by faculty members.”
While CSULB’s multi-step protocol for disciplinary decisions, Section Statement 08-02, has remained the same since 2008, the classifications of cheating and plagiarism have been adjusted amidst a rising national and local trend in academic dishonesty.
Why are AI guidelines included in the academic dishonesty policy?
Global academic institutions first began reporting a drastic increase in students cheating during the unprecedented shifts to online learning amidst the COVID-19 pandemic.
CSULB’s numbers reflect a similar trend.
Minutes from CSULB’s Academic Senate meeting on Feb. 1, 2022, stated, “Academic integrity cases have increased in the digital environment.”
The Executive Committee cited the 2019-20 academic year as having 190 cases of students reported for academic dishonesty.
The 2020-21 university year had 261 reports, and the fall 2021 semester alone had 174 reports.

The number of students reported for academic dishonesty hit a high in the 2020-21 academic year. Graphic by: El Nicklin
Fifty of these fall 2021 reported cases came from two online sections of the computer science course CECS 328, an accusation that 13 students fought and the Current reported on.
Past false positives:
The 50 students were initially accused of plagiarism or unauthorized collaboration on a programming assignment titled “Coin Stacking,” assigned to about 400 students enrolled in both sections under professors Darin Goldstein and Ali Sharifian.
The accused students, who were given an “F” on the assignment, had their code flagged by two AI systems, Measure Of Software Similarity (MOSS) and another left unspecified, before being examined by four coding experts.
In a letter from Goldstein to the Academic Integrity Committee accusing student Mark Fastner, who chose to fight the charges of academic dishonesty, Goldstein said the AI system used to flag students for cheating had a 5% chance of error.
Antonella Sciortino, the Associate Dean of Academic Programs, confirmed the coding experts all “agreed independently, beyond a reasonable doubt [that] cheating occurred” after evaluating the AI’s flagging.
Accused students continued to fight the charges until the summer of 2022, when Fastner told the Current he hired a lawyer to speak with Shawna Mckeever, a university counsel.
Shortly after, CSULB officials announced that grade appeals would be accepted and charges would be dismissed for the accused CECS 328 students.
Guidelines for the future:
With no changes planned from the Office of Student Conduct and Ethical Development, Camacho said he trusts individual instructors to regulate their courses and is excited about the potential opportunities the AI rollout and its tools could bring for students.
“So I think anything that promotes responsible, and that would be the word; [a] responsible, rigorous review in the way our students use AI in an ethical way is important,” Camacho said.
Camacho hopes that if students have reservations or are confused, they will have a conversation with the faculty for the given class.
“I think anything that encourages more just discussion between students and faculty is always good,” Camacho said.