Our Work Regarding AI
One year after the CSU announced the $17 million AI Initiative without faculty consent, CFA members continue to advocate for ethical and enforceable safeguards governing the use of artificial intelligence (AI).
At the bargaining table, we introduced a new stand-alone article on AI that affords faculty protections and opportunities. Our proposal includes protections for using or refusing to use the technology, professional development resources to adapt pedagogy to incorporate the technology, and further protections for faculty intellectual property.
We also proposed defining generative AI as referring “to a subset of AI that learns patterns from data and produces content based on those patterns, and may employ algorithmic methods (e.g., Azure AI, Bard, ChatGPT, Dall-E, Grok, Llama, MidJourney, Vertex AI, etc.).”
Although other AI programs are available for free, the CSU's AI Initiative pays OpenAI millions of dollars to provide ChatGPT Edu to all faculty, staff, and students throughout the system. In a petition, San Francisco State professors Martha Lincoln and Martha Kenney say ChatGPT Edu isn't designed, trained, or optimized for education. The professors say that experts argue the technology diminishes the quality of teaching and learning, introduces new forms of discrimination, and endangers students' mental health.
The contract between the CSU and OpenAI is set to expire June 30. The petition urges Chancellor Mildred García not to renew the contract and to use the savings to protect jobs at CSU campuses facing layoffs.
Beyond the issues with ChatGPT Edu, CFA members continue to be concerned about the initiative itself, over which we filed an unfair labor practice charge in March 2025, arguing that CSU management failed to meet and confer over faculty rights and the impact of the initiative. The initiative is an incursion of private companies' interests into the CSU's infrastructure and workforce development goals.
The CSU AI initiative created the AI Workforce Acceleration Board, which has representatives from the AI industry, state government, and the CSU who identify and advocate for AI skills needed in the workforce and provide guidance and opportunities for internships and jobs. The board includes managers and leaders from companies like OpenAI, Adobe, Amazon Web Services, Anthropic, Google, IBM, Instructure, Intel, Microsoft, Nvidia, and Soundings.
One example of how CSU campuses have been rolling out AI in classrooms is CSU Long Beach's use of an AI transcription program for students with disabilities. Faculty are also trying out a variety of AI applications through the Artificial Intelligence Educational Innovations Challenge, including using AI to create theatrical works and draft communications assignments.
At some Southern California campuses, students are using Futurenav Compass, an AI career exploration and placement platform developed by Educational Testing Service (ETS), an education and talent organization on whose board Chancellor García serves. The CSU made the tool available in Fall 2025 at seven campuses: CSU Dominguez Hills, CSU Fullerton, CSU Long Beach, CSU Los Angeles, CSU Northridge, Cal Poly Pomona, and CSU San Bernardino.
While these programs seem productive in some ways, we have grave concerns about the lack of faculty input on these initiatives, intellectual property rights, academic freedom, and privacy, not to mention that profit-seeking tech companies seem to be rapidly infiltrating our campuses at nearly every level. We will be closely monitoring faculty and student experiences and will respond accordingly.
At least some campuses, including Cal Poly San Luis Obispo, Cal Poly Maritime Academy, CSU Fullerton, CSU Long Beach, and San Jose State, have deeply troubling contracts with the company Flock Safety. Flock Safety is a surveillance technology company that produces automatic license plate readers (ALPRs), drones, and other video and audio surveillance equipment. Many agencies, including ICE, DHS, and local police have made use of Flock’s AI products to profile people.
It's important for our members to fight for AI safeguards because this is an anti-racism and social justice issue that affects our working conditions. Both historically and today, tools of surveillance have been used against those already marginalized by racism, sexism, xenophobia, anti-LGBTQ animus, and ableism, and against activists fighting to reform those oppressive systems. Any further surveillance is a potentially dangerous acceleration and incursion into our lives.
AI tools can be problematic because of their environmental impacts, privacy and data security risks, racism and social justice concerns, accuracy problems, exploitative labor practices, potential for plagiarism, and research linking their use to diminished critical thinking skills and emotional regulation.
We are also working to create AI protections in the state legislature. We are currently working with state Senator Sabrina Cervantes on Senate Bill 928, which is intended to protect CSU employees from the encroachment of AI. Cervantes introduced the intent bill on January 29.
As the legislative session and bargaining continue, we will keep you updated on our progress to create ethical and enforceable safeguards governing the use of AI. In the meantime, you can sign the petition urging Chancellor García not to renew the contract with OpenAI here.
Cancel ChatGPT Edu. Invest in Humans.
Join California Faculty Association
Join thousands of instructional faculty, librarians, counselors, and coaches to protect academic freedom, faculty rights, safe workplaces, higher education, and student learning, and to fight for racial and social justice.