Read the report: “Responsible Artificial Intelligence: Recommendations to Guide the University of California’s Artificial Intelligence Strategy”

University of California President Michael V. Drake, M.D., has adopted a set of recommendations to guide the safe and responsible deployment of artificial intelligence in UC operations.

With Drake’s action, UC becomes one of the first universities in the nation to establish overarching principles for the responsible use of artificial intelligence (AI) and a governance process that prioritizes transparency and accountability in decisions about when and how AI is deployed.

“Artificial intelligence holds great potential, but it must be used with appropriate care and caution,” President Drake said. “The Presidential Working Group on Artificial Intelligence has given us a road map for deploying this promising technology in a way that protects our community and reflects our values, including non-discrimination, safety and privacy.”

Artificial intelligence refers to machines, computer programs and other tools that are capable of learning and problem-solving. “Increasingly, AI is being deployed in higher education settings to enhance efficiency, refine decision making and improve service delivery. Yet its use can pose significant risks,” said Stuart Russell, professor of computer science at UC Berkeley and co-chair of the working group. “Unrepresentative data sets or poor model design, for example, can unwittingly exacerbate problems of discrimination and bias.”

To navigate that thorny terrain, President Drake charged an interdisciplinary panel of UC experts in August 2020 with developing recommendations for appropriate oversight of AI in university operations. The group was made up of 32 faculty and staff from all 10 campuses, and reflected a wide range of disciplines, including computer science and engineering, law and policy, medicine and social sciences.

As part of its efforts, the working group conducted interviews with dozens of experts and stakeholders across UC and administered a survey to campus chief information officers and chief technology officers to better understand how AI is implemented and the governance and oversight mechanisms in place, said Alex Bustamante, UC’s chief compliance and audit officer, who co-chaired the working group.

“Overwhelmingly, survey respondents were concerned about the risks of AI-enabled tools, particularly the risk of bias and discrimination,” Bustamante said. “The UC community wanted appropriate oversight mechanisms. Our recommendations establish a governance process for ethical decision making in campus procurement, development and monitoring of AI.”

UC will now take steps to operationalize the working group’s key recommendations:

  1. Institutionalize the UC Responsible AI Principles in procurement and oversight practices;
  2. Establish campus-level councils and systemwide coordination to further the principles and guidance from the working group;
  3. Develop a risk and impact assessment strategy to evaluate AI-enabled technologies during procurement and throughout a tool’s operational lifetime;
  4. Document AI-enabled technologies in a public database to promote transparency and accountability.

Working group co-chair Brandie Nonnecke, founding director of the CITRIS Policy Lab at UC Berkeley and an expert in AI governance and ethics, said Drake’s adoption of the recommendations is likely to draw intense interest across the higher education sector.

“We are one of the first universities, and the largest public university system, to develop a governance process for the use of AI,” Nonnecke said. “My hope is that these recommendations inspire other universities to establish similar guardrails.”

Universities are beginning to adopt AI-enabled tools for tasks ranging from chatbots that answer common admissions questions to automated scanners that review résumés from job applicants. But such tools are often purchased and deployed on an ad hoc basis, Nonnecke said, and lack the kind of systematic process for gauging fairness, accuracy and reliability that UC is now putting in place.

“It’s good we’re setting up these processes now. Other entities have deployed AI and then realized that it’s producing discriminatory or less efficient outcomes. We’re at a critical point where we can establish governance mechanisms that provide necessary scrutiny and oversight,” she said.

The working group focused its research and findings on four areas where the use of AI in UC operations posed the greatest risk of harm: health; human resources; policing and campus safety; and the student experience. Within each of those areas, the group examined how AI is currently deployed, or is likely to be deployed, at UC and made recommendations.

“We go through functional use cases to identify how we can use our principles to guide effective strategies to mitigate negative effects of AI,” Nonnecke said. “This work was important in showing UC how to effectively translate principles into sound practices.”

“We’re also calling for a public database of all AI-enabled tools that pose greater than moderate risk to individual rights. Transparency in the use of AI-enabled tools is critical to ensuring our actions are accountable to the ethics and values of our community.”