Laura Counts, UC Berkeley Haas
While conducting research on how AI was changing daily work at a U.S. technology company, UC Berkeley Haas doctoral student Xingqi Maggie Ye noticed a pattern that raised a provocative question: What if AI is intensifying work rather than reducing it?
Ye’s eight-month ethnographic study, co-authored by associate professor Aruna Ranganathan and featured in Harvard Business Review, points to exactly that dynamic. In their observations and interviews with employees of the 200-person company, the researchers found that generative AI didn’t free up time — it expanded what workers felt capable of taking on, and what they were willing to take on.
“…Employees worked at a faster pace, took on a broader scope of tasks, and extended work into more hours of the day, often without being asked to do so,” Ye and Ranganathan wrote of their in-progress research. What initially felt like excitement, experimentation, and momentum had quietly accumulated into something that was harder to sustain, the study suggests.
We asked Ye, Ph.D. ’28, to walk us through what she and Ranganathan found, why it should concern employers, and what organizations can do before that cycle takes hold.
UC Berkeley Haas doctoral student Xingqi Maggie Ye
Q: What sparked this project?
A: We did not start with a fixed hypothesis about whether AI reduces work or increases it. We simply wanted to understand, in a grounded way, how generative AI was shaping everyday work practices. As I spent time observing and talking to people, a pattern around work effort began to emerge that didn’t quite align with the dominant narrative. That was the point when we realized there was something interesting to theorize.
Q: Can you briefly walk us through your study design and methodology? What were you observing over those eight months?
A: Our study is based on an eight-month ethnography at a technology company where employees had broad access to generative AI tools. I was on site regularly, observing work in real time—how people structured their days, how they moved between tasks, which tools they used for different kinds of work, how those tools fit into their routines, and so on. I attended meetings and participated in everyday conversations to understand how AI was being discussed, normalized, or debated within the organization. In addition, I conducted more than 40 semi-structured interviews across functional groups. In those interviews, I asked people to walk me through their workflows and reflected with them on what changed after AI entered the picture, including what they now attempted that they wouldn’t have before, how they allocated their time, and how they felt at the end of the day.
Q: You found that AI “intensifies” work rather than reduces it. What does that look like in practice? What were the main ways you saw this play out?
A: In our study, intensification took three main forms in practice. First, people began taking on work that previously would have belonged to someone else or might not have been attempted at all. The scope of what counted as “my job” widened. Second, because AI makes it easy to start and continue tasks, work seeped into moments that used to function as pauses. People would send prompts during lunch, before meetings, or in the evening when an idea came to mind. This dissolved some of the natural stopping points in the workday. Third, workers increasingly kept multiple threads alive at once. They would run AI processes in the background while reviewing code, drafting documents, or attending meetings. Some even ran multiple AI agents simultaneously. This created a rhythm where both the human and the machine were constantly in motion.
Q: Some employers might see it as a win that employees are doing more work voluntarily. Why should they be concerned about the patterns you observed?
A: I can see why some organizations might see this as a win. If employees are proactively taking on more and moving faster, that can look like the productivity promise being realized. The challenge is that what appears to be a productivity boost in the short run can become harder to sustain. As task scope expands and multiple AI-assisted workflows run in parallel, the workday becomes denser and more cognitively demanding. Because this expansion often feels self-driven and even exciting at first, expectations can gradually reset, and what was once extra effort becomes standard performance. That’s where a vicious cycle can form: increased capability leads to increased output, which leads to higher expectations, which then pressures further expansion. Over time, constant switching and reduced recovery can impair judgment and increase errors, and organizations may struggle to distinguish genuine productivity gains from unsustainable intensity.
Q: What most surprised you in these findings?
A: What surprised me most was the contrast between how people described their moment-to-moment engagement and how they described their overall experience. In micro moments of prompting, iterating, and experimenting, people talked about momentum and a sense of expanded capability. But when they stepped back and reflected on their broader work experience, a different tone sometimes emerged. They described feeling busier, more stretched, or less able to fully disconnect. That contrast suggests that intensification can feel positive in the short bursts that make up the day, while the cumulative effect creates strain over time.
Q: You propose the concept of an “AI practice” as a solution. Can you explain what that means and give some concrete examples of how organizations might implement it?
A: When we talk about building an “AI practice,” we mean being intentional about the rhythm and boundaries of AI-enabled work rather than simply accelerating because the technology makes it possible. In practical terms, that might include building in intentional pauses—brief, structured moments before major decisions to surface a counterargument or explicitly link a choice to organizational goals, so speed doesn’t crowd out reflection. It also involves sequencing: instead of reacting to every AI-generated output as soon as it appears, teams can batch non-urgent updates, protect focus windows, and let work move forward in coherent phases rather than in a constant state of interruption. And finally, it requires human grounding, such as protecting time for check-ins, shared reflection, and dialogue, so work doesn’t become entirely solo and tool-mediated. The intention is not to slow innovation, but to ensure that productivity gains remain aligned, thoughtful, and sustainable over time.
You can read the original version of this article at the UC Berkeley Haas website: https://newsroom.haas.berkeley.edu/ai-promised-to-free-up-workers-time-uc-berkeley-haas-researchers-found-the-opposite/