
In the Fall of 2025, Adjunct Professor Olga Mack taught a course at UC Law San Francisco on Product Counseling, which examined the unique role product counsel plays at the intersection of law, technology, and business. Coach Frankie, an AI-driven product law coach, was piloted in the course with law students working through realistic product-law scenarios. In this article, Professor Mack shares her findings from this innovative pilot program:
February 16, 2026
What We Learned When Law Students Worked Alongside an AI Coach
When people talk about AI in legal education, the conversation often drifts toward efficiency. Faster research. Automated outlines. Better issue spotting. Those benefits matter, but they miss the more interesting question we explored during the Coach Frankie pilot in UC Law’s Product Counseling course: how does AI change the way future lawyers learn judgment?
Coach Frankie, an AI-driven product law coach, was piloted with law students working through realistic product-law scenarios. The aim was not to replace instruction or accelerate grading. It was to observe how students collaborate with an AI system when learning skills that require judgment, tradeoffs, and communication under uncertainty.
The pilot focused on three core competencies that practicing product lawyers recognize immediately: understanding the product lifecycle, making and framing risk decisions, and translating legal analysis into business-aligned advice. What emerged was not a story about AI doing more work, but about AI needing to behave differently.
Collaboration mattered more than correctness
The most consistent signal from the pilot was this: students performed better and stayed more engaged when Coach Frankie acted like a collaborator rather than an answer engine.
When Coach Frankie asked clarifying questions, reflected students’ reasoning back to them, and let them attempt answers before offering feedback, students leaned in. They thought through tradeoffs. They revised their framing. They treated the interaction as a working session rather than a quiz. This pattern aligns with what human factors research describes as an equal-partnership mode of collaboration, in which the human remains actively involved in shaping the outcome.
By contrast, when Coach Frankie became overly directive or repetitive, engagement dropped. Students described those moments as frustrating or confidence-eroding, even when the underlying legal guidance was sound. The takeaway for practitioners is familiar: people do not learn judgment by being told the answer. They learn it by wrestling with the problem.
Structure helped, but only when it evolved
Early in the pilot, structured scaffolding made a clear difference. Checklists, lifecycle maps, and staged exercises helped students orient themselves, especially when grappling with unfamiliar product contexts. Clear structure reduced cognitive load and allowed students to focus on reasoning rather than guessing what the exercise was asking for.
Feedback was most effective when it explained why an answer mattered in context, not just whether it was correct. Students consistently responded well to explanations that tied legal risk to business impact or stakeholder priorities. This mirrors real practice, where legal advice only lands if it connects to decision-making.
But structure had a shelf life. As students gained confidence, too much scaffolding began to feel constraining. Retention and satisfaction were strongest when early structure gradually gave way to autonomy. The lesson is not that structure is bad, but that it must adapt. Static instructional design does not match how people actually learn complex professional skills.
Realism beat repetition
Another clear signal was that realism mattered more than volume. Students consistently preferred fewer, richer scenarios over a higher number of repetitive multiple-choice questions. Engagement increased when exercises included stakeholder pushback, ambiguous tradeoffs, and incomplete information: the conditions that define real product counseling work.
Students were more tolerant of difficulty than of artificiality. Bugs, repeated prompts, or shallow variations undermined trust faster than hard questions did. This is an important distinction for anyone building or deploying AI systems in training environments. Users will forgive the challenge. They are less forgiving of experiences that feel mechanical or inattentive.
For practitioners, this echoes what happens in onboarding and mentoring. One realistic conversation teaches more than ten abstract hypotheticals. AI does not change that dynamic; it amplifies it.
One mode does not fit all learners
Perhaps the most important insight from the pilot was that not all students wanted the same kind of AI interaction. While many thrived in a collaborative, scaffolded mode, a subset of more advanced students wanted less guidance and more sparring-style engagement. They wanted ambiguity. They wanted pushback. They wanted to test their judgment without guardrails.
This divergence was not a flaw in the pilot. It was a signal. Experience level and agency preference matter, and AI systems that assume a single optimal interaction style will inevitably underserve part of their audience.
For legal education and professional training, this suggests that the future is not a single “AI tutor” model, but differentiated learning modes that adapt to where a learner is and how much control they want. That mirrors practice itself. Junior lawyers need structure and feedback. Senior lawyers need challenge and reflection.
What this means for practitioners and educators
The UC Law pilot reinforces a broader lesson that extends beyond the classroom. AI is most effective when it supports human judgment rather than attempting to substitute for it. Designing AI as a collaborative partner requires more thought than designing it as an answer machine, but the payoff is deeper learning and more durable skill development.
For law schools, this opens the door to using AI to teach what has always been hardest to teach: how to think, how to frame advice, and how to operate under uncertainty. For practicing lawyers and legal leaders, the findings map closely to how effective mentoring already works. The best mentors do not hand down conclusions. They ask questions, surface tradeoffs, and help others build judgment over time.
The Coach Frankie pilot was not about proving that AI belongs in legal education. It was about understanding how it belongs. The answer, at least from this cohort, is clear. When AI respects human agency and adapts to the learner, it becomes a powerful tool for developing the skills that matter most in practice.

Adjunct Professor Olga Mack
© 2026

