Generative AI in Legal Education: A Two-Year Experiment with ChatGPT
Discover how integrating AI into legal education enhances student performance and prepares them for the future of law.
AI tools are increasingly used by professionals—especially lawyers—to search for relevant clauses, review legal documents, and summarize thousands of pages with remarkable speed. Yet legal educators worry that heavy reliance on AI may weaken students’ ability to construct legal arguments and engage in deep analytical thinking. Is AI helping students learn more effectively, or is it undermining their ability to think? This is one of the central questions of the AI era, and the research reviewed here examines it directly.
Paper reviewed: Schrepel, Thibault, Generative AI in Legal Education: A Two-Year Experiment with ChatGPT (August 22, 2025). Law, Innovation and Technology (forthcoming, 2026). Available at SSRN: https://ssrn.com/abstract=5401422 or http://dx.doi.org/10.2139/ssrn.5401422
Summary
A two-year experiment with ChatGPT in legal education reveals that structured AI training significantly improves student performance, particularly in open-ended tasks. The study suggests that law schools should integrate AI into their curricula rather than imposing blanket bans.
Key Findings
- The study conducted a two-year experiment with ChatGPT in a legal education setting, dividing students into three groups: one prohibited from using AI, one exposed to AI without guidance, and one receiving structured training on using AI effectively.
- Structured training on AI usage resulted in the highest performance in both years, particularly in open-ended legal reasoning and drafting tasks.
- The gap between the structured training group and others narrowed in the second year as overall familiarity with AI increased among students.
- AI had minimal impact on factual recall and multiple-choice test scores, indicating that memorization-based tasks remain largely unaffected by AI use.
- The experiment suggests that prohibiting AI leaves students at a disadvantage, while integrating AI into legal education enhances performance.
Business and Policy Implications
- Law schools should resist imposing one-size-fits-all policies on AI usage and instead allow professors the freedom to experiment with different approaches.
- Investing in AI literacy for both students and professors is crucial for law schools to remain competitive and relevant.
- Assessments should be redesigned to focus on higher-order skills and authentic tasks that AI cannot easily replicate, such as oral examinations, in-class simulations, or multi-stage projects.
- Long-form writing assignments, like master's theses, may need to be reevaluated or replaced with alternative assessments that better test the skills valuable in an AI-driven world.
Introduction
The integration of generative AI, such as ChatGPT, into legal education is a topic of growing debate. While some see AI as a tool that can enhance legal reasoning and writing, others warn of its potential to undermine skill acquisition and academic integrity. This report explores the potential and challenges of using generative AI in legal education through a two-year experiment.
Background and Context
Generative AI tools have become increasingly accessible to students, offering both opportunities and challenges for legal education. The literature is divided on the impact of AI, with some studies highlighting its potential to assist in legal drafting and reasoning, while others raise concerns about shortcut learning and degraded skill acquisition. The experiment aimed to test the impact of different teaching methodologies around ChatGPT on students' legal writing and reasoning skills.
The study involved 66 students in the first year and 164 students in the second year, divided into three groups: one with no AI usage, one with minimal AI guidance, and one with structured AI training. The task involved improving a provision of the European AI Act, with assessments conducted through multiple-choice tests and take-home exams.
The findings suggest that structured AI training led to the highest performance, especially in open-ended legal reasoning tasks. However, the advantage of structured training diminished as overall AI familiarity increased among students in the second year. The study also found that AI had little impact on factual recall, as measured by multiple-choice tests.
The implications of this study are significant for legal education. Law schools should consider maintaining pedagogical freedom, allowing professors to experiment with AI integration in their teaching. There is also a need to invest in AI literacy for both students and faculty to ensure they can effectively use AI tools.
The study suggests that assessments should be redesigned to focus on skills that are harder for AI to replicate, such as critical thinking and nuanced judgment. Long-form assignments may need to be rethought in light of AI's capabilities, potentially being replaced or supplemented with alternatives that test skills that remain valuable in a world with AI.
Overall, the experiment highlights the potential benefits of integrating AI into legal education when done thoughtfully. It underscores the importance of balancing the use of AI with the development of critical skills that remain essential for legal professionals.
Main Results
The experiment conducted over two years (2024 and 2025) within the "Law of AI" course provides valuable insights into the impact of different teaching methodologies involving ChatGPT on students' legal writing and reasoning skills.
Key Findings
- Structured AI Training Outperforms Other Methods: Students who received structured training on using ChatGPT (Group 3) consistently outperformed their peers in both years, particularly in open-ended legal reasoning and drafting tasks.
- Minimal Variance in Multiple-Choice Tests: The scores on multiple-choice tests, which assessed factual recall and conceptual understanding, showed minimal variance across the three groups in both years. This suggests that AI usage had little impact on memorization-based tasks.
- Significant Difference in Take-Home Exams: The take-home exam results, which required open-ended legal analysis and critical reasoning, revealed a more pronounced divergence among the groups. Group 3 (structured AI training) achieved the highest scores in both years.
- Convergence of Scores in 2025: The performance gap between groups narrowed in 2025 as overall familiarity with generative AI spread among students. This indicates that while structured AI training still offered some advantage, the benefit became less dramatic as students became more familiar with AI tools outside the classroom.
Observations on Classroom Dynamics
- Group 1 ("No AI"): Students prohibited from using ChatGPT focused primarily on superficial language-level refinements and struggled to generate deeper revisions independently.
- Group 2 ("Minimal AI Guidance"): Students provided with ChatGPT suggestions without formal guidance largely adopted AI-generated edits at face value, often without understanding the legal implications. Some demonstrated critical thinking, but the lack of guidance resulted in inconsistent engagement.
- Group 3 ("Structured AI Training"): Students who received training on effectively using ChatGPT engaged in an iterative dialogue with the AI, testing multiple formulations and critically evaluating the outputs. This approach led to deeper conceptual clarity and stronger revisions.
Methodology Insights
The experiment was designed to assess the impact of different teaching approaches to ChatGPT on students' ability to refine legal text. The methodology involved dividing students into three groups:
- Group 1: "No AI" - Students were prohibited from using ChatGPT.
- Group 2: "Minimal AI Guidance" - Students were provided with ChatGPT-generated suggestions but received no formal training on using the tool.
- Group 3: "Structured AI Training" - Students received comprehensive training on legal prompt engineering and critically engaging with ChatGPT outputs.
This design allowed for a comparison of the effectiveness of different approaches to integrating AI into legal education.
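The between-group comparison described above can be sketched in a few lines of code. The scores below are purely illustrative placeholders, not the study's actual data; the sketch only shows the shape of the analysis: compute each group's mean on the open-ended assessment and rank the groups.

```python
from statistics import mean

# Hypothetical take-home exam scores (0-10 scale) for the three groups.
# Illustrative numbers only -- NOT the study's reported results.
scores = {
    "Group 1 (No AI)": [6.0, 5.8, 6.2, 6.0],
    "Group 2 (Minimal guidance)": [6.4, 6.6, 6.2, 6.8],
    "Group 3 (Structured training)": [7.2, 7.8, 7.4, 7.6],
}

# Compute each group's mean, mirroring the between-group design.
means = {group: round(mean(s), 2) for group, s in scores.items()}

# Rank groups from highest to lowest mean score.
for group, m in sorted(means.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{m:.2f}  {group}")
```

A real replication would of course add a significance test and control for cohort effects; this only illustrates the comparison the study's design enables.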
Importance of the Methodology
The experiment's methodology is significant because it:
- Tests Different AI Integration Strategies: By comparing three distinct approaches, the study provides insights into how various teaching methods affect student outcomes.
- Highlights the Value of Structured Training: The results underscore the importance of teaching students not just to use AI tools but to critically evaluate and improve upon AI-generated outputs.
- Offers Practical Implications for Legal Education: The findings suggest that law schools should invest in AI literacy training for both students and faculty to maximize the benefits of AI integration.
Analysis and Interpretation
The experiment's results have several key implications for legal education:
- AI Literacy is Crucial: The performance of Group 3 highlights the importance of teaching students how to effectively use and critically evaluate AI tools.
- Balancing AI Use with Critical Skills: While AI can enhance certain aspects of legal education, it is essential to balance AI usage with the development of critical skills that remain vital for legal professionals.
- Rethinking Assessments: The findings suggest that assessments should be redesigned to focus on skills that are harder for AI to replicate, such as critical thinking and nuanced judgment.
- Pedagogical Freedom and Experimentation: The study emphasizes the need for law schools to maintain pedagogical freedom, allowing professors to experiment with different AI integration strategies.
Future Directions
The experiment's results point to several areas for future research and development:
- Long-Term Impact of AI Integration: Further studies could investigate the long-term effects of integrating AI into legal education on students' skills and professional preparedness.
- Broader Applications of AI Literacy: The importance of AI literacy extends beyond legal education. Future research could explore how to effectively teach AI literacy across various disciplines.
- Evolving Assessments and Curricula: As AI continues to evolve, legal education will need to adapt by rethinking assessments and curricula to ensure they remain relevant and effective.
By continuing to explore the integration of AI into legal education, educators can harness the potential benefits of AI while ensuring that students develop the critical skills necessary for success in the legal profession.
Practical Implications
The study on generative AI in legal education has significant practical implications for law schools, educators, and students. The findings suggest that AI can be a valuable tool in enhancing legal education, but its integration requires careful consideration.
Real-World Applications
- Personalized Learning: AI can facilitate personalized learning experiences for students, allowing them to learn at their own pace and receive tailored feedback.
- Efficient Research: AI-powered tools can aid students in conducting legal research more efficiently, freeing up time for higher-order thinking and analysis.
- Enhanced Engagement: AI can increase student engagement by providing interactive and responsive learning experiences.
Strategic Implications
- Faculty Training: Law schools should invest in training their faculty to effectively integrate AI into their teaching practices.
- Curriculum Adaptation: Law schools should consider adapting their curricula to incorporate AI literacy and critical thinking skills.
- Assessment Redesign: Assessments should be redesigned to focus on higher-order skills and authentic tasks that AI cannot easily replicate.
Who Should Care
- Law Schools: Law schools should be aware of the potential benefits and challenges of integrating AI into their curricula.
- Educators: Educators should be trained to effectively use AI in their teaching practices and to critically evaluate its impact on student learning.
- Students: Students should be aware of the potential benefits and limitations of AI in legal education and develop the skills necessary to effectively use AI tools.
Actionable Recommendations
Specific Actions
- Maintain Pedagogical Freedom: Law schools should resist the temptation to impose one-size-fits-all approaches to AI integration, instead allowing professors to experiment with different methods.
- Invest in Faculty Training: Law schools should invest in training their faculty to effectively integrate AI into their teaching practices.
- Treat Pedagogical Experiments as Scientific Evidence: Law schools should systematically document and evaluate AI-related pedagogical experiments to identify best practices.
Implementation Considerations
- Contextual Factors: The effectiveness of AI integration will depend on various contextual factors, including the specific AI tools used, the student population, and the educational setting.
- Ongoing Evaluation: The impact of AI integration should be continuously evaluated and assessed to ensure that it is meeting its intended goals.
Conclusion
The integration of generative AI into legal education has the potential to significantly enhance student learning outcomes. By understanding the practical implications, strategic implications, and actionable recommendations outlined in this study, law schools and educators can harness the benefits of AI while ensuring that students develop the critical skills necessary for success in the legal profession.
Summary of Main Takeaways
- AI can be a valuable tool in enhancing legal education, but its integration requires careful consideration.
- Law schools should invest in training their faculty to effectively integrate AI into their teaching practices.
- Assessments should be redesigned to focus on higher-order skills and authentic tasks that AI cannot easily replicate.
Final Thoughts
The future of legal education will likely be shaped by the continued development and integration of AI. By embracing this change and adapting their teaching practices accordingly, law schools and educators can ensure that students are well-prepared to succeed in a rapidly evolving legal landscape. As the field continues to evolve, it is essential to prioritize evidence-based experimentation and ongoing evaluation to ensure that AI integration is effective and beneficial.