Case Study: Pennsylvania Supreme Court Adopts AI Policy
October 2, 2025
Courts across the country are grappling with issues posed by generative AI. We’ve seen several high-profile cases of lawyers being sanctioned for passing off AI’s work as their own, often including citations to cases and laws that don’t exist. However, some courts are establishing acceptable bounds for AI use. Most recently, the Pennsylvania Supreme Court adopted an internal AI policy for court personnel. Troutman Pepper Locke discusses the policy and how, ultimately, responsibility for completeness and accuracy lies with the user:
“Before using GenAI, court personnel must become and remain knowledgeable about GenAI’s “capabilities and limitations,” like hallucinations, biases, and inaccuracies. So, how can court personnel leverage approved GenAI tools? They may use such GenAI tools to assist with a broad range of tasks, including summarizing documents, conducting preliminary legal research, and drafting and editing their own work. But the user remains ultimately responsible for the completeness and accuracy of their work product. Pennsylvania courts may also “provide interactive chatbots or similar services to the public and self-represented litigants.”
This is a perfect example of a policy that addresses the responsibility challenge I discussed earlier this week. AI companies have admitted that errors in AI outputs are a statistical certainty; the only way to combat them is human intervention. By allowing personnel to use AI while making them directly responsible for the outcomes, the Pennsylvania Supreme Court has ensured that staff remain accountable for their AI use. The guidelines also address privacy concerns and require that all AI use go through approved vendors. Given the highly sensitive nature of court work, this policy is a good example of how to protect privacy and ensure accuracy when working with AI.