As artificial intelligence continues to evolve, the legal community is wrestling with a fundamental question: Should machines be allowed to make judgment calls once reserved solely for lawyers?
The legal industry has long been built on tradition, precedent, and human interpretation. Yet this week, OpenAI and PwC announced a partnership that could permanently reshape corporate legal services. PwC will begin using ChatGPT, running OpenAI's GPT-4o model, to assist with client interactions, document review, and legal research. To some, the move is revolutionary; to others, alarming.
From a corporate law standpoint, this raises two major concerns:
- Liability – Who is responsible if an AI-generated clause ends up costing a company millions?
- Confidentiality – Can firms trust that the use of LLMs won’t risk privileged information being mishandled?
These questions aren’t just theoretical; they are already playing out in boardrooms across the country. General counsels must now determine how far AI can go without overstepping the bounds of professional ethics or opening the door to future litigation.
We’re entering an era in which machine intelligence may not merely assist the law; it may shape it. The onus is on us, as future legal professionals, to ensure these innovations are deployed with caution, transparency, and a firm grip on the one thing that should never be automated: judgment.