Spartan Lawyer Winter 2019


Associate Dean Daniel D. Barnhizer, professor of law and the Bradford Stone Faculty Scholar, teaches in the areas of contracts, commercial law, business organizations, and taxation. He theorizes about the social implications of AI with his father, David Barnhizer, with whom he recently co-authored The Artificial Intelligence Contagion (Clarity Press, 2019).

Dean Barnhizer is also a competitive kettlebell lifter and an avid outdoorsman. He escapes from the law to his secret woodworking lair where he enjoys crafting cigar box guitars and ukuleles.

I have a great job. From my scholarship to my teaching, I find legal academia exhilarating and rewarding. (Well, perhaps it’s a stretch to describe my administrative duties as “exhilarating,” but you take my point.) The “professor of law” designation is more than a description of my tasks; it’s a key component of my identity.

Work, at its most fundamental level, can provide us with an identity: not only with things to do, but with problems to solve, difficulties to overcome, achievements to which we can aspire. To experience satisfaction, humans need to work, strive, and achieve.

In this era of increasing automation and artificial intelligence (AI), I see a troubling trend toward treating employees not as human beings, but as fungible (and comparatively inefficient) units of labor. AI has limitless promise for increasing productivity, but we already see the costs associated with this promise. When technology replaces our professional identities, it will also replace many of the human interactions and activities by which we ordinarily define our worth. It’s a complex subject: AI, robotics, and automation will change not only how we produce things, but how we see ourselves and how we organize our society.

Capitalism and markets have delivered the greatest increase in wealth and welfare in human history. Technological developments have radically reduced production costs, and markets adapt rapidly to incorporate changes in production and business models. But such developments also come with costs: Henry Ford’s Taylorist automation of assembly-line production destroyed jobs. In the process of creative destruction described by Joseph Schumpeter, however, the blacksmiths of earlier eras soon traded handwork in the smithy for tool-and-die manufacture in the factory. The problem is that Schumpeter’s model describes past technological revolutions – it’s unclear that creative destruction will continue to operate in a post-scarcity economy.

If we lawyers believe our work is somehow immune to the contagion of artificial intelligence, it’s time to reconsider. The ancient (and costly) business model of law – individual practitioners copying the decisions that have been made before – will not continue as we’ve known it. Lawyers should certainly use AI and other forms of automation to supplement our practices, but I’m concerned that automation will eventually supersede human judgment. Expert systems that comb through millions of cases with incredible speed and accuracy could easily tempt judges and lawyers to substitute that automated output for their own judgment.

There are advantages, of course; big data and machine learning will increase access to justice, making the system less biased and more consistent. But I believe that we will miss the humanity of our practice – rigorous argument against a skilled opponent, the application of compassion, the ability to change our minds when presented with new information – once it is lost.

Rather than decrying these fast, sweeping, and inexorable changes, we need to start deciding not just what we want to do, but who we want to be in the coming decades.