The Akron Legal News


Vorys attorney teaches the ‘do’s and don’ts’ about AI Chats

RICHARD WEINER
Technology for Lawyers

Published: April 14, 2023

Akron-based attorney Charles F. Billington, of the Cleveland office of Vorys, Sater, Seymour and Pease, spends a lot of his time talking to his clients about the dangers of artificial intelligence (AI).
In particular, as an employment lawyer, his advice focuses on AI chats like ChatGPT (3.5 and now 4) and the many ways these tools can be used and abused in the office.
AI chat can be useful for a number of internal company processes, said Billington, but it also carries real dangers.
On the positive side, it can be used to screen for or attract potential talent, create various company materials (to an extent), communicate with customers through customer service platforms, or create marketing materials.
But each one of those can come with a downside, and Billington said he is consulting with clients to find ways to use this technology without being dragged down by its limitations.
Billington, who goes by “Chaz,” received his juris doctor degree from Cleveland State University’s Cleveland-Marshall College of Law after receiving his bachelor’s degree from The University of Akron.
After law school, he practiced at Vorys from 2008 to 2011, worked in the employment law section at Ogletree Deakins from 2011 to 2015, worked in HR at AmTrust from 2015 to 2018, spent a year as assistant general counsel in the HR group at Parker Hannifin, and then returned to Vorys in 2019.
Billington said that the first danger of using AI chatbots is that employees may see them as replacements for lawyers.
“People in human resources may think that (chatbots) are a substitute for a lawyer, in, for instance, writing an employment contract,” he said. “An HR person can ask a chatbot to write an employment contract and it will come up with something that looks like an employment contract.”
But each contract has to be personalized, Billington said, by HR and by attorneys and administrators.
“It may be a good start,” he said. “But it’s not enough.”
In this field, the ground under your feet moves too quickly for anyone to think there is some kind of settled legal position.
When ChatGPT 3.5 was released a few months ago, Billington, like roughly 100 million other people (according to OpenAI, the company behind ChatGPT), gave it a tryout.
“I used it to generate three documents (generally used by HR): A contract, a waiver and a policy handbook,” he said. The generated documents were “a good place to start—maybe 80% of what I would need.”
He noted that the inherent problems with using an AI chatbot are at least twofold. The first, as noted above, is the false sense of security a worker might have in thinking that the bot had created something the company could use.
The second, and perhaps even more insidious, problem is that the biases of the programmers who set the parameters for training the chatbot can leak into the chatbot’s results.
For an example of this, Billington talked about an HR bot that evaluated resumes.
The bot was trained to recognize preferred job-candidate qualities from the face of those resumes. But the bot was trained by being fed only men’s resumes.
So that bot overwhelmingly chose men as successful job seekers.
That prior story was on a small scale.
OpenAI said that ChatGPT-4 was trained with roughly a trillion parameters, on some licensed data and some data created by people who were neither informed nor compensated by OpenAI.
Who is training that bot? And on what? ChatGPT, for instance, is trained on a number of forums where people of color, women and other minorities are not appropriately represented.
As of this writing, virtually one new AI chatbot a day finds its way onto the internet.
Nobody in HR, and nobody chasing the law in this area, can keep up.
Nobody.
It is challenge enough just to know what is out there; it is impossible to know all of the implications at this point.
Further, the training data behind those parameters may contain any amount of bad information.
In fact, Google’s newly released AI bot, Bard, has a disclaimer at the bottom of the search page that says it “may display inaccurate or offensive information….”
There is, in fact, a boutique law firm/data science firm in New York called BNH.AI (www.bnh.ai) which has developed a tool to test AI bias risks, particularly in the HR realm.
Billington also discussed a third potential problem with AI chatbots: employee misuse.
For instance, if an employee is searching for information that a company needs and is using an inaccurate chatbot, the possibilities of inaccurate information being fed into a decision-making process are endless.
Further, a chatbot could create marketing materials using copyrighted pictures or text.
Or the opposite could happen, where an employee uploads proprietary company information to the AI.
There is no end in sight to the growth of AI chatbots, with all of their intended and unintended powers.
There is, likewise, no end to employers’ need for legal advice in this arena.
Chaz Billington III can keep himself busy in this area of law for a long time.

