One of the problems with generative AI is that there are so many possibilities and so little conclusive data. How often does it hallucinate? Each system claims certain odds, but who can be certain? I’ve been mulling over a few hypotheticals where AI has a defined failure rate and the harms are clear and predictable. Of course there are dangers to generative AI beyond the hallucination rate, but I’d like to set all of that aside to puzzle over three variables: time, harm, and benefit.
Assume there are 100 private-school students and twice a year they each write a term paper independently. This year a new generative AI product will allow the students to write a term paper in one hour instead of twenty hours. The product is strictly banned by the students’ school. However, the school can only detect the use of the product in one paper out of a thousand: the rare case when the product hallucinates so distinctly that the error could only have come from generative AI. If all 100 students use AI to write their twice-yearly term papers for the next five years, those 100 students will write 1,000 term papers, and likely one of them will be caught cheating. He or she will likely be expelled, but the other 99 students will have saved 190 hours each, 18,810 hours in total, over those five years without any negative consequences (except a continued inability to generate their own term papers independently). Is this the right choice if these students spend their 18,810 hours vandalizing property and drinking frappuccinos at the mall? What if they spend their 18,810 hours learning anatomy and preparing to study medicine?
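For those who want to check the arithmetic, here is a minimal sketch in Python. The figures (100 students, two papers a year for five years, a twenty-hour task cut to one hour, a one-in-a-thousand detection rate) come straight from the hypothetical above; the expected-value framing is my own.

```python
# Back-of-the-envelope arithmetic for the student hypothetical.
students = 100
papers_per_year = 2
years = 5
hours_without_ai = 20
hours_with_ai = 1
detection_rate = 1 / 1000  # papers flagged per paper submitted

papers = students * papers_per_year * years             # 1,000 papers
expected_caught = papers * detection_rate               # 1 paper, on average
p_any_catch = 1 - (1 - detection_rate) ** papers        # ~0.63, so a catch is "likely"

papers_per_student = papers_per_year * years            # 10 papers each
saved_per_student = papers_per_student * (hours_without_ai - hours_with_ai)  # 190 hours
saved_by_uncaught = (students - 1) * saved_per_student  # 18,810 hours for the other 99

print(papers, expected_caught, round(p_any_catch, 2), saved_per_student, saved_by_uncaught)
# 1000 1.0 0.63 190 18810
```

Note that “likely one of them will be caught” is doing real work: with a one-in-a-thousand detection rate over 1,000 papers, the chance of at least one catch is about 63 percent, not a certainty.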
Is it fair to expel that single student if all indications suggest that the other 99 students have also used AI (based on the mall’s increased sales of frappuccinos)? Are we willing to have a justice that descends on only one offence in a thousand? What if there is an additional group of twenty brave (or under-resourced) students who forgo generative AI and spend their lives in the typical monotonous drudgery of unending schoolwork rather than socializing at the mall? Will they grow up to be hardworking, successful CEOs, or will they have lower grades and no chance to succeed in business compared to their classmates with AI-assisted high grades and years of free time for peer networking over coffee? Before generative AI we already knew that students with more money for writing assistance and more leisure time for collegial coffees would have a better chance of financial success in life. Does AI make this inequality worse?
Now assume there are 10 lawyers and that AI allows them to do a twenty-hour task in one hour. But like the students, one time in a thousand the AI will be so blatantly incorrect that it will be discovered as a mistake that could only have been made by AI. Do we disbar the lawyer who is caught in a reckless overreliance on AI? What if those ten lawyers can now represent 1,000 people in a few months, where before they began using AI they could only have helped 50? Do the 950 extra people served outweigh the one person whose case is ruined by the AI’s mistake? Are we more sympathetic to the guilty lawyer if the clients in question are needy individuals rather than corporate clients? Could a corporate client insist that it is willing to pay twenty times more in order to have legal representation free of AI? Could the legal profession accept the proposition of selling two tiers of legal services, one much more efficient but more error-prone, and one slower but surer? How would these tiers be regulated?
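The lawyers’ numbers work out the same way. Another minimal sketch, using only figures from the hypothetical (ten lawyers, a twenty-fold speed-up, 50 clients served before AI, a one-in-a-thousand case-ruining error rate); the assumption that client capacity scales linearly with the speed-up is mine.

```python
# Back-of-the-envelope arithmetic for the lawyer hypothetical.
lawyers = 10                  # the group size from the hypothetical
speedup = 20                  # a twenty-hour task done in one hour
clients_without_ai = 50       # clients the group could serve in a few months, pre-AI
error_rate = 1 / 1000         # cases ruined by a blatant AI error

clients_with_ai = clients_without_ai * speedup        # 1,000 clients
extra_clients = clients_with_ai - clients_without_ai  # 950 more people served
expected_ruined = clients_with_ai * error_rate        # 1 ruined case, on average

print(clients_with_ai, extra_clients, expected_ruined)
# 1000 950 1.0
```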
It would be naive to assume that in our current system all representation is equal. We already have tiers, and the most important line of demarcation is the divide between those who can pay for an attorney and those who attempt to represent themselves. Will AI decrease the cost of legal services or expand access to representation? It’s too soon to tell.
Likely there is some middle ground between these extremes of all-AI and no-AI. Perhaps AI, when properly double-checked by a human, will make lawyers and students ten times faster rather than twenty times faster, but without the dangerous hallucinations. Even if this is the best solution, someone will always choose to gamble on “unsupervised” AI, betting that the benefit of being faster will outweigh the risk of being caught. There are no easy answers.
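One way to see why the gamble tempts: compare the expected hours of work under each regime. A last sketch, where the ten-times and twenty-times figures come from the paragraph above, and the break-even framing, which prices the penalty for being caught in hours, is a simplification of my own.

```python
# Expected-value comparison: supervised AI (10x faster, no hallucinations)
# versus unsupervised AI (20x faster, 1-in-1,000 chance of being caught).
task_hours = 20
tasks = 1000                  # e.g., the 1,000 papers or cases from above
p_caught = 1 / 1000           # per unsupervised task

supervised_hours = tasks * task_hours / 10    # 2,000 hours of work
unsupervised_hours = tasks * task_hours / 20  # 1,000 hours of work
marginal_saving = supervised_hours - unsupervised_hours  # 1,000 hours saved

expected_catches = tasks * p_caught           # 1 catch, on average
break_even_penalty = marginal_saving / expected_catches  # 1,000 hours per catch

print(marginal_saving, expected_catches, break_even_penalty)
# 1000.0 1.0 1000.0
```

On these numbers, the unsupervised gambler comes out ahead unless a single catch costs the equivalent of more than a thousand hours. Expulsion or disbarment may well cost that much, which is part of what makes the question hard.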