Brian Gilmore Posted December 8, 2022

Anyone else tried playing around with the ChatGPT AI system by asking employee benefits questions? Not perfect, but you can definitely see where this is heading. https://chat.openai.com/
Luke Bailey Posted December 20, 2022

Brian, I'm impressed and not impressed. Basically, all I've seen so far from ChatGPT is intelligent cutting and pasting of what is out on the internet. Granted, it's a real achievement for it to figure out what information is relevant to the question, scrape it from the internet, and assemble it into an intelligible answer, but this is newsletter-type stuff, not an actual solution to a hard problem.

I read the NY Times article on ChatGPT a week ago, and the example that really struck me was the algebra question. The article linked to a post on Twitter: "A line parallel to y = 4x + 6 passes through (5, 10). What is the y-coordinate of the point where this line crosses the y-axis?" ChatGPT begins by explaining the problem, noting that we need to find a parallel line and do some algebra, about as well as a middle school math teacher at the chalkboard, but then boldly spits out a humorously wrong answer.

ChatGPT is just regurgitating, cleverly to be sure, the stuff it scrapes off the internet. Try asking it one of the more difficult questions that you have received on BenefitsLink over the last year and see what you get.
Bri Posted December 20, 2022

A line parallel to y = 4x + 6 must be of the form y = 4x + C for some constant C, since parallel lines have identical slopes. Since (5, 10) is on this line, 10 = 4(5) + C, so C = -10 and the line must be y = 4x - 10. The line crosses the y-axis when x = 0, so y = 4(0) - 10, or y = -10.

*beep*
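For anyone who wants to double-check Bri's arithmetic the way you'd double-check the bot's, here is a minimal Python sketch (purely illustrative, not from the thread; the variable names are invented):

```python
# Minimal sketch verifying the parallel-line algebra above.
slope = 4                    # parallel lines share the slope of y = 4x + 6
x0, y0 = 5, 10               # the new line passes through (5, 10)
c = y0 - slope * x0          # solve 10 = 4*5 + C, giving C = -10
assert c == -10              # matches Bri's constant

y_intercept = slope * 0 + c  # the line crosses the y-axis where x = 0
print(f"y = {slope}x + ({c}); y-intercept = {y_intercept}")  # y-intercept = -10
```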
Brian Gilmore Posted December 20, 2022

Yeah, my point was that only a week or two in, it's already pretty decent at answering basic EB questions. Imagine a year or two in? Or a decade? I would guess that over that kind of horizon, clients will be more interested in what the bot has to say than in our input. Or at least we'll constantly be double-checked and confronted with any differences in the AI analysis.
Luke Bailey Posted December 20, 2022

2 hours ago, Brian Gilmore said: "Imagine a year or two in? Or a decade?"

That's what they were saying about self-driving cars a decade ago, Brian. And that is a much lighter lift.

2 hours ago, Brian Gilmore said: "I would guess that clients will be more interested in what the bot has to say than our input over that kind of horizon."

I'd be willing to bet on that, Brian.

2 hours ago, Brian Gilmore said: "Or at least we'll constantly be double-checked and confronted by any differences in the AI analysis."

That will double the work. My son has done AI research for one of the major software companies and is currently finishing up law school. He tested ChatGPT for legal research and thought it would be helpful to folks fresh out of law school, as long as they checked each answer and only used ChatGPT as a starting point (which I agree with). As I told him, if someone had told me 20 years ago that there would be something out there like ChatGPT (or Google, Microsoft, or Apple translate) that could so flawlessly mimic the workings of a mediocre human mind, I would not have believed it. I would have thought that language was too complicated. On the other hand, I fully expected 20 years ago that by now we would have AI that could diagnose illnesses like House. To my chagrin, both of those predictions were wrong.
arshad Posted December 31, 2022

It's important to note that GPT is a machine learning model, which means that it uses statistical techniques to learn from the data it is trained on. As a result, the quality and characteristics of the training data can have a significant impact on the performance of the model. In the future we can expect better results, I hope.
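To make that point concrete, here is a toy sketch of statistical next-word prediction. It is nothing like GPT's actual architecture or scale, but it illustrates the same basic principle of learning patterns from training text, and why the training data shapes the output (the sample text and names below are invented for illustration):

```python
import random
from collections import defaultdict

# Toy next-word predictor: record which word follows which in the training
# text, then generate "plausible" continuations by sampling those patterns.
training_text = "the plan must be funded the plan must comply the plan may be amended"

follows = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

# Generate a short continuation starting from "the".
word = "the"
output = [word]
for _ in range(4):
    if word not in follows:
        break  # dead end: this word never appeared mid-text in the training data
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))  # e.g. "the plan must be funded" -- fluent, but not reasoned
```

If the training text is wrong or thin in some area, the model's fluent-sounding output will be wrong in exactly the same area, which is the point about training data quality.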
QDROphile Posted December 31, 2022

On 12/20/2022 at 10:14 AM, Luke Bailey said: "I would guess that clients will be more interested in what the bot has to say than our input over that kind of horizon."

I am reminded of my conviction that we got section 409A as a consequence of claims by “consultants” that our advice/interpretation of the nonqualified deferred compensation rules was too conservative. Note that the quoting function is illustratively mechanical: it attributes Brian Gilmore's statement to Luke Bailey.
Christine Roberts Posted January 31, 2023

I was curious about ChatGPT and threw it a few EB questions. With further prompts it would probably have gotten me closer to what I was looking for, which was a discussion of fundedness and the DOL trust non-enforcement policy. I haven't sorted out how I feel about this tool. I do know that my mom won't use ATMs, and I think the uptake of legal information from AI will be faster with each generation. Whether it will ever fully replace legal advice and strategy remains to be seen.
Bantais Posted July 7

On 12/20/2022 at 3:20 AM, Luke Bailey said: "Basically, all I've seen so far from ChatGPT is intelligent cutting and pasting of what is out on the internet. [...] Try asking it one of the more difficult questions that you have received on BenefitsLink over the last year and see what you get."

That's a fair take, and honestly a pretty balanced critique. I think it's important to keep in mind what these language models actually are: sophisticated pattern-matchers that generate plausible text based on enormous amounts of training data. They don't truly reason or understand, so they can easily give you a beautifully worded, but completely wrong, answer. Where they do shine is quickly organizing general information, drafting summaries, or helping think through straightforward problems. For anything involving rigorous logic or deeper subject expertise (like the algebra example you mentioned, or complex compliance scenarios from BenefitsLink), they're still hit or miss and always need human oversight. I'd say the technology is impressive for what it is, but you're absolutely right that it's not a drop-in replacement for actual expertise, at least not yet. If you want to see how these tools handle truly tough or niche questions, you could also try bouncing them around on https://overchat.ai/, where a lot of folks push them to their limits in more specialized discussions.