Too much has been written about ChatGPT. It’s an extraordinary piece of technology that can describe how to recode a variable in Stata in the poetic style of Sir Gawain and the Green Knight. Folks have discussed the many situations where ChatGPT is extraordinary and where it falls short. I thought I’d share one exercise that illustrates its shortcomings.
I thought it might be a good source for identifying when and where specific pieces of legislation were passed. These facts should be part of the training data ChatGPT uses, and a law’s passage either did or didn’t happen. So we shouldn’t be in the realm of bullshit; this should be right in the technology’s wheelhouse.
I occasionally write about Right to Work laws, so I thought I’d use these as a test case. Two states prove problematic: Texas and Indiana. Indiana briefly had a Right to Work law from the mid-1950s to the early 1960s. The state then repealed it, only to pass it again in 2012. Texas functionally passed Right to Work in 1947 (see https://lrl.texas.gov/scanned/Housejournals/50/03041947_29_600.pdf, page 610 for the Rep side). In 1993 Texas made major adjustments to the law, formalizing it and fleshing it out. But I think you’d have a hard time arguing that Texas did not have Right to Work prior to 1993 (some pro-RTW websites list Texas as non-RTW until 1993).
I asked ChatGPT about the timing of Right to Work passage. Here’s what happened.
A few things:
In simple cases, like Florida, ChatGPT was correct.
For Indiana, ChatGPT twice claimed with confidence that Indiana had no RTW law prior to 2012. Only when I corrected it with my own pre-existing knowledge did the program update its claim. I’m not sure why it would state with certainty a claim that was obviously false, especially since it seems to have had the specific information in its training data. If I didn’t know that Indiana briefly passed RTW in the 1950s, I think it would be reasonable to conclude this state was a simple case like Florida. And I’d come away with an incorrect understanding of the world.
Look at the hard pushback regarding Texas. Even when I cite a great article by Marc Dixon, ChatGPT does a funny dance: “I’m so sorry. You’re right and I’m wrong. Texas followed your argument, although I, ChatGPT, remain correct and Texas had no RTW until 1993.” That’s super weird. It makes me wonder whether I could push back with incorrect information and get the same equivocating answer: “I’m so sorry, you’re correct, Washington State has been a RTW state since 1945, although the state has no RTW law. I’m sorry for my error.” I also wonder why it’s so adamant about sticking with its, at best, questionable answer.
Whether or not a state passed a Right to Work law is pretty cut and dried. I admit that the Texas case is somewhat ambiguous, but I don’t see how you could look at the link I posted above or the article I cited in the chat and not think, “OK, Texas definitely passed everything that constitutes RTW in 1947, perhaps under a different name.”
I think this is a pretty serious issue for ChatGPT. Whether a law passed or not is exactly the kind of factual question this program should be ideal for. This really undercuts one of the main benefits of the program for me. Folks have described it as a master bullshit generator, and I really think that’s true. Unless you already have a sufficient stock of knowledge to assess its answers, don’t use ChatGPT to learn about true-or-false pieces of history.