13 Comments
Feb 2, 2023

Terrific piece, though I'd argue that truth is actually not completely irrelevant to fiction and other art, and that you might have nailed the reason why those ChatGPT Nick Cave lyrics aren't going to haunt anyone's dreams: they're just not coming from a place of truth.

Without that magic ingredient of truth to work from, stories and songs don't truly connect with life. That's probably why, if you get ChatGPT to generate a horror story about a Panera Bread visit gone wrong, the result is wacky and impressive, but at the same time is sort of meaningless, forgettable BS'ing that won't stick with you the way something that tried to dig into the truth of being a human being would.


A typical English-speaking user of ChatGPT will ask a question about a subject well known to them from American or British culture, find some mistaken detail in ChatGPT’s response, and say: “There, I am better than you!” OK, so ChatGPT gets a C for that question, while this English user gets an A or B. Now ask ChatGPT a similar question, but related to a foreign culture, say Indian or Indonesian. ChatGPT would likely still get a C, while this English user would likely get an E or F. Now ask a similar question written in a foreign language, say Vietnamese, that has to be answered in French. ChatGPT would still get a C, while this English user wouldn’t understand a single word of the Vietnamese question, let alone be able to answer it in French. The point is: users are focusing on the 0.3% of situations where they might get a higher grade, and being completely obtuse about the 99.7% where ChatGPT irreversibly outclasses and outperforms them.


I enjoy your post(s) and agree with most of it. However, I do wonder if your perspective on learning is focused on high-level education and/or students who are ambitious and genuinely want to learn. I am an associate professor at a business school, and have tested my last 3-4 years of exam questions on the bot. It generally performs quite poorly and fails the clear majority of them, though it would pass some, and might even get a C on one question (out of 25 or so). So it is clearly not a worry in the sense that it can lead students to high performance or genuine learning, and I'd also argue it shouldn't be able to fully answer questions at the master's level. On the other hand, at the bachelor level of an average university, this tool can (and soon will) do just well enough to pass. This can be partially solved by making exams on-site and disallowing use of the internet. Yet not everything happens on-site. The positive angle is that the bot helps in assessing whether an exam question should be used or not. But a substantial number of students, who just want a diploma, will exchange tedious learning activities for letting the bot answer for them. And it will work, at times.
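For readers who want to repeat this kind of test on their own exam questions, a minimal sketch follows. It assumes the openai Python package (v1+) with an OPENAI_API_KEY set in the environment; the model name and the questions.txt file are illustrative stand-ins, and grading the answers is still left to a human examiner.

```python
# Minimal sketch: batch-run exam questions through a chat model so a human
# can grade the answers afterwards. Assumes the `openai` package (v1+) and
# OPENAI_API_KEY in the environment; questions.txt is a hypothetical file
# with one exam question per line.
from openai import OpenAI

client = OpenAI()

def answer(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative; substitute the model under test
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

with open("questions.txt") as f:
    questions = [line.strip() for line in f if line.strip()]

for i, question in enumerate(questions, start=1):
    print(f"--- Question {i} ---")
    print(question)
    print(answer(question))
    print()
```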


I guess the bigger risk with these LLMs will likely be realized in the next decade or so. By then, more and more of the text flooding the internet may actually have been "written" by LLMs, drowning out the original texts and true information sources, and these models won't be able to discern true content from their own garbage.

The subsequent LLMs/AI models trained on that data will simply degenerate further in accuracy and creative quality, and generate even more bullshit content. And so on, ad infinitum.
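This feedback loop can be made concrete with a toy simulation: a minimal sketch, assuming that fitting a Gaussian stands in for training a model and that resampling from the fit stands in for LLM output replacing human text on the web. It is illustrative only, not anyone's actual training pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" text, i.e. samples from the true distribution.
true_mean, true_std = 0.0, 1.0
corpus = rng.normal(true_mean, true_std, size=1000)

for generation in range(1, 11):
    # "Train" a model: estimate the distribution from the current corpus.
    mean, std = corpus.mean(), corpus.std()
    # The next corpus is generated entirely by the fitted model;
    # model output has drowned out the original human sources.
    corpus = rng.normal(mean, std, size=1000)
    print(f"generation {generation:2d}: mean={mean:+.3f}, std={std:.3f}")
```

Run for enough generations, the estimated spread tends to drift and shrink: each model learns from a slightly distorted copy of the previous one's output, and the distortions compound rather than cancel.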


I think I would quibble with the statement "where truth is irrelevant, like writing fiction." Fiction without underlying truth is indeed irrelevant, which is why AI has been largely unhelpful for producing truly high-quality creative output.


Translation and interpretation are two different things. Writing is a reductive code for language, which is multi-modal. Knowledge is socially constructed and individual epistemologies are rooted in our ontologies. It is not clear how a subset of code for performed language is considered "truth", nor how the authors have validated their claims around language, translation, and interpretation. Oversimplifying these things is a habit that gets regurgitated in scholarly writing, and then is fed into the limited dataset of AI, so that it is codified in the ether. These mistakes are then passed on in classrooms. See also: Sign language gloves.


Thanks to Memex 1.1 for recommending this great read. Some really interesting links here, too. Thanks Arvind and Sayash.


Doesn't that definition fit propaganda as well?


"OpenAI’s new chatbot ChatGPT is the greatest bullshitter ever."

So then... What's its verbal IQ? XD

BTW, were you the one who wrote about chatbots in Substack comments? Because I wish I had bookmarked that guy.
