I was in a bunch of bad rock and roll bands in the ’00s. I wrote a lot of songs at the front end, but when I joined up with some new fellas around 2007 we started this “cover” band called Tiny Purple Fishes. Over the next few years lots of my interests and ideas around music and authorship crystallized—I recognized a passion for punk covers, early rock music medleys, and reinterpretation of songs. Partly because I didn’t like my own songs, and partly because we found it fun and accessible, we leaned into playing other people’s songs—oftentimes creating our own key-matched medleys and new imaginings.
But when we’d try to gig around New Orleans and elsewhere, folks would kinda compartmentalize us as a ‘cover band,’ which really bothered me. Sorry if you’ve been party to one of my many rants about how “actually—we used to keep things like songwriting, performing, recording, etc. relatively separate. It’s only with Dylan and the Beatles that we came to expect artists to do all of those things at the same time.” I tried to lean more and more into the reinterpretation camp, probably because I have a contrarian streak.
Now, why am I reflecting on this in a leadership dialogue blog post? Well, if you’ve been following my (admittedly haphazard, naive and chaotic) Twitter for the last few months, I’m sure you’ve noticed my dabbling with artificial intelligence—primarily chatbots through Poe.com’s neat applications.
Before I tell you more about what I’ve learned and what I’ve been doing with chatbots, I’ve gotta mention that I’ve been somewhat of a Kenneth Goldsmith acolyte for a long time now—I was kinda raised on UbuWeb and conceptual poetry and artistry, from Duchamp to Warhol and way further. I mention this because Goldsmith’s ideas of “uncreative writing” keep coming up for me with use of large language models and chatbots. I also want you to note that Nicolas Bourriaud’s books “Relational Aesthetics” and “Postproduction” come up for me a lot here as well—with an emphasis on the artistic practice of remix that so clearly permeates our popular culture in these 2020s.
Below is my current thinking on misinformation and chatbots, authorship and chatbots, context with chatbots, and chatbots as close reading tools.
Misinformation and Chatbots
I’ll just tell you my honest-to-God first thought here—y’all believe everything you read on the internet? What’s really been prominent for me after several months of experimenting with a variety of bots—and several months navigating the wild discourse around AI—is that for some reason folks seem to forget the dubious quality of pretty much all other text and media we encounter on the internet. Are you aware of search-engine optimization (SEO)? Are you swimming in a veritable sea of correct information out here? Do you regularly believe—without cross-reference or fact-checking—the blogs, microblogs, tweets, news stories, etc. that you’re reading? Help me to understand how the facticity of chatbot output is any less dubious than most media we encounter. Actually, it probably comes out a bit more reliable, because I’d usually give a person the benefit of the doubt where I’d double-check a machine.
This makes me think about the Wikipedia bugbear that’s haunted me since my days of high-school language arts teaching. I’m not an expert here, but when I was in the classroom from around 2010 to 2020, folks regularly appended writing assignments with “don’t use Wikipedia for research”—even though these selfsame educators were Googling for their own research, oftentimes relying on Wikipedia as a starting point.
I bring this up because I think the same thing is happening right now with chatbots. Sure, the genesis of Wikipedia may have opened the door for some unreliability, but studies have shown that crowdsourced editing has made it relatively accurate. That’s why people start research there. This sorta mythology from the jump will, I’m sure, stick with chatbots. What I mean is that folks will continue to say they’re unreliable even when evidence shows otherwise—they’re probably more reliable than a human person, and probably already so as of this writing (at least depending on how reliability is measured).
There are a couple of different approaches we can take here: (1) never use chatbots because we’re convinced they’re misinformation machines; or (2) continue to tinker with chatbots and test their validity and reliability. I suppose framing it this way tells you my position. One thing I’m regularly doing with a variety of bots (ChatGPT, Claude, etc.) is testing this premise that they make up citations. I’ve found that if I’m asking for sources—especially with GPT-4 and Claude+—it’s harder for me to find a source that is made up than one that isn’t. I’ve asked them to produce lists of references around a topic and cross-referenced them, and golly, that’s been just tremendous for my research into dialogue and leadership.
What I’m saying here is—don’t do the Wiki thing and parrot this idea that it’s all misinformation. Treat the output of chatbots the same way you ought to be treating all media—critically and dialogically: interrogate it, cross-reference it, check its claims, look for a preponderance of evidence. The idea that we can’t do those things with chatbots seems to run under the discourse unexamined—and it’s wrong!
Authorship and Chatbots
When I was hella into Goldsmith, Ubu and conceptual writing, I wrote this crazy essay on authorship. I think I was reading about Petrarch and Dante and ideas around authority and authoring—just another one of those etymological jaunts that illuminate cultural history. As I mentioned at the jump, I’ve been fascinated with ideas of authorship and the emergence of a sort of cultural expectation of originality—related to these demands we oftentimes see nowadays that artists write their own work, perform it and record it. Just a cursory glance at early popular music—blues, country, jazz, gospel, etc.—shows you that folks were always borrowing: forms, phrases, melodies, etc. Anybody who has studied Dylan knows this.
There’s almost a fetishization of the purity of original authorship—and in popular music it’s not even as old as my dad. These ideas come up for me with growing frequency in the AI debate. Look, I love Walter Kirn, but he just tweeted something today (July 3rd) to the tune of like—chatbots are plagiarism machines, all they’re doing is regurgitating somebody else’s content in a different form, etc. Like I said—I love him, but he is wrong!
Let’s surrender a little bit of this primacy-of-the-author thing and recognize that it’s a pretty ambiguous threshold between what has influenced and informed someone’s work and what comes out the other side. I think—whether we like it or not—ideas of copyright are going to need major revisions after chatbots. And evidence of this abounds in the landscape of popular music, where (and I get that there’s attribution and compensation happening) so many popular songs now are legitimate remixes of old choruses, previous melodies, pastiches of pieces and parts—it’s Bourriaud’s Postproduction.
Context with Chatbots
Now for something a little more practical than theoretical. Please tell me if you see somebody out there chatting and writing and discoursing about context with chatbots. Have you used one? Oftentimes when you return, you’ll get a little message about the context being cleared. This is absolutely critical to consider if you’re working with chatbots as a thought partner.
At some point I had a little dialogue with a chatbot about context. The major difference between dialoguing with a person and a chatbot at this point in time is the coherence of its memory of what you’ve already talked about. Actually, talking about coherence here is misleading—they’re deliberately designed to “time out” and drop the context after a period of time. Please let me know if you’re a daily GPT premium user and there’s some kind of extended-context feature over there. What I mean is—does it remember everything you both have uttered over the last several days, weeks, etc.? This is what a human dialogic partner does, but a chatbot doesn’t do by design—the rationale the bot suggested here is at least twofold: to prevent nefarious use (which surprised me), and because it’s so complicated and resource-intensive to have a bot consider all of that.
So keep this in mind the next time you’re dialoguing with a chatbot—it’ll drop the context window if you’re inactive for a bit (sometimes this seems like less than ten minutes, but I haven’t done a systematic examination of it). Lots of the power of chatbots is in the dialogic piece, so if you’re looking to workshop something with a chatbot you’ve gotta be ready to do it in that go and recognize you’re not going to be able to pick the thread up again in the future (again, if you know of software that does this, tell me about it because I want to use it).
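Since I keep begging for software that picks a thread back up, here’s a minimal sketch of how you might roll your own: save every turn to a local file, then replay the whole thing at the start of the next session. This is just a sketch assuming the official OpenAI Python SDK; the file name and model are placeholders, and the replayed history still has to fit inside the model’s context window.

```python
import json
from pathlib import Path

from openai import OpenAI  # assumes the official OpenAI Python SDK is installed

HISTORY = Path("dialogue_history.json")  # hypothetical local file for one thread
client = OpenAI()  # reads OPENAI_API_KEY from the environment

def load_history() -> list:
    """Reload every prior turn so the bot 'remembers' the whole thread."""
    return json.loads(HISTORY.read_text()) if HISTORY.exists() else []

def chat(user_message: str, model: str = "gpt-4") -> str:
    """Send a message with the full saved history replayed as context."""
    history = load_history()
    history.append({"role": "user", "content": user_message})
    reply = client.chat.completions.create(model=model, messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    HISTORY.write_text(json.dumps(history, indent=2))  # persist for next session
    return answer

print(chat("Where did we leave off with Bohm's notion of dialogue?"))
```

The catch is the very thing I describe next: once the saved thread outgrows the model’s token limit, you have to truncate or summarize the older turns.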
There’s an underlying architecture of tokens that large language models and chatbots use to manage their “context.” I think GPT-3, 3.5 and 4 sit somewhere in the thousands-of-tokens range—roughly 2k to 32k, depending on the variant. But this is why it was fascinating a month or two ago when my Poe.com subscription added this Claude-instant-100k bot. The 100k is the token window size. This means, in layman’s terms, the amount of ‘context’ it can consider. If you’ve used a chatbot you’ve probably run into the thing around “this message is too large,” especially if you’re feeding it text and asking it to summarize or rephrase. The 100k token window makes it possible to put small books through it, and pretty much any reasonably lengthy academic article, text, etc.
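If you’re curious how much of a window your own documents eat up, here’s a rough way to check. A minimal sketch, assuming OpenAI’s open-source tiktoken tokenizer and a hypothetical article.txt; Claude uses its own tokenizer, so treat the numbers as ballpark rather than gospel.

```python
import tiktoken  # OpenAI's open-source tokenizer library

def token_count(text: str, model: str = "gpt-4") -> int:
    """Rough count of how many tokens a text will consume."""
    enc = tiktoken.encoding_for_model(model)
    return len(enc.encode(text))

article = open("article.txt").read()  # hypothetical file holding an article's text
n = token_count(article)
print(f"~{n} tokens")

# A 20-30 page academic article often lands in the tens of thousands of tokens:
# too big for a small context window, comfortable inside a 100k-token one.
if n > 8000:  # the base GPT-4 window as of this writing
    print("Too large for a small window; chunk it or use a 100k-context bot.")
```

And this is where my final section for today emerges: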
Chatbots as Reading Tools
I’ve been not-at-all doing my dissertation for an EdD educational leadership program here in Louisiana for a few years. One of the most difficult things for me as a relatively broad and theoretical thinker is not becoming dazzled by the bigness of the picture. In other words, it’s been very difficult to see the trees for the forest. I had a great conversation with my dad last week—he’s an amateur researcher into soil composition and compost, and in my opinion an expert gardener. What he shared was that it’s so time-consuming and difficult to sift through the scholarly research, and when you get into the granular (the trees, so to speak), it’s hard to piece together a broader picture—sometimes you even forget the details of the last tree.
This is what I’ve found so helpful about chatbots as a thought partner for my educational research into dialogue. Everybody has a framework and a conceptualization of dialogue, and a five- or ten-part list of recommended, amorphous interventions or best practices. Chatbots have helped me wrap my mind around the recurrent themes in the dialogic literature—and the advent of this 100k bot has accelerated my learning and literature reviewing. I think it’s somewhat under-talked-about—this capacity of chatbots to help out as close reading tools.
Here’s what I’ve been doing with research: I’ll find an article that I’ve been meaning to read (we have twin four-year-olds here—when am I gonna read 20 pages about paradox?), take the text, and feed it to Claude-100k with some instructions (for the API-inclined, a rough sketch of this whole routine follows the list):
Read this article and tell me the following:
What are the primary aim and research questions of the study?
What is the theoretical framework underpinning the research, and how does it connect to existing literature?
What research design, methods, and data collection techniques were used, and are they appropriate for addressing the research questions?
What are the size and characteristics of the study's sample and participants, and are there any potential biases or limitations related to them?
…
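Here’s a minimal sketch of that routine, including the follow-up I describe next. It assumes Anthropic’s official Python SDK rather than Poe (which doesn’t expose its bots this way); the model name and article.txt are placeholders.

```python
import anthropic  # assumes the official Anthropic Python SDK

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-3-5-sonnet-latest"  # placeholder; use any long-context Claude model

QUESTIONS = """Read this article and tell me the following:
- What are the primary aim and research questions of the study?
- What is the theoretical framework underpinning the research, and how does it connect to existing literature?
- What research design, methods, and data collection techniques were used, and are they appropriate for addressing the research questions?
- What are the size and characteristics of the study's sample and participants, and are there any potential biases or limitations related to them?"""

article = open("article.txt").read()  # hypothetical file holding the article text

# First turn: the close-reading questions, with the full article as context
messages = [{"role": "user", "content": f"{QUESTIONS}\n\n---\n\n{article}"}]
first = client.messages.create(model=MODEL, max_tokens=2000, messages=messages)
reading = first.content[0].text

# Second turn: the "ten most interesting things" ask, replaying the thread so far
messages += [
    {"role": "assistant", "content": reading},
    {"role": "user", "content": (
        "Now go back and tell me ten things you think are the most interesting "
        "and useful for understanding leadership, dialogue and coaching."
    )},
]
second = client.messages.create(model=MODEL, max_tokens=2000, messages=messages)
print(reading, second.content[0].text, sep="\n\n")
```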
Then I ask it to go back and tell me “ten things you think are the most interesting and useful for understanding leadership, dialogue and coaching.” Then I use GPT-4 to “synthesize all of this into a ten-paragraph summary and cite the source listed at the beginning.” Let me remind you now of what I theorized about above—just like everything else I read or encounter across the internet and otherwise, I’m always cross-referencing things. What I can do with this output is check the validity of specific points. It helps me zoom into the most critical pieces of the scholarship from my expertise-POV.
The last thing I do with the bots is ask them to “go back to the article and recommend three sources from its list of citations if I’m interested in leadership, dialogue and coaching.” I’ve done this dozens of times and only ended up with a handful of hallucinated sources. Actually, it’s helped me find more prominent and cited resources, and oftentimes points out books that don’t quite show up in other research repositories like Google Scholar.
Finally
This is a breezy and reflective week for me—one where pretty much everyone is off for the 4th of July, and one that’s usually meditative for me as my birthday falls just after the fireworks. I hope some of this slapdash and haphazard theorizing and reflecting on artificial intelligence, large language models, and use of chatbots is helpful for some of you out there! More so, I hope y’all can point me to more people talking about context, authorship, misinformation and close reading with chatbots. I hope you’re having a great summer! Godspeed.
A nice read, and I do agree with your call to use generative AI critically, as a tool, and with the need for a high level of media literacy. But we've also got to be practical here - most won't. And the amount of AI-generated content will soon swamp so much. I also don't agree with your stand on authorship. It's not like remixing at all (something I support and have cheerled). Rather, it absolves the author of any work, claim, art. It thinks for us. So in that sense, it is a giant, push-a-button, one-off plagiarism machine. Google's button in Docs (Labs), "Help me write," is really a very steep slippery slope into abandoning creative thought. Generative AI will take over the starting process and from there - is there any true authorship? I think not.