What does it mean when AI gets booed at SXSW?
And can better AI prompting skills make for better human interactions?
Back story: SXSW Interactive took place last week, and there were reportedly many hundreds of panels and sessions and talks about AI. This is unsurprising. SXSW made a sizzle reel about AI featuring some of SXSW’s recurring top tech speakers. The overarching message was that AI is here, and it’s better to get ahead of the trend than to be left behind. Some folks seemed a little more enraptured by the tech than others, with one featured talking head (and VP at OpenAI), Peter Deng, saying he thinks AI will make us “more human.”
This sizzle reel was shown before screenings over on the Film Festival side of SXSW, and in a move that maybe was surprising, the promo got booed. Repeatedly.
Let’s just say it: It was tone-deaf to show this “get used to it” video on the heels of a year when people in the film industry lost months of work, when that industry’s recovery still feels slow, and when one of the primary points of contention was AI itself.
But there was something else that struck me about this negative response. It was that this advice to just go with it was being given by people who already have the privilege and position to succeed in this new era, and that felt almost cruel. [Tangent: It reminded me, oddly, of when NYT op-ed writer Ezra Klein wrote an essay stating that Joe Biden should just “do the right thing” and drop out of running for President, mostly because of his age. Not, by the way, because Klein thinks Biden’s age makes him unable to do the job, but because of, basically, optics. There was an immediate and visceral negative reaction from many directions, and I think this reaction was fueled by a similar perception: that Klein will likely be fine no matter what happens in November, whereas a lot of other people feel like they will not be fine; they feel like the outcome is very important not just as a thought exercise, but personally.]
I totally understand the visceral negative reaction of the Film crowd to this gung-ho “get with it, or get left behind” messaging. But my brain did a few loops on the “it will make us more human” comment. Because I see a grain of truth there when it comes to communication. Stay with me here.
If there’s one consistent piece of advice given when it comes to making the most of AI tools, it is that one must write very clear, specific, well-defined prompts. “Prompt engineer” is a job title of the future (probably already a job title of right now). Thinking back 20 years, it’s similar to the typical guidance on how to get more relevant search engine results. Be specific; add words that can make your search narrower; use Boolean search parameters to outline specifics and carve out exceptions, presumably getting you closer and closer to what you really want. That’s pretty much the same advice we’re hearing about how to get the best results from tools like ChatGPT. Be specific. Be clear. Add more defining parameters until it’s clear that the tool is operating within the window you want it to.
Join us for this month’s Optionality member webinar on Caregiving: The Silent Energy Drain. Featuring speakers with both expertise and lived experience: Karen Chong from AARP, Kim May from Caregiver Wisdom, and Liz O’Donnell from Working Daughter, moderated by Optionality co-founder Elisa Camahort Page.
You know what that also sounds like to me? Good advice on how to set clear objectives for your direct reports. How to give feedback that can be internalized and acted upon, both praise and constructive criticism. How to avoid unnecessary misunderstandings and crossed wires. How to strip away jargon and inside-baseball language. How to speak more inclusively.
When speaking with humans. Yes, particularly in digital formats like text and email, but even in conversation.
I offer a LinkedIn Learning course on Telling Stories That Stick, and much of the course is focused on the prep work you should do before you ever start telling your story…such as assessing your audience. Ask yourself: Who is your expected audience? What are their needs? What do you want them to take away? What is the format in which you must work to deliver your story? How do you make sure your takeaways address their needs and are appropriate for the format? The story you tell may change significantly depending on your answers to all of these questions and more.
Now, none of this implies that prompting skills will help AI replicate a boundless imagination, craft wildly creative juxtapositions, or add new verbiage to the universal (or at least Internet) lexicon based on portmanteaus, slang, colloquialisms, idioms, or other flights of linguistic fancy. But it might help us clean up and clear up the admittedly more prosaic but necessary communications we engage in on the daily.
So, no, AI may not make us more human, but maybe it will help us be better communicators with our fellow humans.
Jory and I explore this topic further in this week’s Conversationality, coming this Thursday to Premium members, and we’ve got more AI-focused content coming. So we’re wondering: What do you make of all this? Are you using AI? Do you feel threatened? Or like it will support your work and help you be more efficient? Do you feel a different level of clarity is required to get what you want from AI tools? Can you imagine communicating with that level of clarity to humans, or would it make conversations hopelessly stilted? Leave us a comment, and let us know!