generative glitter is a weekly newsletter focused on how to use generative AI to work better—from theory to techniques.
This week’s issue ties together some similar ideas from disparate sources about human creativity and outsourcing cumbersome tasks. Let’s go!
Writing in Mashable, Jermaine Murray argues that human input is still relevant when generative AI enters the workplace.
He writes:
The new "cheat code" for a career will be finding ways to incorporate AI into your workflow to enhance what you're already able to produce.
His idea is that generative AI’s automation of repetitive and laborious tasks will allow white-collar workers “to focus on […] tasks that require creativity, critical thinking, and emotional quotient.” For him, it comes down to one word: “freedom.”
Speaking of creativity
Paul McDonagh-Smith from MIT Sloan School of Management also talks about complementarity between humans and AI and the importance of human creativity when using generative AI tools.
He says:
Boosting your creativity quotient [the number and diversity of ideas] will optimize the use of large language models and generative AI.
This resonates with me after reading about Black Mirror creator Charlie Brooker’s experiments with ChatGPT.
Brooker explains in an Empire Magazine interview that asking ChatGPT to “generate Black Mirror episode” didn’t go so well: “it comes up with something that […] is shit.” But, on reflection, the failed experiment provided enough of a spark to make him realize he needed to think outside the box.
The role of AI here is as a collaborator that can help you think about the problem differently.
An HR/people issue
In an interview with the McKinsey Global Institute, Ethan Mollick, a professor at the Wharton School, covers a wide range of topics on the significance of generative AI in the workplace and the classroom. There are lots of gems in this interview.
Contrasting generative AI with other emerging technologies, Mollick highlights that usable generative AI is already here. This means that organizations can take advantage of AI capabilities—like writing code, processing data, and generating marketing strategies—today.
Importantly, he emphasizes that large language models are not enterprise software. They don’t work like traditional, deterministic programs, and they are not easily constrained or regulated. He suggests that, instead of approaching generative AI from an IT/strategy perspective, organizations should consider it from an HR perspective. That means thinking about who gets to use it and what they can do with it.
As for impacts on work, he emphasizes that AI is good for outsourcing tasks (rather than entire jobs) and for increasing productivity. What happens with that increased productivity? The direction an organization takes from there depends on its goals and values.
Mollick’s ideas are similar to what McDonagh-Smith and Murray write about above (and similar to what we discussed in issue #03). He identifies a few diverging paths:
More productive workers can work less to maintain their output.
More productive workers can produce more to fill their time.
Organizations can reduce the number of employees to save costs.
Organizations can let workers replace unenjoyable tasks with new creative work.
In his entrepreneurship and innovation classes at Wharton, Mollick requires students to use generative AI. Since Mollick sees the tool as a productivity booster that lets students outsource difficult or routine tasks, he expects increased output and higher-quality work from them (“If someone’s not sending you a well-written email, then they didn’t care enough to use AI”).
I think the classroom example is relevant as a generalizable case study, and I’m guessing that many other organizations will try to follow this path as they catch on to the opportunity.
Problem formulation > prompt engineering
Have you been following the headlines around the internet about high-paying prompt engineering jobs? You might want to wait before you switch your title on LinkedIn.
Oguz Acar from King’s College London writes in Harvard Business Review about why prompt engineering is fleeting as a specialization (or even as a specific technical skill).
The core idea is that focusing on prompting mechanics is not worth the effort because current models (e.g., GPT-4) already need less guidance, and future models will need even less.
Instead, he argues that problem formulation is the root skill to learn. In his words:
Prompt engineering focuses on crafting the optimal textual input by selecting the appropriate words, phrases, sentence structures, and punctuation. In contrast, problem formulation emphasizes defining the problem by delineating its focus, scope, and boundaries.
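To make the contrast concrete, here’s a minimal sketch in Python. It assumes the OpenAI client library (v1.x) and an API key in your environment; the task, figures, and prompt wording are my own illustration, not from Acar’s article.

```python
# A toy contrast between the two approaches. Assumes the OpenAI Python
# client (v1.x) and an OPENAI_API_KEY in the environment; the report
# figures below are made up for illustration.
from openai import OpenAI

client = OpenAI()

report_text = "North America: +12%; EMEA: -3%; APAC: +8%; LATAM: flat."

# Prompt engineering: tuning words, phrasing, and punctuation.
engineered = "Summarize our Q3 sales figures. Be punchy! Exactly three sentences."

# Problem formulation: defining the focus, scope, and boundaries.
formulated = (
    "Problem: leadership needs a summary of our Q3 sales figures.\n"
    "Focus: the regions where revenue changed the most.\n"
    "Scope: use only the figures below; no speculation.\n"
    "Boundaries: under 100 words, plain language.\n\n"
    f"Figures: {report_text}"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": formulated}],
)
print(response.choices[0].message.content)
```

Notice that the formulated prompt spends its words defining the problem rather than polishing phrasing, so it should hold up even as models need less prompting finesse.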
Ethan Mollick, again in the McKinsey interview above, echoes this sentiment when talking about prompt engineering: “I’m doing the same set of stuff [prompt engineering]. But you can get 80 percent of the way there to getting the most out of these systems by just dealing with it like a person that you’re in charge of.”
My takeaways:
First, instead of trying to position yourself as a “prompt engineer,” maybe think about how you can do more in your current role with generative AI. And that starts with focusing on the problems you want to solve.
Second, maybe focus more on experimenting and don’t worry too much about getting it perfect on the first try. Eventually you and the LLM will both get better at communicating with each other.
Employee perspectives
BCG, a management consulting firm, recently surveyed 13,000 people about AI (in the broad sense). One conclusion about generative AI stood out:
Employees recognize the need for training and upskilling that this new era will require, but few have actually received it.
Their analysis concludes that leaders should:
Ensure spaces for responsible experimentation, since regular use creates better outcomes and greater comfort.
Invest in regular upskilling, since learning about generative AI is not a one-off effort.
Prioritize a responsible AI program, since employees need reassurance about safety.
I’m thinking about how employees can support this agenda. As corollaries to those points, as an employee I can:
Ask for support to experiment
Investigate resources for learning
Encourage conversations about responsible use
What do you think? If organizations expect workers to use generative AI to do their jobs better, how should they handle training and upskilling?
—Aaron