Large Language Models (LLMs), like the one behind ChatGPT, have captivated the imagination of millions with the possibility of generating high-quality text, images, videos and more at the push of a button. These models have improved dramatically over the past few years, but even with billions and billions of parameters, it’s important to recognize that LLMs are still “unreliable narrators,” incapable of independently verifying the factual integrity of their own statements.
If I personified LLMs, I’d say they’re more like your charming and long-winded Uncle Ben. He talks a great game and “knows something about everything,” but you definitely need a second and third opinion before you follow any of his advice. Sometimes he hallucinates. Sometimes he doesn’t actually know much about the topic, but he sure sounds like he does!
As with any tool, you should know when and how to use LLMs so you don’t get yourself in trouble…like this lawyer who decided to use ChatGPT to write a legal argument.
What are Large Language Models?
Large Language Models are language models built by training neural networks on huge bodies of unlabeled text. These networks are loosely inspired by the way a human brain works. The main purpose of this “digital brain” is to predict the most likely next item in a sequence, and it’s “trained” to do that by looking at billions of examples.
For written language, that looks like the text-completion tool you see in word processing software and email editors. In that context, it’s only suggesting a word or simple phrase at a time, whereas ChatGPT, with its billions of parameters and vastly larger training data, can produce pages of coherent prose.
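To make “predict the next item” concrete, here’s a toy sketch in Python. It’s purely illustrative: real LLMs use neural networks with billions of parameters, not simple word counts, but the core idea of “look at what usually comes next” is the same.

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for the billions of examples a real LLM sees.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigrams: for each word, which words tend to follow it?
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most likely next word, like a text-completion suggestion."""
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "cat" follows "the" twice, vs. once for "mat" or "fish"
```

A real model does this over long sequences of word fragments, not single words, which is how it can produce whole paragraphs instead of one suggestion at a time.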
Image generation works on a similar principle: the model predicts the best arrangement of pixels based on the words in your prompt.
The more data an LLM is trained on and the more parameters it has, the more capable it tends to become at certain tasks, which is why GPT-4 is so much more powerful than GPT-3 or GPT-2.
What can LLMs do?
It may sound odd, but we don’t actually know the full extent of what LLMs are capable of, which is why many at the forefront of AI research are calling for new regulations and restrictions on the technology. In the wrong hands, LLMs could easily be used as a digital weapon that the world is not ready to combat.
But in the context of email marketing, LLMs can generate, edit, summarize, label and style text in ways that aren’t always factually correct, but that are often easier to read and more engaging.
There’s huge potential for LLMs to build on your existing email content to create new content, or to develop completely new approaches to email communication in a few days that would normally take weeks or even months.
What are the limitations of LLMs?
LLMs have a few limitations.
They can be unreliable. Even if you ask an LLM the same question several times, it will most likely give you a different answer every time. That can be a good thing when you want a variety of ways to talk about your latest webinar topic. It can be a bad thing when you want to solve a math problem or explain the meaning of Hamlet.
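That variability isn’t an accident: most LLMs deliberately sample from a probability distribution over possible next words rather than always picking the single most likely one. Here’s a toy sketch of that idea; the candidate words, scores, and “temperature” knob are illustrative, not taken from any real model.

```python
import math
import random

# Hypothetical scores (logits) a model might assign to candidate next words.
logits = {"webinar": 2.0, "session": 1.5, "event": 1.2, "masterclass": 0.8}

def sample_next(logits, temperature=1.0, rng=random):
    """Sample one word; higher temperature flattens the odds -> more variety."""
    words = list(logits)
    weights = [math.exp(logits[w] / temperature) for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)
# Same "question" asked five times can yield different answers:
print([sample_next(logits, temperature=1.0, rng=rng) for _ in range(5)])
# At very low temperature, the model almost always picks the top-scoring word:
print([sample_next(logits, temperature=0.01, rng=rng) for _ in range(5)])
```

This is why the same prompt can produce five different subject lines, and why a setting often called “temperature” exists in many LLM APIs to dial that variety up or down.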
A whole new field called prompt engineering has grown out of the advent of LLMs, in which practitioners study and formalize approaches for communicating with these models to consistently get the best and most accurate output.
At Motiva, we’ve been systematically experimenting with LLMs since their early versions and we’ve developed tactics for prompt engineering that greatly increase quality and accuracy. And we are building that knowledge into our platform to help you quickly get the most out of LLMs to improve your content development.
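To give a flavor of what prompt engineering looks like in practice, here’s a hypothetical prompt template. The wording and fields are illustrative examples of common tactics, such as giving the model a role, explicit constraints, and a style example; they are not Motiva’s actual prompts.

```python
def build_prompt(topic, audience, tone, example_subject):
    """Assemble a structured prompt instead of a bare one-line request."""
    return (
        f"You are an email marketing copywriter.\n"
        f"Write 3 subject lines about {topic} for {audience}.\n"
        f"Tone: {tone}. Max 60 characters each.\n"
        f'Example of the style we like: "{example_subject}"\n'
    )

prompt = build_prompt(
    topic="our upcoming automation webinar",
    audience="busy marketing managers",
    tone="friendly but concise",
    example_subject="Cut campaign prep time in half",
)
print(prompt)
```

Small changes to a template like this, such as adding a role, a length limit, or a concrete example, often make the difference between generic output and something usable.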
Another limitation is explainability.
LLMs are essentially a massive digital brain, but unlike people, they can’t explain why and how they do what they do. They may describe how a human would do the task, but they can’t explain the reasoning behind their own conclusions. And even if LLMs could provide an explanation, most people wouldn’t understand it, because it’s a complex mathematical model working at massive scale.
That’s why, especially in a business context, you need ways of manually or automatically verifying the output of LLMs so you can ensure accuracy and explain the reasons behind certain choices. This is another way Motiva can help you take advantage of this generative power: additional reporting that helps you ensure quality and accuracy, so you feel confident using the output of an LLM in your email campaigns.
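As a simple illustration of what an automated check might look like, here’s a hypothetical rule for vetting LLM-generated subject lines before they go out. The function, length limit, and banned-phrase list are made up for illustration; they are not Motiva’s actual reporting.

```python
def check_subject_line(text, max_len=60, banned=("guaranteed", "100% free")):
    """Return a list of problems found; an empty list means the line passed."""
    problems = []
    if len(text) > max_len:
        problems.append(f"too long ({len(text)} > {max_len} chars)")
    for phrase in banned:
        if phrase in text.lower():
            problems.append(f"contains risky claim: {phrase!r}")
    return problems

print(check_subject_line("Join our webinar for guaranteed results!"))
print(check_subject_line("Quick update on next week's webinar"))
```

Rules like these won’t catch hallucinated facts on their own, but combined with human review they make LLM output much safer to ship.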
At Motiva, we’re working on the future of LLMs (check out Motiva Generator!) and making them easily accessible, and more reliable, so you can save hours of work on content development, testing, and optimization.