Brainstorm AI Singapore 2024: How to write effective AI prompts
Fortune
7/31/2024
Teodora Danilovic, Prompt Engineer, AutogenAI
Category: Tech
Transcript
So, in the short time I have with you all today, I want to demonstrate how smart prompting leads to smart outputs, and hopefully give you a few practical tips and techniques that you can take away and use in your own prompt creation process.

I work at AutogenAI, where we help organizations around the world write more winning bids, tenders, and proposals by leveraging large language models and linguistic engineering. I get a lot of questions about what prompt engineering actually is and what I do in my role, so I thought it would be useful to clarify that.
Can I get a quick show of hands: who here has used ChatGPT or any other similar platform for their prompting? Wonderful, everyone. Perfect.
So, what most of you have been doing in those instances is called prompt crafting. Prompt crafting is when you interact with a model in real time and give it a prompt for that individual instance. You receive useful and relevant responses, but you wouldn't necessarily expect that prompt to work on any other piece of text that anyone else would use.
Prompt engineering, by contrast, is curating prompts which produce replicable, reliable outputs to fulfill a specific function, while continuously and objectively measuring and improving them. It's about setting up frameworks that scale well in the future with any unknown input.
There are many prompting techniques out there, but these are some of the most popular. Today, I will demonstrate a few of them as we consider their benefits and drawbacks, and we'll do this by focusing on one task: extracting and classifying things from a data set.
I'll be using AutogenAI's platform today to demonstrate these prompts. The feature I'll use offers multiple output options for each prompt, so you'll see three options on the right-hand side.
Up here, we have a zero-shot prompt. It's an instruction with no examples; it's what everyone does the first time they interact with a large language model. It works well most of the time, but it does have a few drawbacks. Sometimes it can lack a nuanced understanding of the task we're trying to achieve, and we'll see that in practice today.
In this example, I've asked it to classify a piece of text, and that piece of text has a debatable output. As a human, I would expect "The product arrived late, but the quality is excellent" to generally be a positive statement. Although it has some positives and some negatives in it, it's weighted more heavily toward the positive. We can see that the model has given "neutral" for all three outputs. So in this case, zero-shot prompting doesn't encourage that nuanced understanding, and I probably wouldn't trust it on even larger sets of data.
Here, the model lacks an understanding of what I mean by each of those sentiments. What does positive mean for me? What does neutral mean for me?
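The zero-shot prompt described here can be sketched as a bare instruction template. This is a minimal illustration, not AutogenAI's actual prompt; the function name and wording are my own.

```python
def zero_shot_prompt(text: str) -> str:
    """Build a zero-shot prompt: a single instruction with no examples."""
    return (
        "Classify the sentiment of the following text as "
        "positive, negative, or neutral.\n\n"
        f"Text: {text}\n"
        "Sentiment:"
    )

prompt = zero_shot_prompt("The product arrived late, but the quality is excellent.")
```

The model sees only the instruction and the text, with no indication of where the boundary between "positive" and "neutral" lies, which is exactly the gap the next technique fills.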
In the previous example, the model didn't have enough context. One way of providing more context is offering examples of what you want. This is called multi-shot prompting, a "shot" being an example. Chain-of-thought prompting means asking the model to think step by step and show its reasoning.
03:13
So I've given it pretty much the same prompt, but I've given it three examples of what
03:17
I think a positive, negative, and neutral statement is.
03:20
You can see, as we transform the text, the outputs are far more nuanced.
03:24
The first one says positive.
03:26
The second one, after explaining its chain of thought, where I've instructed it to think
03:31
step-by-step, also concludes to be positive in the end.
03:34
It's far closer to what we're looking for.
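Combining the two techniques looks roughly like this: prepend labelled examples (the "shots"), then add the step-by-step instruction. The example texts below are hypothetical stand-ins, since the talk's actual shots aren't shown in full.

```python
EXAMPLES = [
    # Hypothetical shots illustrating each label.
    ("The checkout was fast and the staff were lovely.", "positive"),
    ("The site was confusing and my order never arrived.", "negative"),
    ("I received the package on Tuesday.", "neutral"),
]

def multi_shot_cot_prompt(text: str) -> str:
    """Multi-shot plus chain-of-thought: labelled examples first,
    then an instruction to reason step by step before answering."""
    shots = "\n\n".join(f"Text: {t}\nSentiment: {s}" for t, s in EXAMPLES)
    return (
        "Classify the sentiment of the text as positive, negative, or neutral.\n"
        "Think step by step and explain your reasoning before giving the label.\n\n"
        f"{shots}\n\n"
        f"Text: {text}\nSentiment:"
    )
```

The shots show the model what each label means to you; the reasoning instruction makes its interpretation visible.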
03:38
One thing I will say is that you should be wary of bias with multi-shot prompting.
03:43
In this example, the model might take it to mean that positive statement must always
03:47
talk about product quality, or it always must mention that the site was confusing, or that
03:51
it always has to have one positive and one negative side to it.
03:55
If you're using multi-shot prompting for larger sets of data, you have to make sure your examples
03:59
cover all bases.
04:01
This can be quite difficult to do, as you have to think of every way that something
04:04
can be interpreted.
Chain-of-thought also helps in this scenario, because if I can see the model's thought process, I can see where it went wrong, which helps with my debugging. Multi-shot was really great here for improving this task and giving the model more understanding of my intention. But sometimes I want to do something a bit more complicated, and I don't want just a one-word classification. Let's look at how we can introduce even more complexity with some prompting techniques.
04:33
Prompt chaining, or multi-step prompting, is best for complex reasoning tasks that cannot
04:37
be instructed in one go.
04:39
It ensures you're working on the best piece of text at each stage, and it doesn't leave
04:42
room for model inconsistency.
04:45
It makes sure that the potentially conflicting instructions don't interfere with one another.
04:50
Let's say I want a more complex analysis of some sentiment on a larger body of text.
04:54
As a human, I might break this process down into a few different steps.
04:58
First, classifying the statements as a whole, then extracting themes from the statement,
05:03
and then grouping those themes.
05:04
This type of breakdown also works great with prompting.
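The three-step chain described here can be sketched as prompt-building functions wired together, with each model output feeding the next prompt. The function names and prompt wording are my own simplifications of the steps in the talk; `model` stands in for any callable that takes a prompt and returns text, such as a thin wrapper around your provider's API.

```python
def classify_prompt(feedback: str) -> str:
    """Step 1: classify each piece of feedback into a sentiment."""
    return f"Classify each piece of customer feedback into sentiments:\n{feedback}"

def themes_prompt(classified: str) -> str:
    """Step 2: extract themes from the classified feedback."""
    return (
        "Identify all the themes in the following list of customer feedback. "
        "Your response should be an exhaustive list of themes with counts.\n"
        f"Feedback:\n{classified}"
    )

def group_prompt(themes: str) -> str:
    """Step 3: group the extracted themes by sentiment."""
    return (
        "Classify the following themes into positive, negative, neutral, "
        f"or other categories:\n{themes}"
    )

def run_chain(feedback: str, model) -> str:
    """Run the three-step prompt chain, feeding each output into the next prompt."""
    out = model(classify_prompt(feedback))
    out = model(themes_prompt(out))
    return model(group_prompt(out))
```

Because each step's output becomes part of the next prompt, context accumulates through the chain, which is the carry-over effect described later in the talk.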
For the sake of brevity, I'm just showing the output of the first prompt I gave the model. The prompt instructed it to classify a list of customer feedback into sentiments, using the same multi-shot prompt I showed you before, except on a larger set of data; there are around 25 or 26 pieces of customer feedback here. This is the first prompt in the chain.
The second prompt in the chain is a really good zero-shot prompt. Up at the top here, you can see we've asked it to "identify all the themes in the following list of customer feedback. Your response should be an exhaustive list of accurate and relevant themes with a number beside them indicating how many times it appeared. Do not write the same theme out twice. Feedback:"
I think it's important to note some key parts of that prompt. I've asked it to identify all the themes, and then I've repeated myself and said it should be an exhaustive list; repetition is always good. I've said that it's a list of customer feedback, which gives the model context for what I'm talking about. I've asked it to be accurate and relevant, so it's clear that I only want themes to do with the customer feedback. And then I've said how I want the structure to look. You can see, with all of the options, the structure is exactly how I want it, and the answers are exactly what I'm looking for. So I'm happy with this; let me bring it over into the editor.
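That second prompt can be captured as a reusable template, with comments marking the parts just highlighted. The feedback list in the usage line is a made-up stand-in for the real data set.

```python
THEME_PROMPT = (
    "Identify all the themes in the following list "       # instruction + context
    "of customer feedback. Your response should be an "
    "exhaustive list of accurate and relevant themes "     # repetition: "all" restated as "exhaustive"
    "with a number beside them indicating how many "
    "times it appeared. Do not write the same theme "
    "out twice.\n"                                         # explicit output structure
    "Feedback:\n{feedback}"
)

prompt = THEME_PROMPT.format(
    feedback="- The product arrived late\n- Excellent quality"
)
```

Writing the prompt once as a template is what turns a crafted prompt into an engineered one: the same instruction can now be measured and reused against any unknown input.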
As we move on to the third prompt in the chain, you can see how, with each step, the output is a combination of everything that came before it. Context which is implicitly woven into the answer gets carried on to the next prompt. The third prompt in the chain is: "Classify the following themes into positive, negative, neutral, or other categories. The response should have the sentiment as a heading, followed by the themes which fall under that sentiment. Under each theme, write a brief justification of why it falls under that sentiment." And then I've given the list of themes. As you can see on the right-hand side, this is far more nuanced, accurate, and useful than the first thing we looked at. This output is actually a combination of all the techniques we've looked at so far, and we could not have gotten information this thorough and this accurate using only one of the techniques.
Once you've got an output that you're happy with, the possibilities really are endless. You can translate it into JSON, you can play around with the tone, you can turn it into a PowerPoint presentation, and much, much more.
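The JSON step, for instance, is itself just another zero-shot prompt plus ordinary parsing. The prompt wording and the sample response below are my own illustration of the idea, not output from the talk.

```python
import json

def to_json_prompt(analysis: str) -> str:
    """A zero-shot prompt asking the model to restructure its analysis as JSON."""
    return (
        "Convert the following sentiment analysis into JSON with the keys "
        '"sentiment" and "themes" (a list of strings). '
        "Respond with valid JSON only, no commentary.\n\n"
        f"{analysis}"
    )

# A hypothetical model response, parsed the way you would parse the real one:
response = '{"sentiment": "positive", "themes": ["product quality", "delivery speed"]}'
data = json.loads(response)
```

Specifying "valid JSON only" in the prompt is what makes the output machine-readable rather than conversational, so it can feed downstream tools directly.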
So, to conclude this short segment: simplicity is always best. Although a zero-shot prompt wasn't nuanced or accurate enough for our example today, it is most often the best choice. The last few examples I showed you, turning the output into JSON or translating the tone, all used a really good, successful zero-shot prompt. A prompt should be direct, unambiguous, and relevant, and each of the techniques we've gone through today is just a different way of ensuring that the prompt meets those requirements. I really hope this offered some insight into what prompt engineering actually is and gave you some tools that you can take away and implement in your own prompt crafting.
I think we might have time for one quick question, if there are any questions in the audience. Yes, we have someone here.

Audience question: How do you think about it, and do you use any tools to refine your prompts?
Yeah, that's actually a really interesting question. Interestingly, you can prompt models with how you want them to refine your prompts, so you have to have the techniques to instruct the models in how you want your prompts to look anyway. Sometimes, if I'm lacking inspiration on how to begin writing a prompt, I will definitely use a model. I'll be direct, unambiguous, and relevant, clear about what my parameters are and what my instructions are, and often it gives me a really good framework. The only problem is that it's sometimes a bit more nuanced than that, and you're looking to write a prompt for a specific use case. I know my target audience, I know my customers, and I know what they're looking for. I can try to put that into prompting, but ultimately it's your subjective opinion on whether the result meets the metrics that you've set out. So yes, absolutely, you can use other models, not only Claude but any of the providers, to give you a first draft of a good prompt and to better your own prompts.
Awesome. I think that's all the time we have today, but if you have any more questions, I'll be around afterwards, and please feel free to add me on LinkedIn and ask me any questions there. Thank you so much.