7 Prompt engineering: Becoming an LLM whisperer


This chapter covers

  • What a prompt is and how to make one
  • Prompt engineering—more than just crafting a prompt
  • Prompt engineering tooling available to make it all possible
  • Advanced prompting techniques to answer the hardest questions
Behold, we put bits in the horses' mouths, that they may obey us; and we turn about their whole body.
—James 3:3

In the last chapter, we discussed in depth how to deploy large language models and, before that, how to train them. In this chapter, we are going to talk about how to use them. We mentioned before that one of the biggest draws of LLMs is that you don’t need to train them separately for every individual task. LLMs, especially the largest ones, have a broad enough understanding of language to act as general-purpose tools.

Want to create a tutoring app that helps kids learn difficult concepts? What about a language translation app that helps bridge the gap between you and your in-laws? Need a cooking assistant to help you think up fun new recipes? With LLMs, you no longer have to start from scratch for every single use case; you can use the same model for each of these problems. It just becomes a matter of how you prompt your model. This is where prompt engineering, also called in-context learning, comes in. In this chapter, we are going to dive deep into the best ways to do that.
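
To make that concrete, here is a minimal sketch of one model serving three different "applications," with only the prompt changing between them. It assumes the OpenAI Python client (v1 API) purely for illustration; any chat-completion endpoint would work the same way, and the model name and prompts are ours, not prescribed by the book.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(system_prompt: str, user_input: str) -> str:
    """Send one chat turn to the model under a task-defining system prompt."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_input},
        ],
    )
    return response.choices[0].message.content

# One model, three use cases -- only the prompt changes.
print(ask("You are a patient tutor who explains concepts to children.",
          "Why is the sky blue?"))
print(ask("You translate English into Korean, preserving tone.",
          "It was lovely to see you at dinner."))
print(ask("You are a creative cooking assistant.",
          "Suggest a recipe that uses leftover rice and kimchi."))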

7.1 Prompting your model

7.1.1 Few-shot prompting
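
As a preview of the idea, here is an illustrative few-shot prompt (the task and examples are ours): several worked demonstrations are placed directly in the prompt so the model can infer the task and its format without any fine-tuning.

few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: The battery died after two days.
Sentiment: Negative

Review: Absolutely love the camera on this phone!
Sentiment: Positive

Review: Shipping took forever and the box arrived crushed.
Sentiment: Negative

Review: Works exactly as advertised.
Sentiment:"""
# Sent to a completion endpoint, this should come back "Positive".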

7.1.2 One-shot prompting
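
For comparison, a one-shot version of the same task keeps a single demonstration, which is often enough when the output format is simple:

one_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: The battery died after two days.
Sentiment: Negative

Review: Works exactly as advertised.
Sentiment:"""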

7.1.3 Zero-shot prompting
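
And the zero-shot version drops the demonstrations entirely, relying on the instruction alone. Large instruction-tuned models can often handle this; smaller models tend to need the examples shown above:

zero_shot_prompt = """Classify the sentiment of the following review as Positive or Negative.

Review: Works exactly as advertised.
Sentiment:"""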

7.2 Prompt engineering basics

7.2.1 Anatomy of a prompt
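
As a rough illustration (one common decomposition, not necessarily the exact anatomy this section lays out, and with a made-up product name), a prompt can be read as a stack of distinct parts:

prompt = (
    "You are a support assistant for AcmeCloud.\n"            # role/persona
    "Answer using only the documentation excerpt below.\n\n"  # instruction
    "Documentation: Refunds are available within 30 days "    # context
    "of purchase.\n\n"
    "Q: Can I get a refund after six weeks?\n"                # user input
    "A:"                                                      # output cue
)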

7.2.2 Prompting hyperparameters
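
Most LLM APIs expose a similar set of sampling knobs alongside the prompt itself. The sketch below uses the OpenAI client's parameter names as one concrete example; other providers offer close equivalents.

from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Name a fun ice cream flavor."}],
    temperature=0.9,        # higher = more random sampling; 0 is near-deterministic
    top_p=1.0,              # nucleus sampling: draw only from the top probability mass
    max_tokens=20,          # hard cap on completion length
    frequency_penalty=0.5,  # penalize tokens proportionally to how often they appear
    presence_penalty=0.0,   # flat penalty on any token that has already appeared
    stop=["\n"],            # cut generation at the first newline
)
print(response.choices[0].message.content)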

7.2.3 Scrounging the training data
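
One practical consequence of this: instruction-tuned models expect the same chat markup they were trained with, and guessing it by hand is error-prone. With Hugging Face models you can apply the template that ships with the tokenizer instead (the model name below is just an example):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
messages = [{"role": "user", "content": "Why is the sky blue?"}]
# Renders the conversation in the exact format the model saw during training.
print(tokenizer.apply_chat_template(messages, tokenize=False,
                                    add_generation_prompt=True))
# -> something like "<s>[INST] Why is the sky blue? [/INST]"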

7.3 Tooling for structured outputs, workflows, and more

7.3.1 LangChain
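
As a taste of what LangChain looks like, here is a minimal sketch using its classic early API (the import paths have moved in newer releases): a reusable prompt template wired to an LLM through a chain.

from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI
from langchain.chains import LLMChain

prompt = PromptTemplate(
    input_variables=["concept", "audience"],
    template="Explain {concept} to {audience} in three sentences.",
)
chain = LLMChain(llm=OpenAI(temperature=0.7), prompt=prompt)
print(chain.run(concept="gravity", audience="a five-year-old"))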

7.3.2 Guidance
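
Guidance takes a different approach: the template interleaves fixed text with constrained generation, so the structure of the output is guaranteed. A sketch using the library's original handlebars-style API (newer releases use a different interface):

import guidance

guidance.llm = guidance.llms.OpenAI("text-davinci-003")

program = guidance("""Classify the review's sentiment.
Review: {{review}}
Sentiment: {{#select 'sentiment'}}Positive{{or}}Negative{{/select}}""")

result = program(review="Works exactly as advertised.")
print(result["sentiment"])  # guaranteed to be one of the two options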

7.3.3 DSPy
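
DSPy moves one level up again: rather than hand-writing prompt text, you declare what goes in and what comes out, and the framework generates (and can optimize) the prompt. A minimal sketch, using the API of DSPy's early releases:

import dspy

dspy.settings.configure(lm=dspy.OpenAI(model="gpt-3.5-turbo"))

class AnswerQuestion(dspy.Signature):
    """Answer the question in one short sentence."""
    question = dspy.InputField()
    answer = dspy.OutputField(desc="one short sentence")

qa = dspy.Predict(AnswerQuestion)
print(qa(question="What causes ocean tides?").answer)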

7.3.4 Other tooling is available but …

7.4 Advanced prompt engineering techniques

7.4.1 Giving LLMs tools
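
The core loop behind tool use is simple enough to hand-roll, which the sketch below does (a toy protocol of our own invention, not any particular library's API): the prompt advertises the tool, our code parses the model's request, executes it, and feeds the observation back for a final answer.

import re
from openai import OpenAI

client = OpenAI()

# Toy tool registry. Never eval untrusted input like this in production.
TOOLS = {"calculator": lambda expr: str(eval(expr, {"__builtins__": {}}))}

SYSTEM = ("You may use a tool by replying exactly 'CALL calculator: <expression>'. "
          "After you see the tool result, answer the user.")

def chat(messages):
    r = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    return r.choices[0].message.content

messages = [{"role": "system", "content": SYSTEM},
            {"role": "user", "content": "What is 17.5% of 2840?"}]
reply = chat(messages)

match = re.match(r"CALL (\w+): (.*)", reply.strip())
if match and match.group(1) in TOOLS:
    observation = TOOLS[match.group(1)](match.group(2))  # run the tool
    messages += [{"role": "assistant", "content": reply},
                 {"role": "user", "content": f"Tool result: {observation}"}]
    reply = chat(messages)  # let the model finish with the observation in hand
print(reply)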

7.4.2 ReAct
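
To show the shape of the pattern, here is a bare-bones ReAct loop written by hand (a sketch of the pattern, not a library API): the model alternates Thought and Action lines, our code executes each Action and appends the Observation, and the loop ends at a finish action. The lookup "tool" here is a hard-coded toy.

import re
from openai import OpenAI

client = OpenAI()
ACTIONS = {"lookup": lambda term: {"France": "Paris"}.get(term, "no result")}

transcript = """Answer the question by interleaving lines of the form:
Thought: <your reasoning>
Action: lookup[<term>] or finish[<answer>]
Observation: <result, supplied by the system>

Question: What is the capital of France?
"""

for _ in range(5):  # cap the number of reasoning steps
    r = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": transcript}],
        stop=["Observation:"],  # our code fills in observations, not the model
    )
    step = r.choices[0].message.content
    transcript += step + "\n"
    done = re.search(r"finish\[(.*?)\]", step)
    if done:
        print("Answer:", done.group(1))
        break
    act = re.search(r"lookup\[(.*?)\]", step)
    obs = ACTIONS["lookup"](act.group(1)) if act else "invalid action"
    transcript += f"Observation: {obs}\n"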

Summary