
This chapter covers
- The definition of prompt engineering
- Prompt engineering best practices
- Zero-shot and few-shot prompting
- Prompting LLMs for historical time-series datasets
I’ll bet that before actually opening this book, many—perhaps most—of you expected prompt engineering to be a primary focus. And yet, here we are in chapter 6 (halfway through the book!), and we’re only just hitting the topic. What’s the story here?
In my defense, I’d say that it’s partly about what we mean when we use the term. For some, prompt engineering covers a lot of what you’ll figure out on your own just by having fun experimenting with ChatGPT or Midjourney. It matters, but it doesn’t require a whole book.
But I’d also argue that what I’ve given you so far—and what’s yet to come in the remaining chapters—goes far beyond prompts. Sure, the phrasing you use is important, but the API and programmatic tools we’ve been discovering will take your prompts a lot further.
There’s one more thing going on. In my experience, as GPT and other generative AI models improve, they’re getting better at figuring out what you want, even when you provide a weak prompt. I can’t count the number of times that GPT has seen right through my spelling and grammar errors, poor wording, or sometimes even outright technical mistakes. So many of the problems that popular prompt engineering advice seeks to prevent are already handled easily by the AI itself.