16 Pretrained large language models and the LangChain library

This chapter covers

  • Using pretrained large language models for text, image, speech, and code generation
  • Few-shot, one-shot, and zero-shot prompting techniques
  • Creating a zero-shot personal assistant with LangChain
  • Limitations and ethical concerns of generative AI

The rise of pretrained large language models (LLMs) has transformed the field of natural language processing (NLP) and generative tasks. OpenAI’s GPT series, a notable example, showcases the extensive capabilities of these models in producing lifelike text, images, speech, and even code. Using these pretrained LLMs effectively is essential for several reasons. It lets us deploy advanced AI functionality without the vast resources needed to develop and train such models from scratch. Moreover, understanding these LLMs paves the way for innovative applications that leverage NLP and generative AI, fostering progress across various industries.
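To give a flavor of what follows, the short sketch below calls the OpenAI API for plain text generation. It is a minimal illustration rather than one of the chapter's listings, and it assumes the openai Python package (version 1.x) is installed and that your API key is stored in the OPENAI_API_KEY environment variable; the model name is only an example.

from openai import OpenAI

# The client reads the OPENAI_API_KEY environment variable by default.
client = OpenAI()

# Send a single user message to a chat model and print the generated reply.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name; use any chat model available to you
    messages=[
        {"role": "user",
         "content": "Summarize what a large language model is in one sentence."}
    ],
)
print(response.choices[0].message.content)

Later sections of the chapter build on this same pattern to generate code, images, and speech, and then wrap such calls in LangChain tools.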

In a world increasingly influenced by AI, mastering the integration and customization of pretrained LLMs offers a crucial competitive advantage. As AI evolves, leveraging these sophisticated models becomes vital for innovation and success in the digital landscape.

16.1 Content generation with the OpenAI API

16.1.1 Text generation tasks with the OpenAI API

16.1.2 Code generation with the OpenAI API

16.1.3 Image generation with OpenAI DALL-E 2

16.1.4 Speech generation with the OpenAI API

16.2 Introduction to LangChain

16.2.1 The need for the LangChain library

16.2.2 Using the OpenAI API in LangChain

16.2.3 Zero-shot, one-shot, and few-shot prompting

16.3 A zero-shot know-it-all agent in LangChain

16.3.1 Applying for a Wolfram Alpha API key

16.3.2 Creating an agent in LangChain

16.3.3 Adding tools by using OpenAI GPTs

16.3.4 Adding tools to generate code and images

16.4 Limitations and ethical concerns of LLMs

16.4.1 Limitations of LLMs

16.4.2 Ethical concerns for LLMs

Summary