7 Coding infrastructure and managing deployments

This chapter covers

  • Creating a Dockerfile with the assistance of Copilot
  • Drafting your infrastructure as code using large language models
  • Managing Docker images with a container registry
  • Harnessing the power of Kubernetes
  • Releasing your code effortlessly using GitHub Actions

There is nothing more demoralizing than having an application sit unused. For this reason, every competent developer's goal is to fast-track a well-tested application to production. Because we spent the last chapter testing our product, it is now ready for launch.

This chapter focuses on that pivotal moment of transition from development to launch. During this critical phase, understanding deployment strategies and best practices becomes essential to a successful release.

With our application secured and tested, it's time to shift our attention to launching the product. To this end, we will use the capabilities of large language models (LLMs) to explore deployment options tailored to cloud infrastructure.

By harnessing LLMs as we work through these deployment options and methodologies, we can confidently navigate the complex landscape of launching our product, delivering a robust and scalable solution to our customers while taking advantage of the benefits of cloud computing.

7.1 Building a Docker image and “deploying” it locally
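
The exact Dockerfile that Copilot suggests will depend on how the application is structured, but a minimal sketch for a Python web service gives a sense of what this section builds toward. The base image, port, and module path below are assumptions for illustration, not the book's actual values.

  # Minimal sketch of a Dockerfile for a Python web service (details assumed)
  FROM python:3.11-slim
  WORKDIR /app

  # Install dependencies first so this layer is cached between builds
  COPY requirements.txt .
  RUN pip install --no-cache-dir -r requirements.txt

  # Copy the rest of the application source
  COPY . .

  # Placeholder port and module path
  EXPOSE 8000
  CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]

Building and "deploying" locally then amounts to docker build -t my-app:latest . followed by docker run -p 8000:8000 my-app:latest, where my-app is a placeholder image name.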

7.2 Standing up infrastructure by copiloting Terraform
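
Terraform configurations vary widely with the target infrastructure, so take the following HCL only as a hedged sketch: it declares an AWS provider and a container registry repository of the kind later sections could push images to. The region and repository name are placeholders.

  terraform {
    required_providers {
      aws = {
        source  = "hashicorp/aws"
        version = "~> 5.0"
      }
    }
  }

  provider "aws" {
    region = "us-east-1"   # placeholder region
  }

  # Hypothetical ECR repository to hold the application image
  resource "aws_ecr_repository" "app" {
    name = "my-app"
  }

Running terraform init and then terraform apply would stand this up; the configuration drafted with an LLM's help in this section will be more involved.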

7.3 Moving a Docker image around (the hard way)
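
One plausible reading of "the hard way" is moving the image by hand: exporting it to a tarball, copying the file across, and loading it on the target machine. The image name and host below are placeholders.

  docker save my-app:latest -o my-app.tar                  # export the image to a tarball
  scp my-app.tar user@remote-host:/tmp/                    # copy it to the target machine
  ssh user@remote-host "docker load -i /tmp/my-app.tar"    # import it there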

7.4 Moving a Docker image around (the easy way)
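
The "easy way" revolves around a container registry, as the chapter bullets suggest. A sketch of tagging and pushing the image to an Amazon ECR repository, with a placeholder account ID, region, and repository name, looks like this:

  # Authenticate Docker against the (placeholder) ECR registry
  aws ecr get-login-password --region us-east-1 \
    | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

  # Tag the local image with the registry URI and push it
  docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
  docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest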

7.5 Deploying our application onto AWS Elastic Kubernetes Service
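
Deploying onto Elastic Kubernetes Service ultimately means applying Kubernetes manifests against the cluster. As a minimal sketch (the image URI, port, and replica count are assumptions), a Deployment and a LoadBalancer Service might look like the following, applied with kubectl apply -f:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: my-app
  spec:
    replicas: 2
    selector:
      matchLabels:
        app: my-app
    template:
      metadata:
        labels:
          app: my-app
      spec:
        containers:
          - name: my-app
            image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest   # placeholder
            ports:
              - containerPort: 8000
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: my-app
  spec:
    type: LoadBalancer
    selector:
      app: my-app
    ports:
      - port: 80
        targetPort: 8000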

7.6 Setting up a continuous integration/continuous deployment pipeline in GitHub Actions
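
A continuous integration/continuous deployment pipeline in GitHub Actions lives in a workflow file under .github/workflows/. The sketch below builds the image and pushes it to a registry on every push to main; the action names are real, but the secrets, region, and repository URI are placeholders.

  name: ci-cd
  on:
    push:
      branches: [main]

  jobs:
    build-and-push:
      runs-on: ubuntu-latest
      steps:
        - uses: actions/checkout@v4

        - name: Configure AWS credentials
          uses: aws-actions/configure-aws-credentials@v4
          with:
            aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
            aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
            aws-region: us-east-1

        - name: Log in to Amazon ECR
          uses: aws-actions/amazon-ecr-login@v2

        - name: Build, tag, and push the image
          run: |
            docker build -t my-app:latest .
            docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
            docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest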

Summary