
FastAPI with GitHub Actions and GHCR: Continuous Delivery Made Simple
by: Hector Martinez




In this tutorial, you will learn how to set up a Continuous Delivery (CD) pipeline for deploying a FastAPI application using GitHub Actions. We will guide you through the process of building and packaging your FastAPI application into a Docker image, pushing it to GitHub Container Registry (GHCR), and automating the deployment process. You’ll also learn about key steps such as configuring Docker, managing Docker images, and generating security and vulnerability reports with tools like Grype and Syft.

This lesson is the last in a 4-part series on GitHub Actions:

  1. Introduction to GitHub Actions for Python Projects
  2. Setting Up GitHub Actions CI for FastAPI: Intro to Taskfile and Pre-Jobs
  3. Enhancing GitHub Actions CI for FastAPI: Build, Test, and Publish
  4. FastAPI with GitHub Actions and GHCR: Continuous Delivery Made Simple (this tutorial)

To learn how to set up Docker and GHCR for Continuous Delivery with GitHub Actions, just keep reading.


FastAPI with GitHub Actions and GHCR: Continuous Delivery Made Simple

In this GitHub Actions CI/CD series, we began with an introduction to Continuous Integration and Continuous Deployment (CI/CD) for Python projects. We’ve covered a wide range of CI topics, from the basics of setting up GitHub Actions for CI workflows, configuring triggers for running pipelines, and automating tasks using Taskfile, to building and testing FastAPI applications. Our journey through the CI pipeline also included publishing test results and creating a release in the form of a Python wheel file, which was then uploaded to GitHub as part of the CI release process.

Each step so far has focused on improving the efficiency and reliability of our development process by automating repetitive tasks, streamlining testing, and ensuring that our application maintains high standards of code quality.


Transition to Continuous Deployment (CD)

Now that we’ve successfully covered the CI aspects of the pipeline, we turn our attention to the final piece: Continuous Deployment (CD). This lesson will focus on how to seamlessly move from code validation and testing (CI) to actually deploying your FastAPI application in a live environment (CD).

In a FastAPI project or similar web applications, continuous deployment ensures that each time a new feature or bug fix is committed, the updated application is automatically built, packaged, and deployed to a production environment. This allows developers to focus more on coding and less on the manual process of deploying changes, resulting in faster, more frequent releases.


Why Continuous Deployment Matters for FastAPI Projects

In modern software development, Continuous Deployment (CD) ensures that code changes are automatically deployed, reducing the need for manual intervention. For FastAPI projects, this is particularly important as it allows developers to focus on building new features rather than managing deployments.

Before we move forward, it’s important to note that this lesson assumes you are familiar with Docker, including concepts such as Dockerization, creating images, containers, and running applications through containers. If you’re new to Docker or would like a refresher, we highly recommend checking out our comprehensive Docker series on PyImageSearch, which covers everything you need to know to get started.

Continuous Deployment brings several benefits, including:

  • Efficiency: Automating the deployment process reduces the need for manual intervention, ensuring that new changes are consistently deployed without delays.
  • Reduced Human Error: Manual deployments are prone to mistakes. By automating the CD process, you eliminate the risks associated with manual setups, configurations, and deployments.
  • Faster Time-to-Market: CD allows your application to be updated and deployed faster, meaning bug fixes, new features, and updates can be rolled out with minimal downtime.
  • Consistent Environments: With Docker and GitHub Container Registry (GHCR), you can create standardized deployment environments that guarantee consistency between development, testing, and production environments.

In today’s lesson, we’ll set up Continuous Deployment for our FastAPI project using Docker and GitHub Container Registry (GHCR). By the end of this tutorial, you’ll understand how to implement:

  • Docker Integration: Building Docker images for your FastAPI application.
  • GitHub Container Registry (GHCR): Pushing the Docker images to GHCR for easy management and deployment.
  • Automated Deployment: Using GitHub Actions to automate the entire deployment process.
  • Security Measures: Scanning the Docker image for vulnerabilities using Grype and Syft, and generating Bill of Materials (BOM) reports.

Below is an image that represents our final CD pipeline. This visual shows how Docker and GHCR are integrated into our Continuous Deployment workflow.

As you can see from the diagram, this CD pipeline takes care of packaging the FastAPI application as a Docker image, scanning it for vulnerabilities, and pushing it to the GitHub Container Registry for secure deployment.

Now that we’ve discussed the importance of CD and what we aim to achieve in this lesson, let’s move forward by explaining the structure of the CD YAML file and how each part contributes to the deployment process.


Configuring Your Development Environment

Since this is a CI/CD-focused lesson, you might expect to configure and install the necessary libraries for running code locally. However, in this case, libraries like FastAPI, Pillow, Gunicorn, PyTest, and Flake8 — although required — are only needed within the CI pipeline itself. These dependencies will be installed automatically when the pipeline runs in GitHub Actions, meaning there’s no need to configure them on your local development environment unless you’re testing locally.

To clarify: in this guide, the requirements.txt file in your repository ensures that GitHub Actions installs all required packages for you during the pipeline execution. Therefore, you can skip installing these dependencies locally unless you’re developing or testing outside the CI pipeline.
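For reference, a minimal requirements.txt covering the libraries named above might look like the sketch below; the actual file in the repository may pin exact versions and include extras (e.g., a serving library such as uvicorn):

# Illustrative requirements.txt (the repository's copy may pin exact versions)
fastapi
pillow
gunicorn
pytest
flake8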


Project Directory Structure for Following Lessons

We first need to review our project directory structure.

Start by accessing the “Downloads” section of this tutorial to retrieve the source code and example images.

From there, take a look at the directory structure:

.
├── .github
│   └── workflows
│       ├── cd.yml
│       └── ci.yml
├── deployment
│   ├── Taskfile.yml
│   ├── docker
│   │   └── Dockerfile
│   └── scripts
│       └── clean_ghcr_docker_images.py
├── main.py
├── model.script.pt
├── pyimagesearch
│   ├── __init__.py
│   └── utils.py
├── requirements.txt
├── setup.py
└── tests
    ├── test_image.png
    ├── test_main.py
    └── test_utils.py

7 directories, 14 files

Since this lesson focuses specifically on Continuous Deployment (CD), we will not be covering all the components of the project. Instead, we’ll focus on the critical elements that drive the CD process — such as Dockerfile, cd.yml, and relevant scripts within the deployment folder. These components are essential for automating the build and deployment process, ensuring smooth delivery of your FastAPI application through Docker and GitHub Actions.

The rest of the project components, such as ci.yml, Taskfile, and the testing utilities, have already been covered in previous lessons in this series. We also previously discussed how to deploy a Vision Transformer (ViT) model using FastAPI in a dedicated post, Deploying a Vision Transformer with FastAPI, which further explores model deployment with FastAPI.

In the .github/workflows/ directory, we have:

  • cd.yml: This GitHub Actions workflow is responsible for Continuous Deployment (CD). In this lesson, we will explore how this file handles building Docker images and pushing them to a container registry like GitHub Container Registry (GHCR). Additionally, it handles tasks like cleaning up old Docker images and generating vulnerability reports for each image.

In the deployment/ directory, we have:

  • Dockerfile: The Dockerfile is used to containerize the FastAPI application. In this lesson, we’ll cover how Docker images are built from the Dockerfile and then pushed to GHCR as part of the CD process (a sketch of such a Dockerfile follows this list).
  • scripts/: This folder includes important utility scripts. For example, clean_ghcr_docker_images.py is used to clean up old Docker images from the GitHub Container Registry (GHCR) to manage storage effectively. We’ll show how this script is integrated into the CD workflow to automate Docker image management.
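For orientation, here is a minimal sketch of what deployment/docker/Dockerfile might look like for an application like this one. The base image, exposed port, and Gunicorn/Uvicorn serving command are assumptions for illustration; the actual Dockerfile ships with the downloadable source code.

# Sketch of a FastAPI Dockerfile (assumed; not the exact file from the repo)
FROM python:3.9-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code, tests, and model weights
COPY . .

EXPOSE 8000

# Serve the app with Gunicorn managing Uvicorn workers (assumed command; requires uvicorn installed)
CMD ["gunicorn", "main:app", "--workers", "2", "--worker-class", "uvicorn.workers.UvicornWorker", "--bind", "0.0.0.0:8000"]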

In the root directory, we have:

  • requirements.txt: This file lists all the dependencies required for the FastAPI application. It will be used during the CD process to ensure that the containerized application includes all the necessary dependencies.
  • setup.py: While not the focus of this lesson, this script defines the application’s packaging. It was discussed earlier in the CI process, where we built and published a Python wheel file.

In this post, the primary focus is on Continuous Deployment (CD), where we automate the process of building Docker images, pushing them to a container registry, and scanning them for vulnerabilities. The lesson will explain how these components work together to enable seamless deployments using Docker, GitHub Actions, and GHCR.


What Is GitHub Container Registry (GHCR)?

The GitHub Container Registry (GHCR) is a service provided by GitHub that allows developers to store and manage container images alongside their code repositories. It provides a secure, scalable way to manage Docker images, supporting both public and private images, and integrates seamlessly with GitHub’s ecosystem.

Key features of GHCR include:

  • Seamless Integration with GitHub Actions: GHCR works effortlessly with GitHub Actions, making it straightforward to set up CI/CD pipelines that automatically build, test, and push images to GHCR. This integration enables developers to manage their full deployment lifecycle within GitHub, from code to container to production.
  • Enhanced Security: GHCR offers fine-grained access controls, allowing users to set permissions at both the repository and package levels. Additionally, it supports private images, so you can restrict access to sensitive container images if necessary. GHCR also supports automated security scanning, ensuring that any vulnerabilities in your images are identified and addressed.
  • Built-in Versioning: With GHCR, each image you push is tagged, making it easy to track different versions of a container. This version control simplifies rolling back to a previous image in case of issues, helping maintain stability in your applications.
  • Efficient Image Management: GHCR integrates with Docker CLI and GitHub’s own UI, making it easy to manage, inspect, and delete images. It also allows developers to retain only the most recent versions of images, which helps optimize storage.

By using GHCR, developers can achieve a streamlined workflow that keeps their containers secure, versioned, and accessible, supporting rapid deployment and iteration of applications.

If you’d like to learn more about GHCR and its capabilities, feel free to check out the official GitHub documentation on the Container Registry.


Breaking Down the cd.yml Workflow: Automating Continuous Deployment for FastAPI

In this section, we will break down and explain each part of the cd.yml file in detail. The workflow automates the deployment of your FastAPI application using Docker and GitHub Container Registry (GHCR), ensuring secure and efficient continuous delivery of your application. Additionally, we’ll cover the image management and cleanup process, leveraging custom scripts to maintain efficiency in the registry.


Triggering the Workflow

name: CD

on:
  push:
    branches: ["main"]

The on: push section specifies the event that triggers the deployment workflow. In this case, any push to the main branch will trigger the Continuous Deployment (CD) process. This is a typical setup for deployments, as changes pushed to main usually indicate readiness for production.


Concurrency Control

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

Concurrency control ensures that only one workflow run is active at a time for the same branch. If a new commit is pushed to the main branch while an existing deployment is still in progress, the ongoing process is canceled, preventing conflicts and saving resources.


Environment Variables

env:
  DOCKER_REGISTRY: ghcr.io
  DOCKER_ORGANIZATION: ${{ github.repository_owner }}
  DOCKER_USER: ${{ github.actor }}
  DOCKER_PASSWORD: ${{ secrets.GITHUB_TOKEN }}

This section defines environment variables used throughout the workflow. They specify which Docker registry to push images to (defaulting to GHCR), which organization or user owns the images, and the credentials used for Docker login.

  • DOCKER_REGISTRY: The Docker registry for storing your images (default is GitHub Container Registry).
  • DOCKER_ORGANIZATION: This is set to the GitHub repository owner, allowing the correct push to the organization’s registry.
  • DOCKER_USER and DOCKER_PASSWORD: These variables provide the credentials for authenticating with the Docker registry.

Job Definition: Build and Package

jobs:
  build-and-package:
    runs-on: ubuntu-latest
    name: Build and deploy Docker Image for the FastAPI Application
    steps:

The build-and-package job runs the deployment process on an Ubuntu environment and includes multiple steps to build, package, and push the Docker image for the FastAPI application.


Checkout the Repository

      - name: Checkout repository 🛎️
        uses: actions/checkout@v2
        with:
          fetch-depth: 0

This step pulls the complete repository onto the GitHub Actions runner so that the subsequent steps can access the code. The fetch-depth: 0 option ensures that the entire Git history is retrieved, which might be necessary for tagging and versioning purposes.


Set Up Python Runtime

      - name: Set up Python runtime 🐍
        uses: actions/setup-python@v5
        with:
          python-version: "3.9"

The Python runtime is set up here, specifying version 3.9. Since this is a FastAPI application written in Python, this environment is required for running any tasks related to Python, such as testing or dependency management.


Install Task Runner and Tools

      - name: Install Task 🗳️
        uses: arduino/setup-task@v2

Task is a task runner that helps automate actions in the workflow. In this case, it will be used for tasks such as scanning the Docker image or cleaning the Docker registry.

      - name: Install Grype and Syft 🛠️
        run: |
          curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin
          curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin

Grype and Syft are security tools used for vulnerability scanning and generating a Bill of Materials (BOM) for Docker images. These tools are installed here and will be used later in the workflow to scan the Docker image.


Tagging the Docker Image

      - name: Set image tags
        run: |
          TAG=$(git describe --tags `git rev-list --tags --max-count=1` 2>/dev/null || echo "v0.0.1")
          TAG=${TAG}-$(git rev-parse --short HEAD)
          echo "version=${TAG}" >> $GITHUB_ENV

This step generates a unique tag for the Docker image by combining the most recent Git tag with the current short commit hash (falling back to v0.0.1 if no tag exists). For example, if the latest Git tag is v1.0.2 and the short hash is aaba0e4, the resulting image tag is v1.0.2-aaba0e4. This tag is later used to version the Docker image when pushing it to the registry.


Convert Repository Name to Lowercase

      - name: Convert repository name to lowercase
        id: convert-names
        run: |
          DOCKER_IMAGE=$(echo ${{ github.event.repository.name }} | tr '[:upper:]' '[:lower:]')
          DOCKER_ORGANIZATION=$(echo ${{ env.DOCKER_ORGANIZATION }} | tr '[:upper:]' '[:lower:]')
          echo "DOCKER_IMAGE=${DOCKER_IMAGE}" >> $GITHUB_ENV
          echo "DOCKER_ORGANIZATION=${DOCKER_ORGANIZATION}" >> $GITHUB_ENV

Docker registry names must be in lowercase. This step ensures that the repository name and organization are formatted correctly before being used in the next steps.


Set Up Docker Buildx

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

Docker Buildx is an extended Docker CLI plugin that supports building multi-platform images. It’s configured here to ensure the Docker image can be efficiently built and pushed.
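As a standalone illustration of what Buildx enables, a multi-platform build and push could be run by hand like this (the platforms and image name are illustrative; in our workflow, the docker/build-push-action step below drives Buildx for us):

# Build and push a multi-architecture image in one step (illustrative)
docker buildx build --platform linux/amd64,linux/arm64 \
  -t ghcr.io/pyimagesearch/github-actions-python:latest --push .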


Log in to GitHub Container Registry (GHCR)

      - name: Login to GitHub Container Registry 🐳
        uses: docker/login-action@v3
        with:
          registry: ${{ env.DOCKER_REGISTRY }}
          username: ${{ env.DOCKER_USER }}
          password: ${{ env.DOCKER_PASSWORD }}

This step authenticates the Docker CLI to the GitHub Container Registry (GHCR) using the credentials defined in the environment variables.


Build and Push Docker Image

      - name: Build and push Docker image 🐳
        uses: docker/build-push-action@v5
        with:
          context: .
          file: ./deployment/docker/Dockerfile
          push: true
          tags: |
            ${{ env.DOCKER_REGISTRY }}/${{ env.DOCKER_ORGANIZATION }}/${{ env.DOCKER_IMAGE }}:latest
            ${{ env.DOCKER_REGISTRY }}/${{ env.DOCKER_ORGANIZATION }}/${{ env.DOCKER_IMAGE }}:${{ env.version }}
          provenance: false

This is the core of the CD process. The Docker image is built using the Dockerfile located in ./deployment/docker/ and is pushed to the GHCR. The image is tagged as latest and also with the specific version generated earlier.

The image below shows the final result: the Docker image stored in the GitHub Container Registry (GHCR) after the deployment workflow has completed. In this case, the github-actions-python repository’s Docker image has been pushed to GHCR under the PyImageSearch organization. The image carries the v1.0.2-aaba0e4 tag as well as the latest tag, matching the tags we defined in the cd.yml file.

  • The Docker pull command displayed allows anyone with access to the repository to pull the Docker image and run it locally or in other environments (a usage sketch follows this list). This command is a result of successfully pushing the Docker image to GHCR during the workflow.
  • The Recent tagged image versions section shows the latest version of the Docker image, tagged v1.0.2-aaba0e4. This tag was automatically generated during the “Set Image Tags” step in the workflow, combining the Git tag and the commit hash.
  • The MIT License indicates that the image and repository are licensed under the MIT License, allowing for open usage and distribution under certain conditions.
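To try the published image yourself, the pull command from the GHCR page can be paired with docker run. A minimal sketch, assuming the image name shown above and that the container serves FastAPI on port 8000 (the port mapping is an assumption, not confirmed by the workflow):

# Pull the published image from GHCR (name taken from the tags above)
docker pull ghcr.io/pyimagesearch/github-actions-python:latest

# Run it locally; the 8000:8000 port mapping is an assumption for illustration
docker run --rm -p 8000:8000 ghcr.io/pyimagesearch/github-actions-python:latest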

Clean Up Old Docker Images

      - name: Clean docker registry 🧹
        working-directory: deployment
        run: |
          pip install requests
          pip install typer
          task clean-ghcr-repo DOCKER_ACCOUNT_NAME=${{ env.DOCKER_ORGANIZATION }} PROJECT_NAME=${{ env.DOCKER_IMAGE }} GHCR_PASSWORD=${{ env.DOCKER_PASSWORD }} KEEP=5

This step runs the clean-ghcr-repo task, which deletes old Docker images in the registry, keeping only the last 5. This prevents storage from becoming bloated with outdated images.
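The clean-ghcr-repo task itself is defined in deployment/Taskfile.yml. A minimal sketch of how such a task could wrap the cleanup script is shown below; the exact definition ships with the downloadable source, and the variable plumbing here is an assumption:

# Sketch of a possible clean-ghcr-repo task in deployment/Taskfile.yml
version: "3"

tasks:
  clean-ghcr-repo:
    desc: Delete old GHCR image versions, keeping only the most recent KEEP
    cmds:
      # Positional arguments mirror the Typer CLI of clean_ghcr_docker_images.py
      - python scripts/clean_ghcr_docker_images.py {{.DOCKER_ACCOUNT_NAME}} {{.PROJECT_NAME}} {{.GHCR_PASSWORD}} {{.KEEP}}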


Scan Docker Image

      - name: Scan Docker image 🖥️
        working-directory: deployment
        run: |
          task scan TAG=${{ env.version }} DOCKER_REGISTRY=${{ env.DOCKER_REGISTRY }} DOCKER_ACCOUNT_NAME=${{ env.DOCKER_ORGANIZATION }} PROJECT_NAME=${{ env.DOCKER_IMAGE }}

The Docker image is scanned using Grype for any security vulnerabilities. This ensures that the image being deployed is secure and free from known issues.
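The scan task presumably shells out to Grype, which we installed earlier. Run by hand against the image pushed above, the equivalent command would be roughly this (image reference assumed from the earlier tags):

# Scan the pushed image for known vulnerabilities
grype ghcr.io/pyimagesearch/github-actions-python:v1.0.2-aaba0e4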


Generate BOM (Bill of Materials) Reports

      - name: Generate BOM Reports 📝
        working-directory: deployment
        run: |
          task scan-report-json TAG=${{ env.version }} DOCKER_REGISTRY=${{ env.DOCKER_REGISTRY }} DOCKER_ACCOUNT_NAME=${{ env.DOCKER_ORGANIZATION }} PROJECT_NAME=${{ env.DOCKER_IMAGE }}
          task bom-report-json TAG=${{ env.version }} DOCKER_REGISTRY=${{ env.DOCKER_REGISTRY }} DOCKER_ACCOUNT_NAME=${{ env.DOCKER_ORGANIZATION }} PROJECT_NAME=${{ env.DOCKER_IMAGE }}

This step generates BOM reports using Syft, which details the dependencies and software components of the Docker image. This is important for tracking the software composition of each build.
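These report tasks presumably wrap Grype's and Syft's JSON output modes. Standalone equivalents might look like this, with the image reference and output paths assumed for illustration (the reports/ directory matches the artifact path used later):

# JSON vulnerability report with Grype
grype ghcr.io/pyimagesearch/github-actions-python:v1.0.2-aaba0e4 -o json > reports/scan-report.json

# Software bill of materials with Syft (syft-json format)
syft ghcr.io/pyimagesearch/github-actions-python:v1.0.2-aaba0e4 -o syft-json > reports/bom-report.json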


Store Generated Reports

      - name: Store generated report
        uses: actions/upload-artifact@v2
        with:
          name: scan-reports
          path: deployment/reports
          if-no-files-found: error

In the final steps of the workflow, after scanning and generating Bill of Materials (BOM) reports for the Docker image, the generated reports are stored as artifacts in GitHub Actions. These reports provide valuable insights into the vulnerabilities and dependencies of the deployed Docker image.

In the screenshot below, we see the result of this step. The artifact named scan-reports, which is 25.2 MB in size, contains all the scan and BOM reports produced during the deployment workflow. These artifacts are available for download and further analysis, ensuring that the deployment process meets security and compliance requirements.

Storing these reports as artifacts is an important step for auditability and troubleshooting. You can download the reports, review them for any potential issues, and store them as records of what was deployed and scanned at any given time.

This concludes the breakdown of the cd.yml file. Up next, we’ll cover the clean_ghcr_docker_images.py script in detail and explain how it helps manage and clean up old Docker images stored in GHCR.


Streamlining Docker Image Management in GHCR

In Continuous Deployment workflows, especially when dealing with containerized applications, it is common to generate multiple Docker images during the development lifecycle. Over time, this can clutter your container registry and consume storage, making it difficult to manage and slowing down the CI/CD process. This is where Docker image cleanup becomes essential.


Why Is Cleanup Necessary?

  • Storage Optimization: Every time a new Docker image is built and pushed to the registry, it consumes storage space. Without a cleanup mechanism, these images accumulate, leading to inefficient use of storage resources. Most container registries, including GitHub Container Registry (GHCR), have storage limits that can be quickly exhausted without regular cleanups.
  • Performance Enhancement: Having a large number of unused or outdated Docker images can slow down operations that interact with the registry. By removing old or unnecessary images, we streamline image lookups and pull operations, improving overall performance.
  • Cost Efficiency: In environments with paid plans for storage or strict storage quotas, removing outdated Docker images helps avoid additional costs. Regular cleanups ensure that you’re only paying for the storage of essential, up-to-date images.
  • Security and Compliance: Old Docker images may contain outdated dependencies or security vulnerabilities. Regularly cleaning up unused images ensures that only the most recent, secure versions of your application images are available, reducing the risk of deploying insecure software.

Step-by-Step Code Breakdown

This Python script is designed to manage and clean up older Docker images stored in the GitHub Container Registry (GHCR). The script is particularly useful for projects that use Continuous Delivery (CD) pipelines and produce multiple Docker image versions over time, which can accumulate and take up unnecessary storage. To prevent this, the script helps automate the deletion of old images while keeping only a specified number of recent versions.


Key Functionality

  • Fetching Docker Image Versions: The script connects to the GitHub API and retrieves the available versions of Docker images stored in the container registry for a given repository or organization.
  • Deleting Older Versions: It checks the number of existing image versions, and if this number exceeds a specified threshold (e.g., keep the latest 5 versions), it automatically deletes the older versions.
  • GitHub Authentication: The script uses an authentication token (GitHub Token) to securely communicate with the GitHub API and perform actions like fetching versions and deleting images.
  • Organization or User Repositories: The script supports both organization and user repositories, automatically detecting which type of repository it is interacting with.

import requests
import typer

# Main function where the logic starts
def main(repo: str, project: str, password: str, keep: int):
    """
    This function cleans up old Docker images in the GHCR, keeping only the most recent ones.

    :param repo: The name of the organization/user.
    :param project: The name of the project/repository in the container registry.
    :param password: The GitHub token for authentication.
    :param keep: The number of latest images to keep in the registry.
    """

The script starts by importing two essential libraries:

  • requests: Used for making HTTP requests to GitHub’s API.
  • typer: A library to build command-line interfaces (CLI). It allows passing parameters like repo, project, and keep through the terminal.

The main function contains the core logic of the script. It takes in four parameters:

  • repo: The name of the GitHub organization or user that owns the repository.
  • project: The name of the Docker project (container) to be cleaned.
  • password: The GitHub API token used for authentication.
  • keep: The number of Docker image versions to keep.

Setting Up API Headers and Base URLs

    headers = {
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {password}",
        "X-GitHub-Api-Version": "2022-11-28",
    }

    base_urls = [
        f"https://api.github.com/orgs/{repo}/packages/container/{project}/versions",
        f"https://api.github.com/users/{repo}/packages/container/{project}/versions",
    ]

The headers dictionary defines the required headers for GitHub API requests:

  • Authorization: The GitHub API token for authentication.
  • Accept: Specifies that the response should be in JSON format.
  • X-GitHub-Api-Version: Specifies the version of the GitHub API to use.

The base_urls list contains two possible URLs:

  • For fetching Docker images from an organization’s repository.
  • For fetching Docker images from a user’s repository.

Fetching Docker Image Versions

    response = None
    url_used = None
    for url in base_urls:
        print(f"Requesting URL: {url}")
        try:
            response = requests.get(url, headers=headers)
            response.raise_for_status()
            url_used = url
            break
        except requests.exceptions.RequestException as e:
            print(f"Error fetching package versions from {url}: {e}")

The script iterates over both potential URLs and attempts to make a GET request to fetch the available Docker image versions. If successful, it stores the URL used and breaks out of the loop. If a request fails, it catches the error and moves on to the next URL.


Handling the Response

    if not response or response.status_code != 200:
        print("Error fetching package versions.")
        return

    try:
        results = response.json()
    except ValueError as e:
        print(f"Error parsing JSON response: {e}")
        return

If the response from GitHub is unsuccessful (status code not 200), the script outputs an error and terminates. It then tries to parse the response as JSON, which contains the Docker image versions. If the JSON parsing fails, an error is printed.


Deleting Old Docker Images

    if len(results) > keep:
        for result in results[keep:]:
            if 'orgs' in url_used:
                delete_url = f"https://api.github.com/orgs/{repo}/packages/container/{project}/versions/{result['id']}"
            else:
                delete_url = f"https://api.github.com/users/{repo}/packages/container/{project}/versions/{result['id']}"
            print(f"Deleting artifact: {delete_url}")
            try:
                delete_response = requests.delete(delete_url, headers=headers)
                delete_response.raise_for_status()
                print(f"Delete response status code: {delete_response.status_code}")
            except requests.exceptions.RequestException as e:
                print(f"Error deleting artifact: {e}")

The script checks whether the number of available versions exceeds the specified keep value (e.g., more than 5 versions exist). If so, it deletes every version beyond that threshold; because the GitHub API returns versions newest first, results[keep:] contains the older images. The appropriate delete URL is built based on whether the repository belongs to an organization or a user, and the script sends a DELETE request to GitHub for each image version to be removed.
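One detail the breakdown above omits is the entry point that wires main to Typer. The script presumably ends with the standard pattern, which turns the four parameters into positional command-line arguments:

if __name__ == "__main__":
    # Typer converts main's parameters into CLI arguments
    typer.run(main)

Invoked from a shell, that looks like (placeholder values): python clean_ghcr_docker_images.py pyimagesearch github-actions-python <GITHUB_TOKEN> 5.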



Summary

In this blog post, we covered the critical final step in building a robust CI/CD pipeline for FastAPI applications — Continuous Deployment (CD). Starting with a high-level overview of why CD matters for FastAPI projects, we transitioned into the technical aspects of automating deployments using GitHub Actions.

The detailed breakdown of the cd.yml file walks through each step, from building and pushing Docker images to the GitHub Container Registry (GHCR), scanning for vulnerabilities, and managing old Docker images. Each step was explained to help you understand how the pipeline ensures your deployment is efficient and secure.

Additionally, we explored the importance of Docker image management in GHCR, using the clean-ghcr-repo script to efficiently manage storage by deleting old Docker images.

By the end of this tutorial, you’ll not only have a fully automated pipeline for deploying FastAPI applications using Docker and GitHub Actions, but you’ll also be able to pull the Docker image from GHCR, run the image container, and boom — have a FastAPI server running on your machine. You can then use the API endpoint to run MNIST inference with the Vision Transformer (ViT).

This brings the entire process full circle — from developing the code to deploying a ViT model with FastAPI, to now having a fully automated pipeline that simplifies and streamlines every step of the deployment. With this workflow, you’re set up for rapid development, testing, and deployment with minimal manual effort.


Citation Information

Martinez, H. “FastAPI with GitHub Actions and GHCR: Continuous Delivery Made Simple,” PyImageSearch, P. Chugh, S. Huot, R. Raha, and P. Thakur, eds., 2024, https://pyimg.co/vq25w

@incollection{Martinez_2024_fastapi-with-github-actions-and-ghcr,
  author = {Hector Martinez},
  title = {FastAPI with GitHub Actions and GHCR: Continuous Delivery Made Simple},
  booktitle = {PyImageSearch},
  editor = {Puneet Chugh and Susan Huot and Ritwik Raha and Piyush Thakur},
  year = {2024},
  url = {https://pyimg.co/vq25w},
}


