
Building robust CI pipelines for Flask applications

Learn how to design and implement effective continuous integration pipelines for Flask applications that automate testing, linting, and deployment processes.

Introduction

Continuous Integration (CI) has evolved from a development best practice to an essential component of modern software delivery. For Flask applications, a well-designed CI pipeline automates testing, ensures code quality, and streamlines the path to deployment. This automation not only catches bugs earlier but also enforces consistency and reduces the cognitive overhead of manual quality checks.

This guide focuses on implementing CI pipelines that integrate seamlessly with the testing frameworks covered in our previous article on Flask application testing. We'll explore various CI platforms, configuration strategies, and best practices to help you establish reliable automation for your Flask projects.

By the end of this article, you'll understand how to:

  • Choose the right CI platform for your Flask projects
  • Configure comprehensive pipelines that execute your pytest suite
  • Integrate code quality tools into your workflow
  • Optimise pipeline performance for faster feedback
  • Set up deployment automation for different environments

Choosing a CI platform

Several CI platforms are available, each with its own strengths. Let's examine the most popular options for Flask applications.

Self-hosted options

Jenkins

Jenkins is a widely-used, open-source automation server with extensive customisation options.

Advantages:

  • Complete control over the environment
  • No usage limits or per-minute billing
  • Extensive plugin ecosystem
  • Can run in your own infrastructure

Disadvantages:

  • Requires maintenance and infrastructure
  • Setup complexity is higher than cloud options
  • Security updates must be managed

Jenkins configuration typically uses a Jenkinsfile in your repository:

pipeline {
    agent {
        docker {
            image 'python:3.9'
        }
    }
    stages {
        stage('Setup') {
            steps {
                sh 'pip install -r requirements.txt'
                sh 'pip install -r requirements-dev.txt'
            }
        }
        stage('Test') {
            steps {
                sh 'pytest --cov=myapp --cov-report=xml --junitxml=test-reports/results.xml'
            }
            post {
                always {
                    junit 'test-reports/*.xml'
                    cobertura coberturaReportFile: 'coverage.xml'
                }
            }
        }
    }
}

Cloud-based options

GitHub Actions

GitHub Actions is deeply integrated with GitHub repositories and provides a streamlined CI experience.

Advantages:

  • Tight GitHub integration
  • Generous free tier (2,000 minutes/month for private repositories; free for public repositories)
  • Straightforward YAML configuration
  • Marketplace with pre-built actions

Disadvantages:

  • Primarily designed for GitHub repositories
  • Can become costly for large teams or compute-intensive pipelines

GitHub Actions uses workflow files in the .github/workflows directory:

name: Flask Tests

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: [3.8, 3.9]

    steps:
    - uses: actions/checkout@v2
    - name: Set up Python ${{ matrix.python-version }}
      uses: actions/setup-python@v2
      with:
        python-version: ${{ matrix.python-version }}
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install -r requirements.txt
        pip install -r requirements-dev.txt
    - name: Test with pytest
      run: |
        pytest --cov=myapp --cov-report=xml
    - name: Upload coverage to Codecov
      uses: codecov/codecov-action@v2

GitLab CI/CD

GitLab CI/CD is part of the GitLab platform and offers a comprehensive DevOps solution.

Advantages:

  • Integrated with GitLab's complete DevOps platform
  • Self-hosted option available
  • Robust pipeline capabilities with directed acyclic graph (DAG)
  • Container registry integration

Disadvantages:

  • More complex for simple projects
  • Free tier offers limited compute minutes (400 minutes/month)

GitLab CI/CD uses a .gitlab-ci.yml file:

stages:
  - test
  - lint
  - deploy

variables:
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.pip-cache"

cache:
  paths:
    - .pip-cache/

test:
  stage: test
  image: python:3.9-slim
  script:
    - pip install -r requirements.txt -r requirements-dev.txt
    - pytest --cov=myapp --cov-report=xml
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml
    paths:
      - coverage.xml

lint:
  stage: lint
  image: python:3.9-slim
  script:
    - pip install flake8 black
    - flake8 myapp
    - black --check myapp

CircleCI

CircleCI focuses on build speed and is known for its performance-optimised pipelines.

Advantages:

  • Fast build times
  • Caching system for dependencies
  • Flexible configuration
  • Workflows for complex pipelines

Disadvantages:

  • Free tier limited to one concurrent job
  • Can be more expensive for large teams

CircleCI uses a .circleci/config.yml file:

version: 2.1
jobs:
  test:
    docker:
      - image: cimg/python:3.9
    steps:
      - checkout
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "requirements.txt" }}
      - run:
          name: install dependencies
          command: |
            python -m venv venv
            . venv/bin/activate
            pip install -r requirements.txt
            pip install -r requirements-dev.txt
      - save_cache:
          paths:
            - ./venv
          key: v1-dependencies-{{ checksum "requirements.txt" }}
      - run:
          name: run tests
          command: |
            . venv/bin/activate
            pytest --cov=myapp --cov-report=xml
      - store_artifacts:
          path: coverage.xml

workflows:
  main:
    jobs:
      - test

Structuring effective CI pipelines

A comprehensive CI pipeline for Flask applications typically includes several stages:

1. Dependency installation

The first step is installing application and development dependencies:

# GitHub Actions example
- name: Install dependencies
  run: |
    python -m pip install --upgrade pip
    pip install -r requirements.txt
    pip install -r requirements-dev.txt

Tip

Use dependency caching to speed up your builds. Most CI platforms provide built-in caching mechanisms:
# GitHub Actions caching example
- uses: actions/cache@v2
  with:
    path: ~/.cache/pip
    key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements*.txt') }}
    restore-keys: |
      ${{ runner.os }}-pip-

2. Running tests with pytest

Execute your pytest suite with coverage reporting:

- name: Test with pytest
  run: |
    pytest --cov=myapp --cov-report=xml --cov-report=term

For more comprehensive test execution, consider adding parallel testing with multiple Python versions:

# GitHub Actions matrix strategy
strategy:
  matrix:
    python-version: [3.8, 3.9, '3.10']
    os: [ubuntu-latest, macos-latest]

3. Code quality checks

Integrate linting and formatting tools:

- name: Lint with flake8
  run: |
    flake8 myapp tests

- name: Check formatting with black
  run: |
    black --check myapp tests

4. Security scanning

Add security scanning to detect vulnerabilities:

- name: Security scan
  run: |
    pip install bandit safety
    bandit -r myapp
    safety check

5. Artifact generation

Create and store artifacts for deployment or review:

# GitLab CI example
  artifacts:
    paths:
      - dist/
      - coverage-report/
    expire_in: 1 week

Integrating testing tools from the pytest ecosystem

Let's see how to integrate the testing tools we discussed in our previous article into CI pipelines.

pytest-cov for coverage reports

Configure your CI pipeline to generate and publish coverage reports:

# GitHub Actions
- name: Test with pytest
  run: |
    pytest --cov=myapp --cov-report=xml
- name: Upload coverage to Codecov
  uses: codecov/codecov-action@v2

Many CI platforms support native coverage visualisation:

# GitLab CI
test:
  script:
    - pytest --cov=myapp --cov-report=xml
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml

pytest-xdist for parallel testing

Speed up your test suite by running tests in parallel:

- name: Run tests in parallel
  run: |
    pip install pytest-xdist
    pytest -n auto --cov=myapp

pytest-benchmark for performance monitoring

Track performance metrics over time:

- name: Run benchmarks
  run: |
    pytest --benchmark-json=bench.json benchmarks/
- name: Upload benchmark results
  uses: actions/upload-artifact@v2
  with:
    name: benchmark-results
    path: bench.json

Automating deployments from CI

A complete CI/CD pipeline includes deployment to various environments.

Staging deployment

Deploy to a staging environment after successful tests:

# GitHub Actions
deploy-staging:
  needs: test
  runs-on: ubuntu-latest
  if: github.ref == 'refs/heads/develop'
  steps:
    - uses: actions/checkout@v2
    - name: Deploy to staging
      run: |
        # Deploy commands here
        echo "Deploying to staging"

Production deployment

Deploy to production with additional safeguards:

# GitHub Actions
deploy-production:
  needs: deploy-staging
  runs-on: ubuntu-latest
  if: github.ref == 'refs/heads/main'
  environment: production
  steps:
    - uses: actions/checkout@v2
    - name: Deploy to production
      run: |
        # Production deployment commands
        echo "Deploying to production"

Container-based deployment

For Docker-based deployments:

# GitLab CI
build-image:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG

deploy:
  stage: deploy
  needs: [build-image, test]
  script:
    - kubectl set image deployment/myapp container=$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG

Advanced CI pipeline strategies

Selective testing

Run only tests affected by changes to speed up your pipeline:

# Requires the pytest-testmon plugin; cache its .testmondb file
# between runs so change detection works across builds
- name: Selective testing
  run: |
    pip install pytest-testmon
    pytest --testmon

Caching strategy

Implement effective caching to minimise build times:

# CircleCI caching example
- restore_cache:
    keys:
      - v1-dependencies-{{ checksum "requirements.txt" }}
      - v1-dependencies-
- save_cache:
    paths:
      - ./venv
      - ~/.cache/pip
    key: v1-dependencies-{{ checksum "requirements.txt" }}

Branch-specific workflows

Configure different pipeline behaviours for different branches:

# GitHub Actions
on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main, develop]

Auto-merging with status checks

Implement automatic merging for pull requests that pass CI checks:

# GitHub Actions - Example using Mergify
name: Mergify
on:
  pull_request_target:
    types: [labeled]

jobs:
  automerge:
    runs-on: ubuntu-latest
    if: contains(github.event.pull_request.labels.*.name, 'automerge')
    steps:
      - name: automerge
        uses: "pascalgn/automerge-action@v0.14.3"
        env:
          GITHUB_TOKEN: "${{ secrets.GITHUB_TOKEN }}"

Best practices for Flask CI pipelines

1. Keep dependencies consistent

Ensure your CI environment matches your development environment:

- name: Install exact dependencies
  run: |
    pip install -r requirements.txt
    pip install -r requirements-dev.txt

Consider using pip-tools or Poetry for deterministic dependencies.

2. Test the production build

Test your application in a production-like configuration:

- name: Test production configuration
  run: |
    FLASK_ENV=production pytest tests/
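For a production-mode test run to exercise anything different, the application needs a way to select its configuration by environment name. The sketch below shows one common pattern; the class and function names are hypothetical illustrations, not Flask APIs:

```python
import os


class Config:
    """Shared defaults (hypothetical base config for illustration)."""
    DEBUG = False
    TESTING = False


class ProductionConfig(Config):
    # Production-like settings that a CI run with FLASK_ENV=production exercises
    SECRET_KEY = os.environ.get("SECRET_KEY", "change-me-in-production")


class TestingConfig(Config):
    TESTING = True


def select_config(env_name: str) -> type:
    """Map an environment name (e.g. the value of FLASK_ENV) to a config class."""
    return {
        "production": ProductionConfig,
        "testing": TestingConfig,
    }.get(env_name, Config)
```

An application factory could then call `app.config.from_object(select_config(os.environ.get("FLASK_ENV", "production")))`, so the same test suite runs against whichever configuration the CI step selects.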

3. Implement quality gates

Define strict criteria for pipeline success:

- name: Check test coverage
  run: |
    pytest --cov=myapp --cov-fail-under=85

4. Optimise for speed

Structure your pipeline to run fast tests first:

# GitLab CI example
stages:
  - lint
  - unit-test
  - integration-test
  - deploy

lint:
  stage: lint
  script:
    - flake8 myapp

unit-tests:
  stage: unit-test
  script:
    - pytest tests/unit/

integration-tests:
  stage: integration-test
  script:
    - pytest tests/integration/

5. Monitor pipeline performance

Track and optimise build times:

# GitHub Actions
- name: Record start time
  id: timer
  run: echo "start=$(date +%s)" >> "$GITHUB_OUTPUT"

# ... test and build steps ...

- name: Report elapsed time
  if: always()
  run: echo "Elapsed: $(( $(date +%s) - ${{ steps.timer.outputs.start }} ))s"

Handling common CI challenges

Flaky tests

Flaky tests can undermine CI reliability. Strategies to handle them:

# Auto-retry flaky tests (requires the pytest-rerunfailures plugin)
- name: Run tests with retry
  run: |
    pip install pytest-rerunfailures
    pytest --reruns 3 --reruns-delay 1

Database testing

For tests requiring a database:

# GitLab CI with service containers
services:
  - postgres:12
variables:
  POSTGRES_DB: test_db
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: postgres
  DATABASE_URL: "postgresql://postgres:postgres@postgres:5432/test_db"
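On the application side, the test suite can pick up the DATABASE_URL that the service container provides, with a local fallback so the same tests also run outside CI. A minimal sketch (the helper name and fallback choice are assumptions for illustration):

```python
import os

# Fallback used when DATABASE_URL is not set, e.g. on a developer machine
DEFAULT_TEST_DATABASE_URL = "sqlite:///:memory:"


def get_test_database_url() -> str:
    """Return the database URL supplied by CI, or a local SQLite fallback."""
    return os.environ.get("DATABASE_URL", DEFAULT_TEST_DATABASE_URL)
```

A pytest fixture could pass this URL to your application factory, so the suite runs against Postgres in CI and SQLite locally without any code changes.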

Environmental secrets

Securely handle credentials and secrets:

# GitHub Actions
- name: Run tests with environment variables
  env:
    SECRET_KEY: ${{ secrets.SECRET_KEY }}
    API_TOKEN: ${{ secrets.API_TOKEN }}
  run: |
    pytest
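A cheap safeguard is to fail fast when a required secret was not injected, rather than letting tests fail later with confusing errors. A minimal sketch (the variable names match the workflow above; the helper itself is a hypothetical addition):

```python
import os

# Secrets the test run depends on; extend to match your workflow's env block
REQUIRED_ENV_VARS = ("SECRET_KEY", "API_TOKEN")


def check_required_env(environ=os.environ) -> None:
    """Raise immediately if any required secret is missing from the environment."""
    missing = [name for name in REQUIRED_ENV_VARS if not environ.get(name)]
    if missing:
        raise RuntimeError(
            f"Missing required environment variables: {', '.join(missing)}"
        )
```

Calling this from a conftest.py, or at application start-up, turns a vague downstream failure into a clear one-line error in the CI log.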

Conclusion

A well-designed CI pipeline is essential for maintaining Flask application quality and accelerating development. By automating testing, code quality checks, and deployment processes, you can focus on building features while ensuring consistent quality.

Remember these key points:

  • Choose a CI platform that aligns with your project needs and team workflow
  • Structure your pipeline to provide fast feedback on critical issues
  • Integrate the pytest ecosystem tools for comprehensive testing
  • Implement deployment automation for a complete CI/CD experience
  • Follow best practices to keep your pipeline efficient and reliable

With these strategies in place, your Flask applications will benefit from increased stability, faster development cycles, and more confident releases.

Further reading