Dockerizing a Laravel and Inertia App

December 21, 2025

Introduction

This post assumes you are familiar with Laravel and Inertia, and have a basic understanding of Docker. To follow along, you can create a new blank Laravel project with React. There are some complexities around file permissions, health checks, and volumes that we will address in this post.

Prerequisites

  • Docker installed on your machine. You can download it from Docker’s official site.
  • Laravel installed on your machine. You can follow the official installation guide here.
  • Basic knowledge of Laravel and Inertia.js.
  • A Laravel Inertia application. If you already have Laravel installed, you can create a new project using the following command (skip the npm install and npm run build steps when prompted):
laravel new demo-project --using=laravel/react-starter-kit:^1.0.1

The version is pinned here merely for stability; you can use the latest version as well. After creating the project, we will not install anything else locally. Let’s dockerize this application step by step.

Create a Dockerfile

First, we will quickly get the application up and running, and then we will improve it with multi-stage builds. We will be using serversideup/php images as our base. These images are optimized for running PHP applications, with good defaults, easy configuration, and include some automation scripts for Laravel. In the root of the Laravel project, create a file named Dockerfile with the following content:

Dockerfile
# Use a base image with PHP, Nginx, and Alpine Linux.
# Use fixed version tags for stability.
FROM serversideup/php:8.4.11-fpm-nginx-alpine3.21-v3.6.0 AS development

# Switch to root to install dependencies
USER root

# Install any needed PHP extensions
# RUN install-php-extensions intl

# Defaults
ARG USER_ID=1000
ARG GROUP_ID=1000

RUN docker-php-serversideup-set-id www-data $USER_ID:$GROUP_ID \
    && docker-php-serversideup-set-file-permissions --owner $USER_ID:$GROUP_ID --service nginx


# Install Node (for building assets)
# This is quick and dirty, we’ll fix it in the multi-stage version
RUN apk add --no-cache nodejs npm
USER www-data

# Copy composer files first (for caching)
COPY --chown=www-data:www-data composer.json composer.lock /var/www/html/
# Skip Composer scripts here; they call artisan, which hasn't been copied yet
RUN composer install --optimize-autoloader --no-interaction --no-scripts

# Copy package files for frontend
COPY --chown=www-data:www-data package.json package-lock.json /var/www/html/
RUN npm ci

# Copy rest of the app
COPY --chown=www-data:www-data . /var/www/html/

# Build frontend assets
USER root
RUN npm run build
USER www-data

# Run the application
EXPOSE 8080
CMD ["php", "artisan", "serve", "--host=0.0.0.0", "--port=8080"]

Let’s break down what each part does:

  1. FROM serversideup/php:8.4.11-fpm-nginx-alpine3.21-v3.6.0 AS development: This line specifies the base image we are using, which includes PHP 8.4 with FPM and Nginx on Alpine Linux 3.21; v3.6.0 is the release tag of the serversideup image itself. We are using fixed versions as much as possible for stability. Using the Nginx variant of the image allows us to serve the static files, while PHP-FPM handles the PHP processing.

  2. USER root: Switches to the root user to install necessary dependencies.

  3. RUN docker-php-serversideup-set-id www-data $USER_ID:$GROUP_ID: This command sets the user and group IDs of the www-data user inside the container to match those of your host system, which helps avoid permission issues when mounting volumes during development. The script is included in the serversideup/php image itself; more information can be found in the serversideup/php documentation.

  4. RUN docker-php-serversideup-set-file-permissions --owner $USER_ID:$GROUP_ID --service nginx: This command sets the correct file permissions for the Nginx service to ensure it can read and serve the application files.

  5. RUN apk add --no-cache nodejs npm: Installs Node.js and npm, which are required for building the frontend assets.

  6. COPY --chown=www-data:www-data composer.json composer.lock /var/www/html/: Copies the Composer files to the container and sets the ownership to www-data. This benefits from Docker’s layer caching: if composer.json or composer.lock haven’t changed, Docker can reuse the cached layer instead of reinstalling dependencies.

  7. RUN composer install --optimize-autoloader --no-interaction --no-scripts: Installs PHP dependencies using Composer. The --no-scripts flag skips Composer scripts such as artisan package:discover, which would fail at this point because the application code (including artisan) hasn’t been copied yet.

  8. COPY --chown=www-data:www-data package.json package-lock.json /var/www/html/: Copies the package files for the frontend and sets the ownership. Again, this is done before copying the rest of the application to benefit from caching.

  9. RUN npm ci: Installs Node.js dependencies. Using npm ci ensures a clean and consistent install based on the package-lock.json, and is preferred in CI/CD environments over npm install.

  10. COPY --chown=www-data:www-data . /var/www/html/: Copies the rest of the application files to the container.

  11. RUN npm run build: Builds the frontend assets.

  12. EXPOSE 8080: Exposes port 8080 for the application.

  13. CMD ["php", "artisan", "serve", "--host=0.0.0.0", "--port=8080"]: Starts the Laravel development server.

You can build and run the Docker container using the following commands:

docker build -t demo-project .
docker run -p 8080:8080 demo-project:latest
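
If your host user and group IDs are not 1000, you can pass them as build arguments so that www-data inside the container matches your host user (the standard id utility prints your own IDs):

docker build --build-arg USER_ID=$(id -u) --build-arg GROUP_ID=$(id -g) -t demo-project .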

The application should now be accessible at http://localhost:8080.

A few things to note here:

  1. By default, the serversideup/php images use port 8080 for HTTP and 8443 for HTTPS; we exposed 8080 in the Dockerfile.
  2. If you have not changed any environment variables, the application will use SQLite as the database by default.
  3. Right now, the image is quite large (around 600 MB in my case). You can use dive to analyze the image layers (see the command below). If you inspect the image at this point, you will notice lots of unnecessary files baked in, such as the node_modules and vendor directories, tests, Git files, etc.
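
For example, after installing dive, you can inspect the layers with:

dive demo-project:latest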

Add a .dockerignore File

To prevent unnecessary files from being copied into the Docker image, create a .dockerignore file in the root of your project with the following content:

.dockerignore
vendor
tests
.git
.github
storage/logs
storage/framework/cache/*
storage/framework/sessions/*
storage/framework/views/*
node_modules
.env
.env.*
.gitignore
.gitattributes
.DS_Store
.idea
.vscode
.editorconfig
Dockerfile
compose.*.yml
npm-debug.log
yarn-error.log
_ide_helper.php
_ide_helper_models.php
stubs
eslint.config.js
phpunit.xml
.phpactor.yml
README.md

We exclude the files and folders that are not needed in the Docker image:

  • vendor and node_modules directories as they will be installed inside the container.
  • .git directory to avoid copying version control data.
  • storage/logs and other cache directories to avoid copying log files and cached data.
  • Environment files (.env, .env.*) to avoid exposing sensitive information.
  • IDE and editor specific files and folders, log files, configuration files for linters, formatters, testing, etc.
  • Docker-related files to avoid copying Dockerfiles and compose files. These are only needed on the host machine.

This reduces the Docker image size, speeds up builds, and improves security by not including sensitive files. You need to rebuild the image after adding this file.
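
After rebuilding, you can verify the size drop with the Docker CLI:

docker build -t demo-project .
docker image ls demo-project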

Local Development with Docker Compose

We will use Docker Compose to set up our development environment, using Postgres as the database and Mailpit as the local mail server. Create a file named compose.dev.yml in the root of the project with the following content:

compose.dev.yml
services:
  db:
    image: postgres:17.6-alpine3.21
    restart: "no"
    ports:
      - "5432:5432"
    env_file:
      - .env
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $POSTGRES_USER"]
      interval: 5s
      timeout: 5s
      retries: 5
    volumes:
      - demo-project-data:/var/lib/postgresql/data

  mailpit:
    image: axllent/mailpit:v1.27.7
    restart: always
    ports:
      - "${FORWARD_MAILPIT_PORT:-1025}:1025"
      - "${FORWARD_MAILPIT_DASHBOARD_PORT:-8025}:8025"

  app:
    build:
      context: .
      target: development
      args:
        USER_ID: ${USER_ID:-1000}
        GROUP_ID: ${GROUP_ID:-1000}
    restart: "no"
    depends_on:
      db:
        condition: service_healthy
    env_file:
      - .env
    healthcheck:
      # /up is Laravel's built-in health check route
      test: ["CMD-SHELL", "curl -f http://localhost:8080/up || exit 1"]
      interval: 5s
      timeout: 5s
      start_period: 5s
      retries: 5
    ports:
      - "80:8080"
      - "443:8443"
    volumes:
      - .:/var/www/html

  queue:
    build:
      context: .
      target: development
      args:
        USER_ID: ${USER_ID:-1000}
        GROUP_ID: ${GROUP_ID:-1000}
    restart: "no"
    depends_on:
      db:
        condition: service_healthy
    env_file:
      - .env
    environment:
      - AUTORUN_ENABLED=false # Prevents migrations from running again; they already ran in the app service
    command: ["php", "/var/www/html/artisan", "queue:work", "--tries=3"]
    stop_signal: SIGTERM
    healthcheck:
      test: ["CMD", "healthcheck-queue"]
      start_period: 5s
    volumes:
      - .:/var/www/html

  node:
    image: node:22.19.0-alpine3.21
    working_dir: /app
    env_file:
      - .env
    volumes:
      - .:/app
    command: sh -c "npm ci --legacy-peer-deps && npm run dev"
    ports:
      - "5173:5173"

volumes:
  demo-project-data:

Let’s break down the services defined in this file:

  1. db service: We are using the official Alpine-based Postgres image with a fixed version tag. We expose port 5432 for database connections and set up a health check to ensure the database is ready before other services depend on it. The database data is stored in a Docker volume named demo-project-data to persist data across container restarts.

  2. mailpit service: This service uses the Mailpit image to provide a local SMTP server for testing email functionality. It exposes ports 1025 for SMTP and 8025 for the web dashboard.

  3. app service: This service builds the Laravel application using the Dockerfile we created earlier. It depends on the db service and waits for it to be healthy before starting. It exposes ports 80 and 443 for HTTP and HTTPS access respectively. We are using a bind mount to mount the current directory into the container for hot reloading during development. A health check is also defined to ensure the application is running.

  4. queue service: This service is responsible for running Laravel’s queue worker. It builds from the same Dockerfile and also depends on the db service. The command runs the queue worker with a maximum of 3 tries for each job. We disable automatic migrations here since they will already have run in the app service. To gracefully stop the queue worker, we set the stop signal to SIGTERM. For our health check, we use the healthcheck-queue command provided by the serversideup/php image; more information can be found in the serversideup/php documentation.

  5. node service: This service uses the official Node.js Alpine image to handle frontend asset compilation. It mounts the current directory into the container and runs npm ci followed by npm run dev to start the development server. Port 5173 is exposed for Vite’s development server.
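
Once the stack is up, you can run one-off artisan or Composer commands inside the app container with docker compose exec, for example:

docker compose -f compose.dev.yml exec app php artisan migrate
docker compose -f compose.dev.yml exec app php artisan tinker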

We are passing the .env file to all services to ensure they have the necessary environment variables. We need to add the following variables to our .env file to configure the database and migration settings:

.env
# These are for Laravel
APP_URL=http://localhost
APP_DEBUG=true
DB_CONNECTION=pgsql
DB_HOST=db
DB_PORT=5432
DB_DATABASE=laravel
DB_USERNAME=postgres
DB_PASSWORD=postgres

# Migration settings (these are for the serversideup/php image automation scripts)
AUTORUN_ENABLED=true
AUTORUN_LARAVEL_MIGRATION_ISOLATION=false

# These are for the db container, which expects database-related variables with the POSTGRES_ prefix
POSTGRES_DB=${DB_DATABASE}
POSTGRES_USER=${DB_USERNAME}
POSTGRES_PASSWORD=${DB_PASSWORD}
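
Although not strictly required to boot the app, if you want Laravel to send mail through the Mailpit container, the mail settings would look roughly like this (the host matches the compose service name; adjust the from address to your needs):

MAIL_MAILER=smtp
MAIL_HOST=mailpit
MAIL_PORT=1025
MAIL_FROM_ADDRESS=hello@example.com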

Since we now have a separate Node service for handling frontend assets, we can remove the Node installation and asset-building steps from our Dockerfile. We can also remove the EXPOSE and CMD instructions: ports are published in the compose file, and without a CMD the base image’s default command (Nginx + PHP-FPM) takes over. Update the Dockerfile to the following:

Dockerfile
FROM serversideup/php:8.4.11-fpm-nginx-alpine3.21-v3.6.0 AS development

# Switch to root to install dependencies
USER root

ARG USER_ID=1000
ARG GROUP_ID=1000

RUN docker-php-serversideup-set-id www-data $USER_ID:$GROUP_ID \
    && docker-php-serversideup-set-file-permissions --owner $USER_ID:$GROUP_ID --service nginx

USER www-data

# Copy composer files first (for caching)
COPY --chown=www-data:www-data composer.json composer.lock /var/www/html/
# Skip Composer scripts here; they call artisan, which hasn't been copied yet
RUN composer install --optimize-autoloader --no-interaction --no-scripts

# Copy rest of the app
COPY --chown=www-data:www-data . /var/www/html/

Lastly, we need to update the dev command in our package.json and add the --host flag to ensure Vite listens on all interfaces, allowing access from outside the container:

package.json
"scripts": {
    "dev": "vite --host",
}

This should reduce our Docker image size significantly (around 200 MB in my case!).

We can now start our development environment using Docker Compose with the following command:

docker compose -f compose.dev.yml up --build

This command builds the images and starts all the services defined in the compose.dev.yml file. The Laravel application should now be accessible at http://localhost, and the Mailpit dashboard at http://localhost:8025. Note that you don’t need --build every time, only when the Dockerfile, the compose file, or the dependencies baked into the image change. Hot reloading should now work seamlessly.
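
A few day-to-day variations of the same command:

# Start in the background
docker compose -f compose.dev.yml up -d

# Follow the application logs
docker compose -f compose.dev.yml logs -f app

# Stop and remove the containers
docker compose -f compose.dev.yml down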

Building a Production-Ready Image with Multi-Stage Builds

There are some differences when building for production. The permission fixes and the UID/GID mapping we used during development are not needed in production. We will also pre-build the frontend assets during the image build, so we don’t need a separate node service in production.

Let’s update our Dockerfile to add separate stages for development and production:

Dockerfile
# ============================================================
# Base image - common for both dev and prod
# ============================================================
FROM serversideup/php:8.4.11-fpm-nginx-alpine3.21-v3.6.0 AS base

USER root
# Install any needed PHP extensions
RUN install-php-extensions intl
USER www-data

# ============================================================
# Node builder (for Inertia/Vite assets) - only for production
# ============================================================
FROM node:22.19.0-alpine3.21 AS node-builder

WORKDIR /app

# Copy package files first for caching
COPY package.json package-lock.json* pnpm-lock.yaml* yarn.lock* ./

# Install deps
RUN npm ci --legacy-peer-deps

# Copy the rest of the frontend
COPY resources/ resources/
# Covers vite.config.js or vite.config.ts; tsconfig is optional
COPY vite.config.* tsconfig.json* ./

# Build frontend
RUN npm run build

# ============================================================
# Development stage (fixes host UID/GID permissions)
# ============================================================

FROM base AS development

USER root

ARG USER_ID=1000
ARG GROUP_ID=1000

RUN docker-php-serversideup-set-id www-data $USER_ID:$GROUP_ID \
    && docker-php-serversideup-set-file-permissions --owner $USER_ID:$GROUP_ID --service nginx

USER www-data
COPY --chown=www-data:www-data composer.json composer.lock /var/www/html/

# Skip Composer scripts here; they call artisan, which hasn't been copied yet
RUN composer install --optimize-autoloader --no-interaction --no-scripts

COPY --chown=www-data:www-data . /var/www/html

# ============================================================
# Production stage
# ============================================================
FROM base AS production

ENV PHP_OPCACHE_ENABLE=1

USER www-data

COPY --chown=www-data:www-data composer.json composer.lock /var/www/html/

# Exclude dev dependencies for production with --no-dev
RUN composer install --no-dev --no-scripts --no-autoloader --no-interaction

COPY --chown=www-data:www-data . /var/www/html

# Copy only the built assets from node stage
COPY --from=node-builder --chown=www-data:www-data /app/public/build /var/www/html/public/build

# Now run scripts and autoloader generation
RUN composer dump-autoload --optimize && \
    composer run-script post-autoload-dump

In this updated Dockerfile, we have made the following changes:

  1. Added a base stage that contains common setup for both development and production stages.
  2. Added a node-builder stage that installs Node.js dependencies and builds the frontend assets. This stage is only used in the production build. Our development setup uses a separate Node service as defined in the compose.dev.yml file discussed earlier.
  3. The development stage remains the same, still fixing permissions for local development and handling the UID/GID mapping.
  4. The production stage installs only the necessary PHP dependencies without dev dependencies. It then copies the pre-built frontend assets from the node-builder stage without including any build tools or source files from that stage.

You can build the development image the same way as before. To build the production image locally, use the following command:

docker build --target production -t demo-project:prod .

The production image will again be smaller than the dev image, since we have excluded the Composer dev dependencies and all of the Node tooling. Note that the production stage of the Dockerfile is used for building the production image both locally and in deployment pipelines.
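
If you want to see the difference yourself, build both targets and compare the sizes:

docker build --target development -t demo-project:dev .
docker build --target production -t demo-project:prod .
docker image ls demo-project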

Testing The Production Image Locally With Docker Compose

Now, let’s create a new compose file for production named compose.prod.yml for building and running the production image locally:

compose.prod.yml
services:
  db:
    image: postgres:17.6-alpine3.21
    restart: unless-stopped
    env_file:
      - .env
      - .env.production
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $POSTGRES_USER"]
      interval: 5s
      timeout: 5s
      retries: 5
    volumes:
      - demo-project-prod-db:/var/lib/postgresql/data

  app:
    build:
      context: .
      target: production
    restart: unless-stopped
    depends_on:
      db:
        condition: service_healthy
    env_file:
      - .env
      - .env.production
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:8080/up || exit 1"]
      interval: 5s
      timeout: 5s
      start_period: 5s
      retries: 5
    ports:
      - "80:8080"
      - "443:8443"
    volumes:
      - demo-project-prod-data:/var/www/html/storage

  queue:
    build:
      context: .
      target: production
    restart: unless-stopped
    depends_on:
      db:
        condition: service_healthy
    env_file:
      - .env
      - .env.production
    environment:
      - AUTORUN_ENABLED=false
      - PHP_FPM_POOL_NAME=app_queue
    command:
      [
        "php",
        "/var/www/html/artisan",
        "queue:work",
        "--tries=3",
        "--timeout=90",
      ]
    stop_signal: SIGTERM
    healthcheck:
      test: ["CMD", "healthcheck-queue"]
      start_period: 5s
    volumes:
      # Same volume as app
      - demo-project-prod-data:/var/www/html/storage

volumes:
  demo-project-prod-db:
  demo-project-prod-data:

The key differences compared to the development compose file are:

  1. We are using the production target from our Dockerfile to build the app and queue services.

  2. We have added an additional env file .env.production to separate production-specific environment variables. If a variable is defined in both files, the one in .env.production takes precedence, as it is listed later.

    For now, my .env.production only contains:

    .env.production
    APP_ENV=production
    APP_DEBUG=false
    NODE_ENV=production
  3. We are using named volumes for persisting database data and storage data instead of bind mounts. Also, the volume is mounted at /var/www/html/storage only, and not the entire /var/www/html directory, since that is the only directory that needs to be writable by the application (for file uploads, cache, sessions, etc.).

  4. The restart policy is set to unless-stopped to ensure the containers restart automatically unless explicitly stopped.

In .env.production, you can add other environment variables to replicate your production environment, such as mail server settings, logging configurations, etc.
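
One variable you will almost certainly need in production is APP_KEY. If you don’t have one yet, one way to generate it is from the image itself (any environment with PHP and the application code works just as well):

docker run --rm demo-project:prod php artisan key:generate --show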

You can run the compose file using the following command:

docker compose -f compose.prod.yml up --build

Your application should now be accessible at http://localhost.
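
You can also confirm that the health checks are passing; healthy containers show (healthy) in the STATUS column:

docker compose -f compose.prod.yml ps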

Again, note that for live reloads during development you need to use the compose.dev.yml file, while for testing a production build locally you should use compose.prod.yml.

Adapting The Production Compose File For Deployment

We can use the same Dockerfile for building the production image during deployment. However, we need to change a few things in the compose file:

  1. Generally, in production deployments, we do not build the image on the server itself. Instead, we build it in a CI/CD pipeline and push it to a container registry. The deployment server then pulls the pre-built image from the registry. If the server builds the images from the repository, a compose file similar to our compose.prod.yml can be used. But if we are pulling the pre-built image from a registry, we need to replace the build section with image in our compose.

  2. We will not use env_file in production deployments. Instead, we will set the environment variables directly in the deployment platform or server, and pass them to the containers. The .env and .env.production files should not be included in the image or the deployment compose file.

  3. You may not want to expose ports for the database and other services; remove the ones you don’t need. If you are using a reverse proxy in front of your application, you may not need to expose ports from the app container either.

This part largely depends on your deployment platform and strategy. Below is a sample deployment compose file compose.deploy.yml assuming the above conditions:

compose.deploy.yml
services:
  db:
    image: postgres:17.6-alpine3.21
    restart: unless-stopped
    volumes:
      - demo-project-db-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=${DB_DATABASE}
      - POSTGRES_USER=${DB_USERNAME}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U $$POSTGRES_USER']
      interval: 5s
      timeout: 5s
      retries: 5

  app:
    image: ${REGISTRY_URL}/demo-project:latest
    pull_policy: always
    restart: unless-stopped
    depends_on:
      db:
        condition: service_healthy
    environment:
      APP_NAME: '${APP_NAME}'
      APP_ENV: '${APP_ENV}'
      APP_KEY: '${APP_KEY}'
      APP_DEBUG: '${APP_DEBUG}'
      APP_URL: '${APP_URL}'
      ASSET_URL: '${ASSET_URL}'
    # Other environment variables as needed

    healthcheck:
      test: ['CMD-SHELL', 'curl -f http://localhost:8080/up || exit 1']
      interval: 5s
      timeout: 5s
      start_period: 5s
      retries: 5
    volumes:
      - demo-project-app-data:/var/www/html/storage

  queue:
    image: ${REGISTRY_URL}/demo-project:latest
    pull_policy: always
    restart: unless-stopped
    depends_on:
      db:
        condition: service_healthy
    command: ['php', '/var/www/html/artisan', 'queue:work', '--tries=3', '--timeout=90']
    environment:
      APP_NAME: '${APP_NAME}'
      APP_ENV: '${APP_ENV}'
      APP_KEY: '${APP_KEY}'
      APP_DEBUG: '${APP_DEBUG}'
      APP_URL: '${APP_URL}'
      ASSET_URL: '${ASSET_URL}'
      PHP_FPM_POOL_NAME: app_queue
    # Other environment variables as needed
    healthcheck:
      test: ['CMD', 'healthcheck-queue']
      start_period: 5s
    volumes:
      - demo-project-app-data:/var/www/html/storage

volumes:
  demo-project-db-data:
  demo-project-app-data:

You can set the REGISTRY_URL environment variable in your server environment to point to your container registry.
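
Bringing the stack up on the server then looks roughly like this (the registry URL is illustrative, and APP_KEY and the other variables are assumed to be set in the server environment):

export REGISTRY_URL=registry.example.com/acme
docker compose -f compose.deploy.yml pull
docker compose -f compose.deploy.yml up -d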

Note

  1. The image above uses the latest tag; however, it is recommended to use specific version tags or commit SHAs for better control and rollback capabilities.
  2. If you encounter any errors related to static assets in production, ensure that the ASSET_URL variable is set correctly (which should be the same as APP_URL unless you are using a CDN or separate domain for assets).

You can use GitHub Actions for building and pushing images to a registry. The following is a sample workflow you can use as a starting point:

.github/workflows/deploy.yml
name: Build and Deploy to Production

on:
  push:
    branches:
      - main

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Log in to Docker registry
        uses: docker/login-action@v3
        with:
          registry: ${{ secrets.REGISTRY_URL }}
          username: ${{ secrets.REGISTRY_USERNAME }}
          password: ${{ secrets.REGISTRY_PASSWORD }}

      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          target: production
          tags: |
            ${{ secrets.REGISTRY_URL }}/demo-project:latest
            ${{ secrets.REGISTRY_URL }}/demo-project:${{ github.sha }}
  release:
    runs-on: ubuntu-latest
    needs: build
    permissions:
      contents: write

    steps:
      - name: Create GitHub Release
        uses: softprops/action-gh-release@v1
        with:
          tag_name: release-${{ github.run_number }}
          name: Release ${{ github.run_number }}
          body: |
            Commit: ${{ github.sha }}
            Message: ${{ github.event.head_commit.message }}
            Image: demo-project:${{ github.sha }}
          draft: false
          prerelease: false
  deploy:
    runs-on: ubuntu-latest
    needs: release
    steps:
      - name: Trigger Deployment
        run: |
          curl -X POST '${{ secrets.DEPLOY_WEBHOOK_URL }}'

You may need to adjust the workflow based on your requirements and configure the secrets in your repository settings. After building and pushing the image, the workflow triggers a simple deployment webhook.

Further Improvements

You can further improve this setup by:

  1. Using a compose.base.yml file to define common services and configurations, and then extending it in compose.dev.yml and compose.prod.yml files. This will help reduce duplication and make it easier to manage changes across different environments.

  2. Sending the commit SHA in the webhook payload and using it to pull specific image versions during deployment for better traceability.

  3. Setting up pre-production deployments with GitHub Actions based on branches or tags.

  4. Setting up caching in GitHub Actions to speed up build times; registry caching works with docker/build-push-action, as sketched below.
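
For the registry cache in particular, the build step from the workflow above could be extended roughly like this (the buildcache tag name is arbitrary, and your registry must support cache manifests):

      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          target: production
          tags: ${{ secrets.REGISTRY_URL }}/demo-project:latest
          cache-from: type=registry,ref=${{ secrets.REGISTRY_URL }}/demo-project:buildcache
          cache-to: type=registry,ref=${{ secrets.REGISTRY_URL }}/demo-project:buildcache,mode=max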

If you have any questions or suggestions, feel free to reach out via email.
