THE MODN CHRONICLES


Interview Questions on Docker — Containers, Images, Networking, and What DevOps Interviews Actually Test

Docker is now a baseline skill for backend developers, DevOps engineers, and cloud roles in India. Every company from TCS to Razorpay uses containers. Here are the Docker interview questions that actually get asked — from basics to production scenarios.


Docker has become a must-know skill. If you deploy code, you need to understand containers.

Docker in Indian IT Interviews

Docker has moved from a “nice to have” to a “must know” skill in Indian IT. Service companies like TCS, Infosys, and Wipro now include Docker in their DevOps and cloud practice interviews. Product companies like Flipkart, Swiggy, and Razorpay expect every backend developer to understand containerization. Cloud roles at AWS, Azure, and GCP partners test Docker as a prerequisite.

Docker interviews test three things: conceptual understanding (containers vs VMs, images vs containers), practical skills (Dockerfile writing, docker-compose), and production knowledge (networking, volumes, orchestration). Freshers get basic concepts. Experienced candidates get debugging, optimization, and architecture questions.

This guide covers the actual Docker questions asked in Indian interviews — from fundamentals to production-level scenarios.

The first Docker interview question is always the same: “What is the difference between a container and a virtual machine?” Get this wrong and nothing else matters.

Core Concepts

Q1: What is Docker? How is a container different from a VM?

Docker = platform for building, shipping, and running applications in containers.

Container vs Virtual Machine:

Virtual Machine:              Container:
┌─────────────────┐          ┌─────────────────┐
│   Application   │          │   Application   │
│   Libraries     │          │   Libraries     │
│   Guest OS      │          │  (no Guest OS)  │
│   Hypervisor    │          │  Docker Engine  │
│   Host OS       │          │   Host OS       │
│   Hardware      │          │   Hardware      │
└─────────────────┘          └─────────────────┘

Key differences:
- VM virtualizes HARDWARE → runs full OS (GBs, minutes to boot)
- Container virtualizes OS → shares host kernel (MBs, seconds)
- VM is fully isolated (own kernel)
- Container shares host kernel (lighter but less isolated)

When to use VMs: different OS needed, strong isolation
When to use containers: microservices, CI/CD, scaling
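The startup-speed difference is easy to demonstrate on any machine with Docker installed; a quick sketch (assumes the alpine image can be pulled):

```shell
# A container is just an isolated process on the host kernel,
# so startup is near-instant -- there is no guest OS to boot.
time docker run --rm alpine echo "container up"

# Image footprint: alpine is a few MB; a VM disk image is typically GBs.
docker images alpine --format "{{.Repository}}:{{.Tag}} {{.Size}}"
```

In an interview, quoting concrete numbers from a demo like this (sub-second start, single-digit MB image) lands better than reciting the theory.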

Q2: What is the difference between a Docker image and a container?

Image = blueprint (read-only template)
Container = running instance of an image

Analogy:
Image = Class (definition)
Container = Object (instance)

# One image can create multiple containers:
docker run -d --name app1 myapp:v1
docker run -d --name app2 myapp:v1
docker run -d --name app3 myapp:v1
# 3 containers from the same image

# Image layers (read-only):
FROM node:18          # Layer 1: base OS + Node
COPY package.json .   # Layer 2: package file
RUN npm install       # Layer 3: dependencies
COPY . .              # Layer 4: application code

# Container adds a writable layer on top
# Changes in container do NOT affect the image
# This is why containers are disposable
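The disposability point can be verified in a few commands; a minimal sketch (assumes the alpine image is available, container names are illustrative):

```shell
# Write a file into one container's writable layer, then delete the container.
docker run --name scratch1 alpine sh -c 'echo hello > /note.txt'
docker rm scratch1

# A new container from the same image starts from the clean image layers --
# the file written above is gone, because the image itself was never modified.
docker run --rm alpine ls /note.txt || echo "note.txt not in image"
```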

Q3: What is a Dockerfile? Explain common instructions.

# Dockerfile = recipe for building a Docker image

FROM node:18-alpine       # Base image (always first)
WORKDIR /app              # Set working directory
COPY package*.json ./     # Copy dependency files
RUN npm ci --production   # Install dependencies
COPY . .                  # Copy application code
EXPOSE 3000               # Document the port (metadata)
ENV NODE_ENV=production   # Set environment variable
CMD ["node", "server.js"] # Default command to run

# Key instructions:
# FROM    → base image (required, must be first)
# RUN     → execute command during BUILD (creates layer)
# CMD     → default command when container STARTS
# COPY    → copy files from host to image
# ADD     → like COPY but can extract tar and fetch URLs
# EXPOSE  → document port (does NOT publish it)
# ENV     → set environment variable
# WORKDIR → set working directory
# ENTRYPOINT → fixed main command; overridden only via --entrypoint flag

# RUN vs CMD:
# RUN executes during image BUILD (npm install)
# CMD executes when container STARTS (node server.js)
# Multiple RUN = multiple layers
# Only LAST CMD takes effect

Essential Docker Commands

Q4: What are the most important Docker commands?

# Image commands:
docker build -t myapp:v1 .       # Build image from Dockerfile
docker images                     # List all images
docker pull nginx:latest          # Download image from registry
docker push myrepo/myapp:v1      # Push to registry
docker rmi myapp:v1              # Remove image

# Container commands:
docker run -d -p 3000:3000 myapp # Run container (detached)
docker ps                         # List running containers
docker ps -a                      # List ALL containers
docker stop <container_id>        # Stop container
docker start <container_id>       # Start stopped container
docker rm <container_id>          # Remove container
docker logs <container_id>        # View container logs
docker exec -it <id> /bin/sh     # Shell into running container

# Cleanup:
docker system prune               # Remove unused data
docker image prune                # Remove dangling images

# Key flags:
# -d    → detached (background)
# -p    → port mapping (host:container)
# -v    → volume mount
# -e    → environment variable
# --name → container name
# -it   → interactive terminal

Q5: What is the difference between CMD and ENTRYPOINT?

# CMD — default command, CAN be overridden
FROM ubuntu
CMD ["echo", "Hello World"]

docker run myimage              # "Hello World"
docker run myimage echo "Hi"    # "Hi" (CMD overridden)

# ENTRYPOINT — main command, NOT easily overridden
FROM ubuntu
ENTRYPOINT ["echo"]
CMD ["Hello World"]             # default argument

docker run myimage              # "Hello World"
docker run myimage "Hi"         # "Hi" (only arg changes)

# Best practice: use together
# ENTRYPOINT = the executable
# CMD = default arguments
ENTRYPOINT ["python", "app.py"]
CMD ["--port", "8080"]

# Override: docker run myimage --port 9090
# ENTRYPOINT stays, CMD is replaced

Networking and Volumes

Q6: Explain Docker networking types.

# 1. bridge (default)
# Containers on same bridge can communicate
# Isolated from host network
docker network create mynet
docker run --network mynet --name api myapi
docker run --network mynet --name db postgres
# api can reach db by container name: db:5432

# 2. host
# Container shares host's network directly
# No port mapping needed, no isolation
docker run --network host myapp
# App on port 3000 is directly on host:3000

# 3. none
# No networking at all (fully isolated)
docker run --network none myapp

# 4. overlay
# Multi-host networking (Docker Swarm / Kubernetes)
# Containers on different machines can communicate

# Key interview question:
# "How do two containers communicate?"
# Answer: Put them on the same bridge network
# and use container names as hostnames
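This answer can be demonstrated live; a sketch using throwaway alpine containers (the names demo-net and web are illustrative):

```shell
# Note: automatic name resolution works on USER-DEFINED bridges,
# not on the default "bridge" network -- a common interview follow-up.
docker network create demo-net
docker run -d --name web --network demo-net alpine sleep 60

# A second container on the same network reaches "web" by its name:
docker run --rm --network demo-net alpine ping -c 1 web

# Cleanup
docker rm -f web && docker network rm demo-net
```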

Q7: What are Docker volumes? Why are they needed?

# Problem: container data is lost when container is removed
# Solution: volumes persist data outside the container

# 3 types of mounts:

# 1. Named volume (recommended for production)
docker volume create mydata
docker run -v mydata:/app/data myapp
# Data persists even if container is deleted

# 2. Bind mount (development)
docker run -v /host/path:/container/path myapp
# Maps host directory into container
# Great for development (live code reload)

# 3. tmpfs mount (temporary, in-memory)
docker run --tmpfs /app/temp myapp
# Data stored in RAM, lost when container stops

# When to use volumes:
# Database data (PostgreSQL, MongoDB)
# Upload files
# Logs
# Shared data between containers
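Persistence across container lifetimes is simple to demonstrate; a sketch using a throwaway named volume (demo-data is an illustrative name):

```shell
# Write through one container, read from a completely different one.
docker volume create demo-data
docker run --rm -v demo-data:/data alpine sh -c 'echo persisted > /data/msg.txt'

# The first container is gone (--rm), but the volume keeps the file:
docker run --rm -v demo-data:/data alpine cat /data/msg.txt   # prints "persisted"

docker volume rm demo-data
```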

Docker networking and volumes are where interviews separate beginners from production-ready engineers.

Docker Compose and Optimization

Q8: What is Docker Compose? Write a sample file.

# Docker Compose = tool for defining multi-container apps
# Uses a YAML file (docker-compose.yml)

version: '3.8'
services:
  api:
    build: ./api
    ports:
      - "3000:3000"
    environment:
      - DB_HOST=db
      - DB_PORT=5432
    depends_on:
      - db
    restart: unless-stopped

  db:
    image: postgres:15
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=myapp

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

volumes:
  pgdata:

# Commands:
# docker compose up -d      → start all services
# docker compose down        → stop and remove
# docker compose logs api    → view logs for api service
# docker compose build       → rebuild images

Q9: How do you optimize a Docker image for production?

# 1. Use multi-stage builds
# Build stage (large, has build tools)
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage (small, only runtime)
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["node", "dist/server.js"]
# Result: 100MB instead of 1GB

# 2. Use alpine base images
FROM node:18-alpine    # ~50MB vs node:18 ~350MB

# 3. Order layers for caching
COPY package*.json ./  # changes rarely → cached
RUN npm ci             # cached if package.json unchanged
COPY . .               # changes often → last

# 4. Use .dockerignore
node_modules
.git
*.md
.env

# 5. Minimize layers (combine RUN commands)
RUN apt-get update && apt-get install -y curl && \
    rm -rf /var/lib/apt/lists/*

# 6. Don't run as root
RUN adduser --disabled-password appuser
USER appuser
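A common follow-up to the non-root question is runtime hardening at `docker run` time. A sketch of the flags worth knowing (myapp is a placeholder image name):

```shell
# Limit resources, drop root, and make the root filesystem read-only.
docker run -d \
  --memory 256m \
  --cpus 0.5 \
  --user 1000:1000 \
  --read-only \
  --name hardened-app \
  myapp

# Confirm the main process is not running as uid 0 (root):
docker exec hardened-app id -u
```

Resource limits (`--memory`, `--cpus`) also come up in Cloud/SRE rounds, since unbounded containers can starve neighbours on a shared host.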

Q10: What is the difference between Docker and Kubernetes?

Docker is a container runtime — it builds and runs individual containers. Kubernetes is a container orchestrator — it manages hundreds/thousands of containers across multiple machines. Docker answers “how do I run this app in a container?” Kubernetes answers “how do I run 50 copies of this container across 10 servers with auto-scaling, load balancing, and self-healing?”

Think of Docker as a single ship and Kubernetes as the fleet management system. You need Docker (or a container runtime) to create containers. You need Kubernetes to manage them at scale.

How to Prepare

Docker Interview — Priority by Role

Backend Developer

  • Dockerfile writing
  • Image vs container
  • Docker Compose
  • Port mapping
  • Environment variables

DevOps Engineer

  • Multi-stage builds
  • Networking (bridge, overlay)
  • Volume management
  • Image optimization
  • Docker + CI/CD pipelines

Cloud / SRE

  • Docker vs Kubernetes
  • Container security
  • Registry management
  • Resource limits
  • Logging & monitoring

Practice Docker Interview Questions with AI

Get asked real Docker interview questions — containers, Dockerfiles, networking, and production scenarios. Practice explaining architecture and writing configurations.

Free · AI-powered feedback · Container-focused questions