


Porting an LXC to Docker

Porting a Linux Container (LXC) to Docker might seem like a straightforward task given that both are containerization technologies. However, there are several caveats and pitfalls that one should be aware of before attempting this process.

Introduction

LXC and Docker are both used to create and manage containers, but they operate on different underlying principles. LXC is more of a lightweight virtualization method that provides an environment similar to a full-fledged virtual machine. Docker, on the other hand, is a platform for developing, shipping, and running applications inside containers, optimized for microservices and cloud-native applications.

Caveats and Pitfalls

Here are the key issues to be aware of when porting an LXC container to Docker:

  • Different Container Philosophies:
    • LXC containers are closer to traditional virtual machines, providing a more complete operating system environment.
    • Docker containers are designed to run a single application or service, following a microservice architecture.
  • File System Differences:
    • LXC uses a full OS filesystem, while Docker uses layered filesystems.
    • You may need to manually adjust file paths and configurations when moving to Docker.
  • Process Management:
    • LXC containers manage multiple processes similarly to a traditional Linux environment.
    • Docker is designed to run a single process per container. This might require refactoring of the services running in the LXC container.
  • Networking:
    • LXC networking configurations can be complex and highly customized.
    • Docker abstracts much of the networking and provides simpler, yet different, networking options which might require reconfiguration.
  • Security Contexts:
    • LXC provides a more traditional Linux security model.
    • Docker has a different security model, with its own set of user namespaces, capabilities, and SELinux/AppArmor profiles, which might affect how applications behave.
  • Resource Allocation:
    • LXC exposes cgroup limits (CPU, memory, etc.) directly in the container's configuration file.
    • Docker wraps the same cgroup controls in `docker run` flags and Compose options, which could lead to different performance characteristics if limits are not carried over (see the mapping sketch after this list).
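
To make the resource-allocation point concrete, here is a rough, illustrative mapping from LXC limits to their Docker equivalents; the LXC keys shown assume a cgroup v2 host and current LXC config syntax, and the image name is the one built later in this guide:

# LXC container config (e.g. /var/lib/lxc/<name>/config):
#   lxc.cgroup2.memory.max = 512M
#   lxc.cgroup2.cpu.max    = 100000 100000   # roughly one full CPU
#
# Roughly equivalent limits when starting the Docker container:
docker run --memory 512m --cpus 1 my-python-app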

Containerizing the Flask Application

The process of containerizing the Flask application involved several key steps, including setting up the necessary dependencies, creating a Dockerfile, and configuring the `docker-compose.yml` file.

v1 - Using Clean Docker Images for Python Apps and MariaDB

When transitioning from LXC containers to Docker, it's often more efficient and cleaner to use base Docker images for your applications rather than attempting to convert existing LXC containers. This approach offers several advantages:

  • Universal Compatibility: Docker images are designed to be highly portable and compatible across various cloud platforms, making it easier to deploy and manage your applications.
  • Separation of Concerns: Running different services in separate containers (e.g., Python apps in one container and MariaDB in another) aligns with Docker’s microservices architecture, promoting better scalability and maintenance.
  • Cleaner Environment: Starting with a clean base image ensures that only the necessary dependencies and configurations are included, reducing potential conflicts and bloat.

General Outline for Container Creation

Here’s a general outline of the steps to create Docker containers for a Python application and a MariaDB database.

1. Create Dockerfile for Python App

First, create a `Dockerfile` for your Python application. Use a slim base image such as the Debian-based `python:3.9-slim` (used below) or an Alpine-based Python image.

  • Create a `Dockerfile`:
FROM python:3.9-slim

WORKDIR /app

COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt

COPY . .

CMD ["python", "app.py"]
  • Build the Docker image:
docker build -t my-python-app .
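
To smoke-test the image on its own before wiring up Compose, you can run it directly; the port mapping assumes the app listens on 5000, as in the Compose examples below:

docker run --rm -p 5000:5000 my-python-app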

2. Create Dockerfile for MariaDB (optional)

Next, create a `Dockerfile` for MariaDB or use the official MariaDB image.

  • Use the official MariaDB image in your `docker-compose.yml`:
version: '3.1'

services:
  db:
    image: mariadb
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: mydb
      MYSQL_USER: user
      MYSQL_PASSWORD: password

3. Configure Docker Compose

Use Docker Compose to manage both containers. Create a `docker-compose.yml` file.

  • Example `docker-compose.yml`:
version: '3.1'

services:
  app:
    build: .
    ports:
      - "5000:5000"
    depends_on:
      - db

  db:
    image: mariadb
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: mydb
      MYSQL_USER: user
      MYSQL_PASSWORD: password
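
With both services on the default Compose network, the app container can reach MariaDB simply by using the service name `db` as the database host. A quick way to confirm that the name resolves, assuming the app image is Debian-based and therefore ships `getent`:

docker-compose exec app getent hosts db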

4. Deploy Containers

Deploy your containers using Docker Compose.

  • Run the containers:
docker-compose up -d

5. Test and Verify

Ensure that both containers are running and communicating correctly.

  • Check container status:
docker ps
  • Verify the application is accessible:

Open your browser and navigate to `http://localhost:5000`.
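
The same check can be scripted from a terminal; any HTTP response (even a 404 if the app has no route at `/`) confirms the container is listening, and the container logs usually explain why if nothing responds:

curl -i http://localhost:5000/
docker-compose logs --tail=50 app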

Conclusion

Using clean Docker images for your Python applications and databases provides a more maintainable, scalable, and universally compatible solution compared to converting LXC containers. This approach aligns with modern best practices in containerization and microservices architecture.

v2 - Flask Application with Environment Variables

1. Setting Up Dependencies

Before containerizing the application, it was crucial to define the required dependencies in a `requirements.txt` file. The Flask application used the following dependencies:

Flask==2.1.2
mariadb==1.1.4

These versions ensured compatibility and stability, avoiding issues related to breaking changes in more recent versions.
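
If the application previously ran straight inside the LXC container (or in a virtualenv within it), running `pip freeze` in that environment is a quick way to capture the exact installed versions to seed `requirements.txt`:

pip freeze > requirements.txt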

2. Creating the Dockerfile

The next step was to create a `Dockerfile` to define the environment for the Flask application. The Dockerfile included the following instructions:

FROM python:3.9-slim

WORKDIR /app

COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt

COPY . .

CMD ["python", "main.py"]

Explanation:

  • `FROM python:3.9-slim`: This uses a lightweight Python 3.9 image as the base.
  • `WORKDIR /app`: Sets the working directory inside the container.
  • `COPY requirements.txt requirements.txt`: Copies the `requirements.txt` file into the container.
  • `RUN pip install -r requirements.txt`: Installs the required Python packages.
  • `COPY . .`: Copies all the application files into the container.
  • `CMD ["python", "main.py"]`: Runs the application when the container starts.

3. Configuring Docker Compose

To orchestrate the Flask application in a multi-container environment, a `docker-compose.yml` file was created with the following content:

version: '3'

services:
  app:
    build: .
    ports:
      - "5000:5000"
    environment:
      API_TOKEN: ${API_TOKEN}
    networks:
      - app-network

networks:
  app-network:

Explanation:

  • `services`: Defines the application service.
  • `build: .`: Tells Docker Compose to build the image using the Dockerfile in the current directory.
  • `ports: "5000:5000"`: Maps port 5000 on the host to port 5000 in the container.
  • `environment: API_TOKEN: ${API_TOKEN}`: Passes the `API_TOKEN` environment variable to the container (see the example after this list for how to supply it).
  • `networks`: Creates and attaches the container to a custom network, `app-network`.
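
Compose substitutes `${API_TOKEN}` from the shell environment or from a `.env` file placed next to `docker-compose.yml`, so the secret never needs to be hard-coded. For example, with a placeholder token:

# Option 1: export the variable before starting the stack
export API_TOKEN='changeme'
docker-compose up -d

# Option 2: keep it in a .env file next to docker-compose.yml
echo "API_TOKEN=changeme" > .env
docker-compose up -d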

4. Building and Running the Container

After setting up the Dockerfile and `docker-compose.yml`, the following commands were used to build and run the Flask application in a Docker container:

docker-compose build
docker-compose up -d

Explanation:

  • `docker-compose build`: Builds the Docker image based on the instructions in the Dockerfile.
  • `docker-compose up -d`: Starts the container in detached mode.
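
The two commands can also be combined; the `--build` flag forces the image to be rebuilt before the containers start:

docker-compose up -d --build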

5. Application Code with Environment Variables

from flask import Flask, request, jsonify, abort, g
from datetime import datetime
import mariadb
import re
import os
 
# Get passwords and tokens from environment variables
API_TOKEN = os.environ.get('API_TOKEN')
DB_PASSWORD = os.environ.get('DB_PASSWORD')
DB_HOST = os.environ.get('DB_HOST')
 
app = Flask(__name__)
 
def get_db():
    if 'db' not in g:
        g.db = mariadb.connect(
            host=DB_HOST,
            port=3306,
            user="facundo",
            password=DB_PASSWORD,
            database="VeeamReports"
        )
    return g.db
 
@app.before_request
def before_request():
    get_db()
 
@app.teardown_request
def teardown_request(exception=None):
    db = g.pop('db', None)
    if db is not None:
        db.close()
 
@app.route('/upload', methods=['POST'])
def upload_file():
    db = get_db()  # Get the database connection
    cursor = db.cursor()  # Now 'db' should be defined
 
    # Check for Authorization header
    if 'Authorization' not in request.headers:
        abort(401)  # Unauthorized
 
    # Check if the token is correct
    token = request.headers['Authorization']
    if token != API_TOKEN:
        abort(403)  # Forbidden
 
    data = request.get_json()  # Get JSON data from request
    if data is None:
        return jsonify({'error': 'No JSON received.'}), 400
 
    # ---------- The application itself ---------
    # Extract hostname from data
    hostname = data['hostname']
 
    # Create a new table for this hostname if it doesn't exist
    # (note: the table name comes from client input; consider validating
    # `hostname` before interpolating it into SQL)
    cursor.execute(f"CREATE TABLE IF NOT EXISTS `{hostname}` (id INT AUTO_INCREMENT PRIMARY KEY, creationtime VARCHAR(255), vmname VARCHAR(255), type VARCHAR(255), result VARCHAR(255))")
 
    # Delete all rows from the table
    cursor.execute(f"DELETE FROM `{hostname}`")
 
    for restorePoint in data['restorePoints']:
        creationtime = restorePoint['creationtime']
 
        # Check if creationtime contains Date
        if "Date" in creationtime:
            # Extract the integer from the creationtime string
            match = re.search(r'\((\d+)\)', creationtime)
            if match:
                creationtime = int(match.group(1))  # Convert to integer type
 
        # Map numeric type codes to readable names; keep the raw value otherwise
        type = str(restorePoint['type'])
        if str(restorePoint['type']) == '0':
            type = 'Full'
        elif str(restorePoint['type']) == '2':
            type = 'Incremental'
        elif str(restorePoint['type']) == '4':
            type = 'Snapshot'
 
        # Map numeric result codes to readable names for tiering/backup jobs;
        # fall back to the raw value for any other type or unknown code
        result = restorePoint['result']
        if str(restorePoint['type']) in ["TieringJob", "BackupJob"]:
            if str(restorePoint['result']) == '0':
                result = 'Success'
            elif str(restorePoint['result']) == '1':
                result = 'Warn'
            elif str(restorePoint['result']) == '2':
                result = 'Fail'
 
        query = f"INSERT INTO `{hostname}` (creationtime, vmname, type, result) VALUES (%s, %s, %s, %s)"
        values = (
            creationtime,
            restorePoint['vmname'],
            type,
            result
        )
        cursor.execute(query, values)
 
    db.commit()  # Commit the transaction
 
    return jsonify({'message': 'Data saved successfully.'}), 200
 
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000, debug=True)
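
Once the container is up, the endpoint can be exercised from the command line. The payload below mirrors the fields the code above expects; the hostname, VM name, timestamp, and token are placeholders:

curl -X POST http://localhost:5000/upload \
  -H "Authorization: changeme" \
  -H "Content-Type: application/json" \
  -d '{"hostname": "veeam01", "restorePoints": [{"creationtime": "/Date(1723300000000)/", "vmname": "vm-example", "type": "0", "result": "0"}]}'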

Conclusion

Porting from LXC to Docker is not a trivial task due to the differences in how these containers are designed and managed. It's essential to review the services, configurations, and workflows involved before starting the migration to avoid unexpected issues and ensure that your applications run smoothly in Docker.
