Bolt.diy: Replace WebContainer With Node.js & Docker
Let's dive into a detailed guide on replacing the closed-source WebContainer APIs with an open-source Node.js and Docker runtime. This article walks through migrating the Bolt.diy project from fully browser-based execution to a client-server model, where a local Node.js backend manages Docker containers to provide an isolated, fully open-source runtime environment. Because everything runs locally on the user's machine, this approach keeps user data private while meeting open-source standards, and it makes the project more flexible and transparent for developers and users alike. So, let's get started and explore how to make this transition smoothly!
Understanding the Core Idea
At the heart of this project, the core idea is to transition from a closed-source WebContainer to an open-source Node.js and Docker runtime. Here's a breakdown of the main components:
- Docker Containers: Utilizing Docker to spin up isolated Node.js containers for each project or session. These containers will handle command execution, code processing, and hosting previews. This isolation ensures that each project runs in its own environment, preventing conflicts and enhancing security.
- Lightweight Backend: Implementing a lightweight backend using Express.js for API endpoints and WebSockets for real-time communication. This backend will manage interactions between the frontend and the Docker containers, providing a seamless user experience. The Express.js framework allows for efficient handling of API requests, while WebSockets enable streaming output for terminal operations.
- WebContainer Removal: Eliminating all WebContainer dependencies and refactoring the relevant code to use API calls to the new backend. This step is crucial for achieving full open-source compliance. By removing the closed-source components, the project becomes more transparent and maintainable.
- Dockerode Library: Leveraging `dockerode`, a Node.js Docker client library, for programmatic control of Docker. This library simplifies the management of Docker containers, allowing for easy creation, starting, stopping, and removal of containers. Dockerode provides a robust set of APIs for interacting with the Docker daemon.
- User Prerequisites: Assuming the user has Docker installed and running. We will add checks and documentation to guide users through the installation process if needed. Ensuring users have the necessary tools in place is vital for a smooth transition.
This approach shifts the architecture to be more server-side oriented but retains accessibility via a browser (e.g., `http://localhost:3000`). The backend will handle runtime isolation through Docker, ensuring a consistent and secure environment for all projects. This architectural shift enhances the project's scalability and maintainability, making it a robust platform for future development.
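Before diving into the steps, here is a small standalone sketch of what programmatic Docker control with `dockerode` looks like, including the kind of "is Docker running?" check mentioned above. It assumes Docker is installed and its daemon is reachable on the default local socket; it is illustrative code, not part of the Bolt.diy codebase:

```typescript
import Docker from 'dockerode';

// Connects to the local Docker daemon via its default socket / named pipe.
const docker = new Docker();

async function checkDocker(): Promise<void> {
  try {
    // ping() resolves if the daemon is reachable -- a cheap "is Docker running?" check.
    await docker.ping();
    const containers = await docker.listContainers();
    console.log(`Docker is running; ${containers.length} container(s) currently active.`);
  } catch (err) {
    console.error('Docker does not appear to be running. Please install or start Docker first.', err);
  }
}

checkDocker();
```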
Step 1: Preparation and Forking
The initial steps involve preparing the development environment and forking the repository. This section outlines the actions needed to set up the project for modification. These preliminary steps are crucial for a smooth transition and ensure that the development environment is correctly configured.
- Fork the Repository: Start by forking the repository on GitHub. Navigate to https://github.com/stackblitz-labs/bolt.diy and click the "Fork" button. This creates a copy of the repository under your GitHub account, allowing you to make changes without affecting the original project. Forking is the first step in contributing to open-source projects and allows for independent development.
- Clone Your Fork: Clone the forked repository to your local machine using the command `git clone https://github.com/YOUR_USERNAME/bolt.diy.git`. Replace `YOUR_USERNAME` with your GitHub username. Cloning downloads the repository to your computer, enabling you to work on the project locally. Git is essential for version control and collaboration.
- Checkout the Stable Branch: Switch to the `stable` branch by running `git checkout stable`. This ensures you are working on the most stable version of the codebase. The `stable` branch is typically used for production-ready code, making it the ideal starting point for major modifications. Using the stable branch minimizes the risk of encountering bugs or unstable features.
- Install Dependencies: Ensure that Node.js (LTS), Git, and pnpm are installed on your system. Then, run `pnpm install` to install the project dependencies. Node.js is the runtime environment for the backend server, Git is for version control, and pnpm is a package manager. Installing dependencies is crucial for setting up the project environment, and a successful installation ensures that all necessary packages are available.
- Install Docker: If Docker is not already installed on your machine, download it from https://www.docker.com/products/docker-desktop. After installation, verify it by running `docker --version` in your terminal. Docker is the containerization platform that provides the isolated runtime environment; verifying the installation ensures it is correctly configured and accessible.
- Start the Original App: Run `pnpm run dev` to start the original application. Open `http://localhost:3000` (or the appropriate port) and observe how the WebContainer popout, terminal, and preview function. This step provides a baseline understanding of the current functionality, which is essential for refactoring. Observing the existing behavior helps ensure that the new implementation matches the original in terms of user experience.
- Add New Dependencies: Add the necessary dependencies for Docker integration and backend functionality by running the following commands:
  - `pnpm add dockerode express ws`: This adds `dockerode` for Docker control, `express` for the backend API, and `ws` for WebSockets to stream terminal output.
  - `pnpm add -D @types/express @types/ws`: This adds TypeScript types for Express and WebSockets as development dependencies. TypeScript types enhance code maintainability and prevent runtime errors.
- Remove WebContainer Dependencies: Modify the `package.json` file to remove `@webcontainer/api` and any related packages. After removing the dependencies, run `pnpm install` to update the `node_modules` directory. This step is crucial for eliminating the closed-source WebContainer and transitioning to a fully open-source environment. Removing unnecessary dependencies also reduces the project's size and complexity. A rough sketch of the resulting dependency section follows this list.
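For orientation only, here is a hedged sketch of what the dependency-related portion of `package.json` might look like after these changes: `@webcontainer/api` is gone, the three new runtime packages and their type definitions are present, and the version ranges shown are illustrative placeholders rather than pinned requirements (the rest of Bolt.diy's existing dependencies are omitted for brevity).

```json
{
  "dependencies": {
    "dockerode": "^4.0.0",
    "express": "^4.19.0",
    "ws": "^8.17.0"
  },
  "devDependencies": {
    "@types/express": "^4.17.21",
    "@types/ws": "^8.5.10"
  }
}
```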
Step 2: Add Backend Server
This step involves creating a Node.js backend server to handle the Docker runtime. The backend will manage the Docker containers and provide API endpoints for the frontend to interact with them. This server-side component is essential for the new client-server architecture.
- Create a New Directory: Create a new directory named `src/backend/` in your project. This directory will house all the backend-related files. Organizing the project structure is crucial for maintainability.
- Implement the Backend: Create a file named `src/backend/server.ts` and implement the backend server using the provided code. This server will handle session creation, file writes, command execution, and preview generation. It uses Express.js for the API, WebSockets for real-time communication, and Dockerode for Docker container management. The provided code includes the following functionality:

```typescript
import Docker from 'dockerode';
import express from 'express';
import WebSocket from 'ws';
import http from 'http';
import fs from 'fs-extra'; // Assume fs-extra is already a dep or add it
import path from 'path';
import { v4 as uuidv4 } from 'uuid'; // Add uuid if needed: pnpm add uuid

const app = express();
const server = http.createServer(app);
const wss = new WebSocket.Server({ server });
const docker = new Docker(); // Connect to local Docker daemon

app.use(express.json());

// Store active containers by session ID (in-memory for simplicity; use a DB for prod)
const sessions: Map<string, { container: Docker.Container; projectDir: string }> = new Map();

// API: Create a new project session and spin up a Docker container
app.post('/api/create-session', async (req, res) => {
  const sessionId = uuidv4();
  const projectDir = path.join(__dirname, '../../temp-projects', sessionId); // Local temp dir for project files
  await fs.ensureDir(projectDir);

  // Pull the Node.js image if needed (official, lightweight open-source image).
  // pull() resolves with a progress stream, so wait until the pull actually finishes.
  const pullStream = await docker.pull('node:20-alpine');
  await new Promise<void>((resolve, reject) =>
    docker.modem.followProgress(pullStream, (err: Error | null) => (err ? reject(err) : resolve())),
  );

  // Create container with a volume mount for project files
  const container = await docker.createContainer({
    Image: 'node:20-alpine',
    Tty: true,
    OpenStdin: true,
    AttachStdout: true,
    AttachStderr: true,
    HostConfig: {
      Binds: [`${projectDir}:/app`], // Mount local project dir to /app in container
      PortBindings: { '3000/tcp': [{ HostPort: '0' }] }, // Dynamic port for app preview
    },
    WorkingDir: '/app',
    Cmd: ['tail', '-f', '/dev/null'], // Keep container running idly
  });

  await container.start();
  sessions.set(sessionId, { container, projectDir });
  res.json({ sessionId, message: 'Session created' });
});

// API: Write a file to the project
app.post('/api/write-file/:sessionId', async (req, res) => {
  const { sessionId } = req.params;
  const { filePath, content } = req.body;
  const session = sessions.get(sessionId);
  if (!session) return res.status(404).json({ error: 'Session not found' });

  const fullPath = path.join(session.projectDir, filePath);
  await fs.writeFile(fullPath, content);
  res.json({ message: 'File written' });
});

// API: Run a command in the container
app.post('/api/run-command/:sessionId', async (req, res) => {
  const { sessionId } = req.params;
  const { command } = req.body;
  const session = sessions.get(sessionId);
  if (!session) return res.status(404).json({ error: 'Session not found' });

  const exec = await session.container.exec({
    Cmd: command.split(' '),
    AttachStdout: true,
    AttachStderr: true,
  });
  const stream = await exec.start({ hijack: true, stdin: false });

  // For non-streaming use, collect the output and return it when the command ends
  let output = '';
  stream.on('data', (chunk) => {
    output += chunk.toString();
  });
  stream.on('end', () => res.json({ output }));
});

// WebSocket for streaming terminal output
wss.on('connection', (ws, req) => {
  const sessionId = req.url?.split('/')[1]; // e.g., ws://localhost:4000/sessionId

  ws.on('message', async (message) => {
    const { command } = JSON.parse(message.toString());
    const session = sessionId ? sessions.get(sessionId) : undefined;
    if (!session) return ws.send(JSON.stringify({ error: 'Session not found' }));

    const exec = await session.container.exec({
      Cmd: command.split(' '),
      AttachStdout: true,
      AttachStderr: true,
    });
    const stream = await exec.start({ hijack: true, stdin: false });
    stream.on('data', (chunk) => ws.send(chunk.toString()));
  });
});

// API: Get the preview URL (proxy or direct)
app.get('/api/preview/:sessionId', async (req, res) => {
  const { sessionId } = req.params;
  const session = sessions.get(sessionId);
  if (!session) return res.status(404).json({ error: 'Session not found' });

  const info = await session.container.inspect();
  const port = info.NetworkSettings.Ports['3000/tcp'][0].HostPort;
  res.json({ previewUrl: `http://localhost:${port}` });
});

// Cleanup on shutdown: stop and remove all containers, delete temp project dirs
process.on('SIGINT', async () => {
  for (const session of sessions.values()) {
    await session.container.stop();
    await session.container.remove();
    await fs.remove(session.projectDir);
  }
  process.exit(0);
});

server.listen(4000, () => console.log('Backend server on port 4000'));
```
This backend server runs on port 4000. Ensure this port is available or adjust as necessary. It implements several key API endpoints:
- `/api/create-session`: Creates a new session and spins up a Docker container.
- `/api/write-file/:sessionId`: Writes a file to the project directory inside the container.
- `/api/run-command/:sessionId`: Executes a command inside the container and returns the output.
- `/api/preview/:sessionId`: Retrieves the preview URL for the running application.
The server also includes a WebSocket endpoint (`wss.on('connection')`) for streaming terminal output in real time. This is crucial for providing a responsive terminal experience to the user. The use of Express.js, WebSockets, and Dockerode ensures that the backend is efficient and scalable. A sketch of how the frontend could call these endpoints and the WebSocket stream is shown after this list.
- Modify Start Scripts: Update the start scripts in `package.json` to include the backend server. This ensures that the backend server runs alongside the frontend development server. The modification involves changing the `dev` script to run both the backend server and the Vite development server concurrently, which can be achieved either by joining the two commands with the `&` operator or by using the `concurrently` package.
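As a hedged example of the second option, the `scripts` section could be adjusted along the following lines. The script names, the `tsx` TypeScript runner, and the plain `vite` command for the frontend are assumptions made for illustration; the actual Bolt.diy `dev` command and your preferred TS runner may differ, and this route also requires adding `concurrently` (and `tsx`) as dev dependencies.

```json
{
  "scripts": {
    "dev:backend": "tsx src/backend/server.ts",
    "dev:frontend": "vite",
    "dev": "concurrently \"pnpm run dev:backend\" \"pnpm run dev:frontend\""
  }
}
```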
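To tie the pieces together, here is a minimal sketch of how the frontend could talk to the new backend. It assumes the server from this step is listening on port 4000; the helper names (`createSession`, `writeFile`, `runCommand`, `openTerminalStream`) are written purely for illustration and are not existing Bolt.diy functions.

```typescript
// Hypothetical frontend helpers for the new backend (illustrative only).
const BACKEND_URL = 'http://localhost:4000';

// Create a session; the backend spins up a Docker container and returns its ID.
export async function createSession(): Promise<string> {
  const res = await fetch(`${BACKEND_URL}/api/create-session`, { method: 'POST' });
  const { sessionId } = await res.json();
  return sessionId;
}

// Write a file into the session's project directory (mounted at /app in the container).
export async function writeFile(sessionId: string, filePath: string, content: string): Promise<void> {
  await fetch(`${BACKEND_URL}/api/write-file/${sessionId}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ filePath, content }),
  });
}

// Run a command and wait for the collected output (non-streaming variant).
export async function runCommand(sessionId: string, command: string): Promise<string> {
  const res = await fetch(`${BACKEND_URL}/api/run-command/${sessionId}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ command }),
  });
  const { output } = await res.json();
  return output;
}

// Stream terminal output over the WebSocket endpoint, e.g. for `pnpm install` or `pnpm run dev`.
export function openTerminalStream(sessionId: string, command: string, onData: (chunk: string) => void): WebSocket {
  const ws = new WebSocket(`ws://localhost:4000/${sessionId}`);
  ws.onopen = () => ws.send(JSON.stringify({ command }));
  ws.onmessage = (event) => onData(String(event.data));
  return ws;
}
```

In the later refactoring steps, helpers like these would take over the places where the code currently calls into `@webcontainer/api`.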