This guide explains how to set up and run the project locally using Docker (with Colima).
This project runs AI models and requires significant system resources, so ensure Docker has adequate resources allocated. Here's a recommended configuration using Colima:
```shell
colima start --cpu 7 --memory 16 --disk 55
```

After making changes to the source code, rebuild the Docker images using the following commands.
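Before building, it can help to confirm that the runtime actually received these resources. This is a minimal sketch, assuming `docker` is on the PATH; it falls back to a hint when the runtime is not reachable:

```shell
# Print the CPU count and memory the Docker runtime actually sees.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker info --format 'CPUs: {{.NCPU}}  Memory: {{.MemTotal}} bytes'
else
  echo "Docker runtime not reachable; run 'colima start' first."
fi
```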
Navigate to the backend directory and build the backend image:

```shell
cd backend
docker build \
  --no-cache \
  --file Dockerfile \
  --tag onyx-backend:v0.0.1 .
```

From the project's root directory, build the frontend image:
```shell
cd web
docker build \
  --no-cache \
  --file Dockerfile \
  --build-arg NODE_OPTIONS="--max-old-space-size=8192" \
  --tag onyx-frontend:v0.0.1 .
```

Update the docker-compose.dev.yml file located at ./deployment/docker_compose/ to use the newly built images. Then start the services with the following command:
```shell
IMAGE_TAG=v0.0.1 docker compose -f docker-compose.dev.yml -p onyx-stack up -d
```

Note that IMAGE_TAG must match the tag used when building the images above. This command starts all the necessary services; after a few minutes, the UI should be accessible at http://localhost:3000.
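For reference, the image overrides in docker-compose.dev.yml might look like the following. This is a hypothetical excerpt; the actual service names and image references in the file will differ:

```yaml
# Hypothetical excerpt of deployment/docker_compose/docker-compose.dev.yml
services:
  api_server:
    image: onyx-backend:${IMAGE_TAG:-latest}
    # ...
  web_server:
    image: onyx-frontend:${IMAGE_TAG:-latest}
    # ...
```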
Currently, every change to the backend or frontend requires rebuilding the corresponding Docker image, which slows the feedback loop. Exploring a setup that runs these services natively, without rebuilding Docker images, could significantly improve development efficiency.
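Until such a setup exists, a small wrapper script can make the rebuild cycle less error-prone by applying one consistent tag to both builds. This is only a sketch: it assumes it is run from the project root with the Dockerfiles in backend/ and web/ as above, and by default it only prints the commands; set RUN=1 to execute them.

```shell
#!/bin/sh
# rebuild.sh (hypothetical helper): rebuild both images with one consistent tag.
# Dry-run by default; set RUN=1 to actually execute the docker builds.
TAG="${TAG:-v0.0.1}"

run() {
  echo "+ $*"
  if [ "${RUN:-0}" = "1" ]; then "$@"; fi
}

run docker build --no-cache --file backend/Dockerfile \
  --tag "onyx-backend:${TAG}" backend
run docker build --no-cache --file web/Dockerfile \
  --build-arg NODE_OPTIONS="--max-old-space-size=8192" \
  --tag "onyx-frontend:${TAG}" web
```

Invoking it as `TAG=v0.0.2 RUN=1 sh rebuild.sh` would rebuild both images under the new tag, ready to be passed to compose via IMAGE_TAG.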