Deploying RepoFlow

Requirements

  1. Install Docker and Docker Compose.
  2. Ensure your machine has at least 2 CPU cores and 4GB of RAM.
  3. Review the following docker-compose.yml and configuration files to ensure they meet your requirements.

Note: This deployment method is designed for quick and lightweight setups. Optional services like Elasticsearch and Redis are not included in the Docker Compose file to keep the deployment lighter. For production environments, it is recommended to deploy RepoFlow using the Helm chart on a Kubernetes cluster. Refer to the Helm deployment guide for more information.
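The CPU and RAM minimums above can be verified before installing. A rough preflight sketch (Linux-only; the `meets_minimum` helper is illustrative, not part of RepoFlow):

```python
# Rough preflight check for the 2-CPU / 4 GB RAM minimum stated above.
# Reads /proc/meminfo, so it only works on Linux; check manually elsewhere.
import os

def meets_minimum(min_cpus: int = 2, min_ram_gb: float = 4.0) -> bool:
    cpus = os.cpu_count() or 0
    try:
        with open("/proc/meminfo") as f:
            kb = int(next(line for line in f if line.startswith("MemTotal")).split()[1])
        ram_gb = kb / (1024 ** 2)
    except (OSError, StopIteration, ValueError):
        return False  # unknown platform; verify the requirements manually
    return cpus >= min_cpus and ram_gb >= min_ram_gb
```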

Installation

1. Create the Docker Compose File

Create a file named docker-compose.yml in your project directory with the following content:

version: "3.9"

services:
  nginx:
    image: nginxinc/nginx-unprivileged:1.27.4-bookworm-perl
    ports:
      - "9080:8080"
    volumes:
      - ./nginx/conf/conf.d:/etc/nginx/conf.d/default.conf
    depends_on:
      - server
      - client
      - hasura
    deploy:
      replicas: 1
    networks:
      - repoflow-net

  client:
    image: api.repoflow.io/repoflow-public/docker-public/library/repoflow-client:0.4.7
    volumes:
      - ./client/env.js:/usr/share/nginx/html/env.js:ro
      - ./client/analytics.js:/usr/share/nginx/html/analytics.js:ro
    environment:
      - HASURA_API_URL=http://hasura:8080/v1/graphql
      - FRONTEND_URL=http://nginx:8080
    deploy:
      replicas: 1
    networks:
      - repoflow-net

  server:
    image: api.repoflow.io/repoflow-public/docker-public/library/repoflow-server:0.4.7
    environment:
      - IS_PRINT_ENV=true
      - SERVER_PORT=3000
      - SERVER_URL=http://localhost:9080/api
      - FRONTEND_URL=http://localhost:9080
      - TMP_FOLDER=/tmp
      - COOKIE_DOMAIN=localhost
      - COOKIE_SECURE=false
      - COOKIE_SAME_SITE=strict
      - COOKIE_HTTP_ONLY=false
      - GENERAL_COOKIE_SECRET=bi3ninEIFB39BIQNWEAIDPAOEJNFIJ200DNr92biBNDF
      - S3_ACCESS_KEY=AkIAIOS1Dasd1asfas21eODNN7EXPLE
      - S3_SECRET_KEY=d716g2s7!@1da89QD1N98AS8h19s
      - S3_USE_SSL=false
      - S3_PORT=9000
      - S3_END_POINT=minio
      - S3_BUCKET=repoflow
      - S3_USE_PRE_SIGNED_URL=false
      - HASURA_URL=http://hasura:8080/v1/graphql
      - HASURA_URL_REST=http://hasura:8080/api/rest
      - HASURA_ADMIN_SECRET=af5da89af9cd2!42b3%$9d34$ec34d02bc621
      - IS_SMART_SEARCH_ENABLED=false
      - DEFAULT_SEARCH_LIMIT=10
      - IS_REDIS_ENABLED=false
      - IS_REMOTE_CACHE_ENABLED=true
      - JWT_SECRET=d09dj109jqshf8m1d9139r93djd9j1d9j209dj1
      - RESET_PASSWORD_JWT_SECRET=98b3dn87q6ND98Q3BD8676bn8D98MdMD9d97nD9873D9173BD97
      - PERSONAL_ACCESS_TOKEN_JWT_SECRET=9HFN87bv8cwhef8b249cbwegvc72G8F7V84BC8RHCB83g9cbcb
      - COOCKIE_EXPIRY_IN_SECONDS=2592000
      - JWS_ALGORITHM=HS256
      - DEFAULT_ADMIN_USER_NAME=admin
      - DEFAULT_ADMIN_PASSWORD=password
    volumes:
      - server-logs:/var/log/repoflow
      - grype-db:/srv/vulnerabilitiesScanning
    depends_on:
      - hasura
    deploy:
      replicas: 1
    networks:
      - repoflow-net

  postgresql:
    image: postgres:14
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: repoflow
    volumes:
      - postgresql-data:/var/lib/postgresql/data
    deploy:
      replicas: 1
    networks:
      - repoflow-net

  minio:
    image: bitnami/minio:2024.12.18-debian-12-r1
    environment:
      - MINIO_ROOT_USER=AkIAIOS1Dasd1asfas21eODNN7EXPLE
      - MINIO_ROOT_PASSWORD=d716g2s7!@1da89QD1N98AS8h19s
      - MINIO_BROWSER=on
    volumes:
      - minio-data:/bitnami/minio/data
    deploy:
      replicas: 1
    networks:
      - repoflow-net

  hasura:
    image: hasura/graphql-engine:v2.37.0
    environment:
      - HASURA_GRAPHQL_ENABLE_CONSOLE=true
      - HASURA_GRAPHQL_DEV_MODE=true
      - HASURA_GRAPHQL_ENABLED_LOG_TYPES=startup,http-log,webhook-log,websocket-log,query-log
      - HASURA_GRAPHQL_ADMIN_SECRET=af5da89af9cd2!42b3%$9d34$ec34d02bc621
      - HASURA_GRAPHQL_JWT_SECRET={"key":"d09dj109jqshf8m1d9139r93djd9j1d9j209dj1","type":"HS256"}
      - HASURA_GRAPHQL_UNAUTHORIZED_ROLE=anonymous
      - HASURA_GRAPHQL_METADATA_DATABASE_URL=postgres://user:password@postgresql:5432/repoflow
      - HASURA_GRAPHQL_DATABASE_URL=postgres://user:password@postgresql:5432/repoflow
    depends_on:
      - postgresql
    deploy:
      replicas: 1
    networks:
      - repoflow-net

networks:
  repoflow-net:
    driver: bridge

volumes:
  minio-data:
  server-logs:
  postgresql-data:
  grype-db:
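The secrets above (GENERAL_COOKIE_SECRET, JWT_SECRET, the reset-password and personal-access-token secrets, and HASURA_ADMIN_SECRET) are sample values; replace them before any real deployment. A minimal sketch for generating replacements with Python's stdlib `secrets` module (the variable names mirror the compose file; the token length is illustrative):

```python
# Generate replacement values for the sample secrets in docker-compose.yml.
import secrets

def make_secret(nbytes: int = 32) -> str:
    """URL-safe random token, suitable for cookie/JWT secrets."""
    return secrets.token_urlsafe(nbytes)

replacements = {
    "GENERAL_COOKIE_SECRET": make_secret(),
    "JWT_SECRET": make_secret(),
    "RESET_PASSWORD_JWT_SECRET": make_secret(),
    "PERSONAL_ACCESS_TOKEN_JWT_SECRET": make_secret(),
    "HASURA_ADMIN_SECRET": make_secret(),
}

for name, value in replacements.items():
    print(f"{name}={value}")
```

Note that the same value must appear in both places where it is shared: JWT_SECRET must match the "key" in HASURA_GRAPHQL_JWT_SECRET, and HASURA_ADMIN_SECRET must match HASURA_GRAPHQL_ADMIN_SECRET.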

2. Create the Nginx Configuration

Create a file named conf.d inside an nginx/conf/ directory (the compose file mounts it into the nginx container as default.conf) with the following content:

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

client_max_body_size 1000M;

server {
    listen 8080 default_server;

    add_header X-Frame-Options "DENY";
    add_header Content-Security-Policy "frame-ancestors 'none';";

    # Proxy Docker Registry /v2 requests straight to the backend,
    # keeping the /v2 path (no /api prefix is added)
    location /v2/ {
        client_max_body_size 1000M;
        proxy_read_timeout 600s;
        proxy_request_buffering off;
        rewrite ^/v2/(.*)$ /v2/$1 break;
        proxy_pass http://server:3000;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /api/ {
        client_max_body_size 1000M;
        proxy_read_timeout 600s;
        proxy_request_buffering off;
        proxy_pass http://server:3000/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Handle Hasura GraphQL (with WebSocket upgrade support)
    location /hasura {
        proxy_pass http://hasura:8080/v1/graphql;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }

    # Default location for frontend
    location / {
        proxy_pass http://client:8080/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
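For prefix locations like these, nginx routes each request to the location with the longest matching prefix. The four blocks above therefore split traffic as in this small sketch (the ROUTES table and upstream_for helper are purely illustrative, not part of RepoFlow):

```python
# Illustration of the nginx routing above: longest matching prefix wins.
ROUTES = [
    ("/v2/",    "http://server:3000"),             # Docker Registry API
    ("/api/",   "http://server:3000/"),            # RepoFlow server API
    ("/hasura", "http://hasura:8080/v1/graphql"),  # Hasura GraphQL
    ("/",       "http://client:8080/"),            # frontend (default)
]

def upstream_for(path: str) -> str:
    """Return the upstream a request path is proxied to."""
    matches = [(prefix, upstream) for prefix, upstream in ROUTES if path.startswith(prefix)]
    return max(matches, key=lambda m: len(m[0]))[1]
```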

3. Create the Client Environment File

Create a file client/env.js with the following content:

window.HASURA_API_URL = "/hasura";
window.IS_CONNECTION_SECURE = false;
window.REPOFLOW_SERVER = "/api";
window.COOKIE_DOMAIN = "localhost";
window.IS_PRINT_ENV = true; // useful for debugging
window.DOCS_URL = "/docs";

4. Create the Analytics Script

Create a file client/analytics.js with your analytics script or leave the file empty:

// Add your analytics script here, e.g., Google Analytics or Mixpanel.

5. Start RepoFlow

Run the following command from your project directory:

docker compose up -d

(Use docker-compose up -d if you are on Compose v1.)
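The containers take a few moments to become ready. A small sketch that polls the frontend until it answers (the wait_for helper, timeouts, and port are illustrative; 9080 is the port published in the compose file above):

```python
# Poll a URL until it responds or the timeout expires.
import time
import urllib.error
import urllib.request

def wait_for(url: str, timeout: float = 60.0, interval: float = 2.0) -> bool:
    """Return True once `url` answers any HTTP status, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5):
                return True
        except urllib.error.HTTPError:
            return True  # server responded, even if with an error status
        except (urllib.error.URLError, OSError):
            time.sleep(interval)  # not up yet; retry
    return False

# Usage after `docker compose up -d`:
#   wait_for("http://localhost:9080")
```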

6. Final File and Folder Structure

After completing the above steps, your project directory should look like this:

RepoFlow/
├── docker-compose.yml
├── nginx/
│   └── conf/
│       └── conf.d
└── client/
    ├── env.js
    └── analytics.js

This structure ensures all necessary configuration files and scripts are in place, providing a clean and organized setup for deploying RepoFlow.

7. Next Steps

  1. Access RepoFlow at http://localhost:9080.
  2. Log in using the default credentials, and change them after your first login:
    • Username: admin
    • Password: password
  3. Follow the guides to create a workspace and add repositories.

Tip: For production-grade deployments, switch to Kubernetes with Helm charts for better scalability and reliability.