Welcome to CloudVA Forge!

Forge is the fastest way to create, test, and build code with Docker containers.
We value your time: in the benchmarks below, Forge is 3-4x faster than comparable services.

Project   Product          Config         Build time
CPython   CloudVA Forge    8 CPU, 8 GB    0m41s
CPython   AWS CodeBuild    8 CPU, 15 GB   1m53s (3x slower)
CPython   Github Actions   2 CPU, 7 GB    2m31s (4x slower)
Node.js   CloudVA Forge    4 CPU, 15 GB   0h27m
Node.js   CircleCI         4 CPU, 15 GB   1h18m (3x slower)
Node.js   Github Actions   2 CPU, 7 GB    1h50m (4x slower)
Learn more about Forge below.
Forge VMs
Our VMs are perfect for fast-paced work.
Provisioning takes 0.7s, and a VM typically responds to ping within 3 seconds of a cold start.
You can automate VMs with Triggers and run arbitrary Github runners on-demand to significantly speed up your CI/CD pipeline.
A RESTful API for full control is coming soon.
Forge Docker flow
Docker flow allows you to build a Docker image and push it to the private Forge Docker registry. You can then deploy the image anywhere you want.
Docker flows have two stages:
  1. The prepare stage, described by the YAML prepare: section, runs commands on the host before the Docker image is built.
  2. The build stage builds the Docker image from the specified Dockerfile and pushes it to the Forge private registry.
Both stages share the current working directory and the environment variables defined in the environment: YAML section.
To create a Docker flow in a few clicks, specify a Tag, a Dockerfile, and optionally commands to run in YAML format:

Tag: cpython-builder-ubuntu2004
YAML:
--------------------------------------------
environment:
  # Would be set automatically if we used Github integration through Forge Triggers
  - GITHUB_REPOSITORY: "https://github.com/python/cpython"
prepare:
  - git clone $GITHUB_REPOSITORY project
--------------------------------------------
Dockerfile:
--------------------------------------------
# Prepare ubuntu 20.04 build environment
FROM ubuntu:20.04
RUN apt-get update -qq
RUN DEBIAN_FRONTEND=noninteractive TZ=Etc/UTC apt-get -y install tzdata
RUN apt-get -y install cmake gcc
COPY project/LICENSE.txt /LICENSE.txt 
# Copies project/LICENSE.txt, which we checked out in "prepare" stage.
# "prepare" stage runs on host, so inside the docker we don't need to clone the project 
# .. or install "git", making the image smaller and faster.
--------------------------------------------
There is no need to rebuild this image on every commit, or even on every release; it only goes stale when the LICENSE or the build requirements change. You can set up automation to rebuild it only when necessary.
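One way to drive such automation, assuming your CI checks out the repository, is to compare the last two commits and rebuild only when the files that feed the image changed. This is a sketch, not a built-in Forge feature; it creates a throwaway git repo so the snippet runs anywhere, but in CI you would run the "git diff" check inside your real checkout:

```shell
# Sketch: rebuild the builder image only when its inputs changed.
# A throwaway demo repo makes the snippet self-contained.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "init"
echo "MIT" > LICENSE.txt
git add LICENSE.txt
git -c user.email=ci@example.com -c user.name=ci commit -q -m "update LICENSE"
# Did LICENSE.txt or the Dockerfile change in the last commit?
if git diff --name-only HEAD~1 HEAD | grep -qxE 'LICENSE\.txt|Dockerfile'; then
  echo "inputs changed: trigger a Docker flow rebuild"
else
  echo "image is up to date"
fi
```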
Forge Run flow
Run flow can be used to execute commands within a Docker container. This is useful in a CI/CD pipeline to run a build in the consistent environment provided by the container, or to test the container itself.
Run flows have three stages:
  1. The prepare stage, described by the YAML prepare: section, runs commands on the host before the container runs.
  2. The execute stage, described by the YAML execute: section, runs commands inside the Docker container specified by the Tag.
  3. The archive stage, described by the YAML archive: section, runs commands on the host after the container exits.
All three stages share the current working directory and the environment variables defined in the environment: YAML section.
To create a Run flow, select a Docker Tag to work with and specify the commands to run in YAML format:
Tag: cpython-builder-ubuntu2004
YAML:
--------------------------------------------
environment:
  # Would be set automatically if we used Github integration through Forge Triggers
  - GITHUB_REPOSITORY: "https://github.com/python/cpython" 
  - GITHUB_SHA: "29f1b0bb1ff73dcc28f0ca7e11794141b6de58c9"
prepare:
  - git clone $GITHUB_REPOSITORY cpython
  - cd cpython
  - git checkout $GITHUB_SHA
execute:
  - cd cpython
  - ./configure
  - make -j $FORGE_CPU
  - make test
archive:
  - tar -czvf build.tar.gz cpython/build  
  - save_dir=/media/data/cpython/$GITHUB_SHA
  - mkdir -p $save_dir
  - cp build.tar.gz $save_dir
--------------------------------------------
In our example the archive stage uses the Forge shared persistent volume (SPV), but we don't limit your options: you're welcome to store your artifacts outside of Forge.
Forge Triggers
Triggers can be used to automate Forge VMs, Dockers and Runs. Triggers are secure webhooks that set off actions on Forge. They can be used to integrate Forge with CI/CD pipelines, or just automatically start/stop a VM when needed.
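Since a Trigger is a secure webhook, invoking one by hand could look like the sketch below. The URL and token shown are made-up placeholders, not a documented endpoint; copy the real webhook URL from the Trigger's settings page.

```shell
# Hypothetical: fire a Trigger that starts a VM.
# Both the URL and the token variable are placeholders.
curl -X POST \
     -H "Authorization: Bearer $FORGE_TRIGGER_TOKEN" \
     "https://forge.example/api/triggers/start-build-vm"
```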
Github Triggers are designed to integrate with Github through a workflow-forge-trigger action. They copy "GITHUB_*" and "FORGE_*" environment variables from the action, as shown in the special variables section.
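As an illustration, a Github workflow using that action might look like the sketch below. Only the action name workflow-forge-trigger comes from this page; the owner, version, input name, and secret name are hypothetical placeholders, not a documented interface.

```yaml
name: forge-ci
on: [push]
jobs:
  trigger-forge:
    runs-on: ubuntu-latest
    steps:
      - name: Fire Forge Run flow
        # "workflow-forge-trigger" is the action named above; the owner,
        # version, and inputs here are hypothetical placeholders.
        uses: cloudva/workflow-forge-trigger@v1
        with:
          trigger-url: ${{ secrets.FORGE_TRIGGER_URL }}
```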
Forge private docker registry
Forge created a project for you on our private docker registry. By default only you have access to this project. You can manage the project here.
Forge registry hosts your images and serves as a proxy for Docker Hub images, significantly speeding up pulls.
Forge Docker and Run flows automatically use this private registry, no configuration required.
You're welcome to use the registry from outside of Forge. Credentials to do so can be found on your Account page.
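From outside Forge, using the registry would follow the standard Docker workflow. The hostname below is a made-up placeholder; use the registry address and credentials shown on your Account page.

```shell
# Hypothetical: "registry.forge.example" is a placeholder hostname.
docker login registry.forge.example -u <username>
docker pull registry.forge.example/<project>/cpython-builder-ubuntu2004
docker tag my-image registry.forge.example/<project>/my-image:latest
docker push registry.forge.example/<project>/my-image:latest
```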
YAML specification for Dockers and Runs
environment:
  # present in Dockers and Runs
  # sets common environment variables for other stages
  # some environment variables are automatically set by Forge
  - answer: 42
prepare:
  # present in Dockers and Runs
  # executes commands on host before Docker:build or Run:execute
  # typical use: cloning/checking out a repository; installing dependencies
  - echo $answer # echoes "42"
execute:
  # present only in Runs; Dockers have a "build" stage instead, controlled solely by the Dockerfile
  # executes commands in docker container defined by Run tag
  # typical use: building a project; running tests
  - echo $answer # echoes "42"
archive:
  # present only in Runs
  # executes commands on host after Run execute stage
  # typical use: storing build files; saving logs;
  - echo $answer # echoes "42"
Forge special variables
Variables with the "FORGE_" prefix are reserved for internal use. Currently the following variables have special meaning to Forge:
environment:
  - FORGE_CPU: 4    # CPU cores
  - FORGE_RAM: 4096 # RAM in MB
You can use those to control resource usage for Docker and Run flows. If you do not, Forge will set them automatically to some sane defaults. In any case, you can use these variables to limit your build parallelism:
execute:
  #...
  - make -j $FORGE_CPU
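When the same build commands also run outside Forge (say, on a laptop), FORGE_CPU is unset, so a fallback is handy. A small sketch, assuming a Linux host with nproc:

```shell
# Fall back to the machine's core count when FORGE_CPU is not set.
JOBS="${FORGE_CPU:-$(nproc)}"
echo "building with $JOBS parallel jobs"
# make -j "$JOBS"   # the actual build step, commented out for the demo
```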
When Triggers are used, Forge will set additional environment variables depending on Trigger type. For Github Triggers Forge will copy, as-is, all "GITHUB_*" and "FORGE_*" variables present in the Github action. As a result, all of these are available in Docker and Run flows:
prepare:
  - git clone $GITHUB_REPOSITORY repo
  - cd repo
  - git checkout $GITHUB_REF
  - make -j $FORGE_CPU
Note that variables carrying OS paths local to Github runner such as GITHUB_EVENT_PATH will not be correct, or meaningful, on Forge.
Forge shared persistent volume
Forge created a special network storage space we call "shared persistent volume" (or "SPV") for you.
All your VMs have this volume mounted by default at /media/data.
This volume is writable, persistent, only accessible by you, and shared live across all your VMs (and, by extension, by all your Docker/Run flows). You can use this volume for VM-to-VM file sharing (don't forget locks!) or just to store data for later use. Think of it as a "global home directory".
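For the "don't forget locks" part, one conventional approach is flock(1). The pattern below is standard shell locking, not a Forge API; SPV falls back to a temporary directory so the demo runs anywhere, while on a Forge VM you would set SPV=/media/data.

```shell
# Serialize concurrent writers to a shared file with an exclusive flock.
SPV="${SPV:-$(mktemp -d)}"   # on Forge: SPV=/media/data
LOG="$SPV/build.log"
(
  flock -x 9                          # take an exclusive lock on fd 9
  echo "vm-a: build finished" >> "$LOG"
) 9>"$LOG.lock"                       # fd 9 backed by a lock file on the SPV
cat "$LOG"
```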
Web access and a RESTful API for the SPV are coming soon.
