Go Hosting That Compiles on Deploy and Scales on Demand

Push Go source to a Git repo and the platform builds, runs, and scales your binary. Scaled instances run on separate physical hosts, so one hardware failure does not take down your service. Autoscaling stays within the limits you set.

Built for Go APIs, microservices, and gRPC backends
Git-Push Deploys

Push Go source and the platform fetches dependencies, builds, and runs your binary automatically.

Compiled Binaries

Platform compiles your Go source to a native binary — fast startup, small footprint, low overhead.

Hardware Fault Isolation

Same-layer instances distribute across separate physical hosts automatically for hardware-level fault isolation.

Build Variables

Control build and runtime behavior with GO_RUN, GOPATH, and custom build/run options.

24/7 engineer support
14 daily backups
8+ runtimes
< 10 min avg. support response

Source to production in three steps

  1. Deploy: Push to your repo. The platform resolves dependencies, compiles with your build options, and starts the binary. See Automation & CI/CD.
  2. Run: The binary runs directly, with no interpreter startup or JIT warmup. Tune startup with environment variables in the dashboard.
  3. Scale: Set reserved cloudlets (128 MiB RAM + 400 MHz CPU each) and a scaling limit. The platform scales vertically on demand and horizontally when CPU, RAM, network, or disk triggers fire. See Autoscaling & cost controls.
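The cloudlet unit in the scaling step makes capacity arithmetic straightforward. A minimal sketch, using only the 128 MiB / 400 MHz per-cloudlet figures stated above:

```go
package main

import "fmt"

// One cloudlet = 128 MiB RAM + 400 MHz CPU, per the scaling step above.
func cloudletResources(n int) (ramMiB, cpuMHz int) {
	return n * 128, n * 400
}

func main() {
	// A node with a scaling limit of 8 cloudlets tops out at:
	ram, cpu := cloudletResources(8)
	fmt.Printf("8 cloudlets = %d MiB RAM, %d MHz CPU\n", ram, cpu)
	// → 8 cloudlets = 1024 MiB RAM, 3200 MHz CPU
}
```

The same function works in reverse when sizing: divide your service's peak RAM by 128 MiB to pick a sensible scaling limit.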

Common patterns: stateless APIs behind an LB, compiled microservices, gRPC backends.

Go runtime, modules, and autoscaling

Watch autoscaling and failover in action
Interactive demo: baseline → scaling up → adding node → failover → scaling down, showing cloudlet count, node count, and hourly cost as nodes move between Primary, Standby, and Auto-Scaled roles.

Git-based deployment

Platform auto-builds from source

Pre-compiled binary

Upload via archive or external URL

Go version management

Switch Go releases with a container redeploy, no code changes required.

Start hosting Go in minutes

Set a reserved baseline for steady traffic, then set a burst limit for peak load. Billing follows what the platform actually uses.

Estimate cost · Chat with an engineer

Go starts fast and usually needs fewer reserved cloudlets than Node.js or Python. A common starter setup is 2 reserved cloudlets (~256 MiB RAM) with a scaling limit of 8. Use the calculator to model your service. If you need a fuller stack, the Marketplace includes load balancer, app node, database, and cache layouts.

Operate a full Go environment from one screen.

Topology view shows load balancers, app nodes, databases, storage, and an active terminal.

Roll to a newer Go runtime while keeping persistent data.

Redeploy dialog shows a tagged rollout with "Keep volumes data" enabled.

Run Go services at scale with proven architectures

Stateless API

LB → Go nodes → DB/Cache

Scale Go API nodes horizontally behind a load balancer. Keep state in external stores for clean scale-outs.

Microservices

Multiple compiled services

Deploy multiple Go binaries as independent environments. Route between services using platform-assigned internal DNS hostnames.
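Inter-service calls then target those internal hostnames. A sketch of the client side, where "orders.internal" is a hypothetical placeholder, not a real platform-assigned name (yours will differ):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// serviceURL builds a request URL against a platform-internal DNS name.
// The hostname passed in is assumed to come from your environment's
// assigned internal DNS entries.
func serviceURL(host, path string) string {
	return "http://" + host + path
}

func main() {
	// Give inter-service calls an explicit timeout; the default
	// http.Client has none and can hang on a dead peer.
	client := &http.Client{Timeout: 3 * time.Second}
	_ = client // used for real calls: client.Get(serviceURL(...))

	fmt.Println(serviceURL("orders.internal", "/v1/orders/42"))
	// → http://orders.internal/v1/orders/42
}
```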

gRPC backend

Binary protocol services

Go’s native gRPC support and low per-request resource overhead make it well-suited for high-throughput internal service communication.

Need a database? Deploy managed database clusters (MySQL, MariaDB, PostgreSQL) alongside your Go environment. Monitor scaling events and resource usage with built-in observability.

Common Questions

How does Go deployment work on the platform?

Push Go source to a Git repository. The platform fetches dependencies, builds the project, and runs the resulting binary automatically.

What Go versions can I use?

Multiple Go releases are available and updated regularly. Select a version during environment creation, or switch later through container redeploy.

What deployment variables can I configure?

GO_RUN (executable name), GOPATH (workspace directory), GO_BUILD_OPTIONS (build flags), and GO_RUN_OPTIONS (runtime flags).
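As an illustration, a build that needs custom flags might set variables like the following. The values shown are examples, not platform defaults:

```
GO_RUN=server                         # name of the compiled executable to start
GOPATH=/home/app/go                   # workspace directory (path is an example)
GO_BUILD_OPTIONS=-ldflags "-s -w"     # e.g. strip symbol tables for a smaller binary
GO_RUN_OPTIONS=-config app.conf       # flags passed to the binary at startup (example)
```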

What is anti-affinity distribution?

Same-layer Go instances are distributed across separate physical hosts, so a single hardware failure does not take down all nodes.

How does scaling work and how do I set baseline/max cost guardrails?

Set a baseline allocation for steady throughput and a ceiling for traffic bursts. Go containers scale within those boundaries. Autoscaling & cost controls explains the mechanics.

Can I deploy a pre-compiled Go binary instead of building from source?

Upload a precompiled binary via archive or external URL instead of using the Git-based build pipeline.

How do load balancers work with Go services?

A load balancer is automatically added when you scale out Go nodes. It distributes traffic across instances for both HTTP and gRPC workloads.

Which databases can I connect to?

MySQL, MariaDB, and PostgreSQL run as managed database nodes. Go apps connect with standard drivers (database/sql + your preferred driver). Database Clusters covers clustering and replication.
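A database/sql connection string can be assembled with the stdlib alone. The host, credentials, and database name below are placeholders (the platform shows the real values when the database node is created), and an actual connection additionally needs a driver import registered with database/sql:

```go
package main

import (
	"fmt"
	"net/url"
)

// dsn builds a PostgreSQL-style connection URL for database/sql.
// "postgres.internal" and the credentials are placeholders, not
// real platform values.
func dsn(user, pass, host, db string) string {
	u := url.URL{
		Scheme: "postgres",
		User:   url.UserPassword(user, pass),
		Host:   host,
		Path:   "/" + db,
	}
	q := u.Query()
	q.Set("sslmode", "require") // encrypt in transit
	u.RawQuery = q.Encode()
	return u.String()
}

func main() {
	fmt.Println(dsn("app", "secret", "postgres.internal:5432", "orders"))
	// → postgres://app:secret@postgres.internal:5432/orders?sslmode=require
}
```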

When should I choose Kubernetes instead of Go on App Hosting?

Go apps that need fine-grained container orchestration, custom networking, or multi-service deployments are a better fit for Kubernetes. Kubernetes Hosting explains the setup.

Can you help migrate an existing Go service?

We can walk through a Go migration: build pipeline, binary deployment, and runtime configuration. Portability & Migration covers the approach.

Can I use deploy hooks in Go release workflows?

Hook into the deploy pipeline to run binary validation, readiness probes, or database migration steps before traffic switches.


Start your 14-day free trial

Deploy in minutes with managed autoscaling and clustering built in.

No credit card required.