Docker Hosting — Any Image, Any Stack, Managed Infrastructure

Run custom Docker workloads without building your own platform layer. Pull from Docker Hub or private registries, use Docker CLI commands the same way you do locally, and scale with resource limits you control (each cloudlet = 128 MiB RAM + 400 MHz CPU).

Built for Docker teams · Microservices · Container migration
Any Image

Pull from Docker Hub or private registries with custom image support.

Managed Runtime

The platform manages networking, SSL certificates, and log collection. You deploy images; we handle the runtime plumbing.

Autoscaling

Scale vertically and horizontally with usage-based billing and limits you set.

Portable Workflows

Use standard Docker images and Git deploys to reduce lock-in risk when you migrate.

24/7 engineer support · 14 daily backups · 8+ runtimes · < 10 min avg support response

Go from image to production without managing the platform

  1. Deploy: Pull from Docker Hub, connect a private registry, or build from a Dockerfile in your repository. Release flows support Git/SVN, archive uploads, and redeploy workflows. Automation & CI/CD →
  2. Run: The platform handles networking, SSL, logging, and container restarts, while you keep control of images, env vars, ports, and runtime configuration. Container controls →
  3. Scale: Set reserved cloudlets for your baseline and a scaling limit for bursts. Vertical scaling adjusts per node, and horizontal scaling adds nodes behind an automatic load balancer. Scaling controls →
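The three deploy paths above map onto familiar Docker commands. A minimal sketch, where the image name and registry URL are placeholders; the platform dashboard performs the equivalent steps for you:

```shell
# Path 1: pull a public image from Docker Hub
docker pull nginx:1.27

# Path 2: pull from a private registry (URL is a placeholder)
docker pull registry.example.com/team/api:2.4.0

# Path 3: build from a Dockerfile in your repository, then tag and push for release
docker build -t team/api:2.4.0 .
docker tag team/api:2.4.0 registry.example.com/team/api:2.4.0
docker push registry.example.com/team/api:2.4.0
```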

Common patterns: microservices behind LB • legacy app containerization • multi-tier stacks. Start with a proven Marketplace setup, then customize it for production.

Run any Docker image without rebuilding your workflow

Keep your existing images, registries, and Docker commands while the platform handles certificates, networking, and node lifecycle around them.

Watch autoscaling and failover in action
The demo steps through five stages: baseline, scaling up, adding a node, failover, and scaling down. It starts from 3 cloudlets on 1 node at $0.008/hr (Docker App · Node 1 as primary); Node 2 appears as a standby during failover and as an auto-scaled node during scale-out.

Custom Docker containers

Deploy your own or third-party images from public or private registries

Docker Engine CE

Keep native Docker CLI workflows on managed infrastructure

Docker Swarm Cluster

One-click from Marketplace with preconfigured manager nodes (scheduling) and worker nodes (running containers)
Swarm now, Kubernetes when you need it. Start with Docker Swarm from the Marketplace when you need basic container orchestration without rebuilding your deployment process. Move to Kubernetes when your roadmap requires HPA, Helm, or multi-cluster coordination.
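If you later manage the Swarm cluster directly over SSH, the standard Swarm CLI applies. A minimal sketch with placeholder addresses and image names; the Marketplace package preconfigures the manager/worker split for you:

```shell
# On the manager node: initialize the swarm (IP is a placeholder)
docker swarm init --advertise-addr 10.0.0.1

# On each worker: join with the token printed by `swarm init`
# docker swarm join --token <worker-token> 10.0.0.1:2377

# Back on the manager: run a replicated service behind Swarm's routing mesh
docker service create --name web --replicas 3 -p 80:80 nginx:1.27
docker service ls
```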

Ship Docker containers in minutes

Estimate cost now and deploy when ready.

Estimate cost Chat with an engineer

Typical Docker starter: 3 reserved cloudlets (~384 MiB RAM) per container, scaling limit set to match your traffic peak. Use the calculator to model multi-container setups. For orchestration, deploy Docker Swarm from the Marketplace. If your roadmap requires Kubernetes-native operations, route workloads to Kubernetes Hosting.
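The starter sizing above is simple arithmetic on the cloudlet unit (128 MiB RAM + 400 MHz CPU). A quick sketch for modeling a multi-container setup; the container count and per-container reservation are example values:

```shell
cloudlet_ram_mib=128
cloudlet_cpu_mhz=400

reserved_per_container=3   # example: 3 reserved cloudlets per container
containers=2               # example: app container + worker container

per_container_ram=$((reserved_per_container * cloudlet_ram_mib))
total_cloudlets=$((reserved_per_container * containers))
total_ram=$((total_cloudlets * cloudlet_ram_mib))
total_cpu=$((total_cloudlets * cloudlet_cpu_mhz))

echo "Per container: ${per_container_ram} MiB baseline"   # 384 MiB
echo "Total reserved: ${total_cloudlets} cloudlets = ${total_ram} MiB RAM, ${total_cpu} MHz CPU"
```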

See the deployment model before you commit.

Dashboard view shows custom images, Docker Engine, and Kubernetes paths.

Full runtime control — networking, SSL, and container lifecycle handled for you

Docker containers run in isolated environments. Manage images, env vars, ports, and resources from the dashboard — the platform handles networking, SSL, and container restarts.

  • Image sources & deployment: pull from Docker Hub, connect private registries, or deploy through Git/SVN and archive workflows.
  • Runtime configuration: manage environment variables, ports, links, volumes, and container run flags instead of hardcoding config into images.
  • Domain and TLS: bind domains with CNAME/A records and provision SSL certificates automatically.
  • Diagnostics and operations: use the log browser and SSH access for troubleshooting.
  • Network boundaries: apply inbound/outbound container firewall rules. Isolation between environments is on by default.
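The same settings you would pass to `docker run` locally map onto the dashboard fields above. A sketch with placeholder names and values:

```shell
# Placeholders throughout: image name, registry, env values, volume name.
# -e keeps config out of the image; -p maps the service port;
# -v uses a named volume so data survives redeploys;
# --restart approximates the platform's managed container lifecycle.
docker run -d \
  --name api \
  -e DATABASE_URL="postgres://db:5432/app" \
  -p 8080:8080 \
  -v app-data:/var/lib/app \
  --restart unless-stopped \
  registry.example.com/team/api:2.4.0
```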

Most Linux-based images from Docker Hub work out of the box. Windows containers, GPU-heavy images, and highly privileged containers aren’t supported without prior review. Chat with an engineer before deploying one of these to production.

Pull public images without leaving the platform.

Docker Hub search results for OpenResty shown inside the deployment UI.

Upgrade runtimes without wiping app data.

Redeploy dialog shows a newer image tag with Keep volumes data turned on.

Scale without rebuilding your deployment process

Scale policy starts with your limits: reserved cloudlets set baseline capacity, and the scaling limit caps burst usage. Hourly billing uses whichever is higher — peak RAM usage or average CPU usage — each measured in cloudlets.

  • Scaling triggers: CPU, RAM, network, and disk thresholds can trigger vertical scaling (more resources per node) or horizontal scaling (more nodes).
  • Stateless scale-out: launch new nodes from the base template for web/API tiers that need fast burst response.
  • Stateful scale-out: new nodes are copied from the running primary container (filesystem included) when data continuity matters more than launch speed.
  • Automatic load balancing: an LB layer is added when Docker nodes scale past one instance.
  • Host distribution: same-role nodes can spread across hardware to reduce impact from single-host failures.
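The "whichever is higher" billing rule can be checked with quick arithmetic. The usage figures below are example values, and rounding up to whole cloudlets is an assumption of this sketch:

```shell
peak_ram_mib=640    # example peak RAM for the hour
avg_cpu_mhz=900     # example average CPU for the hour

# Convert each measure into cloudlets, rounding up
ram_cloudlets=$(( (peak_ram_mib + 127) / 128 ))
cpu_cloudlets=$(( (avg_cpu_mhz + 399) / 400 ))

# Billed cloudlets = the higher of the two measures
billed=$(( ram_cloudlets > cpu_cloudlets ? ram_cloudlets : cpu_cloudlets ))
echo "RAM: ${ram_cloudlets} cloudlets, CPU: ${cpu_cloudlets} cloudlets, billed: ${billed}"
```

Here RAM dominates (5 cloudlets vs 3), so the hour is billed at 5 cloudlets even though average CPU stayed low.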

Clone full environments for staging. Export/import and transfer workflows support migration planning. Portability depends on your target provider’s Docker configuration.

Questions about scaling your Docker workloads? Chat with an engineer.

Pick the orchestration path that fits your roadmap

Swarm

Docker Swarm from Marketplace

Start with a one-click Swarm cluster when you want container orchestration without rebuilding your deployment process.

Engine

Native Docker Engine CE

Run Docker Engine CE on standalone nodes or inside Swarm workers when your team wants full native Docker commands on managed infrastructure.

Need orchestration beyond Swarm? Route to Kubernetes Hosting. For preconfigured Docker and app stacks, explore the Marketplace.

Deploy proven Docker architectures in minutes

Microservices

Load balancer → containers → database

Deploy independent services as Docker containers behind a shared load balancer. Docker Swarm or standalone containers work equally well for this pattern.

Containerized legacy

VM-to-container migration

Use system containers to migrate legacy applications with minimal refactoring. The container behaves like a lightweight VM while gaining platform scaling and management.

Multi-tier stack

App + DB + cache

Run your application, database, and cache layer as separate Docker containers within one environment. Each tier scales independently based on its own resource triggers.

Put TLS and traffic routing in front of the app tier.

The architecture diagram shows SSL termination at the load balancer.

Common Questions

How do I deploy a Docker image on the platform?

Pull an image from Docker Hub or a private registry, or build from a Dockerfile in your Git repository. The platform handles image layers, networking, and container startup automatically.

Can I use private Docker registries?

Configure registry credentials and the platform pulls images from your private registry during deployment. Any Docker-compatible registry works.
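Locally, the same credential flow is a standard `docker login`; the registry URL and username below are placeholders, and on the platform you enter the same values in the deployment dialog:

```shell
docker login registry.example.com -u deploy-bot
# In CI, pass the token via stdin instead of an interactive prompt:
# echo "$REGISTRY_TOKEN" | docker login registry.example.com -u deploy-bot --password-stdin
docker pull registry.example.com/team/api:2.4.0
```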

How are Custom Docker containers different from Docker Engine CE?

Custom Docker containers run third-party images with platform-managed networking, scaling, and lifecycle. Docker Engine CE gives you a full native Docker CLI workflow on managed infrastructure; you control the Docker daemon directly.

How does autoscaling work with Docker containers?

The same cloudlet model applies. Set reserved cloudlets for a baseline and a scaling limit for burst capacity. Vertical scaling adjusts resources per container.

Can I run Docker Compose on the platform?

Native Docker Compose file support is not available, but the platform topology wizard handles multi-container setups with the same result. Define your stack through the dashboard or API; each service deploys as a separate scalable node with managed networking between them.
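For teams arriving with a Compose file, the mapping is mostly mechanical: each Compose service becomes its own node in the topology. A hypothetical `docker-compose.yml` fragment for illustration, with placeholder image names and values:

```yaml
# Each service below maps to one scalable node in the environment;
# platform networking replaces Compose's default bridge network.
services:
  api:
    image: registry.example.com/team/api:2.4.0
    ports: ["8080:8080"]
    environment:
      DATABASE_URL: postgres://db:5432/app
  db:
    image: postgres:16
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```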

How is networking handled for Docker containers?

The platform manages internal DNS resolution, port routing, and SSL termination. Containers within the same environment communicate over a private network. External traffic routes through the platform load balancer with automatic SSL via Let's Encrypt or your own certificates.
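Inside one environment, containers resolve each other by internal hostname over the private network. A brief sketch; the hostnames here are placeholders, and hostname assignment varies by setup:

```shell
# From the app container: check reachability of the database node by hostname
ping -c 1 db

# Application config can therefore use hostnames instead of hardcoded IPs:
export DATABASE_URL="postgres://db:5432/app"
```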


Start your free trial

Deploy in minutes with managed autoscaling and clustering built in.

No credit card required. Deploy faster with managed Docker operations.