Node.js Hosting With PM2 Cluster Mode and Horizontal Autoscaling

Deploy Node.js from Git with PM2. Cluster mode spreads workers across CPU cores, and pm2 reload handles zero-downtime updates. Autoscaling absorbs traffic surges within the limits you set. If you use WebSockets or server-side sessions, configure sticky sessions or a Redis adapter before adding a second node.

Built for APIs, microservices, SSR apps, and SaaS backends
npm & PM2 Included

Full npm tooling and PM2 for zero-downtime reloads, clustering, and log management — built in.
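As a sketch of what this looks like in practice, a minimal PM2 ecosystem file can request cluster mode across all available cores (the app name, script path, and memory limit below are illustrative, not platform requirements):

```javascript
// ecosystem.config.js — minimal PM2 cluster-mode sketch.
// "api" and "./server.js" are placeholders for your own app.
const ecosystem = {
  apps: [
    {
      name: 'api',
      script: './server.js',
      exec_mode: 'cluster',       // PM2 load-balances requests across workers
      instances: 'max',           // spawn one worker per available CPU core
      max_memory_restart: '300M', // restart a worker that grows past 300 MB
    },
  ],
};

module.exports = ecosystem;
```

With this file in place, `pm2 start ecosystem.config.js` boots the cluster and `pm2 reload api` rotates workers one at a time for zero-downtime updates.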

Traffic-Based Scaling

Resources scale up during peaks and release when demand drops. Set a maximum so costs stay predictable.

Git-Based Deploys

Deploy from a Git repo or archive. The platform resolves dependencies, runs your build script, and starts the app.

Zero Ops Overhead

The platform handles SSL certs, infrastructure, and daily backups. Your team ships features, not maintenance tasks.

  • 24/7 engineer support
  • 14 daily backups
  • 8+ runtimes
  • < 10 min avg. support response

Deploy, run, and monitor

  1. Deploy: Push to your connected repo or upload an archive. The platform runs npm install, executes your build script, and starts the app. Pre- and post-deploy hooks let you run smoke tests or database migrations on every release. See Automation & CI/CD.
  2. Run: Switch between Node.js LTS releases from the dashboard, with no redeploy required. Clone a full environment to test the next major version in staging before promoting to production.
  3. Monitor: The dashboard shows real-time cloudlet consumption per node. Horizontal scaling triggers add or remove nodes based on CPU, RAM, network, or disk thresholds. You set the limits; the platform responds. See Autoscaling & cost controls.

Clone any environment for dev, staging, or production, edit nginx and app configs, or tail container logs, all from the dashboard.

Operate Node.js across real app tiers from one screen.

Topology view shows load balancers, app nodes, databases, storage, and an active terminal.

Move to a newer Node.js image without dropping persistent data.

Redeploy dialog shows a tagged rollout with the “Keep volumes data” option enabled.

Managed Node.js containers with backups, SSL, and SSH

Watch autoscaling and failover in action
Interactive demo: a single node running at 3 cloudlets ($0.008/hr) scales up under load, adds an auto-scaled second node, fails over to it when the primary drops, then scales back down to baseline.

Daily backups

Included free, with restore points for each of the last 14 days.

SSL via Let’s Encrypt

Certificates auto-renew for every custom domain.

SSH & environment variables

Set env vars from the dashboard or API, and SSH into running containers for debugging.
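Inside the container, dashboard-set variables surface through `process.env`; a small config module with explicit defaults keeps a missing value from failing silently (the variable names below are examples, not platform requirements):

```javascript
// config.js — read environment variables set via the dashboard or API.
// PORT, LOG_LEVEL, and DATABASE_URL are illustrative names.
const config = {
  port: parseInt(process.env.PORT || '3000', 10), // fall back to 3000 locally
  logLevel: process.env.LOG_LEVEL || 'info',
  databaseUrl: process.env.DATABASE_URL || null,  // no safe default: surface it
};

if (!config.databaseUrl) {
  console.warn('DATABASE_URL is not set; database features are disabled');
}

module.exports = config;
```

Loading this once at startup gives every module a single, validated view of the environment instead of scattered `process.env` reads.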
  • PM2 cluster mode out of the box keeps workers spread across CPU cores and reloads processes with zero downtime. Log rotation keeps disk usage predictable across nodes without manual cleanup.
  • Plan WebSocket routing before scaling. WebSockets need persistent connections, so standard round-robin balancing can disconnect users mid-session. Use sticky sessions or a Redis pub/sub adapter before adding a second node. Reference architectures →
  • Isolate background workers. Job queues (Bull, Bee-Queue) compete with HTTP workers when they share a node. Deploy them as a separate node group so each tier scales independently and a job backlog does not force your web tier to add capacity.

Deploy your first Node.js app in minutes

Model a Node.js baseline first, then size for WebSocket traffic, worker queues, or bursty API load.

Estimate cost Chat with an engineer

Start with a single node. When traffic grows, add load-balanced clusters and separate worker groups from the same dashboard. Use the calculator to model your workload.

Pick the right architecture for your workload

Stateless API

LB → Node.js nodes → DB/Redis

Store sessions in Redis or Memcached (available as environment nodes). Horizontal triggers add app-server nodes behind the load balancer when traffic rises. Scaled instances run on separate physical hosts so a single hardware failure doesn’t take down the cluster.

Web + worker

Split background jobs

Create separate node groups in the same environment. Web nodes handle HTTP; worker nodes process queues (Bull, Bee-Queue). Scale each group with independent cloudlet limits.
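A sketch of the worker side, assuming the `bull` package and a Redis environment node are available; the queue name, job shape, and `RUN_WORKER` guard are illustrative. Keeping the handler a plain function means it can be tested without Redis:

```javascript
// worker.js — background job processing kept off the web tier.
function handleEmailJob(job) {
  const { to, subject } = job.data;
  if (!to) throw new Error('email job is missing a recipient');
  // ... render and send the message here ...
  return { delivered: true, to, subject };
}

// Queue wiring runs only in the worker node group. It assumes the `bull`
// package is installed and REDIS_URL points at a Redis environment node.
if (process.env.RUN_WORKER === '1') {
  const Queue = require('bull');
  const emails = new Queue('emails', process.env.REDIS_URL);
  emails.process(handleEmailJob);
}

module.exports = { handleEmailJob };
```

The web nodes only ever call `emails.add({...})`; if the worker group falls behind, you scale it alone and the web tier's cloudlet limit is untouched.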

Realtime

Plan for websockets

For Socket.io or native WebSocket apps, enable sticky sessions on the load balancer or use a Redis adapter for pub/sub across nodes. Test with environment cloning before production.

Common Questions

Do you support npm and Yarn?

npm and Yarn are both supported. Use lockfiles to keep installs consistent across environments.

Do you support PM2 (and other process managers like forever/supervisor)?

Common Node.js process managers (pm2, forever, supervisor) all work.

How do I deploy my Node.js app?

Deploy from archives (local upload or external URL) or from a remote Git or SVN repository such as GitHub. Automation & CI/CD covers automation patterns.

What is the recommended setup for a production Node.js API?

A common baseline is Traffic → Load Balancer → multiple Node nodes → database/Redis. Start with reference examples in the Pricing Calculator.

How does scaling work and how do I set baseline/max cost guardrails?

Set a reserved baseline for steady traffic and a burst cap for spikes. Node.js processes scale within those bounds. Autoscaling & cost controls has the full breakdown.

How do load balancers work with Node.js clusters?

Load balancers distribute traffic across multiple app nodes. Your session/state design (stateless, sticky, or external store) determines what works best at scale.

Can I separate background workers from web traffic?

A common pattern is running web/API nodes behind the load balancer and background workers as a separate node group.

What should I know about sessions/state when scaling Node.js?

In-memory sessions don't scale cleanly across nodes. For reliability, use external session/state storage and keep web nodes as stateless as possible.
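The failure mode is easy to demonstrate: two workers with their own in-memory session maps behind a round-robin balancer (all names here are illustrative):

```javascript
// Why in-memory sessions break behind a round-robin load balancer:
// each worker keeps a process-local session map, so a request that
// lands on a different worker cannot see a session set earlier.
class Worker {
  constructor() {
    this.sessions = new Map(); // local state, invisible to other nodes
  }
  login(sessionId, user) {
    this.sessions.set(sessionId, { user });
  }
  whoAmI(sessionId) {
    const s = this.sessions.get(sessionId);
    return s ? s.user : null; // null means "please log in again"
  }
}

const workers = [new Worker(), new Worker()];
let next = 0;
const roundRobin = () => workers[next++ % workers.length];

roundRobin().login('sess-1', 'alice');      // request 1 hits worker 0
const user = roundRobin().whoAmI('sess-1'); // request 2 hits worker 1
// user is null: worker 1 never saw the login. A shared store
// (Redis, Memcached) or sticky sessions avoids this.
```

Moving the `sessions` map into an external store gives every worker the same view, which is what keeps web nodes stateless and freely scalable.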

When should I choose Kubernetes instead of Node.js on App Hosting?

Kubernetes makes sense when you need pod-level orchestration, service meshes, or multi-container deployments. Kubernetes Hosting has the setup details.

Can you help migrate an existing Node.js app?

We can help scope a Node.js migration: dependency audit, process manager setup, and cutover steps. Start from Portability & Migration or open a trial and talk through it.

Can I trigger controlled redeploys from repository updates?

Wire repository-based deploy workflows with optional auto-update checks and keep release steps consistent with hooks.


Need a database for your Node.js app? Deploy managed database clusters with MySQL, MariaDB, or PostgreSQL alongside your application nodes.

Start your 14-day free trial

Deploy in minutes with managed autoscaling and clustering built in.

No credit card required.