See Every Node, Every Minute — No Agents Required

Every container reports CPU, RAM, disk, and network stats every minute, with alerts active from first deploy. There are no agents to install.

Real-Time Metrics

CPU, RAM, disk, network, and IOPS per node, charted live in the dashboard.

Usage History

Spot weekly trends, compare before and after a deploy, or confirm that stopped environments drew zero compute.

Spike Detection

Isolate a resource spike in the chart, then adjust resource limits directly from the same view.

Billing Transparency

Billing uses peak RAM or average CPU per hour, whichever is higher. Estimate your cost before you deploy.

24/7 engineer support
14 daily backups
8+ runtimes
< 10 min avg. support response

See exactly what every node is consuming

The dashboard and billing system use the same metrics, so the usage you see is the usage you are billed for.

Resource metrics per node

RAM, CPU, network, disk, and I/O charted over adjustable intervals. Hover any data point for exact values and the scaling limit in effect.

Billing tied to real usage

Billing compares peak RAM and average CPU each hour. You pay for whichever metric is higher, not both. Stop an idle environment and compute charges drop to zero.
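As a rough illustration, the hourly comparison described above can be sketched in a few lines. This assumes usage is converted to cloudlets (the platform's compute unit: 128 MiB RAM + 400 MHz CPU) before the comparison, which is a simplification; check your plan for actual rates and rounding rules.

```python
# Sketch of per-hour billing: convert peak RAM and average CPU to
# cloudlets, then bill whichever is higher. Unit sizes come from the
# cloudlet definition; the conversion/rounding here is illustrative.
import math

RAM_PER_CLOUDLET_MIB = 128   # 1 cloudlet = 128 MiB RAM
CPU_PER_CLOUDLET_MHZ = 400   # 1 cloudlet = 400 MHz CPU

def billable_cloudlets(peak_ram_mib: float, avg_cpu_mhz: float) -> int:
    """Cloudlets billed for one hour: the higher of RAM- and CPU-based usage."""
    ram_cloudlets = math.ceil(peak_ram_mib / RAM_PER_CLOUDLET_MIB)
    cpu_cloudlets = math.ceil(avg_cpu_mhz / CPU_PER_CLOUDLET_MHZ)
    return max(ram_cloudlets, cpu_cloudlets)

# Example hour: 900 MiB peak RAM (8 cloudlets) vs 1200 MHz avg CPU (3 cloudlets)
print(billable_cloudlets(900, 1200))  # -> 8
```

Note that you pay for one metric per hour, not both: in the example, RAM is the binding metric, so the CPU usage that hour adds nothing to the bill.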

Get alerted before problems reach your users

Every container starts with default alerts on first deploy. You can adjust the thresholds or add alerts for CPU, RAM, disk, network, or inode limits.

Default alerts and custom triggers

  • Default alerts for RAM, CPU, disk, inodes, and network are active from first deploy, so there is no setup to do before the first warning fires.
  • Add custom triggers with configurable thresholds, duration, and frequency. Target alerts by node type: app servers, databases, load balancers, or build nodes.

Notification delivery

  • Email alerts link directly to scaling controls. The same metrics feed automatic scaling triggers, so the platform can scale before you read the alert.
  • Review trigger history over any time range. In shared environments, alerts go to every collaborator.

Screenshot: default alert thresholds for CPU, memory, storage, inodes, network, and OOM.

Read logs without leaving the browser

View, tail, and download log files for any node directly from the dashboard.

Live log tailing

Tail app-server, database, load-balancer, and build-node logs from a single browser tab.

Log files and downloads

Each node role exposes its own log set. Download any log through SFTP or the dashboard. You can also SSH into any node directly from the dashboard for deeper diagnostics.

Screenshot: the log viewer tailing a Redis log, with tail, clear, and download controls.

Screenshot: topology view of a multi-tier environment showing load balancers, app servers, databases, storage, and an active terminal.

Everything is monitored from first deploy

Every container has metrics, alerts, and logs from first deploy. No agents to install. The same data feeds billing and scaling.

Metrics and alerts

  • Per-node CPU, RAM, disk, and network metrics every minute
  • Default alerts for RAM, CPU, disk, and inodes active from first deploy
  • Billing tied to the same resource metrics, so usage and cost stay in sync

Logs and scaling history

  • Live log tailing for app servers, databases, and load balancers
  • Horizontal and vertical scaling event history

Common Questions

How does pricing work?

App Hosting is billed hourly from a prepaid balance. You set your reserved baseline and burst limit, and autoscaling stays within those limits.

How is this different from VPS hosting?

App Hosting gives you prebuilt stack templates and scaling controls, so you do not have to assemble and operate each layer yourself. If you need full OS control, Cloud VPS is the better fit.

What is a cloudlet?

A cloudlet is one unit of compute: 128 MiB RAM + 400 MHz CPU. You set baseline and burst scaling in cloudlets.
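The cloudlet arithmetic is simple enough to do by hand, but a two-line helper makes sizing a burst limit concrete. The function name and the 16-cloudlet example are illustrative, not platform API:

```python
# Convert a cloudlet count into total capacity
# (1 cloudlet = 128 MiB RAM + 400 MHz CPU).
def capacity(cloudlets: int) -> tuple[float, float]:
    """Return (RAM in GiB, CPU in GHz) for a given cloudlet count."""
    return cloudlets * 128 / 1024, cloudlets * 400 / 1000

ram_gib, cpu_ghz = capacity(16)  # e.g. a 16-cloudlet burst limit
print(f"{ram_gib} GiB RAM, {cpu_ghz} GHz CPU")  # -> 2.0 GiB RAM, 6.4 GHz CPU
```

So a baseline of 4 cloudlets with a burst limit of 16 means the app idles at 512 MiB / 1.6 GHz and can scale up to 2 GiB / 6.4 GHz.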

Can I scale without downtime?

Scaling adds capacity within the limits you set. Whether that happens without downtime depends on your architecture and application. Start with a calculator preset, or chat with an engineer about the right setup.

Do you support HA databases?

Database Clusters covers MySQL, MariaDB, and PostgreSQL with HA options.

Is there Kubernetes support?

Kubernetes Hosting is available, with architecture presets to get started.

Can I run production on this?

Many customers run production on App Hosting. Use the trial for hands-on evaluation, and talk to an engineer to design your production setup.

Can you help me migrate?

We can map a target setup and migration plan. Portability & Migration has the details.

Is this a managed service?

Togglebox manages the platform and infrastructure, including networking, the base OS, and the runtime environments it provisions. Your team manages the application code, its dependencies, and decides when to upgrade runtime versions.

Can I upgrade or downgrade runtime versions?

You choose when to switch runtime versions (PHP, Node.js, Java, Python, Ruby, Go, .NET). Test in staging, then promote to production or roll back in a few clicks.

What happens when traffic spikes past my baseline?

The platform adds cloudlets up to the maximum you set. In most cases, scaling happens within seconds. When traffic drops, resources scale back down and billing follows.

What happens when a node goes down?

In a clustered setup, the load balancer routes traffic to the remaining healthy app nodes. For databases, replication modes such as primary/replica and Galera help keep data available during a node failure.

What happens if I set limits too low?

Your app uses all available cloudlets up to your maximum, then stops scaling. It doesn't crash. You get notified and can raise the limit from the dashboard.

What happens to charges when I stop an environment?

RAM, CPU, and traffic charges stop immediately. Charges for retained storage and reserved resources, such as public IPs and SSL certificates, continue. That lets you stop idle dev or staging environments while keeping data and network resources in place.

What deployment methods are supported?

Deploy with application archives (WAR, JAR, ZIP, or EAR), Git or SVN repositories, URL-based archive pulls, or build-node workflows. You can also add pre- and post-deploy hooks for smoke tests, migrations, or cache warming.
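A common post-deploy hook is a smoke test that fails the deploy if the app does not come up healthy. A minimal sketch, assuming your app exposes a health endpoint (the URL and script layout are hypothetical, not a Togglebox API):

```python
#!/usr/bin/env python3
"""Hypothetical post-deploy smoke test: exit non-zero (failing the
deploy hook) if the app's health endpoint does not answer HTTP 200."""
import sys
import urllib.error
import urllib.request

def health_ok(url: str, timeout: float = 5.0) -> bool:
    """Return True only if the endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    # Adjust the URL to your app's actual health-check endpoint.
    sys.exit(0 if health_ok("http://localhost:8080/health") else 1)
```

The same pattern extends to pre-deploy hooks: run migrations or warm caches first, and let a non-zero exit code stop the rollout.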

Can multiple team members share access to environments?

Invite collaborators with role-based permissions. All usage bills to the primary account.

Do you support two-factor authentication?

Protect platform access with 2FA using time-based codes from an authenticator app. Recovery codes provide backup access.

Can I clone an environment for testing or staging?

Clone full environments in minutes for testing, staging, release rehearsal, or A/B scenarios. Cloned environments include the full setup and configuration.

Is Memcached available for caching?

Add Memcached for distributed in-memory caching to reduce database load and improve response times. It also supports session storage for PHP and Java apps in clustered setups to maintain session continuity during failover.

Can I automate deployments with pre- and post-deploy hooks?

Run pre- and post-deploy hooks for tasks like smoke tests, database migrations, and cache warmups. Automation & CI/CD has more examples.


Start your 14-day free trial

Deploy in minutes with managed autoscaling and clustering built in.

No credit card required.