Your App Runs Isolated, Hardened, and Encrypted — Out of the Box
Container isolation, auto-renewing SSL, and per-tier firewalls for the load balancer, app, and database come standard. WAF and intrusion prevention are available as add-ons.
Containers run isolated on separate physical hosts, so a compromised or failing container cannot reach its neighbors.
Free Let's Encrypt certificates with automatic renewal on every environment.
Firewall rules per layer — load balancer, app server, database — each with default-deny posture.
We maintain the runtime, OS layers, and platform services — you focus on your code.
Ship securely without configuring every layer yourself
The platform handles container isolation, certificate lifecycle, and private-by-default networking so your team can focus on application-level controls and access policy.
Firewall rules
Configure allow/deny rules per layer: load balancer, app server, and database. Restrict by IP address, port range, or protocol. All ports are blocked by default on every layer.
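To make the default-deny model concrete, here is a small Python sketch of how per-layer rules evaluate: first matching rule wins, and anything unmatched is denied. This is an illustration only, not the platform's actual rule engine, and the subnet and port values are hypothetical.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class Rule:
    action: str   # "allow" or "deny"
    source: str   # source CIDR the rule matches
    ports: range  # destination ports the rule covers

def evaluate(rules: list[Rule], source_ip: str, port: int) -> str:
    """First matching rule wins; with no match at all, traffic is denied."""
    for rule in rules:
        if ip_address(source_ip) in ip_network(rule.source) and port in rule.ports:
            return rule.action
    return "deny"  # default-deny: every port is blocked unless a rule opens it

# Hypothetical database-layer rules: only the app subnet may reach PostgreSQL.
db_rules = [Rule("allow", "10.0.1.0/24", range(5432, 5433))]

print(evaluate(db_rules, "10.0.1.7", 5432))     # allow
print(evaluate(db_rules, "203.0.113.9", 5432))  # deny
```

The same pattern applies per tier: the load balancer, app server, and database each carry their own rule list, all starting from deny.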
Platform security
We maintain the container runtime, OS layers, and platform services. You manage application code, libraries, and language dependencies. Each redeployment starts from the latest base image; your data volumes persist.
Operational security defaults
- Role-based access, 2FA & restricted root – Scope access by environment or action with time-based 2FA. Containers run without root by default. Use scoped API tokens for CI/CD pipelines.
- WAF protection add-on – LiteSpeed includes built-in WAF rules, and NGINX uses ModSecurity. BitNinja WAF is available as an add-on for additional filtering.
Isolated containers, SSL included
Start your free 14-day trial with the standard security defaults already in place.
14 daily backups are included on all environments.
Control inbound access per tier from one dashboard.
Firewall table shows load balancer rules by port, source, and action.
Encrypt every connection without managing certificates
Automatic Let’s Encrypt certificates
SSL certificates are issued and renewed automatically for every custom domain. The platform handles the ACME challenge and renewal cycle, so there is no manual renewal or expired-cert cleanup to manage. Custom certificates (DV, OV, wildcard) are also supported.
Encrypted at every tier
TLS terminates at the load balancer for external traffic. Internal service-to-service calls between app, database, and cache nodes stay on the private network, not the public internet. Database connections use the platform’s internal private IP automatically.
When you scale to multiple app nodes, the load balancer handles TLS termination and WAF filtering at the entry point. App and database nodes communicate over the private internal network.
Terminate SSL once and protect every app node.
Diagram shows SSL ending at the load balancer before traffic reaches the app tier.
Isolate every environment without manual network setup
Environment-level isolation
Per-tier firewall rules
Shared environments and access scoping
Give teams the access they need without opening the whole account.
Role editor shows scoped permissions for environment, deployment, SSH, and billing actions.
Focus on your code while the platform secures the stack
Platform responsibility
Infrastructure and runtime layer
- Container runtime, OS, and hypervisor patching
- Base image security updates on redeploy
- Private network fabric between nodes
- SSL certificate issuance and renewal
- Physical host security and data center controls
Developer responsibility
Application and dependency layer
- Application code, business logic, and APIs
- Language dependencies and library versions
- Secrets management (API keys, DB passwords via env vars)
- Input validation and output encoding
- Authentication logic and session handling
Each redeployment starts from the latest secure base image. Your application code and data volumes persist across redeploys, while the OS and runtime layer refresh automatically.
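On the developer side, secrets management usually comes down to reading credentials from environment variables rather than committing them to source control. A minimal Python sketch, with hypothetical variable names set per environment in the dashboard:

```python
import os

def database_dsn() -> str:
    """Build a DB connection string from environment variables.

    DB_PASSWORD and DB_HOST are hypothetical names; set them per
    environment in the dashboard, never in source control.
    """
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        raise RuntimeError("DB_PASSWORD is not set")
    host = os.environ.get("DB_HOST", "127.0.0.1")  # platform-internal private IP
    return f"postgresql://app:{password}@{host}:5432/appdb"
```

Because the values live in the environment rather than the codebase, rotating a password never requires a code change or a new deploy artifact.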
Common Questions
How does pricing work?
App Hosting is billed hourly from a prepaid balance. You set your reserved baseline and burst limit, and autoscaling stays within those limits.
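One way to picture the reserved-plus-burst model: reserved cloudlets are billed every hour whether used or not, and usage above the baseline is billed separately. The sketch below assumes that model; the rates are placeholders, not actual pricing.

```python
def hourly_charge(reserved: int, used: int,
                  reserved_rate: float, dynamic_rate: float) -> float:
    """Hourly cost under an assumed reserved-plus-burst model.

    Reserved cloudlets are billed each hour regardless of use; cloudlets
    consumed above the baseline are billed at the dynamic rate.
    Rates here are illustrative placeholders only.
    """
    burst = max(0, used - reserved)
    return reserved * reserved_rate + burst * dynamic_rate

# 4 reserved cloudlets, 6 in use during a traffic spike:
cost = hourly_charge(reserved=4, used=6, reserved_rate=0.01, dynamic_rate=0.02)
```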
How is this different from VPS hosting?
App Hosting gives you prebuilt stack templates and scaling controls, so you do not have to assemble and operate each layer yourself. If you need full OS control, Cloud VPS is the better fit.
What is a cloudlet?
A cloudlet is one unit of compute: 128 MiB RAM + 400 MHz CPU. You set baseline and burst scaling in cloudlets.
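The arithmetic is simple: multiply the cloudlet count by the per-cloudlet allocation. For example, a baseline of 4 cloudlets reserves 512 MiB RAM and 1600 MHz CPU:

```python
CLOUDLET_RAM_MIB = 128
CLOUDLET_CPU_MHZ = 400

def capacity(cloudlets: int) -> tuple[int, int]:
    """Total (RAM in MiB, CPU in MHz) for a given cloudlet count."""
    return cloudlets * CLOUDLET_RAM_MIB, cloudlets * CLOUDLET_CPU_MHZ

ram_mib, cpu_mhz = capacity(4)  # 512 MiB RAM, 1600 MHz CPU
```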
Can I scale without downtime?
Scaling adds capacity within the limits you set. Whether that happens without downtime depends on your architecture and application. Start with a calculator preset, or chat with an engineer about the right setup.
Do you support HA databases?
Database Clusters covers MySQL, MariaDB, and PostgreSQL with HA options.
Is there Kubernetes support?
Kubernetes Hosting is available, with architecture presets to get started.
Can I run production on this?
Many customers run production on App Hosting. Use the trial for hands-on evaluation, and talk to an engineer to design your production setup.
Can you help me migrate?
We can map a target setup and migration plan. Portability & Migration has the details.
Is this a managed service?
Togglebox manages the platform and infrastructure, including the runtime, networking, and base OS layers. Your team manages the application code, dependencies, and runtime upgrades.
Can I upgrade or downgrade runtime versions?
You choose when to switch runtime versions (PHP, Node.js, Java, Python, Ruby, Go, .NET). Test in staging, then promote to production or roll back in a few clicks.
What happens when traffic spikes past my baseline?
The platform adds cloudlets up to the maximum you set. In most cases, scaling happens within seconds. When traffic drops, resources scale back down and billing follows.
What happens when a node goes down?
In a clustered setup, the load balancer routes traffic to the remaining healthy app nodes. For databases, replication modes such as primary/replica and Galera help keep data available during a node failure.
What happens if I set limits too low?
Your app uses all available cloudlets up to your maximum, then stops scaling. It doesn't crash. You get notified and can raise the limit from the dashboard.
What happens to charges when I stop an environment?
RAM, CPU, and traffic charges stop immediately. Charges for retained storage and reserved resources, such as public IPs and SSL certificates, continue. That lets you stop idle dev or staging environments while keeping data and network resources in place.
What deployment methods are supported?
Deploy with application archives (WAR, JAR, ZIP, or EAR), Git or SVN repositories, URL-based archive pulls, or build-node workflows. You can also add pre- and post-deploy hooks for smoke tests, migrations, or cache warming.
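As one illustration, a post-deploy hook can smoke-test the freshly deployed app and fail the deployment if the health check does not pass. This is a generic Python sketch, not a platform API; the URL and endpoint are hypothetical.

```python
import urllib.request

def smoke_test(url: str, timeout: float = 10.0) -> bool:
    """Return True if the endpoint answers HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

# In a post-deploy hook, exit non-zero on failure so the deploy reports it:
#   import sys; sys.exit(0 if smoke_test("http://127.0.0.1:8080/healthz") else 1)
```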
Can multiple team members share access to environments?
Invite collaborators with role-based permissions. All usage bills to the primary account.
Do you support two-factor authentication?
Protect platform access with 2FA using time-based codes from an authenticator app. Recovery codes provide backup access.
Can I clone an environment for testing or staging?
Clone full environments in minutes for testing, staging, release rehearsal, or A/B scenarios. Cloned environments include the full setup and configuration.
Is Memcached available for caching?
Add Memcached for distributed in-memory caching to reduce database load and improve response times. It also supports session storage for PHP and Java apps in clustered setups to maintain session continuity during failover.
Can I automate deployments with pre- and post-deploy hooks?
Run pre- and post-deploy hooks for tasks like smoke tests, database migrations, and cache warmups. Automation & CI/CD has more examples.
Start your 14-day free trial
Deploy in minutes with managed autoscaling and clustering built in.
No credit card required.