onehouse_cluster
Provisions and manages a Onehouse compute cluster. Compute clusters are the execution engines for jobs, SQL workloads, and ingestion flows.
This page documents Terraform-specific behavior (HCL syntax, types, mutability, drift, import). For full parameter semantics, valid values, and defaults, see CREATE CLUSTER, ALTER CLUSTER, and DELETE CLUSTER.
Example Usage
Minimal Managed cluster
```hcl
resource "onehouse_cluster" "main" {
  name        = "etl-cluster"
  type        = "Managed"
  min_ocu     = 2
  max_ocu     = 4
  worker_type = "oh-general-4"
}
```
Cluster provisioning takes ~7 minutes. The resource's id and status attributes are populated when the cluster reaches a running state.
Cluster with Quanton + spot workers
```hcl
resource "onehouse_cluster" "batch" {
  name            = "batch-cluster"
  type            = "Managed"
  min_ocu         = 4
  max_ocu         = 16
  worker_type     = "oh-general-8"
  worker_spot     = true
  quanton_enabled = true
}
```
Pause and resume a running cluster
```hcl
resource "onehouse_cluster" "intermittent" {
  name        = "intermittent-cluster"
  type        = "Managed"
  min_ocu     = 2
  max_ocu     = 4
  worker_type = "oh-general-4"

  # Set to "STOP" to pause; "START" to resume. The `status` attribute reflects
  # the current runtime state separately.
  state = "STOP"
}
```
Argument Reference
| Argument | Type | Required | Mutability | Description |
|---|---|---|---|---|
| `name` | string | ✅ | Mutable | Cluster name. |
| `type` | string | ✅ | Immutable | Cluster type. → details |
| `min_ocu` | number | | Mutable | Minimum OCU (autoscaling lower bound). SQL: `MIN_OCU`. → details |
| `max_ocu` | number | | Mutable | Maximum OCU (autoscaling upper bound). SQL: `MAX_OCU`. → details |
| `worker_type` | string | | Mutable | Worker instance type. SQL: `worker.type`. → details |
| `worker_spot` | boolean | | Mutable | Use spot/preemptible workers. SQL: `worker.spot`. → details |
| `driver_type` | string | | Mutable | Driver instance type. SQL: `driver.type`. → details |
| `quanton_enabled` | boolean | | Mutable | Enable Quanton compute optimizations. SQL: `quanton.enabled`. → details |
| `state` | string | | Mutable | Operator-controlled state. Issues `ALTER CLUSTER ... SET STATE = ...` when this changes. |
type — when to pick each value
| Value | Use for |
|---|---|
| `Managed` | General-purpose ingestion and table-services workloads. Default choice. |
| `SQL` | Interactive SQL workloads via the SQL endpoint. |
| `Spark` | Batch jobs submitted via the Jobs API. |
| `Open_Engines` | Trino, Flink, or Ray workloads. Requires the `open_engines.engine` WITH parameter — see the SQL ref. |
| `Notebook` | Notebook-attached compute. |
| `LakeBase` | LakeBase clusters. Uses a different sizing model (`lakebase.engine_size`, `lakebase.min_engines`, `lakebase.max_engines`) rather than `min_ocu` / `max_ocu`. See CREATE CLUSTER. |
Changing `type` forces a destroy and recreate of the cluster.
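Because `type` is immutable, a production cluster can be guarded against accidental replacement with Terraform's standard `lifecycle` meta-argument. A sketch (the cluster name and sizing below are illustrative):

```hcl
resource "onehouse_cluster" "prod" {
  name        = "prod-sql-cluster"
  type        = "SQL"
  min_ocu     = 2
  max_ocu     = 8
  worker_type = "oh-general-4"

  lifecycle {
    # Fail the plan instead of destroying the cluster if `type`
    # (or any other change) would force replacement.
    prevent_destroy = true
  }
}
```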
Sizing — min_ocu and max_ocu
`min_ocu` is the minimum OCU count the cluster is guaranteed to keep running. `max_ocu` is the upper autoscaling bound. The cluster scales between the two based on workload. Both apply to every cluster type except `LakeBase`, which uses `lakebase.min_engines` and `lakebase.max_engines` instead.
Server defaults apply when either is omitted. For the canonical valid ranges and project-specific limits, see CREATE CLUSTER.
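To pin a cluster at a fixed size rather than autoscale, set `min_ocu` equal to `max_ocu`. A sketch using illustrative values:

```hcl
resource "onehouse_cluster" "fixed" {
  name        = "fixed-size-cluster"
  type        = "Managed"
  # Equal bounds pin the cluster at 8 OCUs; no autoscaling occurs.
  min_ocu     = 8
  max_ocu     = 8
  worker_type = "oh-general-8"
}
```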
state — pause and resume
Set state = "STOP" to pause a running cluster. Set state = "START" to resume. The provider issues ALTER CLUSTER <name> SET STATE = STOP (or START) on change. The status attribute reflects the resulting runtime state (e.g., COMPUTE_CLUSTER_STATUS_PAUSED).
Omitting state means Terraform does not enforce a runtime state — operators can pause and resume from the Onehouse console without Terraform reverting the change.
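When Terraform should enforce the runtime state, a common pattern is driving pause/resume from a variable so operators can toggle the cluster with a single `-var` flag. A sketch (the variable name is illustrative):

```hcl
variable "cluster_paused" {
  type    = bool
  default = false
}

resource "onehouse_cluster" "intermittent" {
  name        = "intermittent-cluster"
  type        = "Managed"
  min_ocu     = 2
  max_ocu     = 4
  worker_type = "oh-general-4"

  # terraform apply -var="cluster_paused=true" pauses the cluster;
  # applying with the default resumes it.
  state = var.cluster_paused ? "STOP" : "START"
}
```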
Attribute Reference
In addition to the arguments above, the server populates the following computed attributes:
| Attribute | Type | Description |
|---|---|---|
| `id` | string | Onehouse-assigned cluster UUID. |
| `status` | string | Runtime status (e.g., `COMPUTE_CLUSTER_STATUS_RUNNING`, `COMPUTE_CLUSTER_STATUS_PAUSED`). |
| `current_active_ocu` | number | Live OCU count. |
| `avg_ocu_used` | number | Average OCU utilization. |
| `ocu_utilization_percent` | number | OCU utilization percentage. |
| `active_jobs_ui_endpoint` | string | Live Spark UI URL. |
| `history_server_ui_endpoint` | string | Spark history server URL. |
| `pyspark_endpoint` | string | PySpark connection endpoint. |
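The computed attributes are convenient as outputs, for example to surface connection endpoints to downstream tooling. A sketch referencing the resource from the first example above:

```hcl
output "spark_ui" {
  value = onehouse_cluster.main.active_jobs_ui_endpoint
}

output "pyspark_endpoint" {
  value = onehouse_cluster.main.pyspark_endpoint
}
```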
Import
Import an existing cluster by name:
```shell
terraform import onehouse_cluster.main etl-cluster
```
After import, the first terraform plan shows drift on worker_type, worker_spot, driver_type, and quanton_enabled because DESCRIBE CLUSTER doesn't echo these fields back. The first terraform apply reconciles them in place.
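On Terraform 1.5+, the same import can also be expressed declaratively with an `import` block, which makes the import reviewable in the plan. A sketch, assuming (as above) that the provider accepts the cluster name as the import ID:

```hcl
import {
  to = onehouse_cluster.main
  id = "etl-cluster"
}
```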
Data Source
Read an existing cluster without managing it:
```hcl
data "onehouse_cluster" "lookup" {
  name = "etl-cluster"
}

output "current_status" {
  value = data.onehouse_cluster.lookup.status
}
```
The data source exposes the same computed attributes as the resource.
Timeouts
Cluster provisioning takes ~7 minutes and deletion takes ~5 minutes, so the provider's default 30-minute operation timeout is sufficient for most workloads. For unusually slow provisions, override it in the provider block:
```hcl
provider "onehouse" {
  timeout = "1h"
}
```