Troubleshooting
Installation and authentication
terraform init fails with 403 Forbidden from app.terraform.io
Your Terraform Cloud token is missing or expired. Re-authenticate:
terraform login app.terraform.io
If you don't have a token, contact your Onehouse administrator for the team token shared with consumers.
terraform init fails with "could not query provider registry"
Check that ~/.terraform.d/credentials.tfrc.json exists. terraform login creates this file automatically; if you've removed it accidentally, re-run terraform login app.terraform.io.
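When present, the file follows the standard Terraform CLI credentials format (token value redacted here):

```json
{
  "credentials": {
    "app.terraform.io": {
      "token": "REDACTED"
    }
  }
}
```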
INVALID_ARGUMENT: 'path' must be a non-empty String
You're missing one or more required environment variables — most likely ONEHOUSE_LINK_UID. Verify all five are exported:
env | grep ONEHOUSE_
You should see ONEHOUSE_PROJECT_UID, ONEHOUSE_API_KEY, ONEHOUSE_API_SECRET, ONEHOUSE_LINK_UID, and ONEHOUSE_REGION. See Authentication for where to find each value.
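As a quick check, you can export and verify all five variables in one shell session. The values below are placeholders; substitute your own from the Onehouse console:

```shell
# Placeholder values -- substitute your own from the Onehouse console.
export ONEHOUSE_PROJECT_UID="00000000-0000-0000-0000-000000000000"
export ONEHOUSE_API_KEY="my-api-key"
export ONEHOUSE_API_SECRET="my-api-secret"
export ONEHOUSE_LINK_UID="00000000-0000-0000-0000-000000000000"
export ONEHOUSE_REGION="us-west-2"

# All five variables should be listed; a missing line means it is unset.
env | grep '^ONEHOUSE_'
```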
401 Unauthorized or 403 Forbidden from the Onehouse API
The credentials in ONEHOUSE_API_KEY / ONEHOUSE_API_SECRET are wrong or the service principal has insufficient permissions. Rotate the credentials in the Onehouse console under Profile → Service Principal and update your env vars.
Resource operations
Provider produced inconsistent result after apply
A Computed attribute changed shape between the plan and the post-apply server response. This is common with cluster resources where the server canonicalizes Managed → COMPUTE_CLUSTER_TYPE_MANAGED. Workarounds:
- Run terraform refresh to sync state with the server.
- If the problem persists, file an issue with the resource type and the exact attribute named in the error message.
Apply hangs at Still creating... Nm Ns elapsed
Two common causes:
Cluster provisioning failed silently. The API status may report PENDING while the cluster's actual lifecycle status is COMPUTE_CLUSTER_STATUS_FAILED. Cancel the apply, manually delete the cluster from the Onehouse console (or via SQL DELETE CLUSTER), and retry.
Rate limiting. The provider retries on 429 Too Many Requests up to 10 times with exponential backoff (1-60s). If you're hitting this consistently, wait a minute or two and retry, or reduce parallelism via terraform apply -parallelism=1.
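The retry behavior described above can be sketched as a generic shell helper. This is a minimal illustration of exponential backoff with a cap, not the provider's actual implementation:

```shell
# Retry a command up to 10 times with exponential backoff capped at 60s,
# mirroring the 1-60s backoff described above. Illustrative sketch only.
retry_with_backoff() {
  attempt=1; delay=1; max_attempts=10; cap=60
  until "$@"; do
    [ "$attempt" -ge "$max_attempts" ] && return 1
    sleep "$delay"
    delay=$((delay * 2))
    [ "$delay" -gt "$cap" ] && delay=$cap
    attempt=$((attempt + 1))
  done
}

# Usage: retry_with_backoff some_command arg1 arg2
retry_with_backoff true
```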
terraform destroy fails on onehouse_lake or onehouse_database with "has dependents"
By default, DELETE LAKE and DELETE DATABASE fail if the resource has dependent objects (databases, tables, streams). Set force_destroy = true on the resource to cascade-delete all dependents:
resource "onehouse_lake" "example" {
name = "my-lake"
# ...
force_destroy = true
}
See the lake and database reference for details.
terraform plan shows drift on imported resources
This is expected: terraform import populates id, name, and a few computed attributes, but it cannot recover sensitive fields (catalog auth tokens, source credentials, etc.) or arguments not echoed by the SHOW <X> SQL command.
After import, supply the missing values in your .tf file and run terraform apply once. From then on, plans should report No changes.
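A typical import-then-reconcile sequence looks like this (the resource address and ID below are hypothetical; use your own):

```shell
# Hypothetical resource address and ID -- substitute your own.
terraform import onehouse_lake.example my-lake-id

# Fill in any arguments import could not recover (credentials, etc.)
# in your .tf file, then apply once to reconcile state:
terraform apply
```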
Catalog creation fails with "could not connect to metastore"
The Onehouse control plane validates external endpoints (Hive Metastore Thrift URIs, Databricks SQL warehouses, DataHub servers) by connecting on CREATE. The URI must be reachable from the Onehouse control plane, not just from your local machine. Common gotchas:
- Internal Kubernetes service URIs (*.svc.cluster.local) won't work.
- Endpoints behind a corporate VPN are only reachable if the Onehouse VPC peers with yours.
- Self-signed certificates may fail TLS verification.
Source CREATE fails with "bucket not accessible"
The Onehouse control plane's IAM role doesn't have permission to read the bucket. Re-run your cloud-provider connection setup and ensure the bucket is in the allowed list. AWS: check the Onehouse cross-account IAM role's policy. GCP: check the service account binding on the bucket.
Flow CREATE fails with "source type does not match sub-block"
The flow's source-path sub-block (e.g. s3 {}) must match the type of the referenced onehouse_source. If you reference a Kafka source but write s3 {}, the server rejects the CREATE. Match the sub-block to the source type:
| If onehouse_source.source_type is | Use this sub-block in the flow |
|---|---|
| S3 | s3 {} |
| GCS | gcs {} |
| APACHE_KAFKA / MSK_KAFKA / CONFLUENT_KAFKA | kafka {} |
| ONEHOUSE_TABLE | onehouse_table {} |
| POSTGRES | postgres {} |
| MY_SQL | mysql {} |
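For example, a Kafka source paired with a kafka {} sub-block might look like the sketch below. The attribute names other than source_type and the sub-block name are illustrative; check the source and flow references for the exact schema:

```hcl
resource "onehouse_source" "events" {
  name        = "events"
  source_type = "APACHE_KAFKA"
  # ... connection settings per the source reference
}

resource "onehouse_flow" "events_ingest" {
  name   = "events-ingest"
  source = onehouse_source.events.name

  # Sub-block must match the source type above: APACHE_KAFKA -> kafka {}
  kafka {
    # ... Kafka-specific settings per the flow reference
  }
}
```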
Getting help
If the issue isn't covered above, gather:
- The exact terraform plan / terraform apply error message (redact any credentials)
- The resource type and a minimal reproducing .tf snippet
- Your provider version (terraform -v)
Open a support ticket via your Onehouse account.