
Usage & Billing

Onehouse uses usage-based billing measured in Onehouse Compute Units (OCUs), determined by the compute instances Onehouse runs in your cloud account.

How does Onehouse bill?

Onehouse runs a Bring-Your-Own-Cloud (BYOC) model and charges based on compute usage.

BYOC model

Onehouse charges based on compute instance usage (measured in OCU), plus support costs. Onehouse does not charge additional volume-based or storage fees.

Customers are responsible for their own cloud infrastructure costs, as is standard practice in the Bring-Your-Own-Cloud (BYOC) deployment model.

OCU consumption

Onehouse operates Amazon EC2 or Google Compute Engine instances within your AWS or GCP account, respectively. Onehouse automatically scales these instances up and down based on your workload requirements and the parameters you set in your Clusters.

A Onehouse Compute Unit (OCU) is a normalized unit of compute that meters usage of Onehouse services.

OCU definition

1 OCU is equivalent to a 4-core CPU instance running for 1 hour in your cloud account.

Currently, any 4-core CPU instance in your account burns 1 OCU per hour, regardless of instance family. This is subject to change in the future.

Larger instances burn more OCU at the normalized rate of 4 CPU cores per OCU. For example, a 16-core instance running for 1 hour burns 4 OCU.

Your final bill is calculated from the OCU consumed: ($ rate per hourly OCU) * (# of hourly OCU consumed).
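The arithmetic above can be sketched in a few lines of Python. Note the dollar rate used here is a placeholder for illustration, not an actual Onehouse price:

```python
def ocus_consumed(vcpus: int, hours: float) -> float:
    """OCU = (vCPU cores / 4) * hours, per the normalized 4-cores-per-OCU rate."""
    return (vcpus / 4) * hours

def estimated_cost(vcpus: int, hours: float, rate_per_ocu: float) -> float:
    """Bill = ($ rate per hourly OCU) * (# of hourly OCU consumed)."""
    return ocus_consumed(vcpus, hours) * rate_per_ocu

# A 16-core instance running for 1 hour burns 4 OCU, matching the example above.
print(ocus_consumed(16, 1))           # 4.0
print(estimated_cost(16, 1, 0.50))    # 2.0, at a hypothetical $0.50/OCU rate
```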

Instance types

Standard instance types

Onehouse supports the following standard instance types:

| Onehouse instance | Spec | OCU / Hour | EC2 instance (subject to upgrade) | GCE instance (subject to upgrade) | Cluster types supported |
| --- | --- | --- | --- | --- | --- |
| oh-general-4 | 4 vCPUs, 16 GB RAM | 1.0 | m8g.xlarge | e2-standard-4 | All |
| oh-general-8 | 8 vCPUs, 32 GB RAM | 2.0 | m8g.2xlarge | e2-standard-8 | All |
| oh-general-16 | 16 vCPUs, 64 GB RAM | 4.0 | m8g.4xlarge | e2-standard-16 | All |
| oh-gpu-4 | 1 GPU, 24 GB GPU RAM, 4 vCPUs, 16 GB RAM | 5.6 | g5.xlarge | g2-standard-4 | Open Engines Ray only |

Custom instance types

In addition to the standard instance types, project admins may enable custom instance types for their project. The following custom instance types are available:

| Instance | Cloud provider | Spec | OCU / Hour | Cluster types supported |
| --- | --- | --- | --- | --- |
| m7i.xlarge | AWS | 4 vCPUs, 16 GB RAM | 1.0 | All |
| t2.xlarge | AWS | 4 vCPUs, 16 GB RAM | 1.0 | All |
| n2d-standard-4 | GCP | 4 vCPUs, 16 GB RAM | 1.0 | All |
| n2d-standard-8 | GCP | 8 vCPUs, 32 GB RAM | 2.0 | All |

Reach out to Onehouse support to request additional custom instance types.

tip

Project admins can enable/disable custom instance types on the Project Settings page in the Onehouse console.

Monitor Usage and Costs

Usage Dashboard

View project-level usage and billing metrics on the Usage page in the Onehouse console. This page is accessible from the main navigation and displays the following metrics for a configurable time period:

  • Total Usage — aggregate OCU consumed over the selected period.
  • Average OCU Consumption — mean OCU usage rate.
  • Average OCU Used — mean active OCU across the period.
  • Total Estimated Cost — projected cost based on OCU consumption.
  • Current OCU Limit — the configured OCU limit for the project (if set).

The dashboard includes charts for:

  • OCU Usage over time — a time-series view of OCU consumption.
  • Overall OCU Consumption — aggregate consumption breakdown.
  • Total Usage Split by Cluster — per-cluster OCU distribution.
  • Total Cost Split by Instance Type — cost allocation across instance families.

All Cluster Usage

Click into the cluster-level usage view for a detailed breakdown of OCU consumption by individual cluster, including pie charts and stacked area charts showing resource utilization over time.

Cluster-level usage on the Clusters page shows the maximum of a cluster's CPU and memory usage, as each can independently cause the cluster to scale up.

OCU Limits

Set an OCU limit to cap compute consumption for your project. When the limit is reached, Onehouse displays a warning banner and may delay operations to stay within the configured threshold.

To manage OCU limits:

  1. Navigate to the Usage page.
  2. Click Add OCU Limit (or edit an existing limit).
  3. Enter the maximum OCU per hour for the project.
  4. Save the configuration.

You can also remove an existing OCU limit from this page.

warning

When a project reaches its configured OCU limit, a global banner displays: "This project has reached your configured OCU Limit. Operations may be delayed." Flows approaching the limit also display warnings on the Flows page.

Notifications

You may set up notifications for OCU usage in your Cluster configurations. Additionally, the OCU limit system triggers automatic notifications when limits are approached or reached.

Retrieving costs via API is not yet available.

Understanding usage vs. consumption

Actual billed OCU consumption for a project may be greater than the sum of "OCU Usage" across all the Clusters in the project.

This difference arises from Cluster resource sharing: "Usage" metrics count only the fractional CPU of an instance that is allocated to each Cluster, while "Consumption" covers the total number of instances running in your account, which is what you are billed for.
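A small sketch with hypothetical numbers illustrates the gap: two Clusters sharing 4-vCPU instances can have summed fractional usage well below the whole-instance count that is actually billed.

```python
import math

# Hypothetical fractional vCPU allocations for two Clusters sharing 4-vCPU instances.
cluster_vcpu_usage = {"ingest": 3.0, "transform": 2.5}

total_vcpus = sum(cluster_vcpu_usage.values())   # 5.5 fractional vCPUs

# "Usage": fractional cores allocated, normalized at 4 cores per OCU.
usage_ocu = total_vcpus / 4                      # 1.375 OCU/hr

# "Consumption": billing counts whole instances, so 5.5 vCPUs of demand
# still requires two running 4-vCPU instances.
instances_running = math.ceil(total_vcpus / 4)
consumption_ocu = instances_running * 1.0        # 2.0 OCU/hr

print(usage_ocu, consumption_ocu)  # billed consumption exceeds summed usage
```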

Tips to reduce OCU consumption

Reduce Flow sync frequency

You may be able to decrease OCU usage by reducing the sync frequency of Flows running in your project. However, delays may occur if data volumes spike and commit durations exceed the sync interval.

Also note that Onehouse terminates any instance that has been idle for more than 10 minutes. If your Flows sync at short intervals (less than 10 minutes) and source data is refreshed just as frequently, instances may never stay idle long enough to hit the timeout, so resources may not scale down.
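As a simplified model of this interaction (the docs specify only the 10-minute idle timeout; treating the idle gap as sync interval minus commit duration is an assumption for illustration):

```python
IDLE_TIMEOUT_MIN = 10  # per the docs: idle instances are reclaimed after 10 minutes

def can_scale_down(sync_interval_min: float, commit_duration_min: float) -> bool:
    """Assumed model: an instance only becomes eligible to scale down if the
    idle gap between syncs (interval minus commit time) exceeds the timeout."""
    idle_gap = sync_interval_min - commit_duration_min
    return idle_gap > IDLE_TIMEOUT_MIN

print(can_scale_down(sync_interval_min=5, commit_duration_min=2))    # False
print(can_scale_down(sync_interval_min=30, commit_duration_min=5))   # True
```

Under this model, stretching the sync interval well past the idle timeout is what lets instances actually scale down.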

Decrease your Cluster Min and Max OCU

Decreasing the Min OCU for a Cluster allows it to scale down when fewer resources are required to run a workload. Keep this low if you don't need to keep the Cluster warm when data processing needs are reduced.

Decreasing the Max OCU limits how far the Cluster can scale up. Set a lower Max OCU to throttle the Cluster to use fewer resources. Note that this may cause workloads to be processed more slowly, since it reduces the compute and memory available to them.