Usage & Billing

Onehouse runs usage-based billing on Onehouse Compute Units (OCU), determined by the number and size of the instances running Onehouse in your cloud account.

How does Onehouse bill?

Onehouse runs a Bring-Your-Own-Cloud (BYOC) model and charges based on compute usage.

BYOC model

Onehouse charges based on compute instance usage (measured in OCU), plus support costs. Onehouse does not charge additional volume-based or storage fees.

Customers are responsible for their own cloud infrastructure costs, as is standard practice in the Bring-Your-Own-Cloud (BYOC) deployment model.

OCU consumption

Onehouse operates Amazon EC2 or Google Compute Engine instances within your AWS or GCP account, respectively. Onehouse automatically scales these instances up and down based on your workload requirements and the parameters you set in your Clusters.

A Onehouse Compute Unit (OCU) is a normalized unit of compute that meters usage of Onehouse services.

OCU definition

1 OCU is equivalent to a 4-core CPU instance running for 1 hour in your cloud account.

Currently, any 4-core CPU instance in your account burns 1 OCU per hour, regardless of the instance family. This is subject to change in the future.

Larger instances burn more OCU, at the normalized rate of 1 OCU per 4 CPU cores. For example, a 16-core instance running for 1 hour burns 4 OCU.

Your final bill is calculated from the OCU consumed: ($ rate per OCU) * (# of OCU consumed).
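As a worked illustration of that arithmetic (the $2.00 per-OCU rate below is a made-up placeholder, not Onehouse pricing):

```python
# Illustrative OCU arithmetic only; the $2.00/OCU rate is a placeholder, not Onehouse pricing.

def ocu_per_hour(vcpus: int) -> float:
    """OCU burned per hour: 1 OCU per 4 CPU cores (a 16-core instance burns 4 OCU/hour)."""
    return vcpus / 4

def bill(ocu_consumed: float, rate_per_ocu: float) -> float:
    """Final bill = ($ rate per OCU) * (# of OCU consumed)."""
    return rate_per_ocu * ocu_consumed

# Example: a 16-core instance running for 3 hours.
ocu = ocu_per_hour(16) * 3           # 4 OCU/hour * 3 hours = 12 OCU
print(bill(ocu, rate_per_ocu=2.00))  # 24.0 with the placeholder rate
```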

Instance types

Standard instance types

Onehouse supports the following standard instance types:

| Onehouse instance | Spec | OCU / Hour | EC2 instance (subject to upgrade) | GCE instance (subject to upgrade) | Cluster types supported |
| --- | --- | --- | --- | --- | --- |
| oh-general-4 | 4 vCPUs, 16 GB RAM | 1.0 | m8g.xlarge | e2-standard-4 | All |
| oh-general-8 | 8 vCPUs, 32 GB RAM | 2.0 | m8g.2xlarge | e2-standard-8 | All |
| oh-general-16 | 16 vCPUs, 64 GB RAM | 4.0 | m8g.4xlarge | e2-standard-16 | All |
| oh-gpu-4 | 1 GPU, 24 GB GPU RAM, 4 vCPUs, 16 GB RAM | 5.6 | g5.xlarge | g2-standard-4 | Open Engines Ray only |

Custom instance types

In addition to the standard instance types, project admins may enable custom instance types for their project. The following custom instance types are available:

| Instance | Cloud provider | Spec | OCU / Hour | Cluster types supported |
| --- | --- | --- | --- | --- |
| m7i.xlarge | AWS | 4 vCPUs, 16 GB RAM | 1.0 | All |
| n2d-standard-4 | GCP | 4 vCPUs, 16 GB RAM | 1.0 | All |
| n2d-standard-8 | GCP | 8 vCPUs, 32 GB RAM | 2.0 | All |

Reach out to Onehouse support to request additional custom instance types.

tip

Project admins can enable/disable custom instance types on the Project Settings page in the Onehouse console.

Monitor usage and costs

Observability

View project-level usage and billing metrics in the Onehouse UI under Settings > Usage. This page is visible only to project admins.

You may also view Cluster-level usage on the Clusters page in the Onehouse UI.

Notifications

You may set up notifications for OCU usage in your Cluster configurations.

Retrieving costs via API is not yet available.

Understanding usage vs. consumption

Actual OCU consumption that is billed for a project may be greater than the sum of "OCU Usage" by all the Clusters in the project.

This occurs with Cluster resource sharing, because "Usage" metrics only count the fractional CPU of an instance that is allocated to the Cluster. "Consumption" encompasses the total number of instances running in your account, which is what you are billed for.
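A small numeric sketch of the distinction, using hypothetical Cluster allocations on a shared 16-vCPU instance:

```python
# Hypothetical scenario: two Clusters share one 16-vCPU instance for one hour.
instance_vcpus = 16
allocated_vcpus = {"cluster-a": 4, "cluster-b": 6}  # fractional CPU allocated per Cluster

# "Usage" counts only the CPU allocated to each Cluster (shown per Cluster in the UI).
usage_ocu = sum(vcpus / 4 for vcpus in allocated_vcpus.values())  # 2.5 OCU

# "Consumption" counts the whole instance actually running in your account (what is billed).
consumption_ocu = instance_vcpus / 4                              # 4.0 OCU

print(usage_ocu, consumption_ocu)  # 2.5 4.0 -> consumption >= summed usage
```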

Tips to reduce OCU consumption

Reduce Stream Capture sync frequency

You can potentially decrease your OCU usage by reducing the sync frequency of Stream Captures running in your project. However, delays may occur if data volumes spike and commit durations exceed the sync interval.

Also note that Onehouse terminates any instance that has been idle for more than 10 minutes. If your Stream Captures sync at a short interval (under 10 minutes) and the source data is refreshed just as frequently, the 10-minute idle threshold may never be reached, and resources may not scale down.
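A rough sketch of that check, assuming (as a simplification) that an instance is idle only between the end of one commit and the next sync:

```python
IDLE_TIMEOUT_MIN = 10  # instances idle longer than this are scaled down

def may_scale_down(sync_interval_min: float, commit_duration_min: float) -> bool:
    """Rough heuristic: can the idle gap between commits exceed the idle timeout?

    Simplification: assumes fresh data arrives on every sync, so the instance is
    busy for commit_duration_min out of every sync_interval_min.
    """
    idle_gap = sync_interval_min - commit_duration_min
    return idle_gap > IDLE_TIMEOUT_MIN

print(may_scale_down(sync_interval_min=5, commit_duration_min=2))   # False: never idle 10+ min
print(may_scale_down(sync_interval_min=60, commit_duration_min=5))  # True: long idle windows
```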

Decrease your Cluster Min and Max OCU

Decreasing the Min OCU for a Cluster allows it to scale down when fewer resources are required to run a workload. Consider keeping this low if you don't need to keep the Cluster warm when data processing needs are reduced.

Decreasing the Max OCU limits how far the Cluster can scale up. Set a lower Max OCU if you want to throttle the Cluster to use fewer resources. Note that this may cause workloads to be processed more slowly, as it reduces the compute and memory available to them.
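As a rough illustration of why the Min OCU floor matters (the numbers are hypothetical):

```python
# Hypothetical baseline: a Cluster held at its Min OCU floor burns that many OCU
# every hour even with no workload, before any scale-up toward Max OCU.
hours_per_day = 24

for min_ocu in (1, 2, 4):
    baseline_ocu = min_ocu * hours_per_day
    print(f"Min OCU {min_ocu}: at least {baseline_ocu} OCU/day consumed while kept warm")
```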