Usage & Billing
Onehouse bills based on usage, measured in Onehouse Compute Units (OCU), which are determined by the compute instances Onehouse runs in your cloud account.
How does Onehouse bill?
Onehouse runs a Bring-Your-Own-Cloud (BYOC) model and charges based on compute usage.
BYOC model
Onehouse charges based on compute instance usage (measured in OCU), plus support costs. Onehouse does not charge additional volume-based or storage fees.
Customers are responsible for their own cloud infrastructure costs, as is standard practice in the BYOC deployment model.
OCU consumption
Onehouse operates Amazon EC2 or Google Compute Engine instances within your AWS or GCP account, respectively. Onehouse automatically scales these instances up and down based on your workload requirements and the parameters you set in your Clusters.
A Onehouse Compute Unit (OCU) is a normalized unit of compute that meters usage of Onehouse services.
1 OCU is equivalent to a 4-core CPU instance running for 1 hour in your cloud account.
Currently, any 4-core CPU instance in your account burns 1 OCU per hour, regardless of the instance family. This is subject to change in the future.
Larger instances burn proportionally more OCU, at the normalized rate of 4 CPU cores per OCU. For example, a 16-core instance running for 1 hour burns 4 OCU.
Your final bill is calculated from the OCU consumed: ($ rate per hourly OCU) * (# of hourly OCU consumed).
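As a rough sketch of that math (the per-OCU dollar rate and the workload below are hypothetical placeholders, not actual Onehouse prices), the calculation looks like this:

```python
# Minimal sketch of the OCU billing math described above.
# The hourly dollar rate and the example workload are hypothetical placeholders.
CORES_PER_OCU = 4          # 1 OCU = a 4-core instance running for 1 hour
RATE_PER_OCU = 0.50        # hypothetical $ rate per hourly OCU

def ocu_consumed(cpu_cores: int, hours: float) -> float:
    """OCU burned by one instance: cores normalized to 4-core units, times hours."""
    return (cpu_cores / CORES_PER_OCU) * hours

# Example: a 16-core instance running for 3 hours, plus a 4-core instance for 8 hours.
total_ocu = ocu_consumed(16, 3) + ocu_consumed(4, 8)   # 12 + 8 = 20 OCU
bill = RATE_PER_OCU * total_ocu                        # 0.50 * 20 = $10.00
print(f"OCU consumed: {total_ocu}, bill: ${bill:.2f}")
```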
Instance types
Standard instance types
Onehouse supports the following standard instance types:
| Onehouse instance | Spec | OCU / Hour | EC2 instance (subject to upgrade) | GCE instance (subject to upgrade) | Cluster types supported |
| --- | --- | --- | --- | --- | --- |
| oh-general-4 | 4 vCPUs, 16 GB RAM | 1.0 | m8g.xlarge | e2-standard-4 | All |
| oh-general-8 | 8 vCPUs, 32 GB RAM | 2.0 | m8g.2xlarge | e2-standard-8 | All |
| oh-general-16 | 16 vCPUs, 64 GB RAM | 4.0 | m8g.4xlarge | e2-standard-16 | All |
| oh-gpu-4 | 1 GPU, 24 GB GPU RAM, 4 vCPUs, 16 GB RAM | 5.6 | g5.xlarge | g2-standard-4 | Open Engines Ray only |
Custom instance types
In addition to the standard instance types, project admins may enable custom instance types for their project. The following custom instance types are available:
| Instance | Cloud provider | Spec | OCU / Hour | Cluster types supported |
| --- | --- | --- | --- | --- |
| m7i.xlarge | AWS | 4 vCPUs, 16 GB RAM | 1.0 | All |
| n2d-standard-4 | GCP | 4 vCPUs, 16 GB RAM | 1.0 | All |
| n2d-standard-8 | GCP | 8 vCPUs, 32 GB RAM | 2.0 | All |
Reach out to Onehouse support to request additional custom instance types.
Project admins can enable or disable custom instance types on the Project Settings page in the Onehouse console.
Monitor usage and costs
Observability
View project-level usage and billing metrics in the Onehouse UI under Settings > Usage. This page is visible only to project admins.
You may also view Cluster-level usage on the Clusters page in the Onehouse UI.
Notifications
You may set up notifications for OCU usage in your Cluster configurations.
Retrieving costs via API is not yet available.
Understanding usage vs. consumption
The actual OCU consumption billed for a project may be greater than the sum of "OCU Usage" reported by all the Clusters in the project.
This happens when Clusters share resources: "Usage" metrics count only the fractional CPU of an instance that is allocated to each Cluster, while "Consumption" covers the total number of instances running in your account, which is what you are billed for.
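As a hedged illustration (the Cluster names and allocations below are hypothetical), the sketch works through the accounting for two Clusters sharing one 16-core instance:

```python
# Hypothetical illustration of "Usage" vs. "Consumption" on a shared instance.
INSTANCE_CORES = 16            # e.g. one oh-general-16 instance
CORES_PER_OCU = 4

# Fraction of the instance's CPU allocated to each Cluster (hypothetical numbers).
allocated_cores = {"cluster-a": 6, "cluster-b": 6}

# "Usage" counts only the fractional CPU allocated to each Cluster.
usage_ocu = {name: cores / CORES_PER_OCU for name, cores in allocated_cores.items()}
total_usage = sum(usage_ocu.values())              # 1.5 + 1.5 = 3.0 OCU per hour

# "Consumption" counts the whole instance that is actually running (and billed).
consumption_ocu = INSTANCE_CORES / CORES_PER_OCU   # 4.0 OCU per hour

print(total_usage, consumption_ocu)  # billed consumption (4.0) > summed usage (3.0)
```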
Tips to reduce OCU consumption
Reduce Stream Capture sync frequency
You can decrease your OCU consumption by reducing the sync frequency of the Stream Captures running in your project. However, if data volumes spike and commit durations exceed the sync interval, ingestion may fall behind.
Also note that Onehouse shuts down any instance that sits idle for more than 10 minutes. If your Stream Captures sync frequently (interval shorter than 10 minutes) and the source data also refreshes frequently (more often than every 10 minutes), the idle timeout threshold may never be reached, and resources may not scale down.
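A back-of-the-envelope way to reason about this is to compare the idle gap between syncs with the 10-minute timeout; the sketch below is illustrative only (the timing values are hypothetical, and this is not a real Onehouse API):

```python
# Hypothetical check: can an instance sit idle for the 10-minute timeout
# between Stream Capture syncs? (Illustrative only, not a real Onehouse API.)
IDLE_TIMEOUT_MIN = 10

def can_scale_down(sync_interval_min: float, commit_duration_min: float) -> bool:
    """True if the idle gap between syncs can exceed the idle timeout."""
    idle_gap = sync_interval_min - commit_duration_min
    return idle_gap > IDLE_TIMEOUT_MIN

print(can_scale_down(sync_interval_min=5, commit_duration_min=2))   # False: never idle 10 min
print(can_scale_down(sync_interval_min=30, commit_duration_min=5))  # True: 25-minute idle gap
```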
Decrease your Cluster Min and Max OCU
Decreasing Min OCU for a Cluster allows it to scale down when fewer resources are required to run a workload. Consider keeping this low if you don't need to keep the Cluster warm when data processing needs are reduced.
Decreasing Max OCU limits how far the Cluster can scale up. Set a lower Max OCU if you want to throttle the Cluster to use fewer resources. Note that this may cause workloads to be processed more slowly, since it reduces the compute and memory available to them.
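Conceptually, Min and Max OCU simply bound how far the Cluster's allocation can move; the sketch below illustrates that clamping with hypothetical values (it is not Onehouse's actual autoscaling logic):

```python
# Illustrative-only sketch of how Min/Max OCU bound a Cluster's scaling.
# This is not Onehouse's actual autoscaling implementation.
def target_ocu(demand_ocu: float, min_ocu: float, max_ocu: float) -> float:
    """Clamp the workload's demanded OCU to the Cluster's configured bounds."""
    return max(min_ocu, min(demand_ocu, max_ocu))

print(target_ocu(demand_ocu=1.0, min_ocu=2.0, max_ocu=8.0))   # 2.0: never scales below Min OCU
print(target_ocu(demand_ocu=12.0, min_ocu=2.0, max_ocu=8.0))  # 8.0: throttled at Max OCU
```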