Thought Leadership

Become a Cloud Data FinOps Expert with Data Observability

December 21, 2021
10 Min Read

Ahhh, the good old days. When a database was a server, not serverless. When data was stored in structured tables, not free-for-all data lakes. And when calculating the return on your data investments was straightforward. Just take the annual dollar value generated by your data projects and subtract the amortized cost of your hardware and software, IT admin salaries, and maybe your data center energy bill. Voila! Whatever was left was your ROI.
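
To put numbers on that nostalgia, here is a minimal sketch of the old back-of-envelope math. Every figure below is hypothetical and purely illustrative.

```python
# Old-school data ROI: annual value generated minus annual fixed costs.
# All figures are hypothetical, for illustration only.

annual_value_generated = 2_000_000  # dollar value attributed to data projects
amortized_hw_sw_cost = 600_000      # hardware and software, amortized per year
it_admin_salaries = 450_000
data_center_energy = 150_000

annual_cost = amortized_hw_sw_cost + it_admin_salaries + data_center_energy
net_return = annual_value_generated - annual_cost
roi_pct = 100 * net_return / annual_cost

print(f"Net return: ${net_return:,}  |  ROI: {roi_pct:.0f}%")
# Net return: $800,000  |  ROI: 67%
```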

Those days -- and this is foreshadowing, but it was before data observability; we'll get to that soon enough -- are long gone. Managing IT costs and maximizing ROI have become so difficult today that they have spawned a whole new specialty called FinOps. And the cloud is only partly to blame. There are four reasons why tracking and controlling IT costs, including your data costs, is so hard:

1) Your data infrastructure and data pipelines are more complicated and sprawling

Whether they set out to do this or not, most companies today run a sprawling hybrid infrastructure. They have data stored on premises, in hosted servers, and in SaaS clouds. They serve data using old-school SQL relational databases, newer-school NoSQL key-value stores, and cutting-edge cloud data warehouses such as Snowflake.

They stream real-time data using Kafka, and feed ML applications from Databricks lakehouses, all in the name of becoming data-driven digital enterprises. However, the more cutting-edge cloud-native data applications they deploy, the more complex their infrastructure grows. 

While many data vendors tout themselves as one-stop shops, the reality is that most enterprises are not ready to standardize on a single vendor. Some companies are investing many millions in loosely organized, multi-year consolidation projects, but the overwhelming scope of these efforts causes most to abandon them while their data infrastructure continues to expand. And the rare company that succeeds in standardizing on a single SaaS data provider will find itself on a new upgrade treadmill that creates high operational complexity of a different kind.

2) Cloud costs are dynamic, making forecasts slippery

Organizations were initially wowed by the ease and low startup cost of the SaaS cloud. But the very thing that made the cloud so inexpensive to switch to -- its utilization-based pricing -- also makes it harder to budget for than a capital investment like a server or the flat monthly fees typical of hosted virtual machines.

Near-infinite scalability and pay-as-you-go pricing combine to make cloud costs highly volatile. Transmission errors in your event stream can cause incoming data to be re-sent, making volumes skyrocket. So can a surge in customers using your customer-facing personalization engine during the holidays. What about the rushed launch of new applications? Forgotten but still-operational data pipelines? Developers who optimized for query performance while ignoring cost? Or even the impact of volume discounts or overage penalties?

While the cloud honeymoon ended long ago, IT and data teams continue to grapple unsuccessfully with both tracking and forecasting their cloud operational expenses (op-ex). After all, what they can't measure, they can't control.

3) Intentionally or not, cloud providers don't make it easy

While SaaS providers charge by usage, their own costs tend to be fixed, so the more usage they drive, the more profit they generate. They beat the Agile drum so we'll quickly build and launch applications. They leave usage alerts and spending guardrails off by default. And they provide monthly bills that lack the detail customers need to correctly charge back data costs to the right department, project, or pipeline.
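
As a minimal illustration of that chargeback problem (the billing line items and team tags below are hypothetical), a roll-up by resource tag only works to the extent that line items carry tags at all. The untagged bucket is where most monthly bills leave you guessing:

```python
from collections import defaultdict

# Hypothetical billing export: (resource_tag, monthly_cost_usd).
# In practice, many line items arrive untagged -- which is the problem.
line_items = [
    ("team:recommendations", 4210.50),
    ("team:ml-platform", 2875.00),
    (None, 6340.25),                    # no tag, so no owner
    ("team:recommendations", 1120.75),
]

chargeback = defaultdict(float)
for tag, cost in line_items:
    chargeback[tag or "UNALLOCATED"] += cost

for owner, cost in sorted(chargeback.items(), key=lambda kv: -kv[1]):
    print(f"{owner:<24} ${cost:>10,.2f}")
```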

Not all of this is intentional. Today's data pipelines are complicated. For instance, a consumer recommendation engine for a shopping site might pull in a user's past shopping history, their behavior on other websites, a real-time feed of the user's clicks and mousing behavior on the site, and more.

Cloud providers don’t understand your business like you do. They don’t know what performance and uptime SLAs you must meet. They don’t realize your consumer recommendation engine is owned by a different team than your ML application.

4) Who owns the data costs and ROI?

Over the last several decades, technology has become so mission-critical that business units have sidestepped the IT department to deploy cloud services directly, enabled by the cloud's low startup costs. However, in the name of moving fast, those business teams sacrificed best practices around efficiency and cost control. As their cloud usage and data volumes grew, that neglect has hurt them.

In response, enterprises have begun hiring FinOps experts or creating centrally managed data operations (DataOps) teams. In theory, DataOps teams care deeply about controlling data costs. The reality is less clear. For anyone with data in their job title -- data engineer, data steward, data architect, and even the Chief Data Officer -- keeping data pipelines free of errors and bottlenecks is the top priority, so that they don't have to deal with unpleasant voicemails from their colleagues.

Minimizing costs is not something they were trained for or are skilled at. And since technology budgets are still usually owned by the business units themselves, data teams don't own the ROI of data projects, either. Controlling costs is not their KPI; enabling applications is. If an engineering team insists on building a whole new data pipeline instead of reusing an existing, 95%-identical one, who is the data engineer to refuse?

Five Actions to Control Cloud Data Costs

On the whole, the cloud, especially the latest generation of low-ops cloud-native applications, provides a higher-performance way to generate valuable insights from your data while minimizing the cost and effort to manage it. 

But cloud economics are what they are. Without tools to help you, op-ex will always be dynamic and elusive. And cloud providers will never be incentivized to provide those tools or help you curb your costs.

So what steps can companies take to get a handle on the ROI of their data operations and pipelines?

  1. Empower your data engineers with a data observability platform that gives them granular visibility into cloud data costs. Charge your data teams with continuing to move fast, while staying within set financial guardrails.
  2. Make cost optimization a first-class metric in your data engineering culture. Cost optimization must start in the design phase. It can’t be done reactively, after the engineering phase.
  3. Save time investigating spend anomalies by using data observability, which can provide automated root cause analysis that pinpoints likely cost drivers, such as service type or the compute costs of a specific warehouse.
  4. Have DataOps stay informed of unexpected cost events through automated alerts via email, Slack, and other channels (see the sketch after this list).
  5. Have data teams schedule regular financial reviews with their partners in the lines of business, FinOps, IT, etc. Over-communicate on all matters regarding cloud data contracts, budgets, and forecasts.
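
To make steps 3 and 4 concrete, here is a minimal sketch of spend-anomaly detection paired with a Slack alert. The daily spend figures, the three-sigma threshold, and the webhook URL are all hypothetical; a data observability platform automates this end to end, including the root cause analysis.

```python
import statistics

import requests  # pip install requests

# Hypothetical daily warehouse spend (USD). In practice this would come
# from your cloud provider's billing export or an observability platform.
daily_spend = [812, 790, 845, 803, 798, 821, 1612]  # today spikes

history, today = daily_spend[:-1], daily_spend[-1]
mean = statistics.mean(history)
stdev = statistics.stdev(history)

# Flag today's spend if it sits more than three standard deviations
# above the recent trend, and notify DataOps in Slack.
if today > mean + 3 * stdev:
    msg = (f"Cloud data spend anomaly: ${today:,.0f} today "
           f"vs. ${mean:,.0f} recent daily average.")
    # Placeholder incoming-webhook URL -- substitute your own.
    requests.post(
        "https://hooks.slack.com/services/T000/B000/XXXX",
        json={"text": msg},
        timeout=10,
    )
```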

You can learn more about how Acceldata's cost optimization capabilities enable data teams to proactively manage cloud costs and maximize the return on your overall data investments.

Get a demo of the Acceldata Data Observability Platform and learn how you can optimize your data spend, improve operational intelligence, and ensure data reliability.

Photo by Alexander Grey on Unsplash
