Operationalizing Snowflake Openflow

By Avinash Jadey

Snowflake Openflow makes it easier to run and govern pipelines right where your data lives. In this post, we’ll break down what it is and how to get started.

At this year’s Snowflake Summit, one announcement stood out for data leaders looking to simplify orchestration and governance: Snowflake Openflow. Plenty of new tools promise to streamline data pipelines, but Openflow is unique because it lives where your data already is—inside Snowflake—or, if you prefer, within your own cloud account through a “bring your own cloud” (BYOC) model.

If you’ve ever juggled multiple ETL platforms, worried about governance drift, or struggled to scale orchestration across teams, Openflow will feel like a big shift. Instead of bolting on external schedulers and connectors, Snowflake gives you a native framework for building, running, and governing your pipelines. The question is: how do you move from an exciting announcement to a working production setup?

This blog takes you through that journey by explaining not just what Openflow is, but how to operationalize it effectively.

What is Snowflake Openflow?

Openflow is Snowflake’s new service for data integration and orchestration. It’s designed to let teams build, deploy, and run data flows either directly in Snowflake or in their own cloud environment.

The magic lies in three building blocks:

  1. Deployments: Think of these as “engine rooms” for your flows. They’re isolated environments where pipelines run.
  2. Runtimes: Compute clusters inside deployments dedicated to executing flows. They process the data and perform transformations.
  3. Roles & Privileges: Fine-grained access controls that determine who can create, manage, or simply observe deployments and runtimes.

By bringing these together, Openflow gives organizations a centralized way to manage what was once fragmented across multiple tools.

Why Openflow Matters

Data teams face three recurring challenges: complexity, governance, and speed.

  • Complexity – many organizations run Airflow for scheduling, dbt for transformations, and vendor-specific tools for ingestion. Each adds cost, maintenance, and risk of failure.
  • Governance – with multiple orchestration layers, it’s difficult to enforce role-based access, track lineage, or maintain compliance.
  • Speed – moving data between systems increases latency and slows insight delivery.

Openflow addresses all three:

  1. Centralized orchestration – Everything runs inside Snowflake’s ecosystem, minimizing external dependencies.
  2. Simplified governance – Roles and privileges make it clear who can build, manage, or observe flows.
  3. Operational flexibility – The BYOC model lets teams run pipelines in their own cloud, giving them control over performance and cost.

For organizations trying to accelerate insights and reduce operational overhead, Openflow turns a traditionally fragmented workflow into a unified, manageable process.

Openflow Architecture

To understand why Openflow is so powerful, it helps to break down its architecture. Openflow is composed of two primary planes:

  • Control Plane (Snowflake-managed):
    • Provides the UI and APIs for managing deployments and runtimes.
    • Handles metadata, orchestration logic, and governance.
    • Fully secured and operated by Snowflake.
  • Data Plane (Customer-managed, BYOC):
    • Where the runtime clusters actually live and execute flows.
    • Hosted in your cloud account (AWS, Azure, or GCP), giving full control over infrastructure and scaling.
    • Communicates securely with Snowflake’s control plane.

This separation provides both ease of management and flexibility: Snowflake manages orchestration, while customers maintain control over execution, cost, and scaling.

How to Operationalize Openflow

To make the most of Openflow, it’s important to have a structured approach. Here’s a step-by-step guide to setting up Openflow deployments, roles, and runtimes. We’ll include both technical guidance (SQL) and UI instructions.  

Step 1: Set Up Your Environment

Before creating any deployments or runtimes, Openflow requires a dedicated database, schema, and image repository. This ensures Openflow agents can access the images they need to run pipelines.

SQL Example:

USE ROLE ACCOUNTADMIN;

-- Dedicated database, schema, and image repository for Openflow
CREATE DATABASE IF NOT EXISTS OPENFLOW;
USE OPENFLOW;
CREATE SCHEMA IF NOT EXISTS OPENFLOW;
USE SCHEMA OPENFLOW;
CREATE IMAGE REPOSITORY IF NOT EXISTS OPENFLOW;

-- Allow Openflow agents to access the repository images
GRANT USAGE ON DATABASE OPENFLOW TO ROLE PUBLIC;
GRANT USAGE ON SCHEMA OPENFLOW TO ROLE PUBLIC;
GRANT READ ON IMAGE REPOSITORY OPENFLOW.OPENFLOW.OPENFLOW TO ROLE PUBLIC;

This sets up a clean foundation where deployments and runtimes can operate without interference from other workloads.
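Before moving on, it can be worth confirming the objects and grants are in place. This is a minimal verification sketch using standard Snowflake SHOW commands against the object names created above:

-- Confirm the image repository exists in the Openflow schema
SHOW IMAGE REPOSITORIES IN SCHEMA OPENFLOW.OPENFLOW;

-- Review the grants made to PUBLIC
SHOW GRANTS ON DATABASE OPENFLOW;
SHOW GRANTS ON SCHEMA OPENFLOW.OPENFLOW;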

Step 2: Grant Administrative Permissions

Next, create a master role—OPENFLOW_ADMIN—with privileges to manage deployments and runtimes.

SQL Example:

CREATE ROLE OPENFLOW_ADMIN;

-- Replace YOURUSER with the administrator's username
GRANT ROLE OPENFLOW_ADMIN TO USER YOURUSER;

-- Privileges required to create Openflow deployments and runtimes
GRANT CREATE OPENFLOW DATA PLANE INTEGRATION ON ACCOUNT TO ROLE OPENFLOW_ADMIN;
GRANT CREATE OPENFLOW RUNTIME INTEGRATION ON ACCOUNT TO ROLE OPENFLOW_ADMIN;

-- Keep all granted roles active by default in the user's sessions
ALTER USER YOURUSER SET DEFAULT_SECONDARY_ROLES = ('ALL');

This ensures that designated administrators can always perform Openflow operations, regardless of their current active role.
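If you want to double-check that the privileges landed where you expect, standard SHOW and DESCRIBE commands are enough. A quick sketch, assuming the role and user names used above:

-- List the privileges now held by the admin role
SHOW GRANTS TO ROLE OPENFLOW_ADMIN;

-- Confirm the role assignment and the secondary-roles setting on the user
SHOW GRANTS TO USER YOURUSER;
DESCRIBE USER YOURUSER;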

Step 3: Create Deployment Roles

Openflow separates responsibilities into managers (who build and control deployments) and viewers (who can observe but not modify).

In this example, we’ll create:

  • 1 deployment manager role
  • 2 runtime manager roles (to manage runtimes inside deployments)
  • 2 viewer roles (to observe runtimes without modification)

SQL Example:

-- One deployment manager, two runtime managers, two viewers
CREATE ROLE IF NOT EXISTS deployment_manager;
CREATE ROLE IF NOT EXISTS deployment1_runtime_manager;
CREATE ROLE IF NOT EXISTS deployment1_runtime_viewer_1;
CREATE ROLE IF NOT EXISTS deployment2_runtime_manager;
CREATE ROLE IF NOT EXISTS deployment2_runtime_viewer_1;

-- Grant privileges
GRANT CREATE OPENFLOW DATA PLANE INTEGRATION ON ACCOUNT TO ROLE deployment_manager;
GRANT CREATE OPENFLOW RUNTIME INTEGRATION ON ACCOUNT TO ROLE deployment1_runtime_manager;
GRANT CREATE OPENFLOW RUNTIME INTEGRATION ON ACCOUNT TO ROLE deployment2_runtime_manager;

The principle here is simple: builders build, observers watch.
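One optional pattern worth considering is a role hierarchy: rolling the per-deployment roles up to the deployment manager so administrators can see everything their teams build. This is standard Snowflake role-to-role granting, not something Openflow requires, and it assumes the role names created above:

-- Let the deployment manager inherit the runtime manager roles it oversees
GRANT ROLE deployment1_runtime_manager TO ROLE deployment_manager;
GRANT ROLE deployment2_runtime_manager TO ROLE deployment_manager;

-- Viewers roll up to their runtime managers for easier auditing
GRANT ROLE deployment1_runtime_viewer_1 TO ROLE deployment1_runtime_manager;
GRANT ROLE deployment2_runtime_viewer_1 TO ROLE deployment2_runtime_manager;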

Step 4: Assign Roles to Users

Once roles are defined, assign them to the appropriate team members.

SQL Example:

-- Replace YOURUSER with each team member's username
GRANT ROLE deployment_manager TO USER YOURUSER;
GRANT ROLE deployment1_runtime_manager TO USER YOURUSER;
GRANT ROLE deployment1_runtime_viewer_1 TO USER YOURUSER;

At this stage, team members have clear, controlled access to Openflow components—minimizing risk while enabling collaboration.
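In practice, these roles usually go to different people rather than a single user. The sketch below gives the viewer role to a hypothetical read-only teammate (ANALYST_USER is a placeholder, not a name from the setup above) and then verifies the result:

-- ANALYST_USER is a placeholder for a read-only teammate
GRANT ROLE deployment1_runtime_viewer_1 TO USER ANALYST_USER;

-- Review the roles granted to that user
SHOW GRANTS TO USER ANALYST_USER;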

Step 5: Create Deployments (UI)

Deployments are the execution environments where Openflow pipelines run.

UI Instructions:

  1. Log in to Snowflake with a user who has the deployment_manager role.
  2. Navigate to Data > Openflow > Launch Openflow.
  3. Create Deployment 1 and grant usage to the runtime manager and viewer roles.
  4. (Optional) Create Deployment 2 for additional projects or testing.

Think of deployments as separate “workspaces” or engine rooms for your flows. They give teams isolation and flexibility while keeping governance intact.

Step 6: Create Runtimes

A runtime is a compute cluster inside a deployment that actually executes the flows.

UI Instructions:

  1. Log in as a user with the deployment1_runtime_manager role.
  2. In the Openflow UI, navigate to Runtimes > Create Runtime.
  3. Select Deployment 1, provide a runtime name, choose node type, and complete creation.

Your runtime is now ready to process pipelines, and managers can create, modify, or delete runtimes as needed—without impacting other deployments.

Best Practices for Openflow Adoption

While Openflow is powerful, organizations can accelerate success by following a few best practices:

  1. Start small: Pilot one deployment with a few runtimes to validate flows and permissions before scaling.
  2. Define clear roles: Managers and viewers should be assigned thoughtfully to maintain governance.
  3. Monitor usage: Keep track of runtime activity to optimize compute costs and avoid idle clusters (see the sketch below).
  4. Document everything: Maintain internal SOPs, naming conventions, and flow diagrams to ease onboarding.
  5. Integrate incrementally: Leverage existing data pipelines where possible, and gradually migrate flows into Openflow.

By combining these practices with a structured setup approach, organizations can unlock fast insights, better governance, and operational efficiency.
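To make the monitoring point (best practice 3) concrete, here is a minimal cost-check sketch against Snowflake's standard ACCOUNT_USAGE views. It only covers credits consumed inside Snowflake; compute for BYOC runtimes shows up in your cloud provider's billing rather than here.

-- Credits consumed over the last 7 days, grouped by service type
SELECT service_type,
       DATE_TRUNC('day', start_time) AS usage_day,
       SUM(credits_used)             AS credits
FROM snowflake.account_usage.metering_history
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
GROUP BY service_type, usage_day
ORDER BY usage_day, credits DESC;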

Example Use Cases

Organizations across industries can benefit from Openflow’s centralization and governance-first design.

  • Finance: Daily ETL pipelines for revenue reporting, fully governed with audit trails.
  • Healthcare: Data ingestion and transformation for clinical research, ensuring HIPAA-aligned governance.
  • Retail: Feeding AI-driven personalization models with real-time data pipelines.

From Setup to Business Value

Snowflake Openflow is a practical tool that enables teams to orchestrate data in a governed, scalable way. With the right preparation, organizations can move from announcement to implementation quickly.

Our approach demonstrates how Snowflake’s newest capabilities can be translated into real business outcomes, ensuring your data pipelines are not only operational but optimized for governance, flexibility, and scale.