At this year’s Snowflake Summit, one announcement stood out for data leaders looking to simplify orchestration and governance: Snowflake Openflow. Plenty of new tools promise to streamline data pipelines, but Openflow is unique because it lives where your data already is—inside Snowflake—or, if you prefer, within your own cloud account through a “bring your own cloud” (BYOC) model.
If you’ve ever juggled multiple ETL platforms, worried about governance drift, or struggled to scale orchestration across teams, Openflow will feel like a big shift. Instead of bolting on external schedulers and connectors, Snowflake gives you a native framework for building, running, and governing your pipelines. The question is: how do you move from an exciting announcement to a working production setup?
This blog takes you through that journey by explaining not just what Openflow is, but how to operationalize it effectively.
Openflow is Snowflake’s new service for data integration and orchestration. It’s designed to let teams build, deploy, and run data flows either directly in Snowflake or in their own cloud environment.
The magic lies in three building blocks:
- Deployments: isolated execution environments where your flows run, either managed by Snowflake or in your own cloud under the BYOC model.
- Runtimes: the compute clusters inside a deployment that actually execute pipelines.
- Connectors: prebuilt integrations that move data between external systems and Snowflake.
By bringing these together, Openflow gives organizations a centralized way to manage what was once fragmented across multiple tools.
Data teams face three recurring challenges: complexity, governance, and speed.
Openflow addresses all three:
- Complexity: a single native framework replaces the patchwork of external ETL tools, schedulers, and connectors.
- Governance: pipelines run inside Snowflake, or within your own cloud under the BYOC model, so access control and auditing stay in one place.
- Speed: teams build and run flows where the data already lives, shortening the path from raw data to insight.
For organizations trying to accelerate insights and reduce operational overhead, Openflow turns a traditionally fragmented workflow into a unified, manageable process.
To understand why Openflow is so powerful, it helps to break down its architecture. Openflow is composed of two primary planes:
- A control plane, managed by Snowflake, which handles orchestration and deployment management.
- A data plane, where pipelines actually execute: either inside Snowflake or in your own cloud account under the BYOC model.
This separation provides both ease of management and flexibility: Snowflake manages orchestration, while customers maintain control over execution, cost, and scaling.
To make the most of Openflow, it’s important to have a structured approach. Here’s a step-by-step guide to setting up Openflow deployments, roles, and runtimes. We’ll include both technical guidance (SQL) and UI instructions.
Before creating any deployments or runtimes, Openflow requires a dedicated database, schema, and image repository. This ensures Openflow agents can access the images they need to run pipelines.
SQL Example:
USE ROLE ACCOUNTADMIN;
CREATE DATABASE IF NOT EXISTS OPENFLOW;
USE OPENFLOW;
CREATE SCHEMA IF NOT EXISTS OPENFLOW;
USE SCHEMA OPENFLOW;
CREATE IMAGE REPOSITORY IF NOT EXISTS OPENFLOW;
GRANT USAGE ON DATABASE OPENFLOW TO ROLE PUBLIC;
GRANT USAGE ON SCHEMA OPENFLOW TO ROLE PUBLIC;
GRANT READ ON IMAGE REPOSITORY OPENFLOW.OPENFLOW.OPENFLOW TO ROLE PUBLIC;
This sets up a clean foundation where deployments and runtimes can operate without interference from other workloads. Granting to PUBLIC keeps the example simple; in stricter environments, grant usage to a dedicated role instead.
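To sanity-check the foundation before moving on, standard SHOW commands are enough; the repository row returned includes the URL that Openflow agents pull images from.

SQL Example:
SHOW IMAGE REPOSITORIES IN SCHEMA OPENFLOW.OPENFLOW;
SHOW GRANTS ON IMAGE REPOSITORY OPENFLOW.OPENFLOW.OPENFLOW;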
Next, create a master role—OPENFLOW_ADMIN—with privileges to manage deployments and runtimes.
SQL Example:
CREATE ROLE OPENFLOW_ADMIN;
GRANT ROLE OPENFLOW_ADMIN TO USER YOURUSER;
GRANT CREATE OPENFLOW DATA PLANE INTEGRATION ON ACCOUNT TO ROLE OPENFLOW_ADMIN;
GRANT CREATE OPENFLOW RUNTIME INTEGRATION ON ACCOUNT TO ROLE OPENFLOW_ADMIN;
ALTER USER YOURUSER SET DEFAULT_SECONDARY_ROLES = ('ALL');
This ensures that designated administrators can always perform Openflow operations, regardless of their current active role.
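As a quick check that the grants landed where you expect, SHOW GRANTS works per role and per user (YOURUSER is the same placeholder as above):

SQL Example:
-- Should list the two account-level Openflow privileges
SHOW GRANTS TO ROLE OPENFLOW_ADMIN;
-- Should show OPENFLOW_ADMIN among the user's roles
SHOW GRANTS TO USER YOURUSER;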
Openflow separates responsibilities into managers (who build and control deployments) and viewers (who can observe but not modify).
In this example, we’ll create:
- One deployment_manager role that can create and control deployments.
- A runtime manager role for each of two deployments (deployment1_runtime_manager, deployment2_runtime_manager).
- A runtime viewer role for each deployment (deployment1_runtime_viewer_1, deployment2_runtime_viewer_1).
SQL Example:
CREATE ROLE IF NOT EXISTS deployment_manager;
CREATE ROLE IF NOT EXISTS deployment1_runtime_manager;
CREATE ROLE IF NOT EXISTS deployment1_runtime_viewer_1;
CREATE ROLE IF NOT EXISTS deployment2_runtime_manager;
CREATE ROLE IF NOT EXISTS deployment2_runtime_viewer_1;
-- Grant privileges
GRANT CREATE OPENFLOW DATA PLANE INTEGRATION ON ACCOUNT TO ROLE deployment_manager;
GRANT CREATE OPENFLOW RUNTIME INTEGRATION ON ACCOUNT TO ROLE deployment1_runtime_manager;
GRANT CREATE OPENFLOW RUNTIME INTEGRATION ON ACCOUNT TO ROLE deployment2_runtime_manager;
The principle here is simple: builders build, observers watch.
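One optional refinement, not required by Openflow but a common Snowflake pattern: roll the per-deployment manager roles up into deployment_manager, so a single role can administer everything while viewers stay narrowly scoped.

SQL Example:
-- Optional role hierarchy: deployment_manager inherits both runtime manager roles
GRANT ROLE deployment1_runtime_manager TO ROLE deployment_manager;
GRANT ROLE deployment2_runtime_manager TO ROLE deployment_manager;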
Once roles are defined, assign them to the appropriate team members.
SQL Example:
GRANT ROLE deployment_manager TO USER YOURUSER;
GRANT ROLE deployment1_runtime_manager TO USER YOURUSER;
GRANT ROLE deployment1_runtime_viewer_1 TO USER YOURUSER;
At this stage, team members have clear, controlled access to Openflow components—minimizing risk while enabling collaboration.
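Because access is entirely role-based, offboarding is symmetric: a single REVOKE removes a team member's view into a deployment (again, YOURUSER is a placeholder).

SQL Example:
-- Remove a viewer's access when someone changes teams
REVOKE ROLE deployment1_runtime_viewer_1 FROM USER YOURUSER;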
Deployments are the execution environments where Openflow pipelines run.
UI Instructions (the exact labels are an assumption and may shift as Openflow evolves):
1. In Snowsight, launch the Openflow UI using a role that holds the CREATE OPENFLOW DATA PLANE INTEGRATION privilege, such as deployment_manager.
2. Open the Deployments view and choose the option to create a deployment.
3. Name the deployment and follow the wizard; for a BYOC deployment, this includes running the generated infrastructure template in your own cloud account.
Think of deployments as separate “workspaces” or engine rooms for your flows. They give teams isolation and flexibility while keeping governance intact.
A runtime is a compute cluster inside a deployment that actually executes the flows.
UI Instructions (same caveat on labels as above):
1. In the Openflow UI, open your deployment and go to the Runtimes view.
2. Choose the option to create a runtime, name it, and pick its node size and scaling range.
3. Assign the runtime manager and viewer roles created earlier so access lines up with the ownership model above.
Your runtime is now ready to process pipelines, and managers can create, modify, or delete runtimes as needed without impacting other deployments.
While Openflow is powerful, organizations can accelerate success by following a few best practices:
- Separate duties with roles: managers create and modify, viewers observe, and day-to-day work never happens as ACCOUNTADMIN.
- Isolate workloads into separate deployments so teams can move independently without stepping on each other.
- Standardize naming for databases, roles, deployments, and runtimes so ownership is obvious at a glance.
- Start small: prove out one pipeline end to end before migrating everything from existing tools.
By combining these practices with a structured setup approach, organizations can unlock fast insights, better governance, and operational efficiency.
These patterns are not industry-specific: any organization running pipelines across multiple tools can benefit from Openflow’s centralization and governance-first design.
Snowflake Openflow is a practical tool that enables teams to orchestrate data in a governed, scalable way. With the right preparation, organizations can move from announcement to implementation quickly.
Our approach demonstrates how Snowflake’s newest capabilities can be translated into real business outcomes, ensuring your data pipelines are not only operational but optimized for governance, flexibility, and scale.
Not sure about your next step? We'd love to hear about your business challenges. No pitch. No strings attached.