
Key Takeaways from Thursday’s Keynote at the Databricks Data + AI Summit 2025

By Kate Huber

A recap for data leaders building next-gen analytics and governance.

If Wednesday’s keynote session was about building an AI-native foundation, Thursday’s keynote at the 2025 Data + AI Summit turned the spotlight to analytics, governance, and data sharing at scale. With thousands joining in person and virtually, Databricks leaders and engineers took the stage to demonstrate how the platform is evolving to support a simpler, faster, and more governed data ecosystem across industries.

The session featured insights from Databricks leaders and data engineers shaping the platform, including:

  • Ali Ghodsi, Co-founder and CEO, Databricks
  • Arsalan Tavakoli-Shiraji, Co-founder and SVP of Field Engineering, Databricks
  • Bilal Aslam, Sr. Director of Product Management, Databricks
  • Ken Wong, Sr. Director of Product Management, Databricks
  • Matei Zaharia, Co-founder and CTO, Databricks; Creator of Apache Spark and MLflow
  • Michael Armbrust, Distinguished Engineer, Databricks
  • Miranda Luna, Director of Product Management, Databricks
  • Keegan Dubbs, Staff Product Manager, Databricks
  • Michelle Leon, Staff Product Manager, Databricks
  • Michael Flynn, Director, Core Data, Rivian
  • Michael Piatek, Principal Software Engineer, Databricks

Delta Sharing and Open Collaboration

Ken Wong emphasized that the future of data collaboration hinges on openness, not walled gardens. Open standards, such as Delta Sharing, allow teams to focus on insight and impact—not integration headaches.

He noted, “The more we standardize and simplify sharing, the faster the industry will move. Delta Sharing is about enabling velocity across teams, across clouds.”

Delta Sharing’s open-source model enables any consumer to access shared data, regardless of their cloud provider or technology stack. This empowers multi-party analytics, where vendors, suppliers, and customers can all collaborate using the same real-time insights.
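As a concrete illustration, here is a minimal provider-side sketch in Databricks SQL; the share, table, and recipient names (sales_share, prod.sales.daily_orders, acme_corp) are hypothetical:

    -- Create a share and expose an existing Unity Catalog table through it.
    CREATE SHARE IF NOT EXISTS sales_share
      COMMENT 'Curated sales data for partner analytics';

    ALTER SHARE sales_share
      ADD TABLE prod.sales.daily_orders;

    -- Register the consumer and grant read access. An open recipient can read
    -- the shared data with any Delta Sharing client, on any cloud or stack.
    CREATE RECIPIENT IF NOT EXISTS acme_corp
      COMMENT 'Partner consuming via the open Delta Sharing protocol';

    GRANT SELECT ON SHARE sales_share TO RECIPIENT acme_corp;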

“Sharing data securely and easily across organizations shouldn't require new infrastructure or custom pipelines. That’s what Delta Sharing solves.” – Bilal Aslam

With support for Unity Catalog and table-level access control, organizations gain the confidence to share data securely without setting up complex APIs or data warehouses for distribution.
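On the consumer side, a Databricks-to-Databricks recipient can mount a share as an ordinary catalog and query it directly; in this sketch, the provider name acme_provider is hypothetical:

    -- Mount the share as a local catalog, governed like any other in Unity Catalog.
    CREATE CATALOG IF NOT EXISTS partner_sales
      USING SHARE acme_provider.sales_share;

    -- Shared tables then behave like regular read-only tables.
    SELECT * FROM partner_sales.sales.daily_orders LIMIT 10;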

The keynote underscored how Delta Sharing is eliminating vendor lock-in while also providing fine-grained governance through the Lakehouse architecture.

In short, data sharing was a central theme: Databricks is doubling down on Delta Sharing as a frictionless, open protocol for cross-cloud collaboration, letting practitioners work across organizational and cloud boundaries and unlock previously siloed insights.

“Back in the day, just getting data from point A to point B was an absolute nightmare. Making sharing and collaboration easier reduces friction and accelerates value.” – Bilal Aslam

Photon: Faster SQL, Lower Costs

Michael Armbrust emphasized that Photon isn't just for Databricks-native data—it accelerates queries regardless of source, ensuring consistent performance even in federated environments.

Photon also complements streaming and real-time use cases, giving teams sub-second response times for modern applications.

Photon’s performance has been proven across diverse workloads—from ad hoc Business Intelligence (BI) queries to streaming pipelines. Thanks to vectorized execution and adaptive query planning, enterprises have seen cost reductions of up to 50% on heavy compute workloads.

“We designed Photon from the ground up to work with the Lakehouse, with vectorization, cache awareness, and native execution for maximum speed.” – Michael Armbrust

Engineers emphasized how the enhancements are designed to work out-of-the-box, requiring no tuning or rewriting of existing queries.
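Because Photon is enabled on the compute rather than written into the query, existing SQL runs as-is. A quick sanity check, assuming a Photon-enabled cluster or SQL warehouse and a hypothetical prod.sales.daily_orders table, is to inspect the plan with EXPLAIN, whose output typically includes a Photon section noting any operators that fall back to the standard engine:

    -- No rewrites or hints required: the same query simply runs on Photon.
    EXPLAIN
    SELECT region, SUM(revenue) AS total_revenue
    FROM prod.sales.daily_orders
    GROUP BY region;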

Performance updates to Photon also stood out. Engineered for lightning-fast SQL execution, Photon delivers better price-performance by optimizing query workloads without compromising accuracy or flexibility. The keynote demonstrated how even the largest datasets can now be queried at sub-second latency.

Unity Catalog as the Control Plane

For regulated industries, Unity Catalog offers a compliant foundation for AI. It enables secure model access, permission inheritance, and prompt lineage—all within the same platform that runs analytics and machine learning (ML).

Executive teams are increasingly demanding transparency around large language model (LLM) usage, and Unity Catalog delivers this in an integrated and auditable manner.

Unity Catalog has become the central metadata and governance layer across the entire Databricks ecosystem. It now supports dynamic views, lineage for notebooks and dashboards, and tag-based access policies.
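A minimal sketch of a dynamic view, assuming a hypothetical pii_readers account group and an analysts group:

    -- Mask a sensitive column for everyone outside a privileged group.
    CREATE OR REPLACE VIEW prod.sales.orders_redacted AS
    SELECT
      order_id,
      region,
      CASE
        WHEN is_account_group_member('pii_readers') THEN customer_email
        ELSE 'REDACTED'
      END AS customer_email,
      revenue
    FROM prod.sales.daily_orders;

    -- Grant access to the governed view rather than the underlying table.
    GRANT SELECT ON VIEW prod.sales.orders_redacted TO `analysts`;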

“We see Unity Catalog as the control plane not just for data, but for AI, governance, and access across the entire platform.” – Matei Zaharia

By acting as the “single source of truth” for both structured and unstructured assets, Unity Catalog simplifies auditability and reduces risk in production pipelines.

Matei Zaharia shared how this unified catalog unlocks secure multi-tenant operations while enabling AI governance for enterprise LLM deployments.

Unity Catalog continues to mature as the “governance backbone” of the Lakehouse. Beyond securing access, it is enabling lineage, model governance, and data product discoverability. The keynote highlighted how Unity Catalog integrates deeply with the Databricks ecosystem, helping teams manage complexity without slowing down innovation.

Databricks Catalog and SQL AI Functions

With the Assistant and SQL AI Functions, Databricks is blurring the line between BI and AI. Instead of writing complex SQL, business users can describe what they need—and the assistant translates that into executable, auditable logic.

Michelle Leon demonstrated how SQL AI Functions reduce friction for less technical users by abstracting common joins, filters, and transformations into explainable LLM-driven outputs.

“With AI Functions, any user can generate, explain, and run transformations. It’s SQL with superpowers, governed by default.” – Michelle Leon

Databricks Catalog now provides a unified index of all data assets—tables, views, functions—across all workspaces and accounts. It powers the Databricks Assistant, enabling natural language queries against structured metadata. Users can now invoke SQL AI Functions that leverage LLMs to answer questions, recommend actions, or transform data inline—all governed through Unity Catalog.
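As a hedged sketch of what this looks like in practice: ai_classify and ai_query are documented Databricks SQL functions (availability varies by release and region), while the table and serving-endpoint names below are hypothetical:

    -- Classify free-text rows inline, governed like any other SQL.
    SELECT
      ticket_id,
      ai_classify(body, ARRAY('billing', 'bug', 'feature_request')) AS category
    FROM prod.support.tickets;

    -- ai_query sends a prompt to a named model serving endpoint.
    SELECT
      ai_query('llm-endpoint', 'Summarize this ticket in one sentence: ' || body) AS summary
    FROM prod.support.tickets
    LIMIT 10;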

Streamlining Developer Experience

Arsalan Tavakoli-Shiraji pointed out that developer productivity is now a board-level concern. “If your data team spends most of its time fighting config files or permissions, you’re not going to ship fast enough to compete,” he said.

The new developer workflows—from local development to cloud deployment—align with software best practices and dramatically reduce time-to-insight.

Multiple speakers walked through new capabilities that align with modern software engineering workflows—such as CI/CD support, interactive debugging, and native notebook branching.

"Our goal is to make developing with Databricks as natural and productive as using any modern IDE—because data engineering should move as fast as software." – Miranda Luna

A key announcement was Git-backed Lakehouse workflows, allowing teams to version, review, and deploy pipelines just like code.

New observability dashboards also give data engineers deeper visibility into lineage, job failures, and performance bottlenecks across environments.

Throughout the session, Databricks engineers introduced enhancements designed to reduce developer toil. From intelligent schema inference to new VS Code extensions and Git-backed workflows, the platform now makes it easier for practitioners to build and deploy analytics and AI with modern DevOps practices.

How Customers Are Using It

Enterprises across industries echoed similar wins: teams are cutting onboarding time for new data practitioners from weeks to hours by centralizing metadata and standardizing access via Unity Catalog.

These outcomes aren’t just technical—they translate directly into business agility, cost savings, and faster delivery of data-driven products.

Michael Flynn from Rivian detailed how the company uses Unity Catalog to enforce consistent data definitions across hundreds of analysts and data engineers.

Their federated architecture enables teams to query cloud object stores and on-premises systems seamlessly, eliminating silos and supporting advanced AI use cases, such as predictive maintenance.

“Federated governance gives us the confidence to scale securely. We don’t need separate tools for cloud, on-prem, and partners anymore.” – Michael Flynn, Rivian
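In platform terms, that pattern maps to Lakehouse Federation. A minimal, generic sketch follows (not Rivian’s actual setup; hostnames, secret scopes, and catalog names are hypothetical):

    -- Register the external system once, pulling credentials from a secret scope.
    CREATE CONNECTION IF NOT EXISTS postgres_onprem TYPE postgresql
    OPTIONS (
      host 'pg.internal.example.com',
      port '5432',
      user secret('federation', 'pg_user'),
      password secret('federation', 'pg_password')
    );

    -- Expose it as a foreign catalog so queries and governance stay unified.
    CREATE FOREIGN CATALOG IF NOT EXISTS onprem_warehouse
      USING CONNECTION postgres_onprem
      OPTIONS (database 'warehouse');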

Real-time access, granular security, and metadata visibility were cited as game-changers for enabling trust and efficiency at scale.

Speakers from companies like Rivian described how they’re operationalizing analytics with Unity Catalog and Lakehouse Federation. Real-world use cases highlighted the role of Databricks in simplifying access controls, accelerating compliance, and making data available securely across cloud environments.

“At Rivian, we’re able to discover, query, and analyze data across clouds thanks to the federation capabilities Databricks now offers.” – Michael Flynn, Rivian

Databricks is investing not only in building an AI-native future but also in making it governed, performant, and accessible for every team. From open standards and cross-cloud sharing to secure governance and self-serve analytics, the Lakehouse vision is becoming a business-critical reality for enterprises everywhere.

From streamlining developer productivity to redefining how data is shared, governed, and activated across the enterprise, Thursday’s keynote reinforced that Databricks isn’t just evolving its platform—it’s rearchitecting the future of enterprise data. Whether you’re modernizing legacy systems, enabling governed GenAI, or scaling analytics across business units, now is the time to align your architecture with your AI ambitions.  

Concord can help your business implement these innovations to accelerate your data strategy. Contact us today!
