For years, enterprise data architecture has been split into separate worlds. SQL teams managed operational databases. Analytics teams built warehouses and lakehouses. BI teams maintained semantic models and reports. AI teams copied data into vector stores or notebooks. Each world had its own tools, security model, performance assumptions, and operating rhythm.

The direction signalled by FabCon and SQLCon 2026 is that this separation is becoming harder to justify. Microsoft is positioning SQL and Fabric as parts of one operational data platform: transactional SQL data, analytical storage in OneLake, governed semantic models, real-time intelligence, and AI agents that can use business context from the same foundation.

That does not mean every SQL Server, Azure SQL Database, Synapse workspace, and Fabric item should be merged overnight. It means architects should stop designing SQL modernisation, lakehouse analytics, and AI enablement as unrelated programmes. The practical goal is not a bigger platform diagram. The goal is to reduce duplicated data movement, inconsistent security, and disconnected business definitions.

The old operating model is too fragmented

A typical enterprise pattern looks like this:

Operational SQL systems
    -> nightly ETL
        -> warehouse or lake
            -> semantic model
                -> dashboard
                    -> analyst export
                        -> AI proof of concept

This works, but it is slow and brittle. The business asks a question, the data team checks whether the pipeline contains the right data, the BI team checks whether the metric is defined, and the AI team asks for another copy. By the time the answer arrives, the operational situation may already have changed.

The fragmentation also creates security problems. A customer record may be protected in SQL, exported into a lake with different controls, reshaped into a Power BI dataset with another permission model, and then copied into a notebook or agent tool. Each copy is a new exposure path. Each layer needs lineage, retention, ownership, and audit.

The better pattern is to make SQL, Fabric, and AI share a governed data plane wherever practical.

What an integrated SQL and Fabric platform means

An integrated platform does not mean SQL disappears. SQL remains the system of record for many enterprise applications. Fabric does not replace every database. Instead, Fabric becomes the analytical and contextual layer around the operational estate.

A practical reference model is:

Business Applications
- ERP, CRM, portals, custom apps
- Azure SQL / SQL Server / SQL MI
        |
        | mirroring, shortcuts, pipelines, events
        v
Microsoft Fabric
- OneLake data products
- Lakehouse / warehouse / SQL database in Fabric
- Semantic models and shared metrics
- Real-Time Intelligence and Eventhouse
- Fabric IQ / context layer
        |
        v
Consumers
- Power BI reports
- Data science notebooks
- Data agents
- Operations agents with approval gates

The key design principle is that operational data should become analytically useful without creating uncontrolled copies. Mirroring and OneLake integration are valuable because they reduce the gap between the operational source and the analytical surface. Semantic models are valuable because they define business meaning once. Real-time capabilities are valuable because many operational decisions cannot wait for yesterday's refresh.

Architecture decisions that matter

The first decision is ownership. A SQL database may be owned by an application team, but the Fabric data product derived from it needs a named business owner, a technical owner, a data classification, quality expectations, and a support model. Without ownership, integrated platforms become dumping grounds.

The second decision is data movement. Not every table needs to be mirrored. Start with domains where the business value is clear: customer, subscription, invoice, case, order, asset, telemetry, and cost. For each domain, decide whether the data should be mirrored in near real time, loaded in batches, exposed through a shortcut, or kept inside the source system with query federation. Use the lowest-complexity pattern that satisfies the requirement.
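As a sketch, that per-domain decision can be encoded as a small rule that always prefers the simplest pattern that meets the requirement. The attribute names, pattern names, and thresholds below are illustrative assumptions, not Fabric APIs:

```python
from dataclasses import dataclass

@dataclass
class DomainRequirement:
    name: str
    max_staleness_minutes: int   # how stale the analytical copy may be
    already_in_onelake: bool     # e.g. already exposed by another workload
    source_must_stay_put: bool   # regulatory or system constraint

def choose_pattern(req: DomainRequirement) -> str:
    """Pick the lowest-complexity integration pattern that satisfies the requirement."""
    if req.already_in_onelake:
        return "shortcut"            # reference in place, no new copy at all
    if req.source_must_stay_put:
        return "query_federation"    # query at source, accept the source load
    if req.max_staleness_minutes >= 24 * 60:
        return "batch_pipeline"      # a daily batch load is enough
    return "mirroring"               # near-real-time replication into OneLake
```

A telemetry domain with a five-minute staleness budget would resolve to mirroring, while an invoice domain that tolerates a day of lag resolves to a batch pipeline.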

The third decision is security inheritance. A user who cannot access sensitive salary data in SQL should not get it through a Fabric report, notebook, or agent. Design row-level security, column-level security, workspace boundaries, sensitivity labels, and audit logging before scaling consumption.
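One way to test this invariant before scaling consumption is to compare effective read permissions across the two layers and flag any Fabric exposure that exceeds the SQL grants. The data structures below are a simplified illustration, not an actual SQL or Fabric API:

```python
def fabric_access_is_safe(sql_grants: dict, fabric_grants: dict) -> dict:
    """Return users whose Fabric exposure exceeds their SQL grants.

    Both inputs map user -> set of (table, column) pairs the user can read.
    An empty result means Fabric access is a subset of SQL access.
    """
    violations = {}
    for user, columns in fabric_grants.items():
        extra = columns - sql_grants.get(user, set())
        if extra:
            violations[user] = extra
    return violations

# Hypothetical example: a user who cannot read salary in SQL
# must not be able to reach it through a Fabric surface.
sql_side = {"analyst1": {("employee", "name")}}
fabric_side = {"analyst1": {("employee", "name"), ("employee", "salary")}}
```

Running the check on this example reports the salary column as a violation for `analyst1`; the same idea extends to row-level predicates and sensitivity labels.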

The fourth decision is semantic consistency. If revenue means one thing in SQL, another thing in Power BI, and another thing inside an agent prompt, the platform will lose trust. Define core metrics centrally and reuse them.
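A minimal sketch of "define once, reuse everywhere" is a central metric registry that reports and agent prompts both resolve against instead of redefining the logic locally. The metric names, expression format, and owners below are illustrative assumptions:

```python
# Central registry: one definition per business metric.
METRICS = {
    "recurring_revenue": {
        "expression": "SUM(invoice.amount) WHERE invoice.type = 'recurring'",
        "owner": "finance",
        "grain": "customer-month",
    },
    "open_incidents": {
        "expression": "COUNT(case.id) WHERE case.status = 'open'",
        "owner": "service",
        "grain": "customer",
    },
}

def metric_expression(name: str) -> str:
    """Every consumer (report, notebook, agent tool) resolves metrics here."""
    if name not in METRICS:
        raise KeyError(f"metric '{name}' is not centrally defined")
    return METRICS[name]["expression"]
```

The point is not the registry mechanics but the discipline: a dashboard and an agent answering "what is recurring revenue for this customer" must resolve to the same expression.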

Example: customer risk platform

Consider a Malaysian enterprise trying to identify customers at risk of churn. The relevant information may sit across SQL databases, CRM, support systems, invoice ageing, and product usage telemetry. The old approach is to export everything into a warehouse and build dashboards. That is useful, but not always timely.

A Fabric-centred approach would mirror selected operational tables, stream usage events into Real-Time Intelligence, and curate a customer data product in OneLake. A semantic model would then define active customer, recurring revenue, open incident, SLA breach, and risk score, and that model would be exposed to both reports and agents.

The agent should not directly query every raw table. It should use curated data products and governed metrics. If it recommends action, such as creating a CRM task or notifying an account manager, that action should pass through a policy or approval gate.

SQL customer + invoice data
Support cases
Usage events
        |
        v
Curated customer data product in Fabric
        |
        +--> Power BI risk dashboard
        +--> Data agent answer with evidence
        +--> Operations workflow recommendation
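The risk score itself can start as a transparent weighted heuristic over the curated inputs, before any machine learning is introduced. The features, scales, and weights below are illustrative assumptions, not a recommended model:

```python
def churn_risk(open_incidents: int, days_since_login: int, overdue_invoices: int) -> float:
    """Weighted churn-risk score in [0, 1] over curated customer signals.

    Each signal is capped at an illustrative saturation point
    (5 incidents, 90 days inactive, 3 overdue invoices).
    """
    score = (
        0.4 * min(open_incidents / 5, 1.0)
        + 0.3 * min(days_since_login / 90, 1.0)
        + 0.3 * min(overdue_invoices / 3, 1.0)
    )
    return round(score, 2)
```

A transparent formula like this is easy to explain to account managers and easy for an agent to cite as evidence; it can be replaced by a trained model later without changing the consumption surface.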

This is where SQL and Fabric become operationally interesting. The platform is not only reporting what happened. It is helping the business respond with context.

Migration approach for existing SQL estates

Do not start by moving every database. Start with a portfolio assessment.

Group SQL workloads into four categories. First, systems of record that should remain operational databases. Second, reporting-heavy databases that need better analytical scale. Third, legacy marts that can be rationalised into Fabric data products. Fourth, experimental copies that should be retired.

For each candidate domain, document source owner, data classification, refresh requirement, consumer list, integration pattern, and target security controls. Then build one end-to-end slice: operational source to OneLake, semantic model, report, and agent-safe consumption surface.

A simple implementation checklist:

1. Select one high-value business domain.
2. Identify source SQL tables and data owner.
3. Classify sensitive columns before ingestion.
4. Choose mirroring, pipeline, or shortcut pattern.
5. Define curated tables and semantic metrics.
6. Apply row/column security and sensitivity labels.
7. Validate with one report and one agent-safe query path.
8. Monitor freshness, failures, usage, and cost.
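The checklist can double as a machine-checkable readiness gate for each domain slice, so a slice is not promoted until every step is recorded. The item names below are illustrative:

```python
# Mirrors the eight checklist steps above; names are illustrative.
CHECKLIST = [
    "domain_selected", "owner_identified", "columns_classified",
    "ingestion_pattern_chosen", "metrics_defined", "security_applied",
    "consumption_validated", "monitoring_in_place",
]

def outstanding_items(state: dict) -> list:
    """Return the checklist items not yet satisfied for a domain slice."""
    return [item for item in CHECKLIST if not state.get(item)]

def slice_is_ready(state: dict) -> bool:
    return not outstanding_items(state)
```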

This phased approach avoids a common mistake: launching Fabric as a new central platform while leaving business definitions and security controls behind.

Pitfalls to avoid

The first pitfall is treating Fabric as just another destination for SQL exports. If the governance model does not improve, the organisation only gets a newer silo.

The second pitfall is ignoring operational latency. Some use cases are fine with daily refresh. Others require event-driven updates. Match the ingestion pattern to the decision being supported.

The third pitfall is giving agents raw database access. Agent access should be mediated through curated data products, semantic models, and purpose-built tools. Raw access increases the chance of data leakage and incorrect interpretation.
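A minimal sketch of that mediation is a tool gateway that maps approved tool names to curated queries and rejects everything else, so the agent never composes raw SQL. The tool names and queries are hypothetical:

```python
# Allow-list of agent tools; each maps to one curated, governed query.
APPROVED_TOOLS = {
    "customer_risk": "SELECT customer_id, risk_score FROM curated.customer_risk",
    "open_incidents": "SELECT customer_id, incident_id FROM curated.incidents",
}

def run_agent_tool(name: str) -> str:
    """Agents call named tools; anything outside the allow-list is refused."""
    if name not in APPROVED_TOOLS:
        raise PermissionError(f"tool '{name}' is not an approved data product")
    return APPROVED_TOOLS[name]
```

The same gateway is a natural place to attach audit logging and the approval gates mentioned earlier for write actions.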

The fourth pitfall is underestimating cost ownership. Mirroring, storage, compute, real-time processing, and agent queries all have cost profiles. FinOps should be part of the design from the beginning.

Key takeaways

  • SQL and Fabric should be planned as one enterprise data platform, not separate modernisation tracks.
  • Keep operational SQL systems where they belong, but expose selected data into governed Fabric data products.
  • Security, semantic consistency, and ownership matter more than the ingestion technology.
  • AI agents should consume curated and governed context, not uncontrolled database copies.
  • Start with one business domain and prove the operating model before scaling.