Implementing Dynamic Data Ingestion for Multi-Store Retail with Microsoft Fabric

Cloud Reporter

A technical guide for building a scalable data platform that automatically accommodates new retail stores without manual configuration, using pattern-based CDC detection and dynamic partitioning in Microsoft Fabric.

One of the most overlooked aspects of building a scalable data platform is the ability to accommodate growth without requiring constant reconfiguration. In retail environments where new stores are opened regularly, the traditional approach of manually configuring each data source becomes a significant operational burden. This article explores how to implement dynamic partitioning combined with automatic data source detection in Microsoft Fabric, specifically for scenarios involving dozens or hundreds of retail store databases. The architecture discussed here addresses a common challenge: how do you ingest data from 57 stores today while ensuring that store number 58, 59, or even store 100 can be onboarded with minimal intervention? The answer lies in a combination of pattern-based source detection at the CDC layer and dynamic partition creation at the storage layer.

Understanding the Problem Space

When organizations first approach multi-source data ingestion, they typically start with explicit configuration. Each database connection is defined individually, each table mapping is specified, and each destination partition is created manually. This works reasonably well for five or ten sources. It becomes painful at twenty. It becomes nearly unmanageable at fifty or more.

The operational cost manifests in several ways. Every new store requires a ticket to the data engineering team. Someone must configure the connection, verify the schema compatibility, set up the ingestion pipeline, create the destination structures, and validate the data flow. In a fast-growing retail operation, this creates a bottleneck that delays time-to-insight for new locations.

Beyond the immediate operational burden, there is also the risk of configuration drift. When each source is configured individually, small inconsistencies creep in over time. One store might have slightly different table names. Another might be ingesting an extra column that was added during a schema migration. These inconsistencies compound, making the overall system harder to maintain and debug.

The solution presented here eliminates most of this manual work by implementing two complementary patterns: automatic source detection through regex-based CDC configuration, and dynamic partition creation through Delta Lake's native capabilities.

Architecture Overview

The proposed architecture places Debezium as the CDC engine, reading from PostgreSQL databases and publishing change events to Azure Event Hubs. Fabric EventStream consumes these events and writes them to a Delta Lake table in the Lakehouse. The key insight is that neither Debezium nor Delta Lake requires explicit enumeration of every data source. Both can operate on patterns rather than explicit lists.

At the source layer, Debezium connects to PostgreSQL and monitors the write-ahead log for changes. Rather than configuring a separate connector for each store database, a single connector is configured with a regex pattern that matches all store databases. When a new database is created that matches this pattern, Debezium automatically begins capturing its changes without any configuration update.

At the destination layer, Delta Lake tables are defined with partition columns but without explicit partition values. When a record arrives with a previously unseen partition value, Delta Lake automatically creates the necessary directory structure and begins writing data to the new partition. No DDL statement is required, no manual intervention is needed.

The combination of these two behaviors creates an end-to-end pipeline where adding a new store database is as simple as creating the database itself. Everything downstream happens automatically.

Implementation Details

Configuring Debezium for Automatic Source Detection

The critical configuration element in Debezium is the database include list parameter. Rather than specifying each database explicitly, this parameter accepts a regular expression that defines which databases should be monitored. For a retail environment where store databases follow a naming convention such as store_001, store_002, and so forth, the configuration would specify a pattern like store_.* as the include list. This pattern matches any database whose name begins with store_ followed by any characters.

When the DBA creates store_058, Debezium detects this new database during its periodic metadata refresh and automatically begins capturing changes from it. The connector configuration should also specify which tables within each database to monitor. In most retail scenarios, the schema is standardized across all stores, so this can be a fixed list such as public.transactions, public.inventory, and public.products. If schema variations exist between stores, additional filtering logic may be required, but this is generally a sign that the source systems need standardization rather than accommodation of inconsistency.
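
As a concrete sketch, the connector could be registered through the Kafka Connect REST API as shown below. The endpoint, credentials, and connector name are placeholders. The property names follow Debezium's MySQL connector, whose database.include.list accepts regular expressions exactly as described here; the PostgreSQL connector binds each connector to a single database and applies its regex filters through schema.include.list and table.include.list instead, so the filter properties shift accordingly in that case.

```python
import requests  # assumes the Kafka Connect REST API is reachable from this host

CONNECT_URL = "http://kafka-connect:8083/connectors"  # placeholder endpoint

# Pattern-based connector definition. Required properties such as the schema
# history topic and a unique server id are omitted here for brevity.
connector = {
    "name": "retail-stores-cdc",
    "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "database.hostname": "retail-db.internal",   # placeholder
        "database.port": "3306",
        "database.user": "debezium",
        "database.password": "********",              # pull from a secret store
        # Any database whose name starts with store_ is captured automatically.
        "database.include.list": "store_.*",
        # Standardized table set, expressed as regexes over database.table names.
        "table.include.list": "store_.*\\.transactions,store_.*\\.inventory,store_.*\\.products",
        "snapshot.mode": "initial",
        "topic.prefix": "retail",
    },
}

response = requests.post(CONNECT_URL, json=connector, timeout=30)
response.raise_for_status()
print(response.status_code, response.json().get("name"))
```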

The topic routing configuration is equally important. All CDC events should be routed to a single Event Hubs topic, with the store identifier included in the message payload. This allows a single EventStream to process all store data while preserving the ability to identify which store each record originated from. The source metadata that Debezium includes with each change event contains the database name, which serves as the natural store identifier.
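
Extending the sketch above, the single-topic routing can be expressed with the standard Kafka Connect RegexRouter transform; the topic names assume the retail prefix used earlier.

```python
# Added to the connector config sketched above: collapse the per-database,
# per-table topics that Debezium produces by default into a single topic.
# The originating store stays identifiable through source.db in each payload.
connector["config"].update({
    "transforms": "unify",
    "transforms.unify.type": "org.apache.kafka.connect.transforms.RegexRouter",
    "transforms.unify.regex": "retail\\.store_.*\\..*",
    "transforms.unify.replacement": "retail.all_stores",
})
```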

Initial snapshot behavior must be considered carefully. When Debezium detects a new database, it will perform an initial snapshot to capture the current state before beginning to track incremental changes. For a store with substantial historical data, this snapshot may take anywhere from a few minutes to several hours. During this period, the connector is occupied with the snapshot and may exhibit increased latency for change events from other databases. Scheduling new store database creation during off-peak hours helps mitigate this impact.

Configuring Delta Lake for Dynamic Partitioning

Delta Lake supports dynamic partition creation as a default behavior. When a table is created with partition columns specified, any INSERT operation that includes a previously unseen partition value will automatically create the corresponding partition directory. The table definition should include the store identifier as the primary partition column. A secondary partition on date is typically advisable for managing data lifecycle and optimizing query performance. The combination of store_id and event_date provides a natural organization that supports both store-specific queries and time-range queries efficiently.

The table should be created with automatic optimization enabled. The autoOptimize.optimizeWrite property causes Delta Lake to automatically coalesce small files during write operations, reducing the small file problem that frequently plagues streaming ingestion workloads. The autoOptimize.autoCompact property enables background compaction of files that have accumulated between optimization runs.
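
As an illustration, the table definition might be issued from a Fabric notebook along these lines; the table name and columns are assumptions based on the fields discussed in this article, not a prescribed schema.

```python
# Run from a Fabric notebook attached to the Lakehouse; `spark` is the
# notebook's SparkSession. Columns beyond the partition keys are illustrative.
spark.sql("""
    CREATE TABLE IF NOT EXISTS store_transactions (
        store_id    STRING,
        event_date  DATE,
        event_ts    TIMESTAMP,
        op          STRING,   -- c / u / d from the Debezium envelope
        payload     STRING    -- remaining business columns omitted for brevity
    )
    USING DELTA
    PARTITIONED BY (store_id, event_date)
    TBLPROPERTIES (
        'delta.autoOptimize.optimizeWrite' = 'true',
        'delta.autoOptimize.autoCompact'   = 'true'
    )
""")
```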

When EventStream writes a record with store_id equal to store_058 and this value has never been seen before, Delta Lake creates the partition directory structure automatically. The first write creates the directory, and subsequent writes append to files within that directory. From the perspective of downstream queries, the new store's data is immediately available without any schema changes or administrative intervention.

EventStream Processing Logic

The EventStream configuration bridges the gap between Event Hubs and the Lakehouse. It must parse the Debezium change event format, extract the relevant fields including the store identifier, and route the data to the appropriate Delta table. The transformation logic should extract the store identifier from the source metadata section of the Debezium payload. This is typically found at a path like source.db within the JSON structure.

The operation type indicating whether the change is an insert, update, or delete should be preserved for downstream CDC merge processing. A derived column for the event date should be computed from the event timestamp. This serves as the secondary partition key and enables time-based data management. The date extraction should use the source system timestamp rather than the ingestion timestamp to ensure that data is partitioned based on when the business event occurred rather than when it was processed.

The destination configuration specifies the Delta table and the partition columns. EventStream handles the actual write operations, and Delta Lake handles the partition management automatically based on the values present in each record.
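
EventStream itself is configured through the Fabric interface rather than in code, but the same logic can be sketched as a PySpark Structured Streaming job, which is a useful way to reason about the transformation. The Kafka-compatible Event Hubs endpoint, topic name, and envelope fields below are assumptions consistent with the earlier sketches, and authentication options are omitted.

```python
from pyspark.sql import functions as F
from pyspark.sql.types import LongType, StringType, StructField, StructType

# Minimal slice of the Debezium envelope used here; real payloads carry more fields.
envelope = StructType([
    StructField("op", StringType()),
    StructField("source", StructType([
        StructField("db", StringType()),     # database name = store identifier
        StructField("ts_ms", LongType()),    # source-system timestamp in milliseconds
    ])),
    StructField("after", StringType()),      # row image kept as raw JSON for brevity
])

raw = (spark.readStream
       .format("kafka")  # Event Hubs exposes a Kafka-compatible endpoint; auth options omitted
       .option("kafka.bootstrap.servers", "<namespace>.servicebus.windows.net:9093")
       .option("subscribe", "retail.all_stores")
       .load())

events = (raw
    .select(F.from_json(F.col("value").cast("string"), envelope).alias("e"))
    .select(
        F.col("e.source.db").alias("store_id"),
        (F.col("e.source.ts_ms") / 1000).cast("timestamp").alias("event_ts"),
        F.col("e.op").alias("op"),           # preserved for downstream CDC merges
        F.col("e.after").alias("payload"),
    )
    .withColumn("event_date", F.to_date("event_ts")))   # partition on the business date

(events.writeStream
    .format("delta")
    .option("checkpointLocation", "Files/checkpoints/store_transactions")  # placeholder path
    .partitionBy("store_id", "event_date")
    .toTable("store_transactions"))
```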

Best Practices for Production Deployment

Naming Convention Enforcement

The automatic detection pattern is only as reliable as the naming convention it depends on. Before implementing this architecture, establish and enforce a strict naming convention for store databases. Document the convention, communicate it to all teams that provision databases, and implement validation checks in the database provisioning process.

The naming convention should be simple and unambiguous. A pattern like store_NNN where NNN is a zero-padded three-digit number provides clear structure and allows for up to 999 stores without format changes. Avoid conventions that might conflict with other databases or that include characters with special meaning in regex patterns.
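
A provisioning-time check for the convention can be as small as a regular expression; the pattern below assumes the store_NNN form described above.

```python
import re

# Accept exactly "store_" followed by a zero-padded three-digit number.
STORE_DB_PATTERN = re.compile(r"^store_\d{3}$")

def is_valid_store_db(name: str) -> bool:
    """Return True if the database name follows the agreed convention."""
    return STORE_DB_PATTERN.fullmatch(name) is not None

assert is_valid_store_db("store_058")
assert not is_valid_store_db("store_58")      # not zero-padded
assert not is_valid_store_db("store_backup")  # would otherwise match store_.*
```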

Schema Standardization

Automatic source detection assumes that all detected sources share a compatible schema. If store_058 has different table structures than store_001, the downstream processing will fail or produce incorrect results. Schema standardization must be enforced at the source system level before relying on automatic detection.

Implement schema validation as part of the store database provisioning process. When a new store database is created, it should be created from a template that guarantees schema compatibility. If schema migrations are necessary, they should be applied uniformly across all store databases before being reflected in the CDC configuration.
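
One way to sketch such an audit is to compare each store database's column inventory against the canonical template via information_schema; the driver, connection string, and canonical column set below are placeholders.

```python
import psycopg2  # assumed driver; any PostgreSQL client library works

# Illustrative subset of the canonical (table, column) pairs from the template.
CANONICAL = {
    ("transactions", "id"), ("transactions", "amount"),
    ("inventory", "id"), ("products", "id"),
}

def store_columns(dsn: str) -> set:
    """Return the (table, column) pairs present in the store's public schema."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute("""
            SELECT table_name, column_name
            FROM information_schema.columns
            WHERE table_schema = 'public'
        """)
        return set(cur.fetchall())

def audit_store(dsn: str) -> None:
    """Print any drift between a store database and the canonical schema."""
    actual = store_columns(dsn)
    missing, extra = CANONICAL - actual, actual - CANONICAL
    if missing or extra:
        print(f"Schema drift in {dsn}: missing={missing}, extra={extra}")
```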

Partition Key Selection

The choice of partition keys has significant implications for both storage efficiency and query performance. The store identifier is the natural first-level partition because it provides the strongest cardinality and aligns with common query patterns such as analyzing a specific store's performance. Date as a secondary partition enables efficient time-range queries and simplifies data lifecycle management. Retention policies can be implemented by dropping old date partitions rather than scanning and deleting individual records.

However, the combination of store and date partitions can produce a large number of partition directories. With 100 stores and 365 days of retained data, the table would have 36,500 partitions. While Delta Lake handles this reasonably well, query planning overhead increases with partition count. Consider using month rather than day as the secondary partition if the total partition count becomes problematic. This reduces the partition count by a factor of roughly 30 while still enabling reasonably efficient time-based queries and lifecycle management.

Monitoring and Alerting

Automatic detection reduces operational burden but does not eliminate the need for monitoring. Implement alerting for several key scenarios. First, monitor for new store detection. While the system handles new stores automatically, operations teams should be notified when a new store begins ingesting data. This serves as a sanity check that the detection is working and provides visibility into the growth of the system.

Second, monitor for schema compatibility failures. If a new database is detected but its schema does not match expectations, the CDC process may fail or produce malformed data. Alerting on processing errors helps catch these issues quickly.

Third, monitor for snapshot completion. When a new store database triggers an initial snapshot, track the snapshot progress and completion. Extended snapshot times may indicate unusually large source tables or performance issues that warrant investigation.

Fourth, monitor partition growth. While dynamic partitioning is convenient, runaway partition creation can indicate a problem such as incorrect store identifiers being generated. Alert if the number of distinct store partitions grows faster than expected based on the known rate of new store openings.
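
The last check can be a few lines scheduled in a notebook; the table name, expected store count, and alert margin are assumptions to be replaced with operational values.

```python
# Compare the number of distinct store partitions against the known store count.
EXPECTED_STORES = 57          # maintained by the operations team
ALERT_MARGIN = 3              # tolerate a few in-flight store openings

observed = spark.sql(
    "SELECT COUNT(DISTINCT store_id) AS n FROM store_transactions"
).first()["n"]

if observed > EXPECTED_STORES + ALERT_MARGIN:
    # Hook this into whatever alerting channel the team uses.
    print(f"ALERT: {observed} store partitions observed, expected ~{EXPECTED_STORES}")
```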

Initial Snapshot Planning

The initial snapshot that occurs when a new database is detected can be resource-intensive for both the source database and the CDC infrastructure. Plan for this by establishing a new store onboarding window during off-peak hours when the impact of snapshot processing is minimized.

Consider implementing a two-phase onboarding process for stores with large historical datasets. In the first phase, configure the database but exclude it from the Debezium include pattern. Perform a bulk historical load using batch processing, which can be throttled and scheduled more flexibly than the streaming snapshot. In the second phase, add the database to the include pattern to begin capturing incremental changes. This approach reduces the load on the streaming infrastructure while still achieving complete data capture.
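
The first phase of that approach might resemble the following batch load, which reads the new store's history over JDBC and appends it to the partitioned Lakehouse table; the connection details, source column names, and throttling settings are placeholders.

```python
from pyspark.sql import functions as F

# Phase one: bulk-load history for a store that is not yet in the CDC include pattern.
history = (spark.read
    .format("jdbc")
    .option("url", "jdbc:postgresql://retail-db.internal:5432/store_058")  # placeholder
    .option("dbtable", "public.transactions")
    .option("user", "loader")
    .option("password", "********")
    .option("fetchsize", "10000")   # keeps pressure on the source database modest
    .load())

(history
    .select(
        F.lit("store_058").alias("store_id"),
        F.to_date("created_at").alias("event_date"),     # assumed source timestamp column
        F.to_timestamp("created_at").alias("event_ts"),
        F.lit("r").alias("op"),                           # 'r' marks snapshot-style records
        F.to_json(F.struct("*")).alias("payload"),
    )
    .write.format("delta")
    .mode("append")
    .saveAsTable("store_transactions"))
```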

Capacity Planning

Dynamic detection and partitioning enable easy scaling in terms of configuration, but the underlying infrastructure must still be sized appropriately. Event Hubs throughput units must accommodate the aggregate event volume from all stores. Fabric capacity units must handle the combined processing load of EventStream and Delta Lake operations.

Develop a capacity model that estimates resource requirements per store. Multiply by the current store count plus a growth buffer to determine infrastructure sizing. Review and adjust this model as actual usage patterns become clear.
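
A first-pass capacity model can be plain arithmetic; the per-store figures below are placeholders to be replaced with measured values, while the per-throughput-unit limits reflect Event Hubs' published ingress capacity of roughly 1 MB/s or 1,000 events per second.

```python
import math

# Placeholder per-store figures; replace with measured averages.
EVENTS_PER_SEC_PER_STORE = 15
AVG_EVENT_BYTES = 1_200
STORES_NOW = 57
GROWTH_BUFFER = 1.5            # plan for 50% more stores than today

stores = math.ceil(STORES_NOW * GROWTH_BUFFER)
events_per_sec = stores * EVENTS_PER_SEC_PER_STORE
mb_per_sec = events_per_sec * AVG_EVENT_BYTES / 1_000_000

# One standard throughput unit handles ~1 MB/s or ~1,000 events/s of ingress,
# whichever limit is reached first.
tus = max(math.ceil(mb_per_sec / 1.0), math.ceil(events_per_sec / 1_000))
print(f"{stores} stores -> ~{events_per_sec} events/s, ~{mb_per_sec:.1f} MB/s, ~{tus} TUs")
```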

Risk Assessment and Mitigation

Uncontrolled Source Proliferation

The convenience of automatic detection carries a risk of unintended sources being captured. If the naming convention is not strictly enforced, or if the regex pattern is too broad, databases that should not be ingested may be detected and processed.

Mitigation involves implementing strict naming convention governance and using precise regex patterns. The pattern should be as specific as possible while still accommodating legitimate variations. Consider implementing a whitelist in addition to the pattern match, where new databases matching the pattern are flagged for approval before ingestion begins. This adds a manual step but provides a safety checkpoint.

Schema Drift Between Sources

Over time, individual store databases may drift from the standard schema due to local modifications, failed migrations, or version inconsistencies. When the CDC process encounters unexpected schema elements, it may fail or produce incorrect data.

Mitigation requires implementing schema validation at both the source and destination. At the source, periodic schema audits should compare each store database against the canonical schema and flag deviations. At the destination, schema evolution policies in Delta Lake should be configured to reject incompatible changes rather than silently accepting them. The merge schema option should be used cautiously and only when schema evolution is intentional.

Partition Explosion

Dynamic partition creation can lead to an excessive number of partitions if the partition key has unexpectedly high cardinality or if erroneous data introduces spurious partition values. A misconfigured pipeline might create thousands of partitions, degrading query performance and complicating data management.

Mitigation involves implementing partition count monitoring with alerts at defined thresholds. Additionally, validate partition key values before writing to Delta Lake. If a store identifier does not match the expected format, reject the record or route it to an error table for investigation rather than creating an erroneous partition.
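
A sketch of that validation, reusing the events stream from the EventStream section: records with a well-formed store_id flow to the main table, everything else to a quarantine table (both table names are assumptions).

```python
from pyspark.sql import functions as F

VALID_STORE_ID = r"^store_\d{3}$"   # same convention enforced at provisioning time

valid = events.filter(F.col("store_id").rlike(VALID_STORE_ID))
invalid = events.filter(F.col("store_id").isNull() | ~F.col("store_id").rlike(VALID_STORE_ID))

# Well-formed records go to the main table and create partitions as usual.
(valid.writeStream
    .format("delta")
    .option("checkpointLocation", "Files/checkpoints/store_transactions")
    .partitionBy("store_id", "event_date")
    .toTable("store_transactions"))

# Everything else is quarantined for investigation instead of creating partitions.
(invalid.writeStream
    .format("delta")
    .option("checkpointLocation", "Files/checkpoints/store_transactions_errors")
    .toTable("store_transactions_errors"))
```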

CDC Lag During Snapshot

When a new database triggers an initial snapshot, the Debezium connector dedicates resources to reading the full table contents. During this period, change events from other databases may experience increased latency. In severe cases, the replication slot lag may grow to problematic levels.

Mitigation involves scheduling new database provisioning during low-activity periods, sizing the CDC infrastructure with headroom for snapshot operations, and monitoring replication slot lag with alerts at defined thresholds. For very large initial loads, consider the two-phase onboarding approach described earlier.

Event Hubs Partition Affinity

Event Hubs uses partitions to parallelize message processing. If all messages are routed to a single partition, throughput is limited to what that partition can handle. If messages are distributed across partitions without regard to ordering requirements, related events may be processed out of order.

Mitigation involves configuring the Kafka producer used by Debezium so that the store identifier serves as the message partition key. This ensures that all events for a given store are routed to the same Event Hubs partition, preserving ordering within each store while distributing load across partitions for different stores. The number of Event Hubs partitions should be set high enough to accommodate the expected number of stores with room for growth: in the basic and standard tiers, partition count cannot be increased after the event hub is created, and only the premium and dedicated tiers support adding partitions later.

Orphaned Replication Slots

If a store database is decommissioned but the replication slot is not cleaned up, PostgreSQL continues to retain write-ahead log segments for the orphaned slot. Over time, this can fill the disk and cause database outages.

Mitigation requires implementing a decommissioning procedure that includes replication slot cleanup. Monitor for inactive replication slots and alert when a slot has not been read from in an extended period. Consider implementing automatic slot cleanup for slots that have been inactive beyond a defined threshold, though this should be done cautiously to avoid accidentally removing slots that are temporarily inactive due to maintenance.
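
A periodic check against pg_replication_slots surfaces both inactive slots and the WAL they are pinning; the driver and connection string below are placeholders.

```python
import psycopg2  # assumed driver

SLOT_QUERY = """
    SELECT slot_name,
           active,
           pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
    FROM pg_replication_slots
    ORDER BY pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) DESC
"""

with psycopg2.connect("postgresql://monitor@retail-db.internal/postgres") as conn:
    with conn.cursor() as cur:
        cur.execute(SLOT_QUERY)
        for slot_name, active, retained_wal in cur.fetchall():
            if not active:
                print(f"WARNING: slot {slot_name} is inactive and retaining {retained_wal} of WAL")
```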

Operational Procedures

Adding a New Store

The procedure for adding a new store is intentionally minimal. The DBA creates the store database following the established naming convention. The database should be created from the standard template to ensure schema compatibility. Within minutes of database creation, Debezium detects the new database and begins the initial snapshot. Operations teams receive a notification of the new store detection.

The snapshot progresses, with completion typically occurring within 30 minutes for a standard store data volume. Once the snapshot completes, incremental CDC begins. The first records arriving at the Lakehouse trigger automatic partition creation. From this point forward, the new store's data is fully integrated into the analytics platform with no additional intervention required.

Removing a Store

Store removal requires more deliberate action than store addition. First, stop ingesting new data by either renaming the database to no longer match the include pattern or by dropping the database entirely. Second, drop the replication slot associated with the store to prevent WAL retention issues. Third, decide whether to retain or purge historical data in the Lakehouse.

If historical data should be retained, no action is needed at the Lakehouse level. The partition remains but simply receives no new data. If historical data should be purged, delete the store's records with a predicate on the partition column; Delta Lake has no explicit partition drop command, but a partition-aligned delete is a single atomic transaction against the Delta log. The underlying data files are physically removed by a subsequent VACUUM once the retention period has elapsed.
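
A purge along those lines might look like the following; the table name, store identifier, and retention window are placeholders, and the VACUUM retention should only be shortened deliberately.

```python
# Logical delete: one atomic transaction against the Delta log.
spark.sql("DELETE FROM store_transactions WHERE store_id = 'store_042'")

# Physical cleanup: the deleted files are reclaimed by VACUUM once they fall
# outside the retention window (168 hours is the 7-day default).
spark.sql("VACUUM store_transactions RETAIN 168 HOURS")
```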

Handling Schema Changes

Schema changes require coordination across all store databases and the downstream processing logic. Minor additive changes such as new nullable columns can often be handled through Delta Lake's schema evolution capabilities. The merge schema option allows new columns to be added automatically when encountered.
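
For an intentional additive change, the option is set explicitly on the write; shown here on a batch append for brevity, with the DataFrame and table names assumed.

```python
# Explicitly opt in to schema evolution for an intentional, additive change.
(new_events_df.write
    .format("delta")
    .mode("append")
    .option("mergeSchema", "true")   # new nullable columns are added to the table schema
    .saveAsTable("store_transactions"))
```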

Breaking changes such as column renames, type changes, or column removals require a more deliberate migration process. First, update the downstream processing logic to handle both the old and new schemas. Deploy this change and verify it works with existing data. Then, apply the schema change to source databases in a rolling fashion. Finally, once all sources have been migrated, remove support for the old schema from the processing logic.

Disaster Recovery

The architecture provides several recovery options depending on the failure scenario. If Event Hubs experiences an outage, Debezium stops advancing its replication slot position and resumes publishing once connectivity is restored. Because PostgreSQL retains the unread WAL for the slot, no changes are lost during the outage, though an extended outage causes WAL accumulation on the source database and should be monitored.

If Fabric experiences an outage, events accumulate in Event Hubs up to the retention period. Once Fabric recovers, EventStream resumes processing from its last checkpoint, catching up on accumulated events. The Delta Lake table remains consistent due to its transactional nature.

If a source database is lost, recovery depends on backup strategy. The Lakehouse contains a copy of all ingested data, which can serve as a read-only recovery source. Full database recovery requires restoring from PostgreSQL backups.

Conclusion

Dynamic partitioning and automatic source detection transform multi-store data ingestion from an operational burden into a largely self-managing system. The combination of Debezium's pattern-based database detection with Delta Lake's dynamic partition creation eliminates most manual configuration work while maintaining the flexibility to accommodate growth.

The implementation requires careful attention to naming conventions, schema standardization, and monitoring. The risks are real but manageable with appropriate governance and operational procedures. For organizations operating at scale with dozens or hundreds of similar data sources, this architecture provides a sustainable path to unified analytics without proportional growth in operational overhead.

The key principle underlying this approach is that systems should adapt to data rather than requiring data to conform to rigid system configurations. By designing for automatic detection and dynamic accommodation, the data platform becomes a utility that business operations can leverage without constant engineering involvement. This shift from explicit configuration to pattern-based adaptation is essential for organizations seeking to derive value from data at scale.

Published Jan 22, 2026
