Operational Data Store: Our Use Case

Shalini Pahwa
4 min read · Mar 26, 2024
https://www.mongodb.com/collateral/implementing-an-operational-data-layer

We needed to work on an ODS layer, and it was a new term for me 🤔. I want to share what I learned and the use case where we are implementing it. I will add another article once we complete this project :-)

Problem statement:

· Need for real-time search and analytical reporting

· Legacy data models are difficult to change; any change might break several apps

· Delays in processing

· Inaccurate data

· Scalability issues

· Data not available on time

An ODS layer addresses all of these issues.

Pros of ODS Layer

· Build new business functionality 3–5x faster

· Scale to millions of users (on-premises or in the cloud)

· Cut costs

· Provide the benefits of modernization without the risk of a full rip-and-replace

· Move legacy apps to the cloud

· Give a full picture of enterprise data for analysis and analytics

Use Cases

Legacy Modernization

An ODL serves new or improved data consumers.

Why can't legacy systems handle new consumers?

· New consumers may strain their capacity

· They can become single points of failure

· New apps may require redesigning the application's data model

Gradually, existing workloads can be shifted to the ODL. Eventually, the ODL can be promoted to a system of record, and the legacy systems can be decommissioned.

Data as a Service

The ODL gathers all important data in one place.

Data Access APIs on top of the ODL provide a common set of methods for working with this data.

The APIs become reusable components for various enterprise data-fetch operations.

New analyses can be run and new insights generated.
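As a rough illustration of what "Data Access APIs on top of the ODL" can mean, here is a minimal sketch. The in-memory `odl` dict, the collection name, and the field names are all assumptions for illustration; a real ODL would be backed by a database such as MongoDB.

```python
# Hypothetical sketch: a thin Data Access API layered over an ODL.
# The in-memory `odl` dict stands in for the real data store.

odl = {
    "customers": [
        {"id": 1, "name": "Ada", "segment": "retail"},
        {"id": 2, "name": "Bob", "segment": "corporate"},
    ]
}

def get_entity(collection, entity_id):
    """Common read method every consumer reuses, instead of each
    consumer querying the source systems directly."""
    for record in odl.get(collection, []):
        if record["id"] == entity_id:
            return record
    return None

def find_entities(collection, **filters):
    """Common filtered-search method shared by all consumers."""
    return [r for r in odl.get(collection, [])
            if all(r.get(k) == v for k, v in filters.items())]

print(get_entity("customers", 1))
print(find_entities("customers", segment="corporate"))
```

Because every consumer goes through the same two methods, a schema change in a source system only has to be absorbed once, in the API layer, rather than in every consuming app.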

Cloud Data Strategy

Cloud-hosted ODL

When legacy on-prem systems canā€™t be migrated but new applications are being deployed in the cloud, a cloud-hosted ODL can provide an intermediary layer.

Existing enterprise data is federated to the ODL in the cloud, which makes it available to new cloud-native applications.

Source systems remain in place on-premises and can be decommissioned over time as the ODL becomes the system of record.

This provides a gradual, non-disruptive approach to cloud transformation.

Single view

Often, this entity is a customer, and the term "customer 360" is sometimes used. But organizations may also develop single views of products, financial assets, or other entities relevant to the business.

That data is then exposed as a single view via an API to the users who would benefit from it.
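A single view is essentially a merge of per-source fragments for the same entity. Here is a minimal sketch of a "customer 360" merge; the source names (`crm`, `billing`), the customer ID, and the fields are illustrative assumptions, not from the article.

```python
# Hypothetical sketch: building a "customer 360" single view by
# merging records for the same customer from two source systems.

crm = {"cust-42": {"name": "Ada Lovelace", "email": "ada@example.com"}}
billing = {"cust-42": {"plan": "premium", "balance": 120.50}}

def single_view(customer_id):
    """Merge each source's fragment into one document that the
    single-view API exposes to consumers."""
    view = {"customer_id": customer_id}
    for source in (crm, billing):
        view.update(source.get(customer_id, {}))
    return view

print(single_view("cust-42"))
```

In a real implementation the hard part is entity resolution (deciding that two source records refer to the same customer); the merge itself is usually the easy step.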

AI-enriched applications

An ODL is a perfect fit for bringing enterprise data together from legacy systems, making it available both for model training and for building new AI-powered features in our applications.

Short-running operational queries against the ODL are routed to transactional nodes, while longer-running and more complex AI or analytics queries are routed to dedicated analytics nodes.

This approach ensures you can service different workloads from a single ODL without them competing for resources.
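The routing idea above can be sketched as a simple dispatcher. The node names and the classification rule below are assumptions for illustration; in MongoDB this kind of separation is typically configured declaratively (e.g. via read preferences targeting dedicated analytics nodes) rather than in application code.

```python
# Hypothetical sketch of workload routing: short operational queries
# go to transactional nodes, long analytics/AI queries go to
# dedicated analytics nodes, so the two never compete for resources.

TRANSACTIONAL_NODES = ["txn-node-1", "txn-node-2"]
ANALYTICS_NODES = ["analytics-node-1"]

def route(query_kind):
    """Pick a node pool based on the workload type."""
    if query_kind in ("lookup", "insert", "update"):
        return TRANSACTIONAL_NODES
    if query_kind in ("aggregation", "report", "model-training"):
        return ANALYTICS_NODES
    raise ValueError(f"unknown workload: {query_kind}")

print(route("lookup"))
print(route("report"))
```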

Mainframe modernization

An ODL makes it significantly easier to serve mainframe data to new digital channels without straining legacy systems.

Batch extract and load: typically used as an initial, one-time operation to load data from the source systems. Batch operations extract all required records from the source systems and load them into the Operational Data Layer for subsequent merging.

Delta load: with a delta load, you process only the data that actually needs to be processed, either new data or changed data.
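The two load patterns above can be sketched together. This is a minimal illustration, assuming an `updated_at` watermark on each source record; real mainframe extracts would use change-data-capture or file-based extracts instead of an in-memory list.

```python
# Hypothetical sketch: initial batch extract-and-load, then delta
# loads that pick up only records whose `updated_at` is newer than
# the watermark of the last successful run.

source = [
    {"id": 1, "balance": 100, "updated_at": 10},
    {"id": 2, "balance": 250, "updated_at": 12},
]
odl = {}

def batch_load():
    """One-time initial load: copy every source record into the ODL."""
    for rec in source:
        odl[rec["id"]] = rec
    return max(r["updated_at"] for r in source)  # new watermark

def delta_load(watermark):
    """Incremental load: process only new or changed records."""
    changed = [r for r in source if r["updated_at"] > watermark]
    for rec in changed:
        odl[rec["id"]] = rec
    return changed

wm = batch_load()
source.append({"id": 3, "balance": 75, "updated_at": 15})  # new record arrives
print(delta_load(wm))  # only the record past the watermark is processed
```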

Operational Data Layer evolves over time


Initial use cases for an ODL are often tightly scoped, with one (or a few) source systems and one (or a few) consuming systems.

Once the ODL has proven its value, a logical next step is to enrich its data by adding useful metadata or integrating new (related) data sources.

Offloading reads and writes: in this phase, when a given consuming system performs a write, it goes to both the ODL and the source system, either directly from application logic or via a stream-messaging system.
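The dual-write phase can be sketched in a few lines. The two dict "stores" below stand in for the real ODL and legacy databases; this is an assumption-laden illustration, not the article's implementation.

```python
# Hypothetical sketch of the dual-write phase: each write from a
# consuming application is applied to both the ODL and the legacy
# source system, keeping them in step during the transition.

odl, legacy = {}, {}

def dual_write(key, value):
    """Write to the ODL and the source system together. In practice
    this is often routed through a stream-messaging system (e.g.
    Kafka) rather than called directly from application logic."""
    odl[key] = value
    legacy[key] = value

dual_write("order-7", {"status": "shipped"})
print(odl["order-7"], legacy["order-7"])
```

The risk with direct dual writes is partial failure (one store updated, the other not), which is one reason a messaging system with retries is often preferred here.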

ODL first: by default, all writes are directed to the ODL. Where necessary, changes are routed from the ODL back to the source systems using change streams and the Kafka source connector, either so that legacy applications can continue to rely on a source system before being ported to the ODL, or merely as a fallback in case it is needed.

System of Record: Ultimately, the Operational Data Layer can evolve to serve as the System of Record.


In our case, this could be the future architecture:

Future ODS architecture

We chose MongoDB for the ODS layer, although there are a few alternatives.

Steps towards it:

ODS initial steps

Training

Implementation 😊

References

https://www.snowflake.com/en/data-cloud/workloads/unistore/

https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/f1634737105574.afryx_odl_consulting

https://www.mongodb.com/collateral/implementing-an-operational-data-layer
