How To: Test an Idea for Data Vault Standards

Hi Everyone!

We want to encourage you to engage in helping evolve the Data Vault Standards.  While this is not a democratic process (we don’t believe in voting for standards), it must be an evolutionary process.  In that light, we want community suggestions.  However, proper rigor must be applied to any new standard or change suggestion that is put forward.

In this post, we share the requirements for suggesting new Data Vault standards.  Please remember they can be standards for: Architecture, Methodology, Modeling, and Implementation.

Please remember that there are 30,000 test cases in place for the basic Data Vault standards offered to the market.  Only standards that stand the test of time, are NON-CONDITIONAL (work regardless of condition), scale in both batch and real-time, and are repeatable will actually work for the community.
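
To make “non-conditional” concrete, here is a minimal sketch in Python; the load_hub function, hash_key function, and row shape are hypothetical illustrations, not part of the official test suite.  The point is ONE code path with no batch-vs-real-time conditionals.

    # Minimal sketch: load_hub, hash_key, and the row shape are hypothetical.
    import hashlib
    from datetime import datetime, timezone

    def hash_key(business_key: str) -> str:
        """Deterministic hash of the business key (same result in any mode)."""
        return hashlib.md5(business_key.strip().upper().encode("utf-8")).hexdigest()

    def load_hub(hub: dict, rows: list) -> None:
        """Insert-only, idempotent hub load; works for 1 row or 1 million rows."""
        for row in rows:
            hk = hash_key(row["business_key"])
            if hk not in hub:  # re-loading the same key is harmless
                hub[hk] = {"business_key": row["business_key"],
                           "load_dts": datetime.now(timezone.utc),
                           "record_source": row["record_source"]}

    hub_customer = {}
    # One real-time message, then a batch file, through the SAME function:
    load_hub(hub_customer, [{"business_key": "CUST-42", "record_source": "CRM"}])
    load_hub(hub_customer, [{"business_key": f"CUST-{i}", "record_source": "CRM"}
                            for i in range(10_000)])
    assert len(hub_customer) == 10_000  # CUST-42 was not duplicated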

For now, if you want to read more, you can do so at DanLinstedt.com (my blog).

For Architecture Suggestions:

  1. Must incorporate a hybrid architecture of NoSQL and relational platforms
  2. Must have clean splits as to the function of each component
  3. Each component must be well defined, succinctly defined, and easily understood
  4. Components can be: Source Systems, Staging Area, Landing Zone, Data Warehouse, Business Warehouse, Information Mart, Data Science Area, Operational Applications, Real-Time Message Queues, Master Data Management Solutions, and ODS (although ODS is on its way out and very rarely used these days)
  5. Must allow data to flow bidirectionally across multiple components (see the sketch after this list).
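
As a concrete illustration of points 3 through 5, here is a minimal sketch in Python; the component names come from the list above, but the registry shape and the flow map are illustrative assumptions, not a prescribed design.

    # Minimal sketch: a component registry with explicit, inspectable flows.
    # The (from, to) pairs are assumptions for illustration; a reversed pair
    # makes a flow bidirectional.
    ARCHITECTURE = {
        "components": {
            "source_systems": "Operational systems of record feeding the pipeline",
            "staging_area": "Transient landing of raw deltas",
            "data_warehouse": "Raw Data Vault: hubs, links, satellites",
            "business_warehouse": "Business Data Vault: rule-applied structures",
            "information_mart": "Consumer-facing delivery structures",
            "operational_apps": "Write-back targets for managed data",
        },
        "flows": {
            ("source_systems", "staging_area"),
            ("staging_area", "data_warehouse"),
            ("data_warehouse", "business_warehouse"),
            ("business_warehouse", "information_mart"),
            ("information_mart", "operational_apps"),
            ("operational_apps", "source_systems"),  # write-back closes the loop
        },
    }

    def flows_from(component: str) -> set:
        """All components a given component is allowed to send data to."""
        return {dst for src, dst in ARCHITECTURE["flows"] if src == component}

    print(flows_from("information_mart"))  # {'operational_apps'}

Keeping the flow map explicit makes the “clean splits” reviewable: any proposed new flow between components is a single visible change.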

For Methodology Suggestions:

  1. Must work well with people, process, and technology
  2. Can be suggested for either people, process, OR technology
  3. Must have succinct, clear, non-redundant definitions.
  4. Must be easy to repeat
  5. Must be pattern based
  6. Must be optimized at CMMI Level 5
  7. Must be measured (have Six Sigma KPI results) to prove that it is optimized
  8. Must be business focused (don’t forget: IT is a Business!!)
  9. Must be well defined and succinctly defined; only non-redundant definitions are accepted.
  10. Must be a methodology-based standard, NOT a framework standard.
  11. Must have TQM KPAs identified, and TQM KPIs for measuring success
  12. Must PROVE (through KPIs) that Cycle Time for build / use / application is reduced (see the sketch after this list).
  13. Must work with split parallel teams or global teams
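
To show what proving cycle-time reduction through KPIs could look like, here is a minimal sketch in Python; the cycle-time samples are invented for illustration, and real evidence would come from measured project data.

    # Minimal sketch: cycle-time KPI before/after a proposed methodology change.
    # The sample values below are hypothetical.
    from statistics import mean, stdev

    before = [38.0, 41.5, 36.0, 44.0, 39.5, 42.0]  # hours per delivered artifact
    after = [21.0, 24.5, 19.0, 23.0, 22.5, 20.0]

    def kpi(samples):
        """Mean and variation: optimization should reduce both (CMMI Level 5 spirit)."""
        return mean(samples), stdev(samples)

    (b_mean, b_sd), (a_mean, a_sd) = kpi(before), kpi(after)
    reduction = 1 - a_mean / b_mean
    print(f"cycle time reduced {reduction:.0%}; variation {b_sd:.1f}h -> {a_sd:.1f}h")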

For Modeling Suggestions:  (Standards applying only to DV modeling)

  1. Must be simple, measured with complexity ratings (KPIs for maintenance)
  2. Must work in batch mode for loading
  3. Must work in Real-Time mode for loading (without changes to the structure) – up to 400k transactions per second.  This requirement excludes deadlock contention on inserts, which is a function of the database platform and not the modeling paradigm.
  4. Must work with sequences, hash keys, and natural business key architectural designs
  5. Must not change the base definition of a Hub, a Link, or a Satellite (for a new object type, a new definition must be applied).  I highly recommend offering an “extended object” or changing the application of the object rather than changing the actual definition of the base objects.  For example: the Hierarchical Link is one such extended definition, as is the Transactional or Non-Historized Link (see the sketch after this list).
  6. Must work with BIG data sets (>300 TB of data)
  7. Must be placed in either Raw DV or Business DV
  8. Must be repeatable
  9. Must be pattern based
  10. Must enable easy back-up and restore
  11. Must support Change Data Capture
  12. Must not break the foundational definitions of each of the core Data Vault objects.
  13. Must work in a LOGICAL and CONCEPTUAL manner.  Please note: implementing DV in a physical model on some platforms like MongoDB requires changes / denormalization at the physical level – the resulting “collections” don’t really resemble pure DV objects.
  14. Must fit within a hierarchy (ontology / taxonomy model)
  15. Must NOT have conditional design rules.  What works for ONE case must work for all.
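
To illustrate item 5 (and the hash-key option from item 4), here is a minimal sketch in Python; the dataclass shapes are assumptions for illustration, not a prescribed physical design.

    # Minimal sketch: base object definitions stay fixed; new behavior arrives
    # as an EXTENDED application of an existing object, never as a redefinition.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass(frozen=True)
    class Hub:
        """Unique list of business keys for one core business concept."""
        hash_key: str
        business_key: str
        load_dts: datetime
        record_source: str

    @dataclass(frozen=True)
    class Link:
        """Unique list of relationships between two or more hubs."""
        hash_key: str
        hub_hash_keys: tuple
        load_dts: datetime
        record_source: str

    @dataclass(frozen=True)
    class Satellite:
        """Descriptive attributes over time for a parent hub or link."""
        parent_hash_key: str
        load_dts: datetime
        record_source: str
        attributes: tuple  # (name, value) pairs

    class HierarchicalLink(Link):
        """Extended application of Link: both hub references point at the SAME
        hub (parent/child).  The base Link definition above is untouched."""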

For Implementation Suggestions:

  (these are the processing standards)

  1. Must work with small and large volumes of data (1 TB to 500 TB)
  2. Must have metrics results proving viability
  3. Must have defined test cases (KPAs) and the results (KPIs) of those test cases presented.
  4. Must work for Real-Time and Batch loading without conditional changes to the processing streams or process design.
  5. Must be repeatable
  6. Must be optimized
  7. Must be fault-tolerant
  8. Must be restartable (WITHOUT CHANGING INCOMING DATA SETS!!)
  9. Must work with Change Data Capture
  10. Must be tested against multiple platforms, and with multiple tools.  For example: what works in Teradata must also work in Oracle and in SQL Server WITHOUT changes to the standard!!  NOTE: if it’s a new NoSQL platform like Neo4j, a physical implementation standard may be directed solely at a single platform.  However, please note: the more a standard is focused on a single platform, the less it is a standard and the more it is simply a “best practice” for that particular platform – so there IS a DISTINCTION between best practices and standards.
  11. If it works in C#, it must work in Perl, Ruby, Python, Java, JavaScript, SQL, etc.
  12. Must be tested for backup and restore
  13. Must meet CMMI Level 5 Optimizations

The whole point of Data Vault implementation standards is to be platform-agnostic, technology-agnostic, repeatable, fault-tolerant, and scalable.  Again, if it’s platform-specific, then most likely it is a best practice for a specific platform and NOT a standard.
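
As one concrete illustration of the restartability requirement (items 5, 7, and 8 above), here is a minimal sketch in Python; the crash point, hash function, and in-memory “table” are simulated assumptions, not a prescribed implementation.

    # Minimal sketch: after a mid-stream crash, the SAME unchanged input is
    # simply re-run, and the target converges with no duplicates.
    import hashlib

    def hk(business_key: str) -> str:
        return hashlib.md5(business_key.encode("utf-8")).hexdigest()

    incoming = [f"CUST-{i}" for i in range(100)]  # the incoming set never changes
    hub = {}

    def load(rows, fail_after=None):
        for n, bk in enumerate(rows):
            if fail_after is not None and n == fail_after:
                raise RuntimeError("simulated mid-load crash")
            hub.setdefault(hk(bk), bk)  # insert-if-absent: idempotent

    try:
        load(incoming, fail_after=40)  # first run dies at row 40
    except RuntimeError:
        pass
    load(incoming)  # restart: same data, no edits, no special recovery path
    assert len(hub) == 100  # complete, and no duplicates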

Questions to ASK yourself about your Idea:

Here are some questions I typically ask of the “suggested change / new standard”:

  1. Does it negatively impact the agility or productivity of the team?
  2. Can it be automated for 98% or better of all cases put forward?
  3. Is it repeatable?
  4. Is it consistent?
  5. Is it restartable without massive impact? (when it comes to workflow processes)
  6. Is it cross-platform?  Does it work regardless of platform implementation?
  7. Can it be defined ONCE and used many times? (goes back to repeatability)
  8. Is it easy to understand and document?  (if not, it will never be maintainable, repeatable, or even automatable)
  9. Does it scale without re-engineering? (for example: can the same pattern work for 10 records, as well as 100 billion records without change?)
  10. Does it handle alterations / iterations with little to no re-engineering?
  11. Can this “model” be found in nature?  (the “model” might be a process, data, a design, a method, or otherwise; “nature” means reality, beyond the digital realm)
  12. Is it partitionable?  Shardable?
  13. Does it adhere to MPP mathematics and data distribution?
  14. Does it adhere to Set Logic Mathematics?
  15. Can it be measured by KPIs?
  16. Is the process / data / method auditable?  If not, what’s required to make it auditable?
  17. Does it promote / provide a basis for parallel independent teams?
  18. Can it be deployed globally?
  19. Can it work on hybrid platforms seamlessly?

 
