Lead the Future of Analytics

Elevate your data architecture skills with our Certified Data Vault 2.1 Practitioner course, where you’ll master the integration of Data Lakes, Data Mesh, and Data Virtualization. This course offers a blend of theoretical knowledge and practical application, preparing you to lead innovative data projects in your organization. Discover how to transform data into actionable insights and strategic decisions—start your journey towards becoming a data visionary today!

Stay Ahead of the Curve!

Level Up Your BI Skill Set!

Master Cutting-Edge Techniques

Stay ahead of industry trends by learning the latest methodologies in DV2.1, ensuring your skills remain in high demand.

Hands-On Experience

Apply your knowledge through interactive workshops, gaining valuable experience that directly translates to real-world applications.

Build Professional Networks

Connect with peers and industry experts in a supportive community, enhancing your learning and career opportunities.

Flexible Learning Format

Benefit from the convenience of online and instructor-led sessions that fit your busy schedule, making advanced education more accessible.

Start With Self-Paced Training

Online (Part 1)

Fundamental Learning Themes

Master essential data management terminology and principles, including data lakes, lakehouses, hubs, mesh, fabric, and virtualization, in our Certified Data Vault 2.1 Practitioner course. Learn the Data Vault 2.1 methodology, architecture, modeling, implementation, and governance to manage complex data environments effectively and align strategies with business objectives. Gain practical skills for data storage, processing, and analysis, with an emphasis on auditability, scalability, and performance in real-world applications.

Lessons and Topics:

  • Class Introduction
    • About This Course
    • Introductory Definitions
    • Common Themes for this Course
  • What is Data Vault 2.1?
    • Learning Objectives
    • Introducing the Methodology
    • Introducing the Architecture
    • Introducing the Modeling
    • Introducing the Implementation
    • Introducing Governance
    • Summary and Review Quiz

Unlock the full potential of your data management skills with this lesson on integrating Data Lake systems architecture with Data Vault 2.1. You’ll gain insights into building robust, scalable data solutions, mastering real-time streaming, ensuring data security, and applying data science principles. Perfect for those looking to lead innovative data projects, this lesson equips you to tackle modern data challenges efficiently and effectively.

Lessons and Topics:

  • Systems Architecture and Methodology
    • Learning Objectives
    • Data Lakes and DV2.1 Architecture
    • Security in the Architecture
    • TQM and BA Responsibilities
    • Understanding Business Rules
    • Delta Processing and CDC
    • Data Vault Architectural Components
    • Divide and Conquer in the Architecture
    • Real-Time Streaming Introduction
    • Data Science and where it fits
    • Master Data in the Architecture
    • Data Virtualization / Fabric Architecture
    • Landing Zone Data Flows
    • Summary and Review Quiz

Accelerate your data analytics build cycles with this lesson, which focuses on strategic Data Vault implementations and self-service analytics. Learn to manage self-service platforms, develop robust data taxonomies, and ensure efficient governance aligned with business goals. Master the distinctions between managed and simple self-service and understand the critical role of write-back enablement in capturing enterprise knowledge. This lesson equips you to oversee compliant and impactful data initiatives, transforming raw data into valuable insights efficiently.

Lessons and Topics:

  • Managed Self Service Analytics
    • Learning Objectives
    • Defining Self Service Analytics
    • Kids and Finger-painting
    • Food Growing: An Analogy
    • Managed SSA and Write Back
    • Managed SSA Risks and Requirements
    • Summary and Review Quiz
  • Ontologies and Taxonomies
    • Learning Objectives
    • Intro to Taxonomies and Ontologies
    • How to Build an Ontology
    • Executing a Profiling Sprint
    • How to Extend an Ontology
    • Building the Business Matrix & Logical Model
    • Summary and Review Quiz

Fill your knowledge gaps in data modeling and agile delivery with this comprehensive lesson. Understand the foundational concepts of data modeling, including Normalized Forms and the layers of Conceptual, Logical, and Physical Modeling, essential for effective database design. Dive into advanced strategies like Dimensional, Data Vault, CIF, and NoSQL Modeling to manage complex enterprise data systems. Additionally, learn how Agile principles and Disciplined Agile Delivery integrate with the DV2.1 methodology to optimize your data projects, ensuring scalable and high-quality outcomes across distributed teams.

Lessons and Topics:

  • Introduction to Data Modeling Styles and Forms
    • Normalized Forms
    • Intro to Conceptual, Logical and Physical
    • Dimensional Data Modeling
    • Data Vault Data Modeling (brief overview)
    • CIF Modeling (3NF + Time)
    • NoSQL Modeling (key-value, wide-column, graph, document)
    • Graph Modeling
  • Agile Delivery and DV2.1 Methodology
    • Defining Agility
    • Agile Manifesto Principles
    • Disciplined Agile Delivery Concepts
    • Mitigating Analysis Paralysis
    • Parallel Teams (Product Teams)
    • Summary and Review Questions

Skip this lesson and you will miss the pivotal role a Business Data Vault plays in your data strategy. You’ll learn how it enhances data quality, governance, and stewardship while reducing technical debt through structured rule categorization and strategic considerations. The lesson also introduces the Information Delivery Layer, essential for interacting with processed data via integrated EDWs, and covers Information Marts: their principles, the benefits and risks of virtualizing them, and how to optimize their performance while maintaining security.

Lessons and Topics:

  • Intro to Business Data Vault
    • Defining Business Data Vault
    • Understanding the Business DV
    • Importance of Business DV
    • Types of Tech Debt
    • Reducing Tech Debt with Business DV
    • Rule Categorization in Data Systems
    • Strategic Considerations for BDV
    • Step-By-Step Build Overview
    • Summary and Quiz Questions
  • Information Marts Defined
    • Intro to Information Marts
    • Principles of the Information Mart
    • Rationale for Building Info Marts
    • Benefits and Risks of Virtualizing
    • Security and Privacy in Info Marts
    • Importance of Views in Info Marts
    • Significance of Logical Models over Physical
    • Optimizing Performance in Info Marts
    • Recommended Practices for Info Marts
    • Summary and Quiz Questions
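
To make the idea of a virtualized information mart concrete, here is a minimal sketch: a dimension exposed as a SQL view over a hypothetical hub and its satellite (all table and column names are illustrative, not taken from the course material):

    -- Virtual information mart: a customer dimension exposed as a view
    -- over a hypothetical hub_customer and sat_customer.
    CREATE VIEW dim_customer AS
    SELECT h.hub_customer_hk AS customer_key,
           h.customer_bk     AS customer_number,
           s.customer_name,
           s.customer_segment
    FROM   hub_customer h
    JOIN   sat_customer s
      ON   s.hub_customer_hk = h.hub_customer_hk
    WHERE  s.load_dts = (SELECT MAX(s2.load_dts)   -- current row only
                         FROM   sat_customer s2
                         WHERE  s2.hub_customer_hk = s.hub_customer_hk);

Because the mart is a view, changing a presentation rule means redeploying SQL rather than reloading data, which is both the main benefit and, at scale, the main performance risk of virtualizing.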

Understanding the value of data as an asset is crucial for maximizing ROI from your Data Vault solution. Learn why managed self-service BI with write-back capabilities is essential for leveraging business intelligence outputs and capturing enterprise knowledge. Explore the importance of business keys in tying data to business processes and discover how to measure and assign value to data, ensuring it is treated as a strategic corporate asset. This knowledge is key for effective data governance, continuous improvement, and maintaining high data quality.

Lessons and Topics:

  • Value of Data As An Asset
    • Intro to Data as a Strategic Asset
    • Identifying Key Data Assets
    • Measuring Data Value
    • Understanding Data Quality and its Impact
    • Cost-Benefit Analysis of Data
    • Data Governance and Stewardship
    • Summary and Quiz Questions
  • Business Processes to Business Keys
    • What is a Business Key?
    • Examples of Business Keys
    • Defining Business Processes
    • Tracking BKs through Business Flows
    • Understanding Passive Integration
    • Where to Find Business Keys
    • Summary and Quiz Questions

Advance your data warehousing skills by learning about the benefits of hashing over sequence identifiers in Data Vault 2.1. This lesson will show you how to use hash keys to improve scalability, data integrity, and performance in your EDW, covering essential techniques for managing hash collisions and understanding the optimal use of hashing functions. You’ll also explore the importance of consistent terminology in DV2.1 to enhance collaboration and ensure data accuracy, with a focus on load dates, applied dates, and record source tracking. Gain practical insights and best practices for implementing these methods effectively in real-world scenarios.

Lessons and Topics:

  • Hashing and Sequences
    • Introduction to Sequence Identifiers
    • Pros and Cons of Sequence Identifiers
    • Introduction to Hashing
    • Pros and Cons of Hashing
    • Why Hashing is Optimal for the EDW
    • Defining a Hash Collision
    • Managing Hash Collisions
    • Practical Examples
    • Summary and Quiz Questions
  • Common Terminology
    • What Terminology Means to the Methodology
    • Terminology’s Impact on Collaboration
    • Common DV2.1 Attributes
    • Intro to Load Dates
    • Intro to Applied Dates
    • Intro to Record Source
    • Hashing Basic Test Cases
    • Hard Rule: Time-Zone Alignment
    • Hard Rule: Date-Time Alignment
    • Hard Rule: Currency Alignment
    • Summary and Quiz Questions
  • Core Data Vault Structures
    • Defining the Hub
    • Defining the Link
    • Defining the Satellite
    • Summary and Review Questions
    • Self-Paced SQL Workshop
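
As a taste of what the hashing and core-structure topics cover, here is a minimal sketch of a hub whose key is a hash of the business key (names are illustrative; an MD5() function is assumed, as exists on PostgreSQL and Snowflake):

    -- Minimal hub: the hash key is computed from the business key.
    CREATE TABLE hub_customer (
        hub_customer_hk CHAR(32)     NOT NULL PRIMARY KEY, -- MD5 hex of the business key
        customer_bk     VARCHAR(50)  NOT NULL,
        load_dts        TIMESTAMP    NOT NULL,
        record_source   VARCHAR(100) NOT NULL
    );

    -- Deterministic: any loader on any platform derives the same key
    -- from the same business key, with no shared sequence generator.
    SELECT MD5(UPPER(TRIM('cust-1042'))) AS hub_customer_hk;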

Mastering hard rules and multi-tenancy in Data Vault 2.1 is crucial for data engineering professionals seeking to optimize large-scale data environments and ensure robust data governance. You’ll learn detailed implementation strategies for system fields, data formatting, standardization, and advanced security practices, all essential for maintaining data integrity and meeting regulatory requirements. This lesson also covers tenant isolation, performance optimization, and compliance in multi-tenant architectures, using real-world case studies and practical examples to highlight best practices. Enroll to manage complex data environments effectively and stay ahead in your field.

Lessons and Topics:

  • Hard Business Rules In Depth
    • Define and Understand Hard Rules
    • Implementation of System Fields
    • Data Formatting and Standardization
    • Applying Security and Obfuscation
    • Performance Optimizations
    • Practical Example
    • Reconciliation back to Source
    • Summary and Review Questions
    • Self-Paced SQL Workshop
  • Multi-Tenancy In Depth
    • Introduction To Multi-Tenancy
    • Design Strategies for MT Data Modeling
    • Tenant Isolation and Security with SQL
    • Scalability and Performance Concerns
    • MT and Regulatory Compliance
    • Summary and Review Questions
    • Self-Paced SQL Workshop
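
To illustrate the kind of hard rule this lesson works through, here is a hedged sketch of a staging load that standardizes without interpreting: keys are trimmed and upper-cased, timestamps are aligned to UTC, and system fields are stamped (names are illustrative; the time-zone syntax is PostgreSQL-style):

    -- Hard rules on the way into staging: standardization only, no business logic.
    INSERT INTO stg_customer (customer_bk, customer_name, applied_dts, load_dts, record_source)
    SELECT UPPER(TRIM(src.customer_number))      AS customer_bk,   -- format standardization
           TRIM(src.customer_name)               AS customer_name,
           src.created_at AT TIME ZONE 'UTC'     AS applied_dts,   -- time-zone alignment
           CURRENT_TIMESTAMP                     AS load_dts,      -- system field
           'CRM.CUSTOMERS'                       AS record_source  -- system field
    FROM   landing_customers src;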

Part 1: The Numbers...

WE'LL HELP YOU ACHIEVE YOUR GOALS

Lessons: Each lesson has learning objectives, a summary, and a quiz to help you review.

Unique Topics: The topics in each lesson help you focus, and each runs between 3 and 5 minutes.

Hours of Learning: Approximately 56 hours of learning time.

Mastering the application of business and BI rules in Data Vault 2.1 is crucial for data engineering professionals aiming to optimize performance and maintain data integrity. You’ll explore practical applications of the 80/20 rule, best practices for implementing business rules in SQL views, and the flexibility BI tools offer for dynamic data exploration. The lesson also covers security implications, governance, and future trends, ensuring you stay ahead in managing data processing rules. Miss it and you miss advanced strategies for balancing control and flexibility in data handling, key to maximizing efficiency and adaptability.

Lessons and Topics:

  • 80/20 Rule for BI and BI Rules
    • Defining Business Rules (Soft Rules)
    • Soft Rules in Views
    • Soft Rules in BI Tooling
    • 80/20 Rule – Where to do What
    • Security Implications of BR Deployment
    • Governance of Business Rules
    • Future Trends in Virtualization/Fabric
    • Summary and Review Questions
    • Self-Paced SQL Workshop
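
A minimal sketch of the 80/20 idea, with an assumed tiering rule deployed as a soft rule in a view rather than baked into the loading code (names and thresholds are invented for illustration):

    -- Soft rule in a view: the interpretation can change tomorrow
    -- without reloading a single row of history.
    CREATE VIEW v_customer_tiered AS
    SELECT customer_key,
           customer_name,
           CASE
               WHEN lifetime_revenue >= 100000 THEN 'PLATINUM'
               WHEN lifetime_revenue >= 10000  THEN 'GOLD'
               ELSE 'STANDARD'
           END AS customer_tier
    FROM   dim_customer;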

Handling multiple systems in a single hub is crucial for optimizing operations across different regions or data centers, especially with applications like SAP or PeopleSoft. This lesson covers the recommended practice of applying the TYPE CODE standard to manage such scenarios efficiently. You’ll learn how to ensure business key uniqueness, integrate business concepts, and design for auditability and resiliency, with practical implementation examples. Missing this lesson means missing out on essential strategies to manage technical debt and maintain robust data systems.

Lessons and Topics:

  • Business Key Collision Code In Depth
    • BKCC Defined
    • Challenge of Technical Debt
    • Business Concept Integration
    • Ensuring Business Key Uniqueness
    • Designing for Auditability and Resiliency
    • Implementation Examples of BKCC
    • Summary and Review Questions
    • Self-Paced SQL Workshop
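
To show the mechanics, here is a hedged sketch of a business key collision code (BKCC) participating in the hub hash, so the same key value from two systems cannot collide (names, the separator, and the MD5() call are assumptions):

    -- 'cust-1042' from SAP Europe and SAP US are different customers:
    -- the collision code is part of the hashed key.
    SELECT MD5(UPPER(TRIM(src.customer_number)) || '||' || src.bkcc) AS hub_customer_hk,
           UPPER(TRIM(src.customer_number))                          AS customer_bk,
           src.bkcc                                                  AS bkcc  -- e.g. 'SAP_EU', 'SAP_US'
    FROM   stg_customer src;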

Understanding how to apply links in Data Vault 2.1 is crucial for effective data modeling and managing complex data relationships. This lesson covers multi-level and flat hierarchies, bill of materials, and master data, providing practical visual examples. You’ll also learn the importance of proper normalization in defining a Unit of Work (UOW) and the consequences of breaking UOW, ensuring data integrity and consistency. Additionally, the lesson explores link-to-link resolution, the benefits of exploration links, and the use cases for non-delta links, equipping you with advanced strategies to optimize your data architecture.

Lessons and Topics:

  • Applying Links
    • Multi-Level Hierarchy Visual & Example
    • Flat Hierarchy Visual & Example
    • Bill of Materials Visual & Example
    • Master Data Visual & Example
    • Summary and Review Questions
    • Self-Paced SQL Workshop
  • Link Unit Of Work
    • Defining Unit Of Work
    • Improper Normalization Example
    • Defining Proper Normalization
    • What Happens When UOW is Broken?
    • Testing the Unit Of Work
    • Summary and Review Questions
    • Self-Paced SQL Workshop
  • Link-To-Link Resolution
    • Origination of Link To Link
    • Reasons Why it Requires Denormalization
    • Defining Proper Denormalization
    • Steps to Remove Link To Link
    • Summary and Review Questions
    • Self-Paced SQL Workshop
  • Exploration Links
    • Data State Transition Diagrams
    • Business Process Example Workflow
    • Standard DV Model Example
    • Defining an Exploration Link
    • Setting up Data State Changes
    • Benefits of Exploration Links
    • Summary and Review Questions
    • Self-Paced SQL Workshop
  • Non-Delta Links
    • Defining a Non-Delta Link
    • Reasons for a Non-Delta Link
    • Reasons when NOT to use a ND-Link
    • Versus PIT / Bridge Hybrid
    • Versus Fact Table
    • ND-Link Example
    • Summary and Review Questions
    • Self-Paced SQL Workshop
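
As a preview of the hierarchy material, here is a minimal sketch of a hierarchical link, the structure used for bill-of-materials style relationships where a hub relates to itself (names are illustrative):

    -- Hierarchical link: hub_part appears twice, once per role.
    CREATE TABLE hlnk_part_bom (
        hlnk_part_bom_hk   CHAR(32)     NOT NULL PRIMARY KEY, -- hash of both keys
        hub_part_hk        CHAR(32)     NOT NULL,             -- component part
        hub_parent_part_hk CHAR(32)     NOT NULL,             -- assembly it belongs to
        load_dts           TIMESTAMP    NOT NULL,
        record_source      VARCHAR(100) NOT NULL
    );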

Understanding driving keys is crucial for maintaining data integrity and traceability, especially when dealing with modeling errors and complex relationships in source system data models. This lesson will teach you how to handle driving keys properly to ensure auditability within the Data Vault. You will also explore the concept of the dependent child, or weak relationship, which originates in normal-form data modeling. Learning to work with dependent children at the data modeling level is essential for accurate and effective management of data relationships.

Lessons and Topics:

  • Understanding Driving Key
    • Defining the Driving Key
    • Understanding the Role of the DK
    • Improper Normalization – Breaking Auditability
    • Proper Normalization & Unit of Work
    • Driving Key Example
    • Summary and Review Questions
    • Self-Paced SQL Workshop
  • Dependent Child (Data Modeling)
    • Defining a Dependent Child
    • Role of Weak Keys in ER Models
    • Composite Keys Involving Weak Keys
    • Understanding a Weak-Hub
    • Adding the Dependent Child to a Link
    • Summary and Review Questions
    • Self-Paced SQL Workshop
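
To make the driving-key concept concrete, here is a hedged sketch: in an assumed customer-to-address link where the customer side drives, only the most recent relationship per driving key is current (names are illustrative):

    -- One current address per customer: partition by the driving key.
    SELECT x.hub_customer_hk, x.hub_address_hk, x.load_dts
    FROM  (SELECT l.hub_customer_hk,          -- driving key
                  l.hub_address_hk,
                  l.load_dts,
                  ROW_NUMBER() OVER (PARTITION BY l.hub_customer_hk
                                     ORDER BY l.load_dts DESC) AS rn
           FROM   lnk_customer_address l) x
    WHERE  x.rn = 1;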

Mastering satellite effectivity is crucial for effectively managing time-based data in Data Vault environments. This lesson delves into the structure and use cases of satellite effectivity, including advanced techniques for timestamp management and resolving overlapping time periods, ensuring data remains accurate and reliable. Additionally, you’ll learn how to split and merge satellites through detailed steps and real-world examples, addressing various data types and change rates. Without this knowledge, you’ll miss essential skills for handling complex temporal data, which are vital for maintaining a robust and trustworthy data infrastructure.

Lessons and Topics:

  • Intro to Satellite Effectivity
    • Defining a Satellite Effectivity
    • Structure of a Satellite Effectivity
    • Applied Cases for Satellite Effectivity
    • Summary and Review Questions
  • Satellite Effectivity in Depth
    • Advanced Timestamp Management
    • Handling Overlapping Time Periods
    • End-Dating and Record Expiry
    • Addressing SINCE and DURING
    • Flip-Flop Time Case
    • Summary and Review Questions
    • Self-Paced SQL Workshop
  • Satellites In Depth
    • Intro to Splitting & Merging Satellites
    • Splitting Satellite Steps
    • Merging Satellite Steps
    • Type of Data Split Example
    • Type of Data Merge Example
    • Rate of Change Split Example
    • Rate of Change Merge Example
    • Handling a Multi-Active Satellite
    • Summary and Review Questions
    • Self-Paced SQL Workshop
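
A minimal sketch of one effectivity technique the lesson covers, deriving end dates from the next change with a window function (the effectivity satellite and its columns are assumed; one row per change is presumed):

    -- Each row is effective from its load date until the next row's load date;
    -- a NULL end date means the relationship is still open.
    SELECT s.lnk_customer_address_hk,
           s.load_dts                                    AS start_dts,
           LEAD(s.load_dts) OVER (PARTITION BY s.lnk_customer_address_hk
                                  ORDER BY s.load_dts)   AS end_dts
    FROM   lsat_customer_address_eff s;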

Gaining proficiency in the application of JSON in Data Vault 2.1 is crucial for managing complex data types and enhancing data integration and analysis. This lesson will cover the benefits and use cases of JSON Satellites and JSON Links, allowing you to handle diverse data structures efficiently and adapt quickly to changing data volumes. Additionally, you will learn about record source tracking, a vital concept for maintaining data integrity and auditability by managing and tracking deletes across data sets. Missing this lesson means missing out on essential skills for modern BI practices and effective data management strategies.

Lessons and Topics:

  • JSON Satellites and JSON Links
    • Brief Recap of JSON
    • Defining JSON Satellites
    • JSON Satellite Use Cases
    • JSON Satellite Advantages
    • Defining JSON Links
    • JSON Link Use Cases
    • JSON Link Benefits
    • Advantages of JSON in Data Vault
    • Pitfalls and Risks of JSON
    • Top 5 Impacts of JSON on SQL
    • Recommended Practices for Implementing JSON
    • Summary and Review Questions
    • Self-Paced SQL Workshop
  • Record Source Tracking
    • Defining Record Source Tracking
    • When & Why to Track Business Keys
    • Intro to Data Aging
    • Detecting “Deleted Data”
    • Intro to Audit Logs
    • Summarizing in a BDV Tracking Satellite
    • Summary and Review Questions
    • Self-Paced SQL Workshop
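
To preview the JSON satellite idea, here is a hedged sketch using PostgreSQL's JSONB type (the table, columns, and query are illustrative assumptions, not the course's reference implementation):

    -- JSON satellite: the variable part of the payload is a document.
    CREATE TABLE sat_device_json (
        hub_device_hk CHAR(32)     NOT NULL,
        load_dts      TIMESTAMP    NOT NULL,
        record_source VARCHAR(100) NOT NULL,
        payload       JSONB        NOT NULL,
        PRIMARY KEY (hub_device_hk, load_dts)
    );

    -- Individual attributes remain queryable without schema changes.
    SELECT hub_device_hk,
           payload ->> 'firmware_version' AS firmware_version
    FROM   sat_device_json;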

Learning how to extract data from the Data Vault is essential for building efficient virtual information marts, cubes, and alerts. This lesson covers the structures and processes needed to load, maintain, and execute high-performing queries at the virtual layers of dashboards in real time. You’ll gain a deep understanding of Point-in-Time (PIT) and Bridge structures, which are critical for enhancing performance within the Business Data Vault. Missing this lesson means missing out on key strategies for optimizing data retrieval and ensuring seamless data operations across platforms.

Lessons and Topics:

  • Point-in-Time and Bridge Table Modeling
    • Understanding Info Mart Join Tables
    • Defining the Point-In-Time Table
    • Understanding PIT Ghost Records
    • Defining the Bridge Table
    • Differences and Similarities
    • Intro to Pit-Bridge Hybrids
    • Summary and Review Questions
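
Here is a minimal sketch of a PIT table to ground the lesson: it pre-resolves which satellite row was current at each snapshot, so mart queries become simple equi-joins (names are illustrative; the ghost-record date stands in when no satellite row exists yet):

    -- One row per hub key per snapshot; each column points at the
    -- satellite row that was current at that snapshot.
    CREATE TABLE pit_customer (
        hub_customer_hk    CHAR(32)  NOT NULL,
        snapshot_dts       TIMESTAMP NOT NULL,
        sat_customer_ldts  TIMESTAMP NOT NULL,  -- current sat_customer load date
        sat_cust_pref_ldts TIMESTAMP NOT NULL,  -- ghost-record date if no row yet
        PRIMARY KEY (hub_customer_hk, snapshot_dts)
    );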

In this lesson, you will learn advanced techniques for improving the performance of complex ELT and ETL processes, such as Type 2 Conformed Dimension loading. The session will cover strategies for dividing and conquering data integration tasks using various tools and methods to maintain near-linear scalability. Additionally, you will explore methods for transitioning your EDW/BI solutions to real-time operations, including running mini or micro-batches without extensive re-engineering efforts. Not enrolling means missing out on vital skills for optimizing data loading routines and enhancing system scalability and performance.

Lessons and Topics:

  • ETL / ELT Performance Tuning
    • Top Issues with ETL / ELT Performance
    • Typical Data Integration Routine
    • Golden Rules of Performance
    • Step-By-Step Improving Performance
    • Turning Off Referential Integrity
    • Summary and Review Questions
    • SQL Workshop
  • Loading Architecture
    • Top 10 Goals and Objectives
    • Golden Rule of Loading Raw DV
    • Parallel Architecture
    • Maturity Curve
    • Turning Off Referential Integrity
    • Benefits of Staggering Loads
    • Summary and Review Questions
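
One tuning technique named above, sketched in SQL Server syntax (the constraint and table names are assumptions; only do this where loads are restartable and reconciled afterwards):

    -- Disable referential integrity for the bulk load window...
    ALTER TABLE sat_customer NOCHECK CONSTRAINT fk_sat_customer_hub;

    -- ...run hub, link, and satellite loads in parallel...

    -- ...then re-validate so the constraint is trusted again.
    ALTER TABLE sat_customer WITH CHECK CHECK CONSTRAINT fk_sat_customer_hub;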

In these lessons, students will master handling NULL business keys to maintain an auditable EDW/BI solution. The technical methods and best practices taught here will help you address and load data that arrives with NULL business keys, highlighting the significant business implications and the need to resolve the issue within the business community. The lessons also provide strategies for resolving technical problems when the Data Warehouse is out of sync with the source system, ensuring compliance and the ability to repair technically broken data sets, a crucial skill for data integrity and reliability.

Lessons and Topics:

  • Handling NULL Business Keys
    • NULL BKs in Staging
    • Hard-Rule: Fixed Value Translations
    • Load Process to Staging
    • Load Process to Raw Data Vault
    • Why it Meets Auditability
    • Summary and Review Questions
    • SQL Workshop
  • Dealing With Corrupted Data
    • Defining Corrupted Data
    • Corrupted Data Example
    • Corrupted Data Options
    • Repairing Corrupted Data
    • Summary and Review Questions
    • SQL Workshop
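
A hedged sketch of the fixed-value translation described above: the row still loads (preserving auditability), the substitute key is well known, and the substitution stays detectable (names and the '-1' convention are illustrative):

    -- NULL or empty business keys are translated, never dropped.
    SELECT COALESCE(NULLIF(TRIM(src.customer_number), ''), '-1') AS customer_bk,
           CASE WHEN src.customer_number IS NULL
                  OR TRIM(src.customer_number) = ''
                THEN 1 ELSE 0 END                                AS null_bk_flag
    FROM   landing_orders src;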

These lessons offer invaluable insights into Stage and Landing Zone loading, essential for optimizing data processing workflows. You’ll learn high-speed data loading techniques, ensuring fault-tolerance and recovery, and effectively capturing changes during stage loads. Understanding and applying these methods is crucial for maintaining robust data pipelines and preventing data loss. Additionally, mastering standardized loading templates for various data structures, such as hubs, links, and satellites, will significantly enhance your data integration capabilities, ensuring consistency and reliability across your data warehouse environment. Without these skills, you risk inefficiencies and data integrity issues in your BI processes.

Lessons and Topics:

  • Stage and Landing Zone Loading
    • Defining a Landing Zone
    • Defining a Staging Area
    • Intro to Stage Load Processing
    • High Speed Data Loading Techniques
    • Ensuring Fault-Tolerance and Recovery
    • Change Data Capture During Stage Load
    • Persistence in Stage Tables
    • 1st Level Stage Table (Landing Zone)
    • 1st Level Stage Load Template
    • 2nd Level Stage Table Defined
    • 2nd Level Stage Load Template
    • Where to Calculate System Fields
    • Summary and Review Questions
    • SQL Workshop
  • Loading Templates / Standards
    • Hub Load Template
    • Link Load Template
    • Satellite Load Template
    • Non-Delta Satellite Load Template
    • Non-Delta Link Load Template
    • Same-As Link Load Template
    • Effectivity Satellite Load Template
    • Hierarchical Link Load Template
    • Record Source Tracking Load Template
    • Summary and Review Questions
    • SQL Workshops
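
As a preview of the template style, here is the hub load pattern in a minimal, hedged form (names are illustrative; staging is assumed to be deduplicated to one row per business key):

    -- Hub load template: insert only keys not already present.
    -- Idempotent: rerunning the same batch inserts nothing new.
    INSERT INTO hub_customer (hub_customer_hk, customer_bk, load_dts, record_source)
    SELECT DISTINCT stg.hub_customer_hk, stg.customer_bk, stg.load_dts, stg.record_source
    FROM   stg_customer stg
    WHERE  NOT EXISTS (SELECT 1
                       FROM   hub_customer h
                       WHERE  h.hub_customer_hk = stg.hub_customer_hk);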

This lesson provides crucial insights into real-time stream processing, essential for modern data engineering and business intelligence (BI). You’ll explore the differences between batch processing and real-time streaming and how to effectively integrate diverse data sources. By mastering real-time stream processing operations, complex event processing (CEP), and applying business rules on the fly, you ensure prompt and accurate data handling. Missing out on this knowledge could mean falling behind in rapidly evolving BI environments that demand quick and efficient data processing.

Lessons and Topics:

  • Intro to Real-Time Stream Processing
    • Defining Real-Time Streaming
    • Basic Concepts and Terminology
    • Batch versus Streaming
    • Data Sources and Integration
    • Stream Processing Concepts
    • Stream Processing Operations
    • Intro to Time Semantics and Windowing
    • Complex Event Processing (CEP)
    • Best Practices for Performance
    • Applying Business Rules in Stream
    • Summary and Review Questions
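
To hint at what applying a business rule in-stream looks like, here is a hedged sketch in Flink-style streaming SQL (the source table, columns, and threshold are invented; window syntax varies by engine):

    -- Tumbling one-minute windows over a sensor stream, with a
    -- business rule (average temperature threshold) applied on the fly.
    SELECT device_id,
           TUMBLE_START(event_time, INTERVAL '1' MINUTE) AS window_start,
           COUNT(*)                                      AS readings,
           AVG(temperature)                              AS avg_temp
    FROM   sensor_readings
    GROUP  BY device_id, TUMBLE(event_time, INTERVAL '1' MINUTE)
    HAVING AVG(temperature) > 75;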

Get Answers from Your Instructor

Instructor-Led (Part 2)
