Checkpoint to review the implementation of semantic mapping and data mastering


Background and Strategy

At this point in the integration process, the interpretation of the source data should be finalized and those insights memorialized in the form of Semantic Mapping and data mastering objects. This is an appropriate moment to pause to review the implementation so far.

As with the prior integration checkpoint (see Checkpoint to review the interpretation of source data), the general idea is to have an experienced integration engineer not involved in the current integration's implementation efforts perform the review.

The focus of this checkpoint is the object configuration and logic implemented so far (covering semantic mapping and data mastering). Roughly speaking, the reviewer should set aside a few hours of independent time to familiarize themselves with the source data, look through each object in the integration Namespace, and document any implementation elements (configurations, mappings, logic, etc.) that appear questionable. This is an opportunity to enforce softer standards as well, such as documentation of terms, commenting, and the naming of fields and objects.

The reviewer and the integration team should then meet to go over these notes and determine which issues should be addressed. The explanation and remedy for each item should be documented briefly in the same document, which should then be preserved in a project folder so that, if questions arise in the future, it can serve as a reference for the thinking at the time.

Detailed Implementation Guidance

  1. The findings of this exercise should be memorialized in a review document, ideally the same document used in the earlier checkpoint, to provide some narrative continuity.

  2. The reviewer should start by inspecting the source data directly (in Case Review of the source data object or in a data exploration object), documenting anything that might require complex or unintuitive handling in semantic mapping, especially anything that was not covered in the earlier checkpoint. A best practice is to capture these findings as elements of a board, assign each element an identifier ("#1", "#2", "#3", and so on), and reference those identifiers in the review document; sample entries appear after this list.

  3. Next, the reviewer should open each Semantic Mapping object in the appropriate Namespace (using the Category Filter in the Object Workshop zone to filter to the Namespace and Layer) and look over the configuration and mappings, noting in the review document anything that might be incorrect; the first code sketch after this list illustrates the kind of mapping defect worth flagging.

  4. After the Semantic Mapping objects are reviewed, the same approach should be taken with any mastering objects involved in the integration. (Note that the mastering objects -- because they handle data from multiple sources -- will likely not be in the same Namespace.) As a reminder, the mastering objects most commonly used, which are included in the Ursa Health Core Data Model, are named Source Local Patient ID Crosswalk to Data Model Patient ID and Source Local Provider ID Crosswalk to Data Model Provider ID, and they are part of the data model's Metadata and Integration Layer. The second code sketch after this list shows, in purely conceptual terms, what a patient ID crosswalk accomplishes.

  5. Rather than making any changes directly in the objects examined, the reviewer should simply document their impressions, leaving any corrections to the primary implementation team, who should be briefed by the reviewer on the findings as the last step in the checkpoint. This gives the primary team the best opportunity to internalize the lessons, and it also prevents the introduction of spurious "corrections" when the reviewer makes an incorrect read -- which is not uncommon given their necessarily superficial familiarity with the source data.
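
For step 2, entries in the review document might look like the following. The findings themselves are invented for illustration; real entries would reference the integration's actual source feeds and board elements:

```
#1  ADT feed: discharge_disposition uses local codes rather than standard
    UB-04 values; confirm how the semantic mapping normalizes these.
#2  Claims feed: roughly 3% of rows have a null member_id; how are these
    rows treated during patient mastering?
#3  Provider file: the NPI column occasionally carries a local provider
    code; verify the provider crosswalk does not treat these as NPIs.
```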
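For step 3, here is a minimal, purely conceptual sketch in Python of a semantic mapping expressed as source-field-to-data-model-field rules. The field names and the date transform are hypothetical, and actual mappings are configured in Semantic Mapping objects in Ursa Studio, not written as code; the point is the kind of embedded assumption a reviewer should question:

```python
# Hypothetical semantic mapping: source field -> data model field.
field_mappings = {
    "svc_dt": "Service Date",            # source date of service
    "rev_cd": "Revenue Code",
    "prov_npi": "Rendering Provider NPI",
}

transforms = {
    # The kind of assumption a reviewer should question: is MM/DD/YYYY
    # really the source's date format, or do some rows use YYYY-MM-DD?
    "svc_dt": lambda v: f"{v[6:10]}-{v[0:2]}-{v[3:5]}",  # MM/DD/YYYY -> ISO
}

def map_row(row: dict) -> dict:
    """Apply the field mapping and any per-field transform to one row."""
    out = {}
    for src_field, model_field in field_mappings.items():
        value = row.get(src_field)
        if value is not None and src_field in transforms:
            value = transforms[src_field](value)
        out[model_field] = value
    return out

print(map_row({"svc_dt": "12/10/2022", "rev_cd": "0450", "prov_npi": "1234567890"}))
# {'Service Date': '2022-12-10', 'Revenue Code': '0450',
#  'Rendering Provider NPI': '1234567890'}
```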
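For step 4, this conceptual sketch shows what a patient ID crosswalk accomplishes: resolving each source-local identifier to a single mastered data-model identifier. The sources, IDs, and function below are hypothetical; in Ursa Studio this logic is handled by the crosswalk objects named above, not by custom code:

```python
# Hypothetical crosswalk: (source, local patient ID) -> data-model patient ID.
from typing import Optional

crosswalk = {
    ("claims_feed", "A-1001"): "PT-000017",
    ("ehr_feed", "MRN-55321"): "PT-000017",  # same patient in two sources
    ("ehr_feed", "MRN-60944"): "PT-000042",
}

def to_data_model_patient_id(source: str, local_id: str) -> Optional[str]:
    """Resolve a source-local patient ID to the mastered data-model ID."""
    return crosswalk.get((source, local_id))

# Two properties a reviewer should confirm: each (source, local ID) pair
# resolves to exactly one data-model ID, and unmapped IDs surface as gaps
# rather than disappearing silently.
assert to_data_model_patient_id("ehr_feed", "MRN-55321") == "PT-000017"
assert to_data_model_patient_id("claims_feed", "A-9999") is None
```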

