Bespoke SQL Objects
  • 29 Nov 2022

Overview

Users may want to execute arbitrary SQL against the database, for example to reuse pre-existing scripts. Ursa Health provides this escape hatch through bespoke SQL objects. Because these objects are sensitive, only users with Bespoke Author credentials can persist them. (See the User Manager documentation for more details about user types.) Bespoke Authors can also attach validation SQL and postprocessing SQL to a bespoke SQL object.

To create a bespoke SQL object, navigate to the Object Workshop home screen and click the plus sign icon in the upper right corner. In the Create New Object screen, select the Bespoke SQL tile. In the Bespoke SQL object screen, update the parameters listed in the panels detailed below. When the bespoke SQL object's parameters have been specified, select the lightning bolt icon in the upper right corner of the screen. The browser then redirects back to the Object Workshop home screen, where the object will appear as running in the ELT Progress blade.

Incremental Loads

In addition to specifying full-load SQL, Bespoke Authors can specify incremental-load SQL. Objects with both full-load SQL and incremental-load SQL default to performing the incremental load unless an override is specified at the moment of ELT.

When specifying an incremental load, three fields can be populated: the “staging SQL,” the “preprocessing SQL,” and the “postprocessing SQL.” The staging SQL should be a simple select statement that will populate a staging table. The columns of the staging table should exactly match the desired columns of the data table itself. The staging table is automatically named per a convention that is displayed on screen for use in the preprocessing SQL.

Incremental preprocessing SQL is typically where a user removes existing rows from the main data table based on the data just inserted into the staging table, although it is not strictly necessary to use the staging table at all.

After preprocessing, the main data table is populated simply by appending the entire contents of the staging table onto the main data table. That’s why it is important that the columns of the staging table match those of the main data table. There is never any variation in this step, so there is no form input to specify it.

After the main data table population is complete, any postprocessing queries can be run, just as for a full load.
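The full incremental cycle can be sketched as below, using SQLite via Python purely for illustration. The table names (`claims`, `claims_staging`, `source_claims`) and columns are hypothetical, not Ursa Health conventions; in the product, the staging table name follows the convention displayed on screen, and each numbered step corresponds to one of the fields described above (the append step is fixed).

```python
import sqlite3

# Hypothetical schema: a main data table of claims plus an upstream source.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE claims (claim_id INTEGER, amount REAL);
    INSERT INTO claims VALUES (1, 100.0), (2, 200.0);

    -- Source data for the incremental batch (stands in for upstream tables).
    CREATE TABLE source_claims (claim_id INTEGER, amount REAL);
    INSERT INTO source_claims VALUES (2, 250.0), (3, 300.0);
""")

# 1. Staging SQL: a simple SELECT whose columns exactly match the main table.
con.executescript("""
    CREATE TABLE claims_staging AS
    SELECT claim_id, amount FROM source_claims;
""")

# 2. Preprocessing SQL: cull main-table rows that the staging batch replaces.
con.execute("""
    DELETE FROM claims
    WHERE claim_id IN (SELECT claim_id FROM claims_staging)
""")

# 3. Append step (no form input): staging contents are appended verbatim.
con.execute("INSERT INTO claims SELECT * FROM claims_staging")

# 4. Postprocessing SQL: e.g., clean up the staging table.
con.execute("DROP TABLE claims_staging")

rows = sorted(con.execute("SELECT claim_id, amount FROM claims"))
print(rows)  # [(1, 100.0), (2, 250.0), (3, 300.0)]
```

Note that because step 3 always appends the whole staging table, the preprocessing delete is what makes the load idempotent for re-delivered rows (here, claim 2 is replaced rather than duplicated).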
