
tSchemaComplianceCheck

Ensures the data quality of any source data against a reference data source.

tSchemaComplianceCheck validates all input rows against a
reference schema, or checks the type, nullability, and length of rows against
reference values. The validation can be carried out in full or in part.

Depending on the Talend
product you are using, this component can be used in one, some or all of the following
Job frameworks:

  • Standard
  • MapReduce (deprecated)
  • Spark Batch
  • Spark Streaming

tSchemaComplianceCheck Standard properties

These properties are used to configure tSchemaComplianceCheck running in the Standard Job framework.

The Standard
tSchemaComplianceCheck component belongs to the Data Quality family.

The component in this framework is available in all Talend products.

Basic settings

Base Schema and Edit schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

It describes the structure and nature of your data to be processed as
it is.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Check all columns from schema

Select this option to carry out all checks on all
columns against the base schema.

Custom defined

Select this option to carry out particular checks on
particular columns. When this option is selected, the Checked Columns table shows.

Checked Columns

In this table, define what checks are to be carried
out on which columns.

 

Column: Displays the column names.

 

Type: Select the
type of data each column is supposed to contain. This validation is
mandatory for all columns.

 

Date pattern: Define
the expected date format for each column with the data type of
Date.

 

Nullable: Select the
check box in an individual column to define the column to be nullable, that
is, to allow empty rows in this column to go to the output flow regardless
of the base schema definition. To define all columns to be nullable, select
the check box in the table header.

 

Undefined or empty:
Select the check box in an individual column to reject empty rows in this
column while the column is not nullable in the base schema definition. To
carry out this verification on all the columns, select the check box in the
table header.

 

Max length: Select
the check box in an individual column to verify the data length of the
column against the length definition of the base schema. To carry out this
verification on all the columns, select the check box in the table
header.
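
The checks listed above combine into one validation rule per column. The following
is a minimal Java sketch of such a rule, for illustration only; it is not the code
Talend generates, and the class and method names are hypothetical.

  // Illustrative sketch only; not the component's generated code.
  // One rule per checked column: expected type, optional date pattern,
  // nullability and maximum length, as defined in the Checked Columns table.
  import java.text.ParseException;
  import java.text.SimpleDateFormat;

  class ColumnCheck {
      final String name;           // column name from the base schema
      final Class<?> expectedType; // e.g. Integer.class or java.util.Date.class
      final String datePattern;    // used only when expectedType is Date
      final boolean nullable;      // are empty values allowed?
      final Integer maxLength;     // null means "do not check the length"

      ColumnCheck(String name, Class<?> expectedType, String datePattern,
                  boolean nullable, Integer maxLength) {
          this.name = name;
          this.expectedType = expectedType;
          this.datePattern = datePattern;
          this.nullable = nullable;
          this.maxLength = maxLength;
      }

      /** Returns null if the value is compliant, otherwise a short error message. */
      String validate(String value) {
          if (value == null || value.isEmpty()) {
              return nullable ? null : "empty value in non-nullable column " + name;
          }
          if (expectedType == Integer.class) {
              try { Integer.parseInt(value); }
              catch (NumberFormatException e) { return "wrong type in column " + name; }
          } else if (expectedType == java.util.Date.class) {
              SimpleDateFormat fmt = new SimpleDateFormat(datePattern);
              fmt.setLenient(false); // strict pattern check
              try { fmt.parse(value); }
              catch (ParseException e) { return "wrong date format in column " + name; }
          }
          if (maxLength != null && value.length() > maxLength) {
              return "value too long in column " + name;
          }
          return null; // compliant
      }
  }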

Use another schema for compliance
check

Define a reference schema as you expect the data to
be, in order to reject the non-compliant data.

It can be
restrictive on data type, null values, and/or length.

Trim the excess content of column when
length checking chosen and the length is greater than defined
length

With any of the three modes of tSchemaComplianceCheck, select this check box to truncate the
data that exceeds the length specified rather than reject it.

Note:

This option is applicable only on data of String type.
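
As a rough illustration (not Talend's generated code), the effect of this option on a
String column can be sketched as follows; the class and method names are hypothetical:

  // Sketch only: behaviour of the length check on a String column.
  class LengthCheckSketch {
      // maxLength comes from the schema; trimExcess reflects this check box.
      static String checkLength(String value, int maxLength, boolean trimExcess) {
          if (value == null || value.length() <= maxLength) {
              return value;                         // compliant: passed through unchanged
          }
          if (trimExcess) {
              return value.substring(0, maxLength); // option selected: truncate instead of rejecting
          }
          throw new IllegalArgumentException("length exceeded, row goes to the reject flow");
      }
  }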

Advanced settings

Use Fastest Date Check

Select this check box to perform a fast date format check using
the TalendDate.isDate() method of the TalendDate system routine if
Date pattern is not defined. For more information about routines,
see the Talend Studio User Guide.

Use Strict Data Check

Select this check box to perform a strict data format check.

Once selected, the Use Fastest Date Check check box is
hidden.
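
To illustrate the difference between the fast check and a pattern-based strict check,
here is a small Java sketch; it does not use the TalendDate routine itself, and the
patterns listed in fastIsDate are assumptions:

  // Sketch only: a loose "is this any recognizable date?" check versus a strict
  // check against the single Date pattern declared for the column.
  import java.text.ParseException;
  import java.text.SimpleDateFormat;

  public class DateCheckSketch {
      // Loose check: accept the value if any of a few assumed patterns parses it.
      static boolean fastIsDate(String value) {
          for (String p : new String[] {"yyyy-MM-dd", "dd-MM-yyyy", "MM/dd/yyyy"}) {
              if (matches(value, p, true)) return true;
          }
          return false;
      }

      // Strict check against the declared pattern, with lenient parsing disabled.
      static boolean strictIsDate(String value, String pattern) {
          return matches(value, pattern, false);
      }

      static boolean matches(String value, String pattern, boolean lenient) {
          SimpleDateFormat fmt = new SimpleDateFormat(pattern);
          fmt.setLenient(lenient);
          try { fmt.parse(value); return true; }
          catch (ParseException e) { return false; }
      }

      public static void main(String[] args) {
          System.out.println(fastIsDate("1980-02-30"));                  // true: lenient parsing rolls the date over
          System.out.println(strictIsDate("30-02-1980", "dd-MM-yyyy"));  // false: 30 February is rejected
      }
  }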

Ignore TimeZone when Check Date

Select this check box to ignore the time zone setup upon date
check.

Not available when the Check all
columns from schema
mode is selected.

Treat all empty string as NULL

Select this check box to treat any empty fields in any columns
as null values, instead of empty strings.

By default, this check box is selected. When it is cleared,
the Choose Column(s) table shows to let
you select individual columns.
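
A minimal sketch of the intended effect, for illustration only (the class and method
names are hypothetical):

  // Sketch only: with the option selected, an empty field is normalized to null
  // before the nullability check runs, so it is rejected on non-nullable columns
  // instead of being passed on as an empty string.
  class EmptyAsNullSketch {
      static String normalizeEmpty(String value, boolean treatEmptyAsNull) {
          if (treatEmptyAsNull && value != null && value.isEmpty()) {
              return null; // will then fail the check on a non-nullable column
          }
          return value;
      }
  }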

tStatCatcher Statistics

Select this check box to collect log data at the component
level.

Global Variables

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space
to access the variable list and choose the variable to use from it.

For further information about variables, see
Talend Studio

User Guide.
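
For example, in the Java code of a component placed after tSchemaComplianceCheck
(such as a tJava), the After variable can typically be read from the globalMap; the
instance name tSchemaComplianceCheck_1 below is an assumption and depends on your Job:

  // Assumes the component instance is named tSchemaComplianceCheck_1 in your Job.
  String lastError = (String) globalMap.get("tSchemaComplianceCheck_1_ERROR_MESSAGE");
  if (lastError != null) {
      System.out.println("Schema compliance error: " + lastError);
  }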

Usage

Usage rule

This component is an
intermediary step in the flow that lets you exclude non-compliant data
from the main flow. This component cannot be a start component as it
requires an input flow. It also requires at least one output component to
gather the validated flow, and possibly a second output component for
rejected data using a Rejects link. For more information, see the
Talend Studio User Guide.

Validating data against schema

This scenario applies only to Talend Data Management Platform, Talend Big Data Platform, Talend Real Time Big Data Platform, Talend Data Services Platform, Talend MDM Platform and Talend Data Fabric.

This scenario presents a Job that checks the type, nullability and length of data from
an incoming flow against a defined reference schema, and displays the validation results
on the Run console.

The incoming flow comes from a simple CSV file that contains heterogeneous data
including wrong data type, data exceeding the maximum length, wrong ID and null values
in non-nullable columns, as shown below:
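
The screenshot of the input file is not reproduced here; a hypothetical file
exhibiting the same kinds of defects might look like the following (the values and
the semicolon delimiter are illustrative only):

  id;Name;BirthDate;State;City
  1;Andrew;08-07-1985;Indiana;Oakland
  two;Lindsey;03-04-1990;Michigan;Detroit
  3;Christopher;12-11-1983;;Pasadena
  4;Nathan;1987/05/23;Ohio;Columbus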

Setting up the Job

  1. Drop the following components: a tFileInputDelimited, a tSchemaComplianceCheck, and two tLogRow components from the Palette to the design workspace.
  2. Connect the tFileInputDelimited component
    to the tSchemaComplianceCheck component
    using a Row > Main connection.
  3. Connect the tSchemaComplianceCheck
    component to the first tLogRow component
    using a Row > Main connection. This
    output flow will gather the valid data.
  4. Connect the tSchemaComplianceCheck
    component to the second tLogRow component
    using a Row > Rejects connection. This
    second output flow will gather the non-compliant data. It passes two
    additional columns to the next component: ErrorCode and
    ErrorMessage. These two read-only columns provide
    information about the rejected data to ease error handling and
    troubleshooting if needed.

    tSchemaComplianceCheck_1.png

Configuring the components

  1. Double-click the tFileInputDelimited
    component to display its Basic settings
    view and define the basic parameters including the input file name and the
    number of header rows to skip.

    tSchemaComplianceCheck_2.png

  2. Click the […] button next to Edit schema to describe the data structure of the
    input file. In this use case, the schema is made of five columns:
    ID, Name,
    BirthDate, State, and
    City.

    tSchemaComplianceCheck_3.png

  3. Fill the Length field for the
    Name, State and
    City columns with 7,
    10 and 10 respectively. Then
    click OK to close the schema dialog box and
    propagate the schema.
  4. Double-click the tSchemaComplianceCheck
    component to display its Basic settings
    view, wherein you will define most of the validation parameters.

    tSchemaComplianceCheck_4.png

  5. Select the Custom defined option in the
    Mode area to perform custom defined
    checks.

    In this example, we use the Checked
    columns
    table to set the validation parameters. However, you
    can also select the Check all columns from
    schema
    check box if you want to perform all the checks (type,
    nullability and length) on all the columns against the base schema, or
    select the Use another schema for compliance
    check
    option and define a new schema as the expected
    structure of the data.
  6. In the Checked Columns table, define the
    checks to be performed. In this use case:

    – The type of the ID column should be Int.
    – The length of the Name, State
    and City columns should be checked.
    – The type of the BirthDate column should be
    Date, and the expected date pattern is
    dd-MM-yyyy.
    – All the columns should be checked for null values, so clear the
    Nullable check box for all the
    columns.
    Note:

    To send rows containing fields exceeding the defined maximum length to
    the reject flow, make sure that the Trim the excess content of column
    when length checking chosen and the length is greater than defined
    length check box is cleared. A short code sketch summarizing the checks
    defined in this step is given after this procedure.

  7. In the Advanced settings view of the
    tSchemaComplianceCheck component,
    select the Treat all empty string as NULL
    option to send any rows containing empty fields to the reject flow.
  8. To view the validation result in tables on the Run console, double-click each tLogRow component and select the Table option in the Basic
    settings
    view.
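
As announced in step 6, the following Java sketch summarizes, for illustration only,
the checks this scenario configures; it is not the code the Job generates, and the
error texts and class name are hypothetical:

  // Sketch of the scenario's rules applied to one row (not generated Job code).
  // An empty result means the row goes to the valid flow; otherwise it goes to
  // the Rejects flow, where the component adds the read-only ErrorCode and
  // ErrorMessage columns.
  class ScenarioChecksSketch {
      static java.util.List<String> checkRow(String id, String name, String birthDate,
                                             String state, String city) {
          java.util.List<String> errors = new java.util.ArrayList<>();
          for (String v : new String[] {id, name, birthDate, state, city}) {
              if (v == null || v.isEmpty()) errors.add("null value in a non-nullable column");
          }
          try { Integer.parseInt(id); }
          catch (Exception e) { errors.add("ID is not an Int"); }
          if (name != null && name.length() > 7)    errors.add("Name exceeds length 7");
          if (state != null && state.length() > 10) errors.add("State exceeds length 10");
          if (city != null && city.length() > 10)   errors.add("City exceeds length 10");
          java.text.SimpleDateFormat fmt = new java.text.SimpleDateFormat("dd-MM-yyyy");
          fmt.setLenient(false);
          try { fmt.parse(birthDate); }
          catch (Exception e) { errors.add("BirthDate does not match dd-MM-yyyy"); }
          return errors;
      }
  }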

Executing the Job

Save your Job and press F6 to launch it.

Two tables are displayed on the console, showing the valid data and
rejected data respectively.
tSchemaComplianceCheck_5.png

tSchemaComplianceCheck MapReduce properties (deprecated)

These properties are used to configure tSchemaComplianceCheck running in the MapReduce Job framework.

The MapReduce
tSchemaComplianceCheck component belongs to the Data Quality family.

The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.

The MapReduce framework is deprecated from Talend 7.3 onwards. Use Talend Jobs for Apache Spark to accomplish your integration tasks.

Basic settings

Base Schema and Edit schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

It describes the structure and nature of your data to be processed as
it is.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Check all columns from schema

Select this option to carry out all checks on all columns against
the base schema.

Custom defined

Select this option to carry out particular checks on particular
columns. When this option is selected, the Checked Columns table shows.

Checked Columns

In this table, define what checks are to be carried out on which
columns.

 

Column: Displays the column names.

 

Type: Select the type of data
each column is supposed to contain. This validation is mandatory for
all columns.

 

Date pattern: Define the expected
date format for each column with the data type of
Date.

 

Nullable: Select the check box in
an individual column to define the column to be nullable, that is,
to allow empty rows in this column to go to the output flow
regardless of the base schema definition. To define all columns to
be nullable, select the check box in the table header.

 

Undefined or empty: Select the
check box in an individual column to reject empty rows in this
column while the column is not nullable in the base schema
definition. To carry out this verification on all the columns,
select the check box in the table header.

 

Max length: Select the check box
in an individual column to verify the data length of the column
against the length definition of the base schema. To carry out this
verification on all the columns, select the check box in the table
header.

Use another schema for compliance check

Define a reference schema as you expect the data to be, in order
to reject the non-compliant data.

It can be restrictive on data type, null values, and/or
length.

Trim the excess content of column when length checking
chosen and the length is greater than defined length

With any of the three modes of tSchemaComplianceCheck, select this check box to
truncate the data that exceeds the length specified rather than
reject it.

Note:

This option is applicable only on data of String type.

Advanced settings

Use Fastest Date Check

Select this check box to perform a fast date format check using
the TalendDate.isDate() method of the TalendDate system routine if
Date pattern is not defined. For more
information about routines, see the Talend Studio User Guide.

Ignore TimeZone when Check Date

Select this check box to ignore the time zone setup upon date
check.

Not available when the Check all columns
from schema
mode is selected.

Treat all empty string as NULL

Select this check box to treat any empty fields in any columns as
null values, instead of empty strings.

By default, this check box is selected. When it is cleared, the
Choose Column(s) table shows to
let you select individual columns.

Global Variables

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space
to access the variable list and choose the variable to use from it.

For further information about variables, see
Talend Studio

User Guide.

Usage

Usage rule

In a
Talend
Map/Reduce Job, this component is used as an intermediate
step and other components used along with it must be Map/Reduce components, too. They
generate native Map/Reduce code that can be executed directly in Hadoop.

It does not support data of the Object and the List
types.

For further information about a
Talend
Map/Reduce Job, see the sections
describing how to create, convert and configure a
Talend
Map/Reduce Job of the

Talend Open Studio for Big Data Getting Started Guide
.

Note that in this documentation, unless otherwise
explicitly stated, a scenario presents only Standard Jobs,
that is to say traditional Talend data integration Jobs
rather than Map/Reduce Jobs.

Related scenarios

No scenario is available for the Map/Reduce version of this component yet.

tSchemaComplianceCheck for Apache Spark Batch

These properties are used to configure tSchemaComplianceCheck running in the Spark Batch Job framework.

The Spark Batch
tSchemaComplianceCheck component belongs to the Data Quality family.

The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.

Basic settings

Base Schema and Edit schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

It describes the structure and nature of your data to be processed as
it is.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Base on default schema

Select this option to carry out all checks on all columns against
the base schema.

Custom defined

Select this option to carry out particular checks on particular columns. When
this option is selected, the Columns table shows.

Checked Columns

In this table, define what checks are to be carried out on which
columns.

 

Column: Displays the column names.

 

Type: Select the type of data
each column is supposed to contain. This validation is mandatory for
all columns.

 

Date pattern: Define the expected
date format for each column with the data type of
Date.

 

Nullable: Select the check box in
an individual column to define the column to be nullable, that is,
to allow empty rows in this column to go to the output flow
regardless of the base schema definition. To define all columns to
be nullable, select the check box in the table header.

 

Max length: Select the check box
in an individual column to verify the data length of the column
against the length definition of the base schema. To carry out this
verification on all the columns, select the check box in the table
header.

Use another schema for compliance check

Define a reference schema as you expect the data to be, in order
to reject the non-compliant data.

It can be restrictive on data type, null values, and/or
length.

Discard the excess content of column when the actual length is
greater than the declared length

With any of the three modes of tSchemaComplianceCheck, select this check box to
truncate the data that exceeds the length specified rather than
reject it.

Note:

This option is applicable only on data of String type.

Advanced settings

Ignore TimeZone when Check Date

Select this check box to ignore the time zone setup upon date
check.

Not available when the Check all columns
from schema
mode is selected.

Treat all empty string as NULL

Select this check box to treat any empty fields in any columns as
null values, instead of empty strings.

By default, this check box is selected. When it is cleared, the
Choose Column(s) table shows to
let you select individual columns.

Global Variables

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space
to access the variable list and choose the variable to use from it.

For further information about variables, see
Talend Studio

User Guide.

Usage

Usage rule

This component is used as an intermediate step.

Spark Connection

In the Spark
Configuration
tab in the Run
view, define the connection to a given Spark cluster for the whole Job. In
addition, since the Job expects its dependent jar files for execution, you must
specify the directory in the file system to which these jar files are
transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket
      field in the Spark configuration
      tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage
      configuration
      area in the Spark
      configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the Spark
      configuration
      tab.
    • When using Qubole, add a
      tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut
      down your cluster.
    • When using on-premise
      distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

tSchemaComplianceCheck for Apache Spark Streaming

These properties are used to configure tSchemaComplianceCheck running in the Spark Streaming Job framework.

The Spark Streaming
tSchemaComplianceCheck component belongs to the Data Quality family.

The component in this framework is available in all Talend products with Big Data
and in Talend Data Fabric.

Basic settings

Base Schema and Edit schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

It describes the structure and nature of your data to be processed as
it is.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Base on default schema

Select this option to carry out all checks on all columns against
the base schema.

Custom defined

Select this option to carry out particular checks on particular columns. When
this option is selected, the Columns table shows.

Checked Columns

In this table, define what checks are to be carried out on which
columns.

 

Column: Displays the column names.

 

Type: Select the type of data
each column is supposed to contain. This validation is mandatory for
all columns.

 

Date pattern: Define the expected
date format for each column with the data type of
Date.

 

Nullable: Select the check box in
an individual column to define the column to be nullable, that is,
to allow empty rows in this column to go to the output flow
regardless of the base schema definition. To define all columns to
be nullable, select the check box in the table header.

 

Max length: Select the check box
in an individual column to verify the data length of the column
against the length definition of the base schema. To carry out this
verification on all the columns, select the check box in the table
header.

Use another schema for compliance check

Define a reference schema as you expect the data to be, in order
to reject the non-compliant data.

It can be restrictive on data type, null values, and/or
length.

Discard the excess content of column when the actual length is
greater than the declared length

With any of the three modes of tSchemaComplianceCheck, select this check box to
truncate the data that exceeds the length specified rather than
reject it.

Note:

This option is applicable only on data of String type.

Advanced settings

Ignore TimeZone when Check Date

Select this check box to ignore the time zone setup upon date
check.

Not available when the Check all columns
from schema
mode is selected.

Treat all empty string as NULL

Select this check box to treat any empty fields in any columns as
null values, instead of empty strings.

By default, this check box is selected. When it is cleared, the
Choose Column(s) table shows to
let you select individual columns.

Global Variables

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space
to access the variable list and choose the variable to use from it.

For further information about variables, see
Talend Studio

User Guide.

Usage

Usage rule

This component is used as an intermediate step.

Spark Connection

In the Spark
Configuration
tab in the Run
view, define the connection to a given Spark cluster for the whole Job. In
addition, since the Job expects its dependent jar files for execution, you must
specify the directory in the file system to which these jar files are
transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket
      field in the Spark configuration
      tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage
      configuration
      area in the Spark
      configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the Spark
      configuration
      tab.
    • When using Qubole, add a
      tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut
      down your cluster.
    • When using on-premise
      distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

