tFileInputXML – Docs for ESB 7.x

tFileInputXML

Reads an XML structured file row by row, splits it up into fields, and sends the
fields as defined in the schema to the next component.

Depending on the Talend
product you are using, this component can be used in one, some or all of the following
Job frameworks:

tFileInputXML Standard properties

These properties are used to configure tFileInputXML running in the Standard Job framework.

The Standard
tFileInputXML component belongs to the File and the XML families.

The component in this framework is available in all Talend
products.

Basic settings

Property type

Either Built-In or Repository.

 

Built-In: No property data stored centrally.

 

Repository: Select the repository file where the
properties are stored.

Schema and Edit
Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

File name/Stream

File name: Name and path of the
file to be processed.

Warning: Use an absolute path (rather than a relative path) in
this field to avoid possible errors.

Stream: The data flow to be
processed. The data must be added to the flow so that tFileInputXML can fetch it via the
corresponding variable.

This variable may already be pre-defined in your Studio or
provided by the context or the components you are using along with
this component, for example, the INPUT_STREAM
variable of tFileFetch; otherwise,
you can define it manually and use it according to the design of
your Job, for example, using tJava
or tJavaFlex.

To avoid typing the variable name by hand, you can press
Ctrl+Space and select the variable of interest from the
auto-completion list to fill in the current field, provided that
the variable has been properly defined.

For the available variables, see the
Talend Studio User Guide.
For a related scenario using the input stream, see Reading data from a remote file in streaming mode.
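
For illustration, the following minimal sketch shows how such a stream could be
registered and then consumed; the variable name MY_INPUT_STREAM and the use of a
tJava component are assumptions for this example, not fixed names:

    // In a tJava component executed before tFileInputXML (hypothetical setup):
    // wrap any data source in an InputStream and register it under a key of
    // your choice in the globalMap of the generated Job code.
    java.io.InputStream in = new java.io.ByteArrayInputStream(
            "<root><row><id>1</id></row></root>"
                    .getBytes(java.nio.charset.StandardCharsets.UTF_8));
    globalMap.put("MY_INPUT_STREAM", in);

    // Then, in the File name/Stream field of tFileInputXML, reference the
    // same variable:
    // (java.io.InputStream) globalMap.get("MY_INPUT_STREAM")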

Loop XPath query

Node of the tree on which the loop is based.

Mapping

Column: Columns to map. They
reflect the schema as defined in the Schema type field.

XPath Query: Enter the fields to
be extracted from the structured input.

Get nodes: Select this check box
to retrieve the XML content of all the nodes specified in the
XPath query list, or select the
check box next to specific XML nodes to retrieve only the content
of the selected nodes. These nodes are important when the output
flow from this component needs to use the XML structure, for
example, the Document data
type.

For further information about the Document type, see the
Talend Studio User Guide.

Note:

The Get Nodes option
functions in the DOM4j and
SAX modes, although in
SAX mode namespaces are not
supported. For further information concerning the DOM4j and SAX modes, please see the properties noted in
the Generation mode list of the
Advanced Settings
tab.
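
To make the loop and mapping mechanics concrete, here is a minimal standalone
Java sketch using the JDK's javax.xml.xpath (the data and paths are illustrative
assumptions, not the component's generated code): the Loop XPath query selects
one node per output row, and each column's XPath Query is evaluated relative to
that loop node.

    import java.io.ByteArrayInputStream;
    import java.nio.charset.StandardCharsets;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.xpath.XPath;
    import javax.xml.xpath.XPathConstants;
    import javax.xml.xpath.XPathFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Node;
    import org.w3c.dom.NodeList;

    public class LoopXPathDemo {
        public static void main(String[] args) throws Exception {
            String xml = "<dir><street><name>Main</name><zip>75000</zip></street>"
                       + "<street><name>High</name><zip>31000</zip></street></dir>";
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            XPath xp = XPathFactory.newInstance().newXPath();
            // Loop XPath query: each matching node produces one output row.
            NodeList rows = (NodeList) xp.evaluate("/dir/street", doc,
                    XPathConstants.NODESET);
            for (int i = 0; i < rows.getLength(); i++) {
                Node row = rows.item(i);
                // Per-column XPath Query, evaluated relative to the loop node.
                System.out.println(xp.evaluate("name", row) + "|"
                        + xp.evaluate("zip", row));
            }
        }
    }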

Limit

Maximum number of rows to be processed. If Limit = 0, no row is
read or processed. If Limit = -1, all rows are read and processed.

Die on error

Select the check box to stop the execution of the Job when an error
occurs.

Clear the check box to skip any rows on error and complete the process for
error-free rows. When errors are skipped, you can collect the rows on error using a Row > Reject link.

Advanced settings

Ignore DTD file

Select this check box to ignore the DTD file indicated in the XML
file being processed.

Advanced separator (for number)

Select this check box to change the separator used for numbers. By default, the thousands separator is a comma (,) and the decimal separator is a period (.).

Thousands separator: define the
separator to use for thousands.

Decimal separator: define the
separator to use for decimals.
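
For illustration, the following plain JDK sketch (not the component's generated
code) parses a value written with the swapped separators, a period for thousands
and a comma for decimals:

    import java.text.DecimalFormat;
    import java.text.DecimalFormatSymbols;

    public class SeparatorDemo {
        public static void main(String[] args) throws Exception {
            DecimalFormatSymbols symbols = new DecimalFormatSymbols();
            symbols.setGroupingSeparator('.'); // thousands separator
            symbols.setDecimalSeparator(',');  // decimal separator
            DecimalFormat df = new DecimalFormat("#,##0.##", symbols);
            System.out.println(df.parse("1.234,56")); // prints 1234.56
        }
    }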

Ignore the namespaces

Select this check box to ignore namespaces.

Generate a temporary file: click
the three-dot button to browse to the XML temporary file and set its
path in the field.

Use Separator for mode Xerces

Select this check box if you want to separate concatenated
child node values.

Note:

This field can only be used if the selected Generation mode is Xerces.

The following field displays:

Field separator: Define the
delimiter to be used to separate the child node values.

Encoding

Select the encoding from the list or select Custom
and define it manually. This field is compulsory for database data handling. The
supported encodings depend on the JVM that you are using. For more information, see
https://docs.oracle.com.
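
If in doubt about which encodings your JVM supports, a short plain Java snippet
can list them:

    import java.nio.charset.Charset;

    public class EncodingList {
        public static void main(String[] args) {
            // The available encodings depend on the JVM in use.
            Charset.availableCharsets().keySet().forEach(System.out::println);
        }
    }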

Generation mode

From the drop-down list, select the generation mode for the XML
file, according to the memory available and the desired speed (see
the sketch after this list):

  • Slow and memory-consuming
    (Dom4j)

    Note:

    This option allows you to use dom4j to process highly
    complex XML files.

  • Memory-consuming
    (Xerces)

  • Fast with low memory consumption
    (SAX)
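
As a rough illustration of the trade-off (a plain JDK sketch, not the
component's actual parser setup): a SAX parser reports elements as a stream of
events and keeps memory flat regardless of file size, whereas a DOM-style parser
such as dom4j must load the whole tree into memory before it can be queried.

    import java.io.ByteArrayInputStream;
    import java.nio.charset.StandardCharsets;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    public class SaxDemo {
        public static void main(String[] args) throws Exception {
            String xml = "<dir><street>Main</street><street>High</street></dir>";
            SAXParserFactory.newInstance().newSAXParser().parse(
                    new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)),
                    new DefaultHandler() {
                        @Override
                        public void startElement(String uri, String localName,
                                String qName, Attributes attributes) {
                            // Called once per element; no tree is kept in memory.
                            System.out.println("start: " + qName);
                        }
                    });
        }
    }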

Validate date

Select this check box to check the date format strictly against
the input schema.

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at a
Job level as well as at each component level.

Global Variables

Global Variables

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

NB_LINE: the number of rows processed. This is an After
variable and it returns an integer.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill up a field or expression with a variable, press Ctrl +
Space
to access the variable list and choose the variable to use from it.

For further information about variables, see the
Talend Studio User Guide.
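
For example, the After variables can be read in a tJava component triggered
after the subJob; the instance name tFileInputXML_1 below is an assumption,
adjust it to your Job:

    // In a tJava component connected with an OnSubjobOk trigger
    // (hypothetical component names):
    Integer nbLine = (Integer) globalMap.get("tFileInputXML_1_NB_LINE");
    String error = (String) globalMap.get("tFileInputXML_1_ERROR_MESSAGE");
    System.out.println("Rows read: " + nbLine);
    if (error != null) {
        System.out.println("Last error: " + error);
    }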

Usage

Usage rule

tFileInputXML is used as an
entry component. It allows you to create a flow of XML data using a
Row > Main link. You can
also create a rejection flow using a Row >
Reject
link to filter out the data that does not
match the defined type. For an example of how to use these
two links, see Procedure.

Reading and extracting data from an XML structure

This scenario describes a basic Job that reads a defined XML file, extracts
specific information, and outputs it on the Run console
via a tLogRow component.

tFileInputXML_1.png

Procedure

  1. Drop tFileInputXML and tLogRow from the Palette to the
    design workspace.
  2. Connect both components together using a Main Row
    link.
  3. Double-click tFileInputXML to open its
    Basic settings view and define the
    component properties.

    tFileInputXML_2.png

  4. As the street dir file used as input has been previously defined in the
    Metadata area, select Repository as the Property type. This way, the properties are
    automatically retrieved and the remaining property fields are filled in
    (apart from Schema). For more information about the metadata
    creation wizards, see the
    Talend Studio User Guide.

  5. In the same way, select the relevant schema from the Repository metadata list.
    Click Edit schema if you want to make any changes
    to the loaded schema.
  6. The Filename field shows the structured file to be
    used as input.
  7. In Loop XPath query, change, if needed, the
    node of the structure on which the loop is based.
  8. In the Mapping table, fill in the fields to be
    extracted and displayed in the output.
  9. If the file is large, fill in a Limit for the number of rows to be read.
  10. Enter the encoding if needed, then double-click tLogRow to define the separator character.
  11. Save your Job and press F6 to execute
    it.
tFileInputXML_3.png

The fields defined in the input properties are extracted from the XML structure and
displayed on the console.

Extracting erroneous XML data via a reject flow

This Java scenario describes a three-component Job that reads an XML file and:

  1. first, returns correct XML data in an output XML file,

  2. and second, displays on the console erroneous XML data whose type does not
    match the one defined in the schema.

Procedure

  1. Drop the following components from the Palette to the design workspace: tFileInputXML, tFileOutputXML
    and tLogRow.

    Right-click tFileInputXML and select
    Row > Main in the contextual menu and then click tFileOutputXML to connect the components together.
    Right-click tFileInputXML and select
    Row > Reject in the contextual menu and then click tLogRow to connect the components together using a
    reject link.
    tFileInputXML_4.png

  2. Double-click tFileInputXML to display the
    Basic settings view and define the
    component properties.

    tFileInputXML_5.png

  3. In the Property Type list, select Repository and click the three-dot button next to the
    field to display the Repository Content
    dialog box, where you can select the metadata relative to the input
    file if you have already stored it in the File xml node under the Metadata folder of the Repository tree view. The fields that follow are automatically
    filled with the fetched data. If not, select Built-in and fill in the fields that follow manually.

    For more information about storing schema metadata in the Repository tree
    view, see the
    Talend Studio User Guide.
  4. In the Schema Type list, select Repository and click the three-dot button to open the
    dialog box where you can select the schema that describes the structure of the
    input file if you have already stored it in the Repository tree view. If not, select Built-in and click the three-dot button next to Edit schema to open a dialog box where you can define
    the schema manually.

    tFileInputXML_6.png

    The schema in this example consists of five columns: id,
    CustomerName, CustomerAddress, idState
    and
    id2.
  5. Click the three-dot button next to the Filename field and browse to the XML file you want to
    process.
  6. In the Loop XPath query field, enter between
    inverted commas the path of the XML node on which to loop in order to retrieve
    data.

    In the Mapping table, Column is automatically populated with the defined
    schema.
    In the XPath query column, enter between
    inverted commas the node of the XML file that holds the data you want to extract
    from the corresponding column.
  7. In the Limit field, enter the number of lines
    to be processed, the first 10 lines in this example.
  8. Double-click tFileOutputXML to display its
    Basic settings view and define the
    component properties.

    tFileInputXML_7.png

  9. Click the three-dot button next to the File
    Name
    field and browse to the output XML file you want to collect
    data in, customer_data.xml in this example.

    In the Row tag field, enter between inverted
    commas the name you want to give to the tag that will hold the retrieved
    data.
    Click Edit schema to display the schema
    dialog box and make sure that the schema matches that of the preceding
    component. If not, click Sync columns to
    retrieve the schema from the preceding component.
  10. Double-click tLogRow to display its Basic settings view and define the component
    properties.

    Click Edit schema to open the schema dialog
    box and make sure that the schema matches that of the preceding component. If
    not, click Sync columns to retrieve the schema
    of the preceding component.
    In the Mode area, select the Vertical option.
  11. Save your Job and press F6 to execute
    it.
tFileInputXML_8.png

The output file customer_data.xml holding the correct XML data is
created in the defined path and erroneous XML data is displayed on the console of the
Run view.
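
As a conceptual illustration of the reject routing (a simplified standalone
sketch with hypothetical data, not Talend's generated code), rows whose values
cannot be converted to the schema type are diverted to the reject flow instead
of stopping the Job:

    public class RejectDemo {
        public static void main(String[] args) {
            String[] ids = {"1", "2", "abc", "4"}; // hypothetical id values
            for (String raw : ids) {
                try {
                    int id = Integer.parseInt(raw); // schema expects an Integer id
                    System.out.println("main flow: " + id);
                } catch (NumberFormatException e) {
                    // With Die on error cleared, a type mismatch goes to the
                    // Row > Reject link instead of stopping the Job.
                    System.out.println("reject flow: " + raw);
                }
            }
        }
    }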

tFileInputXML MapReduce properties (deprecated)

These properties are used to configure tFileInputXML running in the MapReduce Job framework.

The MapReduce
tFileInputXML component belongs to the MapReduce family.

The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.

The MapReduce framework is deprecated from Talend 7.3 onwards. Use Talend Jobs for Apache Spark to accomplish your integration tasks.

Basic settings

Property type

Either Built-In or Repository.

 

Built-In: No property data stored centrally.

 

Repository: Select the repository file where the
properties are stored.

The properties are stored centrally under the Hadoop
Cluster
node of the Repository
tree.

The fields that come after are pre-filled in using the fetched
data.

For further information about the Hadoop
Cluster
node, see the Getting Started Guide.

Schema and Edit
Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Folder/File

Browse to, or enter the path pointing to the data to be used in the file system.

If the path you set points to a folder, this component will read all of the files stored in that folder, for example, /user/talend/in; if sub-folders exist, the sub-folders are automatically ignored unless you define the property mapreduce.input.fileinputformat.input.dir.recursive to be true in the Hadoop properties table in the Hadoop configuration tab.

If you want to specify more than one file or directory in this
field, separate the paths using a comma (,).

If the file to be read is a compressed one, enter the file name
with its extension; tFileInputXML then automatically decompresses it at
runtime. The supported compression formats and their corresponding
extensions are:

  • DEFLATE: *.deflate

  • gzip: *.gz

  • bzip2: *.bz2

  • LZO: *.lzo

Note that you need
to ensure you have properly configured the connection to the Hadoop
distribution to be used in the Hadoop
configuration
tab in the Run view.

Element to extract

Enter the element from which you need to read the contents and the
child elements of the input XML data.

The element defined in this field is used as the root node of any
XPath specified within this component. This element defines the
atomic units of the XML data to be used so that, however big the
original document is or wherever the input is split, the rows within
this element can be correctly distributed to the mapper
tasks.

Note that any content outside this element is ignored and the
child elements of this element cannot contain this element
itself.
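
As a simplified sketch of this idea (string scanning for illustration only, not
the component's actual record reader): the extract element delimits
self-contained records, so the input can be cut at each occurrence and every
record handed to a mapper task independently.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class ElementSplitDemo {
        public static void main(String[] args) {
            String xml = "<catalog><customer><id>1</id></customer>"
                       + "<customer><id>2</id></customer></catalog>";
            // Each <customer> block is one atomic record; content outside the
            // extract element (<catalog> here) is ignored.
            Matcher m = Pattern.compile("<customer>.*?</customer>").matcher(xml);
            while (m.find()) {
                System.out.println("record: " + m.group());
            }
        }
    }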

Loop XPath query

Node of the tree on which the loop is based.

Note its root is the element you have defined in the Element to extract field.

Mapping

Column: Columns to map. They
reflect the schema as defined in the Schema type field.

XPath Query: Enter the fields to
be extracted from the structured input.

Get nodes: Select this check box
to retrieve the XML content of all the nodes specified in the
XPath query list, or select the
check box next to specific XML nodes to retrieve only the content
of the selected nodes. These nodes are important when the output
flow from this component needs to use the XML structure, for
example, the Document data
type.

For further information about the Document type, see the
Talend Studio User Guide.

Die on error

Select the check box to stop the execution of the Job when an error
occurs.

Clear the check box to skip any rows on error and complete the process for
error-free rows. When errors are skipped, you can collect the rows on error using a Row > Reject link.

Advanced settings

Ignore the namespaces

Select this check box to ignore namespaces.

Custom encoding

You may encounter encoding issues when you process the stored data. In that
situation, select this check box to display the Encoding list.

Select the encoding from the list or select Custom
and define it manually. This field is compulsory for database data handling. The
supported encodings depend on the JVM that you are using. For more information, see
https://docs.oracle.com.

Global Variables

Global Variables

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill up a field or expression with a variable, press Ctrl +
Space
to access the variable list and choose the variable to use from it.

For further information about variables, see the
Talend Studio User Guide.

Usage

Usage rule

Because of the characteristics of the MapReduce framework, the
Map/Reduce version of tFileInputXML
does not support any of the following XML parsers: the DOM-based
parsers, the SAX-based parsers and the streaming-based
parsers.

In a
Talend
Map/Reduce Job, it is used as a start component and requires
a transformation component as output link. The other components used along with it must be
Map/Reduce components, too. They generate native Map/Reduce code that can be executed
directly in Hadoop.

Once a Map/Reduce Job is opened in the workspace, tFileInputXML as well as the MapReduce
family appears in the Palette of
the Studio.

Note that in this documentation, unless otherwise
explicitly stated, a scenario presents only Standard Jobs,
that is to say traditional
Talend
data integration Jobs, not Map/Reduce Jobs.

Hadoop Connection

You need to use the Hadoop Configuration tab in the
Run view to define the connection to a given Hadoop
distribution for the whole Job.

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Map/Reduce version of this component yet.

tFileInputXML properties for Apache Spark Batch

These properties are used to configure tFileInputXML running in the Spark Batch Job framework.

The Spark Batch
tFileInputXML component belongs to the File family.

The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.

Basic settings

Define a storage configuration
component

Select the configuration component to be used to provide the configuration
information for the connection to the target file system such as HDFS.

If you leave this check box clear, the target file system is the local
system.

The configuration component to be used must be present in the same Job.
For example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to write
the result in a given HDFS system.

Property type

Either Built-In or Repository.

 

Built-In: No property data stored centrally.

 

Repository: Select the repository file where the
properties are stored.

The properties are stored centrally under the Hadoop
Cluster
node of the Repository
tree.

The fields that come after are pre-filled in using the fetched
data.

For further information about the Hadoop
Cluster
node, see the Getting Started Guide.

Schema and Edit
Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Folder/File

Browse to, or enter the path pointing to the data to be used in the file system.

If the path you set points to a folder, this component will
read all of the files stored in that folder, for example,
/user/talend/in; if sub-folders exist, the sub-folders are automatically
ignored unless you define the property
spark.hadoop.mapreduce.input.fileinputformat.input.dir.recursive to be
true in the Advanced properties table in the
Spark configuration tab.

  • Depending on the filesystem to be used, properly configure the corresponding
    configuration component placed in your Job, for example, a
    tHDFSConfiguration component for HDFS, a
    tS3Configuration component for S3 and a
    tAzureFSConfiguration for Azure Storage and Azure Data Lake
    Storage.

If you want to specify more than one file or directory in this
field, separate the paths using a comma (,).

If the file to be read is a compressed one, enter the file name
with its extension; tFileInputXML then automatically decompresses it at
runtime. The supported compression formats and their corresponding
extensions are:

  • DEFLATE: *.deflate

  • gzip: *.gz

  • bzip2: *.bz2

  • LZO: *.lzo

The button for browsing does not work with the Spark
Local mode; if you are
using the other Spark Yarn
modes that the Studio supports with your distribution, ensure that you have properly
configured the connection in a configuration component in the same Job, such as
tHDFSConfiguration. Use the
configuration component depending on the filesystem to be used.

Element to extract

Enter the element from which you need to read the contents and the
child elements of the input XML data.

The element defined in this field is used as the root node of any
XPath specified within this component. This element defines the
atomic units of the XML data to be used so that, however big the
original document is or wherever the input is split, the rows within
this element can be correctly distributed to the mapper
tasks.

Note that any content outside this element is ignored and the
child elements of this element cannot contain this element
itself.

Loop XPath query

Node of the tree on which the loop is based.

Note its root is the element you have defined in the Element to extract field.

Mapping

Column: Columns to map. They
reflect the schema as defined in the Schema type field.

XPath Query: Enter the fields to
be extracted from the structured input.

Get nodes: Select this check box
to retrieve the XML content of all the nodes specified in the
XPath query list, or select the
check box next to specific XML nodes to retrieve only the content
of the selected nodes. These nodes are important when the output
flow from this component needs to use the XML structure, for
example, the Document data
type.

For further information about the Document type, see the
Talend Studio User Guide.

Die on error

Select the check box to stop the execution of the Job when an error
occurs.

Advanced settings

Set minimum partitions

Select this check box to control the number of partitions to be created from the input
data over the default partitioning behavior of Spark.

In the displayed field, enter, without quotation marks, the minimum number of partitions
you want to obtain.

When you want to control the partition number, you can generally set at least as many partitions as
the number of executors for parallelism, while bearing in mind the available memory and the
data transfer pressure on your network.
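
For background, Spark's own Java API exposes the same notion as an explicit
parameter. A minimal sketch, assuming a local Spark setup with the spark-core
dependency (this is not Talend's generated code, and the path is illustrative):

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    public class MinPartitionsDemo {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setAppName("demo").setMaster("local[*]");
            try (JavaSparkContext sc = new JavaSparkContext(conf)) {
                // The second argument is the minimum number of partitions.
                JavaRDD<String> lines = sc.textFile("/user/talend/in", 8);
                System.out.println("partitions: " + lines.getNumPartitions());
            }
        }
    }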

Custom encoding

You may encounter encoding issues when you process the stored data. In that
situation, select this check box to display the Encoding list.

Select the encoding from the list or select Custom
and define it manually. This field is compulsory for database data handling. The
supported encodings depend on the JVM that you are using. For more information, see
https://docs.oracle.com.

Usage

Usage rule

This component is used as a start component and requires an output
link.

This component, along with the Spark Batch component Palette it belongs to,
appears only when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a
scenario presents only Standard Jobs, that is to
say traditional
Talend
data integration Jobs.

Spark Connection

In the Spark
Configuration
tab in the Run
view, define the connection to a given Spark cluster for the whole Job. In
addition, since the Job expects its dependent jar files for execution, you must
specify the directory in the file system to which these jar files are
transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket
      field in the Spark configuration
      tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage
      configuration
      area in the Spark
      configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the Spark
      configuration
      tab.
    • When using Qubole, add a
      tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut
      down your cluster.
    • When using on-premise
      distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Spark Batch version of this component
yet.

tFileInputXML properties for Apache Spark Streaming

These properties are used to configure tFileInputXML running in the Spark Streaming Job framework.

The Spark Streaming
tFileInputXML component belongs to the File family.

This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.

Basic settings

Define a storage configuration
component

Select the configuration component to be used to provide the configuration
information for the connection to the target file system such as HDFS.

If you leave this check box clear, the target file system is the local
system.

The configuration component to be used must be present in the same Job.
For example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to write
the result in a given HDFS system.

Property type

Either Built-In or Repository.

 

Built-In: No property data stored centrally.

 

Repository: Select the repository file where the
properties are stored.

The properties are stored centrally under the Hadoop
Cluster
node of the Repository
tree.

The fields that come after are pre-filled in using the fetched
data.

For further information about the Hadoop
Cluster
node, see the Getting Started Guide.

Schema and Edit
Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Folder/File

Browse to, or enter the path pointing to the data to be used in the file system.

If the path you set points to a folder, this component will
read all of the files stored in that folder, for example,
/user/talend/in; if sub-folders exist, the sub-folders are automatically
ignored unless you define the property
spark.hadoop.mapreduce.input.fileinputformat.input.dir.recursive to be
true in the Advanced properties table in the
Spark configuration tab.

  • Depending on the filesystem to be used, properly configure the corresponding
    configuration component placed in your Job, for example, a
    tHDFSConfiguration component for HDFS, a
    tS3Configuration component for S3 and a
    tAzureFSConfiguration for Azure Storage and Azure Data Lake
    Storage.

If you want to specify more than one file or directory in this
field, separate the paths using a comma (,).

If the file to be read is a compressed one, enter the file name
with its extension; tFileInputXML then automatically decompresses it at
runtime. The supported compression formats and their corresponding
extensions are:

  • DEFLATE: *.deflate

  • gzip: *.gz

  • bzip2: *.bz2

  • LZO: *.lzo

The button for browsing does not work with the Spark
Local mode; if you are
using the other Spark Yarn
modes that the Studio supports with your distribution, ensure that you have properly
configured the connection in a configuration component in the same Job, such as
tHDFSConfiguration. Use the
configuration component depending on the filesystem to be used.

Element to extract

Enter the element from which you need to read the contents and the
child elements of the input XML data.

The element defined in this field is used as the root node of any
XPath specified within this component. This element defines the
atomic units of the XML data to be used so that, however big the
original document is or wherever the input is split, the rows within
this element can be correctly distributed to the mapper
tasks.

Note that any content outside this element is ignored and the
child elements of this element cannot contain this element
itself.

Loop XPath query

Node of the tree on which the loop is based.

Note its root is the element you have defined in the Element to extract field.

Mapping

Column: Columns to map. They
reflect the schema as defined in the Schema type field.

XPath Query: Enter the fields to
be extracted from the structured input.

Get nodes: Select this check box
to retrieve the XML content of all the nodes specified in the
XPath query list, or select the
check box next to specific XML nodes to retrieve only the content
of the selected nodes. These nodes are important when the output
flow from this component needs to use the XML structure, for
example, the Document data
type.

For further information about the Document type, see the
Talend Studio User Guide.

Die on error

Select the check box to stop the execution of the Job when an error
occurs.

Advanced settings

Set minimum partitions

Select this check box to control the number of partitions to be created from the input
data over the default partitioning behavior of Spark.

In the displayed field, enter, without quotation marks, the minimum number of partitions
you want to obtain.

When you want to control the partition number, you can generally set at least as many partitions as
the number of executors for parallelism, while bearing in mind the available memory and the
data transfer pressure on your network.

Custom encoding

You may encounter encoding issues when you process the stored data. In that
situation, select this check box to display the Encoding list.

Select the encoding from the list or select Custom
and define it manually. This field is compulsory for database data handling. The
supported encodings depend on the JVM that you are using. For more information, see
https://docs.oracle.com.

Usage

Usage rule

This component is used as a start component and requires an output link.

This component is only used to provide the lookup flow (the right side of a join
operation) to the main flow of a tMap component. In this
situation, the lookup model used by this tMap must be
Load once.

This component, along with the Spark Streaming component Palette it belongs to, appears
only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents
only Standard Jobs, that is to say traditional
Talend
data
integration Jobs.

Spark Connection

In the Spark
Configuration
tab in the Run
view, define the connection to a given Spark cluster for the whole Job. In
addition, since the Job expects its dependent jar files for execution, you must
specify the directory in the file system to which these jar files are
transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket
      field in the Spark configuration
      tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage
      configuration
      area in the Spark
      configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the Spark
      configuration
      tab.
    • When using Qubole, add a
      tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut
      down your cluster.
    • When using on-premise
      distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Spark Streaming version of this component
yet.


Document sourced from Talend: https://help.talend.com