tExtractJSONFields
tExtractJSONFields extracts the desired data from JSON fields based on the XPath or JSONPath query.
Depending on the Talend solution you are using, this component can be used in one, some or all of the following Job frameworks:
- Standard: see tExtractJSONFields Standard properties. The component in this framework is generally available.
- MapReduce: see tExtractJSONFields MapReduce properties. The component in this framework is available only if you have subscribed to one of the Talend solutions with Big Data.
- Spark Batch: see tExtractJSONFields properties for Apache Spark Batch. The component in this framework is available only if you have subscribed to one of the Talend solutions with Big Data.
- Spark Streaming: see tExtractJSONFields properties for Apache Spark Streaming. The component in this framework is available only if you have subscribed to Talend Real-time Big Data Platform or Talend Data Fabric.
- Storm: see tExtractJSONFields Storm properties. The component in this framework is available only if you have subscribed to Talend Real-time Big Data Platform or Talend Data Fabric.
tExtractJSONFields Standard properties
These properties are used to configure tExtractJSONFields running in the Standard Job framework.
The Standard
tExtractJSONFields component belongs to the Processing family.
The component in this framework is generally available.
Basic settings
|
Property type |
Either Built-In or Repository. |
|
|
Built-In: No property data stored centrally. |
|
|
Repository: Select the repository file where the properties are stored. |
|
Schema and Edit schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component.
Click Edit schema to make changes to the schema. |
|
Read By |
Select a way of extracting the JSON data in the field: JsonPath or Xpath. |
|
|
JSON field |
List of the JSON fields to be extracted. |
|
Loop Jsonpath query |
Enter the path pointing to the node within the JSON field, on which the loop is based. Note that if you have selected Xpath from the Read By list, this field is Loop XPath query instead. |
|
Mapping |
Complete this table to map the columns defined in the schema to the corresponding JSON nodes. |
|
|
Die on error |
Select the check box to stop the execution of the Job when an error occurs. Clear the check box to skip any rows on error and complete the process for error-free rows. |
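Taken together, the loop query and the Mapping table behave like an iteration over one node set, with one schema row emitted per loop node. The sketch below is plain Python, not the code Talend generates; it emulates JSONPath navigation with ordinary dict access to illustrate the semantics only.

```python
import json

# Input JSON field, as it might arrive on the incoming flow.
record = ('{"staff": [{"firstname": "Andrew", "lastname": "Wallace", "dept": "Doc"},'
          ' {"firstname": "John", "lastname": "Smith", "dept": "R&D"}]}')

# Loop query: the node set the component iterates over ("staff" here).
# Mapping: schema column -> query relative to the current loop node.
mapping = {"firstname": "firstname", "lastname": "lastname", "dept": "dept"}

rows = []
for node in json.loads(record)["staff"]:   # one iteration per loop node
    rows.append({col: node.get(query) for col, query in mapping.items()})

print(rows[0]["firstname"])  # Andrew
```

Each element of `rows` corresponds to one row sent to the output flow.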
Advanced settings
|
Encoding |
Select the encoding from the list or select Custom and define it manually. This field is compulsory for database data handling. |
|
tStatCatcher Statistics |
Select this check box to gather the Job processing metadata at a Job level as well as at each component level. |
Global Variables
|
Global Variables |
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string.
NB_LINE: the number of rows read by an input component or transferred to an output component. This is an After variable and it returns an integer. A Flow variable functions during the execution of a component while an After variable functions after the execution of the component. To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it. For further information about variables, see Talend Studio User Guide. |
Usage
|
Usage rule |
This component is an intermediate component. It needs an input and an output component. |
|
Limitation |
Due to license incompatibility, one or more JARs required to use this component are not provided. You can install the missing JARs for this particular component by clicking the Install button on the Component tab view. |
Scenario 1: Retrieving error messages while extracting data from JSON fields
In this scenario, tWriteJSONField wraps the incoming
data into JSON fields, whose data is then extracted by tExtractJSONFields. Meanwhile, the error messages generated due to
extraction failure, which include the concerned JSON fields and the errors, are retrieved
via a Row > Reject
link.
Linking the components
- Drop the following components from the Palette onto the design workspace: tFixedFlowInput, tWriteJSONField, tExtractJSONFields, and tLogRow (X2). The two tLogRow components are renamed as data_extracted and reject_info.
-
Link tFixedFlowInput and tWriteJSONField using a Row > Main connection.
-
Link tWriteJSONField and tExtractJSONFields using a Row > Main connection.
-
Link tExtractJSONFields and data_extracted using a Row > Main connection.
-
Link tExtractJSONFields and reject_info using a Row > Reject connection.
Configuring the components
Setting up the tFixedFlowInput
-
Double-click tFixedFlowInput to display
its Basic settings view.
-
Click Edit schema to open the schema
editor.
Click the [+] button to add three
columns, namely firstname, lastname and dept, with the type of String.
Click OK to close the editor.
-
Select Use Inline Content and enter the
data below in the Content box:
Andrew;Wallace;Doc
John;Smith;R&D
Christian;Dior;Sales
Setting up the tWriteJSONField
-
Click tWriteJSONField to display its
Basic settings view.
-
Click Configure JSON Tree to open the XML
tree editor.
The schema of tFixedFlowInput appears in
the Linker source panel.
-
In the Linker target panel, click the
default rootTag and type in staff, which is the root node of the JSON field
to be generated.
-
Right-click staff and select Add Sub-element from the context menu.
-
In the pop-up box, enter the sub-node name, namely firstname.
Repeat the steps to add two more sub-nodes, namely lastname and dept.
-
Right-click firstname and select
Set As Loop Element from the context
menu.
-
Drop firstname from the Linker source panel to its counterpart in the
Linker target panel. In the pop-up dialog box, select Add linker to
target node.
Click OK to close the dialog box.
-
Repeat the steps to link the two other items.
Click OK to close the XML tree
editor.
Click Edit schema to open the schema
editor.
-
Click the [+] button in the right panel
to add one column, namely staff, which
will hold the JSON data generated.
Click OK to close the editor.
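The effect of this tWriteJSONField configuration can be pictured as follows. This is a hedged Python sketch, not the Java code Talend generates, and the exact formatting of the generated JSON string may differ in the Studio.

```python
import json

# The three input columns coming from tFixedFlowInput.
rows = [("Andrew", "Wallace", "Doc"),
        ("John", "Smith", "R&D"),
        ("Christian", "Dior", "Sales")]

# One JSON field per input row, rooted at "staff", with the three
# sub-elements configured in the XML tree editor.
staff_fields = []
for firstname, lastname, dept in rows:
    staff_fields.append(json.dumps(
        {"staff": {"firstname": firstname, "lastname": lastname, "dept": dept}}))

print(staff_fields[0])
```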
Setting up the tExtractJSONFields
-
Double-click tExtractJSONFields to
display its Basic settings view.
-
Click Edit schema to open the schema
editor.
-
Click the [+] button in the right panel
to add three columns, namely firstname,
lastname and dept, which will hold the data of their counterpart nodes in
the JSON field staff.
Click OK to close the editor.
-
In the pop-up Propagate box, click
Yes to propagate the schema to the
subsequent components.
-
In the Loop XPath query field, enter
“/staff”, which is the root node of
the JSON data.
-
In the Mapping area, type in the node
name of the JSON data under the XPath query
part. The data of those nodes will be extracted and passed to their
counterpart columns defined in the output schema.
-
Specifically, define the XPath query “firstname” for the column firstname, “lastname” for
the column lastname, and “” for the column dept. Note that “” is not
a valid XPath query and will lead to execution errors.
Setting up the tLogRow components
-
Double-click data_extracted to display
its Basic settings view.
-
Select Table (print values in cells of a
table) for a better display of the results.
-
Perform the same setup on the other tLogRow component, namely reject_info.
Executing the Job
- Press Ctrl + S to save the Job.
-
Press F6 to execute the Job.
As shown above, the reject row offers such details as the data extracted,
the JSON fields whose data is not extracted and the cause of the extraction
failure.
Scenario 2: Collecting data from your favorite online social network
In this scenario, tFileInputJSON retrieves the
friends node from a JSON file that contains the
data of a Facebook user and tExtractJSONFields extracts
the data from the friends node for flat data
output.
Linking the components
- Drop the following components from the Palette onto the design workspace: tFileInputJSON, tExtractJSONFields and tLogRow.
-
Link tFileInputJSON and tExtractJSONFields using a Row > Main connection.
-
Link tExtractJSONFields and tLogRow using a Row > Main connection.
Configuring the components
-
Double-click tFileInputJSON to display
its Basic settings view.
-
Click Edit schema to open the schema
editor.
Click the [+] button to add one column,
namely friends, of the String
type.
Click OK to close the editor.
-
Click the […] button to browse for the
JSON file, facebook.json in this
case:
{
  "user": {
    "id": "9999912398",
    "name": "Kelly Clarkson",
    "friends": [
      {
        "name": "Tom Cruise",
        "id": "55555555555555",
        "likes": {
          "data": [
            {
              "category": "Movie",
              "name": "The Shawshank Redemption",
              "id": "103636093053996",
              "created_time": "2012-11-20T15:52:07+0000"
            },
            {
              "category": "Community",
              "name": "Positiveretribution",
              "id": "471389562899413",
              "created_time": "2012-12-16T21:13:26+0000"
            }
          ]
        }
      },
      {
        "name": "Tom Hanks",
        "id": "88888888888888",
        "likes": {
          "data": [
            {
              "category": "Journalist",
              "name": "Janelle Wang",
              "id": "136009823148851",
              "created_time": "2013-01-01T08:22:17+0000"
            },
            {
              "category": "Tv show",
              "name": "Now With Alex Wagner",
              "id": "305948749433410",
              "created_time": "2012-11-20T06:14:10+0000"
            }
          ]
        }
      }
    ]
  }
}
-
Clear the Read by XPath check box.
In the Mapping table, enter the JSONPath
query “$.user.friends[*]” next to the
friends column, retrieving the entire
friends node from the source
file.
-
Double-click tExtractJSONFields to
display its Basic settings view.
-
Click Edit schema to open the schema
editor.
-
Click the [+] button in the right panel
to add five columns, namely id, name, like_id, like_name and
like_category, which will hold the
data of relevant nodes in the JSON field friends.
Click OK to close the editor.
-
In the pop-up Propagate box, click
Yes to propagate the schema to the
subsequent components.
-
In the Loop XPath query field, enter
“/likes/data”.
-
In the Mapping area, type in the queries
of the JSON nodes in the XPath query
column. The data of those nodes will be extracted and passed to their
counterpart columns defined in the output schema.
-
Specifically, define the XPath query “../../id” (querying the “/friends/id” node) for the column id, “../../name”
(querying the “/friends/name” node) for
the column name, “id” for the column like_id, “name” for the
column like_name, and “category” for the column like_category.
-
Double-click tLogRow to display its
Basic settings view.
-
Select Table (print values in cells of a
table) for a better display of the results.
Executing the Job
- Press Ctrl + S to save the Job.
-
Press F6 to execute the Job.
As shown above, the friends data of the Facebook user Kelly Clarkson is
extracted correctly.
tExtractJSONFields MapReduce properties
These properties are used to configure tExtractJSONFields running in the MapReduce Job framework.
The MapReduce
tExtractJSONFields component belongs to the Processing family.
The component in this framework is available only if you have subscribed to one
of the
Talend
solutions with Big Data.
Basic settings
|
Property type |
Either Built-In or Repository. |
|
|
Built-In: No property data stored centrally. |
|
|
Repository: Select the repository file where the properties are stored. |
|
Schema and Edit schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema.
|
|
|
Built-In: You create and store the schema locally for this component only. |
|
|
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. |
|
Read by |
Select a way of extracting the JSON data in the file: JsonPath or Xpath. |
|
|
JSON field |
List of the JSON fields to be extracted. |
|
Loop Jsonpath query |
Enter the path pointing to the node within the JSON field, on which the loop is based. Note that if you have selected Xpath from the Read by list, this field is Loop XPath query instead. |
|
Mapping |
Complete this table to map the columns defined in the schema to the corresponding JSON nodes. |
|
|
Die on error |
Select the check box to stop the execution of the Job when an error occurs. Clear the check box to skip any rows on error and complete the process for error-free rows. |
Advanced settings
|
Encoding |
Select the encoding from the list or select Custom and define it manually. This field is compulsory for database data handling. |
Global Variables
|
Global Variables |
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string.
NB_LINE: the number of rows read by an input component or transferred to an output component. This is an After variable and it returns an integer. A Flow variable functions during the execution of a component while an After variable functions after the execution of the component. To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it. For further information about variables, see Talend Studio User Guide. |
Usage
|
Usage rule |
In a Talend Map/Reduce Job, this component is used as an intermediate step and other components used along with it must be Map/Reduce components, too. They generate native Map/Reduce code that can be executed directly in Hadoop. You need to use the Hadoop Configuration tab in the Run view to define the connection to a given Hadoop distribution for the whole Job. For further information about a Talend Map/Reduce Job, see the sections describing how to create, convert and configure a Talend Map/Reduce Job in the Talend Big Data Getting Started Guide. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs. |
Related scenarios
No scenario is available for the Map/Reduce version of this component yet.
tExtractJSONFields properties for Apache Spark Batch
These properties are used to configure tExtractJSONFields running in the Spark Batch Job framework.
The Spark Batch
tExtractJSONFields component belongs to the Processing family.
The component in this framework is available only if you have subscribed to one
of the
Talend
solutions with Big Data.
Basic settings
|
Property type |
Either Built-In or Repository. |
|
|
Built-In: No property data stored centrally. |
|
|
Repository: Select the repository file where the properties are stored. |
|
Schema and Edit schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema.
|
|
|
Built-In: You create and store the schema locally for this component only. |
|
|
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. |
|
Read by |
Select a way of extracting the JSON data in the file: JsonPath or Xpath. |
|
|
JSON field |
List of the JSON fields to be extracted. |
|
Loop Jsonpath query |
Enter the path pointing to the node within the JSON field, on which the loop is based. Note that if you have selected Xpath from the Read by list, this field is Loop XPath query instead. |
|
Mapping |
Complete this table to map the columns defined in the schema to the corresponding JSON nodes. |
|
|
Die on error |
Select the check box to stop the execution of the Job when an error occurs. |
Advanced settings
|
Encoding |
Select the encoding from the list or select Custom and define it manually. This field is compulsory for database data handling. |
Usage
|
Usage rule |
This component is used as an intermediate step. This component, along with the Spark Batch component Palette it belongs to, appears only when you are creating a Spark Batch Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs. |
|
Spark Connection |
You need to use the Spark Configuration tab in
the Run view to define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files.
This connection is effective on a per-Job basis. |
Related scenarios
No scenario is available for the Spark Batch version of this component
yet.
tExtractJSONFields properties for Apache Spark Streaming
These properties are used to configure tExtractJSONFields running in the Spark Streaming Job framework.
The Spark Streaming
tExtractJSONFields component belongs to the Processing family.
The component in this framework is available only if you have subscribed to Talend Real-time Big Data Platform or Talend Data
Fabric.
Basic settings
|
Property type |
Either Built-In or Repository. |
|
|
Built-In: No property data stored centrally. |
|
|
Repository: Select the repository file where the properties are stored. |
|
Schema and Edit schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema.
|
|
|
Built-In: You create and store the schema locally for this component only. |
|
|
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. |
|
Read by |
Select a way of extracting the JSON data in the file: JsonPath or Xpath. |
|
|
JSON field |
List of the JSON fields to be extracted. |
|
Loop Jsonpath query |
Enter the path pointing to the node within the JSON field, on which the loop is based. Note that if you have selected Xpath from the Read by list, this field is Loop XPath query instead. |
|
Mapping |
Complete this table to map the columns defined in the schema to the corresponding JSON nodes. |
|
|
Die on error |
Select the check box to stop the execution of the Job when an error occurs. |
Advanced settings
|
Encoding |
Select the encoding from the list or select Custom and define it manually. This field is compulsory for database data handling. |
Usage
|
Usage rule |
This component is used as an intermediate step. This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs. |
|
Spark Connection |
You need to use the Spark Configuration tab in
the Run view to define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files.
This connection is effective on a per-Job basis. |
Related scenarios
For a related scenario, see Analyzing a Twitter flow in near real-time.
tExtractJSONFields Storm properties
These properties are used to configure tExtractJSONFields running in the Storm Job framework.
The Storm
tExtractJSONFields component belongs to the Processing family.
The component in this framework is available only if you have subscribed to Talend Real-time Big Data Platform or Talend Data
Fabric.
Basic settings
|
Property type |
Either Built-In or Repository. |
|
|
Built-In: No property data stored centrally. |
|
|
Repository: Select the repository file where the properties are stored. |
|
Schema and Edit schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema.
|
|
|
Built-In: You create and store the schema locally for this component only. |
|
|
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. |
|
JSON field |
List of the JSON fields to be extracted. |
|
Loop XPath query |
Node within the JSON field, on which the loop is based. |
|
Mapping |
Column: schema column defined to hold the extracted data.
XPath Query: XPath query to specify the node holding the desired data, relative to the loop node.
Get nodes: select this check box to extract the JSON data of the node together with its child nodes.
Is Array: select this check box when the JSON field to be extracted is an array instead of an object. |
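The Is Array option covers JSON fields whose top level is an array rather than an object, in which case the component loops directly over the elements. A minimal Python illustration (an emulation, not Talend code; the field content below is made up for the example):

```python
import json

# A JSON field that is an array rather than an object.
field = '[{"name": "a", "id": "1"}, {"name": "b", "id": "2"}]'

nodes = json.loads(field)
assert isinstance(nodes, list)          # what Is Array tells the component
ids = [node["id"] for node in nodes]    # loop directly over the elements

print(ids)  # ['1', '2']
```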
Advanced settings
|
Encoding |
Select the encoding from the list or select Custom and define it manually. This field is compulsory for database data handling. |
Usage
|
Usage rule |
If you have subscribed to one of the Talend solutions with Big Data, you can also use this component as a Storm component. In a Talend Storm Job, this component is used as an intermediate step and other components used along with it must be Storm components, too. They generate native Storm code that can be executed directly in a Storm system. The Storm version does not support the use of the global variables. You need to use the Storm Configuration tab in the Run view to define the connection to a given Storm system for the whole Job. This connection is effective on a per-Job basis. For further information about a Talend Storm Job, see the sections describing how to create and configure a Talend Storm Job in the Talend Big Data Getting Started Guide. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs. |
Related scenarios
No scenario is available for the Storm version of this component
yet.