tExtractJSONFields
tExtractJSONFields extracts data from incoming JSON fields based on an XPath or JSONPath query.
Depending on the Talend product you are using, this component can be used in one, some or all of the following Job frameworks:
- Standard: see tExtractJSONFields Standard properties. The component in this framework is available in all Talend products.
- MapReduce: see tExtractJSONFields MapReduce properties (deprecated). The component in this framework is available in all subscription-based Talend products with Big Data and Talend Data Fabric.
- Spark Batch: see tExtractJSONFields properties for Apache Spark Batch. The component in this framework is available in all subscription-based Talend products with Big Data and Talend Data Fabric.
- Spark Streaming: see tExtractJSONFields properties for Apache Spark Streaming. This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.
- Storm: see tExtractJSONFields Storm properties (deprecated). This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.
tExtractJSONFields Standard properties
These properties are used to configure tExtractJSONFields running in the Standard Job framework.
The Standard tExtractJSONFields component belongs to the Processing family.
The component in this framework is available in all Talend products.
Basic settings
Property type |
Either Built-In or Repository. Built-In: No property data stored centrally. Repository: Select the repository file where the properties are stored. |
Schema and Edit schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema. Note: If you make changes, the schema automatically becomes built-in. |
Read By |
Select a way of extracting the JSON data in the file. |
JSON field |
List of the JSON fields to be extracted. |
Loop Jsonpath query |
Enter the path pointing to the node within the JSON field on which the loop is based. Note: if you have selected Xpath from the Read By drop-down list, the Loop XPath query field is displayed instead. |
Mapping |
Complete this table to map the columns defined in the schema to the corresponding JSON nodes. |
Die on error |
Select the check box to stop the execution of the Job when an error occurs. Clear the check box to skip any rows on error and complete the process for error-free rows. |
Advanced settings
Use the loop node as root |
Select this check box to use the loop node as the root for querying the file. The loop node is set in the Loop Jsonpath query text frame. This check box is available only when JsonPath is selected from the Read By drop-down list. |
Encoding |
Select the encoding from the list or select Custom and define it manually. |
tStatCatcher Statistics |
Select this check box to gather the Job processing metadata at a Job level as well as at each component level. |
Global Variables
NB_LINE |
The number of rows processed. This is an After variable and it returns an integer. |
ERROR_MESSAGE |
The error message generated by the component when an error occurs. This is an After variable and it returns a string. |
Usage
Usage rule |
This component is an intermediate component. It needs an input and an output component. |
Limitation |
Due to license incompatibility, one or more JARs required to use this component are not provided. You can install the missing JARs for this particular component by clicking the Install button on the Component tab view. |
Retrieving error messages while extracting data from JSON fields
In this scenario, tWriteJSONField wraps the incoming data into JSON fields, whose data is then extracted by tExtractJSONFields. Meanwhile, the error messages generated by extraction failures, which include the concerned JSON fields and the errors, are retrieved via a Row > Reject link.
Linking the components
- Drop the following components from the Palette onto the design workspace: tFixedFlowInput, tWriteJSONField, tExtractJSONFields, and tLogRow (x2). The two tLogRow components are renamed data_extracted and reject_info.
- Link tFixedFlowInput and tWriteJSONField using a Row > Main connection.
- Link tWriteJSONField and tExtractJSONFields using a Row > Main connection.
- Link tExtractJSONFields and data_extracted using a Row > Main connection.
- Link tExtractJSONFields and reject_info using a Row > Reject connection.
Configuring the components
Setting up the tFixedFlowInput
- Double-click tFixedFlowInput to display its Basic settings view.
- Click Edit schema to open the schema editor. Click the [+] button to add three columns, namely firstname, lastname and dept, of the String type. Click OK to close the editor.
- Select Use Inline Content and enter the data below in the Content box:
  Andrew;Wallace;Doc
  John;Smith;R&D
  Christian;Dior;Sales
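The wrapping this Job performs can be approximated in plain Python. This is a hypothetical sketch, not Talend code; the exact field shape tWriteJSONField emits depends on the JSON tree configured in the next section.

```python
import json

# Stand-in for the tFixedFlowInput inline content above.
rows = [
    "Andrew;Wallace;Doc",
    "John;Smith;R&D",
    "Christian;Dior;Sales",
]

# Wrap each semicolon-separated row into a JSON "staff" field,
# approximating the output of tWriteJSONField (root node: staff;
# sub-nodes: firstname, lastname, dept). The shape is an assumption.
staff_fields = []
for row in rows:
    firstname, lastname, dept = row.split(";")
    staff_fields.append(json.dumps(
        {"staff": {"firstname": firstname, "lastname": lastname, "dept": dept}}
    ))

print(staff_fields[0])
```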
Setting up the tWriteJSONField
- Click tWriteJSONField to display its Basic settings view.
- Click Configure JSON Tree to open the XML tree editor. The schema of tFixedFlowInput appears in the Linker source panel.
- In the Linker target panel, click the default rootTag and type in staff, which is the root node of the JSON field to be generated.
- Right-click staff and select Add Sub-element from the context menu.
- In the pop-up box, enter the sub-node name, namely firstname. Repeat the steps to add two more sub-nodes, namely lastname and dept.
- Right-click firstname and select Set As Loop Element from the context menu.
- Drop firstname from the Linker source panel to its counterpart in the Linker target panel. In the pop-up dialog box, select Add linker to target node. Click OK to close the dialog box.
- Repeat the steps to link the two other items. Click OK to close the XML tree editor.
- Click Edit schema to open the schema editor.
- Click the [+] button in the right panel to add one column, namely staff, which will hold the JSON data generated. Click OK to close the editor.
Setting up the tExtractJSONFields
- Double-click tExtractJSONFields to display its Basic settings view.
- Click Edit schema to open the schema editor.
- Click the [+] button in the right panel to add three columns, namely firstname, lastname and dept, which will hold the data of their counterpart nodes in the JSON field staff. Click OK to close the editor.
- In the pop-up Propagate box, click Yes to propagate the schema to the subsequent components.
- In the Loop XPath query field, enter "/staff", which is the root node of the JSON data.
- In the Mapping area, type in the node names of the JSON data under the XPath query part. The data of those nodes will be extracted and passed to their counterpart columns defined in the output schema.
- Specifically, define the XPath query "firstname" for the column firstname, "lastname" for the column lastname, and "" for the column dept. Note that "" is not a valid XPath query; it is entered deliberately here to trigger execution errors and feed the reject flow.
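The reject behavior configured above can be sketched in plain Python. This is an illustrative approximation of the component's logic; the wrapped-field shape and the error text are assumptions, not Talend's actual implementation.

```python
import json

# A sample wrapped field like the ones tWriteJSONField produces
# (hypothetical shape, for illustration only).
staff_field = json.dumps(
    {"staff": {"firstname": "Andrew", "lastname": "Wallace", "dept": "Doc"}}
)

# Column-to-query mapping; the empty query for dept mirrors the
# deliberately invalid "" entered in the Mapping table above.
mapping = {"firstname": "firstname", "lastname": "lastname", "dept": ""}

extracted, rejects = [], []
record = {}
node = json.loads(staff_field)["staff"]   # loop node: /staff
try:
    for column, query in mapping.items():
        if not query:
            raise ValueError(f"invalid query for column {column!r}")
        record[column] = node[query]
    extracted.append(record)
except ValueError as err:
    # The reject flow carries the original field plus the error message.
    rejects.append({"field": staff_field, "errorMessage": str(err)})

print(rejects)
```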
Setting up the tLogRow components
- Double-click data_extracted to display its Basic settings view.
- Select Table (print values in cells of a table) for a better display of the results.
- Perform the same setup on the other tLogRow component, namely reject_info.
Executing the Job
- Press Ctrl + S to save the Job.
- Press F6 to execute the Job. The reject row offers such details as the data extracted, the JSON fields whose data is not extracted, and the cause of the extraction failure.
Collecting data from your favorite online social network
In this scenario, tFileInputJSON retrieves the
friends node from a JSON file that contains the
data of a Facebook user and tExtractJSONFields extracts
the data from the friends node for flat data
output.
Linking the components
- Drop the following components from the Palette onto the design workspace: tFileInputJSON, tExtractJSONFields and tLogRow.
- Link tFileInputJSON and tExtractJSONFields using a Row > Main connection.
- Link tExtractJSONFields and tLogRow using a Row > Main connection.
Configuring the components
- Double-click tFileInputJSON to display its Basic settings view.
- Click Edit schema to open the schema editor. Click the [+] button to add one column, namely friends, of the String type. Click OK to close the editor.
- Click the [...] button to browse for the JSON file, facebook.json in this case:
  {
    "user": {
      "id": "9999912398",
      "name": "Kelly Clarkson",
      "friends": [
        {
          "name": "Tom Cruise",
          "id": "55555555555555",
          "likes": {
            "data": [
              {
                "category": "Movie",
                "name": "The Shawshank Redemption",
                "id": "103636093053996",
                "created_time": "2012-11-20T15:52:07+0000"
              },
              {
                "category": "Community",
                "name": "Positiveretribution",
                "id": "471389562899413",
                "created_time": "2012-12-16T21:13:26+0000"
              }
            ]
          }
        },
        {
          "name": "Tom Hanks",
          "id": "88888888888888",
          "likes": {
            "data": [
              {
                "category": "Journalist",
                "name": "Janelle Wang",
                "id": "136009823148851",
                "created_time": "2013-01-01T08:22:17+0000"
              },
              {
                "category": "Tv show",
                "name": "Now With Alex Wagner",
                "id": "305948749433410",
                "created_time": "2012-11-20T06:14:10+0000"
              }
            ]
          }
        }
      ]
    }
  }
- Clear the Read by XPath check box. In the Mapping table, enter the JSONPath query "$.user.friends[*]" next to the friends column, retrieving the entire friends node from the source file.
- Double-click tExtractJSONFields to display its Basic settings view.
- Click Edit schema to open the schema editor.
- Click the [+] button in the right panel to add five columns, namely id, name, like_id, like_name and like_category, which will hold the data of relevant nodes in the JSON field friends. Click OK to close the editor.
- In the pop-up Propagate box, click Yes to propagate the schema to the subsequent components.
- In the Loop XPath query field, enter "/likes/data".
- In the Mapping area, type in the queries of the JSON nodes in the XPath query column. The data of those nodes will be extracted and passed to their counterpart columns defined in the output schema.
- Specifically, define the XPath query "../../id" (querying the "/friends/id" node) for the column id, "../../name" (querying the "/friends/name" node) for the column name, "id" for the column like_id, "name" for the column like_name, and "category" for the column like_category.
- Double-click tLogRow to display its Basic settings view.
- Select Table (print values in cells of a table) for a better display of the results.
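The two-component chain configured above amounts to a nested loop over friends and their likes. Here is a stdlib-only Python sketch of the equivalent flattening, run over a trimmed excerpt of the file (illustrative only, not Talend code):

```python
# Trimmed excerpt of facebook.json (structure as shown above).
data = {
    "user": {
        "id": "9999912398",
        "name": "Kelly Clarkson",
        "friends": [
            {
                "name": "Tom Cruise",
                "id": "55555555555555",
                "likes": {"data": [
                    {"category": "Movie",
                     "name": "The Shawshank Redemption",
                     "id": "103636093053996"},
                ]},
            },
        ],
    }
}

# "$.user.friends[*]" selects each friend object; the loop
# "/likes/data" with the relative queries "../../id" and "../../name"
# then flattens each like together with its parent friend's fields.
rows = []
for friend in data["user"]["friends"]:          # $.user.friends[*]
    for like in friend["likes"]["data"]:        # loop: /likes/data
        rows.append({
            "id": friend["id"],                 # ../../id
            "name": friend["name"],             # ../../name
            "like_id": like["id"],              # id
            "like_name": like["name"],          # name
            "like_category": like["category"],  # category
        })

print(rows[0])
```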
Executing the Job
- Press Ctrl + S to save the Job.
- Press F6 to execute the Job. The friends data of the Facebook user Kelly Clarkson is extracted correctly.
Extracting data from a JSON file through looping
This scenario describes a Job that extracts data from a JSON file through
multiple loops and displays the data on the console.
The following lists the content of the JSON file,
sample.json.
{
  "Guid": "a2hdge9-5517-4e12-b9j6-887ft29e1711",
  "Transactions": [
    {
      "TransactionId": 1,
      "Products": [
        {
          "ProductId": "A1",
          "Packs": [
            { "Quantity": 20, "Price": 40.00, "Due_Date": "2019/03/01" }
          ]
        }
      ]
    },
    {
      "TransactionId": 2,
      "Products": [
        {
          "ProductId": "B1",
          "Packs": [
            { "Quantity": 1, "Price": 15.00, "Due_Date": "2019/01/01" },
            { "Quantity": 21, "Price": 315.00, "Due_Date": "2019/02/14" }
          ]
        }
      ]
    },
    {
      "TransactionId": 3,
      "Products": [
        {
          "ProductId": "C1",
          "Packs": [
            { "Quantity": 2, "Price": 5.00, "Due_Date": "2019/02/19" },
            { "Quantity": 3, "Price": 7.50, "Due_Date": "2019/05/21" }
          ]
        }
      ]
    }
  ]
}
This Job extracts the values of the following elements.
- Guid
- TransactionId
- ProductId
- Quantity
- Price
- Due_Date
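The Job described in the following sections chains three tExtractJSONFields components, one per nesting level. Their combined effect is equivalent to this plain-Python triple loop over a trimmed excerpt of sample.json (a sketch, not Talend code):

```python
# Trimmed excerpt of sample.json with the structure shown above.
data = {
    "Guid": "a2hdge9-5517-4e12-b9j6-887ft29e1711",
    "Transactions": [
        {"TransactionId": 1, "Products": [
            {"ProductId": "A1", "Packs": [
                {"Quantity": 20, "Price": 40.00, "Due_Date": "2019/03/01"},
            ]},
        ]},
        {"TransactionId": 2, "Products": [
            {"ProductId": "B1", "Packs": [
                {"Quantity": 1, "Price": 15.00, "Due_Date": "2019/01/01"},
                {"Quantity": 21, "Price": 315.00, "Due_Date": "2019/02/14"},
            ]},
        ]},
    ],
}

# Each tExtractJSONFields component adds one level of looping;
# chained together they behave like this triple loop.
rows = []
for transaction in data["Transactions"]:      # 1st component: loop "*" over Transactions
    for product in transaction["Products"]:   # 2nd component: loop "*" over Products
        for pack in product["Packs"]:         # 3rd component: loop "*" over Packs
            rows.append({
                "Guid": data["Guid"],
                "TransactionId": transaction["TransactionId"],
                "ProductId": product["ProductId"],
                **pack,
            })

print(len(rows))   # one output row per pack
```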
Establishing the tExtractJSONFields looping Job
- Create a Job and add a tFileInputJSON component, three tExtractJSONFields components, and a tLogRow component.
- Connect the components using Row > Main connections.
Configuring tExtractJSONFields looping input
This task assumes that you know the structure of the JSON file.
- In the Basic settings view of the tFileInputJSON component, select JsonPath from the Read By drop-down list.
- In the Filename field, specify the input JSON file, sample.json in this example.
- In the schema editor, add two columns, Guid (type String) and Transactions (type Object).
- Click Yes in the subsequent dialog box to propagate the schema to the next component. The columns just added appear in the Mapping table of the Basic settings view.
- In the Basic settings view, enter "$" in the Loop Json query text box to loop over the elements within the root element.
- In the Json query column of the Mapping table, enter the following Json query expressions in double quotation marks:
  - $.Guid to extract the value of the Guid element;
  - $.Transactions to extract the content of the Transactions element.
Configuring the tExtractJSONFields components for looping
- In the schema editor of the first tExtractJSONFields component, add the following columns in the output table:
  - Guid, type String;
  - TransactionId, type Integer;
  - Products, type Object.
- Close the schema editor and click Yes in the subsequent dialog box to propagate the schema to the next component. The columns just added appear in the Mapping table of the Basic settings view.
- Set the other options in the Basic settings view as follows:
  - JSON field: Transactions;
  - Loop Jsonpath query: "*" (in double quotation marks);
  - Guid: empty, for receiving the Guid value from the previous component;
  - TransactionId: "TransactionId" (in double quotation marks);
  - Products: "Products" (in double quotation marks);
  - Others: unchanged.
  These settings loop over all the elements within the Transactions element and extract the values of the TransactionId and the Products elements.
- In the schema editor of the second tExtractJSONFields component, add the following columns in the output table:
  - Guid, type String;
  - TransactionId, type Integer;
  - ProductId, type String;
  - Packs, type Object.
- Close the schema editor and click Yes in the subsequent dialog box to propagate the schema to the next component. The columns just added appear in the Mapping table of the Basic settings view.
- Set the other options in the Basic settings view as follows:
  - JSON field: Products;
  - Loop Jsonpath query: "*" (in double quotation marks);
  - Guid: empty, for receiving the Guid value from the previous component;
  - TransactionId: empty, for receiving the TransactionId value from the previous component;
  - ProductId: "ProductId" (in double quotation marks);
  - Packs: "Packs" (in double quotation marks);
  - Others: unchanged.
  These settings loop over all the elements within the Products element and extract the values of the ProductId and the Packs elements.
- In the schema editor of the third tExtractJSONFields component, add the following columns in the output table:
  - Guid, type String;
  - TransactionId, type Integer;
  - ProductId, type String;
  - Quantity, type Integer;
  - Price, type Float;
  - Due_Date, type Date.
- Close the schema editor and click Yes in the subsequent dialog box to propagate the schema to the next component. The columns just added appear in the Mapping table of the Basic settings view.
- Set the other options in the Basic settings view as follows:
  - JSON field: Packs;
  - Loop Jsonpath query: "*" (in double quotation marks);
  - Guid: empty, for receiving the Guid value from the previous component;
  - TransactionId: empty, for receiving the TransactionId value from the previous component;
  - ProductId: empty, for receiving the ProductId value from the previous component;
  - Quantity: "Quantity" (in double quotation marks);
  - Price: "Price" (in double quotation marks);
  - Due_Date: "Due_Date" (in double quotation marks);
  - Others: unchanged.
  These settings loop over all the elements within the Packs element and extract the values of the Quantity, the Price, and the Due_Date elements.
Setting the display for tExtractJSONFields values
- Open the Basic settings view of the tLogRow component.
- Select the preferred option in the Mode section.
Executing tExtractJSONFields loop Job
- Press Ctrl + S to save the Job.
- Press F6 to execute the Job. The values of the Guid element, the TransactionId element, the ProductId element, the Quantity element, the Price element, and the Due_Date element are extracted from the source JSON file and displayed.
tExtractJSONFields MapReduce properties (deprecated)
These properties are used to configure tExtractJSONFields running in the MapReduce Job framework.
The MapReduce tExtractJSONFields component belongs to the Processing family.
The component in this framework is available in all subscription-based Talend products with Big Data and Talend Data Fabric.
The MapReduce framework is deprecated from Talend 7.3 onwards. Use Talend Jobs for Apache Spark to accomplish your integration tasks.
Basic settings
Property type |
Either Built-In or Repository. Built-In: No property data stored centrally. Repository: Select the repository file where the properties are stored. |
Schema and Edit schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema. Built-In: You create and store the schema locally for this component only. Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. |
Read by |
Select a way of extracting the JSON data in the file. |
JSON field |
List of the JSON fields to be extracted. |
Loop Jsonpath query |
Enter the path pointing to the node within the JSON field on which the loop is based. Note: if you have selected Xpath from the Read by drop-down list, the Loop XPath query field is displayed instead. |
Mapping |
Complete this table to map the columns defined in the schema to the corresponding JSON nodes. |
Die on error |
Select the check box to stop the execution of the Job when an error occurs. Clear the check box to skip any rows on error and complete the process for error-free rows. |
Advanced settings
Encoding |
Select the encoding from the list or select Custom and define it manually. |
Global Variables
Global Variables |
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string.
NB_LINE: the number of rows read by an input component or transferred to an output component. This is an After variable and it returns an integer.
A Flow variable functions during the execution of a component, while an After variable functions after the execution of the component.
To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
For further information about variables, see the Talend Studio User Guide. |
Usage
Usage rule |
In a Talend Map/Reduce Job, this component is used as an intermediate step, and the other components used along with it must be Map/Reduce components too. They generate native Map/Reduce code that can be executed directly in Hadoop. You need to use the Hadoop Configuration tab in the Run view to define the connection to a given Hadoop distribution for the whole Job. For further information about a Talend Map/Reduce Job, see the Talend Big Data Getting Started Guide. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs. |
Related scenarios
No scenario is available for the Map/Reduce version of this component yet.
tExtractJSONFields properties for Apache Spark Batch
These properties are used to configure tExtractJSONFields running in the Spark Batch Job framework.
The Spark Batch tExtractJSONFields component belongs to the Processing family.
The component in this framework is available in all subscription-based Talend products with Big Data and Talend Data Fabric.
Basic settings
Property type |
Either Built-In or Repository. Built-In: No property data stored centrally. Repository: Select the repository file where the properties are stored. |
Schema and Edit schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema. Built-In: You create and store the schema locally for this component only. Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. |
Read by |
Select a way of extracting the JSON data in the file. |
JSON field |
List of the JSON fields to be extracted. |
Loop Jsonpath query |
Enter the path pointing to the node within the JSON field on which the loop is based. Note: if you have selected Xpath from the Read by drop-down list, the Loop XPath query field is displayed instead. |
Mapping |
Complete this table to map the columns defined in the schema to the corresponding JSON nodes. |
Die on error |
Select the check box to stop the execution of the Job when an error occurs. |
Advanced settings
Encoding |
Select the encoding from the list or select Custom and define it manually. |
Usage
Usage rule |
This component is used as an intermediate step. This component, along with the Spark Batch component Palette it belongs to, appears only when you are creating a Spark Batch Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs. |
Spark Connection |
In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files. This connection is effective on a per-Job basis. |
Related scenarios
No scenario is available for the Spark Batch version of this component yet.
tExtractJSONFields properties for Apache Spark Streaming
These properties are used to configure tExtractJSONFields running in the Spark Streaming Job framework.
The Spark Streaming tExtractJSONFields component belongs to the Processing family.
This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.
Basic settings
Property type |
Either Built-In or Repository. Built-In: No property data stored centrally. Repository: Select the repository file where the properties are stored. |
Schema and Edit schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema. Built-In: You create and store the schema locally for this component only. Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. |
Read by |
Select a way of extracting the JSON data in the file. |
JSON field |
List of the JSON fields to be extracted. |
Loop Jsonpath query |
Enter the path pointing to the node within the JSON field on which the loop is based. Note: if you have selected Xpath from the Read by drop-down list, the Loop XPath query field is displayed instead. |
Mapping |
Complete this table to map the columns defined in the schema to the corresponding JSON nodes. |
Die on error |
Select the check box to stop the execution of the Job when an error occurs. |
Advanced settings
Encoding |
Select the encoding from the list or select Custom and define it manually. |
Usage
Usage rule |
This component is used as an intermediate step. This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs. |
Spark Connection |
In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files. This connection is effective on a per-Job basis. |
Related scenarios
For a related scenario, see Analyzing a Twitter flow in near real-time.
tExtractJSONFields Storm properties (deprecated)
These properties are used to configure tExtractJSONFields running in the Storm Job framework.
The Storm tExtractJSONFields component belongs to the Processing family.
This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.
The Storm framework is deprecated from Talend 7.1 onwards. Use Talend Jobs for Apache Spark Streaming to accomplish your Streaming related tasks.
Basic settings
Property type |
Either Built-In or Repository. Built-In: No property data stored centrally. Repository: Select the repository file where the properties are stored. |
Schema and Edit schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema. Built-In: You create and store the schema locally for this component only. Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. |
JSON field |
List of the JSON fields to be extracted. |
Loop XPath query |
Node within the JSON field, on which the loop is based. |
Mapping |
Column: schema columns defined to hold the extracted data.
XPath Query: XPath query to specify the node of the JSON field holding the desired data.
Get nodes: select this check box to extract the JSON data of all the nodes specified in the XPath queries.
Is Array: select this check box when the JSON field to be extracted is an array instead of an object. |
Advanced settings
Encoding |
Select the encoding from the list or select Custom and define it manually. |
Usage
Usage rule |
If you have subscribed to one of the Talend solutions with Big Data, you can use this component in a Talend Storm Job as an intermediate step. The Storm version does not support the use of the global variables. You need to use the Storm Configuration tab in the Run view to define the connection to a given Storm system for the whole Job. This connection is effective on a per-Job basis. For further information about a Talend Storm Job, see the Talend Big Data Getting Started Guide. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs. |
Related scenarios
No scenario is available for the Storm version of this component yet.