tFileInputDelimited properties for Apache Spark Batch – Docs for ESB 6.x
These properties are used to configure tFileInputDelimited running in the Spark Batch Job framework. The Spark Batch tFileInputDelimited component belongs to the File family. The component in this framework is available only if you have subscribed to one of the Talend solutions with Big Data. Basic settings Define a…
tFileOutputDelimited properties for Apache Spark Streaming
These properties are used to configure tFileOutputDelimited running in the Spark Streaming Job framework. The Spark Streaming tFileOutputDelimited component belongs to the File family. The component in this framework is available only if you have subscribed to Talend Real-time Big Data Platform or Talend Data Fabric. Basic settings Define…
tFileInputDelimited properties for Apache Spark Streaming
These properties are used to configure tFileInputDelimited running in the Spark Streaming Job framework. The Spark Streaming tFileInputDelimited component belongs to the File family. The component in this framework is available only if you have subscribed to Talend Real-time Big Data Platform or Talend Data Fabric. Basic settings Define…
Writing and reading data from MongoDB using a Spark Batch Job
This scenario applies only to a subscription-based Talend solution with Big Data. In this scenario, you create a Spark Batch Job to write data about some movie directors into the MongoDB default database and then read the data from this database. The sample data…
Scenario: Deploying your Job on Talend Runtime to retrieve data from a MySQL database
This scenario describes a two-component Job that retrieves data from a MySQL database table and displays the data on the console. The Job will be deployed in Talend Runtime and will use the data source created on Talend Runtime to connect…
Scenario: Retrieving data in error with a Reject link
This scenario describes a four-component Job that carries out migration from a customer file to a MySQL database table and redirects data in error towards a CSV file using a Reject link. In the Repository, select the customer file metadata that you want to migrate and…
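The Reject-link idea the scenario describes can be sketched outside Talend as a simple row router: rows that pass validation continue to the main flow, while rows in error are written to a CSV file. This is an illustrative sketch only, assuming an invented validation rule and invented field names, not the checks defined by the scenario's customer file schema.

```python
import csv
import io

# Sketch of a Reject link: rows failing validation are routed to a reject
# output instead of the main flow. The rule (age must be numeric) and the
# field names are invented for illustration.
rows = [
    {"id": "1", "name": "Alice", "age": "34"},
    {"id": "2", "name": "Bob", "age": "not-a-number"},  # will be rejected
]

accepted, rejected = [], []
for row in rows:
    if row["age"].isdigit():
        accepted.append(row)   # main flow: would go to the MySQL table
    else:
        rejected.append(row)   # Reject link: goes to the CSV error file

# Write the rejected rows as CSV, as the scenario's Reject link does.
reject_csv = io.StringIO()
writer = csv.DictWriter(reject_csv, fieldnames=["id", "name", "age"])
writer.writeheader()
writer.writerows(rejected)
print(reject_csv.getvalue())
```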
Reading and writing data in MongoDB using a Spark Streaming Job
This scenario applies only to Talend Real-time Big Data Platform or Talend Data Fabric. In this scenario, you create a Spark Streaming Job to extract data about given movie directors from MongoDB, use this data to filter and complete movie information and then write…
Scenario: Deduplicating entries using Map/Reduce components
This scenario applies only to a subscription-based Talend Platform solution with Big Data or Talend Data Fabric. This scenario illustrates how to create a Talend Map/Reduce Job to deduplicate entries, that is to say, to use Map/Reduce components to generate Map/Reduce code and run the Job right in Hadoop….
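The core of what the deduplication Job computes can be sketched in plain Python (illustrative only; the actual Job generates Map/Reduce code and runs in Hadoop, and the key and field names below are hypothetical):

```python
# Sketch of deduplication: group records by a key and keep only the first
# occurrence of each key value. Field names are invented for illustration.
def deduplicate(records, key):
    """Return records with duplicate values of `key` removed, keeping first occurrences."""
    seen = set()
    unique = []
    for record in records:
        k = record[key]
        if k not in seen:
            seen.add(k)
            unique.append(record)
    return unique

entries = [
    {"id": 1, "name": "Griffith Paving"},
    {"id": 2, "name": "Bill's Dive Shop"},
    {"id": 1, "name": "Griffith Paving"},  # duplicate entry
]
print(deduplicate(entries, "id"))
```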
Scenario: Inserting a column and altering data using tMysqlOutput
This Java scenario is a three-component Job that creates random data using tRowGenerator, duplicates a column to be altered using the tMap component, and finally alters the data to be inserted based on an SQL expression using the tMysqlOutput component. Drop the following…
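A rough stand-in for the three steps of that Job can be written with Python's built-in sqlite3 module in place of MySQL. This is a sketch under stated assumptions: the table name, column names, and the UPPER() expression are all invented for illustration and are not the scenario's actual schema or SQL.

```python
import random
import sqlite3

# Stand-in for the three-component Job: generate random rows (tRowGenerator),
# duplicate a column (tMap), and alter the duplicated value with an SQL
# expression as it is inserted (tMysqlOutput). Uses sqlite3 instead of MySQL;
# all names are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, name_upper TEXT)")

rows = [(i, random.choice(["alice", "bob", "carol"])) for i in range(5)]
for rid, name in rows:
    # The SQL expression UPPER(?) alters the duplicated column on insert.
    conn.execute(
        "INSERT INTO customers (id, name, name_upper) VALUES (?, ?, UPPER(?))",
        (rid, name, name),
    )

for row in conn.execute("SELECT id, name, name_upper FROM customers"):
    print(row)
```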
Updating an issue in the JIRA application
Double-click the tFileInputDelimited component to open its Basic settings view. In the File name/Stream field, specify the path to the JSON file used to update the issue. In this example, a simple JSON file D:/JiraComponents/issue_update.json will be used to update an existing issue with the key DOC-2 that is…
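The excerpt does not show what issue_update.json contains. As a hedged sketch, the following builds a payload in the JIRA REST API's "fields" layout, which is one plausible shape for such an update file; the summary and description values are examples only, not taken from the scenario.

```python
import json

# Hypothetical content for the issue_update.json file named in the scenario
# (D:/JiraComponents/issue_update.json). The layout assumes the JIRA REST
# API's "fields" structure; values are invented examples.
issue_update = {
    "fields": {
        "summary": "Updated summary for issue DOC-2",
        "description": "Description revised through the Talend Job.",
    }
}

# Serialize as the file would appear on disk.
print(json.dumps(issue_update, indent=2))
```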