
The tSalesforceOutputBulk and tSalesforceBulkExec components are used together: the first generates the needed
file and the second executes the intended actions on that file in Salesforce.com. These two
steps compose the tSalesforceOutputBulkExec component,
detailed in a separate section. The advantage of having two separate components is that
transformations can be carried out on the data before it is loaded.
Component family: Business/Cloud
Function: tSalesforceOutputBulk generates the bulk file to be processed by tSalesforceBulkExec.
Purpose: Prepares the file to be processed by tSalesforceBulkExec for executions in Salesforce.com.
Basic settings

File Name: Type in the path to the directory where you store the generated file.
Append: Select the check box to write new data at the end of the existing file.
Schema and Edit schema: A schema is a row description. It defines the number of fields to be processed and passed on to the next component. Since version 5.6, both the Built-In mode and the Repository mode are available. Click Edit schema to make changes to the schema. Click Sync columns to retrieve the schema from the preceding component.
Ignore NULL fields values: Select this check box to ignore NULL values in Update or Upsert mode.
Advanced settings

Relationship mapping for upsert: Click the [+] button to add lines and define the mapping:
- Column name of Talend schema: the column of the Talend schema used for the mapping.
- Lookup field name: the name of the lookup field.
- External id name: the name of the external ID field.
- Polymorphic: select this check box when the lookup field is polymorphic; in that case, also fill in Module name.
- Module name: the name of the lookup module.
tStatCatcher Statistics: Select this check box to gather the Job processing metadata at the Job level as well as at each component level.
Global Variables

NB_LINE: the number of rows read by an input component or transferred to an output component. This is an After variable and it returns an integer.
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string.
A Flow variable functions during the execution of a component, while an After variable functions after the execution of the component.
To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
For further information about variables, see Talend Studio User Guide.
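At runtime, these variables are read from the Job's globalMap. The sketch below illustrates the lookup pattern only; the key and value are hypothetical, and a plain HashMap stands in for the studio-managed map.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of how a Talend Job reads a component's After variable.
// An After variable is stored in globalMap under "<componentId>_<VARIABLE>".
public class GlobalVarsDemo {

    // Returns the NB_LINE After variable for a component, or 0 if unset.
    public static int readNbLine(Map<String, Object> globalMap, String componentId) {
        Object value = globalMap.get(componentId + "_NB_LINE");
        return value == null ? 0 : (Integer) value;
    }

    public static void main(String[] args) {
        Map<String, Object> globalMap = new HashMap<>();
        // hypothetical value, as if tSalesforceOutputBulk_1 had written 42 rows
        globalMap.put("tSalesforceOutputBulk_1_NB_LINE", 42);
        System.out.println(readNbLine(globalMap, "tSalesforceOutputBulk_1"));
    }
}
```

In an actual Job you would write the equivalent expression in a component field after pressing Ctrl + Space, rather than in standalone code.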
Usage

This component is intended to be used along with the tSalesforceBulkExec component. Used together, they improve performance when loading data into Salesforce.com.
Log4j

The activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide. For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.
Limitation

Due to license incompatibility, one or more JARs required to use this component are not provided; you need to install them before the component can run.
This scenario describes a six-component Job that transforms .csv
data into a form suitable for bulk processing, loads it into Salesforce.com, and then displays the Job
execution results in the console.

This Job is composed of two steps: preparing data by transformation and processing the
transformed data.
Before starting this scenario, you need to prepare the input file that provides the
data to be processed by the Job. In this use case, this file is
sforcebulk.txt, containing some customer information.
Then, to create and execute the Job, proceed as follows:
-
Drop tFileInputDelimited, tMap, tSalesforceOutputBulk, tSalesforceBulkExec and tLogRow from the Palette
onto the workspace of your studio. -
Use a Row > Main connection to connect tFileInputDelimited to tMap, and Row > out1 from tMap
to tSalesforceOutputBulk. -
Use a Row > Main connection and a Row
> Reject connection to connect tSalesforceBulkExec respectively to the two
tLogRow components. -
Use a Trigger > OnSubjobOk connection to connect tFileInputDelimited and tSalesforceBulkExec.
-
Double-click tFileInputDelimited to
display its Basic settings view and define
the component properties. -
From the Property Type list, select
Repository if you have already stored
the connection to the salesforce server in the Metadata node of the Repository tree view. The property fields that follow are
automatically filled in. If you have not defined the server connection
locally in the Repository, fill in the details manually after selecting
Built-in from the Property Type list. For more information about how to create the delimited file metadata, see
Talend Studio User
Guide. -
Next to the File name/Stream field, click
the […] button to browse to the input
file you prepared for the scenario, for example,
sforcebulk.txt. -
From the Schema list, select Repository and then click the three-dot button to
open a dialog box where you can select the repository schema you want to use
for this component. If you have not defined your schema locally in the
metadata, select Built-in from the
Schema list and then click the
three-dot button next to the Edit schema
field to open the dialog box to set the schema manually. In
this scenario, the schema is made of four columns:
Name, ParentId,
Phone and Fax. -
Set the other fields, such as Row Separator and Field Separator,
according to your input file.
-
Double-click the tMap component to open
its editor and set the transformation. -
Drop all columns from the input table to the output table.
-
Append
.toUpperCase()
to the expression of the Name
column. -
Click OK to validate the
transformation.
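The expression added in tMap is plain Java. As a sketch of the per-row effect of this mapping (the sample values are hypothetical, not taken from the scenario file):

```java
// Minimal sketch of the tMap step: all four schema columns pass through
// unchanged except Name, whose expression gets .toUpperCase() appended.
public class MapDemo {

    // Mirrors the out1 mapping for one row of the four-column schema.
    public static String[] mapRow(String name, String parentId, String phone, String fax) {
        return new String[] { name.toUpperCase(), parentId, phone, fax };
    }

    public static void main(String[] args) {
        // hypothetical input row
        String[] out = mapRow("Talend", "0017000000abcde", "+33 1 00 00 00 00", "n/a");
        System.out.println(out[0]); // TALEND
    }
}
```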
-
Double-click tSalesforceOutputBulk to
display its Basic settings view and define
the component properties. -
In the File Name field, type in or browse
to the directory where you want to store the generated
.csv data for bulk processing. -
Click Sync columns to import the schema
from its preceding component.
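The file that tSalesforceOutputBulk prepares is a plain CSV whose header row matches the component schema. A minimal sketch of that shape, assuming the four-column schema of this scenario and a hypothetical data row:

```java
// Sketch of the kind of file tSalesforceOutputBulk prepares: a plain CSV
// whose header matches the component schema, ready to be handed to
// tSalesforceBulkExec. The data row below is hypothetical.
public class BulkFileDemo {

    // Builds the CSV content for the scenario's Name/ParentId/Phone/Fax schema.
    public static String buildCsv(String[][] rows) {
        StringBuilder sb = new StringBuilder("Name,ParentId,Phone,Fax\n");
        for (String[] row : rows) {
            sb.append(String.join(",", row)).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(buildCsv(new String[][] {
            { "TALEND", "0017000000abcde", "+33 1 00 00 00 00", "n/a" }
        }));
    }
}
```

In the Job itself you never build this file by hand; the component writes it from the rows it receives, which is why Sync columns matters at this step.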
-
Double-click tSalesforceBulkExec to
display its Basic settings view and define
the component properties. -
Use the default URL of the Salesforce Web service or enter the URL you
want to access. -
In the Username and Password fields, enter your username and password for the
Web service. -
In the Bulk file path field, browse to
the directory where the .csv file generated by
tSalesforceOutputBulk is stored. -
From the Action list, select the action
you want to carry out on the prepared bulk data. In this use case, select insert. -
From the Module list, select the object
you want to access, Account in this
example. -
From the Schema list, select Repository and then click the three-dot button to
open a dialog box where you can select the repository schema you want to use
for this component. If you have not defined your schema locally in the
metadata, select Built-in from the
Schema list and then click the
three-dot button next to the Edit schema
field to open the dialog box to set the schema manually. In
this example, edit it to conform to the schema defined previously.
-
Double-click tLogRow_1 to display its
Basic settings view and define the
component properties. -
Click Sync columns to retrieve the schema
from the preceding component. -
Select Table mode to display the
execution result. -
Do the same with tLogRow_2.
-
Press Ctrl+S to save your Job.
-
Press F6 to execute it.
You can check the execution result in the Run console.
In the tLogRow_1 table, you can read the data inserted into Salesforce.com.
In the tLogRow_2 table, you can read the data rejected because of incompatibility with the Account objects you accessed.
All the customer names are written in upper case.