tJapaneseTokenize
Splits Japanese text into tokens.
Tokenization is an important pre-processing step that prepares text data for subsequent
analysis, transliteration, text mining, or natural language processing tasks.
Unlike English or French, Japanese does not use spaces to mark word boundaries,
which makes splitting Japanese text into tokens more challenging.
Based on the IPADIC dictionary, tJapaneseTokenize deduces where word
boundaries exist and adds a space to separate tokens.
The IPADIC dictionary was developed by the Information-Technology Promotion Agency of
Japan (IPA). This dictionary is based on the IPA corpus and is the most widely used
dictionary for Japanese tokenization.
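To illustrate the idea of dictionary-based word-boundary detection, here is a toy sketch in Python. The hand-made `DICTIONARY` set and the greedy longest-match strategy are simplifications for illustration only: real IPADIC-based tokenizers (such as MeCab or Kuromoji, on which this kind of component typically relies) build a word lattice over the input and choose the best-scoring path rather than matching greedily.

```python
# Toy dictionary-based tokenizer: greedy longest-match against a tiny
# hand-made word list. This is NOT the IPADIC/lattice algorithm; it only
# shows how a dictionary lets us deduce word boundaries in unspaced text.
DICTIONARY = {"私", "は", "日本語", "を", "勉強", "し", "て", "い", "ます"}

def tokenize(text: str) -> list[str]:
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest dictionary entry starting at position i.
        for j in range(len(text), i, -1):
            if text[i:j] in DICTIONARY:
                tokens.append(text[i:j])
                i = j
                break
        else:
            # Unknown character: emit it as a single-character token.
            tokens.append(text[i])
            i += 1
    return tokens

# Separate the deduced tokens with spaces, as tJapaneseTokenize does.
print(" ".join(tokenize("私は日本語を勉強しています")))
# → 私 は 日本語 を 勉強 し て い ます
```

The output shows the component's behavior in miniature: the unspaced input sentence comes back with a space inserted at each deduced token boundary.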
In local mode, Apache Spark 1.6.0, 2.1.0, 2.3.0 and 2.4.0 are supported.
Depending on the Talend
product you are using, this component can be used in one, some or all of the following
Job frameworks:
- Standard: see tJapaneseTokenize Standard properties.
The component in this framework is available in Talend Data Management Platform, Talend Big Data Platform, Talend Real Time Big Data Platform, Talend Data Services Platform, Talend MDM Platform and in Talend Data Fabric.
- Spark Batch: see tJapaneseTokenize properties for Apache Spark Batch.
The component in this framework is available in all Talend Platform products with Big Data and in Talend Data Fabric.
- Spark Streaming: see tJapaneseTokenize properties for Apache Spark Streaming.
The component in this framework is available in Talend Real Time Big Data Platform and in Talend Data Fabric.
- tJapaneseTokenize Standard properties
  These properties are used to configure tJapaneseTokenize running in the Standard Job framework.
- Tokenizing Japanese text
- tJapaneseTokenize properties for Apache Spark Batch
  These properties are used to configure tJapaneseTokenize running in the Spark Batch Job framework.
- tJapaneseTokenize properties for Apache Spark Streaming
  These properties are used to configure tJapaneseTokenize running in the Spark Streaming Job framework.