Analyzing a Twitter flow in near real-time
This scenario applies only to Talend Real-time Big Data Platform or Talend Data Fabric.
In this scenario, you create a Spark Streaming Job that analyzes, at the end of each
15-second interval, which hashtags Twitter users use most often when they mention Paris
in their Tweets posted over the previous 20 seconds.
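The sliding-window logic described above (a 20-second window evaluated every 15 seconds) can be sketched outside Spark with plain Python; the timestamps and hashtags below are illustrative stand-ins for the parsed Twitter stream, not real data:

```python
from collections import Counter

# Hypothetical sample: (timestamp_in_seconds, hashtag) pairs standing in
# for the parsed Twitter stream. Values are illustrative only.
events = [
    (1, "#paris"), (4, "#travel"), (9, "#paris"),
    (16, "#paris"), (18, "#food"), (27, "#travel"),
]

WINDOW = 20   # window duration in seconds
SLIDE = 15    # slide interval in seconds

def top_hashtags(events, window_end):
    """Count hashtags seen during the WINDOW seconds ending at window_end,
    most frequent first."""
    start = window_end - WINDOW
    counts = Counter(tag for t, tag in events if start < t <= window_end)
    return counts.most_common()

# A new window is evaluated every SLIDE seconds, just as the Job
# produces a result at the end of each 15-second interval.
for end in (SLIDE, 2 * SLIDE):
    print(end, top_hashtags(events, end))
```

In the actual Job, Spark Streaming performs the equivalent computation with its window operations; this sketch only shows the counting semantics.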
An open source third-party program is used to receive Twitter streams and write them to a
given Kafka topic, for example twitter_live; the Job you
design in this scenario consumes the Tweets from that topic.
A row of Twitter raw data with hashtags reads like the example presented at https://dev.twitter.com/overview/api/entities-in-twitter-objects#hashtags.
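As a minimal sketch of what consuming such a row involves, the snippet below parses a trimmed, illustrative Tweet object (only the fields used here are shown; the full structure is documented at the URL above) and pulls out the hashtag texts:

```python
import json

# A trimmed, hypothetical Tweet object following the entities structure
# documented by Twitter; real Tweets carry many more fields.
raw = """
{
  "text": "Lovely evening in Paris #paris #travel",
  "entities": {
    "hashtags": [
      {"text": "paris", "indices": [24, 30]},
      {"text": "travel", "indices": [31, 38]}
    ]
  }
}
"""

tweet = json.loads(raw)
# Each entry in entities.hashtags holds the hashtag text (without the
# leading #) and its character offsets in the Tweet text.
hashtags = [h["text"] for h in tweet["entities"]["hashtags"]]
print(hashtags)  # ['paris', 'travel']
```

In the scenario itself, rows like this arrive from the Kafka topic and are parsed inside the Spark Streaming Job rather than from a literal string.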
Before replicating this scenario, ensure that your Kafka system is up and running and
that you have the rights and permissions required to access the Kafka topic to be used. You
also need a Twitter-streaming program to transfer Twitter streams into Kafka in near
real-time. Talend does not provide this kind of program, but free programs
created for this purpose are available in online communities such as GitHub.
To replicate this scenario, proceed as follows: