
Can't get a stream of Tweets using Twitter Streaming API on Spark

I am trying to do some analysis on Tweets using the Twitter Streaming API.

I first wanted to print the Status messages from the stream, and start from there.

My code is shown below:

public static void main(String[] args) {
  SparkConf conf = new SparkConf().setAppName("TwitterStreamPrinter").setMaster("local");

  Configuration twitterConf = new ConfigurationBuilder()
      .setOAuthConsumerKey(consumerKey)
      .setOAuthConsumerSecret(consumerSecret)
      .setOAuthAccessToken(accessToken)
      .setOAuthAccessTokenSecret(accessTokenSecret).build();
  OAuth2Authorization auth = new OAuth2Authorization(twitterConf);
  JavaReceiverInputDStream<Status> twitterStream = TwitterUtils.createStream(ssc, auth);

  JavaDStream<String> statuses = twitterStream.map(new Function<Status, String>() {
    public String call(Status status) throws Exception {
      return status.getText();
    }
  });
  statuses.print();
}

It does not print out anything other than Spark logs. I initially thought that this was because of authorization, so I tried all kinds of different ways to pass the authorization, but perhaps authorization is not the problem.

I looked at every example I could find on the web (although there are not many), and this code looks like standard code for getting Twitter statuses, so why is it not printing anything? I also tried System.out.println, but it didn't work.

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/03/19 12:02:23 INFO SparkContext: Running Spark version 1.6.1
16/03/19 12:02:24 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/03/19 12:02:24 INFO SecurityManager: Changing view acls to: abcd
16/03/19 12:02:24 INFO SecurityManager: Changing modify acls to: abcd
16/03/19 12:02:24 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(abcd); users with modify permissions: Set(abcd)
16/03/19 12:02:24 INFO Utils: Successfully started service 'sparkDriver' on port 50995.
16/03/19 12:02:24 INFO Slf4jLogger: Slf4jLogger started
16/03/19 12:02:25 INFO Remoting: Starting remoting
16/03/19 12:02:25 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@10.0.0.12:51003]
16/03/19 12:02:25 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 51003.
16/03/19 12:02:25 INFO SparkEnv: Registering MapOutputTracker
16/03/19 12:02:25 INFO SparkEnv: Registering BlockManagerMaster
16/03/19 12:02:25 INFO DiskBlockManager: Created local directory at /private/var/folders/3b/wzflbsn146qgwdglbm_6ms3m0000hl/T/blockmgr-e3de07a6-0c62-47cf-9940-da18382c9241
16/03/19 12:02:25 INFO MemoryStore: MemoryStore started with capacity 2.4 GB
16/03/19 12:02:25 INFO SparkEnv: Registering OutputCommitCoordinator
16/03/19 12:02:25 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/03/19 12:02:25 INFO SparkUI: Started SparkUI at http://10.0.0.12:4040
16/03/19 12:02:25 INFO Executor: Starting executor ID driver on host localhost
16/03/19 12:02:25 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 51016.
16/03/19 12:02:25 INFO NettyBlockTransferService: Server created on 51016
16/03/19 12:02:25 INFO BlockManagerMaster: Trying to register BlockManager
16/03/19 12:02:25 INFO BlockManagerMasterEndpoint: Registering block manager localhost:51016 with 2.4 GB RAM, BlockManagerId(driver, localhost, 51016)
16/03/19 12:02:25 INFO BlockManagerMaster: Registered BlockManager
16/03/19 12:02:25 WARN StreamingContext: spark.master should be set as local[n], n > 1 in local mode if you have receivers to get data, otherwise Spark jobs will not get resources to process the received data.
16/03/19 12:02:26 INFO SparkContext: Invoking stop() from shutdown hook
16/03/19 12:02:26 INFO SparkUI: Stopped Spark web UI at http://10.0.0.12:4040
16/03/19 12:02:26 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/03/19 12:02:26 INFO MemoryStore: MemoryStore cleared
16/03/19 12:02:26 INFO BlockManager: BlockManager stopped
16/03/19 12:02:26 INFO BlockManagerMaster: BlockManagerMaster stopped
16/03/19 12:02:26 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/03/19 12:02:26 INFO SparkContext: Successfully stopped SparkContext
16/03/19 12:02:26 INFO ShutdownHookManager: Shutdown hook called
16/03/19 12:02:26 INFO ShutdownHookManager: Deleting directory /private/var/folders/3b/.....

You have everything in your logs:

16/03/19 12:02:25 WARN StreamingContext: spark.master should be set as local[n], n > 1 in local mode if you have receivers to get data, otherwise Spark jobs will not get resources to process the received data.

So the answer is: set the master to local[*].

In addition, did you forget to start the streaming context?

jssc.start(); // Start the computation

jssc.awaitTermination();
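Putting both fixes together, a minimal corrected sketch could look like the following. The 2-second batch interval and the placeholder credential strings are assumptions for illustration; also note that twitter4j's streaming endpoints require user-context OAuth, so `OAuthAuthorization` is used here rather than `OAuth2Authorization`:

```java
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.twitter.TwitterUtils;
import twitter4j.Status;
import twitter4j.auth.OAuthAuthorization;
import twitter4j.conf.Configuration;
import twitter4j.conf.ConfigurationBuilder;

public class TwitterStreamPrinter {
  public static void main(String[] args) throws InterruptedException {
    // local[*] (or local[n] with n > 1): the receiver occupies one core,
    // so at least one more core is needed to process the received data.
    SparkConf conf = new SparkConf()
        .setAppName("TwitterStreamPrinter")
        .setMaster("local[*]");
    // Batch interval of 2 seconds is an arbitrary choice for this sketch.
    JavaStreamingContext jssc =
        new JavaStreamingContext(conf, Durations.seconds(2));

    // Placeholder credentials -- substitute your own keys/tokens.
    Configuration twitterConf = new ConfigurationBuilder()
        .setOAuthConsumerKey("YOUR_CONSUMER_KEY")
        .setOAuthConsumerSecret("YOUR_CONSUMER_SECRET")
        .setOAuthAccessToken("YOUR_ACCESS_TOKEN")
        .setOAuthAccessTokenSecret("YOUR_ACCESS_TOKEN_SECRET")
        .build();
    // Streaming requires user-context OAuth, not app-only OAuth2.
    OAuthAuthorization auth = new OAuthAuthorization(twitterConf);

    JavaReceiverInputDStream<Status> twitterStream =
        TwitterUtils.createStream(jssc, auth);
    JavaDStream<String> statuses =
        twitterStream.map(status -> status.getText());
    statuses.print();

    // Nothing runs until the context is started.
    jssc.start();
    jssc.awaitTermination();
  }
}
```

Without the `start()` call the DStream graph is only defined, never executed, which matches the symptom of seeing nothing but Spark's startup and shutdown logs.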
