
Map a table of a cassandra database using spark and RDD

I have to map a table in which the usage history of an application is written. The table contains tuples like these:

<AppId,date,cpuUsage,memoryUsage>
<AppId,date,cpuUsage,memoryUsage>
<AppId,date,cpuUsage,memoryUsage>
<AppId,date,cpuUsage,memoryUsage>
<AppId,date,cpuUsage,memoryUsage>

AppId is always different, since it refers to many different applications; date is expressed in the format dd/mm/yyyy hh/mm; cpuUsage and memoryUsage are expressed as percentages, for example:

<3ghffh3t482age20304,230720142245,0.2,3.5>

I retrieved the data from Cassandra this way (small snippet):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();
        session.execute("CREATE KEYSPACE IF NOT EXISTS foo WITH replication "
                + "= {'class':'SimpleStrategy', 'replication_factor':3};");
        // date is the clustering column, ordered ascending
        String createTableAppUsage = "CREATE TABLE IF NOT EXISTS foo.appusage"
                + "(appid text, date text, cpuusage double, memoryusage double, "
                + "PRIMARY KEY(appid, date)) WITH CLUSTERING ORDER BY (date ASC);";
        session.execute(createTableAppUsage);
        // Use select to get the appusage table's rows
        ResultSet resultForAppUsage = session.execute("SELECT appid,cpuusage FROM foo.appusage");
        for (Row row : resultForAppUsage)
            System.out.println("appid: " + row.getString("appid") + " cpuusage: " + row.getDouble("cpuusage"));
        // Clean up the connection by closing it
        cluster.close();
    }

So, my problem now is how to map the data into key/value pairs and create a tuple like the one below, integrating this (non-working) snippet:

        <AppId,cpuusage>

        JavaPairRDD<String, Integer> saveTupleKeyValue =
                someStructureFromTakeData.mapToPair(new PairFunction<String, String, Integer>() {
                    public Tuple2<String, Integer> call(String x) {
                        return new Tuple2<>(x, y); // y: the cpuusage value, which I don't know how to obtain here
                    }
                });

How can I map appId and cpuusage using RDD and reduce, e.g. keeping only the entries with cpuusage > 50?

Any help?

Thanks in advance.
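Ignoring Spark for a moment, the transformation I am after is just pairing each row's appid with its cpuusage and keeping the pairs above a threshold. A minimal plain-Java sketch of that logic (the class and method names here are made up for illustration):

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class AppUsageFilter {
    // Pair each row's appid with its cpuusage and keep the pairs above the
    // threshold. This mirrors what mapToPair followed by filter would do on an RDD.
    static Map<String, Double> heavyUsers(List<Object[]> rows, double threshold) {
        Map<String, Double> result = new LinkedHashMap<>();
        for (Object[] row : rows) {
            String appId = (String) row[0];
            double cpuUsage = (Double) row[1];
            if (cpuUsage > threshold) {
                result.put(appId, cpuUsage);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Object[]> rows = Arrays.asList(
                new Object[]{"3ghffh3t482age20304", 72.5},
                new Object[]{"7aa1bb2cc3dd4ee5ff6", 20.0});
        // Only the first appid exceeds the threshold of 50
        System.out.println(heavyUsers(rows, 50.0));
    }
}
```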

Assuming that you have a valid SparkContext sparkContext, that you have added the spark-cassandra connector dependency to your project, and that you have configured your Spark application to talk to your Cassandra cluster (see the documentation), then we can load the data into an RDD like this:

val data = sparkContext.cassandraTable("foo", "appusage").select("appid", "cpuusage")

In Java the idea is the same, but it requires a bit more plumbing, described here.
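As a rough, untested sketch of that Java plumbing, assuming the connector's Java API (CassandraJavaUtil) and an already-configured JavaSparkContext named sc, the load-map-filter chain might look like this:

```java
import static com.datastax.spark.connector.japi.CassandraJavaUtil.javaFunctions;

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class AppUsageJob {
    // Load (appid, cpuusage) pairs from Cassandra, then keep the pairs above 50.
    static JavaPairRDD<String, Double> heavyUsers(JavaSparkContext sc) {
        JavaPairRDD<String, Double> appCpu = javaFunctions(sc)
                .cassandraTable("foo", "appusage")   // keyspace, table
                .select("appid", "cpuusage")
                .mapToPair(row -> new Tuple2<>(
                        row.getString("appid"), row.getDouble("cpuusage")));
        // Keep only the applications whose cpu usage exceeds 50
        return appCpu.filter(pair -> pair._2() > 50);
    }
}
```

From there, reduceByKey (or an aggregation of your choice) can combine the surviving values per appid.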

