Convert Rows to Columns in PYSPARK
I am working on a project where I need to transpose data. In the past I have done this with SAS and with SQL, and both were very fast. Here I used the expr function with stack, as described below (in the code section).
The problem I am facing is twofold.
What I have done so far: the data is stored as parquet files in an Azure Synapse Workspace. First, I assigned a row number (ROWNUM) to every row in the dataframe. Then I split the data into two dataframes.
Step 2 is the killer; I cannot get past it because the session terminates after 4 hours.
I also tried Spark SQL, with no luck. I have also been advised against using SQL in Spark because it supposedly degrades performance.
I am also considering doing the transpose outside of PySpark (I am not sure how, or whether that is advisable).
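For illustration, the equivalent unpivot outside Spark with pandas `melt` might look like this (a minimal sketch; the local path and the assumption that the data fits in single-machine memory are both hypothetical):

import pandas as pd

# hypothetical local copy of the claims data; assumes it fits in memory
pdf = pd.read_parquet("med_claims/")
code_cols = [c for c in pdf.columns if c.startswith("CODE")]
id_cols = [c for c in pdf.columns if c not in code_cols]
long_df = (
    pdf.melt(id_vars=id_cols, value_vars=code_cols, value_name="TRANSPOSED_DIAGNOSIS")
       .drop(columns="variable")
       .dropna(subset=["TRANSPOSED_DIAGNOSIS"])
)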
The code I have written so far:
import sys
import pyspark.sql as t
import pyspark.sql.functions as f
from pyspark.sql.types import *
df_raw=spark.read.parquet("abfss:path/med_claims/*.parquet")
df_rn = df_raw.withColumn(
    "ROWNUM",
    f.row_number().over(t.Window.orderBy(df_raw.MEMBER_ID, df_raw.SERVICE_FROM_DATE, df_raw.SERVICE_THRU_DATE))
)
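# note: row_number over a Window with orderBy but no partitionBy moves every row to a
# single partition (Spark logs a warning about this), which is a likely cause of the
# multi-hour runtime mentioned above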
df1=df_rn.select(
df_rn.ROWNUM,
df_rn.MEMBER_ID,
df_rn.MEMBER_ID_DEPENDENT,
df_rn.SERVICE_FROM_DATE,
df_rn.SERVICE_THRU_DATE,
df_rn.SERVICE_PROCEDURE_CODE
)
df2=df_rn.select(df_rn.ROWNUM,
f.expr("stack(25, code1, code2, code3, code4, code5, \
code6, code7, code8, code9, code10, \
code11, code12, code13, code14, code15, \
code16, code17, code18, code19, code20, \
code21, code22, code23, code24, code25) as (TRANPOSED_DIAG)")) \
.dropDuplicates() \
.where(" (TRANPOSED_DIAG IS NOT NULL) OR (TRIM(TRANPOSED_DIAG) <> '') ")
df3=df1.join(df2, df1.ROWNUM == df2.ROWNUM, 'left') \
.select(df1.ROWNUM,
df1.MEMBER_ID,
df1.MEMBER_ID_DEPENDENT,
df1.SERVICE_FROM_DATE,
df1.SERVICE_THRU_DATE,
df1.SERVICE_PROCEDURE_CODE,
df2.TRANPOSED_DIAG
)
Input data:
MEMBER_ID | MEMBER_ID_DEPENDENT | PROVIDER_KEY | REVENUE_KEY | PLACE_OF_SERVICE_KEY | SERVICE_FROM_DATE | SERVICE_THRU_DATE | SERVICE_PROCEDURE_CODE | CODE1 | CODE2 | CODE3 | CODE4 | CODE5 | CODE6 | CODE7 | CODE8 | CODE9 | CODE10 | CODE11 | CODE12 | CODE13 | CODE14 | CODE15 | CODE16 | CODE17 | CODE18 | CODE19 | CODE20 | CODE21 | CODE22 | CODE23 | CODE24 | CODE25 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
A1 | A11 | AB05547 | 4.85148E+12 | 7.96651E+11 | 9/23/2019 0:00 | 9/23/2019 0:00 | 89240 | Z0000 | M25852 | M25851 | Z0000 | M25551 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
A1 | A11 | AB92685 | 4.85148E+12 | 7.96651E+11 | 10/23/2020 0:00 | 10/23/2020 0:00 | 89240 | Z524 | Z524 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
A2 | A12 | AB64081 | 4.8515E+12 | 7.96651E+11 | 6/19/2020 0:00 | 6/19/2020 0:00 | 76499 | Z9884 | R109 | K219 | K449 | Z9884 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
A3 | A13 | AB64081 | 4.8515E+12 | 7.96651E+11 | 9/13/2019 0:00 | 9/13/2019 0:00 | 76499 | Z1231 | Z1231 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
A4 | A14 | AB74417 | 4.8515E+12 | 7.96651E+11 | 9/30/2019 0:00 | 9/30/2019 0:00 | 76499 | N210 | N400 | E782 | E119 | I10 | Z87891 | N210 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Expected output:
MEMBER_ID | MEMBER_ID_DEPENDENT | PROVIDER_KEY | REVENUE_KEY | PLACE_OF_SERVICE_KEY | SERVICE_FROM_DATE | SERVICE_THRU_DATE | SERVICE_PROCEDURE_CODE | TRANSPOSED_DIAGNOSIS |
---|---|---|---|---|---|---|---|---|
A1 | A11 | AB05547 | 4851484842551 | 796650504854 | 9/23/2019 0:00 | 9/23/2019 0:00 | 89240 | Z0000 |
A1 | A11 | AB05548 | 4851484842551 | 796650504854 | 9/23/2019 0:00 | 9/23/2019 0:00 | 89241 | M25852 |
A1 | A11 | AB05549 | 4851484842551 | 796650504854 | 9/23/2019 0:00 | 9/23/2019 0:00 | 89242 | M25851 |
A1 | A11 | AB05550 | 4851484842551 | 796650504854 | 9/23/2019 0:00 | 9/23/2019 0:00 | 89243 | M25551 |
A1 | A11 | AB92685 | 4851484842551 | 796650504854 | 10/23/2020 0:00 | 10/23/2020 0:00 | 89240 | Z524 |
A2 | A12 | AB64081 | 4851504842551 | 796650504854 | 6/19/2020 0:00 | 6/19/2020 0:00 | 76499 | Z9884 |
A2 | A12 | AB64082 | 4851504842551 | 796650504854 | 6/19/2020 0:00 | 6/19/2020 0:00 | 76500 | R109 |
A2 | A12 | AB64083 | 4851504842551 | 796650504854 | 6/19/2020 0:00 | 6/19/2020 0:00 | 76501 | K219 |
A2 | A12 | AB64084 | 4851504842551 | 796650504854 | 6/19/2020 0:00 | 6/19/2020 0:00 | 76502 | K449 |
A3 | A13 | AB64081 | 4851504842551 | 796650504854 | 9/13/2019 0:00 | 9/13/2019 0:00 | 76499 | Z1231 |
A4 | A14 | AB74417 | 4851504842551 | 796650504854 | 9/30/2019 0:00 | 9/30/2019 0:00 | 76499 | N210 |
A4 | A14 | AB74418 | 4851504842551 | 796650504854 | 9/30/2019 0:00 | 9/30/2019 0:00 | 76500 | N400 |
A4 | A14 | AB74419 | 4851504842551 | 796650504854 | 9/30/2019 0:00 | 9/30/2019 0:00 | 76501 | E782 |
A4 | A14 | AB74420 | 4851504842551 | 796650504854 | 9/30/2019 0:00 | 9/30/2019 0:00 | 76502 | E119 |
A4 | A14 | AB74421 | 4851504842551 | 796650504854 | 9/30/2019 0:00 | 9/30/2019 0:00 | 76503 | I10 |
A4 | A14 | AB74422 | 4851504842551 | 796650504854 | 9/30/2019 0:00 | 9/30/2019 0:00 | 76504 | Z87891 |
This is going to be an expensive operation with any approach, but you can consider the following, which avoids another expensive join.
For simplicity and code reuse, I filter the desired columns and the code-related columns into separate variables instead of hard-coding them.
Starting from the initial load of `df_raw`, you can try the following:
from pyspark.sql import functions as F
from pyspark.sql import Window
# extract service procedure code columns from `df_raw` by looking for the simple pattern 'CODE'.
# This filter can easily be modified for more complex code column names
service_procedure_cols = [col for col in df_raw.columns if 'CODE' in col and 'SERVICE' not in col]
# extract the desired column names in the dataframe
desired_cols = [col for col in df_raw.columns if 'CODE' not in col or 'SERVICE' in col]
# build the stack expression by counting the number of columns with `len` and joining the column names
code_column_stack_expression = "stack("+str(len(service_procedure_cols))+", "+",".join(service_procedure_cols)+") as (TRANSPOSED_DIAGNOSIS)"
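# for the 25 sample columns above this produces (illustrative, middle columns elided):
# "stack(25, CODE1,CODE2,CODE3, ... ,CODE24,CODE25) as (TRANSPOSED_DIAGNOSIS)"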
df_step_1 = (
    # select the desired column names and unpivot the data
    df_raw.select(desired_cols + [F.expr(code_column_stack_expression)])
    # filter out null and empty diagnosis values
    .where(F.col("TRANSPOSED_DIAGNOSIS").isNotNull() & (F.trim("TRANSPOSED_DIAGNOSIS") != ''))
    # remove duplicates
    .dropDuplicates()
)
df_step_1.show(truncate=False)
Output:
+---------+-------------------+------------+-----------+--------------------+-----------------+-----------------+----------------------+--------------------+
|MEMBER_ID|MEMBER_ID_DEPENDENT|PROVIDER_KEY|REVENUE_KEY|PLACE_OF_SERVICE_KEY|SERVICE_FROM_DATE|SERVICE_THRU_DATE|SERVICE_PROCEDURE_CODE|TRANSPOSED_DIAGNOSIS|
+---------+-------------------+------------+-----------+--------------------+-----------------+-----------------+----------------------+--------------------+
|A1 |A11 |AB05547 |4.85148E+12|7.96651E+11 |9/23/2019 0:00 |9/23/2019 0:00 |89240 |Z0000 |
|A1 |A11 |AB05547 |4.85148E+12|7.96651E+11 |9/23/2019 0:00 |9/23/2019 0:00 |89240 |M25852 |
|A1 |A11 |AB05547 |4.85148E+12|7.96651E+11 |9/23/2019 0:00 |9/23/2019 0:00 |89240 |M25851 |
|A1 |A11 |AB05547 |4.85148E+12|7.96651E+11 |9/23/2019 0:00 |9/23/2019 0:00 |89240 |Z0000 |
|A1 |A11 |AB05547 |4.85148E+12|7.96651E+11 |9/23/2019 0:00 |9/23/2019 0:00 |89240 |M25551 |
|A1 |A11 |AB92685 |4.85148E+12|7.96651E+11 |10/23/2020 0:00 |10/23/2020 0:00 |89240 |Z524 |
|A1 |A11 |AB92685 |4.85148E+12|7.96651E+11 |10/23/2020 0:00 |10/23/2020 0:00 |89240 |Z524 |
|A2 |A12 |AB64081 |4.8515E+12 |7.96651E+11 |6/19/2020 0:00 |6/19/2020 0:00 |76499 |Z9884 |
|A2 |A12 |AB64081 |4.8515E+12 |7.96651E+11 |6/19/2020 0:00 |6/19/2020 0:00 |76499 |R109 |
|A2 |A12 |AB64081 |4.8515E+12 |7.96651E+11 |6/19/2020 0:00 |6/19/2020 0:00 |76499 |K219 |
|A2 |A12 |AB64081 |4.8515E+12 |7.96651E+11 |6/19/2020 0:00 |6/19/2020 0:00 |76499 |K449 |
df_step_2 = (
    # Replace the existing `SERVICE_PROCEDURE_CODE` column by casting it to an integer and
    # adding the generated row number (minus 1), partitioned by the desired columns and
    # ordered by the columns specified in the example
    df_step_1.withColumn(
        "SERVICE_PROCEDURE_CODE",
        F.col("SERVICE_PROCEDURE_CODE").cast("INT")
        + F.row_number().over(
            Window.partitionBy(desired_cols).orderBy("MEMBER_ID", "SERVICE_FROM_DATE", "SERVICE_THRU_DATE")
        )
        - 1
    )
)
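# e.g. the five diagnosis rows of member A1's first claim receive row numbers 1..5,
# so 89240 + 1 - 1 through 89240 + 5 - 1 produces the codes 89240..89244 shown below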
df_step_2.show(truncate=False)
Output:
+---------+-------------------+------------+-----------+--------------------+-----------------+-----------------+----------------------+--------------------+
|MEMBER_ID|MEMBER_ID_DEPENDENT|PROVIDER_KEY|REVENUE_KEY|PLACE_OF_SERVICE_KEY|SERVICE_FROM_DATE|SERVICE_THRU_DATE|SERVICE_PROCEDURE_CODE|TRANSPOSED_DIAGNOSIS|
+---------+-------------------+------------+-----------+--------------------+-----------------+-----------------+----------------------+--------------------+
|A1 |A11 |AB05547 |4.85148E+12|7.96651E+11 |9/23/2019 0:00 |9/23/2019 0:00 |89240 |Z0000 |
|A1 |A11 |AB05547 |4.85148E+12|7.96651E+11 |9/23/2019 0:00 |9/23/2019 0:00 |89241 |M25852 |
|A1 |A11 |AB05547 |4.85148E+12|7.96651E+11 |9/23/2019 0:00 |9/23/2019 0:00 |89242 |M25851 |
|A1 |A11 |AB05547 |4.85148E+12|7.96651E+11 |9/23/2019 0:00 |9/23/2019 0:00 |89243 |Z0000 |
|A1 |A11 |AB05547 |4.85148E+12|7.96651E+11 |9/23/2019 0:00 |9/23/2019 0:00 |89244 |M25551 |
|A1 |A11 |AB92685 |4.85148E+12|7.96651E+11 |10/23/2020 0:00 |10/23/2020 0:00 |89240 |Z524 |
|A1 |A11 |AB92685 |4.85148E+12|7.96651E+11 |10/23/2020 0:00 |10/23/2020 0:00 |89241 |Z524 |
|A2 |A12 |AB64081 |4.8515E+12 |7.96651E+11 |6/19/2020 0:00 |6/19/2020 0:00 |76499 |Z9884 |
|A2 |A12 |AB64081 |4.8515E+12 |7.96651E+11 |6/19/2020 0:00 |6/19/2020 0:00 |76500 |R109 |
|A2 |A12 |AB64081 |4.8515E+12 |7.96651E+11 |6/19/2020 0:00 |6/19/2020 0:00 |76501 |K219 |
|A2 |A12 |AB64081 |4.8515E+12 |7.96651E+11 |6/19/2020 0:00 |6/19/2020 0:00 |76502 |K449 |
|A2 |A12 |AB64081 |4.8515E+12 |7.96651E+11 |6/19/2020 0:00 |6/19/2020 0:00 |76503 |Z9884 |
|A3 |A13 |AB64081 |4.8515E+12 |7.96651E+11 |9/13/2019 0:00 |9/13/2019 0:00 |76499 |Z1231 |
|A3 |A13 |AB64081 |4.8515E+12 |7.96651E+11 |9/13/2019 0:00 |9/13/2019 0:00 |76500 |Z1231 |
|A4 |A14 |AB74417 |4.8515E+12 |7.96651E+11 |9/30/2019 0:00 |9/30/2019 0:00 |76499 |N210 |
|A4 |A14 |AB74417 |4.8515E+12 |7.96651E+11 |9/30/2019 0:00 |9/30/2019 0:00 |76500 |N400 |
|A4 |A14 |AB74417 |4.8515E+12 |7.96651E+11 |9/30/2019 0:00 |9/30/2019 0:00 |76501 |E782 |
|A4 |A14 |AB74417 |4.8515E+12 |7.96651E+11 |9/30/2019 0:00 |9/30/2019 0:00 |76502 |E119 |
|A4 |A14 |AB74417 |4.8515E+12 |7.96651E+11 |9/30/2019 0:00 |9/30/2019 0:00 |76503 |I10 |
|A4 |A14 |AB74417 |4.8515E+12 |7.96651E+11 |9/30/2019 0:00 |9/30/2019 0:00 |76504 |Z87891 |
+---------+-------------------+------------+-----------+--------------------+-----------------+-----------------+----------------------+--------------------+
only showing top 20 rows
This next approach may also be easier to read for some, since it uses a loop to build a union of the desired datasets.
NB: this may cause your generated service procedure codes to overlap (each CODEn column contributes a fixed offset of n, rather than a per-row counter).
Starting from the initial load of `df_raw`, you can try the following:
from pyspark.sql import functions as F
from pyspark.sql import Window
# cache the original df
df_raw.cache()
# extract service procedure code columns from `df_raw` by looking for the simple pattern 'CODE'.
# This filter can easily be modified for more complex code column names
service_procedure_cols = [col for col in df_raw.columns if 'CODE' in col and 'SERVICE' not in col]
# extract the desired column names in the dataframe
desired_cols = [col for col in df_raw.columns if 'CODE' not in col or 'SERVICE' in col]
# use a temp variable `df_combined` to store the final dataframe
df_combined = None
# for each of the service procedure columns
for col in service_procedure_cols:
    # extract the code number
    col_num = int(col.replace("CODE", ""))
    # combine the desired columns with this code column to get all desired columns for the diagnosis
    diagnosis_desired_columns = desired_cols + [col]
    # create a temporary df
    interim_df = (
        # select all desired columns
        df_raw.select(*diagnosis_desired_columns)
        # update the service procedure code with the extracted code number
        .withColumn(
            "SERVICE_PROCEDURE_CODE",
            F.col("SERVICE_PROCEDURE_CODE").cast("INT") + col_num
        )
        # rename the code column
        .withColumnRenamed(col, "TRANSPOSED_DIAGNOSIS")
        # filter null and empty values
        .where(F.col("TRANSPOSED_DIAGNOSIS").isNotNull() & (F.trim("TRANSPOSED_DIAGNOSIS") != ''))
        .dropDuplicates()
    )
    # if the initial combined df variable is empty, assign it `interim_df`;
    # otherwise perform a (positional) union and store the result
    if df_combined is None:
        df_combined = interim_df
    else:
        df_combined = df_combined.union(interim_df)

# only here for debugging purposes to show the results
df_combined.orderBy(desired_cols).show(truncate=False)
Output:
+---------+-------------------+------------+-----------+--------------------+-----------------+-----------------+----------------------+--------------------+
|MEMBER_ID|MEMBER_ID_DEPENDENT|PROVIDER_KEY|REVENUE_KEY|PLACE_OF_SERVICE_KEY|SERVICE_FROM_DATE|SERVICE_THRU_DATE|SERVICE_PROCEDURE_CODE|TRANSPOSED_DIAGNOSIS|
+---------+-------------------+------------+-----------+--------------------+-----------------+-----------------+----------------------+--------------------+
|A1 |A11 |AB05547 |4.85148E+12|7.96651E+11 |9/23/2019 0:00 |9/23/2019 0:00 |89241 |Z0000 |
|A1 |A11 |AB05547 |4.85148E+12|7.96651E+11 |9/23/2019 0:00 |9/23/2019 0:00 |89242 |M25852 |
|A1 |A11 |AB05547 |4.85148E+12|7.96651E+11 |9/23/2019 0:00 |9/23/2019 0:00 |89243 |M25851 |
|A1 |A11 |AB05547 |4.85148E+12|7.96651E+11 |9/23/2019 0:00 |9/23/2019 0:00 |89244 |Z0000 |
|A1 |A11 |AB05547 |4.85148E+12|7.96651E+11 |9/23/2019 0:00 |9/23/2019 0:00 |89245 |M25551 |
|A1 |A11 |AB92685 |4.85148E+12|7.96651E+11 |10/23/2020 0:00 |10/23/2020 0:00 |89241 |Z524 |
|A1 |A11 |AB92685 |4.85148E+12|7.96651E+11 |10/23/2020 0:00 |10/23/2020 0:00 |89242 |Z524 |
|A2 |A12 |AB64081 |4.8515E+12 |7.96651E+11 |6/19/2020 0:00 |6/19/2020 0:00 |76500 |Z9884 |
|A2 |A12 |AB64081 |4.8515E+12 |7.96651E+11 |6/19/2020 0:00 |6/19/2020 0:00 |76501 |R109 |
|A2 |A12 |AB64081 |4.8515E+12 |7.96651E+11 |6/19/2020 0:00 |6/19/2020 0:00 |76502 |K219 |
|A2 |A12 |AB64081 |4.8515E+12 |7.96651E+11 |6/19/2020 0:00 |6/19/2020 0:00 |76503 |K449 |
|A2 |A12 |AB64081 |4.8515E+12 |7.96651E+11 |6/19/2020 0:00 |6/19/2020 0:00 |76504 |Z9884 |
|A3 |A13 |AB64081 |4.8515E+12 |7.96651E+11 |9/13/2019 0:00 |9/13/2019 0:00 |76500 |Z1231 |
|A3 |A13 |AB64081 |4.8515E+12 |7.96651E+11 |9/13/2019 0:00 |9/13/2019 0:00 |76501 |Z1231 |
|A4 |A14 |AB74417 |4.8515E+12 |7.96651E+11 |9/30/2019 0:00 |9/30/2019 0:00 |76500 |N210 |
|A4 |A14 |AB74417 |4.8515E+12 |7.96651E+11 |9/30/2019 0:00 |9/30/2019 0:00 |76501 |N400 |
|A4 |A14 |AB74417 |4.8515E+12 |7.96651E+11 |9/30/2019 0:00 |9/30/2019 0:00 |76502 |E782 |
|A4 |A14 |AB74417 |4.8515E+12 |7.96651E+11 |9/30/2019 0:00 |9/30/2019 0:00 |76503 |E119 |
|A4 |A14 |AB74417 |4.8515E+12 |7.96651E+11 |9/30/2019 0:00 |9/30/2019 0:00 |76504 |I10 |
|A4 |A14 |AB74417 |4.8515E+12 |7.96651E+11 |9/30/2019 0:00 |9/30/2019 0:00 |76505 |Z87891 |
+---------+-------------------+------------+-----------+--------------------+-----------------+-----------------+----------------------+--------------------+
only showing top 20 rows
Combine the columns into an array, filter out the null values, then explode it.
import pyspark.sql.functions as f

# `df` here stands for the raw claims dataframe (`df_raw` in the question)
codes = list(filter(lambda c: c.startswith('CODE'), df.columns))
df.withColumn('TRANSPOSED_DIAGNOSIS', f.array(*map(lambda c: f.col(c), codes))) \
.drop(*codes) \
.withColumn('TRANSPOSED_DIAGNOSIS', f.expr('filter(TRANSPOSED_DIAGNOSIS, x -> x is not null)')) \
.withColumn('TRANSPOSED_DIAGNOSIS', f.explode('TRANSPOSED_DIAGNOSIS')) \
.show(30, truncate=False)
+---------+-------------------+------------+-----------+--------------------+-----------------+-----------------+----------------------+--------------------+
|MEMBER_ID|MEMBER_ID_DEPENDENT|PROVIDER_KEY|REVENUE_KEY|PLACE_OF_SERVICE_KEY|SERVICE_FROM_DATE|SERVICE_THRU_DATE|SERVICE_PROCEDURE_CODE|TRANSPOSED_DIAGNOSIS|
+---------+-------------------+------------+-----------+--------------------+-----------------+-----------------+----------------------+--------------------+
|A1 |A11 |AB05547 |4.85148E+12|7.96651E+11 |9/23/2019 0:00 |9/23/2019 0:00 |89240 |Z0000 |
|A1 |A11 |AB05547 |4.85148E+12|7.96651E+11 |9/23/2019 0:00 |9/23/2019 0:00 |89240 |M25852 |
|A1 |A11 |AB05547 |4.85148E+12|7.96651E+11 |9/23/2019 0:00 |9/23/2019 0:00 |89240 |M25851 |
|A1 |A11 |AB05547 |4.85148E+12|7.96651E+11 |9/23/2019 0:00 |9/23/2019 0:00 |89240 |Z0000 |
|A1 |A11 |AB05547 |4.85148E+12|7.96651E+11 |9/23/2019 0:00 |9/23/2019 0:00 |89240 |M25551 |
|A1 |A11 |AB92685 |4.85148E+12|7.96651E+11 |10/23/2020 0:00 |10/23/2020 0:00 |89240 |Z524 |
|A1 |A11 |AB92685 |4.85148E+12|7.96651E+11 |10/23/2020 0:00 |10/23/2020 0:00 |89240 |Z524 |
|A2 |A12 |AB64081 |4.8515E+12 |7.96651E+11 |6/19/2020 0:00 |6/19/2020 0:00 |76499 |Z9884 |
|A2 |A12 |AB64081 |4.8515E+12 |7.96651E+11 |6/19/2020 0:00 |6/19/2020 0:00 |76499 |R109 |
|A2 |A12 |AB64081 |4.8515E+12 |7.96651E+11 |6/19/2020 0:00 |6/19/2020 0:00 |76499 |K219 |
|A2 |A12 |AB64081 |4.8515E+12 |7.96651E+11 |6/19/2020 0:00 |6/19/2020 0:00 |76499 |K449 |
|A2 |A12 |AB64081 |4.8515E+12 |7.96651E+11 |6/19/2020 0:00 |6/19/2020 0:00 |76499 |Z9884 |
|A3 |A13 |AB64081 |4.8515E+12 |7.96651E+11 |9/13/2019 0:00 |9/13/2019 0:00 |76499 |Z1231 |
|A3 |A13 |AB64081 |4.8515E+12 |7.96651E+11 |9/13/2019 0:00 |9/13/2019 0:00 |76499 |Z1231 |
|A4 |A14 |AB74417 |4.8515E+12 |7.96651E+11 |9/30/2019 0:00 |9/30/2019 0:00 |76499 |N210 |
|A4 |A14 |AB74417 |4.8515E+12 |7.96651E+11 |9/30/2019 0:00 |9/30/2019 0:00 |76499 |N400 |
|A4 |A14 |AB74417 |4.8515E+12 |7.96651E+11 |9/30/2019 0:00 |9/30/2019 0:00 |76499 |E782 |
|A4 |A14 |AB74417 |4.8515E+12 |7.96651E+11 |9/30/2019 0:00 |9/30/2019 0:00 |76499 |E119 |
|A4 |A14 |AB74417 |4.8515E+12 |7.96651E+11 |9/30/2019 0:00 |9/30/2019 0:00 |76499 |I10 |
|A4 |A14 |AB74417 |4.8515E+12 |7.96651E+11 |9/30/2019 0:00 |9/30/2019 0:00 |76499 |Z87891 |
|A4 |A14 |AB74417 |4.8515E+12 |7.96651E+11 |9/30/2019 0:00 |9/30/2019 0:00 |76499 |N210 |
+---------+-------------------+------------+-----------+--------------------+-----------------+-----------------+----------------------+--------------------+
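Note that this keeps the original SERVICE_PROCEDURE_CODE on every exploded row (all 76499 for member A4 above). If you also want the incrementing procedure code from the expected output, a sketch with `posexplode`, which returns the array position alongside each value, could look like the following (same `df` and column assumptions as above; the position arithmetic mirrors the first answer):

import pyspark.sql.functions as f

codes = list(filter(lambda c: c.startswith('CODE'), df.columns))
# build the non-null diagnosis array, then explode it together with its position
df.withColumn('DIAGS', f.expr("filter(array(" + ",".join(codes) + "), x -> x is not null)")) \
    .drop(*codes) \
    .select('*', f.posexplode('DIAGS').alias('POS', 'TRANSPOSED_DIAGNOSIS')) \
    .withColumn('SERVICE_PROCEDURE_CODE', f.col('SERVICE_PROCEDURE_CODE').cast('int') + f.col('POS')) \
    .drop('DIAGS', 'POS') \
    .show(30, truncate=False)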