
What changes do I need to make to run this Python code on Databricks?

Hello, this is Python code I developed on my local machine, and now I am trying to run it on Databricks. I am new to Databricks, so I don't know how to do it.

What I am trying to do: I have a sample of a huge JSON file, and I am splitting it into two parts, one file containing the header fields and a second file containing all the details.

Here is my local-machine Python code.

import json
import itertools


# Load the whole sample JSON file into a dictionary
with open('new_test.json', 'r') as fp:
    data = json.loads(fp.read())


# First 8 top-level key/value pairs go into the header file
d1 = dict(itertools.islice(data.items(), 8))
print(d1)
# The remaining key/value pairs go into the detail file
d2 = dict(itertools.islice(data.items(), 8, len(data.items())))
print(d2)

with open("new_test_header.json", "w") as header_file:
    json.dump(d1, header_file)
with open("new_test_detail.json", "w") as detail_file:
    json.dump(d2, detail_file)

Here is the JSON file.

{
  "reporting_entity_name": "launcher",
  "reporting_entity_type": "launcher",
  "plan_name": "launched",
  "plan_id_type": "hios",
  "plan_id": "1111111111",
  "plan_market_type": "individual",
  "last_updated_on": "2020-08-27",
  "version": "1.0.0",
  "in_network": [
    {
      "negotiation_arrangement": "ffs",
      "name": "Boosters",
      "billing_code_type": "CPT",
      "billing_code_type_version": "2020",
      "billing_code": "27447",
      "description": "Boosters On Demand",
      "negotiated_rates": [
        {
          "provider_groups": [
            {
              "npi": [
                0
              ],
              "tin": {
                "type": "ein",
                "value": "11-1111111"
              }
            }
          ],
          "negotiated_prices": [
            {
              "negotiated_type": "negotiated",
              "negotiated_rate": 123.45,
              "expiration_date": "2022-01-01",
              "billing_class": "organizational"
            }
          ]
        }
      ]
    }
  ]
}
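
For reference, with this sample the local script puts the first eight top-level keys into the header file and the single remaining key, in_network, into the detail file (json.loads preserves the file's key order):

print(list(d1.keys()))
# ['reporting_entity_name', 'reporting_entity_type', 'plan_name', 'plan_id_type',
#  'plan_id', 'plan_market_type', 'last_updated_on', 'version']
print(list(d2.keys()))
# ['in_network']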

Here is what I am trying to write in Databricks:

import json
import itertools
from pyspark.sql.functions import explode, col

df_json = spark.read.option("multiline","true").json("/mnt/BigData_JSONFiles/SampleDatafilefrombigfile.json")
display(df_json)

d1 = dict(itertools.islice(df_json.items(), 4))
d2 = dict(itertools.islice(df_json.items(), 4, len(df_json.items())))

# I am unable to write the WRITE function.

Any help or guidance will be very helpful.

Here is a snippet example. A Spark DataFrame does not have an .items() method, so the snippet first converts the dataframe to a plain dictionary before splitting it:

import itertools

# Read the (multi-line) JSON file from Databricks storage
df_json = spark.read.option("multiline", "true").json("/mnt/BigData_JSONFiles/new_test.json")

# Convert the dataframe to a dictionary
data = df_json.toPandas().to_dict()

# Split the data into two parts
d1 = dict(itertools.islice(data.items(), 8))
d2 = dict(itertools.islice(data.items(), 8, len(data.items())))

# Convert the first part of the data back to a dataframe
df1 = spark.createDataFrame([d1])

# Write the first part of the data to Databricks storage
# (save() creates a directory of JSON part files at this path)
df1.write.format("json").save("/mnt/BigData_JSONFiles/new_test_header.json")

# Convert the second part of the data back to a dataframe
df2 = spark.createDataFrame([d2])

# Write the second part of the data to a JSON file in Databricks storage
df2.write.format("json").save("/mnt/BigData_JSONFiles/new_test_detail.json")
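
One thing to watch with this approach: when Spark infers the schema of a JSON file, the resulting columns may not keep the original key order (they are typically alphabetical), so an islice(..., 8) split over the converted dictionary may not pick the same eight header keys as the local script. If the sample file is small enough to process on the driver, a simpler sketch that keeps the original splitting logic unchanged is to read and write through the /dbfs/ local path (assuming your cluster exposes the DBFS FUSE mount; the paths below reuse the mount point from the snippet above):

import json
import itertools

# DBFS-mounted storage is reachable from driver-side Python via the /dbfs/ prefix
src_path = "/dbfs/mnt/BigData_JSONFiles/new_test.json"

with open(src_path, "r") as fp:
    data = json.load(fp)  # plain Python parse, key order preserved

# Same split as the local script: first 8 top-level keys -> header, rest -> detail
d1 = dict(itertools.islice(data.items(), 8))
d2 = dict(itertools.islice(data.items(), 8, len(data)))

with open("/dbfs/mnt/BigData_JSONFiles/new_test_header.json", "w") as header_file:
    json.dump(d1, header_file)
with open("/dbfs/mnt/BigData_JSONFiles/new_test_detail.json", "w") as detail_file:
    json.dump(d2, detail_file)

This writes single JSON files instead of the part-file directories that df.write...save() produces, but everything goes through driver memory, so for the real, huge file the Spark route is the safer choice.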
