
For loop to create a dataframe using pandas read_sql in python

I have a SQLite database named Europe.db. I want to import and filter its tables, saving the results as several pandas dataframes. The current code works, but I'm sure it could somehow be simplified with a for loop.

Current code:

import pandas as pd

company = "THEP.PA"
database = "sqlite:///Europe.db"

sqlite_table = f"SELECT * FROM balance_sheet WHERE symbol='{company}'"
bs_df = pd.read_sql(sqlite_table, database)

sqlite_table = f"SELECT * FROM cashflow_statement WHERE symbol='{company}'"
cf_df = pd.read_sql(sqlite_table, database)

sqlite_table = f"SELECT * FROM income_statement WHERE symbol='{company}'"
is_df = pd.read_sql(sqlite_table, database)

sqlite_table = f"SELECT * FROM key_executives WHERE company='{company}'"
key_executives_df = pd.read_sql(sqlite_table, database)

sqlite_table = f"SELECT * FROM key_metrics WHERE symbol='{company}'"
metrics_df = pd.read_sql(sqlite_table, database)

A dict is great for this:

company = "THEP.PA"
database = "sqlite:///Europe.db"

tables = {
    'balance_sheet': 'symbol',
    'cashflow_statement': 'symbol',
    'income_statement': 'symbol',
    'key_executives': 'company',   # this table filters on company, not symbol
    'key_metrics': 'symbol',
}

for table_name, column in tables.items():
    query = f"SELECT * FROM {table_name} WHERE {column}='{company}'"
    tables[table_name] = pd.read_sql(query, database)

Now, to get at one of those dataframes, use e.g. tables['balance_sheet'] where you would previously have used bs_df.
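One caution on top of either answer: building the SQL string with an f-string is open to SQL injection if the company value ever comes from outside. The table name still has to be interpolated (identifiers cannot be bound), but the value can be passed through pandas' params argument instead. A minimal sketch, using a hypothetical in-memory database standing in for Europe.db:

```python
import sqlite3
import pandas as pd

# Hypothetical in-memory stand-in for Europe.db with one sample row
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE balance_sheet (symbol TEXT, total_assets REAL)")
con.execute("INSERT INTO balance_sheet VALUES ('THEP.PA', 123.0)")

company = "THEP.PA"
table_name = "balance_sheet"

# The table name must be interpolated, but the value is bound with `?`,
# so it is never spliced into the SQL text itself.
query = f"SELECT * FROM {table_name} WHERE symbol = ?"
df = pd.read_sql(query, con, params=(company,))
```

With the SQLAlchemy-style URL string from the question, the placeholder syntax depends on the driver, so the plain sqlite3 connection is used here for the sketch.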

A dict would be the better solution, but another way could be:

table_names = [
    "balance_sheet",
    "cashflow_statement",
    ..
    ..
]
tables = []

for table_name in table_names:
    query = f"SELECT * FROM {table_name} WHERE symbol='{company}'"
    tables.append(pd.read_sql(query, database))
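With the list version, the link between a table name and its dataframe is only positional. If you later need to look frames up by name, the two lists can be zipped back into a dict; a minimal sketch (the strings below are stand-ins for the dataframes built in the loop):

```python
table_names = ["balance_sheet", "cashflow_statement", "income_statement"]
tables = ["bs", "cf", "inc"]  # stand-ins for the dataframes from the loop

# zip pairs each name with the frame loaded at the same position
dfs_by_name = dict(zip(table_names, tables))
```

This effectively rebuilds the dict from the first answer after the fact.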
