
Do I need to add an ODBC Connection to Azure Databricks? If so, how?

I am trying to run a simple df.to_sql() process and I'm getting the following error:

Can't open lib 'Simba Spark ODBC Driver' : file not found (0) (SQLDriverConnect)

Here is my code.

import pandas as pd
from sqlalchemy import create_engine
import urllib
import pyodbc

params = urllib.parse.quote_plus("DRIVER={SQL Server Native Client 11.0};SERVER=server_name.database.windows.net;DATABASE=my_db;UID=my_id;PWD=my_pw")
myeng = create_engine("mssql+pyodbc:///?odbc_connect=%s" % params)


df.to_sql(name="dbo.my_table", con=myeng, if_exists='append', index=False)

If I look in Cluster > Configuration > Advanced Options > JDBC/ODBC, I don't see any reference to ODBC. I see some JDBC stuff, and that's it. I'm not completely sure how to proceed here. If anyone can offer some guidance as to how to make this work, I would really appreciate it.

To connect to Azure Databricks over ODBC, you need to install the Databricks (Simba Spark) ODBC driver on the machine where the code runs and point your connection at it.

The Azure documentation article on connecting to Azure Databricks from Microsoft Excel, Python, or R walks through this: you install the Databricks ODBC driver, create a DSN using the cluster's server hostname, port, and HTTP path (the values under Cluster > Configuration > Advanced Options > JDBC/ODBC) together with a personal access token, and then connect from the Excel, Python, or R client. Once the connection is established, you can access the data in Azure Databricks from those clients and analyze it further.
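As a sketch of what the Python side can look like once the driver is installed — the host name, HTTP path, and token below are placeholders, not real values; take the actual ones from the cluster's JDBC/ODBC tab and a personal access token generated in the workspace:

```python
def databricks_odbc_conn_str(host, http_path, token):
    """Build a DSN-less connection string for the Simba Spark ODBC Driver.

    host and http_path come from Cluster > Configuration > Advanced Options
    > JDBC/ODBC; token is a Databricks personal access token.
    """
    return (
        "DRIVER={Simba Spark ODBC Driver};"
        f"HOST={host};"
        "PORT=443;"
        "SSL=1;"
        "THRIFTTRANSPORT=2;"   # HTTP transport
        "AUTHMECH=3;"          # username/password, where UID is the literal "token"
        f"HTTPPATH={http_path};"
        "UID=token;"
        f"PWD={token}"
    )

# Then connect with pyodbc (placeholder values shown):
# import pyodbc
# conn = pyodbc.connect(
#     databricks_odbc_conn_str(
#         "adb-1234567890123456.7.azuredatabricks.net",
#         "sql/protocolv1/o/1234567890123456/0123-456789-abcde123",
#         "dapiXXXXXXXXXXXXXXXX",
#     ),
#     autocommit=True,
# )
```

The driver name in the DRIVER= clause must match the name under which the Simba Spark driver is registered with the ODBC manager on that machine — the "file not found" error in the question is what pyodbc reports when no driver by that name is installed.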

Hope this helps.
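One more note on the question's own snippet: if the target of df.to_sql() is an Azure SQL database rather than Databricks, the connection string has to name a SQL Server driver that is actually installed where the code runs. A sketch with two assumed fixes (untested here, since it needs a live database): "ODBC Driver 17 for SQL Server" is assumed to be installed in place of the Native Client, and the schema is passed to to_sql() separately instead of embedding "dbo." in the table name:

```python
import urllib.parse

# URL-encode the ODBC connection string so it can be embedded in a
# SQLAlchemy URL; the server, database, and credentials are placeholders.
params = urllib.parse.quote_plus(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=server_name.database.windows.net;"
    "DATABASE=my_db;UID=my_id;PWD=my_pw"
)

# With a live database and pyodbc installed, the write would then be:
# from sqlalchemy import create_engine
# myeng = create_engine("mssql+pyodbc:///?odbc_connect=%s" % params)
# df.to_sql(name="my_table", schema="dbo", con=myeng,
#           if_exists="append", index=False)
```

Passing schema="dbo" matters because pandas treats name="dbo.my_table" as a literal table name, not as schema plus table.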
