
How does Spark handle failure scenarios involving the JDBC data source?

I'm writing a data source that shares similarities with Spark's JDBC data source implementation, and I'd like to ask how Spark handles certain failure scenarios. To my understanding, if an executor dies while it's running a task, Spark will revive the executor and try to re-run that task. However, how does this play out in the context of data integrity and Spark's JDBC data source API (e.g. df.write.format("jdbc").option(...).save())?

In the savePartition function of JdbcUtils.scala, we see Spark calling the commit and rollback functions of the java.sql.Connection object created from the database URL/credentials provided by the user (see below). But if an executor dies right after commit() finishes or before rollback() is called, does Spark try to re-run the task and write the same data partition again, essentially creating duplicate committed rows in the database? And what happens if the executor dies in the middle of calling commit() or rollback()?

try {
  ...
  if (supportsTransactions) {
    conn.commit()
  }
  committed = true
  Iterator.empty
} catch {
  case e: SQLException =>
    ...
    throw e
} finally {
  if (!committed) {
    // The stage must fail.  We got here through an exception path, so
    // let the exception through unless rollback() or close() want to
    // tell the user about another problem.
    if (supportsTransactions) {
      conn.rollback()
    }
    conn.close()
  } else {
    ...
  }
}

I had to introduce some de-duplication logic exactly for the reasons described. You might indeed end up with the same partition committed twice (or more).
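For reference, here is a minimal sketch of one such de-duplication approach, not necessarily the one you should use: it assumes a PostgreSQL target table with a unique key on row_id and a DataFrame df carrying that column, so the table/column names, the credentials and the ON CONFLICT clause are purely illustrative.

import java.sql.DriverManager

import org.apache.spark.sql.DataFrame

// Idempotent partition write: rows already committed by a previous task attempt
// are skipped thanks to the unique key, so a Spark retry does not create duplicates.
// All names below (target, row_id, payload) are placeholders.
def writeIdempotent(df: DataFrame, url: String, user: String, password: String): Unit = {
  df.rdd.foreachPartition { rows =>
    val conn = DriverManager.getConnection(url, user, password)
    conn.setAutoCommit(false)
    val stmt = conn.prepareStatement(
      "INSERT INTO target (row_id, payload) VALUES (?, ?) ON CONFLICT (row_id) DO NOTHING")
    try {
      rows.foreach { row =>
        stmt.setLong(1, row.getAs[Long]("row_id"))
        stmt.setString(2, row.getAs[String]("payload"))
        stmt.addBatch()
      }
      stmt.executeBatch()
      conn.commit()
    } catch {
      case e: Exception =>
        conn.rollback()
        throw e
    } finally {
      conn.close()
    }
  }
}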

But if an executor dies right after commit() finishes or before rollback() is called, does Spark try to re-run the task and write the same data partition again, essentially creating duplicate committed rows in the database?

What else would you expect? Spark SQL (a high-level API over the RDD API) does not really know much about the peculiarities of JDBC or any other protocol, and neither does the underlying execution runtime, i.e. Spark Core.

When you write a structured query like df.write.format("jdbc").option(...).save(), Spark SQL translates it into a distributed computation using the low-level, assembly-like RDD API. Since it tries to embrace as many "protocols" as possible (including JDBC), Spark SQL's DataSource API leaves much of the error handling to the data source itself.

Spark Core, the part that schedules tasks (and neither knows nor cares what the tasks do), simply watches the execution and, if a task fails, attempts to run it again, up to spark.task.maxFailures failures per task, which is 4 by default (i.e. 3 retries after the first failure).
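For instance, if you wanted a task failure to fail the job immediately instead of being retried, you could tighten that setting when building the session (a configuration sketch; the application name is just a placeholder):

import org.apache.spark.sql.SparkSession

// spark.task.maxFailures controls how many times a single task may fail before
// the whole job is aborted (cluster mode); 1 disables task retries entirely.
val spark = SparkSession.builder()
  .appName("jdbc-writer")
  .config("spark.task.maxFailures", "1")
  .getOrCreate()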

So when you write a custom data source, you know the drill: you have to deal with such retries in your own code.

One way to handle errors is to register a task listener using TaskContext (e.g. addTaskCompletionListener or addTaskFailureListener).
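A hedged sketch of what that can look like inside a partition-level write (assuming df is the DataFrame being written; the println calls are placeholders for whatever cleanup or de-duplication logic you need):

import org.apache.spark.TaskContext

df.rdd.foreachPartition { rows =>
  val ctx = TaskContext.get()

  // attemptNumber() > 0 means this partition is being re-run after a failure,
  // which is exactly the situation that can produce duplicate writes.
  val isRetry = ctx.attemptNumber() > 0

  ctx.addTaskFailureListener { (_, error) =>
    // Placeholder: log or clean up a half-written batch here.
    println(s"Partition ${ctx.partitionId()} failed: ${error.getMessage}")
  }

  ctx.addTaskCompletionListener[Unit] { _ =>
    // Runs on success and failure alike; a good place to close connections.
    println(s"Partition ${ctx.partitionId()} finished (retry = $isRetry)")
  }

  // ... perform the JDBC writes for this partition, guarding against isRetry ...
}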
