What is the difference between synchronized(this) and synchronized method
Difference between synchronized method and synchronized block
I am trying to understand the difference between a synchronized block and a synchronized method through an example. Consider the following simple class:
public class Main {

    private static final Object lock = new Object();
    private static long l;

    public static void main(String[] args) {
    }

    public static void action() {
        synchronized (lock) {
            l = (l + 1) * 2;
            System.out.println(l);
        }
    }
}
The compiled Main::action() looks like this:
public static void action();
Code:
0: getstatic #2 // Field lock:Ljava/lang/Object;
3: dup
4: astore_0
5: monitorenter // <---- ENTERING
6: getstatic #3 // Field l:J
9: lconst_1
10: ladd
11: ldc2_w #4 // long 2l
14: lmul
15: putstatic #3 // Field l:J
18: getstatic #6 // Field java/lang/System.out:Ljava/io/PrintStream;
21: getstatic #3 // Field l:J
24: invokevirtual #7 // Method java/io/PrintStream.println:(J)V
27: aload_0
28: monitorexit // <---- EXITING
29: goto 37
32: astore_1
33: aload_0
34: monitorexit // <---- EXITING TWICE????
35: aload_1
36: athrow
37: return
I think we are better off using a synchronized block instead of a synchronized method, because it provides more encapsulation and prevents clients from affecting the synchronization policy (with a synchronized method, any client can acquire the lock and thereby interfere with the synchronization policy). From a performance standpoint, though, the two seemed almost the same to me. Now consider the synchronized-method version:
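The encapsulation point can be demonstrated with a small sketch (class and method names are my own, hypothetical): because a static synchronized method locks the publicly reachable Class object, any client can grab that monitor and stall the method.

```java
import java.util.concurrent.CountDownLatch;

public class LockHijackDemo {
    static final CountDownLatch held = new CountDownLatch(1);

    // with a static synchronized method, the monitor is the public Class object
    static synchronized void action() { }

    // returns how long action() had to wait because a "client" held the Class monitor
    static long hijackAndMeasure() throws InterruptedException {
        Thread hijacker = new Thread(() -> {
            synchronized (LockHijackDemo.class) { // any client code can take this lock
                held.countDown();                 // signal: the monitor is now held
                try { Thread.sleep(200); } catch (InterruptedException ignored) { }
            }
        });
        hijacker.start();
        held.await();                             // wait until the client owns the monitor
        long t0 = System.nanoTime();
        action();                                 // must wait for the client to release it
        long waitedMs = (System.nanoTime() - t0) / 1_000_000;
        hijacker.join();
        return waitedMs;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("action() waited ~" + hijackAndMeasure() + " ms");
    }
}
```

With a private lock object, as in the question's block version, no outside code can reach the monitor, so this kind of interference is impossible.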
public static synchronized void action() {
    l = (l + 1) * 2;
    System.out.println(l);
}
public static synchronized void action();
Code:
0: getstatic #2 // Field l:J
3: lconst_1
4: ladd
5: ldc2_w #3 // long 2l
8: lmul
9: putstatic #2 // Field l:J
12: getstatic #5 // Field java/lang/System.out:Ljava/io/PrintStream;
15: getstatic #2 // Field l:J
18: invokevirtual #6 // Method java/io/PrintStream.println:(J)V
21: return
So the synchronized-method version executes far fewer instructions, which suggests it should be faster.

Question: is a synchronized method faster than a synchronized block?
A quick test using the Java code posted at the bottom of this answer showed the synchronized method to be faster. Running the code on a Windows JVM on an i7 gave the following averages:

synchronized block: 0.004254 s
synchronized method: 0.001056 s

This would mean that the synchronized method is actually faster, in line with your byte-code assessment.
What puzzled me, however, was the clear difference between the two times. I had assumed the JVM would still take a lock on the underlying object for a synchronized method, and that the difference in time would be negligible, but that was not the result. Since the Oracle JVM is closed source, I looked at the OpenJDK HotSpot JVM source and dug into the bytecode interpreter that handles synchronized methods and blocks. To reiterate, the JVM code below is from OpenJDK, but I assume the official JVM handles this case in a similar fashion.
When the .class file is built, a synchronized method is marked with an access flag that tells the JVM the method is synchronized (just as flags are recorded when a method is static/public/final/varargs, etc.), and the underlying JVM code sets a flag to this effect on the method structure.
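That flag (ACC_SYNCHRONIZED in the class-file format) is visible from Java via reflection, alongside the other modifier bits. A minimal check, with a hypothetical class name:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

public class FlagDemo {
    static synchronized void action() { }

    public static void main(String[] args) throws NoSuchMethodException {
        Method m = FlagDemo.class.getDeclaredMethod("action");
        // the synchronized flag is stored next to static/public/final, as described above
        System.out.println(Modifier.isSynchronized(m.getModifiers())); // true
    }
}
```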
When the bytecode interpreter hits a method-invocation byte code, the following code runs before the method is invoked, checking whether it needs to be locked:
case method_entry: {
  /* CODE_EDIT: irrelevant code removed for brevity's sake */

  // lock method if synchronized
  if (METHOD->is_synchronized()) {
    // oop rcvr = locals[0].j.r;
    oop rcvr;
    if (METHOD->is_static()) {
      rcvr = METHOD->constants()->pool_holder()->java_mirror();
    } else {
      rcvr = LOCALS_OBJECT(0);
      VERIFY_OOP(rcvr);
    }
    // The initial monitor is ours for the taking
    BasicObjectLock* mon = &istate->monitor_base()[-1];
    oop monobj = mon->obj();
    assert(mon->obj() == rcvr, "method monitor mis-initialized");
    bool success = UseBiasedLocking;
    if (UseBiasedLocking) {
      /* CODE_EDIT: this code is only run if you have biased locking enabled as a JVM option */
    }
    if (!success) {
      markOop displaced = rcvr->mark()->set_unlocked();
      mon->lock()->set_displaced_header(displaced);
      if (Atomic::cmpxchg_ptr(mon, rcvr->mark_addr(), displaced) != displaced) {
        // Is it simple recursive case?
        if (THREAD->is_lock_owned((address) displaced->clear_lock_bits())) {
          mon->lock()->set_displaced_header(NULL);
        } else {
          CALL_VM(InterpreterRuntime::monitorenter(THREAD, mon), handle_exception);
        }
      }
    }
  }

  /* CODE_EDIT: irrelevant code removed for brevity's sake */
  goto run;
}
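Note the receiver chosen above for a static method: the class's java_mirror(), i.e. the java.lang.Class object. This is observable from Java, in a sketch with a hypothetical class name:

```java
public class MirrorLockDemo {
    // for a static method, the rcvr locked above is MirrorLockDemo.class itself
    static synchronized boolean holdsClassMonitor() {
        return Thread.holdsLock(MirrorLockDemo.class);
    }

    public static void main(String[] args) {
        System.out.println(holdsClassMonitor()); // true: the Class object's monitor is held
    }
}
```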
Then, when the method completes and returns to the JVM function handler, the following code is called to unlock it (note that the boolean method_unlock_needed is set before the method is invoked, via bool method_unlock_needed = METHOD->is_synchronized()):
if (method_unlock_needed) {
  if (base->obj() == NULL) {
    /* CODE_EDIT: irrelevant code removed for brevity's sake */
  } else {
    oop rcvr = base->obj();
    if (rcvr == NULL) {
      if (!suppress_error) {
        VM_JAVA_ERROR_NO_JUMP(vmSymbols::java_lang_NullPointerException(), "");
        illegal_state_oop = THREAD->pending_exception();
        THREAD->clear_pending_exception();
      }
    } else {
      BasicLock* lock = base->lock();
      markOop header = lock->displaced_header();
      base->set_obj(NULL);
      // If it isn't recursive we either must swap old header or call the runtime
      if (header != NULL) {
        if (Atomic::cmpxchg_ptr(header, rcvr->mark_addr(), lock) != lock) {
          // restore object for the slow case
          base->set_obj(rcvr);
          {
            // Prevent any HandleMarkCleaner from freeing our live handles
            HandleMark __hm(THREAD);
            CALL_VM_NOCHECK(InterpreterRuntime::monitorexit(THREAD, base));
          }
          if (THREAD->has_pending_exception()) {
            if (!suppress_error) illegal_state_oop = THREAD->pending_exception();
            THREAD->clear_pending_exception();
          }
        }
      }
    }
  }
}
The statements CALL_VM(InterpreterRuntime::monitorenter(THREAD, mon), handle_exception); and CALL_VM_NOCHECK(InterpreterRuntime::monitorexit(THREAD, base)); — and more specifically the functions InterpreterRuntime::monitorenter and InterpreterRuntime::monitorexit — are the code the JVM invokes to lock/unlock the underlying object for both synchronized methods and synchronized blocks. The run label in the code is the massive bytecode-interpreter switch statement that handles the different byte codes being parsed.
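This method-exit unlock path also runs when the method leaves via an exception, which is why a synchronized method never leaks its monitor. A small sketch (hypothetical names) that would hang on the join if the lock were not released:

```java
public class ExceptionUnlockDemo {
    static synchronized void boom() { throw new RuntimeException("boom"); }

    // verifies the Class monitor was released even though boom() exited via an exception
    static boolean lockFreeAfterThrow() throws InterruptedException {
        try { boom(); } catch (RuntimeException expected) { }
        final boolean[] acquired = { false };
        Thread t = new Thread(() -> {
            synchronized (ExceptionUnlockDemo.class) { acquired[0] = true; }
        });
        t.start();
        t.join(); // would block forever if the monitor were still held
        return acquired[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(lockFreeAfterThrow()); // true
    }
}
```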
From here, if a synchronized-block opcode (a monitorenter or monitorexit byte code) is encountered, the following case statements are run (for monitorenter and monitorexit respectively):
CASE(_monitorenter): {
  oop lockee = STACK_OBJECT(-1);
  // derefing's lockee ought to provoke implicit null check
  CHECK_NULL(lockee);
  // find a free monitor or one already allocated for this object
  // if we find a matching object then we need a new monitor
  // since this is recursive enter
  BasicObjectLock* limit = istate->monitor_base();
  BasicObjectLock* most_recent = (BasicObjectLock*) istate->stack_base();
  BasicObjectLock* entry = NULL;
  while (most_recent != limit) {
    if (most_recent->obj() == NULL) entry = most_recent;
    else if (most_recent->obj() == lockee) break;
    most_recent++;
  }
  if (entry != NULL) {
    entry->set_obj(lockee);
    markOop displaced = lockee->mark()->set_unlocked();
    entry->lock()->set_displaced_header(displaced);
    if (Atomic::cmpxchg_ptr(entry, lockee->mark_addr(), displaced) != displaced) {
      // Is it simple recursive case?
      if (THREAD->is_lock_owned((address) displaced->clear_lock_bits())) {
        entry->lock()->set_displaced_header(NULL);
      } else {
        CALL_VM(InterpreterRuntime::monitorenter(THREAD, entry), handle_exception);
      }
    }
    UPDATE_PC_AND_TOS_AND_CONTINUE(1, -1);
  } else {
    istate->set_msg(more_monitors);
    UPDATE_PC_AND_RETURN(0); // Re-execute
  }
}

CASE(_monitorexit): {
  oop lockee = STACK_OBJECT(-1);
  CHECK_NULL(lockee);
  // derefing's lockee ought to provoke implicit null check
  // find our monitor slot
  BasicObjectLock* limit = istate->monitor_base();
  BasicObjectLock* most_recent = (BasicObjectLock*) istate->stack_base();
  while (most_recent != limit) {
    if ((most_recent)->obj() == lockee) {
      BasicLock* lock = most_recent->lock();
      markOop header = lock->displaced_header();
      most_recent->set_obj(NULL);
      // If it isn't recursive we either must swap old header or call the runtime
      if (header != NULL) {
        if (Atomic::cmpxchg_ptr(header, lockee->mark_addr(), lock) != lock) {
          // restore object for the slow case
          most_recent->set_obj(lockee);
          CALL_VM(InterpreterRuntime::monitorexit(THREAD, most_recent), handle_exception);
        }
      }
      UPDATE_PC_AND_TOS_AND_CONTINUE(1, -1);
    }
    most_recent++;
  }
  // Need to throw illegal monitor state exception
  CALL_VM(InterpreterRuntime::throw_illegal_monitor_state_exception(THREAD), handle_exception);
  ShouldNotReachHere();
}
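The "simple recursive case" branch above (a displaced header of NULL marking a recursive entry) is what makes Java monitors reentrant. A sketch of the observable behaviour, with hypothetical names:

```java
public class ReentrantDemo {
    private static final Object lock = new Object();

    static int depthTwo() {
        synchronized (lock) {
            synchronized (lock) { // recursive enter: the owning thread does not block
                return Thread.holdsLock(lock) ? 2 : 0;
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(depthTwo()); // 2
    }
}
```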
Again, the same InterpreterRuntime::monitorenter and InterpreterRuntime::monitorexit functions are called to lock the underlying object, but with more overhead along the way, which explains why the timings differ between a synchronized method and a synchronized block.
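Since both paths end in the same InterpreterRuntime calls, a static synchronized method and a block synchronized on the Class object are observably interchangeable; they acquire the same monitor. A minimal sketch (names are mine):

```java
public class EquivalenceDemo {
    static long l;

    // these two acquire the same monitor: EquivalenceDemo.class
    static synchronized void viaMethod() { l++; }

    static void viaBlock() {
        synchronized (EquivalenceDemo.class) { l++; }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> { for (int i = 0; i < 100_000; i++) viaMethod(); });
        t.start();
        for (int i = 0; i < 100_000; i++) viaBlock();
        t.join();
        System.out.println(l); // 200000: all increments were serialized on one lock
    }
}
```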
Obviously, synchronized methods and synchronized blocks each have their pros and cons, but the question asked which is faster, and based on this preliminary test and the OpenJDK source, it does look as though a synchronized method (on its own) is faster than a synchronized block (on its own). Your results may vary (especially as the code grows more complex), so if performance is a concern it is best to run your own tests and measure what is meaningful for your case.
Here is the relevant Java test code:
public class Main {

    public static final Object lock = new Object();
    private static long l = 0;

    public static void SyncLock() {
        synchronized (lock) {
            ++l;
        }
    }

    public static synchronized void SyncFunction() {
        ++l;
    }

    public static class ThreadSyncLock implements Runnable {
        @Override
        public void run() {
            for (int i = 0; i < 10000; ++i) {
                SyncLock();
            }
        }
    }

    public static class ThreadSyncFn implements Runnable {
        @Override
        public void run() {
            for (int i = 0; i < 10000; ++i) {
                SyncFunction();
            }
        }
    }

    public static void main(String[] args) {
        l = 0;
        try {
            java.util.ArrayList<Thread> threads = new java.util.ArrayList<Thread>();
            long start, end;
            double avg1 = 0, avg2 = 0;
            for (int x = 0; x < 1000; ++x) {
                threads.clear();
                for (int i = 0; i < 8; ++i) { threads.add(new Thread(new ThreadSyncLock())); }
                start = System.currentTimeMillis();
                for (int i = 0; i < 8; ++i) { threads.get(i).start(); }
                for (int i = 0; i < 8; ++i) { threads.get(i).join(); }
                end = System.currentTimeMillis();
                avg1 += ((end - start) / 1000f);
                l = 0;
                threads.clear();
                for (int i = 0; i < 8; ++i) { threads.add(new Thread(new ThreadSyncFn())); }
                start = System.currentTimeMillis();
                for (int i = 0; i < 8; ++i) { threads.get(i).start(); }
                for (int i = 0; i < 8; ++i) { threads.get(i).join(); }
                end = System.currentTimeMillis();
                avg2 += ((end - start) / 1000f);
                l = 0;
            }
            System.out.format("avg1: %f s\navg2: %f s\n", (avg1 / 1000), (avg2 / 1000));
            l = 0;
        } catch (Throwable t) {
            System.out.println(t.toString());
        }
    }
}
Hope that helps add some clarity.
Considering that your synchronized block has a goto which skips the six or so instructions after it, the instruction counts are actually not that different.

It really comes down to how best to expose the object to multiple accessing threads.

If anything, a synchronized method should in practice be slower than a synchronized block, because a synchronized method makes more of the code sequential. But if both contain the same amount of code, there should not be much difference in performance, as the test below supports.
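The "makes more code sequential" point can be sketched as follows (hypothetical class, my own names): with a block, only the shared update needs to sit inside the critical section, while the method form serializes the whole computation.

```java
public class NarrowCriticalSectionDemo {
    private static final Object lock = new Object();
    static long sum;

    // method-level: the whole computation is serialized across threads
    static synchronized void addAllMethod(double[] a) {
        double s = 0;
        for (double d : a) s += d;
        sum += (long) s;
    }

    // block-level: the local summation runs concurrently; only the shared update is serialized
    static void addAllBlock(double[] a) {
        double s = 0;
        for (double d : a) s += d;
        synchronized (lock) { sum += (long) s; }
    }

    public static void main(String[] args) {
        double[] a = { 1, 2, 3 };
        addAllMethod(a);
        addAllBlock(a);
        System.out.println(sum); // 12
    }
}
```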
Supporting classes:
public interface TestMethod {
    public void test(double[] array);
    public String getName();
}

public class TestSynchronizedBlock implements TestMethod {

    private static final Object lock = new Object();

    @Override
    public void test(double[] arr) { // note: only the block is synchronized, not the method
        synchronized (lock) {
            double sum = 0;
            for (double d : arr) {
                for (double d1 : arr) {
                    sum += d * d1;
                }
            }
            //System.out.print(sum + " ");
        }
    }

    @Override
    public String getName() {
        return getClass().getName();
    }
}
public class TestSynchronizedMethod implements TestMethod {

    @Override
    public synchronized void test(double[] arr) {
        double sum = 0;
        for (double d : arr) {
            for (double d1 : arr) {
                sum += d * d1;
            }
        }
        //System.out.print(sum + " ");
    }

    @Override
    public String getName() {
        return getClass().getName();
    }
}
Main class:
import java.util.Random;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class TestSynchronizedMain {

    public static void main(String[] args) {
        TestSynchronizedMain main = new TestSynchronizedMain();
        TestMethod testMethod = null;

        Random rand = new Random();
        double[] arr = new double[10000];
        for (int j = 0; j < arr.length; j++) {
            arr[j] = rand.nextDouble() * 10000;
        }

        /*testMethod = new TestSynchronizedBlock();
        main.testSynchronized(testMethod, arr);*/

        testMethod = new TestSynchronizedMethod();
        main.testSynchronized(testMethod, arr);
    }

    public void testSynchronized(final TestMethod testMethod, final double[] arr) {
        System.out.println("Testing " + testMethod.getName());
        ExecutorService executor = Executors.newCachedThreadPool();
        final AtomicLong time = new AtomicLong();
        final AtomicLong startCounter = new AtomicLong();
        final AtomicLong endCounter = new AtomicLong();
        for (int i = 0; i < 100; i++) {
            executor.submit(new Runnable() {
                @Override
                public void run() {
                    // System.out.println("Started");
                    startCounter.incrementAndGet();
                    long startTime = System.currentTimeMillis();
                    testMethod.test(arr);
                    long endTime = System.currentTimeMillis();
                    long delta = endTime - startTime;
                    //System.out.print(delta + " ");
                    time.addAndGet(delta);
                    endCounter.incrementAndGet();
                }
            });
        }
        executor.shutdown();
        try {
            executor.awaitTermination(Long.MAX_VALUE, TimeUnit.SECONDS);
            System.out.println("time taken = " + (time.get() / 1000.0) + " : starts = " + startCounter.get() + " : ends = " + endCounter);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
Main output from multiple runs:
1. Testing TestSynchronizedBlock
time taken = 537.974 : starts = 100 : ends = 100
Testing TestSynchronizedMethod
time taken = 537.052 : starts = 100 : ends = 100
2. Testing TestSynchronizedBlock
time taken = 535.983 : starts = 100 : ends = 100
Testing TestSynchronizedMethod
time taken = 537.534 : starts = 100 : ends = 100
3. Testing TestSynchronizedBlock
time taken = 553.964 : starts = 100 : ends = 100
Testing TestSynchronizedMethod
time taken = 552.352 : starts = 100 : ends = 100
Note: the tests were done on a Windows 8, 64-bit, i7 machine. The actual times are not important; the relative values are.