
How to ensure tasks are executed simultaneously?

I'm implementing some multithreaded unit tests and I've run into a problem: it is hard to make sure two jobs actually execute in parallel, because one of them always starts earlier than the other. Consider the initial implementation of my test scenario, which demonstrates the behaviour:

static void Main(string[] args)
{
    var repeats = 1000000;
    var firstWinCount = 0;
    var secondWinCount = 0;
    int x = 0;

    long time1 = 0;
    long time2 = 0;

    long totalTimeDiff = 0;

    var sw = new Stopwatch();
    sw.Start();

    for (int i = 0; i < repeats; i++)
    {
        x = 0;
        var task1 = new Task(() =>
        {
            Interlocked.CompareExchange(ref x, 1, 0);
            time1 = sw.ElapsedMilliseconds;
        });
        var task2 = new Task(() =>
        {
            Interlocked.CompareExchange(ref x, 2, 0);
            time2 = sw.ElapsedMilliseconds;
        });
        task1.Start();
        task2.Start();
        Task.WaitAll(task1, task2);

        totalTimeDiff += Math.Abs(time1 - time2);

        if (x == 1)
        {
            firstWinCount++;
        }
        else
        {
            if (x == 2)
            {
                secondWinCount++;
            }
        }
    }
    Console.WriteLine("First  win count: {0}, percentage: {1}", firstWinCount, firstWinCount / (double)repeats * 100);
    Console.WriteLine("Second win count: {0}, percentage: {1}", secondWinCount, secondWinCount / (double)repeats * 100);

    Console.WriteLine("Avg sync diff: {0}ns", totalTimeDiff * 1000000 / repeats);
}

The output is:

First  win count: 950538, percentage: 95,0538
Second win count: 49462, percentage: 4,9462
Avg sync diff: 1012ns

As we can see, most of the time the first task starts executing earlier than the second one, because it is queued to the thread pool first:

task1.Start();
task2.Start();

Since the ThreadPool is unpredictable in how it schedules tasks, there is no guarantee at all that the first task won't have finished before the second one even starts, so it is hard to be sure we are actually testing a multithreaded scenario.

Surprisingly, I couldn't find a similar question anywhere on the internet.

My considerations and experiments with AutoResetEvents, locks and Interlocked synchronization constructs led me to the following task-synchronization solution:

int sync = 0;
var task1 = new Task(() =>
{
    Interlocked.Increment(ref sync);
    while (Interlocked.CompareExchange(ref sync, 3, 2) != 2) ;

    Interlocked.CompareExchange(ref x, 1, 0);
    time1 = sw.ElapsedMilliseconds;
});
var task2 = new Task(() =>
{
    while (Interlocked.CompareExchange(ref sync, 2, 1) != 1) ;

    Interlocked.CompareExchange(ref x, 2, 0);
    time2 = sw.ElapsedMilliseconds;
});

The idea is basically that neither thread is blocked (and therefore is very likely to have processor time) while it waits for the other task to start processing. As a result, I managed to reduce the synchronization difference from ~1000 ns to ~130 ns and to significantly increase the probability that short-running jobs actually execute in parallel:

First  win count: 23182, percentage: 2,3182
Second win count: 976818, percentage: 97,6818
Avg sync diff: 128ns

The remaining drawback is that the order of the tasks is still deterministic: the first task always waits for a signal from the second one, and the second one, as soon as it knows the first is waiting, stops waiting and starts doing its job. So the second job most probably starts first. The exceptions (2.3%) are possible, as far as I understand, because of [relatively rare] thread switches. I could work around this by randomizing the synchronization order, but that is yet another complication.
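
To make the asymmetry explicit, here is how I read the handshake on the sync variable (comments only, tracing the snippet above):

// sync == 0: both tasks have been created, nothing has happened yet
// sync == 1: task1 incremented sync and now spins, waiting to see 2
// sync == 2: task2 saw 1, published 2 and immediately runs its action  -> usually starts first
// sync == 3: task1 saw 2, published 3 and then runs its action         -> starts second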

I wonder whether I'm reinventing the wheel here, and whether there is a better way to maximize the probability of the two tasks executing simultaneously and to even out the chances of either task starting earlier.

P.S. I know that multithreaded scenarios usually operate on much coarser timescales than 100 ns (any thread switch or block on a synchronization construct is at least 1000 times slower), so in most cases this synchronization lag doesn't matter. It can, however, be crucial for testing non-blocking, high-performance code.
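
For reference, a blocking version of the same gate, the kind of construct the P.S. refers to, might look roughly like this with System.Threading.Barrier (a sketch, not measured here; it reuses x from the listing above):

var barrier = new Barrier(2);          // both tasks must arrive before either proceeds

var task1 = new Task(() =>
{
    barrier.SignalAndWait();           // block until the other task also arrives
    Interlocked.CompareExchange(ref x, 1, 0);
});
var task2 = new Task(() =>
{
    barrier.SignalAndWait();
    Interlocked.CompareExchange(ref x, 2, 0);
});
task1.Start();
task2.Start();
Task.WaitAll(task1, task2);

Releasing from SignalAndWait still goes through the normal blocking/wake-up path, which is exactly the kind of latency the spin-based handshake tries to avoid.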

I would use a ManualResetEvent.

Something like:

var waitEvent = new ManualResetEvent(false);


var task1 = new Task(() =>
{
    waitEvent.WaitOne();
    Interlocked.CompareExchange(ref x, 1, 0);
    time1 = sw.ElapsedMilliseconds;
});
var task2 = new Task(() =>
{
    waitEvent.WaitOne();
    Interlocked.CompareExchange(ref x, 2, 0);
    time2 = sw.ElapsedMilliseconds;
});
task1.Start();
task2.Start();

// a startup delay? so the thread can be queued/start executing
// but still then, you're not aware how busy the threadpool is.
Thread.Sleep(1000);

waitEvent.Set();

Task.WaitAll(task1, task2);
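
One way to avoid guessing with Thread.Sleep(1000), as a sketch on top of this answer, is to let each task signal a CountdownEvent once it is actually running, so the main thread knows both tasks have reached the gate before setting the event:

var readyEvent = new CountdownEvent(2);        // counts the tasks that reached the gate
var startEvent = new ManualResetEvent(false);  // the common start signal

var task1 = new Task(() =>
{
    readyEvent.Signal();                       // "I'm scheduled and running"
    startEvent.WaitOne();                      // block until both are released together
    Interlocked.CompareExchange(ref x, 1, 0);
});
var task2 = new Task(() =>
{
    readyEvent.Signal();
    startEvent.WaitOne();
    Interlocked.CompareExchange(ref x, 2, 0);
});
task1.Start();
task2.Start();

readyEvent.Wait();                             // both tasks are now blocked on startEvent
startEvent.Set();                              // release them at (almost) the same moment
Task.WaitAll(task1, task2);

There is still a small window between Signal() and WaitOne(), and waking two blocked threads is itself subject to scheduler latency, but at least the release no longer depends on an arbitrary delay.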

So, since there doesn't seem to be a better solution than the interlocked-synchronization idea, I implemented it as a reusable class and added randomization of the start order to make the chances of either action starting first equal:

public class Operations
{
    private static int _runId = 0;

    public static void ExecuteSimultaneously(Action action1, Action action2)
    {
        Action slightlyEarlierStartingAction;
        Action slightlyLaterStartingAction;

        if (Interlocked.Increment(ref _runId) % 2 == 0)
        {
            slightlyEarlierStartingAction = action1;
            slightlyLaterStartingAction = action2;
        }
        else
        {
            slightlyEarlierStartingAction = action2;
            slightlyLaterStartingAction = action1;
        }

        int sync = 0;

        var taskA = new Task(() =>
        {
            Interlocked.Increment(ref sync);
            while (Interlocked.CompareExchange(ref sync, 3, 2) != 2) ;

            slightlyLaterStartingAction();
        });

        var taskB = new Task(() =>
        {
            while (Interlocked.CompareExchange(ref sync, 2, 1) != 1) ;

            slightlyEarlierStartingAction();
        });

        taskA.Start();
        taskB.Start();

        Task.WaitAll(taskA, taskB);
    }
}

With this implementation the synchronization precision is about 130 ns, and the probability of each action winning the race is very close to 50%.
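
A hypothetical usage from a test, mirroring the racing increments in the question (the surrounding variables are illustrative):

int x = 0;
Operations.ExecuteSimultaneously(
    () => Interlocked.CompareExchange(ref x, 1, 0),
    () => Interlocked.CompareExchange(ref x, 2, 0));

// over many repetitions each action should now win roughly half of the time
Console.WriteLine("Winner: {0}", x);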

I also found a way to fine-tune the synchronization precision further by scheduling the tasks on high-priority foreground threads, although I consider it overkill for my purposes. Sharing it anyway in case someone finds it useful:

public class PriorityScheduler : TaskScheduler
{
    public static PriorityScheduler Highest = new PriorityScheduler(ThreadPriority.Highest);
    //public static PriorityScheduler AboveNormal = new PriorityScheduler(ThreadPriority.AboveNormal);
    //public static PriorityScheduler BelowNormal = new PriorityScheduler(ThreadPriority.BelowNormal);
    //public static PriorityScheduler Lowest = new PriorityScheduler(ThreadPriority.Lowest);

    private BlockingCollection<Task> _tasks = new BlockingCollection<Task>();
    private Thread[] _threads;
    private ThreadPriority _priority;
    private readonly int _maximumConcurrencyLevel = 2;//Math.Max(1, Environment.ProcessorCount);

    public PriorityScheduler(ThreadPriority priority)
    {
        _priority = priority;
    }

    public override int MaximumConcurrencyLevel
    {
        get { return _maximumConcurrencyLevel; }
    }

    protected override IEnumerable<Task> GetScheduledTasks()
    {
        return _tasks;
    }

    protected override void QueueTask(Task task)
    {
        _tasks.Add(task);

        if (_threads == null)
        {
            _threads = new Thread[_maximumConcurrencyLevel];
            for (int i = 0; i < _threads.Length; i++)
            {
                int local = i;
                _threads[i] = new Thread(() =>
                {
                    foreach (Task t in _tasks.GetConsumingEnumerable())
                        base.TryExecuteTask(t);
                });
                _threads[i].Name = string.Format("PriorityScheduler: {0}", i);
                _threads[i].Priority = _priority;
                _threads[i].IsBackground = false;
                _threads[i].Start();
            }
        }
    }

    protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
    {
        return false; // don't inline: tasks meant for the high/low-priority threads should not run on the caller's thread
    }
}

public class Operations
{
    private static int _runId = 0;

    public static void ExecuteSimultaneously(Action action1, Action action2)
    {
        Action slightlyEarlierStartingAction;
        Action slightlyLaterStartingAction;

        if (Interlocked.Increment(ref _runId) % 2 == 0)
        {
            slightlyEarlierStartingAction = action1;
            slightlyLaterStartingAction = action2;
        }
        else
        {
            slightlyEarlierStartingAction = action2;
            slightlyLaterStartingAction = action1;
        }

        int sync = 0;
        var cancellationToken = new CancellationToken();

        var taskA = Task.Factory.StartNew(() =>
        {
            Interlocked.Increment(ref sync);
            while (Interlocked.CompareExchange(ref sync, 3, 2) != 2) ;

            slightlyLaterStartingAction();
        }, cancellationToken, TaskCreationOptions.None, PriorityScheduler.Highest);

        var taskB = Task.Factory.StartNew(() =>
        {
            while (Interlocked.CompareExchange(ref sync, 2, 1) != 1) ;

            slightlyEarlierStartingAction();
        }, cancellationToken, TaskCreationOptions.None, PriorityScheduler.Highest);

        Task.WaitAll(taskA, taskB);
    }
}

This allowed me to push the synchronization precision down to ~100 ns:

First  win count: 4992559, percentage: 49,92559
Second win count: 5007441, percentage: 50,07441
Avg sync diff: 98ns

Warning: using highest-priority threads can hurt the machine's responsiveness, especially when there are no free processor cores.
