

How should I implement a “quiet period” when raising events?

I'm using a subscriber/notifier pattern to raise and consume events from my .NET middle tier in C#. Some of the events are raised in "bursts", for instance when data is persisted by a batch program importing a file. This executes a potentially long-running task, so I'd like to avoid firing the event several times a second by implementing a "quiet period": the event system waits until the event stream slows down before processing the event.

How should I do this when the publisher takes an active role in notifying subscribers? I don't want to wait until an event comes in just to check whether there are others waiting out the quiet period...

There is no host process to poll the subscription model at the moment. Should I abandon the publish/subscribe pattern, or is there a better way?

I am not sure if I understood your question correctly, but I would try to fix the problem at the source: make sure the events are not raised in "bursts". You could consider implementing batch operations, which the file-importing program could use. The middle tier would then treat the whole batch as a single operation and raise a single event.

I think it will be very tricky to implement a reasonable solution if you can't make the change outlined above. You could try to wrap your publisher in a "caching" publisher, which would apply some heuristic to coalesce events when they arrive in bursts. The easiest approach is to cache an event if another one of the same type is currently being processed (so your batch would cause at least two events: one at the very beginning and one at the end). You could instead wait for a short time and only raise an event when no further event has arrived in that window, but then you incur a time lag even when there is only a single event in the pipeline. You also need to make sure you still raise the event from time to time even if there is a constant queue of events; otherwise the subscribers could get starved.

The second option is tricky to implement and relies on heuristics, which might go very wrong...
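To make the second option concrete, here is a minimal sketch of a debouncing wrapper that implements the "wait for a short time" heuristic with a one-shot timer. The class and member names (`QuietPeriodPublisher`, `Raise`, `Raised`) are illustrative, not from the original question; it only forwards the most recent payload, which may or may not suit your scenario.

```csharp
using System;
using System.Threading;

// Hypothetical "quiet period" wrapper: each call to Raise resets a one-shot
// timer, so the Raised event only fires once no new call has arrived for the
// duration of the quiet period.
class QuietPeriodPublisher<T>
{
    private readonly TimeSpan quietPeriod;
    private readonly object sync = new object();
    private Timer timer;
    private T lastPayload;

    public event Action<T> Raised;

    public QuietPeriodPublisher(TimeSpan quietPeriod)
    {
        this.quietPeriod = quietPeriod;
    }

    public void Raise(T payload)
    {
        lock (sync)
        {
            lastPayload = payload;
            if (timer == null)
                timer = new Timer(_ => Flush(), null, quietPeriod, Timeout.InfiniteTimeSpan);
            else
                timer.Change(quietPeriod, Timeout.InfiniteTimeSpan); // restart the countdown
        }
    }

    private void Flush()
    {
        T payload;
        lock (sync)
        {
            payload = lastPayload;
        }
        var handler = Raised; // copy to avoid a race on the delegate
        if (handler != null)
            handler(payload);
    }
}
```

Note the starvation problem mentioned above still applies: under a constant event stream the timer keeps getting reset, so a production version would also need a maximum-delay cap that forces a flush periodically.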

Here's a rough implementation that might point you in a direction. In my example, the task that involves notification is saving a data object. When an object is saved, the Saved event is raised. In addition to a simple Save method, I've implemented BeginSave and EndSave methods, as well as an overload of Save that works with those two for batch saves. When EndSave is called, a single BatchSaved event is fired.

Obviously, you can alter this to suit your needs. In my example, I kept track of a list of all objects that were saved during a batch operation, but this may not be something you need to do; you may only care about how many objects were saved, or simply that a batch save operation completed. If you anticipate a large number of objects being saved, then storing them in a list as in my example may become a memory issue.

EDIT: I added a "threshold" concept to my example that attempts to prevent a large number of objects being held in memory. This causes the BatchSaved event to fire more frequently, though. I also added some locking to address potential thread safety, though I may have missed something there.

using System;
using System.Collections.Generic;

class DataConcierge<T>
{
    // *************************
    // Simple save functionality
    // *************************

    public void Save(T dataObject)
    {
        // perform save logic

        this.OnSaved(dataObject);
    }

    public event DataObjectSaved<T> Saved;

    protected void OnSaved(T dataObject)
    {
        var saved = this.Saved;
        if (saved != null)
            saved(this, new DataObjectEventArgs<T>(dataObject));
    }

    // ************************
    // Batch save functionality
    // ************************

    Dictionary<BatchToken, List<T>> _BatchSavedDataObjects = new Dictionary<BatchToken, List<T>>();
    System.Threading.ReaderWriterLockSlim _BatchSavedDataObjectsLock = new System.Threading.ReaderWriterLockSlim();

    int _SavedObjectThreshold = 17; // if the number of objects being stored for a batch reaches this threshold, then those objects are to be cleared from the list.

    public BatchToken BeginSave()
    {
        // create a batch token to represent this batch
        BatchToken token = new BatchToken();

        _BatchSavedDataObjectsLock.EnterWriteLock();
        try
        {
            _BatchSavedDataObjects.Add(token, new List<T>());
        }
        finally
        {
            _BatchSavedDataObjectsLock.ExitWriteLock();
        }
        return token;
    }

    public void EndSave(BatchToken token)
    {
        List<T> batchSavedDataObjects;
        _BatchSavedDataObjectsLock.EnterWriteLock();
        try
        {
            if (!_BatchSavedDataObjects.TryGetValue(token, out batchSavedDataObjects))
                throw new ArgumentException("The BatchToken is expired or invalid.", "token");

            this.OnBatchSaved(batchSavedDataObjects); // this causes a single BatchSaved event to be fired

            if (!_BatchSavedDataObjects.Remove(token))
                throw new ArgumentException("The BatchToken is expired or invalid.", "token");
        }
        finally
        {
            _BatchSavedDataObjectsLock.ExitWriteLock();
        }
    }

    public void Save(BatchToken token, T dataObject)
    {
        List<T> batchSavedDataObjects;
        // the read lock prevents EndSave from executing before this Save method has a chance to finish executing
        _BatchSavedDataObjectsLock.EnterReadLock();
        try
        {
            if (!_BatchSavedDataObjects.TryGetValue(token, out batchSavedDataObjects))
                throw new ArgumentException("The BatchToken is expired or invalid.", "token");

            // perform save logic

            this.OnBatchSaved(batchSavedDataObjects, dataObject);
        }
        finally
        {
            _BatchSavedDataObjectsLock.ExitReadLock();
        }
    }

    public event BatchDataObjectSaved<T> BatchSaved;

    protected void OnBatchSaved(List<T> batchSavedDataObjects)
    {
        lock (batchSavedDataObjects)
        {
            var batchSaved = this.BatchSaved;
            if (batchSaved != null)
                batchSaved(this, new BatchDataObjectEventArgs<T>(batchSavedDataObjects));
        }
    }

    protected void OnBatchSaved(List<T> batchSavedDataObjects, T savedDataObject)
    {
        // add the data object to the list storing the data objects that have been saved for this batch
        lock (batchSavedDataObjects)
        {
            batchSavedDataObjects.Add(savedDataObject);

            // if the threshold has been reached
            if (_SavedObjectThreshold > 0 && batchSavedDataObjects.Count >= _SavedObjectThreshold)
            {
                // then raise the BatchSaved event with the data objects that we currently have
                var batchSaved = this.BatchSaved;
                if (batchSaved != null)
                    batchSaved(this, new BatchDataObjectEventArgs<T>(batchSavedDataObjects.ToArray()));

                // and clear the list to ensure that we are not holding on to the data objects unnecessarily
                batchSavedDataObjects.Clear();
            }
        }
    }
}

class BatchToken
{
    static int _LastId = 0;
    static object _IdLock = new object();

    static int GetNextId()
    {
        lock (_IdLock)
        {
            return ++_LastId;
        }
    }

    public BatchToken()
    {
        this.Id = GetNextId();
    }

    public int Id { get; private set; }
}

class DataObjectEventArgs<T> : EventArgs
{
    public T DataObject { get; private set; }

    public DataObjectEventArgs(T dataObject)
    {
        this.DataObject = dataObject;
    }
}

delegate void DataObjectSaved<T>(object sender, DataObjectEventArgs<T> e);

class BatchDataObjectEventArgs<T> : EventArgs
{
    public IEnumerable<T> DataObjects { get; private set; }

    public BatchDataObjectEventArgs(IEnumerable<T> dataObjects)
    {
        this.DataObjects = dataObjects;
    }
}

delegate void BatchDataObjectSaved<T>(object sender, BatchDataObjectEventArgs<T> e);

In my example, I chose to use a token concept in order to create separate batches. This allows smaller batch operations running on separate threads to complete and raise events without waiting for a larger batch operation to complete.

I made separate events: Saved and BatchSaved. However, these could just as easily be consolidated into a single event.

EDIT: fixed race conditions pointed out by Steven Sudit on accessing the event delegates.

EDIT: revised the locking code in my example to use ReaderWriterLockSlim rather than Monitor (i.e., the lock statement). I think there were a couple of race conditions, such as between the Save and EndSave methods. It was possible for EndSave to execute, causing the list of data objects to be removed from the dictionary. If the Save method was executing at the same time on another thread, a data object could be added to that list even though it had already been removed from the dictionary.

In my revised example, this situation can't happen, and the Save method will throw an exception if it executes after EndSave. These race conditions were caused primarily by me trying to avoid what I thought was unnecessary locking. I realized that more code needed to be within a lock, but decided to use ReaderWriterLockSlim instead of Monitor because I only wanted to prevent Save and EndSave from executing at the same time; there was no need to prevent multiple threads from executing Save concurrently. Note that Monitor is still used to synchronize access to the specific list of data objects retrieved from the dictionary.

EDIT: added a usage example

Below is a usage example for the sample code above.

    static void DataConcierge_Saved(object sender, DataObjectEventArgs<Program.Customer> e)
    {
        Console.WriteLine("DataConcierge<Customer>.Saved");
    }

    static void DataConcierge_BatchSaved(object sender, BatchDataObjectEventArgs<Program.Customer> e)
    {
        Console.WriteLine("DataConcierge<Customer>.BatchSaved: {0}", e.DataObjects.Count());
    }

    static void Main(string[] args)
    {
        DataConcierge<Customer> dc = new DataConcierge<Customer>();
        dc.Saved += new DataObjectSaved<Customer>(DataConcierge_Saved);
        dc.BatchSaved += new BatchDataObjectSaved<Customer>(DataConcierge_BatchSaved);

        var token = dc.BeginSave();
        try
        {
            for (int i = 0; i < 100; i++)
            {
                var c = new Customer();
                // ...
                dc.Save(token, c);
            }
        }
        finally
        {
            dc.EndSave(token);
        }
    }

This resulted in the following output:

DataConcierge<Customer>.BatchSaved: 17
DataConcierge<Customer>.BatchSaved: 17
DataConcierge<Customer>.BatchSaved: 17
DataConcierge<Customer>.BatchSaved: 17
DataConcierge<Customer>.BatchSaved: 17
DataConcierge<Customer>.BatchSaved: 15

The threshold in my example is set to 17, so a batch of 100 items causes the BatchSaved event to fire six times.

Here's one idea that's just fallen out of my head. I don't know how workable it is, and I can't see an obvious way to make it more generic, but it might be a start. All it does is provide a buffer for button click events (substitute your own event as necessary).

class ButtonClickBuffer
{
    public event EventHandler BufferedClick;

    public ButtonClickBuffer(Button button, int queueSize)
    {
        this.queueSize = queueSize;
        button.Click += this.button_Click;
    }

    private int queueSize;
    private List<EventArgs> queuedEvents = new List<EventArgs>();

    private void button_Click(object sender, EventArgs e)
    {
        queuedEvents.Add(e);
        if (queuedEvents.Count >= queueSize)
        {
            if (this.BufferedClick != null)
            {
                foreach (var args in this.queuedEvents)
                {
                    this.BufferedClick(sender, args);
                }
                queuedEvents.Clear();
            }
        }
    }
}

So your subscriber, instead of subscribing as:

this.button1.Click += this.button1_Click;

would use a buffer, specifying how many events to wait for:

ButtonClickBuffer buffer = new ButtonClickBuffer(this.button1, 5);
buffer.BufferedClick += this.button1_Click;

It works in a simple test form I knocked up, but it's far from production-ready!

You said you didn't want to wait for an event to come in before checking whether there is a queue waiting, which is exactly what this does. You could substitute the logic inside the buffer to spawn a new thread which monitors the queue and dispatches events as necessary. God knows what threading and locking issues might arise from that!
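As a rough sketch of that last idea, the buffer can flush from a periodic background timer as well as when the queue fills, so subscribers never wait indefinitely for the count threshold. The names here (`TimedEventBuffer`, `Enqueue`, `BufferedEvent`) are made up for illustration, and the threading is only lightly considered, as the answer warns.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Illustrative variant of the click buffer: events are flushed either when
// the queue reaches queueSize or when a periodic timer drains whatever has
// accumulated, so a trickle of events is still delivered promptly.
class TimedEventBuffer
{
    public event EventHandler BufferedEvent;

    private readonly List<EventArgs> queue = new List<EventArgs>();
    private readonly object sync = new object();
    private readonly int queueSize;
    private readonly Timer flushTimer;

    public TimedEventBuffer(int queueSize, TimeSpan flushInterval)
    {
        this.queueSize = queueSize;
        this.flushTimer = new Timer(_ => Flush(), null, flushInterval, flushInterval);
    }

    public void Enqueue(EventArgs e)
    {
        lock (sync)
        {
            queue.Add(e);
            if (queue.Count < queueSize)
                return; // below threshold; leave it for the timer
        }
        Flush();
    }

    private void Flush()
    {
        EventArgs[] pending;
        lock (sync)
        {
            if (queue.Count == 0)
                return;
            pending = queue.ToArray();
            queue.Clear();
        }
        var handler = BufferedEvent; // copy to avoid a race on the delegate
        if (handler != null)
            foreach (var e in pending)
                handler(this, e);
    }
}
```

Note that the timer callback runs on a thread-pool thread, so in a WinForms context the handler would still need to marshal back to the UI thread (e.g., via Control.Invoke) before touching controls.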
