
Measuring the time for a context switch

I am getting acquainted with the MicroC/OS-II kernel and multitasking. I have programmed the following two tasks that use semaphores:

#define TASK1_PRIORITY      6  // highest priority
#define TASK2_PRIORITY      7

void task1(void* pdata)
{
  while (1)
  { 
    INT8U err;
    OSSemPend(aSemaphore_task1, 0, &err);    

    if (sharedAddress >= 0)
    {
        printText(text1);
        printDigit(++sharedAddress);
    }
    else
    {
        printText(text2);
        printDigit(sharedAddress);                      
    }  
    OSTimeDlyHMSM(0, 0, 0, 11);  
    OSSemPost(aSemaphore_task2);  
  }
}

void task2(void* pdata)
{
  while (1)
  { 
    INT8U err;
    OSSemPend(aSemaphore_task2, 0, &err);    
    sharedAddress *= -1; 
    OSTimeDlyHMSM(0, 0, 0, 4);                                 
    OSSemPost(aSemaphore_task1);
  }
}

Now I want to measure the context-switch time, i.e., the time it takes for the processor to switch between these two tasks.

Is this done by just using a timer() function, like this:

void task1(void* pdata)
{
  while (1)
  { 
    INT8U err;
    OSSemPend(aSemaphore_task1, 0, &err);    

    if (sharedAddress >= 0)
    {
        printText(text1);
        printDigit(++sharedAddress);
    }
    else
    {
        printText(text2);
        printDigit(sharedAddress);                      
    }    
     OSTimeDlyHMSM(0, 0, 0, 11);
     OSSemPost(aSemaphore_task2);
     timer(start);
  }
}

void task2(void* pdata)
{
  while (1)
  { 
    timer(stop);
    INT8U err;
    OSSemPend(aSemaphore_task2, 0, &err);    
    sharedAddress *= -1;  
    OSTimeDlyHMSM(0, 0, 0, 4);                                
    OSSemPost(aSemaphore_task1);
  }
}

or have I gotten this completely wrong?

I'm afraid you won't be able to measure the context-switch time with any of the µC/OS primitives. The context-switch time is far too small to be measured by µC/OS soft timers, which are most likely based on a multiple of the system tick (hence a few ms), even though the details depend on the specific µC/OS port for your CPU architecture.

You will have to directly access a hardware timer of your processor, and you probably want to configure its frequency to the maximum it can handle. Set it up as a free-running timer (you don't need any interrupt) and use its counter value as the time base for measuring the switching time.

Or you can read the assembly of OS_TASK_SW() for your architecture and compute the number of cycles required ;)

For performance measurements, the standard approach is to first calibrate your tools. In this case that is your timer, or the suggested clock (if you use C++).

To calibrate it, you need to call it many times (e.g. 1000) and see how long each call takes on average. Now you know the cost of measuring the time itself. In this case, it is likely to be in a similar range (at best) to the thing you are trying to measure: the context switch.

So the calibration is important.

Let us know how you go.

You can use the OSTimeGet API to get the execution time. uCOS doesn't use a timer() function to get the execution time.

