Measuring the time for a context switch

I am getting acquainted with the MicroC/OS-II kernel and multitasking. I have programmed the following two tasks that use semaphores:

#define TASK1_PRIORITY      6  // highest priority
#define TASK2_PRIORITY      7

void task1(void* pdata)
{
  while (1)
  { 
    INT8U err;
    OSSemPend(aSemaphore_task1, 0, &err);    

    if (sharedAddress >= 0)
    {
        printText(text1);
        printDigit(++sharedAddress);
    }
    else
    {
        printText(text2);
        printDigit(sharedAddress);                      
    }  
    OSTimeDlyHMSM(0, 0, 0, 11);  
    OSSemPost(aSemaphore_task2);  
  }
}

void task2(void* pdata)
{
  while (1)
  { 
    INT8U err;
    OSSemPend(aSemaphore_task2, 0, &err);    
    sharedAddress *= -1; 
    OSTimeDlyHMSM(0, 0, 0, 4);                                 
    OSSemPost(aSemaphore_task1);
  }
}

Now I want to measure the context switch time, i.e., the time it takes for the processor to switch between these two tasks.

Is this done just by using a timer() function, like this:

void task1(void* pdata)
{
  while (1)
  { 
    INT8U err;
    OSSemPend(aSemaphore_task1, 0, &err);    

    if (sharedAddress >= 0)
    {
        printText(text1);
        printDigit(++sharedAddress);
    }
    else
    {
        printText(text2);
        printDigit(sharedAddress);                      
    }    
     OSTimeDlyHMSM(0, 0, 0, 11);
     OSSemPost(aSemaphore_task2);
     timer(start);
  }
}

void task2(void* pdata)
{
  while (1)
  { 
    INT8U err;
    timer(stop);
    OSSemPend(aSemaphore_task2, 0, &err);    
    sharedAddress *= -1;  
    OSTimeDlyHMSM(0, 0, 0, 4);                                
    OSSemPost(aSemaphore_task1);
  }
}

or have I gotten this completely wrong?

I'm afraid you won't be able to measure the context switch time with any of the µC/OS primitives. The context switch time is far too small to be measured by µC/OS soft timers, which are most likely based on a multiple of the system tick (hence a few ms), even if the exact resolution depends on the specific µC/OS port for your CPU architecture.

You will have to directly access a hardware timer of your processor; you probably want to configure its frequency to the maximum it can handle. Set it up as a free-running timer (you don't need any interrupt) and use its counter value as a time base to measure the switching time.
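
As a rough illustration of that approach applied to the two tasks above, here is a minimal sketch. TIMER_COUNT_REG and TIMER_CLOCK_HZ are hypothetical placeholders for your MCU's free-running counter register and its clock rate, so substitute the real ones from your part's datasheet. The low-priority task takes a timestamp just before posting; since µC/OS-II switches immediately when a post readies a higher-priority task, the high-priority task reads the counter again as its first action after waking:

#include "includes.h"                     /* µC/OS-II master header */

/* Hypothetical memory-mapped free-running counter; substitute the
   register address and clock rate of your MCU's timer peripheral. */
#define TIMER_COUNT_REG  (*(volatile INT32U *)0x40001000u)
#define TIMER_CLOCK_HZ   50000000u        /* assumed 50 MHz count rate */

static volatile INT32U t_post;            /* stamped just before the post */

void task2(void* pdata)                   /* lower priority (7) */
{
  INT8U err;

  while (1)
  {
    OSSemPend(aSemaphore_task2, 0, &err);
    t_post = TIMER_COUNT_REG;             /* stamp, then wake task1 */
    OSSemPost(aSemaphore_task1);          /* readies the higher-priority task,
                                             so µC/OS-II switches right here */
  }
}

void task1(void* pdata)                   /* higher priority (6) */
{
  INT8U  err;
  INT32U t_wake;
  INT32U cycles;

  while (1)
  {
    OSSemPend(aSemaphore_task1, 0, &err);
    t_wake = TIMER_COUNT_REG;             /* first thing after waking up */
    cycles = t_wake - t_post;             /* unsigned math is wrap-safe  */
    printDigit((int)cycles);              /* cycles / TIMER_CLOCK_HZ = seconds */
    OSSemPost(aSemaphore_task2);
  }
}

Keep in mind the reading also covers the tail of OSSemPost() and the return path of OSSemPend(), not only the raw OSCtxSw(), which is one more reason to calibrate the measurement overhead as described in the next answer.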

Or you can read the ASM of OS_TASK_SW() for your architecture and compute the number of cycles required ;)

For performance measurements, the standard approach is to calibrate your tools first. In this case that is your timer, or the suggested clock (if you use C++).

To calibrate it, you need to call it many times (e.g. 1000) and see how long each call takes on average. Now you know the cost of measuring the time. In this case, that cost is likely to be in a similar range (at best) to the feature you are trying to measure: the context switch.

So the calibration is important.
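
As a sketch of that calibration, assuming the same hypothetical TIMER_COUNT_REG free-running counter as in the previous answer, you could average the cost of the read itself over many iterations and subtract it from later measurements:

#define CALIB_LOOPS 1000u

INT32U calibrate_timer_read(void)
{
  INT32U i;
  INT32U begin;
  INT32U end;

  begin = TIMER_COUNT_REG;
  for (i = 0u; i < CALIB_LOOPS; i++)
  {
    (void)TIMER_COUNT_REG;                /* the read being calibrated */
  }
  end = TIMER_COUNT_REG;
  return (end - begin) / CALIB_LOOPS;     /* average cycles per read */
}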

Let us know how you go.

You can use the OSTimeGet() API to get execution time; µC/OS doesn't provide a timer() function for that.
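
For reference, OSTimeGet() returns the current tick count as an INT32U, so its resolution is one system tick; that is fine for task-level timing but, as noted above, far too coarse for a single context switch:

INT32U t0, t1, elapsed_ticks;

t0 = OSTimeGet();                         /* current tick count */
/* ... code being timed ... */
t1 = OSTimeGet();
elapsed_ticks = t1 - t0;                  /* one tick = 1/OS_TICKS_PER_SEC s */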
