
Threading example in C#

I found this book on the internet: Threading in C# by Joseph Albahari. I tried its example:

class ThreadTest
{
    static void Main()
    {
        Thread t = new Thread(WriteY);          // Kick off a new thread
        t.Start();                               // running WriteY()

        // Simultaneously, do something on the main thread.
        for (int i = 0; i < 10000000; i++) Console.Write("x");
        Console.ReadLine();
    }
    static void WriteY()
    {
        for (int i = 0; i < 10000000; i++) Console.Write("y");
    }
}

The problem is, when I run this program (I used higher loop bounds to make the effect easier to observe), my CPU utilization sticks at 100%. I didn't want this. Is there any way to make this program less CPU-intensive? I'm new to the multithreading concept, so I thought I should ask in advance.

Multithreading can improve your application if you can use multiple resources at the same time. For instance, if you have multiple cores or multiple CPUs, I believe the above example should perform better.

Or, if you have a thread that uses the CPU, and another thread that simultaneously uses the disk for instance, it also will perform better if you use multi-threading.

If, however, you have a single CPU with a single core, the example above won't perform better. It will perform even worse.

There is no way you can reduce the utilization, because you are using two threads (most likely on a dual-core) which are both work-intensive (they loop and print something). Maybe reducing the thread priority can help, but I don't think that's the point of this example.

The loop in the WriteY function will execute as quickly as possible, so it will use 100% of the CPU. If you want it to be less resource-intensive, there are two things you can do:

  • Change the priority of the thread. This way, your app will still use 100% of the CPU, but the thread will 'slow down' if another thread needs CPU resources

  • Add a pause in your WriteY function:

     static void WriteY()
     {
         for (int i = 0; i < 10000000; i++)
         {
             Console.Write("y");
             Thread.Sleep(100);
         }
     }
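The first option can be sketched by setting Thread.Priority before starting the worker; the BelowNormal choice and the reduced loop counts are illustrative assumptions, not from the original post:

```csharp
using System;
using System.Threading;

class PriorityDemo
{
    static void Main()
    {
        Thread t = new Thread(WriteY);
        // BelowNormal tells the scheduler to prefer other runnable threads;
        // the loop still spins, so total CPU use stays high when cores are idle.
        t.Priority = ThreadPriority.BelowNormal;
        t.Start();

        for (int i = 0; i < 1000; i++) Console.Write("x");
        t.Join();   // wait for the worker to finish
    }

    static void WriteY()
    {
        for (int i = 0; i < 1000; i++) Console.Write("y");
    }
}
```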

Add Thread.Sleep(number of milliseconds) after Console.Write, because otherwise the loop will fully utilize the CPU.

class ThreadTest
{
    static void Main()
    {
        Thread t = new Thread(WriteY);   // Kick off a new thread
        t.Start();                       // running WriteY()

        // Simultaneously, do something on the main thread.
        for (int i = 0; i < 10000000; i++)
        {
            Console.Write("x");
        }
        Console.ReadLine();
    }

    static void WriteY()
    {
        for (int i = 0; i < 10000000; i++)
        {
            Console.Write("y");
            Thread.Sleep(1000); // let the thread sleep for one second before the next iteration
        }
    }
}

update

Well, you may use an example like the following, using the .NET 4 Parallel Extensions, in case you have multiple cores.

var result = from ipaddress in new[]
{
    "111.11.11.11",
    "22.22.22.22",
    "22.33.44.55"
    /* or pulled from whatever source */
}
.AsParallel().WithDegreeOfParallelism(6)
let p = new Ping().Send(IPAddress.Parse(ipaddress))
select new
{
    Site = ipaddress,
    Result = p.Status,
    Time = p.RoundtripTime
};

/* process the information you got */

First off, try the single-threaded equivalent of that program. You'll probably find it uses 100% of one of your cores too, maybe even more than that if you have more than one core (on a single core, more than 100% obviously isn't possible). Example code is example code, and often not realistic in all regards.

A lot of problems are associated with 100% CPU utilisation, and therefore one can be led to think 100% CPU == bad stuff.

Actually, 100% CPU == the expensive piece of electronics is doing the job you paid money for it to do!

Unfortunately, what you paid money for it to do, is to follow the instructions in computer programs. If a computer program tells it to go into a tight infinite loop, then it'll spend as close to 100% CPU doing that as possible (different schedulers are better than others at letting other threads do something else). This is the classic bad case of 100% CPU. Yes, it's doing what it was told, but what it was told is pointless, will never come to an end, and sadly is so "efficient" that it's really good at keeping other threads out of the way.

Let's consider another case though:

  1. The amount of work that'll be done is bounded - at some point it's finished.
  2. You have nothing else you want the computer to do.

Here the closer to 100% the better. Every % below 100 indicates the CPU sitting waiting for something to happen. If we can either make that "something" happen quicker (maybe faster disks and memory) or if we can let the CPU work on another part of the problem, then we'll get to our finished point faster. Therefore, if we replace the code with a multi-threaded approach that lets it make use of the CPU while another thread is waiting, and if the overhead of doing so doesn't cancel out the benefits, then we get a performance boost. (Also, it means we can replace something that uses x% of one core with x% of all the cores, and also be faster for that reason).

Realistically, there are only a few times we want a particular job done and don't care about anything else. Indeed, even when we do, we tend to get freaked out by the UI hanging in the meantime, forgetting that "make it not look like it's locked up and will never come back" falls into the category of "anything else".

So, in the real world, what do we do?

First, we check there's a real problem. If the CPU is at 100% for a while, but everything (including other processes) is able to do its job, then that's actually fine: the CPU is always doing something, not because one bunch of threads has hogged it all, but because all the threads with something to do are getting to do it. Happy days.

Then we check that we'll actually have this situation. If you have a multi-threaded approach using x threads that are each going to spend most of their time waiting on I/O, then they aren't going to follow the same pattern as your example. If performance is critical for that particular task, you might actually be looking for ways to restructure it so you can throw more threads at the problem, so there's more time when the CPU is doing something useful and less when every thread is waiting on something.

If we do find that CPU utilisation from the process is hurting everything, then we can do a few different things:

  1. Just use one thread. Is it actually important for this process to complete as fast as possible, over and above the considerations of all other processes? A lot of things we don't actually want this for. Pretty much most things really.

  2. Reduce the thread priority. Let's consider this a for-completion-only answer though. There are some pretty subtle risks with doing this which can end up with "priority inversion" (briefly a high-priority thread ends up waiting on a low-priority thread, which means that only the low-priority thread gets to run, and you get exactly the opposite relative priority in practice to what you wanted).

  3. Manually give up the CPU with Thread.Yield or Thread.Sleep. Though, if you're considering this, you have to ask: "what makes this any different from just arbitrarily introducing inefficiencies?". If you don't have a good answer, then again maybe single-threading makes better total use of your machine's CPUs than multi-threading does.

  4. Does it need to be running all the time? You say something about monitoring above. How rapid a response do you really need? If it takes 0.01 seconds to check all the things you are monitoring with your multi-threaded approach, and you'd be happy to know about a change 2 seconds after it happened, then your process is 200 times more responsive than it needs to be, at the expense of other processes. Kick things off from a timer instead. (And if a single thread takes 0.5 seconds to do it all itself, then again, why go multi-threaded?)
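The timer idea in point 4 can be sketched with System.Threading.Timer. The 2-second period and the CheckNodes callback name are illustrative assumptions, not from the original post:

```csharp
using System;
using System.Threading;

class MonitorDemo
{
    static void Main()
    {
        // Fire CheckNodes every 2 seconds; between ticks the process uses no CPU.
        using (var timer = new Timer(CheckNodes, null,
                                     dueTime: TimeSpan.Zero,
                                     period: TimeSpan.FromSeconds(2)))
        {
            Console.ReadLine();   // keep the process alive until Enter is pressed
        }
    }

    static void CheckNodes(object state)
    {
        // Placeholder for the actual monitoring work.
        Console.WriteLine($"checked at {DateTime.Now:HH:mm:ss}");
    }
}
```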

All of the above only considers the case where you are using multi-threading to make a particular task complete faster. It's worth noting that this is only a subset of the whole range of multi-threading patterns. For example, if you took the timer approach above but with a single thread then doing the work, but you did so in a process that is also doing other things, then this still counts as multi-threading; there's one thread doing that task and other threads doing other tasks and hopefully the over-all responsiveness is good.

The whole idea of multithreading is to get a job done sooner by using more computing resources (threads, which are distributed between cores), which leads to higher CPU utilization.

If you want to lower your CPU utilization, don't use multithreading; stick to a single thread. The program will run longer but consume less CPU (of course, there are lots of optimizations to reduce CPU footprint, but they're not about multithreading).

If you want to monitor 300 nodes in your network, that's another thing entirely. Your example is misleading here, because it is a compute-intensive task. Network monitoring is not compute-intensive; it consists of "request, wait, process response" loops, which parallelize well: even one CPU can effectively process the response from one node while waiting for the response from another. More than that, because a network wait is in fact an I/O wait, it can easily be offloaded to your OS so it doesn't consume CPU.
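As a sketch of that request-wait-process-response pattern, Ping.SendPingAsync hands the wait to the OS, so no thread spins while the replies are in flight. The addresses and the one-second timeout are placeholder assumptions (this needs C# 7.1+ for async Main):

```csharp
using System;
using System.Linq;
using System.Net.NetworkInformation;
using System.Threading.Tasks;

class PingMonitor
{
    static async Task Main()
    {
        string[] nodes = { "192.0.2.1", "192.0.2.2", "192.0.2.3" }; // placeholders

        // Start all pings at once; the awaits are I/O waits, not CPU spins.
        var tasks = nodes.Select(async host =>
        {
            using var ping = new Ping();
            PingReply reply = await ping.SendPingAsync(host, timeout: 1000);
            return (host, reply.Status, reply.RoundtripTime);
        });

        foreach (var (host, status, time) in await Task.WhenAll(tasks))
            Console.WriteLine($"{host}: {status} ({time} ms)");
    }
}
```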

There's a good chapter on threading (and I/O waits) in Richter's "CLR via C#, 3rd edition", which I highly recommend for solutions to your problem.

I read that there are 2 main different types of activity that your processor can be working on. I may have the wrong terminology in these statements; if someone can correct it, I'd be grateful.

  1. Computational Work (Computationally Bound): Where a pure computational task has been given to the processor and the processor can work on it without requiring any input from external devices or components.

  2. Input Based Work (I/O Bound): When your processor is working on something, but also needs to read or write something to the disk, or it needs to wait for Network activity. An example of this is reading a file from the disk, or downloading a file.

The main difference between them is that computational work is supposed to run without waits, in order to complete the task in the shortest possible time. E.g.:

for (int i = 0; i <= 10000; i++)
{

}

There is no interaction with any 'slow' parts of your system, so for something like this, you don't mind the computation draining CPU, because it's likely to be finished in the space of a microsecond anyway.

This is particularly important for things like mining bitcoins, or brute-forcing combinations.

You don't add 'Sleep' to these, because it will unnecessarily slow you down.

If, however, your workload is input-based, where it needs to read or write to your hard drive or network (activities which are slow in comparison to pure mathematical work), then adding Thread.Sleep(x) is not a bad thing, as sometimes your hard drive or RAM cannot yield data as fast as your processor would like.

Threading is particularly interesting for these 2 different workloads. For computational work, where you are expecting the thread to be running at 100% non-stop for a duration, you are best not exceeding your processor count with your thread count.

Eg: Environment.ProcessorCount

In fact, I would almost recommend working with a thread count of Environment.ProcessorCount - 1 (in the case of dual core or higher). Using 100% of all cores/processors can lead to thread contention, which can actually impede performance.

I experimented with this, and found that on a dual core system, I could do more loops/iterations with a single core, than using both cores fully.

If I am on a quad core, I find I can get more by using 3 vs all 4 fully.

(Not forgetting, one of those processors has to share the OS functionality, as well as rendering the windows form application GUI - if it has one)
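A minimal sketch of that rule of thumb, capping the worker count at ProcessorCount - 1; the Parallel.For workload and the Math.Max guard (for the single-core case) are illustrative assumptions:

```csharp
using System;
using System.Threading.Tasks;

class WorkerCount
{
    static void Main()
    {
        // Leave one core for the OS/UI; never go below one worker.
        int workers = Math.Max(1, Environment.ProcessorCount - 1);

        Parallel.For(0, 1000,
            new ParallelOptions { MaxDegreeOfParallelism = workers },
            i =>
            {
                // CPU-bound work goes here.
            });

        Console.WriteLine($"ran with up to {workers} worker(s)");
    }
}
```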

However, if you are developing an application that uses Input Based Computations, where it needs to interact with a lot of slow devices or dependencies, then exceeding your processor count might not be a bad thing.

e.g.: If each thread has a lot of Thread.Sleep calls within it, then you can strategically plan for your threads to sleep while other threads work.

I've done this in the past with a multi-threaded lab monitor, which was designed to monitor the status of lab machines at work. For each lab machine, a thread would run, but it only actually did work once every 10 minutes.

The original question didn't fully grasp the concept of multithreading.

Since your thread runs with no delay, the processor is occupied 100%.

Even if you create multiple tasks (say 100), all tasks will execute in parallel and processor utilization will remain at 100%.

Change this:

for (int i = 0; i < 10000000; i++) Console.Write("x");

Into this code:

for (int i = 0; i < 10000000; i++)
{
    Console.Write("x");
    Thread.Sleep(5);
}

Use

Thread.Sleep(x); //where x >= 0

or

Thread.Yield();
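A small sketch contrasting the two calls; the loop is illustrative, and the exact behaviour depends on the OS scheduler:

```csharp
using System;
using System.Threading;

class YieldVsSleep
{
    static void Main()
    {
        for (int i = 0; i < 5; i++)
        {
            Console.Write("x");

            Thread.Sleep(1);   // blocks for at least ~1 ms; any thread on any
                               // core may run in the meantime
            Thread.Yield();    // offers the rest of the time-slice to a thread
                               // that is ready to run on the same core;
                               // returns immediately if none is waiting
        }
    }
}
```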
