
How to get better performance out of GPIO pins on a Raspberry Pi in Windows IoT Core

I have the following C# code running on a Raspberry Pi 3 under Windows 10 IoT Core.

GPIO init ....

_gpioController = GpioController.GetDefault();
_motorPin = _gpioController.OpenPin(Convert.ToInt32(RaspberryGPIOpin));
_motorPin.SetDriveMode(GpioPinDriveMode.Output);

GPIO pin on/off

_motorPin.Write(GpioPinValue.High);
_motorPin.Write(GpioPinValue.Low);

The problem is that in my application turning the GPIO pin on and off takes 100 milliseconds, but I need it to happen in less than 25 milliseconds in order to change the direction of a servo.

Is there a way to speed up turning the GPIO pins on and off?

Or should I be looking at a hardware controller of some sort to control the servo? I would prefer not to do this. My code is also running in a separate thread; should I remove the threading?

I have a much simpler application where the code does work: https://github.com/StuartSmith/RaspberryPi-Control-Sg90-Example

Microsoft provides complete test results for toggling a GPIO bit on a Raspberry Pi 2; see https://developer.microsoft.com/en-us/windows/iot/docs/lightningperformance .

As you can see, the results vary with the IoT Core version, driver model, .NET Native toolchain, and even the programming language, but even in the worst case approximately 10 kHz can be achieved.

I haven't tested the latest IoT Core Redstone 1 release, but I'm guessing it should have performance similar to TH2.

So, in general, choose the Lightning driver over the default inbox driver; it's supposed to have better performance on GPIO ports.
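For example, here is a minimal sketch of opting into the Lightning (DMAP) driver before any pins are opened. It assumes the Microsoft.IoT.Lightning NuGet package is referenced and the "Direct Memory Mapped Driver" is selected in Device Portal; otherwise it falls back to the inbox driver:

using Windows.Devices;
using Windows.Devices.Gpio;
using Microsoft.IoT.Lightning.Providers;

// Call this once at startup, before OpenPin().
public static GpioController GetGpioController()
{
    if (LightningProvider.IsLightningEnabled)
    {
        // Route all Windows.Devices.* APIs through the Lightning (DMAP) provider.
        LowLevelDevicesController.DefaultProvider = LightningProvider.GetAggregateProvider();
    }
    // With the provider set, GetDefault() returns the Lightning-backed controller;
    // otherwise it returns the inbox-driver controller as before.
    return GpioController.GetDefault();
}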

Disabling the .NET Native toolchain is also reported to give noticeably better performance.

I see you're using the GPIO pin to drive a servo; software timing should be good enough in this case. However, if you want to use it as a clock source that needs high precision, don't rely on software timing (the Lightning provider); software jitter is always unpredictable. One good alternative is to use the built-in DMA controller, which uses hardware timing and should have precision within 1 microsecond.
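If you do go the hardware-timed route for the servo, one option is the Lightning provider's PWM controller instead of bit-banging the pin. The sketch below is only an outline under assumptions: the pin number (5), the controller index, and the duty-cycle values are placeholders to adjust for your wiring and servo:

using System;
using System.Threading.Tasks;
using Windows.Devices;
using Windows.Devices.Pwm;
using Microsoft.IoT.Lightning.Providers;

public static async Task DriveServoAsync()
{
    if (!LightningProvider.IsLightningEnabled)
        throw new InvalidOperationException("Enable the Direct Memory Mapped (Lightning) driver first.");

    LowLevelDevicesController.DefaultProvider = LightningProvider.GetAggregateProvider();

    var controllers = await PwmController.GetControllersAsync(LightningPwmProvider.GetPwmProvider());
    var pwm = controllers[1];            // on-SoC controller in the Lightning samples; index may differ
    pwm.SetDesiredFrequency(50);         // standard 50 Hz (20 ms) servo frame

    var servoPin = pwm.OpenPin(5);       // hypothetical GPIO 5
    servoPin.SetActiveDutyCyclePercentage(0.075); // ~1.5 ms pulse: centre position
    servoPin.Start();
    await Task.Delay(1000);

    servoPin.SetActiveDutyCyclePercentage(0.05);  // ~1.0 ms pulse: one end of travel
    await Task.Delay(1000);
    servoPin.Stop();
}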

How did you measure 100ms? Is 100ms how long it took to run the two Write() calls, or was there other code in there?

We measured each Write() call at 3.6 microseconds (from a C++ app).

The first Write() call after you open a pin and set drive mode may take longer than subsequent calls due to the way the underlying stack works. Did your measurement include the first Write() call?
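If it helps, here is a rough way to time just the Write() calls with a high-resolution Stopwatch, excluding OpenPin()/SetDriveMode() and warming up past that first slow write; the iteration count is arbitrary:

using System.Diagnostics;
using Windows.Devices.Gpio;

public static void MeasureToggle(GpioPin pin, int iterations = 1000)
{
    // Warm up: the first Write() after OpenPin()/SetDriveMode() can be slower.
    pin.Write(GpioPinValue.Low);

    var sw = Stopwatch.StartNew();
    for (int i = 0; i < iterations; i++)
    {
        pin.Write(GpioPinValue.High);
        pin.Write(GpioPinValue.Low);
    }
    sw.Stop();

    double usPerWrite = sw.Elapsed.TotalMilliseconds * 1000.0 / (iterations * 2);
    Debug.WriteLine($"Average per Write(): {usPerWrite:F2} us");
}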
