
Face Detection in UWP

I am trying to use the sample code (Face Detection) that Microsoft provides on the official documentation page.

I am trying to develop a UWP application that detects faces in video.


Problem

Some methods do not seem to exist in the source code and the IDE marks them as

Cannot resolve symbol 'GetLatestFrame'

Cannot resolve symbol 'ProcessNextFrameAsync'

Cannot resolve symbol 'SetupVisualization'


Source Code

using System;
using System.Collections.Generic;
using System.Threading;
using Windows.Foundation;
using Windows.Graphics.Imaging;
using Windows.Media;
using Windows.Media.FaceAnalysis;
using Windows.System.Threading;
using Windows.UI.Xaml.Controls;

// The Blank Page item template is documented at https://go.microsoft.com/fwlink/?LinkId=402352&clcid=0x409

namespace Network
{
    /// <summary>
    /// An empty page that can be used on its own or navigated to within a Frame.
    /// </summary>
    public sealed partial class Network : Page
    {

        private IAsyncOperation<FaceTracker> faceTracker;
        private ThreadPoolTimer frameProcessingTimer;
        private SemaphoreSlim frameProcessingSemaphore = new SemaphoreSlim(1);

        public Network()
        {
            this.InitializeComponent();

            this.faceTracker = FaceTracker.CreateAsync();
            TimeSpan timerInterval = TimeSpan.FromMilliseconds(66); // 15 fps
            this.frameProcessingTimer = Windows.System.Threading.ThreadPoolTimer.CreatePeriodicTimer(new Windows.System.Threading.TimerElapsedHandler(ProcessCurrentVideoFrame), timerInterval);


        }

        public async void ProcessCurrentVideoFrame(ThreadPoolTimer timer)
        {
            if (!frameProcessingSemaphore.Wait(0))
            {
                return;
            }

            VideoFrame currentFrame = await GetLatestFrame();

            // Use FaceDetector.GetSupportedBitmapPixelFormats and IsBitmapPixelFormatSupported to dynamically
            // determine supported formats
            const BitmapPixelFormat faceDetectionPixelFormat = BitmapPixelFormat.Nv12;

            if (currentFrame.SoftwareBitmap.BitmapPixelFormat != faceDetectionPixelFormat)
            {
                return;
            }

            try
            {
                IList<DetectedFace> detectedFaces = await faceTracker.ProcessNextFrameAsync(currentFrame);

                var previewFrameSize = new Windows.Foundation.Size(currentFrame.SoftwareBitmap.PixelWidth, currentFrame.SoftwareBitmap.PixelHeight);
                var ignored = this.Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
                {
                    this.SetupVisualization(previewFrameSize, detectedFaces);
                });
            }
            catch (Exception e)
            {
                // Face tracking failed
            }
            finally
            {
                frameProcessingSemaphore.Release();
            }

            currentFrame.Dispose();
        }

    }
}

Question

Did I miss adding the methods that are provided on the documentation page?

Do I need to add the methods manually? Do I need to create another class?

Answer

GetLatestFrame() and SetupVisualization() are custom methods that you have to write yourself. GetLatestFrame() is used to get a video frame; you could refer to the documentation on getting a frame from the MediaCapture preview stream.
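
For illustration, here is a minimal sketch of what GetLatestFrame() could look like, assuming a mediaCapture field that has already been initialized and started previewing, and a videoProperties field holding the preview stream's VideoEncodingProperties (both field names are placeholders, not from the original question). It needs Windows.Media.Capture, Windows.Media.MediaProperties and System.Threading.Tasks in addition to the namespaces already used in the question.

// Assumed fields on the page class (hypothetical names):
// private MediaCapture mediaCapture;               // initialized and previewing elsewhere
// private VideoEncodingProperties videoProperties; // preview stream properties

private async Task<VideoFrame> GetLatestFrame()
{
    // FaceTracker expects Nv12 frames, so allocate the destination frame in that format.
    VideoFrame previewFrame = new VideoFrame(
        BitmapPixelFormat.Nv12,
        (int)this.videoProperties.Width,
        (int)this.videoProperties.Height);

    // Copy the most recent camera preview frame into the buffer.
    await this.mediaCapture.GetPreviewFrameAsync(previewFrame);

    return previewFrame;
}

The caller is responsible for disposing the returned VideoFrame, which the question's ProcessCurrentVideoFrame already does.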

SetupVisualization() creates the visualization using the frame dimensions and face results.

// Note: VisualizationCanvas is a Canvas defined in the page's XAML, and currentState,
// ScenarioState and HighlightedFaceBoxStyle come from the official sample's page class.
// Rectangle and Thickness require Windows.UI.Xaml.Shapes and Windows.UI.Xaml respectively.
private void SetupVisualization(Windows.Foundation.Size framePixelSize, IList<DetectedFace> foundFaces)
{
    this.VisualizationCanvas.Children.Clear();

    if (this.currentState == ScenarioState.Streaming && framePixelSize.Width != 0.0 && framePixelSize.Height != 0.0)
    {
        double widthScale = this.VisualizationCanvas.ActualWidth / framePixelSize.Width;
        double heightScale = this.VisualizationCanvas.ActualHeight / framePixelSize.Height;

        foreach (DetectedFace face in foundFaces)
        {
            // Create a rectangle element for displaying the face box, but since we're using a Canvas
            // we must scale the rectangle according to the frame's actual size.
            Rectangle box = new Rectangle()
            {
                Width = face.FaceBox.Width * widthScale,
                Height = face.FaceBox.Height * heightScale,
                Margin = new Thickness(face.FaceBox.X * widthScale, face.FaceBox.Y * heightScale, 0, 0),
                Style = HighlightedFaceBoxStyle
            };
            this.VisualizationCanvas.Children.Add(box);
        }
    }
}

As for ProcessNextFrameAsync(), it is a method of the FaceTracker class in the Windows.Media.FaceAnalysis namespace; it asynchronously processes a video frame for face detection. The compiler cannot resolve it in your code because the faceTracker field is declared as IAsyncOperation<FaceTracker> rather than FaceTracker: you need to await FaceTracker.CreateAsync() and store the resulting FaceTracker instance.
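
A minimal sketch of how the field and its initialization could be changed so the call resolves (InitializeFaceTrackingAsync is a hypothetical helper name; the official sample does this in its own async setup method):

// Store the awaited FaceTracker instance, not the IAsyncOperation<FaceTracker>.
private FaceTracker faceTracker;

private async void InitializeFaceTrackingAsync()
{
    this.faceTracker = await FaceTracker.CreateAsync();

    TimeSpan timerInterval = TimeSpan.FromMilliseconds(66); // roughly 15 fps
    this.frameProcessingTimer = ThreadPoolTimer.CreatePeriodicTimer(
        new TimerElapsedHandler(ProcessCurrentVideoFrame), timerInterval);
}

With a FaceTracker field, the call in ProcessCurrentVideoFrame compiles as written:

IList<DetectedFace> detectedFaces = await this.faceTracker.ProcessNextFrameAsync(currentFrame);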

It's recommended to start from the BasicFaceDetection official sample to do face detection; you could try it.
