I'm building a prototype for a photobooth-style setup whose interface is an HTML page. I've managed to embed a video within a canvas element that uses my computer's integrated/external webcam to show the user's face or body, depending on their distance from the screen.
Problem: I need to eliminate the background so that ONLY the person's face/body is visible and the rest is transparent. The div housing this feed would then be overlaid on a background image, making it appear as if the person standing in front of the device is somewhere else (space, mountains, castles, etc., as illustrated in the UI) rather than in the actual room. What image-processing code can I use within this, and how can I achieve this effect?
The code I'm working with so far:
<div id="outerdiv">
<video id="video" autoplay></video>
<canvas id="canvas" >
<script>
// Put event listeners into place
window.addEventListener("DOMContentLoaded", function() {
  // Grab elements, create settings, etc.
  var canvas = document.getElementById("canvas"),
      context = canvas.getContext("2d"),
      video = document.getElementById("video"),
      videoObj = { video: true },
      errBack = function(error) {
        console.log("Video capture error: ", error);
      };
  // Put video listeners into place
  if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) { // Standard
    navigator.mediaDevices.getUserMedia(videoObj).then(function(stream) {
      video.srcObject = stream; // assigning the stream directly to src no longer works
      video.play();
    }).catch(errBack);
  }
  else if (navigator.webkitGetUserMedia) { // WebKit-prefixed legacy API
    navigator.webkitGetUserMedia(videoObj, function(stream) {
      video.src = window.webkitURL.createObjectURL(stream);
      video.play();
    }, errBack);
  }
}, false);
</script>
</canvas>
</div>
The effect would look something like this (the image is from the internet; the idea is to detect the person, eliminate the background, and replace the black area with a transparent region, all in a live video feed captured from the webcam):
I don't know if you've ever used the Photo Booth app on macOS, but it performs a similar operation. However, it first asks the user to step out of the scene so the program can capture a true background image; think of it as a calibration step. Afterwards it can do true background subtraction. This could really simplify your problem compared with frame-by-frame background subtraction, where you look for differences between subsequent frames, which is much more difficult.
So if you can do an "offline calibration" step, try that.
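The calibration-then-subtract idea can be sketched against the question's canvas setup. This is an illustration, not a complete solution: `keyOutBackground` is a hypothetical helper that compares each live pixel with a calibration frame captured while the scene was empty, and zeroes the alpha channel of pixels that match that background.

```javascript
// Compare a live RGBA frame against a calibration frame captured while the
// scene was empty, making "background" pixels fully transparent.
// framePixels and backgroundPixels are RGBA byte arrays, such as the .data
// property of the ImageData returned by context.getImageData().
function keyOutBackground(framePixels, backgroundPixels, threshold) {
  for (var i = 0; i < framePixels.length; i += 4) {
    var dr = framePixels[i]     - backgroundPixels[i];
    var dg = framePixels[i + 1] - backgroundPixels[i + 1];
    var db = framePixels[i + 2] - backgroundPixels[i + 2];
    // A pixel whose colour is close to the calibration frame is treated
    // as background.
    if (dr * dr + dg * dg + db * db < threshold * threshold) {
      framePixels[i + 3] = 0; // alpha = 0: fully transparent
    }
  }
  return framePixels;
}
```

Per frame you would draw the video onto the canvas, run the pixels through this function, and write them back, roughly: `context.drawImage(video, 0, 0); var img = context.getImageData(0, 0, canvas.width, canvas.height); keyOutBackground(img.data, background, 40); context.putImageData(img, 0, 0);`, where `background` was captured the same way while nobody was in front of the camera. Note that a plain per-pixel threshold like this is noisy in practice (shadows, auto-exposure drift), so real implementations typically add blurring, morphological cleanup, or a statistical background model on top.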