
Eye separation using WebVR

I'm trying to build a WebVR environment using Three.js. I exported some scenes from Cinema4D and loaded them with Three.js's ColladaLoader. Now I want to try this environment in my Google Cardboard, but of course I need a split screen, one view per eye.

I used the npm module three-stereo-effect to achieve the VR effect, but the two views overlap when I view them in a Cardboard. I looked it up and saw that a lot of WebVR examples render a rounded rectangle for each eye (example of what I mean), not a straight rectangle, so I thought I needed to find the right matrices to fix that (looking at the examples in this repository). But then I downloaded a VR tunnel-racing game, saw that it used straight rectangles, and the vision was fine.

Now I'm thinking the eyeSeparation of my stereo effect is incorrect. I saw someone set the eyeSeparation property on the StereoEffect module and tried that out, but I don't think I should just be guessing a value...

Am I on the right track here, or am I looking in totally the wrong direction as to why my 3D scene doesn't look right in a Cardboard?

This is the code I'm experimenting with.

import {sets} from './data/';

import * as THREE from 'three';
import threeOrbitControls from 'three-orbit-controls';
import ColladaLoader from 'three-collada-loader';
import threeStereoEffect from 'three-stereo-effect';

import {BufferLoader} from './modules/sound';
import {SpawnObject} from './modules/render';

const OrbitControls = threeOrbitControls(THREE);
const StereoEffect = threeStereoEffect(THREE);

let scene, camera, renderer;
let audioCtx, bufferLoader;

const notes = [];
let stereoEffect = null;

const init = () => {
  window.AudioContext = window.AudioContext || window.webkitAudioContext;

  audioCtx = new AudioContext();
  bufferLoader = new BufferLoader(audioCtx);

  bufferLoader.load(sets.drums)
    .then(data => spawnObject(data));


  initEnvironment();

};

const spawnObject = data => {

  for (let i = 0; i < 5; i++) {
    const bol = new SpawnObject(`object.dae`, audioCtx, data[0], scene, false);
    notes.push(bol);
  }

  // console.log(notes);
};

const initEnvironment = () => {

  scene = new THREE.Scene();
  camera = new THREE.PerspectiveCamera(
    45, window.innerWidth / window.innerHeight,
    1, 10000
  );

  renderer = new THREE.WebGLRenderer();
  renderer.setSize(window.innerWidth, window.innerHeight);

  stereoEffect = new StereoEffect(renderer);
  // stereoEffect.eyeSeparation = 1;
  stereoEffect.setSize(window.innerWidth, window.innerHeight);

  console.log(stereoEffect);

  document.querySelector(`main`).appendChild(renderer.domElement);

  camera.position.set(0, 0, 2);
  camera.lookAt(scene.position);

  new OrbitControls(camera);

  //LIGHTS
  const light = new THREE.PointLight(0xFFFFFF);
  light.position.set(0, 0, 9);
  light.castShadow = true;
  light.shadow.mapSize.width = 1024;
  light.shadow.mapSize.height = 1024;
  light.shadow.camera.near = 10;
  light.shadow.camera.far = 100;
  scene.add(light);

  // const hemiLight = new THREE.HemisphereLight(0xffffff, 0xffffff, 0.6);
  // hemiLight.color.setHSL(0.6, 1, 0.6);
  // hemiLight.groundColor.setHSL(0.095, 1, 0.75);
  // hemiLight.position.set(0, 500, 0);
  // scene.add(hemiLight);
  //
  // const dirLight = new THREE.DirectionalLight(0xffffff, 1);
  // dirLight.color.setHSL(0.1, 1, 0.95);
  // dirLight.position.set(- 1, 1.75, 1);
  // dirLight.position.multiplyScalar(50);
  // scene.add(dirLight);
  // dirLight.castShadow = true;


  //FLOOR
  const matFloor = new THREE.MeshPhongMaterial();
  const geoFloor = new THREE.BoxGeometry(2000, 1, 2000);
  const mshFloor = new THREE.Mesh(geoFloor, matFloor);

  matFloor.color.set(0x212E39);
  mshFloor.receiveShadow = true;
  mshFloor.position.set(0, - 1, 0);

  scene.add(mshFloor);


  //ENVIRONMENT
  const loader = new ColladaLoader();

  loader.load(`../assets/environment.dae`, collada => {
    collada.scene.traverse(child => {
      child.castShadow = true;
      child.receiveShadow = true;
    });

    scene.add(collada.scene);
    render();
  });
};

const render = () => {

  // stereoEffect.render(scene, camera);
  // effect.render(scene, camera);

  // NOTE: the settings below are one-time renderer configuration and could
  // be moved out of the render loop, e.g. into initEnvironment()
  renderer.shadowMap.enabled = true;
  renderer.shadowMap.type = THREE.PCFSoftShadowMap;

  renderer.gammaInput = true;
  renderer.gammaOutput = true;

  renderer.setClearColor(0xdddddd, 1);
  stereoEffect.render(scene, camera);

  requestAnimationFrame(render);
};



init();

From the PDF "Scalable Multi-view Stereo Camera Array for Real World Real-Time Image Capture and Three-Dimensional Displays":

2.1.1 Binocular Disparity

Binocular disparity is the positional difference between the two retinal projections of a given point in space. This positional difference results from the fact that the two eyes are laterally separated and therefore see the world from two slightly different vantage points. For the average person, the mean lateral separation, also known as the interocular distance, is 65 mm. Most of the population has an eye separation within ±10 mm of this average.
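
Converting that 65 mm figure into a value for eyeSeparation depends entirely on your scene's world scale. Here is a minimal sketch, assuming the module's eyeSeparation is expressed in world units and that one world unit in your exported scene equals one meter (both are assumptions; check them against your Cinema4D export):

const METERS_PER_WORLD_UNIT = 1;  // assumption: scene is modeled in meters
const INTEROCULAR_M = 0.065;      // average human interocular distance

// eyeSeparation in world units; with a 1 cm world unit this would be 6.5
stereoEffect.eyeSeparation = INTEROCULAR_M / METERS_PER_WORLD_UNIT;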

It would seem that, with a little testing on friends with a variety of face shapes, you will find a happy average for the eyeSeparation value for the device and the people using it. I would then also provide a settings panel that offers a few different eyeSeparation values for users to choose from if they notice disparity or overlap in their stereo experience. Normally I think this would be dialed in with a keyboard connected to the same system, but you're in Cardboard, so the user may need some trial and error to get it right; see the sketch below.
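
A rough sketch of that idea on Cardboard, where there is no keyboard: cycle through a few preset values on a screen tap. The preset numbers below are placeholders to refine by trial and error:

const EYE_SEPARATION_PRESETS = [0.055, 0.065, 0.075]; // placeholder values
let presetIndex = 1; // start at the 65 mm average

const applyEyeSeparation = () => {
  stereoEffect.eyeSeparation = EYE_SEPARATION_PRESETS[presetIndex];
};

// tap the screen to cycle to the next preset
document.addEventListener(`touchend`, () => {
  presetIndex = (presetIndex + 1) % EYE_SEPARATION_PRESETS.length;
  applyEyeSeparation();
});

applyEyeSeparation();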
