Three.js: Cannot display mesh created with texture array
I'm building a very primitive game based on cubes you place in a sandbox world (a totally unique concept that will revolutionize gaming as we know it), and I'm currently working on chunk generation. This is what I have so far:

My blocks are defined in an object literal:
import * as THREE from 'three';

const loader = new THREE.TextureLoader();

interface BlockAttrs
{
    breakable: boolean;
    empty: boolean;
}

export interface Block
{
    attrs: BlockAttrs;
    mat_bottom?: THREE.MeshBasicMaterial;
    mat_side?: THREE.MeshBasicMaterial;
    mat_top?: THREE.MeshBasicMaterial;
}

interface BlockList
{
    [key: string]: Block;
}

export const Blocks: BlockList = {
    air:
    {
        attrs:
        {
            breakable: false,
            empty: true,
        },
    },
    grass:
    {
        attrs:
        {
            breakable: true,
            empty: false,
        },
        mat_bottom: new THREE.MeshBasicMaterial({map: loader.load("/tex/dirt.png")}),
        mat_side: new THREE.MeshBasicMaterial({map: loader.load("/tex/grass-side.png")}),
        mat_top: new THREE.MeshBasicMaterial({map: loader.load("/tex/grass-top.png")}),
    },
};
And this is my Chunk class:
import * as THREE from 'three';
import { BufferGeometryUtils } from 'three/examples/jsm/utils/BufferGeometryUtils';
import { Block, Blocks } from './blocks';

const px = 0; const nx = 1; const py = 2;
const ny = 3; const pz = 4; const nz = 5;

export default class Chunk
{
    private static readonly faces = [
        new THREE.PlaneGeometry(1, 1)
            .rotateY(Math.PI / 2)
            .translate(0.5, 0, 0),
        new THREE.PlaneGeometry(1, 1)
            .rotateY(-Math.PI / 2)
            .translate(-0.5, 0, 0),
        new THREE.PlaneGeometry(1, 1)
            .rotateX(-Math.PI / 2)
            .translate(0, 0.5, 0),
        new THREE.PlaneGeometry(1, 1)
            .rotateX(Math.PI / 2)
            .translate(0, -0.5, 0),
        new THREE.PlaneGeometry(1, 1)
            .translate(0, 0, 0.5),
        new THREE.PlaneGeometry(1, 1)
            .rotateY(Math.PI)
            .translate(0, 0, -0.5)
    ];

    private structure: Array<Array<Array<Block>>>;
    public static readonly size = 16;
    private materials = Array<THREE.MeshBasicMaterial>();
    private terrain = Array<THREE.BufferGeometry>();

    constructor ()
    {
        this.structure = new Array<Array<Array<Block>>>(Chunk.size);
        for (let x = 0; x < Chunk.size; x++)
        {
            this.structure[x] = new Array<Array<Block>>(Chunk.size);
            for (let y = 0; y < Chunk.size; y++)
            {
                this.structure[x][y] = new Array<Block>(Chunk.size);
                for (let z = 0; z < Chunk.size; z++)
                    if ((x + y + z) % 2)
                        this.structure[x][y][z] = Blocks.grass;
                    else
                        this.structure[x][y][z] = Blocks.air;
            }
        }
    }

    private blockEmpty (x: number, y: number, z: number): boolean
    {
        let empty = true;
        if (
            x >= 0 && x < Chunk.size &&
            y >= 0 && y < Chunk.size &&
            z >= 0 && z < Chunk.size
        ) {
            empty = this.structure[x][y][z].attrs.empty;
        }
        return empty;
    }

    private generateBlockFaces (x: number, y: number, z: number): void
    {
        if (this.blockEmpty(x+1, y, z))
        {
            this.terrain.push(Chunk.faces[px].clone().translate(x, y, z));
            this.materials.push(this.structure[x][y][z].mat_side);
        }
        if (this.blockEmpty(x, y, z+1))
        {
            this.terrain.push(Chunk.faces[nx].clone().translate(x, y, z));
            this.materials.push(this.structure[x][y][z].mat_side);
        }
        if (this.blockEmpty(x, y-1, z))
        {
            this.terrain.push(Chunk.faces[py].clone().translate(x, y, z));
            this.materials.push(this.structure[x][y][z].mat_bottom);
        }
        if (this.blockEmpty(x, y+1, z))
        {
            this.terrain.push(Chunk.faces[ny].clone().translate(x, y, z));
            this.materials.push(this.structure[x][y][z].mat_top);
        }
        if (this.blockEmpty(x, y, z-1))
        {
            this.terrain.push(Chunk.faces[pz].clone().translate(x, y, z));
            this.materials.push(this.structure[x][y][z].mat_side);
        }
        if (this.blockEmpty(x-1, y, z))
        {
            this.terrain.push(Chunk.faces[nz].clone().translate(x, y, z));
            this.materials.push(this.structure[x][y][z].mat_side);
        }
    }

    public generateTerrain (): THREE.Mesh
    {
        this.terrain = new Array<THREE.BufferGeometry>();
        this.materials = new Array<THREE.MeshBasicMaterial>();
        for (let x = 0; x < Chunk.size; x++)
            for (let y = 0; y < Chunk.size; y++)
                for (let z = 0; z < Chunk.size; z++)
                    if (!this.structure[x][y][z].attrs.empty)
                        this.generateBlockFaces(x, y, z);
        return new THREE.Mesh(
            BufferGeometryUtils.mergeBufferGeometries(this.terrain),
            this.materials
        );
    }
}
I know the mesh creation should be separated from the model, but for now I'm experimenting. The class works like this:

First, the constructor() creates a 3D matrix of Block. I've set it up to fill the matrix with a checkerboard pattern of air and grass, so every other block is empty.

Next, I call generateTerrain() from my scene:
this.chunk = new Chunk();
this.add(this.chunk.generateTerrain());
When called, this method goes into generateBlockFaces for each non-empty block, pushing the appropriate PlaneGeometrys onto the terrain array and the appropriate THREE.MeshBasicMaterials onto the materials array. I then merge the geometries with BufferGeometryUtils.mergeBufferGeometries and create the mesh from the merged geometry and the materials array.

The problem I'm running into is that creating the mesh works fine when I pass new THREE.MeshNormalMaterial (or any other single material), but not when I pass the materials array. Passing the array does create the object (and console.log shows it is created without errors), but it is not drawn in the scene.

Am I wrong in assuming that the materials array will assign a material to each face? What am I doing wrong?
I solved it after finding a reference to THREE.UVMapping in the documentation. When the geometry is sent to the GPU, the texture coordinates need to map one-to-one onto the vertex coordinates. To do this, I defined the following three attributes in my blocks:
uv_bottom: [
    stone_row / Textures.rows, (stone_col+1) / Textures.cols,
    (stone_row+1) / Textures.rows, (stone_col+1) / Textures.cols,
    stone_row / Textures.rows, stone_col / Textures.cols,
    (stone_row+1) / Textures.rows, stone_col / Textures.cols,
],
uv_side: [
    stone_row / Textures.rows, (stone_col+1) / Textures.cols,
    (stone_row+1) / Textures.rows, (stone_col+1) / Textures.cols,
    stone_row / Textures.rows, stone_col / Textures.cols,
    (stone_row+1) / Textures.rows, stone_col / Textures.cols,
],
uv_top: [
    stone_row / Textures.rows, (stone_col+1) / Textures.cols,
    (stone_row+1) / Textures.rows, (stone_col+1) / Textures.cols,
    stone_row / Textures.rows, stone_col / Textures.cols,
    (stone_row+1) / Textures.rows, stone_col / Textures.cols,
],
Textures.rows and Textures.cols refer to the number of rows and columns in my texture atlas (a file with all the textures stored in a PNG grid), and each block has its own row and col fields giving its position in that grid. I then added a private uv = Array<Array<number>>(); to my Chunk class and modified the terrain generator to push each block's UV arrays onto it. For example, this is how the positive-z face is handled (note that I swapped y and z for efficiency):
if (this.blockEmpty(x, z+1, y))
{
    this.terrain.push(Chunk.faces[pz].clone().translate(x, y, z));
    this.uv.push(this.structure[x][z][y].uv_side);
}
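The three uv_* arrays above all follow the same pattern, so the mapping from an atlas cell to UV coordinates can be factored into a small helper. This is just a sketch of that pattern; the name atlasUV and the sample atlas size are my own assumptions, not part of the code above:

```typescript
// Hypothetical helper: computes the 4 UV pairs (8 numbers) for one face
// whose texture sits at (row, col) in a rows x cols atlas. The vertex
// order matches the per-face arrays above.
function atlasUV(row: number, col: number, rows: number, cols: number): number[] {
    return [
        row / rows, (col + 1) / cols,
        (row + 1) / rows, (col + 1) / cols,
        row / rows, col / cols,
        (row + 1) / rows, col / cols,
    ];
}

// e.g. a 3x2 atlas with a tile at row 0, column 0:
const uv_side = atlasUV(0, 0, 3, 2);
```

With a helper like this, each block only needs to store its row and col, and the three uv_* attributes reduce to three calls.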
Now, BufferGeometry only accepts typed arrays for the 'uv' attribute (a Float32Array in this case), so I had to construct one from a flattened version of this.uv. This is what the terrain generator function looks like now:
public generateTerrain (): THREE.Mesh
{
    this.terrain = new Array<THREE.BufferGeometry>();
    for (let x = 0; x < Chunk.base; x++)
        for (let z = 0; z < Chunk.base; z++)
            for (let y = 0; y < Chunk.build_height; y++)
                if (!this.structure[x][z][y].attrs.empty)
                    this.generateBlockFaces(x, z, y);
    const geometry = BufferGeometryUtils.mergeBufferGeometries(this.terrain);
    geometry.setAttribute('uv', new THREE.BufferAttribute(new Float32Array(this.uv.flat()), 2));
    return new THREE.Mesh(geometry, Textures.material);
}
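The 'uv' attribute has two components per vertex, and each PlaneGeometry face has four vertices, so the flattened array must contain 8 numbers per face. A minimal standalone sketch of just the flattening step (the UV values here are made up for illustration):

```typescript
// Each entry holds one face's 8 UV numbers (4 vertices x 2 components),
// mirroring the per-face arrays pushed in generateBlockFaces.
const uv: number[][] = [
    [0, 0.5, 1 / 3, 0.5, 0, 0, 1 / 3, 0],   // face 1 (made-up values)
    [0, 1, 1 / 3, 1, 0, 0.5, 1 / 3, 0.5],   // face 2
];

// Flatten to a single Float32Array, as BufferAttribute requires.
const flat = new Float32Array(uv.flat());

console.log(flat.length);       // 16: 2 faces x 4 vertices x 2 components
console.log(flat.length / 2);   // 8 vertices' worth of UVs
```

If the geometry and UV counts ever disagree (e.g. flat.length / 2 does not match the merged geometry's vertex count), the texture mapping will be silently wrong, so it is worth asserting that invariant during development.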
As you can see, the material I use comes from a Textures class. Here is the whole imported file:
import * as THREE from 'three';

export default class Textures
{
    private static readonly loader = new THREE.TextureLoader();
    public static readonly rows = 3;
    public static readonly cols = 2;
    public static readonly atlas = Textures.loader.load("/tex/atlas.png");
    public static readonly material = new THREE.MeshBasicMaterial({map: Textures.atlas});
}

Textures.atlas.magFilter = THREE.NearestFilter;
Textures.atlas.minFilter = THREE.NearestFilter;
And that's it: the terrain now renders with each block's texture, and I'm very happy about it :D
For anyone who wants to use a material array:
const materials = [
new THREE.MeshBasicMaterial( { color: 'red' } ),
new THREE.MeshBasicMaterial( { color: 'blue' } )
];
const geometries = [
new THREE.PlaneGeometry( 1, 1 ),
new THREE.PlaneGeometry( 1, 1 )
];
geometries[ 1 ].rotateX( Math.PI * -0.5 );
// Add groups that set the materialIndex of each vertex in a group
// For this code all the vertices in each geometry have the same material.
// If you load in a model that already has multiple textures you don't need to do this.
geometries[ 0 ].addGroup( 0, geometries[0].attributes.position.count, 0 );
geometries[ 1 ].addGroup( 0, geometries[1].attributes.position.count, 1 );
// Setting true on the second argument enables groups for the merged geometry.
const mergedGeometry = BufferGeometryUtils.mergeBufferGeometries( geometries, true );
const multiMaterialMesh = new THREE.Mesh( mergedGeometry, materials );
scene.add( multiMaterialMesh );