I'm porting some of my OpenGL code to WebGL, and the fact that JavaScript doesn't have genuine arrays is sad. I can use Float32Array (and the other ArrayBuffer view types), but that doesn't seem to help performance.
As an experiment to compare Array vs Float32Array vs Float64Array performance, I timed bubble sort on 100000 floats to see if there was any difference:
function bubbleSort(array) {
    var N = array.length;
    for (var i = 0; i < N; i++)
        for (var j = 0; j < N - 1 - i; j++)  // compare adjacent pairs; the largest value bubbles to the end
            if (array[j] > array[j + 1]) {
                var tmp = array[j];
                array[j] = array[j + 1];
                array[j + 1] = tmp;
            }
}
// var nums = new Array(100000);        // regular 'JS' array
// var nums = new Float32Array(100000); // actual buffer of 32-bit floats
var nums = new Float64Array(100000);    // actual buffer of 64-bit floats

for (var i = 0; i < nums.length; i++)
    nums[i] = Math.random() * 1000;

bubbleSort(nums);

for (var i = 0; i < nums.length; i++)
    console.log(nums[i]);
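For reference, the experiment above can be packaged as a small timing harness. This is just a sketch: the helper name timeSort is mine, it uses Date.now() for rough wall-clock timing, and N is kept small here (the original run used 100000, which takes a long time with bubble sort):

```javascript
// Minimal timing harness (sketch). bubbleSort is repeated so the snippet
// is self-contained; timeSort fills an array, sorts it, and logs the time.
function bubbleSort(array) {
    var N = array.length;
    for (var i = 0; i < N; i++)
        for (var j = 0; j < N - 1 - i; j++)
            if (array[j] > array[j + 1]) {
                var tmp = array[j];
                array[j] = array[j + 1];
                array[j + 1] = tmp;
            }
}

function timeSort(makeArray, label, N) {
    var nums = makeArray(N);
    for (var i = 0; i < N; i++)
        nums[i] = Math.random() * 1000;
    var start = Date.now();
    bubbleSort(nums);
    console.log(label + ': ' + (Date.now() - start) + ' ms');
    return nums;
}

var N = 2000; // raise to 100000 to reproduce the original experiment
timeSort(function (n) { return new Array(n); },        'Array',        N);
timeSort(function (n) { return new Float32Array(n); }, 'Float32Array', N);
timeSort(function (n) { return new Float64Array(n); }, 'Float64Array', N);
```

Running the three variants back to back in one process does let JIT warm-up and type feedback from one run affect the next, so for serious numbers each variant should be timed in a fresh page or process.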
Not much difference. The compiler would need some static type information for the array argument of bubbleSort to generate genuinely fast code. Are we just stuck with bad array performance in JS? Is there any way around this, short of using asm.js?
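To make the asm.js aside concrete: asm.js conveys exactly that static type information through annotations like x|0 (int32) and +x (double). A minimal sketch (the module name AsmSketch is mine; a non-validating engine just runs this as ordinary JavaScript):

```javascript
// Sketch of asm.js-style type annotations. The heap is a raw ArrayBuffer
// viewed as Float64Array; indices are in bytes, so i >> 3 selects a double.
function AsmSketch(stdlib, foreign, heap) {
    "use asm";
    var f64 = new stdlib.Float64Array(heap);
    function get(i) {
        i = i | 0;           // parameter annotated as int32
        return +f64[i >> 3]; // unary + annotates the return as double
    }
    return { get: get };
}
```

With those annotations a validating engine can compile get ahead of time with fixed types, which is precisely the static information the plain bubbleSort above lacks.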
You should see this answer: What is the performance of Objects/Arrays in JavaScript? (specifically for Google V8)
In particular the following points:
So yes, it looks like array indexing is slow in JS. If it's true that array writes come out faster than reads in the performance test, then the dynamic allocator probably isn't picky about where it puts the data (first free slot?), whereas reads have to chase many scattered memory addresses and so end up far slower.
I'd be interested to see whether declaring the array with literal syntax, [0, 1, 2, ..., 100000], would perform any better.
(I'll set up a jsPerf if I have the time.)