
Understanding offscreen canvas to improve performance

Hey everyone, I have a fairly complex canvas editor that lets a user pick a video background and add text, GIFs, and Lottie animations, using the Konva.js and Gifler libraries. It has come a long way; however, I am now trying to speed up the performance of my canvas application. I've read a lot about offscreen canvas but I don't quite understand it. Say I have a regular HTML canvas element: how would I create an offscreen canvas from it and get the result back out to the browser? I would ideally like to be able to grab images from the canvas into an array at 30 fps with no latency. I'm also concerned that OffscreenCanvas does not yet seem to be widely supported, according to caniuse.com. Whenever I try to create an offscreen canvas from my canvas I always get:

Failed to execute 'transferControlToOffscreen' on 
'HTMLCanvasElement': Cannot transfer control from a canvas that has a rendering context.

As I said, I am just trying to figure out how to render my animation smoothly but am not sure how to go about it. Any help here would be great. Here is the code.

<template>
  <div>
    <button @click="render">Render</button>
    <h2>Backgrounds</h2>
    <template v-for="background in backgrounds">
      <img
        :src="background.poster"
        class="backgrounds"
        @click="changeBackground(background.video)"
      />
    </template>
    <h2>Images</h2>
    <template v-for="image in images">
      <img
        :src="image.source"
        @click="addImage(image)"
        class="images"
      />
    </template>
    <br />
    <button @click="addText">Add Text</button>
    <button v-if="selectedNode" @click="removeNode">
      Remove selected {{ selectedNode.type }}
    </button>
    <label>Font:</label>
    <select v-model="selectedFont">
      <option value="Arial">Arial</option>
      <option value="Courier New">Courier New</option>
      <option value="Times New Roman">Times New Roman</option>
      <option value="Desoto">Desoto</option>
      <option value="Kalam">Kalam</option>
    </select>
    <label>Font Size</label>
    <input type="number" v-model="selectedFontSize" />
    <label>Font Style:</label>
    <select v-model="selectedFontStyle">
      <option value="normal">Normal</option>
      <option value="bold">Bold</option>
      <option value="italic">Italic</option>
    </select>
    <label>Color:</label>
    <input type="color" v-model="selectedColor" />
    <button
      v-if="selectedNode && selectedNode.type === 'text'"
      @click="updateText"
    >
      Update Text
    </button>
    <template v-if="selectedNode && selectedNode.lottie">
    <input type="text" v-model="text">
    <button @click="updateAnim(selectedNode.image)">
      Update Animation
    </button>
    </template>
    <br />
    <video
      id="preview"
      v-show="preview"
      :src="preview"
      :width="width"
      :height="height"
      preload="auto"
      controls
    />
    <a v-if="file" :href="file" download="dopeness.mp4">download</a>
    <div id="container"></div>
  </div>
</template>
<script>
import lottie from "lottie-web";
import * as anim from "../AEAnim/anim.json";
import * as anim2 from "../AEAnim/anim2.json";
import * as anim3 from "../AEAnim/anim3.json";
import * as anim4 from "../AEAnim/anim4.json";
import * as anim5 from "../AEAnim/anim5.json";

export default {
  data() {
    return {
      source: null,
      stage: null,
      layer: null,
      video: null,
      animations: [],
      text: "",
      animationData: null,
      captures: [],
      backgrounds: [
        {
          poster: "/api/files/stock/3oref310k1uud86w/poster/poster.jpg",
          video:
            "/api/files/stock/3oref310k1uud86w/main/1080/3oref310k1uud86w_1080.mp4"
        },
        {
          poster: "/api/files/stock/3yj2e30tk5x6x0ww/poster/poster.jpg",
          video:
            "/api/files/stock/3yj2e30tk5x6x0ww/main/1080/3yj2e30tk5x6x0ww_1080.mp4"
        },
        {
          poster: "/api/files/stock/2ez931ik1mggd6j/poster/poster.jpg",
          video:
            "/api/files/stock/2ez931ik1mggd6j/main/1080/2ez931ik1mggd6j_1080.mp4"
        },
        {
          poster: "/api/files/stock/yxrt4ej4jvimyk15/poster/poster.jpg",
          video:
            "/api/files/stock/yxrt4ej4jvimyk15/main/1080/yxrt4ej4jvimyk15_1080.mp4"
        },
        {
          poster:
            "https://images.costco-static.com/ImageDelivery/imageService?profileId=12026540&itemId=100424771-847&recipeName=680",
          video: "/api/files/jedi/surfing.mp4"
        },
        {
          poster:
            "https://thedefensepost.com/wp-content/uploads/2018/04/us-soldiers-afghanistan-4308413-1170x610.jpg",
          video: "/api/files/jedi/soldiers.mp4"
        }
      ],
      images: [
        { source: "/api/files/jedi/solo.jpg" },
        { source: "api/files/jedi/yoda.jpg" },
        { source: "api/files/jedi/yodaChristmas.jpg" },
        { source: "api/files/jedi/darthMaul.jpg" },
        { source: "api/files/jedi/darthMaul1.jpg" },
        { source: "api/files/jedi/trump.jpg" },
        { source: "api/files/jedi/hat.png" },
        { source: "api/files/jedi/trump.png" },
        { source: "api/files/jedi/bernie.png" },
        { source: "api/files/jedi/skywalker.png" },
        { source: "api/files/jedi/vader.gif" },
        { source: "api/files/jedi/vader2.gif" },
        { source: "api/files/jedi/yoda.gif" },
        { source: "api/files/jedi/kylo.gif" },
        {
          source: "https://media3.giphy.com/media/R3IxJW14a3QNa/source.gif",
          animation: anim
        },
        {
          source: "https://bestanimations.com/Text/Cool/cool-story-3.gif",
          animation: anim2
        },
        {
          source: "https://freefrontend.com/assets/img/css-text-animations/HTML-CSS-Animated-Text-Fill.gif",
          animation: anim3
        },
        {
          source: "api/files/jedi/zoomer.gif",
          animation: anim4
        },
        {
          source: "api/files/jedi/slider.gif",
          animation: anim5
        }
      ],
      backgroundVideo: null,
      imageGroups: [],
      anim: null,
      selectedNode: null,
      selectedFont: "Arial",
      selectedColor: "black",
      selectedFontSize: 20,
      selectedFontStyle: "normal",
      width: 1920,
      height: 1080,
      texts: [],
      preview: null,
      file: null,
      canvas: null
    };
  },
  mounted: function() {
    this.initCanvas();
  },
  methods: {
    changeBackground(source) {
      this.source = source;
      this.video.src = this.source;
      this.anim.stop();
      this.anim.start();
      this.video.play();
    },
    removeNode() {
      if (this.selectedNode && this.selectedNode.type === "text") {
        this.selectedNode.transformer.destroy(
          this.selectedNode.text.transformer
        );
        this.selectedNode.text.destroy(this.selectedNode.text);
        this.texts.splice(this.selectedNode.text.index - 1, 1);
        this.selectedNode = null;
        this.layer.draw();
      } else if (this.selectedNode && this.selectedNode.type == "image") {
        this.selectedNode.group.destroy(this.selectedNode);
        this.imageGroups.splice(this.selectedNode.group.index - 1, 1);
        if (this.selectedNode.lottie) {
          clearTimeout(this.animations.interval);
          this.selectedNode.lottie.destroy();
          this.animations.splice(this.selectedNode.lottie.index - 1, 1);
        }
        this.selectedNode = null;
        this.layer.draw();
      }
    },
    async addImage(imageToAdd, isUpdate) {
      let lottieAnimation = null;
      let imageObj = null;
      const type = imageToAdd.source.slice(imageToAdd.source.lastIndexOf("."));
      const vm = this;
      function process(img) {
        return new Promise((resolve, reject) => {
          img.onload = () => resolve({ width: img.width, height: img.height });
        });
      }
      imageObj = new Image();
      imageObj.src = imageToAdd.source;
      imageObj.width = 200;
      imageObj.height = 200;
      await process(imageObj);

      if (type === ".gif" && !imageToAdd.animation) {
        const canvas = document.createElement("canvas");
        canvas.setAttribute("id", "gif");
        async function onDrawFrame(ctx, frame) {
          ctx.drawImage(frame.buffer, frame.x, frame.y);
          // redraw the layer
          vm.layer.draw();
        }
        gifler(imageToAdd.source).frames(canvas, onDrawFrame);

        canvas.onload = async () => {
          canvas.parentNode.removeChild(canvas);
        };
        imageObj = canvas;
        const gif = new Image();
        gif.src = imageToAdd.source;
        const gifImage = await process(gif);
        imageObj.width = gifImage.width;
        imageObj.height = gifImage.height;
      } else if (imageToAdd.animation) {
        if (!isUpdate) { this.text = "new text"; }
        const canvas = document.createElement("canvas");
        canvas.style.width = 1920;
        canvas.style.height = 1080;
        canvas.setAttribute("id", "animationCanvas");
        const ctx = canvas.getContext("2d");
        const div = document.createElement("div");
        div.setAttribute("id", "animationContainer");
        div.style.display = "none";
        canvas.style.display = "none";
        this.animationData = imageToAdd.animation.default;
        for (let i = 0; i < this.animationData.layers.length; i++) {
          for (let b = 0; b < this.animationData.layers[i].t.d.k.length; b++) {
            this.animationData.layers[i].t.d.k[b].s.t = this.text;
          }
        }
        lottieAnimation = lottie.loadAnimation({
          container: div, // the dom element that will contain the animation
          renderer: "svg",
          loop: true,
          autoplay: true,
          animationData: this.animationData
        });
        lottieAnimation.imgSrc = imageToAdd.source;
        lottieAnimation.text = this.text;
        const svg = await div.getElementsByTagName("svg")[0];
        const timer = setInterval(async () => {

          const xml = new XMLSerializer().serializeToString(svg);
          const svg64 = window.btoa(xml);
          const b64Start = "data:image/svg+xml;base64,";
          const image64 = b64Start + svg64;
          imageObj = new Image({ width: canvas.width, height: canvas.height });
          imageObj.src = image64;
          await process(imageObj);
          ctx.clearRect(0, 0, canvas.width, canvas.height);
          ctx.drawImage(imageObj, 0, 0, canvas.width, canvas.height);
           this.layer.batchDraw();
        }, 1000 / 30);
        this.animations.push({ lottie: lottieAnimation, interval: timer });
        imageObj = canvas;
        canvas.onload = async () => {
          canvas.parentNode.removeChild(canvas);
        };
      }
      const image = new Konva.Image({
        x: 50,
        y: 50,
        image: imageObj,
        width: imageObj.width,
        height: imageObj.height,
        position: (0, 0),
        strokeWidth: 10,
        stroke: "blue",
        strokeEnabled: false
      });

      const group = new Konva.Group({
        draggable: true
      });
      // add the shape to the layer
      addAnchor(group, 0, 0, "topLeft");
      addAnchor(group, imageObj.width, 0, "topRight");
      addAnchor(group, imageObj.width, imageObj.height, "bottomRight");
      addAnchor(group, 0, imageObj.height, "bottomLeft");
      imageObj = null;
      image.on("click", function () {
        vm.hideAllHelpers();
        vm.selectedNode = {
          type: "image",
          group,
          lottie: lottieAnimation,
          image: imageToAdd
        };
        if (lottieAnimation && lottieAnimation.text) { vm.text = lottieAnimation.text; }
        group.find("Circle").show();

        vm.layer.draw();
      });
      image.on("mouseover", function(evt) {
        if (vm.selectedNode && vm.selectedNode.type === "image") {
          const index = image.getParent().index;
          const groupId = vm.selectedNode.group.index;
          if (index != groupId) {
            evt.target.strokeEnabled(true);
            vm.layer.draw();
          }
        } else {
          evt.target.strokeEnabled(true);
          vm.layer.draw();
        }
      });
      image.on("mouseout", function(evt) {
        evt.target.strokeEnabled(false);
        vm.layer.draw();
      });
      vm.hideAllHelpers();
      group.find("Circle").show();
      group.add(image);
      vm.layer.add(group);
      vm.imageGroups.push(group);

      vm.selectedNode = {
        type: "image",
        group,
        lottie: lottieAnimation,
        image: imageToAdd
      };
      function update(activeAnchor) {
        const group = activeAnchor.getParent();

        let topLeft = group.get(".topLeft")[0];
        let topRight = group.get(".topRight")[0];
        let bottomRight = group.get(".bottomRight")[0];
        let bottomLeft = group.get(".bottomLeft")[0];
        let image = group.get("Image")[0];

        let anchorX = activeAnchor.getX();
        let anchorY = activeAnchor.getY();

        // update anchor positions
        switch (activeAnchor.getName()) {
          case "topLeft":
            topRight.y(anchorY);
            bottomLeft.x(anchorX);
            break;
          case "topRight":
            topLeft.y(anchorY);
            bottomRight.x(anchorX);
            break;
          case "bottomRight":
            bottomLeft.y(anchorY);
            topRight.x(anchorX);
            break;
          case "bottomLeft":
            bottomRight.y(anchorY);
            topLeft.x(anchorX);
            break;
        }

        image.position(topLeft.position());

        let width = topRight.getX() - topLeft.getX();
        let height = bottomLeft.getY() - topLeft.getY();
        if (width && height) {
          image.width(width);
          image.height(height);
        }
      }
      function addAnchor(group, x, y, name) {
        let stage = vm.stage;
        let layer = vm.layer;

        let anchor = new Konva.Circle({
          x: x,
          y: y,
          stroke: "#666",
          fill: "#ddd",
          strokeWidth: 2,
          radius: 8,
          name: name,
          draggable: true,
          dragOnTop: false
        });

        anchor.on("dragmove", function() {
          update(this);
          layer.draw();
        });
        anchor.on("mousedown touchstart", function() {
          group.draggable(false);
          this.moveToTop();
        });
        anchor.on("dragend", function() {
          group.draggable(true);
          layer.draw();
        });
        // add hover styling
        anchor.on("mouseover", function() {
          let layer = this.getLayer();
          document.body.style.cursor = "pointer";
          this.strokeWidth(4);
          layer.draw();
        });
        anchor.on("mouseout", function() {
          let layer = this.getLayer();
          document.body.style.cursor = "default";
          this.strokeWidth(2);
          layer.draw();
        });

        group.add(anchor);
      }
    },
    async updateAnim(image) {
      this.addImage(image, true);
      this.removeNode();
    },
    hideAllHelpers() {
      for (let i = 0; i < this.texts.length; i++) {
        this.texts[i].transformer.hide();
      }
      for (let b = 0; b < this.imageGroups.length; b++) {
        this.imageGroups[b].find("Circle").hide();
      }
    },
    async startRecording(duration) {
      const chunks = []; // here we will store our recorded media chunks (Blobs)
      const stream = this.canvas.captureStream(30); // grab our canvas MediaStream
      const rec = new MediaRecorder(stream, {
        videoBitsPerSecond: 20000 * 1000
      });
      // every time the recorder has new data, we will store it in our array
      rec.ondataavailable = e => chunks.push(e.data);
      // only when the recorder stops, we construct a complete Blob from all the chunks
      rec.onstop = async e => {
        this.anim.stop();

        const blob = new Blob(chunks, {
          type: "video/webm"
        });

        this.preview = await URL.createObjectURL(blob);
        const video = window.document.getElementById("preview");
        const previewVideo = new Konva.Image({
          image: video,
          draggable: false,
          width: this.width,
          height: this.height
        });
        this.layer.add(previewVideo);

        console.log("video", video);
        video.addEventListener("ended", () => {
          console.log("preview ended");
          if (!this.file) {
            const vid = new Whammy.fromImageArray(this.captures, 30);
            this.file = URL.createObjectURL(vid);
          }
          previewVideo.destroy();
          this.anim.stop();
          this.anim.start();
          this.video.play();
        });
        let seekResolve;

        video.addEventListener("seeked", async () => {
          if (seekResolve) seekResolve();
        });
        video.addEventListener("loadeddata", async () => {
          let interval = 1 / 30;
          let currentTime = 0;
          while (currentTime <= duration && !this.file) {
            video.currentTime = currentTime;
            await new Promise(r => (seekResolve = r));

            this.layer.draw();
            let base64ImageData = this.canvas.toDataURL("image/webp");
            this.captures.push(base64ImageData);
            currentTime += interval;
            video.currentTime = currentTime;
          }

          this.layer.draw();
        });
      };
      rec.start();
      setTimeout(() => rec.stop(), duration);
    },
    async render() {
      this.captures = [];
      this.preview = null;
      this.file = null;
      this.hideAllHelpers();
      this.selectedNode = null;
      this.video.currentTime = 0;
      this.video.loop = false;
      const duration = this.video.duration * 1000;
      this.startRecording(duration);
      this.layer.draw();
    },
    updateText() {
      if (this.selectedNode && this.selectedNode.type === "text") {
        const text = this.selectedNode.text;
        const transformer = this.selectedNode.transformer;
        text.fontSize(this.selectedFontSize);
        text.fontFamily(this.selectedFont);
        text.fontStyle(this.selectedFontStyle);
        text.fill(this.selectedColor);
        this.layer.draw();
      }
    },
    addText() {
      const vm = this;
      const text = new Konva.Text({
        text: "new text " + (vm.texts.length + 1),
        x: 50,
        y: 80,
        fontSize: this.selectedFontSize,
        fontFamily: this.selectedFont,
        fontStyle: this.selectedFontStyle,
        fill: this.selectedColor,
        align: "center",
        width: this.width * 0.5,
        draggable: true
      });
      const transformer = new Konva.Transformer({
        node: text,
        keepRatio: true,
        enabledAnchors: ["top-left", "top-right", "bottom-left", "bottom-right"]
      });
      text.on("click", async () => {
        for (let i = 0; i < this.texts.length; i++) {
          let item = this.texts[i];
          if (item.index === text.index) {
            let transformer = item.transformer;
            this.selectedNode = { type: "text", text, transformer };
            this.selectedFontSize = text.fontSize();
            this.selectedFont = text.fontFamily();
            this.selectedFontStyle = text.fontStyle();
            this.selectedColor = text.fill();
            vm.hideAllHelpers();
            transformer.show();
            transformer.moveToTop();
            text.moveToTop();
            vm.layer.draw();
            break;
          }
        }
      });
      text.on("mouseover", () => {
        transformer.show();
        this.layer.draw();
      });
      text.on("mouseout", () => {
        if (
          (this.selectedNode &&
            this.selectedNode.text &&
            this.selectedNode.text.index != text.index) ||
          (this.selectedNode && this.selectedNode.type === "image") ||
          !this.selectedNode
        ) {
          transformer.hide();
          this.layer.draw();
        }
      });
      text.on("dblclick", () => {
        text.hide();
        transformer.hide();
        vm.layer.draw();
        let textPosition = text.absolutePosition();

        let stageBox = vm.stage.container().getBoundingClientRect();

        let areaPosition = {
          x: stageBox.left + textPosition.x,
          y: stageBox.top + textPosition.y
        };

        let textarea = document.createElement("textarea");
        window.document.body.appendChild(textarea);
        textarea.value = text.text();
        textarea.style.position = "absolute";
        textarea.style.top = areaPosition.y + "px";
        textarea.style.left = areaPosition.x + "px";
        textarea.style.width = text.width() - text.padding() * 2 + "px";
        textarea.style.height = text.height() - text.padding() * 2 + 5 + "px";
        textarea.style.fontSize = text.fontSize() + "px";
        textarea.style.border = "none";
        textarea.style.padding = "0px";
        textarea.style.margin = "0px";
        textarea.style.overflow = "hidden";
        textarea.style.background = "none";
        textarea.style.outline = "none";
        textarea.style.resize = "none";
        textarea.style.lineHeight = text.lineHeight();
        textarea.style.fontFamily = text.fontFamily();
        textarea.style.transformOrigin = "left top";
        textarea.style.textAlign = text.align();
        textarea.style.color = text.fill();
        let rotation = text.rotation();
        let transform = "";
        if (rotation) {
          transform += "rotateZ(" + rotation + "deg)";
        }
        let px = 0;
        let isFirefox =
          navigator.userAgent.toLowerCase().indexOf("firefox") > -1;
        if (isFirefox) {
          px += 2 + Math.round(text.fontSize() / 20);
        }
        transform += "translateY(-" + px + "px)";
        textarea.style.transform = transform;
        textarea.style.height = "auto";
        textarea.focus();

        // start
        function removeTextarea() {
          textarea.parentNode.removeChild(textarea);
          window.removeEventListener("click", handleOutsideClick);
          text.show();
          transformer.show();
          transformer.forceUpdate();
          vm.layer.draw();
        }

        function setTextareaWidth(newWidth) {
          if (!newWidth) {
            // set width for placeholder
            newWidth = text.placeholder.length * text.fontSize();
          }
          // some extra fixes on different browsers
          let isSafari = /^((?!chrome|android).)*safari/i.test(
            navigator.userAgent
          );
          let isFirefox =
            navigator.userAgent.toLowerCase().indexOf("firefox") > -1;
          if (isSafari || isFirefox) {
            newWidth = Math.ceil(newWidth);
          }

          let isEdge =
            document.documentMode || /Edge/.test(navigator.userAgent);
          if (isEdge) {
            newWidth += 1;
          }
          textarea.style.width = newWidth + "px";
        }

        textarea.addEventListener("keydown", function(e) {
          // hide on enter
          // but don't hide on shift + enter
          if (e.keyCode === 13 && !e.shiftKey) {
            text.text(textarea.value);
            removeTextarea();
          }
          // on esc do not set value back to node
          if (e.keyCode === 27) {
            removeTextarea();
          }
        });

        textarea.addEventListener("keydown", function(e) {
          let scale = text.getAbsoluteScale().x;
          setTextareaWidth(text.width() * scale);
          textarea.style.height = "auto";
          textarea.style.height =
            textarea.scrollHeight + text.fontSize() + "px";
        });

        function handleOutsideClick(e) {
          if (e.target !== textarea) {
            text.text(textarea.value);
            removeTextarea();
          }
        }
        setTimeout(() => {
          window.addEventListener("click", handleOutsideClick);
        });
        // end
      });
      text.transformer = transformer;
      this.texts.push(text);
      this.layer.add(text);
      this.layer.add(transformer);
      this.hideAllHelpers();
      this.selectedNode = { type: "text", text, transformer };
      transformer.show();
      this.layer.draw();
    },
    initCanvas() {
      const vm = this;
      this.stage = new Konva.Stage({
        container: "container",
        width: vm.width,
        height: vm.height
      });
      this.layer = new Konva.Layer();
      this.stage.add(this.layer);

      let video = document.createElement("video");
      video.setAttribute("id", "video");
      video.setAttribute("ref", "video");
      if (this.source) {
        video.src = this.source;
      }
      video.preload = "auto";
      video.loop = "loop";
      video.style.display = "none";
      this.video = video;
      this.backgroundVideo = new Konva.Image({
        image: vm.video,
        draggable: false
      });
      this.video.addEventListener("loadedmetadata", function(e) {
        vm.backgroundVideo.width(vm.width);
        vm.backgroundVideo.height(vm.height);
      });
      this.video.addEventListener("ended", () => {
        console.log("the video ended");
        this.anim.stop();
        this.anim.start();
        this.video.loop = "loop";
        this.video.play();
      });

      this.anim = new Konva.Animation(function() {
        console.log("animation called");
        // do nothing, animation just need to update the layer
      }, vm.layer);

      this.layer.add(this.backgroundVideo);
      this.layer.batchDraw();
      const canvas = document.getElementsByTagName("canvas")[0];
      canvas.style.border = "3px solid red";
      this.canvas = canvas;
    }
  }
};
</script>
<style scoped>
body {
  margin: 0;
  padding: 0;
  background-color: #f0f0f0;
}
.backgrounds,
.images {
  width: 100px;
  height: 100px;
  padding-left: 2px;
  padding-right: 2px;
}
</style>

About the error message

Just like you can't request context 'B' after you have requested context 'A', you can't transfer control of a DOM canvas to an OffscreenCanvas once a rendering context has already been requested from that canvas.
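
A minimal sketch of that ordering constraint, using plain DOM APIs (no Konva involved): transferControlToOffscreen() has to be called before any getContext() call on that element.

const good = document.createElement("canvas");
// Transfer first, then draw through the OffscreenCanvas: this works.
const offscreen = good.transferControlToOffscreen();
const offCtx = offscreen.getContext("2d");
offCtx.fillRect(0, 0, 100, 100);

const bad = document.createElement("canvas");
bad.getContext("2d"); // a rendering context now exists on the element...
bad.transferControlToOffscreen(); // ...so this throws the error you are seeing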

Here you are using the Konva.js library (which I don't know particularly well) to initialize your DOM canvas. The library needs to request one of the available contexts (apparently the "2D" one) from that canvas. This means that by the time you get access to that canvas, a context has already been requested by the library, so you can no longer transfer its control to an OffscreenCanvas.

There is this issue on the library's repo, which points out that as recently as 12 days ago they added some initial support for OffscreenCanvas. So I invite you to look at their example of how to proceed with that library.


About OffscreenCanvas performance

An OffscreenCanvas by itself doesn't offer any performance boost over a regular canvas. It won't magically make code that was running at 10 FPS run at 60 FPS.
What it allows is for your rendering to not block the main thread, and to not be blocked by the main thread. And for this, you need to transfer it to a Web Worker.
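
As a rough sketch of that handoff (the worker file name and the message shape are my own placeholders, not anything from your code):

// main thread
const canvasEl = document.querySelector("canvas");
const offscreen = canvasEl.transferControlToOffscreen();
const worker = new Worker("canvas-worker.js");
worker.postMessage({ canvas: offscreen }, [offscreen]); // must be transferred, not copied

// canvas-worker.js
let ctx;
self.onmessage = (e) => {
  ctx = e.data.canvas.getContext("2d");
  requestAnimationFrame(draw); // rAF is available in workers in browsers that support OffscreenCanvas
};
function draw() {
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  // ...draw the current frame here...
  requestAnimationFrame(draw);
}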

This means you may use it

  • if you are afraid your canvas code could block the UI, but you don't always need a smooth animation;
  • if you are afraid the main thread could slow down your canvas animation, e.g. if you have a lot of other things going on on the page.

But in your case it seems that only your code is running, so you will probably not gain anything by going this route.


About OffscreenCanvas limitations

We saw that to really take advantage of an OffscreenCanvas, we need to run it in a parallel thread, from a Web Worker. But Web Workers don't have access to the DOM.
This is a huge limitation and makes a lot of things much harder to handle.

For instance, to draw your video, you currently have no other way than to play it in a <video> element first. The Worker script can't access that <video> element, and it can't create one in its own thread. So the only solution is to create an ImageBitmap on the main thread and pass it to your Worker script.
All the hard work (video decoding + bitmap generation) is done on the main thread. It is worth noting that even though the createImageBitmap() method returns a Promise, when the source is a video the browser has no choice but to create the bitmap from the video synchronously.
So while getting that ImageBitmap for your worker, you are actually loading the main thread, and if the main thread is busy doing something else, you will obviously have to wait until it's done before you get your frame from the video.
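
A sketch of what that handoff looks like (the message shape is an assumption, and ctx stands for the worker-side context obtained as in the previous sketch):

// main thread: grab the current video frame and hand it to the worker
async function pushFrame(video, worker) {
  const bitmap = await createImageBitmap(video); // decoding happens on the main thread
  worker.postMessage({ frame: bitmap }, [bitmap]); // transferred, not copied
}

// worker: draw whatever frame arrives
self.onmessage = (e) => {
  ctx.drawImage(e.data.frame, 0, 0);
  e.data.frame.close(); // release the bitmap's memory once it has been drawn
};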

Another big limitation is that currently* Web Workers can't react to DOM events. So you have to set up a proxy that forwards the events received on the main thread to your Worker thread. Once again, this requires the main thread to be free, and a lot of new code.
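
A sketch of such a proxy, forwarding only pointer coordinates (a real editor would need to forward every event type and every property its hit-testing logic cares about):

// main thread: relay the events the worker needs
canvasEl.addEventListener("pointermove", (e) => {
  worker.postMessage({ type: "pointermove", x: e.offsetX, y: e.offsetY });
});

// worker: react to the proxied event
self.addEventListener("message", (e) => {
  if (e.data.type === "pointermove") {
    // hit-testing / dragging logic would go here
  }
});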


About your code

Because, yeah, I believe this is where you should be looking if you want performance improvements.

I only gave it a quick look, but I already saw that you are using setInterval in a few places at a high rate. Don't. If you need to animate something visible, always use requestAnimationFrame; if you don't need the full speed, add inner logic to skip frames, but keep using rAF as the main engine.
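
A sketch of that pattern, throttling a requestAnimationFrame loop down to roughly 30 FPS (the target is just an example value):

const frameInterval = 1000 / 30; // aim for ~30 FPS
let lastTime = 0;
function loop(now) {
  requestAnimationFrame(loop); // keep rAF as the engine
  if (now - lastTime < frameInterval) return; // skip this frame
  lastTime = now;
  // ...per-frame work goes here (e.g. layer.batchDraw())...
}
requestAnimationFrame(loop);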

You are asking the browser to do heavy work on every frame. For instance, your SVG part builds a brand-new SVG markup string from the DOM node on every frame, then loads that markup into an <img> (which means the browser has to build an entire new DOM for that image), and finally rasterizes it onto a canvas.
This in itself will be hard to keep up with at a high frame rate, and an OffscreenCanvas won't help.

You are also storing every captured frame as a still image to produce your final video. This will consume a lot of memory.

There is probably a lot of other stuff like that in your code, so review it thoroughly and look for whatever keeps it from reaching the screen's refresh rate. Improve what can be improved, look for alternatives (for instance MediaRecorder is probably a better fit than Whammy where supported), and good luck.
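
For instance, a sketch of recording the canvas stream directly to a WebM blob, instead of keeping every frame as a data URL for Whammy (the frame rate and MIME type here are just example values):

const stream = canvasEl.captureStream(30);
const recorder = new MediaRecorder(stream, { mimeType: "video/webm" });
const chunks = [];
recorder.ondataavailable = (e) => chunks.push(e.data);
recorder.onstop = () => {
  const url = URL.createObjectURL(new Blob(chunks, { type: "video/webm" }));
  // use `url` as the download href instead of the Whammy output
};
recorder.start();
// later: recorder.stop();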


* There is an ongoing proposal to fix that issue
