I have a music app used to compose music. Up to 3 instruments can play at once, and there are 2 effects available: pitch change and reverb.
I'm currently using the standard Web Audio API, but I sometimes run into issues when too many notes play at once: the audio gets garbled. I also liked some of the features that Tone.js offers.
I wanted to try Tone.js to address those issues (if possible) while keeping all the features I previously had. The problem is that I only skimmed the docs and don't really understand how I should structure things. I have 21 audio files (one per note) for each of the 3 instruments. My questions are:
Any suggestions are welcome too. The website is: here
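One common way to structure a multi-sampled setup like the one described above is one `Tone.Sampler` per instrument, chained through a `Tone.PitchShift` and a `Tone.Reverb`. This is only a minimal sketch: the helper names (`makeNoteMap`, `makeInstrument`), the sample folder layout, the file-naming scheme, and the C4-upward note range are assumptions, not details from the question.

```javascript
// Sketch assuming Tone.js is loaded globally as `Tone` (e.g. via a <script> tag).
// `makeNoteMap`, `makeInstrument`, the folder layout, and the note range are
// illustrative assumptions.

const NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"];

// Map a run of `count` consecutive semitones starting at MIDI note `startMidi`
// to file names, e.g. { C4: "C4.mp3", "C#4": "Cs4.mp3", ... } -- one entry per sample.
function makeNoteMap(startMidi, count) {
  const map = {};
  for (let i = 0; i < count; i++) {
    const midi = startMidi + i;
    const name = NOTE_NAMES[midi % 12] + (Math.floor(midi / 12) - 1);
    map[name] = name.replace("#", "s") + ".mp3"; // assumed file-naming scheme
  }
  return map;
}

// One Sampler per instrument. Tone.Sampler handles polyphony for you and
// repitches the nearest loaded sample for any note you didn't provide.
function makeInstrument(folder) {
  const pitchShift = new Tone.PitchShift(0); // effect 1: pitch change, in semitones
  const reverb = new Tone.Reverb({ decay: 2, wet: 0.25 }); // effect 2: reverb
  const sampler = new Tone.Sampler({
    urls: makeNoteMap(60, 21), // the 21 files, starting at C4 (assumed range)
    baseUrl: `/samples/${folder}/`, // assumed sample location
  }).chain(pitchShift, reverb, Tone.Destination);
  return { sampler, pitchShift, reverb };
}
```

Usage would then look like `makeInstrument("piano")`, followed by `await Tone.loaded()` to wait for the buffers and `await Tone.start()` after a user gesture; notes play with `sampler.triggerAttackRelease("C4", "8n")`, and `pitchShift.pitch = 2` shifts everything up two semitones.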
I'm not familiar with Tone.js, but when trying to get high-quality audio with getDisplayMedia in the past, I've passed MediaStreamConstraints that remove some of the default processing on the input track:
```javascript
const stream = await navigator.mediaDevices.getDisplayMedia({
  video: true,
  audio: {
    channelCount: 2,         // `channelCount` is the standard constraint name
    autoGainControl: false,  // disable voice-oriented processing
    echoCancellation: false,
    noiseSuppression: false,
  },
});
```
I'm still learning WebRTC and, again, haven't used Tone.js, so I'm not sure whether this helps at all, but I thought I'd share it just in case.