Send sound through microphone in javascript
I'm using Selenium to emulate a user on a website that has audio chat. I need to emulate the user speaking through the microphone.
I have only found questions about listening to the microphone in JavaScript, but none about sending sound through the microphone using JavaScript.
My current attempt looks like this:
First I check that AudioContext is available:
private boolean isAudioContextSupported() {
    JavascriptExecutor js = (JavascriptExecutor) getDriver();
    Object response = js.executeAsyncScript(
            "var callback = arguments[arguments.length - 1];" +
            "var context;" +
            "try {" +
            "    window.AudioContext = window.AudioContext || window.webkitAudioContext;" +
            "    context = new AudioContext();" +
            "    callback('');" +
            "} catch (e) {" +
            "    callback('Web Audio API is not supported in this browser');" +
            "}");
    String responseAsString = response == null ? "" : (String) response;
    return responseAsString.isEmpty();
}
Second, I try to fetch the audio from a URL:
JavascriptExecutor js = (JavascriptExecutor) getDriver();
Object response = js.executeAsyncScript(
        "var callback = arguments[arguments.length - 1];" +
        "window.AudioContext = window.AudioContext || window.webkitAudioContext;" +
        "var context = new AudioContext();" +
        "var url = '<ogg file url>';" +
        "var request = new XMLHttpRequest();" +
        "request.open('GET', url, true);" +
        "request.responseType = 'arraybuffer';" +
        "request.onload = function() {" +
        "    context.decodeAudioData(request.response, function(buffer) {" +
        "        <send the buffer data through the microphone>" +
        "        callback('OK');" +
        // Note: the error handler must be a function; invoking callback(...) inline would
        // run it immediately, and calling callback('OK') at the end of the script would
        // return before the request finishes.
        "    }, function() { callback(request.statusText); });" +
        "};" +
        "request.send();"
);
The part I'm missing is how to send the buffer data (obtained from the ogg file) through the microphone.
EDIT:
The answer in Chrome: fake microphone input for test purpose does not answer this question; I have already read that one.
EDIT 2:
There are some things to be considered:
1) The solution I'm looking for can include using another language or tool.
2) I can't use hardware to emulate mic input (e.g. outputting sound via speakers so the microphone can pick it up).
I think you don't need to use JavascriptExecutor. There is a hack for your problem.
Solution:
Use Java instead.
Step 1:
Start the chat voice listener.
Step 2:
Now play an audio file programmatically.
Use:

import java.io.FileInputStream;
import javazoom.jl.player.Player;

public void playAudio(String audioPath) {
    try {
        FileInputStream fileInputStream = new FileInputStream(audioPath);
        Player player = new Player(fileInputStream);
        player.play();
        System.out.println("Song is playing");
    } catch (Exception ex) {
        System.out.println("Error with playing sound.");
        ex.printStackTrace();
    }
}
Step 3:
To enable microphone access, use the following argument:
options.addArguments("use-fake-ui-for-media-stream");
The above code will play the sound for you, and your chat listener can listen to the played audio.
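For context, a minimal sketch of the driver setup this step implies, assuming Selenium 4 with ChromeDriver: besides use-fake-ui-for-media-stream (which auto-accepts the permission prompt), Chrome also has the use-fake-device-for-media-stream and use-file-for-fake-audio-capture switches, which replace the real microphone with a fake device fed from a WAV file. The file path here is a placeholder.

```java
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

public class FakeMicSetup {
    public static ChromeDriver createDriver() {
        ChromeOptions options = new ChromeOptions();
        // Auto-accept the getUserMedia permission prompt instead of showing a dialog.
        options.addArguments("use-fake-ui-for-media-stream");
        // Replace real capture devices with Chrome's fake ones...
        options.addArguments("use-fake-device-for-media-stream");
        // ...and feed this WAV file (placeholder path) as the fake microphone input.
        options.addArguments("use-file-for-fake-audio-capture=/path/to/input.wav");
        return new ChromeDriver(options);
    }
}
```

This browser-level approach avoids routing audio through speakers at all, which matches the constraint in EDIT 2.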
I don't know much about running Selenium with Java, but it looks like you can execute arbitrary JavaScript code before running the tests. I guess your code does at some point call getUserMedia() to get the microphone input. Therefore it might work if you just replace that function with a function that returns a MediaStream of your audio file.
navigator.mediaDevices.getUserMedia = () => {
const audioContext = new AudioContext();
return fetch('/your/audio/file.ogg')
.then((response) => response.arrayBuffer())
.then((arrayBuffer) => audioContext.decodeAudioData(arrayBuffer))
.then((audioBuffer) => {
const audioBufferSourceNode = audioContext.createBufferSource();
const mediaStreamAudioDestinationNode = audioContext.createMediaStreamDestination();
audioBufferSourceNode.buffer = audioBuffer;
// Maybe it makes sense to loop the buffer.
audioBufferSourceNode.loop = true;
audioBufferSourceNode.start();
audioBufferSourceNode.connect(mediaStreamAudioDestinationNode);
return mediaStreamAudioDestinationNode.stream;
});
};
Maybe you also have to disable the autoplay policy in order to make it work.
Unfortunately the code for Safari needs to be a bit more complicated, because decodeAudioData() doesn't return a promise in Safari. I did not add the workaround here to keep the code as simple as possible.
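One way to make sure an override like the one above is installed before the page's own script runs is to register it from the Selenium side with the Page.addScriptToEvaluateOnNewDocument CDP command. This is a sketch, assuming Selenium 4 with ChromeDriver (executeCdpCommand is Chromium-specific); the audio file URL is the same placeholder as in the answer.

```java
import java.util.Map;
import org.openqa.selenium.chrome.ChromeDriver;

public class GetUserMediaOverride {
    public static void install(ChromeDriver driver) {
        String script =
                "navigator.mediaDevices.getUserMedia = () => {" +
                "    const audioContext = new AudioContext();" +
                "    return fetch('/your/audio/file.ogg')" +
                "        .then((response) => response.arrayBuffer())" +
                "        .then((arrayBuffer) => audioContext.decodeAudioData(arrayBuffer))" +
                "        .then((audioBuffer) => {" +
                "            const source = audioContext.createBufferSource();" +
                "            const destination = audioContext.createMediaStreamDestination();" +
                "            source.buffer = audioBuffer;" +
                "            source.loop = true;" +
                "            source.start();" +
                "            source.connect(destination);" +
                "            return destination.stream;" +
                "        });" +
                "};";
        // Runs the override in every new document before any page script executes,
        // so the page never sees the real getUserMedia.
        driver.executeCdpCommand(
                "Page.addScriptToEvaluateOnNewDocument",
                Map.of("source", script));
    }
}
```

Call GetUserMediaOverride.install(driver) once after creating the driver and before navigating to the chat page.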