Android Studio Google Glass: How to take a picture with voice command
Ideally, I'm trying to take the user's voice input via voice_trigger.xml and store it in strings.xml, so that I can compare it against a string variable in the Camera activity; if they match, take a picture and save it.
I'm not quite sure how to go about this. Does it sound like I have the right idea for solving this?
Below: voice_trigger.xml, strings.xml, CameraActivity.java
voice_trigger.xml:
<?xml version="1.0" encoding="utf-8"?>
<!-- For more information about voice trigger, check out: https://developers.google.com/glass/develop/gdk/starting-glassware -->
<trigger keyword="Visual tracker">
    <!-- <input prompt="@string/glass_tracking_prompt" /> -->
    <input interaction="@string/take_picture" />
    <constraints network="true"
                 camera="true" />
</trigger>
strings.xml:
<resources>
    <string name="app_name">GlassTracker</string>
    <string name="title_activity_live_card_service">Tracking Prime LiveCard</string>
    <string name="title_activity_live_card_renderer">Tracking Prime Activity</string>
    <string name="action_stop">Close App</string>
    <string name="action_tune_track">Tune Tracker</string>
    <string name="action_start_track">Start Tracking</string>
    <string name="action_stop_track">Stop Tracking</string>
    <string name="hello_world">Hello visual tracker!</string>
    <string name="glass_tracking_trigger">Visual tracker</string>
    <string name="glass_tracking_prompt">Tap to tune</string>
    <string name="take_picture">Take a picture</string>
</resources>
CameraActivity.java:
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.os.FileObserver;
import android.provider.MediaStore;
import android.speech.RecognizerIntent;

import com.google.android.glass.content.Intents;

import java.io.File;
import java.util.ArrayList;
import java.util.List;

/**
 * Created by
 */
public class CameraActivity extends Activity {

    private CameraSurfaceView cameraView;
    private static final int TAKE_PICTURE_REQUEST = 1;

    // Take the picture only if the string take_picture from voice control allows for it.
    private void takePicture() {
        Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
        startActivityForResult(intent, TAKE_PICTURE_REQUEST);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == TAKE_PICTURE_REQUEST && resultCode == RESULT_OK) {
            String thumbnailPath = data.getStringExtra(Intents.EXTRA_THUMBNAIL_FILE_PATH);
            String picturePath = data.getStringExtra(Intents.EXTRA_PICTURE_FILE_PATH);
            processPictureWhenReady(picturePath);
            // TODO: Show the thumbnail to the user while the full picture is being
            // processed.
        }
        super.onActivityResult(requestCode, resultCode, data);
    }

    private void processPictureWhenReady(final String picturePath) {
        final File pictureFile = new File(picturePath);
        if (pictureFile.exists()) {
            // The picture is ready; process it.
        } else {
            // The file does not exist yet. Before starting the file observer, you
            // can update your UI to let the user know that the application is
            // waiting for the picture (for example, by displaying the thumbnail
            // image and a progress indicator).
            final File parentDirectory = pictureFile.getParentFile();
            FileObserver observer = new FileObserver(parentDirectory.getPath(),
                    FileObserver.CLOSE_WRITE | FileObserver.MOVED_TO) {
                // Protect against additional pending events after CLOSE_WRITE
                // or MOVED_TO is handled.
                private boolean isFileWritten;

                @Override
                public void onEvent(int event, String path) {
                    if (!isFileWritten) {
                        // For safety, make sure that the file that was created in
                        // the directory is actually the one that we're expecting.
                        File affectedFile = new File(parentDirectory, path);
                        isFileWritten = affectedFile.equals(pictureFile);
                        if (isFileWritten) {
                            stopWatching();
                            // Now that the file is ready, recursively call
                            // processPictureWhenReady again (on the UI thread).
                            runOnUiThread(new Runnable() {
                                @Override
                                public void run() {
                                    processPictureWhenReady(picturePath);
                                }
                            });
                        }
                    }
                }
            };
            observer.startWatching();
        }
    }

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Initiate CameraView
        cameraView = new CameraSurfaceView(this); // Calls CameraSurfaceView
        // Set the view
        this.setContentView(cameraView);
    }

    @Override
    protected void onResume() {
        super.onResume();
        // Do not hold the camera during onResume
        if (cameraView != null) {
            cameraView.releaseCamera();
        }
    }

    @Override
    protected void onPause() {
        super.onPause();
        // Do not hold the camera during onPause
        if (cameraView != null) {
            cameraView.releaseCamera();
        }
    }
}
Thanks in advance!
The voice command is listened for via the voice_trigger.xml file. When you speak the command to the device, voiceRes is populated with what you said; since it is an array of strings, it needs to be converted to a String before comparing. Then it's easy to compare against what the user said. Also, be sure to include the square brackets in the comparison, or the if statement won't work. This is a solution I found quickly, and I'm sure there are plenty of other ways to do it.
In the onCreate() method, add the following after this.setContentView(cameraView);:

ArrayList<String> voiceRes = getIntent().getExtras().getStringArrayList(RecognizerIntent.EXTRA_RESULTS);
String voiceData = voiceRes.toString();
if (voiceData.equals("[take a picture]")) {
    takePicture();
}
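As a sketch of a more robust alternative: comparing against the bracketed output of List.toString() (which renders a one-element list as "[take a picture]") is fragile, since the recognizer may return several hypotheses or different capitalization. Checking each hypothesis individually avoids that. Note that VoiceCommandMatcher and matchesCommand are hypothetical names, not part of the Glass GDK; in the activity you would pass in getIntent().getExtras().getStringArrayList(RecognizerIntent.EXTRA_RESULTS) and call takePicture() when it returns true.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

public class VoiceCommandMatcher {

    // Returns true if any recognition hypothesis matches the expected phrase,
    // ignoring case and surrounding whitespace.
    public static boolean matchesCommand(List<String> hypotheses, String expected) {
        if (hypotheses == null) {
            return false;
        }
        for (String hypothesis : hypotheses) {
            if (hypothesis != null
                    && hypothesis.trim().toLowerCase(Locale.US)
                            .equals(expected.toLowerCase(Locale.US))) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        List<String> results = new ArrayList<String>();
        results.add("Take a picture");
        // A one-element list's toString() is "[Take a picture]", with brackets,
        // which is why the exact-string comparison above needs them.
        System.out.println(results.toString());
        System.out.println(matchesCommand(results, "take a picture")); // prints "true"
    }
}
```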