After successfully deploying dozens of models, of which only the most trivial (one argument in, one out) ever return prediction results due to parsing and other argument errors, I went back to the official wide and deep tutorial and the serving wide and deep tutorial continuation to try to export, deploy, and predict on ml-engine. I cannot get any permutation of text or JSON arguments to pass parsing. Here are some of my tests and the responses:
1) input file content, text:
25,0,0,"11th",7,"Male",40,"United-States","Machine-op-inspct","Own-child","Private"
response:
{"error": "Prediction failed: Error during model execution: AbortionError(code=StatusCode.INVALID_ARGUMENT, details=\"Could not parse example input, value: '25,0,0,\"11th\",7,\"Male\",40,\"United-States\",\"Machine-op-inspct\",\"Own-child\",\"Private\"'\n\t [[Node: ParseExample/ParseExample = ParseExample[Ndense=5, Nsparse=6, Tdense=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], dense_shapes=[[1], [1], [1], [1], [1]], sparse_types=[DT_STRING, DT_STRING, DT_STRING, DT_STRING, DT_STRING, DT_STRING], _device=\"/job:localhost/replica:0/task:0/device:CPU:0\"](_arg_input_example_tensor_0_0, ParseExample/ParseExample/names, ParseExample/ParseExample/sparse_keys_0, ParseExample/ParseExample/sparse_keys_1, ParseExample/ParseExample/sparse_keys_2, ParseExample/ParseExample/sparse_keys_3, ParseExample/ParseExample/sparse_keys_4, ParseExample/ParseExample/sparse_keys_5, ParseExample/ParseExample/dense_keys_0, ParseExample/ParseExample/dense_keys_1, ParseExample/ParseExample/dense_keys_2, ParseExample/ParseExample/dense_keys_3, ParseExample/ParseExample/dense_keys_4, ParseExample/Const, ParseExample/Const, ParseExample/Const, ParseExample/Const, Pa...TRUNCATED\")"}
2) input file content, JSON:
{"age":25,"capital_gain":0,"capital_loss":0,"education":"11th","education_num":7,"gender":"Male","hours_per_week":40,"native_country":"United-States","occupation":"Machine-op-inspct","relationship":"Own-child","workclass":"Private"}
response:
{....failed: Expected tensor name: inputs, got tensor name: [u'hours_per_week', u'native_country',....}
3) input file content, JSON:
{"inputs":{"age":25,"capital_gain":0,"capital_loss":0,"education":"11th","education_num":7,"gender":"Male","hours_per_week":40,"native_country":"United-States","occupation":"Machine-op-inspct","relationship":"Own-child","workclass":"Private"}}
response:
{....Error processing input: Expected string, got {u'hours_per_week': 40, u'native_count....}
4) input file content, JSON:
{"inputs":"25,0,0,11th,7,Male,40,United-States,Machine-op-inspct,Own-child,Private"}
response:
{...."Prediction failed: Error during model execution: AbortionError(code=StatusCode.INVALID_ARGUMENT, details=\"Could not parse example input, value: '25,0,0,11th,7,Male,40,United-States,Machine-op-inspct,Own-child,Private'\n\t [[Node: ParseExample/ParseExample = ParseExample[Ndense=5,....}
I also tried inner escaped quotes, various lists/arrays, etc.
Please tell me I just need to reformat my inputs in the predict request (and how) :) -Thanks
In the current state of affairs, a graph that accepts JSON and one that accepts serialized tf.train.Example protos are mutually exclusive, meaning you have to export the graph slightly differently for each.
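For completeness: if you keep the parsing export (build_parsing_serving_input_receiver_fn), each instance must be a serialized tf.train.Example proto, not a CSV string or a plain JSON object; that is why every variant above failed to parse. Over JSON, binary strings must be base64-encoded under a "b64" key. A sketch of what such an instance could look like, assuming the signature's input tensor is named inputs (as your second error message suggests) and with the payload left as a placeholder:

```json
{"inputs": {"b64": "<base64 of tf.train.Example(...).SerializeToString()>"}}
```

In Python you would build the proto with tf.train.Example, call SerializeToString(), and base64-encode the result before writing it into the instance. Switching to a raw export, as described next, avoids all of this.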
From the serving wide and deep tutorial continuation, change these lines from:
feature_spec = tf.feature_column.make_parse_example_spec(feature_columns)
export_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)
to
inputs = {}
for feat in INPUT_COLUMNS:
    inputs[feat.name] = tf.placeholder(shape=[None], dtype=feat.dtype)
export_input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(inputs)
For reference, see this sample, in particular the *_serving_fn definitions in model.py (e.g. here); that example also shows how to export a graph that expects CSV as input.
Another note: if you are using gcloud to send the requests (as opposed to a request library), the input data file is not the full body of the request; gcloud constructs the request using each line in the file. So the body of an actual request sent to the server will look something like:
{
"instances": [
{
"age": 25,
"capital_gain": 0,
"capital_loss": 0,
"education": "11th",
"education_num": 7,
"gender": "Male",
"hours_per_week": 40,
"native_country": "United-States",
"occupation": "Machine-op-inspct",
"relationship": "Own-child",
"workclass": "Private"
}
]
}
while the corresponding --json-instances file will look like:
{"age":25,"capital_gain":0,"capital_loss":0,"education":"11th","education_num":7,"gender":"Male","hours_per_week":40,"native_country":"United-States","occupation":"Machine-op-inspct","relationship":"Own-child","workclass":"Private"}
gcloud takes the contents of each line and stuffs them into the array shown in the "actual" request above.
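The wrapping gcloud performs can be sketched in a few lines of standard-library Python; this is an illustration of the transformation, not gcloud's actual implementation, and build_request_body is a hypothetical helper name:

```python
import json

def build_request_body(jsonl_text):
    """Mimic gcloud: each non-empty line of the --json-instances file
    becomes one element of the "instances" array in the request body."""
    instances = [json.loads(line)
                 for line in jsonl_text.splitlines() if line.strip()]
    return {"instances": instances}

# One line per instance, exactly as it appears in the --json-instances file.
line = ('{"age":25,"capital_gain":0,"capital_loss":0,"education":"11th",'
        '"education_num":7,"gender":"Male","hours_per_week":40,'
        '"native_country":"United-States","occupation":"Machine-op-inspct",'
        '"relationship":"Own-child","workclass":"Private"}')
body = build_request_body(line)
print(json.dumps(body, indent=2))
```

A file with several lines would simply yield several elements in "instances".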