
IBM Watson CPLEX Shows no Variables, no Solution when solving LP file

I'm migrating an application that formerly ran on IBM's DOcloud to their new Watson-based API. Since our application doesn't have its data formatted as CSV, nor a separation between the model and data layers, it seemed simpler to upload an LP file along with a model file that reads the LP file and solves it. I can upload it, and it claims to solve correctly, but it returns an empty solve status. I've also output various pieces of model info (e.g. the number of variables) and everything is zeroed out. I've confirmed the LP isn't blank - it contains a trivial MILP.

Here is my model code (most of which is taken directly from the example at https://dataplatform.cloud.ibm.com/exchange/public/entry/view/50fa9246181026cd7ae2a5bc7e4ac7bd ):

import os
import sys
from os.path import splitext

import pandas
from docplex.mp.model_reader import ModelReader
from docplex.util.environment import get_environment
from six import iteritems


def loadModelFiles():
    """Load the input CSVs and extract the model and param data from it
    """
    env = get_environment()
    inputModel = params = None
    modelReader = ModelReader()

    for inputName in [f for f in os.listdir('.') if splitext(f)[1] != '.py']:
        inputBaseName, ext = splitext(inputName)

        print(f'Info: loading {inputName}')

        try:
            if inputBaseName == 'model':
                inputModel = modelReader.read_model(inputName, model_name=inputBaseName)
            elif inputBaseName == 'params':
                params = modelReader.read_prm(inputName)
        except Exception as e:
            with env.get_input_stream(inputName) as inStream:
                inData = inStream.read()
            raise Exception(f'Error: {e} found while processing {inputName} with contents {inData}')

    if inputModel is None or params is None:
        print('Warning: error loading model or params, see earlier messages for details')

    return inputModel, params


def writeOutputs(outputs):
    """Write all dataframes in ``outputs`` as .csv.

    Args:
        outputs: The map of outputs 'outputname' -> 'output df'
    """
    for (name, df) in iteritems(outputs):
        csv_file = '%s.csv' % name
        print(csv_file)
        with get_environment().get_output_stream(csv_file) as fp:
            if sys.version_info[0] < 3:
                fp.write(df.to_csv(index=False, encoding='utf8'))
            else:
                fp.write(df.to_csv(index=False).encode(encoding='utf8'))
    if len(outputs) == 0:
        print("Warning: no outputs written")


# load and solve model
model, modelParams = loadModelFiles()
ok = model.solve(cplex_parameters=modelParams)

solution_df = pandas.DataFrame(columns=['name', 'value'])

for index, dvar in enumerate(model.solution.iter_variables()):
    solution_df.loc[index, 'name'] = dvar.to_string()
    solution_df.loc[index, 'value'] = dvar.solution_value

outputs = {}
outputs['solution'] = solution_df

# Generate output files
writeOutputs(outputs)

try:
    with get_environment().get_output_stream('test.txt') as fp:
        fp.write(f'{model.get_statistics()}'.encode('utf-8'))

except Exception as e:
    with get_environment().get_output_stream('excInfo') as fp:
        fp.write(f'Got exception {e}')

and a stub of the code that runs it (again, pulling heavily from the example):

prmFile = NamedTemporaryFile()
prmFile.write(self.ctx.cplex_parameters.export_prm_to_string().encode())
modelFile = NamedTemporaryFile()
modelFile.write(self.solver.export_as_lp_string(hide_user_names=True).encode())
modelMetadata = {
    self.client.repository.ModelMetaNames.NAME: self.name,
    self.client.repository.ModelMetaNames.TYPE: 'do-docplex_12.9',
    self.client.repository.ModelMetaNames.RUNTIME_UID: 'do_12.9'
}
baseDir = os.path.dirname(os.path.realpath(__file__))

def reset(tarinfo):
    tarinfo.uid = tarinfo.gid = 0
    tarinfo.uname = tarinfo.gname = 'root'
    return tarinfo

with NamedTemporaryFile() as tmp:
    tar = tarfile.open(tmp.name, 'w:gz')
    tar.add(f'{baseDir}/ibm_model.py', arcname='main.py', filter=reset)
    tar.add(prmFile.name, arcname='params.prm', filter=reset)
    tar.add(modelFile.name, arcname='model.lp', filter=reset)
    tar.close()

    modelDetails = self.client.repository.store_model(
        model=tmp.name,
        meta_props=modelMetadata
    )

    modelUid = self.client.repository.get_model_uid(modelDetails)

metaProps = {
    self.client.deployments.ConfigurationMetaNames.NAME: self.name,
    self.client.deployments.ConfigurationMetaNames.BATCH: {},
    self.client.deployments.ConfigurationMetaNames.COMPUTE: {'name': 'S', 'nodes': 1}
}
deployDetails = self.client.deployments.create(modelUid, meta_props=metaProps)
deployUid = self.client.deployments.get_uid(deployDetails)

solvePayload = {
    # we upload input data as part of model since only CSV data is supported in this interface
    self.client.deployments.DecisionOptimizationMetaNames.INPUT_DATA: [],
    self.client.deployments.DecisionOptimizationMetaNames.OUTPUT_DATA: [
        {
            "id": ".*"
        }
    ]
}

jobDetails = self.client.deployments.create_job(deployUid, solvePayload)
jobUid = self.client.deployments.get_job_uid(jobDetails)

while jobDetails['entity']['decision_optimization']['status']['state'] not in ['completed', 'failed',
                                                                                'canceled']:
    logger.debug(jobDetails['entity']['decision_optimization']['status']['state'] + '...')
    time.sleep(5)
    jobDetails = self.client.deployments.get_job_details(jobUid)

logger.debug(jobDetails['entity']['decision_optimization']['status']['state'])

# cleanup
self.client.repository.delete(modelUid)
prmFile.close()
modelFile.close()

Any ideas of what could be causing this, or what a good test avenue would be? It seems there's no way to view the output of the model for debugging - am I missing something in Watson Studio?

I tried something very similar based on your code, and the solution is included in the payload when the job is completed.

See this shared notebook: https://dataplatform.cloud.ibm.com/analytics/notebooks/v2/cfbe34a0-52a8-436c-99bf-8df6979c11da/view?access_token=220636400ecdf537fb5ea1b47d41cb10f1b252199d1814d8f96a0280ec4a4e1e

In the last cells, after the job is completed, I print the status:

print(jobDetails['entity']['decision_optimization'])

and get

{'output_data_references': [], 'input_data': [], 'solve_state': {'details': {'PROGRESS_GAP': '0.0', 'MODEL_DETAIL_NONZEROS': '3', 'MODEL_DETAIL_TYPE': 'MILP', 'MODEL_DETAIL_CONTINUOUS_VARS': '0', 'MODEL_DETAIL_CONSTRAINTS': '2', 'PROGRESS_CURRENT_OBJECTIVE': '100.0', 'MODEL_DETAIL_INTEGER_VARS': '2', 'MODEL_DETAIL_KPIS': '[]', 'MODEL_DETAIL_BOOLEAN_VARS': '0', 'PROGRESS_BEST_OBJECTIVE': '100.0'}, 'solve_status': 'optimal_solution'}, 'output_data': [{'id': 'test.txt', 'fields': ['___TEXT___'], 'values': [['IC0gbnVtYmVyIG9mIHZhcmlhYmxlczogMgogICAtIGJpbmFyeT0wLCBpbnRlZ2VyPTIsIGNvbnRpbnVvdXM9MAogLSBudW1iZXIgb2YgY29uc3RyYWludHM6IDIKICAgLSBsaW5lYXI9Mg==']]}, {'id': 'solution.json', 'fields': ['___TEXT___'], 'values': [['eyJDUExFWFNvbHV0aW9uIjogeyJ2ZXJzaW9uIjogIjEuMCIsICJoZWFkZXIiOiB7InByb2JsZW1OYW1lIjogIm1vZGVsIiwgIm9iamVjdGl2ZVZhbHVlIjogIjEwMC4wIiwgInNvbHZlZF9ieSI6ICJjcGxleF9sb2NhbCJ9LCAidmFyaWFibGVzIjogW3siaW5kZXgiOiAiMCIsICJuYW1lIjogIngiLCAidmFsdWUiOiAiNS4wIn0sIHsiaW5kZXgiOiAiMSIsICJuYW1lIjogInkiLCAidmFsdWUiOiAiOTUuMCJ9XSwgImxpbmVhckNvbnN0cmFpbnRzIjogW3sibmFtZSI6ICJjMSIsICJpbmRleCI6IDB9LCB7Im5hbWUiOiAiYzIiLCAiaW5kZXgiOiAxfV19fQ==']]}, {'id': 'solution.csv', 'fields': ['name', 'value'], 'values': [['x', 5], ['y', 95]]}], 'status': {'state': 'completed', 'running_at': '2020-03-09T06:45:29.759Z', 'completed_at': '2020-03-09T06:45:30.470Z'}}

which contains in the output:

'output_data': [{
        'id': 'test.txt',
        'fields': ['___TEXT___'],
        'values': [['IC0gbnVtYmVyIG9mIHZhcmlhYmxlczogMgogICAtIGJpbmFyeT0wLCBpbnRlZ2VyPTIsIGNvbnRpbnVvdXM9MAogLSBudW1iZXIgb2YgY29uc3RyYWludHM6IDIKICAgLSBsaW5lYXI9Mg==']]
    }, {
        'id': 'solution.json',
        'fields': ['___TEXT___'],
        'values': [['eyJDUExFWFNvbHV0aW9uIjogeyJ2ZXJzaW9uIjogIjEuMCIsICJoZWFkZXIiOiB7InByb2JsZW1OYW1lIjogIm1vZGVsIiwgIm9iamVjdGl2ZVZhbHVlIjogIjEwMC4wIiwgInNvbHZlZF9ieSI6ICJjcGxleF9sb2NhbCJ9LCAidmFyaWFibGVzIjogW3siaW5kZXgiOiAiMCIsICJuYW1lIjogIngiLCAidmFsdWUiOiAiNS4wIn0sIHsiaW5kZXgiOiAiMSIsICJuYW1lIjogInkiLCAidmFsdWUiOiAiOTUuMCJ9XSwgImxpbmVhckNvbnN0cmFpbnRzIjogW3sibmFtZSI6ICJjMSIsICJpbmRleCI6IDB9LCB7Im5hbWUiOiAiYzIiLCAiaW5kZXgiOiAxfV19fQ==']]
    }, {
        'id': 'solution.csv',
        'fields': ['name', 'value'],
        'values': [['x', 5], ['y', 95]]
    }
],
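The `___TEXT___` values are base64-encoded file contents, so they can be recovered client-side with the standard library. For example, decoding the `test.txt` payload from the job details above yields the model statistics the script wrote (a minimal sketch; the string is copied from the output above):

```python
import base64

# base64 payload of 'test.txt' copied from the job details above
encoded = ('IC0gbnVtYmVyIG9mIHZhcmlhYmxlczogMgogICAtIGJpbmFyeT0wLCBpbnRlZ2VyPTIsIGNvbnRp'
           'bnVvdXM9MAogLSBudW1iZXIgb2YgY29uc3RyYWludHM6IDIKICAgLSBsaW5lYXI9Mg==')

print(base64.b64decode(encoded).decode('utf-8'))
#  - number of variables: 2
#    - binary=0, integer=2, continuous=0
#  - number of constraints: 2
#    - linear=2
```

The same decoding applies to the `solution.json` entry; only `solution.csv` comes back as plain fields/values.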

Hope this helps.

Alain

Thanks to Alain for verifying the overall approach, but the main issue was simply a bug in my code:

After calling modelFile.write(...), it's necessary to call modelFile.seek(0) to reset the file pointer - otherwise an empty file gets written to the tar archive.
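A standalone sketch of the fix (filenames and contents are illustrative): seeking on the buffered NamedTemporaryFile flushes the pending bytes to disk, so tarfile.add() - which re-reads the file from disk by name - sees the real contents instead of a 0-byte file:

```python
import tarfile
from tempfile import NamedTemporaryFile

lp_text = b'Minimize\n obj: x\nSubject To\n c1: x >= 5\nEnd\n'  # illustrative LP contents

modelFile = NamedTemporaryFile(suffix='.lp')
modelFile.write(lp_text)
modelFile.seek(0)  # flushes the write buffer to disk; without this, tar.add() archives an empty file

with NamedTemporaryFile(suffix='.tar.gz') as tmp:
    with tarfile.open(tmp.name, 'w:gz') as tar:
        tar.add(modelFile.name, arcname='model.lp')
    # verify the archived member actually carries the contents
    with tarfile.open(tmp.name, 'r:gz') as tar:
        print(tar.getmember('model.lp').size == len(lp_text))  # True

modelFile.close()
```

Calling modelFile.flush() instead of seek(0) would also work here, since the only requirement is that the buffered bytes reach the file on disk before the archive is built.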
