
What is the best way to make a MuJoCo environment of my own?

I want to make a 3D model of a reaction wheel ( https://github.com/simplefoc/Arduino-FOC-reaction-wheel-inverted-pendulum ) using MuJoCo, and then use reinforcement learning in that MuJoCo environment to keep it balanced. Is it possible to build an env like openai gym[mujoco] and then start learning? Or should I just write an XML file and use it directly with MuJoCo (or mujoco-py)? I would like some advice about:

  1. How to build XML files for MuJoCo
  2. How to plug a reinforcement learning agent into the environment

I feel quite stuck right now, since I couldn't find helpful documents or videos about making and using my own MuJoCo environment. I hope I can get some help here.

Try using MuJoCo's native simulate utility; it is made for exactly this workflow. Edit your XML and reload it in simulate until the model looks and behaves right. The Getting Started section of the MuJoCo documentation has more information on running simulate locally.
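For question 1, a small MJCF file is enough to start iterating in simulate. The model below is only an illustrative sketch of a reaction-wheel pendulum (a pole on a passive hinge with a motor-driven wheel at its tip); the body names, masses, sizes, and gear value are made-up placeholders you would tune against the real hardware:

```xml
<mujoco model="reaction_wheel_pendulum">
  <option gravity="0 0 -9.81"/>
  <worldbody>
    <geom type="plane" size="1 1 0.1"/>
    <!-- Pole on a passive hinge; it falls unless the wheel reacts. -->
    <body name="pole" pos="0 0 0.05">
      <joint name="pole_hinge" type="hinge" axis="0 1 0"/>
      <geom type="capsule" fromto="0 0 0  0 0 0.3" size="0.01" mass="0.1"/>
      <!-- Reaction wheel at the pole tip, spinning about the same axis. -->
      <body name="wheel" pos="0 0 0.3">
        <joint name="wheel_spin" type="hinge" axis="0 1 0"/>
        <geom type="cylinder" size="0.05 0.01" euler="90 0 0" mass="0.2"/>
      </body>
    </body>
  </worldbody>
  <actuator>
    <!-- Only the wheel is actuated; the pole hinge stays passive. -->
    <motor joint="wheel_spin" gear="0.05" ctrlrange="-1 1"/>
  </actuator>
</mujoco>
```

Save this as something like reaction_wheel.xml, open it in simulate, and adjust geometry and inertia until it matches your build.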
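For question 2, most RL libraries expect the Gym-style interface: reset() returning an observation, and step(action) returning (observation, reward, done, info). The skeleton below sketches that interface; to stay runnable without MuJoCo installed it integrates toy pendulum dynamics by hand, whereas a real environment would load your XML with the official mujoco Python bindings and advance the simulation each step. The class name, constants, and reward shape are illustrative assumptions, not a known-good design:

```python
import math
import random

class ReactionWheelEnv:
    """Gym-style environment skeleton for a reaction-wheel pendulum.

    For illustration this integrates simplified dynamics by hand; a real
    version would build mujoco.MjModel from the MJCF file, write the
    action into data.ctrl, and call mujoco.mj_step instead.
    """

    def __init__(self, dt=0.01):
        self.dt = dt
        self.theta = 0.0      # pole angle from upright (rad)
        self.theta_dot = 0.0  # pole angular velocity (rad/s)

    def reset(self):
        # Start near upright with a small random tilt.
        self.theta = random.uniform(-0.05, 0.05)
        self.theta_dot = 0.0
        return self._obs()

    def step(self, action):
        # Clamp the command, mirroring the actuator's ctrlrange.
        torque = max(-1.0, min(1.0, action))
        # Toy dynamics: gravity tips the pole, wheel torque reacts against it.
        theta_ddot = 9.81 * math.sin(self.theta) - 5.0 * torque
        self.theta_dot += theta_ddot * self.dt
        self.theta += self.theta_dot * self.dt
        reward = math.cos(self.theta)           # 1.0 when perfectly upright
        done = abs(self.theta) > math.pi / 2    # episode ends if it falls over
        return self._obs(), reward, done, {}

    def _obs(self):
        # Angle encoded as (cos, sin) to avoid a wrap-around discontinuity.
        return (math.cos(self.theta), math.sin(self.theta), self.theta_dot)
```

Once this interface exists, any Gym-compatible RL library can train on it, e.g. obs = env.reset() then obs, reward, done, info = env.step(action) in a loop.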
