
How does a Unity3D visual scripting framework work behind the scenes?

I'd like to know what happens under the hood when I use a visual scripting tool in Unity3D.

Let's say I want to make an FSM:

The first approach is to not use such a tool. In that case, I have to do everything in source code (C#): create all the necessary state classes and wire up the whole state machine manually.
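To make the "manual" approach concrete, here is a minimal hand-wired FSM sketch in plain C#. The state names (`Idle`, `Chase`) and event strings are invented for illustration; a real game would use whatever states it needs.

```csharp
using System;
using System.Collections.Generic;

// Hand-wired FSM: every state and every transition lives in source code.
interface IState
{
    void Enter();
    // Returns the name of the next state, or null to stay in this one.
    string Update(string input);
}

class IdleState : IState
{
    public void Enter() => Console.WriteLine("entered Idle");
    public string Update(string input) => input == "enemySeen" ? "Chase" : null;
}

class ChaseState : IState
{
    public void Enter() => Console.WriteLine("entered Chase");
    public string Update(string input) => input == "enemyLost" ? "Idle" : null;
}

class StateMachine
{
    readonly Dictionary<string, IState> states = new Dictionary<string, IState>();
    public string Current { get; private set; }

    public void Add(string name, IState state) => states[name] = state;

    public void Start(string name)
    {
        Current = name;
        states[name].Enter();
    }

    public void Send(string input)
    {
        string next = states[Current].Update(input);
        if (next != null)
        {
            Current = next;
            states[next].Enter();
        }
    }
}
```

Every new state or transition means another class or another line of wiring code, which is exactly the boilerplate a visual tool replaces.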

The second approach is to use a tool like Bolt or NodeCanvas. I still need to write the state classes, but this time, wiring them up is done visually through a node-based graph editor.

The question is: how do these tools convert that graph into something Unity can use? Do they generate C# code from the graph using a templating engine like T4? Or do they do something else?

They can either generate C# code, or save the graph definition to a file, ship a runtime execution engine with your game, and interpret the graph while the game is running. They might also take a hybrid approach or apply various performance optimizations. Either way is possible, so you should consult each tool's design docs to be sure.
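The "interpret at runtime" option can be sketched in a few lines: the graph is just data (a node list with links), and a small engine walks it while the game runs, with no C# ever generated. The node shape and the `print`/`shout` operations here are invented for the sketch; real tools have far richer node models.

```csharp
using System;
using System.Collections.Generic;

// A graph saved as data: each node names an operation, an argument,
// and the index of the node that follows it.
class Node
{
    public string Op;     // e.g. "print" or "shout"
    public string Arg;
    public int Next = -1; // index of the next node, -1 = end of graph
}

static class GraphInterpreter
{
    // The runtime "execution engine": walk the node list and perform
    // each operation. Returns the produced output for inspection.
    public static List<string> Run(List<Node> graph)
    {
        var output = new List<string>();
        int current = 0;
        while (current >= 0)
        {
            Node node = graph[current];
            switch (node.Op)
            {
                case "print": output.Add(node.Arg); break;
                case "shout": output.Add(node.Arg.ToUpper()); break;
            }
            current = node.Next;
        }
        return output;
    }
}
```

Persisting `List<Node>` to JSON or a binary asset and loading it back is all a tool needs to "save the graph definitions to a file".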

As for Bolt, this post on the developer's official blog implies that the current stable version doesn't generate any C# code. But apparently the team is actively working on a new version that supports code generation:

We are currently actively working on Bolt 2, a major new version that includes massive overhauls and new features such as C# generation, classes, vertical flow, tweening, generics, a fresh new look, and a lot more.

As for NodeCanvas, I'm not sure whether it generates any C# code, but judging by the package contents on the Asset Store, it looks like it works in a similar way to Bolt.

Unity has edit mode, play mode, and runtime. In edit mode, the design you draw is saved to a file. Some tools use custom formats (binary, JSON, or something else); LogicForge, for example, uses ScriptableObjects to stay fully compatible with Unity's dependency and serialization management. Graphs can be designed in edit mode and/or play mode. Logic Forge supports both, because in play mode you can see what your design is doing right away.

A node can represent an atomic function or a complex component. It's good if the visual tool lets you design your own nodes (also visually) and create nodes automatically from any MonoBehaviour script dropped into the design. Nodes can be static (already coded) or dynamic. Dynamic nodes use reflection to create their functions in play mode/runtime, and they are optimized with cached and IL-emitted delegates. Code can be generated every time you change the graph, but then you have to wait through Unity's long recompile, or it can be generated on request. Tools can also offer an option to compact a design, generating code and a more complex node from it.
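The "dynamic nodes optimized with cached delegates" idea can be sketched as follows: instead of calling `MethodInfo.Invoke` every time the node fires (which is slow), build a typed delegate once with `Delegate.CreateDelegate` and cache it, so later calls are plain delegate invocations. The `MathOps.Square` method and the `Func<int, int>` signature are assumptions made for the sketch; real tools handle arbitrary signatures.

```csharp
using System;
using System.Reflection;

// A stand-in for some user method a dynamic node should call.
class MathOps
{
    public static int Square(int x) => x * x;
}

class DynamicNode
{
    readonly Func<int, int> cached;

    public DynamicNode(Type type, string methodName)
    {
        // Reflection happens once, when the node is built...
        MethodInfo method = type.GetMethod(
            methodName, BindingFlags.Public | BindingFlags.Static);
        cached = (Func<int, int>)Delegate.CreateDelegate(
            typeof(Func<int, int>), method);
    }

    // ...and every later call goes through the cached delegate,
    // avoiding the per-call cost of MethodInfo.Invoke.
    public int Invoke(int value) => cached(value);
}
```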
A graph asset can be attached to a GameObject and loaded with the scene, or loaded as a resource at runtime, which then instantiates the design: the tool reads the serialized data and creates the components on the game object.

Some tools have an FSM built in, so you can switch between different graph designs/logics based on conditions. Logic Forge has a general event-based FSM and an animation Playables blender (so you don't need to use Mecanim).
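A built-in FSM over whole graphs can be sketched as a transition table: each state names an active graph, and string events move between them. The graph names (`Patrol`, `Combat`) and event names are invented for the sketch; in a real tool each state would reference an actual graph asset rather than a string.

```csharp
using System;
using System.Collections.Generic;

// Event-based FSM where each "state" stands for a whole graph design.
class GraphFsm
{
    readonly Dictionary<(string state, string evt), string> transitions =
        new Dictionary<(string, string), string>();

    public string ActiveGraph { get; private set; }

    public GraphFsm(string initialGraph) => ActiveGraph = initialGraph;

    public void AddTransition(string from, string evt, string to) =>
        transitions[(from, evt)] = to;

    // On an event, switch the active graph if a transition matches;
    // otherwise stay where we are.
    public void Raise(string evt)
    {
        if (transitions.TryGetValue((ActiveGraph, evt), out string next))
            ActiveGraph = next;
    }
}
```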

You have to understand lexers and parsers to understand this fully.

When you write code in a non-graphical way, the computer has to turn that text into a machine-readable format, since the machine can't directly understand characters like a and b. The code is ultimately translated into binary (often displayed as hexadecimal, which is just another notation for the same binary).

Now the question remains: how does visual scripting work?

The developer has their own lexer and parser: the tool iterates through the visual components and creates code from them, and that code is then interpreted or compiled, depending on the language.

Since you're asking about Unity, which is based on C#, the generated code will be compiled.
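The "iterate through the visual components and create code" step can be sketched as a trivial code generator: walk the node list and emit C# source text, which would then be handed to the compiler. The `PrintNode` shape and the emitted method name are made up for the sketch; real tools template entire classes.

```csharp
using System;
using System.Collections.Generic;
using System.Text;

// A minimal "visual component" the generator consumes.
class PrintNode
{
    public string Message;
}

static class CodeGenerator
{
    // Walks the graph and emits C# source as a string. A real tool
    // would write this to a .cs file and let Unity compile it.
    public static string Emit(List<PrintNode> nodes)
    {
        var sb = new StringBuilder();
        sb.AppendLine("public static void RunGraph()");
        sb.AppendLine("{");
        foreach (var node in nodes)
            sb.AppendLine($"    Console.WriteLine(\"{node.Message}\");");
        sb.AppendLine("}");
        return sb.ToString();
    }
}
```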
