
Issue with WSO2 API Manager permissions for roles

I have two instances of WSO2 API Manager running on two different servers. Both of them refer to the same UM_DB. I created a role by logging in with admin credentials on one server. After that, I checked for the role on the other server by logging in with admin credentials again. I found that the role existed on the other server, but the permissions I assigned to that role did not exist there. Is this a bug in WSO2 API Manager, or did I miss something in the configuration?

You want to deploy two APIM instances in a cluster. It is better to refer to the APIM clustering guide to set it up properly. There are two things you need to understand when deploying APIM in a cluster:

  1. You must point both instances to the same database. There can be three logical databases, i.e. the UM, Registry, and AM databases. These three can live in one physical DB, but both instances must point to the same databases. A hypothetical datasource snippet is sketched after this list.

  2. You must configure Hazelcast-based clustering in the axis2.xml file. This is required because APIM uses a Hazelcast-based implementation to distribute the data held in its caches, and a lot of data (including the permission tree) is cached for high performance. In your scenario, I guess you have not configured this, so the permission tree has not been distributed between the two nodes. Please make sure to configure this properly; a sample clustering section follows below as well.
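For point 1, here is a minimal sketch of how both nodes might be pointed at the same user-management database. The MySQL host db.example.com, the credentials, and the datasource name WSO2UM_DB are assumptions for illustration; in the usual APIM layout the datasource is defined in repository/conf/datasources/master-datasources.xml and user-mgt.xml refers to it by its JNDI name:

    <!-- repository/conf/datasources/master-datasources.xml (identical on both nodes) -->
    <datasource>
        <name>WSO2UM_DB</name>
        <jndiConfig><name>jdbc/WSO2UM_DB</name></jndiConfig>
        <definition type="RDBMS">
            <configuration>
                <!-- hypothetical shared DB host; both nodes must use the same URL -->
                <url>jdbc:mysql://db.example.com:3306/um_db</url>
                <username>wso2user</username>
                <password>wso2pass</password>
                <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            </configuration>
        </definition>
    </datasource>

    <!-- repository/conf/user-mgt.xml: point the user store at that shared datasource -->
    <Property name="dataSource">jdbc/WSO2UM_DB</Property>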
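For point 2, here is a minimal sketch of the Hazelcast clustering section in repository/conf/axis2/axis2.xml, assuming a well-known-address (wka) membership scheme and the example node IPs 192.168.1.10 and 192.168.1.11 (both hypothetical). Enabling this lets the nodes exchange cache invalidation messages, so changes such as permission-tree updates on one node reach the other:

    <!-- repository/conf/axis2/axis2.xml on node 192.168.1.10 -->
    <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"
                enable="true">
        <parameter name="membershipScheme">wka</parameter>
        <parameter name="domain">wso2.am.domain</parameter>
        <!-- this node's own host and Hazelcast port -->
        <parameter name="localMemberHost">192.168.1.10</parameter>
        <parameter name="localMemberPort">4000</parameter>
        <members>
            <!-- list every well-known member, including the other APIM node -->
            <member>
                <hostName>192.168.1.11</hostName>
                <port>4000</port>
            </member>
        </members>
    </clustering>

The same section goes into the second node's axis2.xml with localMemberHost set to its own address and the members list pointing back at the first node.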

I hope this helps you.
