
Microservices centralized database model

Currently we have some microservices; each has its own database model and migrations provided by the GORM Go package. We have a big, old MySQL database, which goes against microservice principles, but we can't replace it. I'm afraid that when the number of microservices starts to grow, we will get lost among the many database models. When I add a new column in a microservice, I just type service migrate in the terminal (there is a CLI for the run and migrate commands), and it refreshes the database.

What is the best practice for managing this? For example, if I have 1000 microservices, no one will type service migrate for each of them when someone changes the models. I'm thinking about a centralized database service where we just add a new column and it stores all the models with all the migrations. The only problem is how the services will get to know about database model changes. This is how we store, for example, a user in a service:

type User struct {
    ID        uint           `gorm:"column:id;not null" sql:"AUTO_INCREMENT"`
    Name      string         `gorm:"column:name;not null" sql:"type:varchar(100)"`
    Username  sql.NullString `gorm:"column:username;not null" sql:"type:varchar(255)"`
}

func (u *User) TableName() string {
    return "users"
}

Depending on your use cases, MySQL Cluster might be an option. The two-phase commits used by MySQL Cluster make frequent writes impractical, but if write performance isn't a big issue, then I would expect MySQL Cluster to work out better than connection pooling or queuing hacks. Certainly worth considering.

If I'm understanding your question correctly, you're trying to keep using one MySQL instance, but with many microservices.

There are a couple of ways to make an SQL system work:

  1. You could create a microservice type that handles data inserts/reads from the database, taking advantage of connection pooling, and have the rest of your services do all their data reads/writes through these services. This will definitely add a bit of extra latency to all your reads/writes and will likely be problematic at scale.

  2. You could look for a multi-master SQL solution (e.g. CitusDB) that scales easily; you can use a central schema for your database and just make sure to handle edge cases for data insertion (de-duping, etc.).

  3. You could use data-streaming architectures like Kafka or AWS Kinesis to transfer your data to your microservices and make sure they only deal with data through these streams. This way, you can decouple your database from your data.

In my opinion, the best way to approach it is #3. This way, you won't have to think about your storage at the computation layer of your microservice architecture.

Not sure what platform you're using for your microservices, but StdLib enforces a few conventions (e.g. only transferring data over HTTP) that help folks wrap their heads around it all. AWS Lambda also works very well with Kinesis as an event source to launch functions, which could help with the #3 approach.

Disclaimer: I'm the founder of StdLib.

If I understand your question correctly, it seems to me that there may be multiple ways to achieve this.

One solution is to have a schema version somewhere in the database that your microservices periodically check. When your database schema changes, you increase the schema version. If a service then notices that the database schema version is higher than the service's current schema version, it can migrate the schema in code, which GORM allows.

Other options could depend on how you run your microservices. For example, if you run them on an orchestration platform (e.g. Kubernetes), you could put the migration code somewhere that runs when your service initializes. Then, once you update the schema, you can force a rolling restart of your containers, which would in turn trigger the migration.
