
DynamoDB stream directly into Elasticsearch without another middle layer

Can we stream DynamoDB data directly into the AWS Elasticsearch Service without using Logstash, since Logstash would incur extra cost? In all the articles I have read online, this is achieved either with Logstash or with a Lambda.

It seems that you can: https://aws.amazon.com/blogs/compute/indexing-amazon-dynamodb-content-with-amazon-elasticsearch-service-using-aws-lambda/

I've used DynamoDB on AWS in the past, setting up streams to push changes from DynamoDB to an endpoint and then using Logstash to read from the endpoint and write the changes to ES. It seems that now you can attach a Lambda to the streams and write to ES directly, without needing Logstash.
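The stream-to-Lambda path could be sketched roughly as below: deserialize the DynamoDB AttributeValue format in each stream record and post the documents to the ES `_bulk` API. The endpoint, index name, and key handling here are placeholder assumptions, and a real deployment would normally sign the request with SigV4 (or restrict the domain's access policy) rather than send it unsigned:

```python
import json
import urllib.request

# Hypothetical Amazon ES domain endpoint -- replace with your own.
ES_ENDPOINT = "https://search-my-domain.us-east-1.es.amazonaws.com"

def deserialize(attr):
    """Convert a DynamoDB AttributeValue (e.g. {"S": "x"}) to a plain Python value."""
    (kind, value), = attr.items()
    if kind == "S":
        return value
    if kind == "N":
        return float(value) if "." in value else int(value)
    if kind in ("BOOL", "NULL"):
        return value if kind == "BOOL" else None
    if kind == "L":
        return [deserialize(v) for v in value]
    if kind == "M":
        return {k: deserialize(v) for k, v in value.items()}
    raise ValueError("Unsupported attribute type: %s" % kind)

def to_bulk_actions(event, index="my-index"):
    """Translate a DynamoDB Streams event into an ES _bulk request body (NDJSON)."""
    lines = []
    for record in event["Records"]:
        # Build a document id from the table's key attributes.
        keys = record["dynamodb"]["Keys"]
        doc_id = "|".join(str(deserialize(v)) for v in keys.values())
        if record["eventName"] == "REMOVE":
            lines.append(json.dumps({"delete": {"_index": index, "_id": doc_id}}))
        else:  # INSERT or MODIFY carry the full item in NewImage
            doc = {k: deserialize(v) for k, v in record["dynamodb"]["NewImage"].items()}
            lines.append(json.dumps({"index": {"_index": index, "_id": doc_id}}))
            lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

def handler(event, context):
    """Lambda entry point: forward the batched stream records to ES."""
    body = to_bulk_actions(event)
    req = urllib.request.Request(
        ES_ENDPOINT + "/_bulk", data=body.encode(),
        headers={"Content-Type": "application/x-ndjson"})
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Deletes are handled via the `REMOVE` event name so documents disappear from the index when the item is deleted from the table, which is something the stream gives you for free.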

That said, the Logstash approach would also migrate all the existing data in the DynamoDB table on start. The streams/Lambda approach listed above apparently doesn't do this. To provide that functionality, the article mentions setting up an additional Kinesis Stream, using that as a second input to your ES writer Lambda, then running some Python code (or similar) to load all the existing data from the DB into the Kinesis stream.
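That backfill step could look roughly like the sketch below: scan the existing table and replay every item into the Kinesis stream that feeds the same writer Lambda. The table, stream, and region names are placeholders; the batching helper respects the 500-record limit of the Kinesis `PutRecords` API:

```python
import json

def chunks(items, size):
    """Split a list into batches (PutRecords accepts at most 500 records per call)."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def backfill(table_name, stream_name, region="us-east-1"):
    """Scan an existing DynamoDB table and replay every item into a Kinesis stream.

    The ES writer Lambda would consume this stream as its second event source.
    """
    import boto3  # AWS SDK; imported here so the batching helper above stays dependency-free

    dynamodb = boto3.client("dynamodb", region_name=region)
    kinesis = boto3.client("kinesis", region_name=region)

    # Paginate through the full table; each page holds raw AttributeValue items,
    # the same wire format the stream records carry.
    paginator = dynamodb.get_paginator("scan")
    for page in paginator.paginate(TableName=table_name):
        records = [
            {"Data": json.dumps(item).encode(),
             "PartitionKey": json.dumps(item)[:256]}  # crude key; a real script would use the table key
            for item in page["Items"]
        ]
        for batch in chunks(records, 500):
            kinesis.put_records(StreamName=stream_name, Records=batch)
```

This is a one-shot script rather than anything long-running, which is part of why the article treats the backfill as a separate moving part from the stream itself.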

This seems like a lot of complexity, and potentially more cost, than just using Logstash to cover both scenarios.
