
How to handle aws dynamo db 400 KB record limit without changing my codebase

In AWS DynamoDB we cannot store more than 400 KB of data in a single record [Reference]. Based on suggestions online, I can either compress the data before storing it or upload part of it to an AWS S3 bucket, both of which I am fine with.
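For context, here is a minimal sketch of the compress-before-store idea, assuming the Node.js aws-sdk v2 DocumentClient and a table named `MyTable` keyed by `id` (the table and attribute names are placeholders, not from the original question):

```javascript
// Sketch only: gzip a large payload with Node's built-in zlib before writing,
// and gunzip it after reading. Table/attribute names are hypothetical.
const AWS = require('aws-sdk');
const zlib = require('zlib');

const doc = new AWS.DynamoDB.DocumentClient();

async function putCompressed(id, payloadObj) {
  // DynamoDB stores Buffers as the Binary (B) type
  const body = zlib.gzipSync(Buffer.from(JSON.stringify(payloadObj)));
  await doc.put({ TableName: 'MyTable', Item: { id, body } }).promise();
}

async function getDecompressed(id) {
  const { Item } = await doc.get({ TableName: 'MyTable', Key: { id } }).promise();
  if (!Item) return null;
  return JSON.parse(zlib.gunzipSync(Item.body).toString('utf8'));
}
```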

But my application (a JavaScript/Express server plus many JS Lambdas/microservices) is too large, and adding the above logic would require a heavy rewrite and extensive testing. Currently there is an immediate requirement from a big client that demands >400 KB storage in the DB, so is there any alternative way to solve the problem that doesn't make me change my existing code that fetches records from the DB?

I was thinking more along these lines:
My backend makes a DynamoDB call to fetch the record as it does now (we use a mix of vogels and aws-sdk to make DB calls) -> the call is intercepted by a Lambda (or something else) that handles the necessary compression/decompression/S3 work with DynamoDB and returns the data to the backend.
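One way to approximate that "intercept" idea without a separate Lambda is to wrap the SDK client in the one module where it is created, so the S3 offloading happens transparently and call sites stay untouched. A rough sketch, assuming aws-sdk v2, a hypothetical `largeBlobS3Key` pointer attribute, a bucket name, and a 350 KB threshold that are all my assumptions rather than anything from the question:

```javascript
// Sketch: wrap DocumentClient so oversized payloads are offloaded to S3 on
// put() and re-hydrated on get(). Only the .promise() style is handled here,
// and all names (bucket, attributes, 'id' key) are hypothetical.
const AWS = require('aws-sdk');

const s3 = new AWS.S3();
const BUCKET = 'my-overflow-bucket';   // assumption
const LIMIT = 350 * 1024;              // stay safely under the 400 KB item limit

function wrapDocumentClient(doc) {
  const rawPut = doc.put.bind(doc);
  const rawGet = doc.get.bind(doc);

  doc.put = (params) => {
    const serialized = JSON.stringify(params.Item);
    if (Buffer.byteLength(serialized) <= LIMIT) return rawPut(params);
    const key = `${params.TableName}/${params.Item.id}`;   // assumes an 'id' key
    return {
      promise: async () => {
        // Store the real payload in S3 and keep only a pointer item in DynamoDB
        await s3.putObject({ Bucket: BUCKET, Key: key, Body: serialized }).promise();
        return rawPut({
          TableName: params.TableName,
          Item: { id: params.Item.id, largeBlobS3Key: key },
        }).promise();
      },
    };
  };

  doc.get = (params) => ({
    promise: async () => {
      const res = await rawGet(params).promise();
      if (res.Item && res.Item.largeBlobS3Key) {
        const obj = await s3
          .getObject({ Bucket: BUCKET, Key: res.Item.largeBlobS3Key })
          .promise();
        res.Item = JSON.parse(obj.Body.toString('utf8'));
      }
      return res;
    },
  });

  return doc;
}

module.exports = wrapDocumentClient(new AWS.DynamoDB.DocumentClient());
```

This obviously does not cover vogels, queries, batch operations, or the callback style, so it is only a stopgap for the simple get/put paths.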

Is the above approach possible, and if yes, how can I go about implementing it? Or if you have a better way, please do tell.

PS. Going forward I will definitely rewrite my codebase to take care of this; what I am asking for is an immediate stopgap solution.

Split the data into multiple items. You'll have to change a little client code, but hopefully you have a data access layer, so it's just a small change in one place. If you don't have a DAL, from now on always have a DAL. :)

For the payload of a big item, use the regular item as a manifest that points at the segmented items. Then batch-get those segmented items.
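A rough sketch of that manifest-plus-segments read path, assuming the aws-sdk v2 DocumentClient, a table keyed by `id`, a hypothetical `segments` attribute on the manifest listing segment keys, and a `chunk` attribute on each segment item (all names are illustrative):

```javascript
// Sketch: read a manifest item, batch-get its segment items, and reassemble
// the payload. Table/attribute names are hypothetical.
const AWS = require('aws-sdk');
const doc = new AWS.DynamoDB.DocumentClient();
const TABLE = 'MyTable';

async function getBigItem(id) {
  const { Item: manifest } = await doc.get({ TableName: TABLE, Key: { id } }).promise();
  if (!manifest || !manifest.segments) return manifest;   // small item: return as-is

  // BatchGetItem supports up to 100 keys per call; assume <=100 segments here
  const { Responses } = await doc.batchGet({
    RequestItems: { [TABLE]: { Keys: manifest.segments.map(segId => ({ id: segId })) } },
  }).promise();

  // Reassemble the chunks in the order listed by the manifest
  const byId = new Map(Responses[TABLE].map(seg => [seg.id, seg.chunk]));
  return JSON.parse(manifest.segments.map(segId => byId.get(segId)).join(''));
}
```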

This assumes compression alone isn't always sufficient. If it is, just do that.
