
Best practice for very large data [on hold]

We need to manage very large data sets, around 50 million records per table.

What is the best database, or other tool, for managing data of this size?

I think you can use Hadoop, which is nowadays very useful for managing large databases.

A MongoDB database, the Python language, and Apache Spark work well for large or terabyte-scale data.
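Whichever database is chosen, two techniques matter at tens of millions of rows: inserting in batches rather than row by row, and indexing the columns you query. Below is a minimal, hypothetical sketch using Python's built-in `sqlite3` module (chosen only so the example is self-contained; the same ideas apply to MongoDB's bulk writes or Spark's batch jobs):

```python
import sqlite3

# Illustrative sketch only: an in-memory table standing in for a
# much larger one (50M rows in the question).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, value TEXT)")

# Batch inserts with executemany instead of one INSERT per row:
# far fewer round trips and a single transaction commit.
batch = [(i, f"row-{i}") for i in range(10_000)]
conn.executemany("INSERT INTO records (id, value) VALUES (?, ?)", batch)
conn.commit()

# An index on the queried column keeps lookups fast as the table grows.
conn.execute("CREATE INDEX idx_value ON records (value)")

count = conn.execute("SELECT COUNT(*) FROM records").fetchone()[0]
print(count)  # 10000
```

The batch size and schema here are placeholders; in practice you would tune the batch size to your database's transaction limits and memory budget.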
