Are Google Cloud Disks OK to use with SQLite?
Google Cloud disks are network disks that behave like local disks. SQLite expects a local disk so that locking and transactions work correctly.
A. Is it safe to use Google Cloud disks for SQLite?
B. Do they support the right locking mechanisms? How is this done over the network?
C. How do disk IOPS and throughput relate to SQLite performance? If I have a 1 GB SQLite file with queries that take 40 ms to complete locally, how many IOPS would this use? Which disk type should I choose (standard, balanced, SSD)?
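As a rough way to frame question C: SQLite reads data in page-size units (4096 bytes by default), so a query whose pages miss the cache issues roughly one random read per page touched. A back-of-envelope sketch, where the pages-touched and query-rate numbers are purely hypothetical placeholders, not measurements:

```python
# Back-of-envelope IOPS estimate (all numbers below are assumptions).
page_size = 4096          # SQLite default page size, in bytes
pages_touched = 200       # hypothetical pages one 40 ms query reads
queries_per_second = 25   # hypothetical query rate

# Worst case: every touched page is a cache miss -> one random read each.
reads_per_query = pages_touched
iops_needed = reads_per_query * queries_per_second
print(iops_needed)        # -> 5000
```

The real number depends entirely on how many pages your queries touch and how much of the file the OS cache already holds, which is why measuring on your own workload matters.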
Thanks.
Related
https://cloud.google.com/compute/docs/disks#pdspecs
Persistent disks are durable network storage devices that your instances can access like physical disks
https://www.sqlite.org/draft/useovernet.html
the SQLite library is not tested in across-a-network scenarios, nor is that reasonably possible. Hence, use of a remote database is done at the user's risk.
Yeah, the article you referenced essentially stipulates that since reads and writes are "simplified" at the OS level, they can be unpredictable, resulting in "lost in translation" issues when going from local to networked to remote.
They also point out it may very well work totally fine in testing, and perhaps in production for a time, but there are known side effects which are hard to detect and mitigate against -- so it's a slight gamble.
Again, the implementation they are describing is not Google Cloud Disk, but simply stated as a remote networked arrangement.
My point is more that Google Cloud Disk may be more "virtual" rather than purely network-attached storage... to my mind that would be the place to look, and evaluate it from there.
Check out this thread for some additional insight into the issues: https://serverfault.com/questions/823532/sqlite-on-google-cloud-persistent-disk
Additionally, I was looking around and found this thread, where one poster suggests using SQLite as a read-only asset, then deploying updates in a far more controlled process: https://news.ycombinator.com/item?id=26441125
The persistent disk acts like a normal disk in your VM, and is only accessible to one VM at a time. So it's safe to use; you won't lose any data.
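Since the disk looks like a local block device to the VM, SQLite's file locking works the usual way, and you can verify it yourself. A minimal smoke test, where the temp directory stands in for a path on the mounted persistent disk (an assumption for self-containment): one connection takes the write lock, and a second writer is refused.

```python
import os
import sqlite3
import tempfile

# Hypothetical path on the mounted persistent disk (assumption);
# a temp directory stands in for the mount point here.
db_path = os.path.join(tempfile.mkdtemp(), "test.db")

# isolation_level=None -> autocommit; we manage transactions explicitly.
# timeout=0 -> fail immediately instead of waiting for the lock.
writer = sqlite3.connect(db_path, timeout=0, isolation_level=None)
other = sqlite3.connect(db_path, timeout=0, isolation_level=None)

writer.execute("CREATE TABLE t (x INTEGER)")
writer.execute("BEGIN IMMEDIATE")      # take the write lock
writer.execute("INSERT INTO t VALUES (1)")

try:
    other.execute("BEGIN IMMEDIATE")   # second writer must be refused
    lock_held = False
except sqlite3.OperationalError:       # "database is locked"
    lock_held = True

writer.execute("COMMIT")
print("write lock enforced:", lock_held)
```

If this prints `True` on your mounted disk, the locking SQLite relies on is being honored by the filesystem.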
For the performance part, you just have to test it for your specific workload. If you have plenty of spare RAM, and your database is read-heavy and seldom written, the whole database will be cached by the OS (Linux) disk cache, so it will be crazy fast, even on HDD storage.
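"Test it for your specific workload" can be as simple as timing a representative query a few times; the first run pays any cold-cache cost, and the repeats show the cost once the OS page cache is warm. A minimal sketch, where the temp-file database and the `COUNT(*)` query are placeholders for your real file and queries:

```python
import os
import sqlite3
import tempfile
import time

# Hypothetical database file (assumption); in practice point this at
# your real SQLite file on the persistent disk.
db_path = os.path.join(tempfile.mkdtemp(), "app.db")
con = sqlite3.connect(db_path)
con.execute("CREATE TABLE t (x INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10_000)])
con.commit()

query = "SELECT COUNT(*) FROM t"  # stand-in for a real workload query

# First run may hit disk; later runs are served from the warm cache.
timings_ms = []
for i in range(3):
    start = time.perf_counter()
    con.execute(query).fetchall()
    timings_ms.append((time.perf_counter() - start) * 1000)
    print(f"run {i}: {timings_ms[-1]:.2f} ms")
```

Comparing the first timing against the later ones on each disk type (standard, balanced, SSD) tells you how much the disk itself matters for your workload.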
But if you are low on spare RAM, the database won't be in the OS cache, and writes are always synced to disk, which causes lots of I/O operations. In that case, use the highest-performing disk you can, or are willing to, afford.
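If writes are the bottleneck, one knob worth knowing before paying for a faster disk is SQLite's journal mode: WAL appends commits sequentially and needs fewer fsyncs per transaction than the default rollback journal, and `synchronous=NORMAL` cuts fsyncs further at a small durability cost (a crash may lose the most recent commits, but cannot corrupt the database). A sketch, with a temp file standing in for your database:

```python
import os
import sqlite3
import tempfile

# Hypothetical database file (assumption); temp dir stands in for the disk.
db_path = os.path.join(tempfile.mkdtemp(), "app.db")
con = sqlite3.connect(db_path)

# Switch to write-ahead logging: commits become sequential appends to
# the -wal file, reducing random write I/O per transaction.
mode = con.execute("PRAGMA journal_mode=WAL").fetchone()[0]

# NORMAL syncs the WAL only at checkpoints, not on every commit.
con.execute("PRAGMA synchronous=NORMAL")
print(mode)  # -> wal
```

This doesn't change the advice above -- a write-heavy, cache-starved workload still wants the fastest disk you can afford -- it just reduces how many I/O operations each commit costs.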