
What db to choose, using Rails? Large db

I'm using a very large database; some tables have more than 30,000,000 entries. I'm currently using MySQL, but for some search queries I have to wait more than 1-2 minutes. Is there any way to improve the speed? Also, is there another database that would be faster? And what should I use in Rails? Right now I'm using a simple %LIKE%; is there another way to search on a field?

Using %LIKE% for full-text searching with that many rows will not yield good performance. I would highly recommend using something like Sphinx and the excellent ThinkingSphinx gem. It works with MySQL and Rails/ActiveRecord out of the box, and configuration is simple.

This guide describes how to install the Sphinx search daemon onto a particular platform.

This guide describes how to install and use the thinking-sphinx gem. (Note that Rails 2 and 3 have different install guides, so make sure you follow the correct one.)

After the install, you can define indexes on the columns of the model you're interested in searching through. For example:
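Here is a minimal sketch of an index definition and a search call, assuming a hypothetical Article model with title and description columns (this uses the Thinking Sphinx v2 define_index syntax for Rails 3; the Rails 2 setup differs, as noted above):

# Hypothetical Article model; the column names are assumptions.
class Article < ActiveRecord::Base
  # Thinking Sphinx v2 (Rails 3) index definition
  define_index do
    indexes title
    indexes description
  end
end

# After building the index (rake ts:index) and starting the daemon
# (rake ts:start), searching replaces the slow %LIKE% query:
Article.search "facebook", :page => 1, :per_page => 50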

Ryan Bates did a Railscasts episode on full-text searching with ThinkingSphinx, which is a great way to start.

Happy coding!

If you want to stick with MySQL, then go with MyISAM.

Come up with a strategy for creating indexes intelligently (see the migration sketch after this list).

Pre-calculate complex results strategically.

Only return the data you need.

Denormalize parts of the database to increase performance.

Use many replication slaves for reads.

Split up the database into many databases (sharding).

You will need to test a lot.
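To make the indexing point above concrete, here is a minimal sketch of a Rails migration that adds a plain index (the articles table and title column are hypothetical names for illustration):

# Hypothetical migration; table and column names are assumptions.
class AddIndexToArticlesTitle < ActiveRecord::Migration
  def self.up
    add_index :articles, :title
  end

  def self.down
    remove_index :articles, :title
  end
end

Keep in mind that a plain B-tree index only helps prefix searches (LIKE 'foo%'); a leading-wildcard %LIKE% still scans the table, which is why full-text indexing or Sphinx is suggested elsewhere in this thread.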

Regarding %LIKE%:

I ran into a similar situation where I was using LIKE wildcards to search a large table, with 3-4 minute query times. I was able to solve some of the issue by indexing the column and using MATCH(). This might not work in your case if you need accuracy.

Refer to http://dev.mysql.com/doc/refman//5.5/en/fulltext-search.html
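For the Rails side, a minimal sketch under stated assumptions (a hypothetical articles table with a title column; note that in MySQL 5.5, FULLTEXT indexes require MyISAM tables):

# Hypothetical migration creating the FULLTEXT index via raw SQL.
class AddFulltextIndexToArticles < ActiveRecord::Migration
  def self.up
    execute "CREATE FULLTEXT INDEX index_articles_on_title ON articles (title)"
  end

  def self.down
    execute "DROP INDEX index_articles_on_title ON articles"
  end
end

# Natural-language full-text search; the bind parameter avoids
# interpolating user input directly into the SQL string.
Article.where("MATCH(title) AGAINST(?)", params[:search])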

As a note: I am using a combo of both LIKE and MATCH; you can see it here: http://anotherfeed.com/feedmap.php?search=facebook . The first 50 results are returned with LIKE, the last 50 with MATCH, displaying the index score as computed by MySQL. The first 50 are always more accurate, since I am only indexing titles and not full descriptions.

// Full-text query scoring each row; note the raw $_REQUEST interpolation is open to SQL injection and should be escaped.
$query2="SELECT *, MATCH(pagename) AGAINST('".urldecode($_REQUEST['search'])."') AS score FROM anotherfeedv3.af_freefeed WHERE MATCH(pagename) AGAINST('".urldecode($_REQUEST['search'])."') LIMIT $start, $limit";
