
Optimizing a large MySQL table

I have a table of location codes that is about 2.3 million rows, with no indices. My site needs to routinely query it for things like distinct state codes, counties filtered by state, cities filtered by county and/or state, and so on. The problem is that because the table is so big and has no indices, queries are slow.

My question is: should I create indices on the columns and combinations of columns I'll be querying on? Which indices should I create? I'm imagining these (see the sketch after this list):

  • Unique/primary index on feature_id (obviously)
  • Index on state_numeric
  • Index on state_numeric, county_numeric
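A minimal sketch of those three index definitions, assuming the table is named location_codes (the actual table name isn't given in the question) and that feature_id is unique:

    -- Primary key (or unique index) on the row identifier
    ALTER TABLE location_codes
        ADD PRIMARY KEY (feature_id);

    -- Single-column index for queries filtering on state alone
    CREATE INDEX idx_state ON location_codes (state_numeric);

    -- Composite index for queries filtering on state and county together.
    -- Because MySQL can use any leftmost prefix of a composite index,
    -- this one also serves queries on state_numeric alone, which may
    -- make idx_state above redundant.
    CREATE INDEX idx_state_county
        ON location_codes (state_numeric, county_numeric);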

[Image: screenshot of the table structure, originally linked from the question]

It would be best to create individual indexes on each of the commonly used fields. Note that updates and inserts will be slightly slower, since every index must be maintained on each write, so don't create more indexes than you need. If many of your queries routinely filter on specific combinations of fields, composite indexes on those combinations are worthwhile as well.
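For example, once the composite index from the sketch above exists, queries like the following can be resolved from the index rather than scanning all 2.3 million rows (location_codes is still the assumed table name, and 6 is just an illustrative state code):

    -- Distinct state codes: read from the index, not the full table
    SELECT DISTINCT state_numeric FROM location_codes;

    -- Counties within a state: uses the leftmost prefix of
    -- (state_numeric, county_numeric)
    SELECT DISTINCT county_numeric
    FROM location_codes
    WHERE state_numeric = 6;

    -- Verify which index the optimizer actually chose
    EXPLAIN SELECT DISTINCT county_numeric
    FROM location_codes
    WHERE state_numeric = 6;

Running EXPLAIN on your real queries before and after adding each index is the simplest way to confirm an index is pulling its weight.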

