
docker-compose and django-haystack

I am trying to get docker-compose and django-haystack to work together. I am using the settings below (web and elasticsearch in separate containers) and seeing errors when I try to build my index.

I have narrowed the problem down a bit, and it looks like the elasticsearch container is working as expected; however, the Haystack backend is unable to make the connection.

All containers:

root@movie-new:/home/django/movie# docker-compose ps
        Name                       Command               State                       Ports
---------------------------------------------------------------------------------------------------------------
movie_data_1            /docker-entrypoint.sh true       Up      5432/tcp
movie_db_1              /docker-entrypoint.sh postgres   Up      0.0.0.0:5432->5432/tcp
movie_elasticsearch_1   /docker-entrypoint.sh elas ...   Up      0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp
movie_nginx_1           /usr/sbin/nginx                  Up      0.0.0.0:80->80/tcp
movie_web_1             bash -c python manage.py m ...   Up      8000/tcp
movie_web_run_1         /bin/bash                        Up      8000/tcp
movie_web_run_3         /bin/bash                        Up      8000/tcp

Inside my web container:

root@movie-new:/home/django/movie# docker-compose run --rm web /bin/bash
root@0351ddc88229:/usr/src/app# curl -XGET http://elasticsearch:9200/
{
  "name" : "Dream Weaver",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.1.1",
    "build_hash" : "40e2c53a6b6c2972b3d13846e450e66f4375bd71",
    "build_timestamp" : "2015-12-15T13:05:55Z",
    "build_snapshot" : false,
    "lucene_version" : "5.3.1"
  },
  "tagline" : "You Know, for Search"
}

settings.py

HAYSTACK_CONNECTIONS = {
    'default': {
        'ENGINE': 'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine',
        'URL': 'http://elasticsearch:9200/',
        'INDEX_NAME': 'haystack',
        'TIMEOUT': 120,
    },
}

root@4b397e3ad5dc:/usr/src/app# python manage.py rebuild_index

WARNING: This will irreparably remove EVERYTHING from your search index in connection 'default'.
Your choices after this are to restore from backups or rebuild via the `rebuild_index` command.
Are you sure you wish to continue? [y/N] y
Removing all documents from your index because you said so.
Failed to clear Elasticsearch index: ConnectionError(<urllib3.connection.HTTPConnection object at 0x7fc67c8192d0>: Failed to establish a new connection: [Errno 111] Connection refused) caused by: NewConnectionError(<urllib3.connection.HTTPConnection object at 0x7fc67c8192d0>: Failed to establish a new connection: [Errno 111] Connection refused)
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/haystack/backends/elasticsearch_backend.py", line 234, in clear
    self.conn.indices.delete(index=self.index_name, ignore=404)
  File "/usr/local/lib/python2.7/site-packages/elasticsearch/client/utils.py", line 69, in _wrapped
    return func(*args, params=params, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/elasticsearch/client/indices.py", line 198, in delete
    params=params)
  File "/usr/local/lib/python2.7/site-packages/elasticsearch/transport.py", line 307, in perform_request
    status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout)
  File "/usr/local/lib/python2.7/site-packages/elasticsearch/connection/http_urllib3.py", line 89, in perform_request
    raise ConnectionError('N/A', str(e), e)
ConnectionError: ConnectionError(<urllib3.connection.HTTPConnection object at 0x7fc67c8192d0>: Failed to establish a new connection: [Errno 111] Connection refused) caused by: NewConnectionError(<urllib3.connection.HTTPConnection object at 0x7fc67c8192d0>: Failed to establish a new connection: [Errno 111] Connection refused)

I think what's happening is that the script assumes elasticsearch is already available, but when all the containers start at the same time, elasticsearch might still be starting up and not accepting connections yet.

Before running the migration, either sleep for a fixed number of seconds or, more robustly, retry the connection a few times with a short sleep between attempts.
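A minimal sketch of such a retry helper, runnable under both Python 2.7 (which the traceback shows) and Python 3. The helper names and the elasticsearch URL are illustrative, not from the original post:

```python
import time

try:
    from urllib.request import urlopen  # Python 3
except ImportError:
    from urllib2 import urlopen  # Python 2.7, as in the traceback

def wait_for(check, retries=30, delay=2):
    """Call `check` repeatedly until it returns True; give up after `retries` attempts."""
    for _ in range(retries):
        if check():
            return True
        time.sleep(delay)
    return False

def es_is_up(url='http://elasticsearch:9200/'):
    """Return True if the Elasticsearch HTTP API answers at `url`."""
    try:
        urlopen(url, timeout=5)
        return True
    except Exception:
        return False

# e.g. in the container entrypoint, before `manage.py rebuild_index`:
# if not wait_for(es_is_up):
#     raise SystemExit('elasticsearch never became available')
```

The same idea is often done as a small shell loop in the entrypoint instead; the point is just to block until the port answers rather than assume it is up.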

Where you set up the connection, instead of

connections.create_connection()

write:

connections.create_connection(
    alias='default', 
    hosts=['http://elasticsearch:9200'], 
    timeout=60)

The relevant documentation: https://elasticsearch-dsl.readthedocs.io/en/latest/configuration.html#single-connection-with-an-alias
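Alternatively, the ordering can be handled at the Compose level with a healthcheck. A sketch under assumptions (service names chosen to match the `movie_*` containers above, `curl` assumed present in the elasticsearch image; `condition: service_healthy` requires Compose file format 2.1+):

```yaml
# docker-compose.yml sketch, not the poster's actual file
version: "2.1"
services:
  elasticsearch:
    image: elasticsearch:2.1.1
    healthcheck:
      # mark the container healthy once the HTTP API answers
      test: ["CMD-SHELL", "curl -sf http://localhost:9200/ || exit 1"]
      interval: 5s
      retries: 12
  web:
    build: .
    depends_on:
      elasticsearch:
        condition: service_healthy  # start web only after the healthcheck passes
```

This avoids putting retry logic in application code, at the cost of tying the wait to Compose rather than to the process that actually needs the connection.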
