
Created a Sitemap but Google Gives a "Couldn't fetch" Error

In Django, I've created a sitemap for my project. I didn't use the built-in Django sitemap framework; instead, I created a view and a template and pointed to them from urls.py. When I open it in the browser, the sitemap works fine. But when I add my sitemap to Search Console, Google gives me a "Couldn't fetch" error.

I couldn't figure out what the problem is. Any ideas?

My View:

# Assumes: from datetime import datetime
# Assumes: from django.views.generic import ListView
class SitemapPost(ListView):

    model = Movies
    ordering = ['-pk']
    template_name = "front/sitemap-post.xml"
    content_type = 'application/xml'  # serve the template as XML, not HTML

    def get_context_data(self, **kwargs):
        context = super().get_context_data(**kwargs)
        context['today'] = datetime.now()
        return context

Urls:

path('sitemap-post.xml', SitemapPost.as_view(),name='sitemap-post'),

template:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:image="http://www.google.com/schemas/sitemap-image/1.1" xsi:schemaLocation="http://www.sitemaps.org/schemas/sitemap/0.9 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd http://www.google.com/schemas/sitemap-image/1.1 http://www.google.com/schemas/sitemap-image/1.1/sitemap-image.xsd" xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  {% for object in object_list %}
  <url>
    <loc>https://{{request.get_host}}{{ object.get_absolute_url }}</loc>
    <lastmod>{{today|date:"Y-m-d"}}</lastmod>
    <changefreq>daily</changefreq>
    <priority>0.9</priority>
  </url>
{% endfor %}
</urlset>

And here is my sitemap file: link
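Before pointing Search Console at the sitemap, it can help to rule out malformed output. A minimal local check with Python's standard library, assuming the template above; the sample XML and URL below are illustrative, not taken from the real site's output:

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def check_sitemap(xml_text):
    """Parse sitemap XML and return its <loc> URLs; raises ParseError if malformed."""
    root = ET.fromstring(xml_text)
    # The root element must be <urlset> in the sitemaps.org namespace.
    assert root.tag == "{%s}urlset" % SITEMAP_NS, "root element must be <urlset>"
    return [loc.text for loc in root.iter("{%s}loc" % SITEMAP_NS)]

# Hypothetical sample output from the template above.
sample = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://olefilm.icu/movies/1/</loc>
    <lastmod>2020-01-01</lastmod>
  </url>
</urlset>"""

print(check_sitemap(sample))  # ['https://olefilm.icu/movies/1/']
```

If this raises a `ParseError` on the live sitemap (fetched with curl, for example), the "Couldn't fetch" error may actually be a parsing failure on Google's side.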

Robots Txt:

Sitemap: https://olefilm.icu/sitemap-post.xml
User-Agent: *
Disallow: /headoffice/
Disallow: /admin/
Disallow: /accounts/login/
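Another culprit worth ruling out is a robots.txt rule that blocks Googlebot from the sitemap URL itself. Python's `urllib.robotparser` can check the rules above locally (using the robots.txt content from the question):

```python
from urllib.robotparser import RobotFileParser

robots_txt = """\
Sitemap: https://olefilm.icu/sitemap-post.xml
User-Agent: *
Disallow: /headoffice/
Disallow: /admin/
Disallow: /accounts/login/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# The sitemap URL itself must be fetchable by Googlebot.
print(rp.can_fetch("Googlebot", "https://olefilm.icu/sitemap-post.xml"))  # True
print(rp.can_fetch("Googlebot", "https://olefilm.icu/admin/"))            # False
print(rp.site_maps())  # ['https://olefilm.icu/sitemap-post.xml'] (Python 3.8+)
```

Here the rules look fine: the sitemap is not disallowed, so the fetch error is likely elsewhere (DNS, server-side bot blocking, or, as it turned out, Search Console itself).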

**SOLVED**

I changed my domain name and tried again, and it worked. I think this was a Search Console bug.

Why not use the Django sitemap framework? It's very good. Anyway, try adding this to your robots.txt file:

Sitemap: linkToSitemap

User-agent: Googlebot
Disallow:

User-agent: AdsBot-Google
Disallow:

User-agent: Googlebot-Image
Disallow:

This allows Google to crawl your website, as some servers automatically block it. Be sure to add robots.txt to your urls if you have not already:

# Assumes: from django.views.generic import TemplateView
path('robots.txt', TemplateView.as_view(template_name="robots.txt", content_type="text/plain")),
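For reference, the built-in framework the answer recommends replaces the hand-rolled view and template entirely. A minimal sketch, assuming the same `Movies` model with a working `get_absolute_url()`; this is an untested configuration fragment and requires `'django.contrib.sitemaps'` (and the sites framework) in `INSTALLED_APPS`:

```python
# sitemaps.py -- sketch using django.contrib.sitemaps (assumed project layout)
from django.contrib.sitemaps import Sitemap
from .models import Movies

class MoviesSitemap(Sitemap):
    changefreq = "daily"
    priority = 0.9

    def items(self):
        # get_absolute_url() on each item supplies the <loc> value.
        return Movies.objects.order_by('-pk')

# urls.py
from django.contrib.sitemaps.views import sitemap
from django.urls import path

sitemaps = {'movies': MoviesSitemap}

urlpatterns = [
    path('sitemap.xml', sitemap, {'sitemaps': sitemaps},
         name='django.contrib.sitemaps.views.sitemap'),
]
```

With this, Django generates valid sitemap XML with the correct content type for you, so there is no hand-written template to get wrong.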
