
JSoup.connect throws 403 error while apache.httpclient is able to fetch the content

I am trying to parse the HTML dump of a given page. I used HTML Parser and also tried Jsoup for parsing.

I found Jsoup's functions useful, but I get a 403 error when calling Document doc = Jsoup.connect(url).get();

I tried Apache HttpClient to get the HTML dump, and it succeeded for the same URL.
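For comparison, a minimal sketch of the HttpClient side (the URL is a placeholder, and the explicit User-Agent header shown here is an assumption — HttpClient also sends its own default User-Agent, which many sites accept while rejecting the bare Java one that Jsoup falls back on):

```java
import org.apache.http.client.methods.HttpGet;

public class UserAgentDemo {
    public static void main(String[] args) {
        // Hypothetical target URL; replace with the page you need.
        HttpGet request = new HttpGet("https://example.com/");
        // Apache HttpClient lets you set any request header explicitly:
        request.setHeader("User-Agent", "Mozilla/5.0");
        // Inspect the header we just set (no network call is made here);
        // to actually fetch, pass the request to a CloseableHttpClient.
        System.out.println(request.getFirstHeader("User-Agent").getValue());
    }
}
```

The request is only built, not executed, so this snippet shows the header configuration without hitting the network.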

Why does Jsoup return a 403 for the same URL that Commons HttpClient fetches successfully? Am I doing something wrong? Any thoughts?

The working solution is as follows (thanks to Angelo Neuschitzer for the reminder to post it as a solution):

Document doc = Jsoup.connect(url).userAgent("Mozilla").get();
Elements links = doc.getElementsByTag(HTML.Tag.CITE.toString());
for (Element link : links) {
    String linkText = link.text();
    System.out.println(linkText);
}

So, setting the userAgent does the trick :)
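The same fix can be verified without a network round trip: Jsoup's Connection exposes the request it will send, so you can confirm the User-Agent header is set before calling get(). A minimal sketch (the URL and the longer browser-style UA string are assumptions for illustration):

```java
import org.jsoup.Connection;
import org.jsoup.Jsoup;

public class JsoupUserAgentDemo {
    public static void main(String[] args) {
        String url = "https://example.com/"; // hypothetical URL
        Connection conn = Jsoup.connect(url)
                // A fuller browser-like string than plain "Mozilla" can help
                // against stricter servers:
                .userAgent("Mozilla/5.0")
                .timeout(10_000); // fail fast instead of hanging
        // Inspect the configured request without sending it:
        System.out.println(conn.request().header("User-Agent"));
        // To actually fetch: Document doc = conn.get();
    }
}
```

Without userAgent(), Jsoup leaves the header to the underlying Java HTTP stack, whose default ("Java/1.x") is what many servers reject with 403.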
