
Java Selenium fails to find all elements in the DOM using By.xpath("//*")

I have a strange scenario in which I am unable to find all elements in the DOM.

When viewing the DOM through Firefox / 'Inspect Element', I clearly see some 'div' elements that are not present in the element list generated with Java/Selenium:

List<WebElement> elements = webDriver.findElements(By.xpath("//*"));

I suspect that the line above does not return elements that are children of non-visible elements.

If my suspicion is not correct, then can anyone please explain the reason for what I'm seeing?

Otherwise, if this is indeed the case, then the only way around it would be to go over all non-visible elements and make them visible.

Is there any better way for handling this problem?

If yes - what is it?

If no - how do I make all elements visible (perhaps using JavascriptExecutor)?
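
For example, something along these lines is what I had in mind, assuming the hidden elements are hidden via display:none (which may not be how this page actually hides them):

   JavascriptExecutor js = (JavascriptExecutor) webDriver;
   js.executeScript(
       "var all = document.querySelectorAll('*');" +
       "for (var i = 0; i < all.length; i++) {" +
       "  if (window.getComputedStyle(all[i]).display === 'none') {" +
       "    all[i].style.display = 'block';" +
       "  }" +
       "}");
   List<WebElement> elements = webDriver.findElements(By.xpath("//*"));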

Thanks

The other option is that the elements are in a frame. In that case, you have to call webDriver.switchTo().frame(String name). Don't forget to switch back afterwards, ideally with webDriver.switchTo().defaultContent().
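
For example (the frame name "content" below is just a placeholder):

   // Switch into the frame, search it, then switch back.
   webDriver.switchTo().frame("content");
   List<WebElement> framedElements = webDriver.findElements(By.xpath("//*"));
   webDriver.switchTo().defaultContent();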

I think invisible elements are also accessible by Selenium; I've been accessing some elements that I'd made invisible myself. You cannot interact with them, though.
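
A rough sketch of what I mean (hidden elements do show up in the list, but isDisplayed() returns false and clicking them would fail):

   for (WebElement element : webDriver.findElements(By.xpath("//*"))) {
       if (!element.isDisplayed()) {
           // returned by findElements, but click()/sendKeys() on it would throw
           System.out.println("hidden: " + element.getTagName());
       }
   }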

Naturally, as Dmitry suggests, getting all elements this way is really not feasible.

I'd suggest using a separate HTML parser library to get the required information for all HTML document nodes. For instance:

  1. Get the full page source, using driver.getPageSource();
  2. Use http://jsoup.org or any other parser to parse the document and extract the required data

Here is an example:

   import org.jsoup.Jsoup;
   import org.jsoup.nodes.Document;
   import org.jsoup.nodes.Element;

   String html = "<p>An <a href='http://example.com/'><b>example</b></a> link.</p>";
   Document doc = Jsoup.parse(html);
   Element link = doc.select("a").first();

   String text = doc.body().text(); // "An example link"
   String linkHref = link.attr("href"); // "http://example.com/"
   String linkText = link.text(); // "example"

   String linkOuterH = link.outerHtml(); 
    // "<a href="http://example.com"><b>example</b></a>"
   String linkInnerH = link.html(); // "<b>example</b>"
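
And, roughly, how steps 1 and 2 fit together with WebDriver (just a sketch, using the jsoup imports above):

   // Feed the live page source into jsoup and walk every node of the document.
   String pageSource = driver.getPageSource();
   Document fullDoc = Jsoup.parse(pageSource);
   for (Element el : fullDoc.getAllElements()) {
       System.out.println(el.tagName());
   }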

I would like to add that querying all elements via WebDriver is an extremely slow operation. I faced this issue in my pet project, Page Recorder.
The solution was to use HtmlAgilityPack, a .NET HTML parser, to perform operations on all document nodes.

Problem found:

When I was viewing the webpage through Firefox / Inspect Element, the window was maximized.

When I was scraping the webpage with Java / Selenium, the window was not maximized.

In the specific webpage that I was working on, some elements (mostly advertisements) are added (become "unhidden") as soon as the window reaches a certain size; some JavaScript code running on the client side is probably responsible for this.

So the problem at hand was not within Selenium itself.

In order to solve it, one merely needs to add the following line:

webDriver.manage().window().maximize();
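
So, in context, the fix is simply to maximize before collecting the elements:

   webDriver.manage().window().maximize(); // triggers the size-dependent elements to appear
   List<WebElement> elements = webDriver.findElements(By.xpath("//*"));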
