
How to call Keyword recursively in Robot Framework?

I have a test case that traverses all the links on a web site. To achieve this, I'm following the steps below.

  1. Load home page
  2. Get all links using href tag
  3. Load each link obtained in step 2, in a loop.
  4. Get all links from the page loaded in step 3.
  5. For each link from step 4, repeat from step 2....
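For illustration, the steps above can be modeled with a short, self-contained Python sketch (hypothetical names and data; the site is a plain dict instead of real pages, and `check_link` stands in for the page-load check):

```python
# Hypothetical model of the recursive traversal: SITE maps each page
# to the links found on it, so no real HTTP requests are needed.
SITE = {
    "/": ["/a", "/b"],
    "/a": ["/b", "/c"],
    "/b": ["/"],
    "/c": [],
}

def check_link(link):
    # Stand-in for "load the page and make sure it is not a 404".
    assert link in SITE

def visit_all_links(link, visited):
    if link in visited:          # guard so cyclic links terminate
        return
    visited.add(link)
    check_link(link)             # steps 1/3: load and verify the page
    for child in SITE[link]:     # steps 2/4: get all links on the page
        visit_all_links(child, visited)  # step 5: recurse; call depth grows

visited = set()
visit_all_links("/", visited)
```

Each nested call here corresponds to a nested keyword call in Robot Framework, which is what eventually trips the started-keywords limit on a deep site.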

The pseudocode of the recursive keyword is as follows.

Visit All Links
    [Arguments]    ${link}
    Check Link    ${link}    # keyword that loads the page and checks it does not return a 404
    ${links}=    Get Links    # collect all hrefs on the page
    FOR    ${child}    IN    @{links}
        Visit All Links    ${child}    # recursive call
    END

Current behaviour: it fails with a "Maximum limit of started keywords exceeded" error. To fix this, I changed the _started_keywords_threshold value in context.py under robot/running. Then it works fine.

Is there another way to do this without modifying context.py? Or is there a simpler way to traverse all pages in Robot Framework?

I think I had a similar issue to yours (I built a script to get all links and child links from a given base URL). The way I managed to get past the limit was to avoid recursion: first collect all the links into a list, then check that list for 404s.

The basic idea is:

*** Variables ***
@{allLinks}    ${site}    # base url as first list input in GLOBAL List

*** Test Cases ***
Main Test
   FOR    ${i}    IN RANGE    9999999    # acts as a while loop; exits with "Exit For Loop If"
       ${qtLinks}=    Get Length    ${allLinks}
       # For quick test runs, cap the number of links checked:
       # Exit For Loop If    ${qtLinks} > 20
       # Stop once every collected link has been searched for child links
       Exit For Loop If    ${i} >= ${qtLinks}
       Get all Child Links    ${allLinks}[${i}]    ${allLinks}
   END

*** Keywords ***
Get all Child Links
    [Arguments]    ${myLink}    ${allLinks}
    Go To    ${myLink}
    Sleep    1 secs
    ${pagelinks}=    Get WebElements    xpath://a[@href]    # every anchor with an href
    ${qtd}=    Get Length    ${pagelinks}
    IF    ${qtd} > 0
        FOR    ${element}    IN    @{pagelinks}
            ${pLink}=    SeleniumLibrary.Get Element Attribute    ${element}    href
            # Append each link itself, not a nested list, so the main loop
            # can index ${allLinks} element by element
            Append To List    ${allLinks}    ${pLink}
        END
    END

So basically, what does this do? It starts with a base link (${site}) and runs a FOR loop until the index equals the length of the link list (which means the last link checked for child links didn't yield any new links).

This, of course, fixes all the recursion issues :)
