
How to pull data from Paginated JSON

I have, say, 300 items, with 10 shown per page. The page loads the JSON data and is limited to 10 items per request (this cannot be changed).

I want to step through the 30-odd pages, pulling each item and listing it.

url.com/api/some-name?page=1 etc

Ideally, the script would use the above URL as a template, incrementing the page number by 1 until all 10 items from every page have been collected.

Can this be done? How would I go about it? Any advice or assistance would help me greatly in learning from the methods people suggest.

const getInfo = async function(pageNo) {
  const jsonUrl = "https://website.com/api/some-title";

  let actualUrl = jsonUrl + `?page=${pageNo}`;
  let jsonResults = await fetch(actualUrl).then(response => {
    return response.json();
  });
  return jsonResults;
};
const getEntireList = async function(pageNo) {
  const results = await getInfo(pageNo);
  console.log("Retreiving data from API for page:" + pageNo);
  if (results.length > 0) {
    return results.concat(await getEntireList(pageNo));
  } else {
    return results;
  }
};
(async () => {
  const entireList = await getEntireList();
  console.log(entireList);
})();

I can see some issues in your code.

  1. The initial call to getEntireList() should be initialised with the index of the first page, e.g. const entireList = await getEntireList(1);
  2. The page number will need to be incremented at some point.
  3. results.concat() probably won't have the desired effect. response.json() resolves to whatever the server sent back: an object, an array, or a primitive value, and results will be one of those types. concat() only exists on arrays (and strings), so if the API wraps its items in a plain object this call will throw a TypeError; see the corrected sketch below.
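
Putting those fixes together, here is a minimal corrected sketch. It assumes the endpoint (website.com/api/some-title is the placeholder from your own code) returns a plain JSON array for each page and an empty array once you request past the last page; if your API wraps the items in an object, you would need to unwrap them first.

const getInfo = async function (pageNo) {
  // Placeholder endpoint taken from the question; substitute your real API URL.
  const jsonUrl = "https://website.com/api/some-title";
  const response = await fetch(`${jsonUrl}?page=${pageNo}`);
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return response.json();
};

const getEntireList = async function (pageNo = 1) {
  console.log("Retrieving data from API for page: " + pageNo);
  const results = await getInfo(pageNo);
  // Assumption: each page is a JSON array, and a page past the end is empty.
  // Stop on an empty (or non-array) result; otherwise fetch the next page
  // (note the pageNo + 1, which the original code was missing).
  if (Array.isArray(results) && results.length > 0) {
    return results.concat(await getEntireList(pageNo + 1));
  }
  return [];
};

(async () => {
  const entireList = await getEntireList(1);
  console.log(entireList);
})();

For 30-odd pages the recursion depth is harmless, but a simple while loop that accumulates into one array would avoid growing the call stack and reads just as clearly.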
