
Extract visual text from Google Classic Site page using Apps Script in Google Sheets

I have about 5,000 Classic Google Sites pages that I need a Google Apps Script bound to a Google Sheet to examine one by one, extracting the data and entering it into the sheet row by row.

I wrote an Apps Script that reads a sheet called "Pages", which contains the exact URL of each page (one per row), and walks down it while doing the extraction.


That in turn would get the HTML content, and I would then use regex to extract the data I want: the value to the right of each of the following labels...

  • Job name
  • Domain owner
  • Urgency/Impact
  • ISOC instructions

The script would then write that data under the proper columns in the Google Sheet.


This worked, except for one big problem: the HTML is not consistent. Also, IDs and distinguishing tags were not used, so reliably parsing the output of SitesApp.getPageByUrl is not really possible.

Here is the code I came up with for that attempt.

function startCollection() {
  var masterList = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Pages");
  var startRow = 1;
  var lastRow = masterList.getLastRow();
  // Walk down column A of "Pages" and scrape each URL
  for (var i = startRow; i <= lastRow; i++) {
    var target = masterList.getRange("A" + i).getValue();
    sniff(target);
  }
}

function sniff(target) {
  var pageURL = target;
  var pageContent = SitesApp.getPageByUrl(pageURL).getHtmlContent();
  Logger.log("Scraping: " + target);

  // Extract the job name; exec() returns null when the pattern is not found
  var JobNameRegExp = /(Job name:<\/b><\/td><td style='text-align:left;width:738px'>)(.*?)(<\/td>)/m;
  var JobNameValue = JobNameRegExp.exec(pageContent);
  var JobMatch = JobNameValue ? JobNameValue[2] : "NOT FOUND: " + pageURL;

  // Extract the domain owner
  var DomainRegExp = /(Domain owner:<\/b><\/td><td style='text-align:left;width:738px'><span style='font-family:arial,sans,sans-serif;font-size:13px'>)(.*?)(<\/span>)/m;
  var DomainValue = DomainRegExp.exec(pageContent);
  var DomainMatch = DomainValue ? DomainValue[2] : "N/A";

  // Extract urgency & impact
  var UrgRegExp = /(Urgency\/Impact:<\/b><\/td><td style='text-align:left;width:738px'>)(.*?)(<\/td>)/m;
  var UrgValue = UrgRegExp.exec(pageContent);
  var UrgMatch = UrgValue ? UrgValue[2] : "N/A";

  // Extract ISOC instructions
  var ISOCRegExp = /(ISOC instructions:<\/b><\/td><td style='text-align:left;width:738px'>)(.*?)(<\/td>)/m;
  var ISOCValue = ISOCRegExp.exec(pageContent);
  var ISOCMatch = ISOCValue ? ISOCValue[2] : "N/A";

  // Add the record to the "Jobs" sheet
  var row_data = {
    Job_Name: JobMatch,
    Domain_Owner: DomainMatch,
    Urgency_Impact: UrgMatch,
    ISOC_Instructions: ISOCMatch,
  };
  insertRowInTracker(row_data);
}

function insertRowInTracker(rowData) {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Jobs");
  var rowValues = [];
  // The header row of "Jobs" must match the keys used in rowData
  var columnHeaders = sheet.getDataRange().offset(0, 0, 1).getValues()[0];
  Logger.log("Writing to the sheet: " + sheet.getName());
  Logger.log("Writing row data: " + JSON.stringify(rowData));
  columnHeaders.forEach((header) => {
    rowValues.push(rowData[header]);
  });
  sheet.appendRow(rowValues);
}

So for my next idea, I have thought about using UrlFetchApp.fetch. The one problem is that these Classic Google Sites pages sit behind a domain that is not shared with the public. SitesApp.getPageByUrl makes the script ask for authorization and works, but UrlFetchApp.fetch does not, so when it calls the page directly it just gets the Google login page back.

I might be able to work around this by turning the pages public, but I am still working on that.
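The kind of call I have in mind looks roughly like this; whether Classic Sites accepts the script's own OAuth token in place of a browser session is an assumption I have not verified, and fetchPage is just an illustrative name:

function fetchPage(pageURL) {
  // Untested assumption: pass the script's own OAuth token so the fetch
  // is authenticated instead of being bounced to the login page
  var response = UrlFetchApp.fetch(pageURL, {
    headers: { Authorization: "Bearer " + ScriptApp.getOAuthToken() },
    followRedirects: false,   // a 302 here usually means the login redirect
    muteHttpExceptions: true  // inspect failures instead of throwing
  });
  if (response.getResponseCode() !== 200) {
    Logger.log("Fetch failed (" + response.getResponseCode() + "): " + pageURL);
    return null;
  }
  return response.getContentText();
}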

I am running out of ideas fast on this one and hoping there is another way I have not thought of or seen. What I would really like is to not mess with the HTML content at all: have the Apps Script bound to the Google Sheet look only at the actual text presented on the page, match a label, and capture the value to the right of it.

For example, have it go down the list of URLs on the sheet called "Pages" and do the following for each page:

Find the following values:

  • Find the text "Job name:", capture the text to the right of it.
  • Find the text "Domain owner:", capture the text to the right of it.
  • Find the text "Urgency/Impact:", capture the text to the right of it.
  • Find the text "ISOC instructions:", capture the text to the right of it.

Write those values to a new row in the sheet called "Jobs" as seen below, then move on to the next URL in "Pages" and repeat until every row in "Pages" has been processed.
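A minimal sketch of what I mean, assuming the rendered text can be approximated by stripping the tags out of getHtmlContent() (extractByLabel and escapeRegExp are names made up for illustration):

function extractByLabel(pageURL) {
  var html = SitesApp.getPageByUrl(pageURL).getHtmlContent();
  // Turn tags into newlines so adjacent cells don't run together,
  // and decode the one entity that matters on these pages
  var text = html.replace(/<[^>]+>/g, '\n').replace(/&nbsp;/g, ' ');
  var labels = ['Job name:', 'Domain owner:', 'Urgency/Impact:', 'ISOC instructions:'];
  var values = {};
  labels.forEach(function (label) {
    // After the label, skip whitespace/newlines and capture the rest of
    // that line; if the value cell is empty this would grab the next
    // label instead, so treat the result as best-effort
    var match = new RegExp(escapeRegExp(label) + '\\s*([^\\n]+)').exec(text);
    values[label] = match ? match[1].trim() : 'N/A';
  });
  return values;
}

// Escape regex metacharacters so a literal label is safe inside new RegExp()
function escapeRegExp(s) {
  return s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}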

Example of the data I want to capture


I have created an exact copy of one of the pages for testing, and it is public: https://sites.google.com/site/2020dump/test

An example from inspecting the page


The raw HTML of the table that contains all the data I am after:

<tr>
<td style="width:190px"><b>Domain owner:</b></td>
<td style="text-align:left;width:738px">IT.FinanceHRCore&nbsp;</td>
</tr>
<tr>
<td style="width:190px">&nbsp;<b>Urgency/Impact:</b></td>
<td style="text-align:left;width:738px">Medium (3 - Urgency, 3 - Impact)&nbsp;</td>
</tr>
<tr>
<td style="width:190px"><b>ISOC instructions:</b></td>
<td style="text-align:left;width:738px">None&nbsp;</td>
</tr>
<tr>
<td style="width:190px"></td>
<td style="text-align:left;width:738px">&nbsp;</td>
</tr>
</tbody>
</table>

Any examples of how I can accomplish this? From an Apps Script perspective, I am not sure how to avoid looking at the HTML and look only at the actual data displayed on the page, for example finding the text "Job name:" and then grabbing the text to the right of it.

The goal at the end of the day is to transfer the data from each page into one big Google Sheet so we can kill off the Google Classic Site.

I have been scraping data with Apps Script using regular expressions for a while, but I will say that the formatting of this page does make it difficult.

A lot of the pages that I scrape have tables in them, so I made a helper script that goes through, cleans them up, and turns them into arrays. Copy and paste the script below into a new Apps Script file:

function scrapetables(html, startingtable, extractlinksTF) {
  // Grab every <table>...</table> block in the page
  var tableregex = /<table[\s\S]*?<\/table>/g;
  var tables = html.match(tableregex) || [];

  var arrays = [];
  var i = startingtable || 0;
  while (tables[i]) {
    var thistable = [];
    var rows = tables[i].match(/<tr[\s\S]*?<\/tr>/g);
    if (rows) {
      var j = 0;
      while (rows[j]) {
        thistable.push(tablerow(rows[j], extractlinksTF));
        j++;
      }
      arrays.push(thistable);
    }
    i++;
  }

  return arrays;
}

function removespaces(string) {
  return string.trim().replace(/[\r\n\t]/g, '').replace(/&nbsp;/g, ' ');
}


function tablerow(row, extractlinksTF) {
  // Match both <td> and <th> cells
  var cells = row.match(/<t[dh][\s\S]*?<\/t[dh]>/g) || [];
  var thisrow = [];
  var i = 0;
  while (cells[i]) {
    thisrow.push(removehtmlmarkup(cells[i], extractlinksTF));
    i++;
  }
  return thisrow;
}

function removehtmlmarkup(string, extractlinksTF) {
  // Strip every tag, then normalize whitespace and &nbsp;
  var string2 = removespaces(string.replace(/<\/?[A-Za-z].*?>/g, ''));
  var obj = { string: string2 };
  // If the cell contains a link, keep its href as well
  var link = /<a href="(.*?)"/.exec(string);
  if (link) {
    obj.link = link[1];
  }
  if (extractlinksTF) {
    return obj;
  } else {
    return string2;
  }
}

Running this got close, but at the moment it doesn't handle nested tables well, so I cleaned up the input by sending only the table we want, isolating it with a regular expression:

var tablehtml = /(<table[\s\S]{200,1000}Job Name[\s\S]*?<\/table>)/im.exec(html)[1];

Your parent function will then look like this:

function sniff(pageURL) {
  var html = SitesApp.getPageByUrl(pageURL).getHtmlContent();
  // Isolate the one table with the job fields so nested tables elsewhere don't interfere
  var tablehtml = /(<table[\s\S]{200,1000}Job Name[\s\S]*?<\/table>)/im.exec(html)[1];
  var table = scrapetables(tablehtml);
  var row_data = {
    Job_Name: na(table[0][3][1]),       // 1st table in the html, row 4, cell 2
    Domain_Owner: na(table[0][4][1]),   // 1st table in the html, row 5, cell 2, etc.
    Urgency_Impact: na(table[0][5][1]),
    ISOC_Instructions: na(table[0][6][1])
  };

  insertRowInTracker(row_data);
}

function na(string) {
  if (string) {
    return string;
  } else {
    return 'N/A';
  }
}
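
To sanity-check the indexing, here is a quick hypothetical test using the table fragment from the question; it shows the shape of the array scrapetables returns (tables, then rows, then cells, all zero-based):

function testScrapetables() {
  var html =
    '<table>' +
    '<tr><td><b>Domain owner:</b></td><td>IT.FinanceHRCore&nbsp;</td></tr>' +
    '<tr><td>&nbsp;<b>Urgency/Impact:</b></td><td>Medium (3 - Urgency, 3 - Impact)&nbsp;</td></tr>' +
    '<tr><td><b>ISOC instructions:</b></td><td>None&nbsp;</td></tr>' +
    '</table>';
  var table = scrapetables(html);
  Logger.log(table[0][0][1]); // "IT.FinanceHRCore " (the trailing &nbsp; becomes a space)
  Logger.log(table[0][1][1]); // "Medium (3 - Urgency, 3 - Impact) "
  Logger.log(table[0][2][1]); // "None "
}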

