I want to extract emails from a site recursively, so that if a page contains links to other pages, the crawler traverses those pages as well and extracts their email IDs.
I tried the following for a depth level of 2:
wget -r -l 2 -O - some site name |grep -E -o "[a-z0-9.]+@[a-z0-9.-]+.[a-z]{2,4}">>some.txt
But when I execute this command, it just creates an empty file "some.txt" and does not extract any email IDs.
Why not do a recursive wget and store the site locally, then run a recursive grep (grep -r) across the mirrored files on your local file system? Just add an rm -rf at the end of the script to delete the mirror when you are done.
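A minimal sketch of that mirror-then-grep approach. The wget line is commented out because the target URL here is a placeholder; a stand-in page is created instead so the pipeline can be seen end to end:

```shell
# 1) Mirror the site locally to ./mirror (depth 2); point this at the real target:
#    wget -r -l 2 -P mirror http://some-site.example
mkdir -p mirror
printf '<a href="mailto:info@example.com">mail</a>\n' > mirror/index.html  # stand-in for a mirrored page

# 2) Recursively grep the saved pages; -h drops file names, -o prints only the matched text
grep -rhoE '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}' mirror | sort -u > some.txt
cat some.txt  # prints info@example.com

# 3) Delete the mirror when done
rm -rf mirror
```

Note the escaped `\.` before the top-level domain and the uppercase ranges in the character classes; the regex in the question matches lowercase addresses only and uses an unescaped dot.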
As for doing it in PHP:
Point 1) Developers sometimes publish email addresses in HTML-entity encoded form, so decode entities first with html_entity_decode().
Point 2) Emails are usually written as href="mailto:your@example.com", so we can target the mailto: prefix in the regular expression.
<?php
$str = '<div class="call-to-action ">
<a title="Email" class="contact contact-main contact-email "
href="mailto:info@canberraeyelaser.com.au?subject=Enquiry%2C%20sent%20from%20yellowpages.com.au&
body=%0A%0A%0A%0A%0A------------------------------------------%0AEnquiry%20via%20yellowpages.com.au%0Ahttp%3A%2F%2Fyellowpages.com.au%2Fact%2Fphillip%2Fcanberra-eye-laser-15333167-listing.html%3Fcontext%3DbusinessTypeSearch"
rel="nofollow" data-email="info@canberraeyelaser.com.au">
<span class="glyph icon-email border border-dark-blue with-text"></span><span class="contact-text">Email</span>
<a href="mailto:rishabhdubey20@gmail.com">
</a>
</div>';
// To fetch a live page instead: $str = file_get_contents('http://example.com');
// (for retrieving URLs I prefer cURL over file_get_contents).
$str = html_entity_decode($str);
// Capture everything after "mailto:" up to a "?" (start of the query string) or the closing quote.
$regex = '/mailto:([^?"]*)/';
if (preg_match_all($regex, $str, $matches_out)) {
echo "Found a match!";
echo "<pre>";
var_dump($matches_out[1]); // group 1 holds the bare addresses, without the "mailto:" prefix
} else {
echo "The regex pattern does not match. :(";
}
?>