How can I optimize my PHP script that gets phonetic readings of Japanese sentences from the Yahoo! Japan API?

I wrote a PHP script that reads Japanese sentences from a file, gets the phonetic reading of each sentence using the Yahoo! Japan API, and writes them to an output file. But the script is incredibly slow: it has processed only 50,000 sentences in the last 12 hours, running under Apache on my Mac OS X machine. Is the call to the API the main bottleneck? How can I optimize it? Should I use a language other than PHP? Thanks!

Here's what the first 4 lines of the input file (examples-utf.utf) look like:

A: ムーリエルは20歳になりました。 Muiriel is 20 now.#ID=1282_4707
B: は 二十歳(はたち){20歳} になる[01]{になりました}
A: すぐに戻ります。 I will be back soon.#ID=1284_4709
B: 直ぐに{すぐに} 戻る{戻ります}

Here's the XML the API returns for the sentence "私は学生です": http://jlp.yahooapis.jp/FuriganaService/V1/furigana?appid=YuLAPtSxg64LZ2dsAQnC334w1wGLxuq9cqp0MIGSO3QjZ1tbZCYaRRWkeRKdUCft7qej73DqEg--&grade=1&sentence=%E7%A7%81%E3%81%AF%E5%AD%A6%E7%94%9F%E3%81%A7%E3%81%99

My script follows:

<?php
    function getReading($wabun)
    {
        $res = "";
        $applicationID = "YuLAPtSxg64LZ2dsAQnC334w1wGLxuq9cqp0MIGSO3QjZ1tbZCYaRRWkeRKdUCft7qej73DqEg--";
        $grade = 1;
        // URL-encode the sentence so multibyte characters survive the query string
        $url = "http://jlp.yahooapis.jp/FuriganaService/V1/furigana?appid=".$applicationID."&grade=".$grade."&sentence=".urlencode($wabun);
        $doc = new DOMDocument();
        if (!@$doc->load($url)) {
            return $res; // request or parse failed; return empty reading
        }
        foreach ($doc->getElementsByTagName('Word') as $node) {
            $surface = $node->getElementsByTagName('Surface')->item(0)->nodeValue;
            // kana-only words have no <Furigana> element; fall back to <Surface>
            // (item(0) returns null when the element is missing, so test the node,
            // not its nodeValue)
            $furiganaNode = $node->getElementsByTagName('Furigana')->item(0);
            $reading = ($furiganaNode !== null) ? $furiganaNode->nodeValue : $surface;
            $res .= $reading;
        }
        return $res;
    }
?>
<?php
    header('Content-Type: text/html;charset=utf-8');    
    $myFile = "examples-utf.utf";
    $outFile = "examples-output.utf";
    $file = fopen($myFile, 'r') or die("can't open read file");
    $out = fopen($outFile, 'w') or die("can't open write file");
    $i = 1; // line number
    $start = 3; // beginning of japanese sentence, after "A: "
    while($line = fgets($file))
    {
        // line starts at "A: "
        // odd-numbered lines start with "A: " and hold the sentence pair
        if($i & 1)
        {
            $pos = strpos($line, "\t");
            $japanese = substr($line, $start, $pos - $start);

            $end = strpos($line, "#ID=", $pos + 1);
            $english = substr($line, $pos + 1, $end - $pos - 1);
            $reading = getReading($japanese);

            fwrite($out, $japanese."\n");
            fwrite($out, $english."\n");
            fwrite($out, $reading."\n");

        }
        ++$i;
    }
    fclose($file);
    fclose($out);
?>

From where I am (Berlin, Germany), jlp.yahooapis.jp has a ping latency of about 500 ms, so the 50,000 round trips alone take nearly 7 hours, not counting the processing time on Yahoo's servers. So yes, I think the main bottleneck is calling a web service on another server.
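The 7-hour figure is simple arithmetic on the measured latency; here is the back-of-the-envelope calculation (assuming the ~500 ms per round trip stated above):

```shell
# 50,000 sequential API calls at ~500 ms network latency each,
# converted from seconds to hours
awk 'BEGIN { printf "%.1f hours\n", 50000 * 0.5 / 3600 }'   # prints "6.9 hours"
```

Anything that overlaps those round trips (concurrent requests, or batching several sentences per call if the API allows it) attacks this term directly.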

I'm not sure what caused this issue, but the latest version of the Yahoo! API is pretty smooth (endpoint: https://jlp.yahooapis.jp/FuriganaService/V1/furigana )

I have posted a similar question here:

How to use the Yahoo! JAPAN Japanese Language Processing API

If this is a batch process, you could try running several of your scripts concurrently on separate lists.
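A minimal sketch of that suggestion: split the input into chunks and run one worker per chunk in parallel. Here `furigana.php` is a hypothetical variant of the script above that takes input and output file names as arguments (the original hard-codes them).

```shell
# Split the input into 10,000-line chunks: chunk_aa, chunk_ab, ...
split -l 10000 examples-utf.utf chunk_

# Launch one PHP worker per chunk in the background
for f in chunk_*; do
    php furigana.php "$f" "$f.out" &   # furigana.php is hypothetical (see above)
done
wait                                    # block until every worker has finished

# Reassemble the per-chunk results in order
cat chunk_*.out > examples-output.utf
```

With N workers the wall-clock time drops roughly by a factor of N, since each worker's requests wait on the network independently, though the API's rate limits cap how far this scales.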
