
Python: Unable to import module app in AWS Lambda

I have the file app.py at the root of my app.zip package, and the function handler (lambda_handler) is defined correctly in the handler configuration as: app.lambda_handler
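
A quick way to double-check that layout locally is to list the archive entries; a minimal sketch, assuming the package sits in the current directory as app.zip:

import zipfile

# List every entry in the deployment package; app.py (and any bundled
# dependencies such as the nltk/ directory) must appear at the top level,
# not inside a sub-folder.
with zipfile.ZipFile("app.zip") as zf:
    for name in zf.namelist():
        print(name)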

However, I get the error: Unable to import module 'app': No module named app

Where am I going wrong?

My script:

from __future__ import print_function

import json
import urllib
import boto3
from collections import Counter
from nltk.tokenize import sent_tokenize, word_tokenize
from nltk.corpus import stopwords
from nltk.tokenize import RegexpTokenizer
from nltk.stem.porter import *
from nltk.corpus import stopwords
from nltk.tokenize import RegexpTokenizer
tokenizer = RegexpTokenizer(r'\w+')
stemmer=PorterStemmer()
import sys  
reload(sys) 
sys.setdefaultencoding('utf8')


print('Loading function')

s3 = boto3.client('s3')

number_of_sentences=0
number_of_words=0
word_list=[]
stop_words=set(stopwords.words('english'))
stop_word_list=[ v for v in stop_words]
modal_verbs=['can', 'could', 'may', 'might', 'must', 'shall', 'should', 'will' ,'would','ought']
auxilary_verbs=['be','do','have']
stop_word_list=stop_word_list+modal_verbs+auxilary_verbs
print("Starting Trigram generation")
#Empty Trigram list 
tri_gram_list=[]

def lambda_handler(event, context):
    #print("Received event: " + json.dumps(event, indent=2))

    # Get the object from the event and show its content type
    '''
    '''
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.unquote_plus(event['Records'][0]['s3']['object']['key'].encode('utf8'))
    try:
        response = s3.get_object(Bucket=bucket, Key=key)
        print("CONTENT TYPE: " + response['ContentType'])
        text = response['Body'].read()
        print(type(text))
        for line in text.readlines():
            for line in open("input.txt","r").readlines():
                line=unicode(line, errors='ignore')
                if len(line)>1:
                    sentences=sent_tokenize(line)
                    number_of_sentences+=len(sentences)
                    for sentence in sentences: 
                        sentence=sentence.strip().lower()
                        #sentence = sentence.replace('+', ' ').replace('.', ' ').replace(',', ' ').replace(':', ' ').replace('(', ' ').replace(')', ' ').replace('`', ' ').strip().lower()
                        words_from_sentence=tokenizer.tokenize(line) 
                        words = [word for word in words_from_sentence if word not in stop_word_list]
                        number_of_words+=len(words)
                        stemmed_words = [stemmer.stem(word) for word in words]
                        word_list.extend(stemmed_words)
                        #generate Trigrams
                        tri_gram_list_t= [ " ".join([words[index],words[index+1],words[index+2]]) for index,value in enumerate(words) if index<len(words)-2]
                        #print tri_gram_list
                        tri_gram_list.extend(tri_gram_list_t)

        print number_of_words
        print number_of_sentences
        print("Conting frequency now...")
        count=Counter()
        for element in tri_gram_list:
            #print element, type(tri_gram_list)
            count[element]=count[element]+1
        print count.most_common(25)
        print "most common 25 words ARE:"
        for element in word_list:
            #print element, type(tri_gram_list)
            count[element]=count[element]+1
        print count.most_common(25)




        # body = obj.get()['Body'].read()

    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
        raise e

Where am I going wrong?

Try checking the log output. It will give you more information than the error above.
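
For example, the function's log events can be pulled with boto3; a minimal sketch, assuming the function is named "app" and writes to the default /aws/lambda/<function-name> log group:

import boto3

logs = boto3.client('logs')
log_group = '/aws/lambda/app'  # assumed default log group for a function named "app"

# Fetch the most recent log stream and print its events.
streams = logs.describe_log_streams(logGroupName=log_group,
                                    orderBy='LastEventTime',
                                    descending=True,
                                    limit=1)
for stream in streams['logStreams']:
    events = logs.get_log_events(logGroupName=log_group,
                                 logStreamName=stream['logStreamName'])
    for event in events['events']:
        print(event['message'])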

Finally, keep in mind that the print calls written in Python 2 statement syntax need to be replaced with function calls, like this:

print number_of_words  ->  print(number_of_words)
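
Applied to the script above, the statement-form print lines would become:

print(number_of_words)
print(number_of_sentences)
print(count.most_common(25))
print("most common 25 words ARE:")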
