
local variable 'orgHigh' referenced before assignment in google colab ( python 3.7 )

I'm getting a "local variable referenced before assignment" error and I have tried a lot of ways to fix it, with no luck. Any help would be greatly appreciated.

Error:

Traceback (most recent call last):
  File "real-time-action-recognition/Offline_Recognition.py", line 272, in <module>
    one_video()
  File "real-time-action-recognition/Offline_Recognition.py", line 242, in one_video
    add_status(orig_frame ,x=orgHigh[0] ,y=orgHigh[1] ,s='Frame: {}'.format(frame_count),
UnboundLocalError: local variable 'orgHigh' referenced before assignment

Offline_Recognition.py

def one_video():
  capture = cv2.VideoCapture(args.video)
  fps_vid = capture.get(cv2.CAP_PROP_FPS)
  width = capture.get(cv2.CAP_PROP_FRAME_WIDTH)
  height = capture.get(cv2.CAP_PROP_FRAME_HEIGHT) 
  fourcc = cv2.VideoWriter_fourcc(*"XVID")
  out = cv2.VideoWriter('output.mp4', fourcc, fps_vid, (int(width), int(height)))
    

  frames = []  
  frame_count = 0
  
  #Set opencv parameters according to the size of the input frames
  if args.quality == '1080p':
      orgHigh =(0,40) 
      orgDown = (0,1020)
      font = cv2.FONT_HERSHEY_COMPLEX_SMALL
      fontScale = 3
      thickness = 3
      boxcolor= (255,255,255)
      
  if args.quality == '480p':
      orgHigh =(0,20) 
      orgDown = (0,460)
      font = cv2.FONT_HERSHEY_COMPLEX_SMALL
      fontScale = 1.5
      thickness = 2
      boxcolor= (255,255,255)
      
  action_label = label_dic(args.classInd_file)
  
  while (capture.isOpened()):
      ret, orig_frame = capture.read()
     
      if ret is True:
          frame = cv2.cvtColor(orig_frame, cv2.COLOR_BGR2RGB)
      else:
          print('Not getting frames')
          break  
      
      #use .fromarray function to be able to apply different data augmentations
      frame = Image.fromarray(frame)
      
      if frame_count % args.sampling_freq < 6:
          frames.append(frame)
          print('Picked frame, ', frame_count)
      
      if len(frames) == args.delta*6:
          frames = transform(frames).cuda()
          scores_RGB = eval_video(frames[0:len(frames):6], 'RGB')[0,] 
          scores_RGBDiff = eval_video(frames[:], 'RGBDiff')[0,]           
          #Fusion
          final_scores = args.score_weights[0]*scores_RGB + args.score_weights[1] * scores_RGBDiff
          #Just like np.argsort()[::-1]
          scores_indcies = torch.flip(torch.sort(final_scores.data)[1],[0])
          #Prepare List of top scores for action checker
          TopScoresList = []
          for i in scores_indcies[:5]:
              TopScoresList.append(int(final_scores[int(i)]))
          #Check for "no action state"
          action_checker = Evaluation(TopScoresList, args.psi)
          #action_checker = True
          if not action_checker:
              print('No Assigned Action')
          for i in scores_indcies[:5]:
              print('%-22s %0.2f'% (action_label[str(int(i)+1)], final_scores[int(i)]))
          print('<----------------->')
          frames = []     
          

      
      add_status(orig_frame ,x=orgHigh[0] ,y=orgHigh[1] ,s='Frame: {}'.format(frame_count),
                font = font, fontScale = fontScale-.7 ,fontcolor=(0,255,255) ,thickness=thickness-1
                ,box_flag=True, alpha = .5, boxcolor=boxcolor, x_mode='center')
      
      if frame_count <= args.sampling_freq*args.delta:
          add_status(orig_frame ,x=orgDown[0] ,y=orgDown[1] ,s='Start Recognition ...',
                font = font, fontScale = fontScale ,fontcolor=(255,0,0) ,thickness=thickness
                ,box_flag=True, alpha = .5, boxcolor=boxcolor, x_mode='center')              
      else:
          
          if action_checker:
              add_status(orig_frame ,x=orgDown[0] ,y=orgDown[1] ,s='Detected Action: {}'.format(action_label[str(int(scores_indcies[0])+1)]),
               font = font, fontScale = fontScale ,fontcolor=(0,0,0) ,thickness=thickness
                ,box_flag=True, alpha = .5, boxcolor=boxcolor, x_mode='center')
                    
          else:
              add_status(orig_frame ,x=orgDown[0] ,y=orgDown[1] ,s='No Assigned Action',
               font = font, fontScale = fontScale ,fontcolor=(0,0,255) ,thickness=thickness
                ,box_flag=True, alpha = .5, boxcolor=boxcolor, x_mode='center')  
              
      frame_count += 1          
      out.write(orig_frame)
      
  
  # When everything done, release the capture
  capture.release()
  cv2.destroyAllWindows()    
  
  
if __name__ == '__main__':
    one_video()

It looks like none of the variables assigned inside your if statements have a fallback/default value. So if args.quality has no value, or doesn't match either if condition, they never get set. That is your main error.
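A minimal sketch of that fix, reusing the values from the question: give every branch a default, or fail fast, so orgHigh and orgDown always exist before the main loop runs:

if args.quality == '1080p':
    orgHigh, orgDown = (0, 40), (0, 1020)
    fontScale, thickness = 3, 3
elif args.quality == '480p':
    orgHigh, orgDown = (0, 20), (0, 460)
    fontScale, thickness = 1.5, 2
else:
    # fail fast rather than leaving orgHigh/orgDown unassigned
    raise ValueError('Unsupported quality: {!r}'.format(args.quality))

# identical for both resolutions, so set once outside the branches
font = cv2.FONT_HERSHEY_COMPLEX_SMALL
boxcolor = (255, 255, 255)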

Also, this script looks like it is meant to be run from the command line (I haven't used Google Colab, but it appears to support something similar to command-line arguments; see Google Colab command line arguments). I think the way you pass and parse arguments may also need to change.
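For what it's worth, a script can be launched with command-line arguments from a Colab cell by shelling out with "!" (a sketch; the flag names here are just the ones visible in the question's code):

!python real-time-action-recognition/Offline_Recognition.py --video input.mp4 --quality 480p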

If you are running python Offline_Recognition.py --quality 1080p, then you need something like the following code to extract the arguments:

test.py

import sys
import getopt

def main(argv):
   quality = ''
   
   try:
      # register --quality as a long option so it matches -q
      opts, args = getopt.getopt(argv, "hq:", ["quality="])
   except getopt.GetoptError:
      print('test.py --quality <hd_mode>')
      sys.exit(2)
   for opt, arg in opts:
      if opt == '-h':
         print('test.py --quality <hd_mode>')
         sys.exit()
      elif opt in ("-q", "--quality"):
         quality = arg
   print('HD Quality is "%s"' % quality)

if __name__ == "__main__":
    main(sys.argv[1:])
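Running python test.py --quality 1080p should then print HD Quality is "1080p".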

getopt | Python - Command Line Arguments

So you could create a function to extract the arguments and pass them to your function one_video, for example:
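The attribute-style access in the question (args.video, args.quality) suggests argparse rather than getopt, so a hypothetical sketch of that approach, covering only the two flags visible in the posted code, might look like:

import argparse

def parse_args():
    # only the flags visible in the question; add the others as needed
    parser = argparse.ArgumentParser()
    parser.add_argument('--video', required=True, help='path to the input video')
    # choices= makes argparse reject anything other than the two supported
    # modes, which also prevents the unassigned-orgHigh error at the source
    parser.add_argument('--quality', choices=['1080p', '480p'], default='480p')
    return parser.parse_args()

if __name__ == '__main__':
    args = parse_args()  # module-level, so one_video() can keep reading args
    one_video()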
