Error drawing text on NSImage in PyObjC
I'm trying to overlay some text on an image using PyObjC, working toward answering my own question, "annotate an image using tools built into OS X". Using CocoaMagic (a RubyObjC replacement for RMagick) as a reference, I came up with the following:
#!/usr/bin/env python
from AppKit import *
source_image = "/Library/Desktop Pictures/Nature/Aurora.jpg"
final_image = "/Library/Desktop Pictures/.loginwindow.jpg"
font_name = "Arial"
font_size = 76
message = "My Message Here"
app = NSApplication.sharedApplication() # remove some warnings
# read in an image
image = NSImage.alloc().initWithContentsOfFile_(source_image)
image.lockFocus()
# prepare some text attributes
text_attributes = NSMutableDictionary.alloc().init()
font = NSFont.fontWithName_size_(font_name, font_size)
text_attributes.setObject_forKey_(font, NSFontAttributeName)
text_attributes.setObject_forKey_(NSColor.blackColor, NSForegroundColorAttributeName)
# output our message
message_string = NSString.stringWithString_(message)
size = message_string.sizeWithAttributes_(text_attributes)
point = NSMakePoint(400, 400)
message_string.drawAtPoint_withAttributes_(point, text_attributes)
# write the file
image.unlockFocus()
bits = NSBitmapImageRep.alloc().initWithData_(image.TIFFRepresentation)
data = bits.representationUsingType_properties_(NSJPGFileType, nil)
data.writeToFile_atomically_(final_image, false)
When I run it, I get the following:
Traceback (most recent call last):
File "/Users/clinton/Work/Problems/TellAtAGlance/ObviouslyTouched.py", line 24, in <module>
message_string.drawAtPoint_withAttributes_(point, text_attributes)
ValueError: NSInvalidArgumentException - Class OC_PythonObject: no such selector: set
Looking up drawAtPoint:withAttributes: in the documentation, it says: "You should only invoke this method when an NSView object has focus." NSImage is not a subclass of NSView, but I expected this to work, and it looks similar to what the Ruby example does.
What do I need to change to make this work?
我重寫了代碼,將它們逐行忠實地轉換為Objective-C Foundation工具。 它有效,沒有問題。 [如果有理由,我很樂意在這里發布。]
So the question becomes: how is
[message_string drawAtPoint:point withAttributes:text_attributes];
different from
message_string.drawAtPoint_withAttributes_(point, text_attributes)
? And is there a way to find out which "OC_PythonObject" raised the NSInvalidArgumentException?
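One generic way to hunt for the culprit, sketched in plain Python with stand-in values (no PyObjC involved; `find_suspects` is a hypothetical helper, not part of any library): scan the attributes dictionary for values that are still uncalled Python functions or methods rather than real Cocoa objects.

```python
import types

def find_suspects(attributes):
    """Return keys whose values are uncalled Python functions/methods —
    the kind of stray object a bridge would wrap rather than convert."""
    suspect_types = (types.FunctionType, types.MethodType,
                     types.BuiltinFunctionType, types.BuiltinMethodType)
    return [key for key, value in attributes.items()
            if isinstance(value, suspect_types)]

# Stand-in attributes dict: one proper value, one method accidentally left uncalled.
attrs = {
    "NSFontAttributeName": "a-font-object",
    "NSForegroundColorAttributeName": lambda: "black",  # missing () upstream
}
print(find_suspects(attrs))  # → ['NSForegroundColorAttributeName']
```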
Here are the problems in the code above:
text_attributes.setObject_forKey_(NSColor.blackColor, NSForegroundColorAttributeName)
->
text_attributes.setObject_forKey_(NSColor.blackColor(), NSForegroundColorAttributeName)
Without the parentheses, the dictionary stores the Python method object itself rather than an NSColor. When the text is drawn, Cocoa sends `set` to the foreground-color attribute to make it the current drawing color; the wrapped method object (an OC_PythonObject) has no such selector, which is exactly the exception reported.
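The root mistake is ordinary Python rather than anything Cocoa-specific: naming a method without calling it gives you the method object itself, and that object, not a color, is what ends up in the dictionary. A minimal sketch with a hypothetical `Color` stand-in class:

```python
class Color:
    """Stand-in for a class with a factory method, like NSColor.blackColor()."""
    @classmethod
    def black(cls):
        return cls()

attrs = {}
attrs["wrong"] = Color.black    # stores the bound method object itself
attrs["right"] = Color.black()  # stores an actual Color instance

print(isinstance(attrs["wrong"], Color))  # → False
print(isinstance(attrs["right"], Color))  # → True
```

In the PyObjC case the stray method object gets wrapped as an OC_PythonObject, so the mistake only surfaces later, when Cocoa first tries to message it during drawing.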
bits = NSBitmapImageRep.alloc().initWithData_(image.TIFFRepresentation)
data = bits.representationUsingType_properties_(NSJPGFileType, nil)
->
bits = NSBitmapImageRep.imageRepWithData_(image.TIFFRepresentation())
data = bits.representationUsingType_properties_(NSJPEGFileType, None)
So these really are just small typos: `TIFFRepresentation` is a method and needs to be called, the constant is spelled NSJPEGFileType, and Python spells nil as None.
Note that the middle part of the code can be replaced with this more readable variant (a plain Python dict, string, and point tuple work here, because PyObjC bridges them automatically):
# prepare some text attributes
text_attributes = {
    NSFontAttributeName: NSFont.fontWithName_size_(font_name, font_size),
    NSForegroundColorAttributeName: NSColor.blackColor(),
}
# output our message
NSString.drawAtPoint_withAttributes_(message, (400, 400), text_attributes)
I figured this out by looking at a dozen lines of the NodeBox source code, in psyphography.py and cocoa.py (the save and _getImageData methods in particular).