
How to get JSON from a text file in Python

import fnmatch
import json
import os

def calculateStats(fName):
    pattern = "Online_R*.txt"
    for root, dirs, files in os.walk("."):
        for name in files:
            if fnmatch.fnmatch(name, pattern) and fName in root:
                fileName = os.path.join(root, name)
                intializeCSV(fileName)
                with open(fileName, "r") as f:
                    lines = f.readlines()
                first_instance = True
                for i in range(len(lines)):
                    line = lines[i].rstrip('\n')
                    isSubstring = "ONLINE_DATA_RECEIVED_FROM_INFROMATION_RETRIVAL_SYSTEM" in line
                    isRating = "RATING_ON_KEYWORDS" in line
                    if first_instance and isSubstring:
                        first_instance = False
                        continue
                    elif isSubstring:
                        # the payload sits on the line after the marker line
                        data = lines[i + 1].rstrip('\n')
                        data = json.loads(data)  # raises ValueError here
                        print data
                        print data["inputs"]

Data:

2016-09-12 16:31:50.864000 ONLINE_DATA_RECEIVED_FROM_INFROMATION_RETRIVAL_SYSTEM
{u'debug': [u'time to fit model 0.06 s', u'time to generate suggestions 0.11 s', u'time to search documents 4.93 s', u'time to misc operations 0.02 s'], u'articles': [{u'is-saved': False, u'title': u'Computer Vision and Computer Graphics Analysis of Paintings and Drawings: An Introduction to the Literature', u'abstract': u'In the past few years, a number of scholars trained in computer vision; pattern recognition, image processing, computer graphics; and art history have developed rigorous computer methods for addressing an increasing number of problems in the history of art. In some cases; these computer methods aremore accurate than even highly trained connoisseurs, art historians and artists. Computer graphics models of artists\' studios and subjects allow scholars to explore "what if" scenarios and determine artists\' studio praxis. Rigorous computer ray-tracing software sheds light; on claims that; some artists employed optical tools. Computer methods win not replace tradition arthistorical methods of connoisseurship but enhance and extend them. As such, for these computer methods to be useful to the art community, they must continue to be refilled through application to a variety of significant art historical problems.', u'date': u'2009-01-01T00:00:00', u'publication-forum': u'COMPUTER ANALYSIS OF IMAGES AND PATTERNS, PROCEEDINGS', u'publication-forum-type': u'article', u'authors': u'D G Stork', u'keywords': u'pattern recognition, computer image analysis, brush stroke analysis, painting analysis, image forensics, compositing, computer graphics reconstructions, image processing, computer graphics, recognition, computer vision, graph, vision', u'id': u'575b005e12a085663bfef04f'}, {u'is-saved': False, u'title': u'Chomsky and Egan on computational theories of vision', u'abstract': u"Noam Chomsky and Frances Egan argue that David Marr's computational theoryof vision is not intentional, claiming that the formal scientific theory does not include description of visual content. They also argue that the theory is internalist in the sense of not describing things physically external to the perceiver. They argue that these claims hold for computational theories of vision in general. Beyond theories of vision, they argue that representational content does not figure as a topic within formal computationaltheories in cognitive science. I demonstrate that Chomsky's and Egan's claims about Marr's theory are false. Marr's computational theory contains a mathematical theory of visual content, based on empirical psychophysical evidence. It also contains mathematical descriptions of distal physical surfaces and objects, and of their optic projection to the perceiver. Much computational research on vision contains these types of intentional and externalist components within the formal, mathematical, theories. Chomsky's and Egan's claims demonstrate inadequate study and understanding of Marr's work and other research in this area. 
Computational theories of vision, by containing empirically based mathematical theories of visual content, to this extent present naturalizations of semantics.", u'date': u'2006-01-01T00:00:00', u'publication-forum': u'MINDS AND MACHINES', u'publication-forum-type': u'article', u'authors': u'A Silverberg', u'keywords': u'chomsky, computational theory, egan, marr, physical assumptions, visual content, 2.5-d sketch, 3-d representation, research, vision', u'id': u'575aff6012a085663bfef01a'}, {u'is-saved': False, u'title': u'Inspection and grading of agricultural and food products by computer vision systems - a review', u'abstract': u'Computer vision is a rapid, economic, consistent and objective inspection technique, which has expanded into many diverse industries. Its speed and accuracy satisfy ever-increasing production and quality requirements, hence aiding in the development of totally automated processes. This non-destructive method of inspection has found applications in the agricultural and food industry, including the inspection and grading of fruit and vegetable. Ithas also been used successfully in the analysis of grain characteristics and in the evaluation of foods such as meats, cheese and pizza. This paper reviews the progress of computer vision in the agricultural and food industry, then identifies areas for further research and wider application the technique. (C) 2002 Elsevier Science B.V. All rights reserved.', u'date': u'2002-01-01T00:00:00', u'publication-forum': u'COMPUTERS AND ELECTRONICS IN AGRICULTURE', u'publication-forum-type': u'article', u'authors': u'T Brosnan, D W Sun', u'keywords': u'computer vision, food, fruit, grain, image analysis and processing, vegetables, automation, characters, research, computer vision system, meats, vision', u'id': u'577b28bd12a0856ea8376b9e'}, {u'is-saved': False, u'title': u'Computer Vision Support for the Orthodontic Diagnosis', u'abstract': u"The following paper presents the achievement reached by our joined teams: Computer Vision System Group (ZKSW) in the Institute of Theoretical and Applied Informatics, Polish Academy of Sciences and Department of Orthodontics, Silesian Medical University. The cooperation began from the inspiration of late Prof. A. Mrozek. Computer Vision in supporting orthodontic diagnosismeans all the problems connected with proper acquisition, calibration and analysis of the diagnostic images of orthodontic patients. The aim of traditional cephalometric analysis is the quantitative confirmation of skeletal and/or soft tissue abnormalities on single images, assessment of the treatment plan, long term follow up of growth and treatment results. Beginning with the computerization of the methods used in traditional manual diagnosis in the simplest X-ray films of the patient's head we have developed our research towards engaging different methods of morphometrics, deformation analysis and using different imaging modalities: pairs of cephalograms (lateralan frontal), CT-scans, laser scans of dental models, laser scans of soft tissues, finally merging all the image information into patient's specific geometric and deformable model of the head. The model can be further exploited in the supporting of the surgical correction of jaw discrepancies. 
Our laboratory equipment allows us to design virtual operations, educational programs in a virtual reality with a CyberGlove device, and finally to verify the plan of intervention on stereo lithographic solid models received from a 3D printer.", u'date': u'2009-01-01T00:00:00', u'publication-forum': u'MAN-MACHINE INTERACTIONS', u'publication-forum-type': u'article', u'authors': u'A Tomaka, A Pisulska-Otremba', u'keywords': u'computer vision, orthodontic diagnosis, image acquisition, calibration, merging information, virtual reality, research, computer vision system, education, vision', u'id': u'577b28bd12a0856ea8376ba3'}, {u'is-saved': False, u'title': u'Computer vision for a robot sculptor', u'abstract': u"Before make computers can be active collaborators in design work, they must be equipped with some human-like visual and design skills. Towards this end, we report some advances in integrating computer vision and automated design in a computational model of ''artistic vision'' - the ability to see something striking in a subject and express it in a creative design. The Artificial Artist studies images of animals, then designs sculpture that conveys something of the strength, tension, and expression in the animals' bodies. It performs an anatomical analysis using conventional computer vision techniques constrained by high-level causal inference to find significant areas of the body, e.g., joints under stress. The sculptural form - kinetic mobiles - presents a number of mechanical and aesthetic design challenges, which the system solves in imagery using field-based computing methods. Coupled potential fields simultaneously enforce soft and hard constraints - e.g., the mobile should resemble the original animal and every subassembly of the mobile must be precisely balanced. The system uses iconic representations in all stages, obviating the need to translate between spatial and predicate representations and allowing a rich flow of information between vision and design.", u'date': u'1997-01-01T00:00:00', u'publication-forum': u'HUMAN VISION AND ELECTRONIC IMAGING II', u'publication-forum-type': u'proceedings paper', u'authors': u'M Brand', u'keywords': u'vision, causal analysis, potential fields, automated design, computer vision, robotics', u'id': u'575aff6012a085663bfef00b'}, {u'is-saved': False, u'title': u'Computer vision syndrome: A review', u'abstract': u'As computers become part Of Our everyday life, more and more people are experiencing a variety of ocular symptoms related to computer use. These include eyestrain, tired eyes, irritation, redness, blurred vision, and double vision, collectively referred to as computer vision syndrome. This article describes both the characteristics and treatment modalities that are available at this time. Computer vision syndrome symptoms may be the cause of ocular (ocular-surface abnormalities or accommodative spasms) and/or extraocular (ergonomic) etiologies. However, the major contributor to computer Vision syndrome symptoms by far appears to be dry eye. The visual effects of various display characteristics such as lighting, glare, display quality, refresh rates, and radiation are also discussed. Treatment requires a multidirectional approach combining ocular therapy with adjustment of the workstation. Proper lighting, anti-glare filters, ergonomic positioning of computer monitor and regular work breaks may help improve visual comfort. Lubricatingeye drops and special computer glasses help relieve ocular surface-relatedsymptoms. 
More work needs to be,done to specifically define the processes that cause computer vision syndrome and to develop and improve effective treatments that successfully address these causes. © 2005 Elsevier Inc. All rights reserved.', u'date': u'2005-01-01T00:00:00', u'publication-forum': u'SURVEY OF OPHTHALMOLOGY', u'publication-forum-type': u'review', u'authors': u'C Blehm, S Vishnu, A Khattak, S Mitra, R W Yee', u'keywords': u'asthenopia, computer vision syndrome, dry eye, ergonomics, eyestrain, glare, video display terminals, computer vision, vision', u'id': u'575aff6012a085663bfeeff7'}, {u'is-saved': False, u'title': u'Social impact of computer vision', u'abstract': u"From the viewpoint of the economic growth theorist, the broad social impact of improving computer vision should be to improve people's material well-being. Developing computer vision entails building knowledge of perception and interpretation into new devices which enhance the scope and depth of human capability. Some worry that saving lives and replacing tedious jobs through computer vision will burden society with increasing population and unemployment; such worries are unjustified because humans are ''the ultimate resource.'' Because development of computer vision has costs as well as benefits, developers who wish to have a positive social impact should pursue projects that promise to pay off in the open market, and should seek private instead of government funding as much as possible.", u'date': u'1997-01-01T00:00:00', u'publication-forum': u'EMERGING APPLICATIONS OF COMPUTER VISION - 25TH AIPR WORKSHOP', u'publication-forum-type': u'proceedings paper', u'authors': u'H Baetjer', u'keywords': u'computer vision, economic growth, capital, population, employment, funding, profit, perception, vision', u'id': u'575aff6012a085663bfef000'}, {u'is-saved': False, u'title': u'Nondestructive testing of specularly reflective objects using reflection three-dimensional computer vision technique', u'abstract': u'We review an optical method referred to as 3-D computer vision technique for nondestructive inspection of three-dimensional objects whose surfaces are specularly reflective. In the setup, a computer-generated cosinusoidal fringe pattern in the form of linear, parallel fringe lines of equal spacing is displayed on a TV monitor. The monitor is placed in front of the test object, whose specularly reflective surface behaves as a mirror. A virtual image (or mirror image) of the fringe lines is thus formed. For a planar surface, the fringe pattern of the image is undistorted. The fringe lines, however, are distorted according to the slope distribution if the surface is not flat. By digitizing the distorted fringe lines, employing a phase-shift technique, the fringe phase distribution is determined, hence enabling subsequent determination of the surface slope distribution. When applied to nondestructive flaw detection, two separate recordings of the virtual image of the fringe lines are made, one before and another after an incremental loadis applied on the test object. The difference of the two phase-fringe distributions, or the phase change, represents the change in surface slope of the object due to the deformation. As a subsurface flaw also affects surfacedeformation, both surface and subsurface flaws are thus revealed from anomalies in the surface slope change. The method is simple, robust, and applicable in industrial environments. 
(C) 2003 Society of Photo-Optical Instrumentation Engineers.', u'date': u'2003-01-01T00:00:00', u'publication-forum': u'OPTICAL ENGINEERING', u'publication-forum-type': u'article', u'authors': u'M YY Hung, H M Shang', u'keywords': u'machine vision, computer vision, optical measurement, nondestructive testing, surface quality evaluation, vision', u'id': u'57ac8b0712a0856bc72d8cca'}, {u'is-saved': False, u'title': u'Hybrid optoelectronic processing and computer vision techniques for intelligent debris analysis', u'abstract': u'Intelligent Debris Analysis (IDA) requires significant time and resources due to the large number of images to be processed. To address this problem,we propose a hybrid optoelectronic and computer vision approach. Two majorsteps are involved for IDA: patch-level analysis and particle level analysis. An optoelectronic detection system using two ferroelectric liquid crystal spatial light modulators is designed and constructed to perform patch-level analysis, and advanced computer vision techniques are developed to carry out more intelligent particle-level analysis. Since typically only a small portion of the debris filters require more sophisticated particle-level analysis, the proposed approach enables high-speed automated analysis of debris fitters due to the inherent parallelism provided by the optoelectronic system.', u'date': u'1998-01-01T00:00:00', u'publication-forum': u'ALGORITHMS, DEVICES, AND SYSTEMS FOR OPTICAL INFORMATION PROCESSING', u'publication-forum-type': u'article', u'authors': u'Q MJ Wu, C P Grover, A Dumitras, D Liew, A Jerbi', u'keywords': u'optical information processing, computer vision, image analysis, intelligent debris analysis, automation, vision', u'id': u'57bed02e12a0850d372e5f17'}, {u'is-saved': False, u'title': u'Vlfeat an open and portable library of computer vision algorithms', u'url': u'http://portal.acm.org/citation.cfm?id=1874249', u'abstract': u'VLFeat is an open and portable library of computer vision algorithms. It aims at facilitating fast prototyping and reproducible research for computer vision scientists and students. It includes rigorous implementations of common building blocks such as feature detectors, feature extractors, (hierarchical) k-means clustering, randomized kd-tree matching, and super-pixelization. The source code and interfaces are fully documented. 
The library integrates directly with MATLAB, a popular language for computer vision research.', u'date': u'2010-01-01T00:00:00', u'publication-forum': u'International Multimedia Conference', u'publication-forum-type': u'conference', u'authors': u'Andrea Vedaldi, Brian Fulkerson', u'keywords': u'computer vision, image classification, object recognition, visual features, research, matching, vision', u'id': u'575aff6012a085663bfef012'}], u'keywords_local': {u'object recognition': {u'distance': 0.7300072813671763, u'angle': 96.66553497533552}, u'computer graphics': {u'distance': 0.7450305181430191, u'angle': 175.1162951377983}, u'graph': {u'distance': 0.6625181921678064, u'angle': 117.37932095235796}, u'reconfigurability': {u'distance': 0.5679946595851635, u'angle': 0.0}, u'course design': {u'distance': 0.8031378823919815, u'angle': 98.29399495312194}, u'research': {u'distance': 0.6153281573320046, u'angle': 137.52338924477087}, u'computer vision': {u'distance': 1.0, u'angle': 112.02639294117806}, u'image analysis': {u'distance': 0.5832147382377356, u'angle': 180.0}, u'education': {u'distance': 0.6887723921268714, u'angle': 111.53630557659233}, u'vision': {u'distance': 0.7595244667669305, u'angle': 136.46185691516604}}, u'keywords_semi_local': {u'glare': {u'distance': 0.15840304799865776, u'angle': 78.75687844118187}, u'neural networks': {u'distance': 0.2544935361506226, u'angle': 96.66553497533552}, u'robotics': {u'distance': 0.4166449886657276, u'angle': 157.7235521114761}, u'vision engineering': {u'distance': 0.23569778705554037, u'angle': 171.53672248243535}, u'ergonomics': {u'distance': 0.15840304799865776, u'angle': 78.75687844118185}, u'image classification': {u'distance': 0.49063995949774913, u'angle': 174.2087227649092}, u'obstacle detection': {u'distance': 0.3377380417460496, u'angle': 131.16295137798312}, u'employment': {u'distance': 0.384487541472409, u'angle': 130.97859424470386}, u'biological vision processes': {u'distance': 0.23569778705554037, u'angle': 171.53672248243535}, u'representation hierarchy': {u'distance': 0.3988520421657333, u'angle': 147.61573760202845}, u'chirplet transform': {u'distance': 0.4708379531993934, u'angle': 180.0}, u'sensor placement graph': {u'distance': 0.582323859465567, u'angle': 130.94290603000658}, u'image processing': {u'distance': 1.0, u'angle': 137.52338924477087}, u'gpu': {u'distance': 0.4708379531993934, u'angle': 180.0}, u'high school teachers': {u'distance': 0.11613705056778983, u'angle': 0.0}, u'mediated reality': {u'distance': 0.4708379531993934, u'angle': 180.0}, u'visual odometry': {u'distance': 0.3377380417460496, u'angle': 131.16295137798312}, u'distributed vision': {u'distance': 0.3496513364802925, u'angle': 95.29639511600689}, u'three dimensional representations': {u'distance': 0.3988520421657333, u'angle': 147.61573760202845}, u'computer science education': {u'distance': 0.11613705056778983, u'angle': 0.0}, u'reconfigurable computing': {u'distance': 0.2859569313350985, u'angle': 102.37579920408332}, u'teaching': {u'distance': 0.628788707830077, u'angle': 152.95762642133994}, u'population': {u'distance': 0.5759898343270131, u'angle': 156.61518332125544}, u'tracking': {u'distance': 0.3789181276549178, u'angle': 149.6144887690062}, u'object modelling': {u'distance': 0.3988520421657333, u'angle': 147.61573760202845}, u'potential fields': {u'distance': 0.24188910369708108, u'angle': 173.2111784523935}, u'asthenopia': {u'distance': 0.15840304799865776, u'angle': 78.75687844118185}, u'physical assumptions': {u'distance': 0.0, u'angle': 
71.55289280630292}, u'perception': {u'distance': 0.43562497018771207, u'angle': 130.60310326514812}, u'eyestrain': {u'distance': 0.15840304799865776, u'angle': 78.75687844118185}}, u'inputs': [[u'hci', 1.0, 0.6142454219528725, 0.07306666297061, 0.0800478407947], [u'design', 0.0, 0.5468406837422238, 0.08760202801780537, 0.01], [u'usefulness', 1.0, 0.4562214561022063, 0.04479820099963043, 0.0656453052827], [u'graph', 1.0, 0.6448427829817995, 0.054873346672524956, 0.0374344673723], [u'reconfigurability', 0.0, 0.2808456351828042, 0.11391946280676753, 0.0436526373409], [u'computer vision', 1.0, 1.0, 0.9907708479613715, 1.0], [u'course design', 1.0, 0.6722828427604761, 0.087227437324513, 0.155838952273], [u'ergonomics', 0.0, 0.3744481774120078, 0.13317889968218008, 0.0638618603466], [u'reconfigurable computing', 0.0, 0.13771106030222424, 0.11923776509308054, 0.0473135364178], [u'mediated reality', 0.0, 0.4562214561022063, 0.16472183049428685, 0.104860423781], [u'education', 0.0, 0.3808226715583354, 0.08382258150001105, 0.01], [u'image processing', 0.0, 0.48646855984497794, 0.11701636909528229, 0.0635854457852], [u'fingerprint matching', 1.0, 0.3497471833044457, 0.032857452254179007, 0.0294450507842], [u'vision', 0.0, 0.8841315712906007, 0.04888374610072927, 0.01]]}

Error:

  File "online-analysis-script.py", line 67, in calculateStats
    data = json.loads(data)
  File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 339, in loads
    return _default_decoder.decode(s)
  File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 367, in decode
    raise ValueError(errmsg("Extra data", s, end, len(s)))
ValueError: Extra data: line 1 column 5 - line 1 column 50 (char 4 - 49)

Problem: I am reading a text file, and when I try to parse a line that contains JSON so I can get the input fields, json.loads throws the decoding error shown above. How can I achieve this functionality?

It seems that your line is not 100% JSON. It looks like the line was created by a "print (container)" statement in a previous Python script whose output was redirected to this file, hence the telltale u in front of the strings.
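To see why, here is a minimal reproduction (assuming Python 2, where printing a dict emits u'' prefixes and single quotes, both of which are invalid in JSON):

    import json

    container = {u'inputs': [[u'hci', 1.0]]}
    line = str(container)   # what "print (container)" wrote to the log file
    # line is "{u'inputs': [[u'hci', 1.0]]}" -- u-prefix and single quotes
    json.loads(line)        # raises ValueError: this is a Python repr, not JSON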

The solution is easy: go back to your previous Python script and do this instead:

print (json.dumps(container))

And then run the script and redirect the output to a file.
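For illustration, here is a minimal sketch of both sides of that fix (the file name Online_R1.txt, the timestamp, and the contents of container are placeholders, not taken from the original scripts):

    import json

    # stand-in for the dict the producing script builds (hypothetical shape)
    container = {"inputs": [["hci", 1.0, 0.61], ["design", 0.0, 0.55]]}

    # producer side: json.dumps writes real JSON instead of a Python repr
    with open("Online_R1.txt", "w") as log:
        log.write("2016-09-12 16:31:50 ONLINE_DATA_RECEIVED_FROM_INFROMATION_RETRIVAL_SYSTEM\n")
        log.write(json.dumps(container) + "\n")

    # consumer side: the payload line now parses cleanly
    with open("Online_R1.txt", "r") as log:
        lines = log.readlines()
    payload = json.loads(lines[1])
    print payload["inputs"]

If you cannot regenerate the existing log files, the standard library's ast.literal_eval is an alternative to the JSON fix above: the logged text is a valid Python literal, so data = ast.literal_eval(line) parses it directly.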
