I am trying to find a "pythonic" way of taking a small subset of a very large array in Python.
I am currently reading in a CSV with 58 columns and 4960 rows using the following code:
import csv

def import_normal_csv(file):
    # Create blank array
    results = []
    # Open file
    with open(file) as csvfile:
        # Read in the file, converting values to floats
        reader = csv.reader(csvfile, quoting=csv.QUOTE_NONNUMERIC)
        for row in reader:
            results.append(row)
    return results
import random

def main():
    print(" Working SPAM Dataset... ")
    # Create a raw data array without numpy
    spam_raw_data = import_normal_csv('spam.csv')
    # CREATE SUBSET OF SPAM_RAW_DATA HERE
    random.shuffle(spam_raw_data)
I have seen various ways to do this using numpy or pandas, but I would like to do this natively, without those libraries. Instead of my massive array, how could I take in only, say, 500 rows (or some arbitrary number significantly smaller than nearly 5000)?
You can use the builtin random library, for example:

import random

subset = random.sample(data, 500)

This will give you a list of 500 lists, each representing one row.
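A quick self-contained check of the approach above (the data list here is a hypothetical stand-in for the parsed CSV, not the asker's actual file):

```python
import random

# Hypothetical data: 5000 rows of 3 floats each, standing in
# for the list of rows returned by import_normal_csv.
data = [[float(i), float(i) * 2.0, float(i) * 3.0] for i in range(5000)]

# Take 500 rows uniformly at random, without replacement.
subset = random.sample(data, 500)

print(len(subset))     # 500
print(len(subset[0]))  # 3 -- each element is still a full row
```

Because random.sample draws without replacement, the subset contains 500 distinct rows from the original list, and the original list is left unmodified (unlike random.shuffle, which reorders it in place).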
Use random.sample:

import random

subset_size = 500
subset = random.sample(spam_raw_data, subset_size)
Also note that your import_normal_csv function can be simplified:
def import_normal_csv(file):
    with open(file) as csvfile:
        reader = csv.reader(csvfile, quoting=csv.QUOTE_NONNUMERIC)
        return list(reader)
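Putting the two together, here is a runnable sketch. It writes a small throwaway CSV to a temp directory in place of spam.csv (the 100x4 numeric contents are made up for the demo):

```python
import csv
import os
import random
import tempfile

def import_normal_csv(file):
    # Simplified loader: the reader is already an iterable of rows,
    # so list() collects them directly.
    with open(file) as csvfile:
        reader = csv.reader(csvfile, quoting=csv.QUOTE_NONNUMERIC)
        return list(reader)

# Hypothetical stand-in for spam.csv: 100 rows of 4 numeric columns.
path = os.path.join(tempfile.mkdtemp(), 'spam.csv')
with open(path, 'w', newline='') as f:
    writer = csv.writer(f)
    for i in range(100):
        writer.writerow([i, i * 0.5, i * 2, i % 7])

rows = import_normal_csv(path)
subset = random.sample(rows, 10)

print(len(rows))    # 100
print(len(subset))  # 10
```

With QUOTE_NONNUMERIC, the reader converts every unquoted field to a float, so each row comes back as a list of floats with no extra conversion step.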