
tensorflow kmeans doesn't seem to take new initial points

I am trying to find the best clustering of my data by running many k-means trials on TensorFlow and keeping the result with the smallest average distance.

But my code does not update the initial centroids on each trial, so all the results come out identical.

Here is my code 1 - tensor_kmeans.py:

import numpy as np
import pandas as pd
import random
import tensorflow as tf
from tensorflow.contrib.factorization import KMeans
from sklearn import metrics
import imp
import pickle

# load as DataFrame
pkl = 'fasttext_words_k.pkl'
with open(pkl, 'rb') as f:
    unique_words_in_fasttext = pickle.load(f).T

vector =[]
for i in range(len(unique_words_in_fasttext)):
    vector.append(list(unique_words_in_fasttext.iloc[i,:]))
vector = [np.array(f) for f in vector ]


# Import data
full_data_x = vector


# Parameters
num_steps = 100 # Total steps to train
batch_size = 1024 # The number of samples per batch
n_clusters = 1300 # The number of clusters
num_classes = 100 # The number of label classes
num_rows = 13074
num_features = 300 # Dimensionality of each word vector


### tensor kmeans ###

# Input word vectors
X = tf.placeholder(tf.float32, shape=[None , num_features])
# Labels (for assigning a label to a centroid and testing)
# Y = tf.placeholder(tf.float32, shape=[None, num_classes])


# K-Means Parameters
kmeans = KMeans(inputs=X, num_clusters=n_clusters, distance_metric='cosine',
                use_mini_batch=True, initial_clusters="random")


# Build KMeans graph
training_graph = kmeans.training_graph()

if len(training_graph) > 6: # Tensorflow 1.4+
    (all_scores, cluster_idx, scores, cluster_centers_initialized,
     cluster_centers_var, init_op, train_op) = training_graph
else:
    (all_scores, cluster_idx, scores, cluster_centers_initialized,
     init_op, train_op) = training_graph

cluster_idx = cluster_idx[0] # fix for cluster_idx being a tuple
avg_distance = tf.reduce_mean(scores)

# Initialize the variables (i.e. assign their default value)
init_vars = tf.global_variables_initializer()

# Start TensorFlow session
sess = tf.Session()

# Run the initializer
sess.run(init_vars, feed_dict={X: full_data_x})
sess.run(init_op, feed_dict={X: full_data_x})

# Training
for i in range(1, num_steps + 1):
    _, d, idx = sess.run([train_op, avg_distance, cluster_idx],
                         feed_dict={X: full_data_x})
    if i % 10 == 0 or i == 1:
        print("Step %i, Avg Distance: %f" % (i, d))


labels = list(range(num_rows))
# Assign a label to each centroid
# Count total number of labels per centroid, using the label of each training
# sample to their closest centroid (given by 'idx')
counts = np.zeros(shape=(n_clusters, num_classes))
for i in range(len(idx)):
    counts[idx[i]] += labels[i]

# Assign the most frequent label to the centroid
labels_map = [np.argmax(c) for c in counts]
labels_map = tf.convert_to_tensor(labels_map)


# Evaluation ops
# Lookup: centroid_id -> label
cluster_label = tf.nn.embedding_lookup(labels_map, cluster_idx)


# assign variables
cluster_list_k = idx

And here is the code outside of code 1:

k_li=[]
rotation = 50

best_labels = []
best_k = -1
for i in range(rotation):
    import tensor_kmeans    
    k_li.append(tensor_kmeans.k)
    if len(k_li) > 0:
        for i in range(len(k_li)):
            if k_li[i] > best_k:
                best_labels = tensor_kmeans.cluster_list_k
                best_k = k_li[i]
    tensor_kmeans = imp.reload(tensor_kmeans)

Where is the problem? I'm waiting for your answer, thank you.

Every time you call KMeans(), you should use a new random_seed, i.e.

kmeans = KMeans(inputs=X, num_clusters=n_clusters, distance_metric='cosine',
                use_mini_batch=True, initial_clusters="random", random_seed=SOME_NEW_VALUE)

Otherwise, KMeans() assumes random_seed=0 so that the results are reproducible (i.e., always the same).

A simple way to solve your problem is to turn code 1 - tensor_kmeans.py into a function, and then call that function once per trial with a new random_seed passed in as an argument.
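
As a rough illustration of that suggestion, here is a minimal sketch built from the code in the question, assuming the same tf.contrib.factorization.KMeans API. The function name run_kmeans_trial, the placeholder data, and the driver loop are illustrative and not part of the original code:

import numpy as np
import tensorflow as tf
from tensorflow.contrib.factorization import KMeans


def run_kmeans_trial(data, n_clusters, num_features, random_seed, num_steps=100):
    """Run one k-means trial with its own random_seed and return
    (final average distance, cluster index per sample)."""
    tf.reset_default_graph()  # build a fresh graph for every trial

    X = tf.placeholder(tf.float32, shape=[None, num_features])
    kmeans = KMeans(inputs=X, num_clusters=n_clusters, distance_metric='cosine',
                    use_mini_batch=True, initial_clusters="random",
                    random_seed=random_seed)  # new seed -> new initial centroids

    training_graph = kmeans.training_graph()
    if len(training_graph) > 6:  # TensorFlow 1.4+
        (all_scores, cluster_idx, scores, cluster_centers_initialized,
         cluster_centers_var, init_op, train_op) = training_graph
    else:
        (all_scores, cluster_idx, scores, cluster_centers_initialized,
         init_op, train_op) = training_graph

    cluster_idx = cluster_idx[0]
    avg_distance = tf.reduce_mean(scores)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer(), feed_dict={X: data})
        sess.run(init_op, feed_dict={X: data})
        d, idx = None, None
        for _ in range(num_steps):
            _, d, idx = sess.run([train_op, avg_distance, cluster_idx],
                                 feed_dict={X: data})
    return d, idx


# Driver: run several trials, each with a different seed, and keep the one
# with the smallest average distance. Replace the placeholder data with the
# word vectors built in code 1 (full_data_x) and the real cluster count.
data = np.random.rand(500, 300).astype(np.float32)  # placeholder data
best_d, best_idx = float('inf'), None
for seed in range(10):
    d, idx = run_kmeans_trial(data, n_clusters=50, num_features=300,
                              random_seed=seed)
    print("seed %d, avg distance %f" % (seed, d))
    if d < best_d:
        best_d, best_idx = d, idx

With this structure, tf.reset_default_graph() gives each trial a fresh graph, so the imp.reload() workaround in the second snippet is no longer needed.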
