
Jupyter Kernel is busy after code execution

I'm using Jupyter Notebook with conda and Python 3. Recently, the kernel stays busy even after code execution has finished, and execution takes far longer than usual. I have been searching around but found nothing. Any suggestions?

Edit: I'm sorry for being too general. I am trying to identify the problem myself, so any pointers would be appreciated. After re-running the code a few times, it seems to happen whenever I run the following block:

train_X = np.array(train_X)
train_Y = np.array(train_Y)

The preceding code is as follows:

# In[1]:
import numpy as np   # needed for np.array in In[7]; missing in the original
import pandas as pd
from collections import OrderedDict

# In[2]:   
df = pd.read_csv('df.csv')
people_list = df['ID'].unique()
product_list = df['product'].unique() 

# Out[2]:
    ID  product     M1  M2  M3  class
0   0   A           1   2   6   1
1   1   B           2   3   7   1
2   2   C           3   4   3   0
3   0   C           4   3   2   1
4   1   A           5   4   3   1
5   2   B           6   6   1   0  

# In[3]:    
people_dict = {}
target_dict = {}

for i in range(len(people_list)):
    key = people_list[i]
    new_df = df[df['ID'] == people_list[i]]
    new_df = new_df.transpose()
    new_df.columns = new_df.iloc[1]
    new_df = new_df[2:-1]   
    people_dict[key] = new_df
    target_dict[key] = df.iat[i, 5]

for key in people_dict.keys():
    for i in product_list:
        if i not in people_dict[key].columns:
            people_dict[key][i] = [0]*3
    people_dict[key] = people_dict[key].reindex(sorted(people_dict[key].columns), axis = 1)

# In[5]:    
people_values = OrderedDict()
target_values = OrderedDict()

# extract the value of the dataframe
for key in people_dict.keys():
    people_values[key] = people_dict[key].values
    target_values[key] = target_dict[key]

# In[6]:
n_samples = 1
timesteps = 3
n_features = 3

train_input = list(people_values.values())
train_target = list(target_values.values())

train_X = []
train_Y = []

for i in range(len(train_input)):
    train_X.append(train_input[i])
    train_Y.append(train_target[i])

# In[7]:
train_X = np.array(train_X)
train_Y = np.array(train_Y)
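For readers without `df.csv`, the sample frame shown in `Out[2]` can be rebuilt inline (a hypothetical reconstruction, not the real 60k-row data), which also makes it easy to check what one person's matrix looks like after the `In[3]` transpose step:

```python
import pandas as pd

# Hypothetical reconstruction of the sample df.csv shown above,
# so the pipeline can be run without the original file.
df = pd.DataFrame({
    'ID':      [0, 1, 2, 0, 1, 2],
    'product': ['A', 'B', 'C', 'C', 'A', 'B'],
    'M1':      [1, 2, 3, 4, 5, 6],
    'M2':      [2, 3, 4, 3, 4, 6],
    'M3':      [6, 7, 3, 2, 3, 1],
    'class':   [1, 1, 0, 1, 1, 0],
})

# One person's frame, as built in In[3]: after transposing, the
# columns are that person's products and the rows are M1..M3.
new_df = df[df['ID'] == 0].transpose()
new_df.columns = new_df.iloc[1]
new_df = new_df[2:-1]
print(new_df.shape)   # (3, 2): person 0 bought products A and C only
```

Note that because the original frame mixes strings and integers, the transposed frame carries `dtype=object`, so `new_df.values` is an object array rather than a numeric one.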

Essentially, I am trying to do some classification with a Keras LSTM: the input is the historical sales of one person, and the output is their class, 'good' or 'bad'.

The real dataset has 60k rows, but I simplified it here so everyone can follow along more easily. When I worked with this dataset previously, I never encountered this issue.

Any suggestions are greatly appreciated, thank you.

It turns out that it is just an issue with converting a variable to a NumPy array, as shown here. I just worked around it.
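One possible workaround, sketched below under the assumption that the slowness comes from `np.array` being called on a list of `dtype=object` matrices (the transposed frames are object-typed): cast each matrix to a numeric dtype first, then stack.

```python
import numpy as np

# Stand-in data mimicking list(people_values.values()): a list of
# 3x3 object-dtype matrices, as DataFrame.values yields for the
# transposed mixed-dtype frames.
train_input = [np.ones((3, 3), dtype=object) for _ in range(4)]

# Casting each matrix to float64 before stacking avoids the slow
# object-array conversion path inside np.array.
train_X = np.stack([np.asarray(m, dtype=np.float64) for m in train_input])
print(train_X.shape, train_X.dtype)   # (4, 3, 3) float64
```

A numeric `float64` array of shape `(samples, timesteps, features)` is also the layout a Keras LSTM expects as input.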
