
Is it possible to remove categories in a pretrained tensorflow model?

I am currently using the Tensorflow Object Detection API for my human detection app. I tried filtering in the API itself, which worked, but I am still not content with it because it's slow. So I'm wondering if I could remove the other categories from the model itself to make it faster as well.

If that is not possible, can you please give me other suggestions to make the API faster, since I will be using two cameras. Thanks in advance, and pardon my English :)

Your question touches several topics around using pretrained neural network models.

Theoretical methods

  1. In general, you can always neutralize categories by removing the corresponding neurons from the softmax layer and computing a new softmax only over the relevant rows of the weight matrix.
    This method will surely work (maybe that is what you meant by filtering) but will not accelerate the network's computation time by much, since most of the FLOPs (multiplications and additions) will remain.

  2. Similar to decision trees, pruning is possible but may reduce performance. I will explain what pruning means below; note that the accuracy over your remaining categories may be preserved, since you are not just trimming, you are also predicting fewer categories.

  3. Transfer the learning to your problem. See Stanford's computer vision course here . What I have most often seen work well is keeping the convolutional layers as-is and preparing a medium-size dataset of the objects you'd like to detect.
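A minimal NumPy sketch of method 1, assuming access to the penultimate-layer features and the softmax layer's weight matrix (all sizes and the kept class indices below are hypothetical):

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 90))   # hypothetical softmax-layer weights: 256 features -> 90 classes
b = rng.normal(size=90)          # biases
features = rng.normal(size=256)  # output of the network's penultimate layer

keep = [0, 15]  # hypothetical indices of the categories you want to keep

# Slice out only the relevant columns/biases and recompute softmax over them.
W_small, b_small = W[:, keep], b[keep]
probs = softmax(features @ W_small + b_small)

assert probs.shape == (2,)
assert np.isclose(probs.sum(), 1.0)
```

This removes the unwanted output neurons, but as noted above, the bulk of the computation (the convolutional layers) is untouched, so the speedup is small.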

I will add more theoretical methods if you request them, but the above are the most common and accurate ones I know.
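As a hedged sketch of the transfer-learning idea in method 3: the frozen backbone below is only a stand-in for the pretrained convolutional layers, the dataset is synthetic, and only the new classification head is trained:

```python
import numpy as np

rng = np.random.default_rng(1)

def frozen_backbone(images):
    # Stand-in for the pretrained convolutional layers (kept as-is).
    # A real app would run the pretrained model up to its last conv/pool layer.
    return images.reshape(len(images), -1)

images = rng.normal(size=(200, 8, 8))   # hypothetical medium-size dataset
labels = rng.integers(0, 2, size=200)   # binary labels: person / not person

X = frozen_backbone(images)             # fixed features from the frozen layers
w, b = np.zeros(X.shape[1]), 0.0        # only the new head's parameters are trained

for _ in range(100):                    # plain logistic-regression gradient steps
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - labels
    w -= 0.1 * X.T @ grad / len(X)
    b -= 0.1 * grad.mean()

# Training loss should have dropped below its starting value of ln(2).
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
loss = -np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p))
assert loss < np.log(2)
```

The design point is that the expensive feature extractor is reused unchanged; only a small head is fit to your two-class (person / not person) problem.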

Practical methods

  1. Make sure you are serving your Tensorflow model, and not just running inference from Python code. This can significantly improve performance.

  2. You can export the parameters of the network and load them in a faster framework such as CNTK or Caffe . These frameworks work in C++/C# and can run inference much faster. Make sure you load the weights correctly; some frameworks use a different tensor dimension order when saving/loading (little/big-endian-like issues).

  3. If your application performs inference on several images, you can distribute the computation across several GPUs. This can also be done in Tensorflow; see Using GPUs .
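The tensor-dimension caveat in item 2 can be illustrated with NumPy. The layouts assumed below (Tensorflow conv kernels: height, width, in-channels, out-channels; Caffe: out-channels, in-channels, height, width) are from memory, so verify them against each framework's documentation before porting real weights:

```python
import numpy as np

# Hypothetical Tensorflow conv kernel: (height, width, in_ch, out_ch)
tf_kernel = np.arange(3 * 3 * 16 * 32).reshape(3, 3, 16, 32)

# Reorder to a Caffe-style layout: (out_ch, in_ch, height, width)
caffe_kernel = tf_kernel.transpose(3, 2, 0, 1)
assert caffe_kernel.shape == (32, 16, 3, 3)

# Round-trip back to the Tensorflow layout to check nothing was lost.
assert np.array_equal(caffe_kernel.transpose(2, 3, 1, 0), tf_kernel)
```

A silent layout mismatch will not crash; it just produces garbage predictions, which is why a round-trip check like this is worth doing.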

Pruning a neural network

Maybe this is the most interesting method for adapting big networks to simple tasks. You can see a beginner's guide here .

Pruning means that you remove parameters from your network, specifically whole nodes (in a decision tree) or neurons (in a neural network). To do that in object detection, the simplest way is as follows:

  1. Randomly prune neurons from the fully connected layers.
  2. Train one more epoch (or more) with a low learning rate, only on the objects you'd like to detect.
  3. (optional) Perform the above several times, validate, and choose the best network.
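Step 1 above can be sketched in NumPy (the layer sizes and pruning fraction are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical fully connected layer: 512 inputs -> 256 neurons.
W = rng.normal(size=(512, 256))
b = rng.normal(size=256)

prune_fraction = 0.3
n_keep = int(W.shape[1] * (1 - prune_fraction))
keep = rng.choice(W.shape[1], size=n_keep, replace=False)

# Dropping whole neurons means dropping columns of W and their biases.
W_pruned, b_pruned = W[:, keep], b[keep]
assert W_pruned.shape == (512, 179)
assert b_pruned.shape == (179,)

# Steps 2-3 would then fine-tune W_pruned with a low learning rate on the
# person-only data, repeating with different random masks and keeping the
# network that validates best.
```

Note that any layer consuming this layer's output must have its corresponding input rows sliced with the same `keep` indices, otherwise the shapes no longer match.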

The above procedure is the most basic one, but you can find plenty of papers that suggest algorithms for it, for example Automated Pruning for Deep Neural Network Compression and An iterative pruning algorithm for feedforward neural networks .

Wish you the best of luck with your task!
