DeepLearning4j example compilation error
I ran into a few problems while programming with DeepLearning4j. When I open and compile the example MnistMultiThreadedExample in Eclipse, these errors occur:
import org.deeplearning4j.datasets.iterator.impl.MnistDataSetIterator;
import org.deeplearning4j.datasets.test.TestDataSetIterator;
import org.deeplearning4j.iterativereduce.actor.multilayer.ActorNetworkRunner;**(error)**
import org.deeplearning4j.models.classifiers.dbn.DBN;**(error)**
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.scaleout.conf.Conf;**(error)**
Eclipse reports that these packages cannot be resolved. I could not find these modules in the project, nor in the Maven Central Repository, and I could not find the classes in the source code either.
Now I want to know how to get these modules, and what I should do before creating an AutoEncoder that can run on Spark.
The example code is shown below:
import org.deeplearning4j.datasets.iterator.impl.MnistDataSetIterator;
import org.deeplearning4j.datasets.test.TestDataSetIterator;
import org.deeplearning4j.iterativereduce.actor.multilayer.ActorNetworkRunner;
import org.deeplearning4j.models.classifiers.dbn.DBN;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.scaleout.conf.Conf;

public class MnistMultiThreadedExample {
    public static void main(String[] args) throws Exception {
        // iterate over the 60000 MNIST examples in mini-batches of 20
        MnistDataSetIterator mnist = new MnistDataSetIterator(20, 60000);
        TestDataSetIterator iter = new TestDataSetIterator(mnist);
        ActorNetworkRunner runner = new ActorNetworkRunner(iter);
        NeuralNetConfiguration conf2 = new NeuralNetConfiguration.Builder()
                .nIn(784).nOut(10).build();
        Conf conf = new Conf();
        conf.setConf(conf2);
        conf.getConf().setFinetuneEpochs(1000);
        conf.setLayerSizes(new int[]{500, 250, 100});
        conf.setMultiLayerClazz(DBN.class);
        conf.getConf().setnOut(10);
        conf.getConf().setFinetuneLearningRate(0.0001f);
        conf.getConf().setnIn(784);
        conf.getConf().setL2(0.001f);
        conf.getConf().setMomentum(0.5f);
        conf.setSplit(10);
        conf.getConf().setUseRegularization(false);
        conf.setDeepLearningParams(new Object[]{1, 0.0001, 1000});
        runner.setup(conf);
        runner.train();
    }
}
You should add the following dependency to your POM:
<dependency>
    <groupId>org.deeplearning4j</groupId>
    <artifactId>deeplearning4j-scaleout-akka</artifactId>
    <version>0.0.3.3</version>
</dependency>
This will add deeplearning4j-scaleout-api and deeplearning4j-core as transitive dependencies. Those three dependencies will provide the imports you are missing.