
Machine learning: scaling a feature vector and its derivative

I have a training set of feature vectors whose components are obtained by evaluating some function over a discrete set of x's. There is also a second feature vector, obtained by differentiating the first one. I want to train a model using both types of feature vectors at the same time. The difficulty I'm having is that I don't know how to scale both vectors properly (ideally to the 0-1 range).

So far I have tried using scikit-learn's MinMaxScaler on the first type of vectors and then reusing the same MinMaxScaler object on the derivative vectors, but this does not seem to work.
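Roughly what I tried, in simplified form (the sine data here is just a synthetic stand-in for my real feature vectors):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for my real data: each row is one sample,
# each column is the function evaluated at one point of the x grid.
x = np.linspace(0.0, 1.0, 50)
F = np.array([np.sin(2 * np.pi * k * x) for k in range(1, 11)])  # shape (10, 50)
dF = np.gradient(F, x, axis=1)                                   # derivative features

scaler = MinMaxScaler()                 # default feature_range is (0, 1)
F_scaled = scaler.fit_transform(F)      # fit on the original feature vectors
dF_scaled = scaler.transform(dF)        # reuse the same scaler on the derivatives

print(dF_scaled.min(), dF_scaled.max())  # not confined to [0, 1]
```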

Please note that I don't have much experience in ML.

When it comes to ML, we need to preprocess the dataset according to the algorithm we use. There are mainly four methods used to preprocess data, and one of them is rescaling. When we rescale the data, we usually map the whole dataset into the range 0 to 1.

MinMaxScaler is a transformer provided by scikit-learn that rescales the data into the range 0 to 1 (by default).

Use MinMaxScaler to rescale the data. It is by far the most commonly used method for rescaling.
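For example, here is a minimal sketch under one possible setup (the arrays `F` and `dF` are placeholders standing in for your function-value features and derivative features): give each feature block its own scaler so that both end up in the 0-1 range.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Placeholder data, shape (n_samples, n_points): F holds the function values,
# dF holds the derivative features, which live on a different numeric range.
rng = np.random.default_rng(0)
F = rng.random((100, 50))
dF = rng.random((100, 50)) * 10 - 5

# Fit one scaler per feature block so each block is mapped to [0, 1]
# using its own min/max.
f_scaler = MinMaxScaler(feature_range=(0, 1))
df_scaler = MinMaxScaler(feature_range=(0, 1))
F_scaled = f_scaler.fit_transform(F)
dF_scaled = df_scaler.fit_transform(dF)

# Concatenate the scaled blocks into one training matrix.
X = np.hstack([F_scaled, dF_scaled])

# At prediction time, reuse the fitted scalers on the new data:
# X_new = np.hstack([f_scaler.transform(F_new), df_scaler.transform(dF_new)])
```

Because MinMaxScaler rescales each feature column independently, fitting a single scaler on the concatenated matrix `np.hstack([F, dF])` gives the same result. What does not work is fitting on `F` alone and then transforming `dF` with that fitted scaler, since the derivatives have a different min and max than the values the scaler was fitted on.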

FYI, you can use rescaling as a preprocessing step for neural networks, regression, and k-nearest neighbours algorithms (as far as I know).
