
RMSE (root mean square deviation) calculation in R

I have numeric feature observations V1 through V12 taken for a target variable, Wavelength. I would like to calculate the RMSE between the Vx columns. The data format is below.

Each variable "Vx" is measured at a 5-minute interval. I would like to calculate the RMSE between the observations of all Vx variables, how do I do that?


This is a link I found, but I'm not sure how I can get y_pred: https://www.kaggle.com/wiki/RootMeanSquaredError

For the link provided, I don't think I have the predicted values: http://heuristically.wordpress.com/2013/07/12/calculate-rmse-and-mae-in-r-and-sas/

The function below will give you the RMSE:

RMSE = function(m, o){
  sqrt(mean((m - o)^2))
}

m is for model (fitted) values, o is for observed (true) values.
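For example, with small illustrative vectors (the values here are made up):

```r
# RMSE between model (fitted) values m and observed (true) values o
RMSE <- function(m, o) {
  sqrt(mean((m - o)^2))
}

m <- c(1.1, 2.0, 2.9)  # model (fitted) values
o <- c(1.0, 2.0, 3.0)  # observed (true) values
RMSE(m, o)             # sqrt(mean(c(0.01, 0, 0.01))) ≈ 0.0816
```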

To help, I just wrote these functions:

#Fit a model
fit <- lm(Fertility ~ . , data = swiss)

# Function for Root Mean Squared Error
RMSE <- function(error) { sqrt(mean(error^2)) }
RMSE(fit$residuals)

# If you want, say, MAE, you can do the following:

# Function for Mean Absolute Error
mae <- function(error) { mean(abs(error)) }
mae(fit$residuals)

I hope it helps.

How to compute RMSE in R.

See my other 97+ upvoted canonical answer for doing RMSE in Python: https://stackoverflow.com/a/37861832/445131 Below I explain it in terms of R code.

RMSE (root mean squared error), MSE (mean squared error), and RMS (root mean square) are all mathematical tricks for getting a feel for the change over time between two lists of numbers.

RMSE provides a single number that answers the question: "How similar, on average, are the numbers in list1 to list2?". The two lists must be the same size. The idea is to "wash out the noise between any two given elements, wash out the size of the data collected, and get a single-number feel for change over time".

Intuition and ELI5 for RMSE:

Imagine you are learning to throw darts at a dart board. Every day you practice for one hour. You want to figure out if you are getting better or getting worse. So every day you make 10 throws and measure the distance between the bullseye and where your dart hit.

You make a list of those numbers. Use the root mean squared error between the distances at day 1 and a list containing all zeros. Do the same on the 2nd and nth days. What you will get is a single number that hopefully decreases over time. When your RMSE number is zero, you hit bullseyes every time. If the number goes up, you are getting worse.
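That day-by-day comparison against a list of zeros can be sketched like this (the distances below are made-up numbers):

```r
# RMSE of a day's dart distances against a target of all zeros
rmse_vs_zero <- function(distances) sqrt(mean(distances^2))

day1  <- c(12, 8, 15, 10, 9, 14, 11, 13, 10, 12)  # cm from bullseye, day 1
day30 <- c(3, 1, 4, 2, 2, 5, 1, 3, 2, 4)          # cm from bullseye, day 30

rmse_vs_zero(day1)   # larger number: far from the bullseye
rmse_vs_zero(day30)  # smaller number: you are getting better
```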

Example in calculating root mean squared error in R:

cat("Inputs are:\n") 
d = c(0.000, 0.166, 0.333) 
p = c(0.000, 0.254, 0.998) 
cat("d is: ", toString(d), "\n") 
cat("p is: ", toString(p), "\n") 

rmse = function(predictions, targets){ 
  cat("===RMSE readout of intermediate steps:===\n") 
  cat("the errors: (predictions - targets) is: ", 
      toString(predictions - targets), '\n') 
  cat("the squares: (predictions - targets) ** 2 is: ", 
      toString((predictions - targets) ** 2), '\n') 
  cat("the means: (mean((predictions - targets) ** 2)) is: ", 
      toString(mean((predictions - targets) ** 2)), '\n') 
  cat("the square root: (sqrt(mean((predictions - targets) ** 2))) is: ", 
      toString(sqrt(mean((predictions - targets) ** 2))), '\n') 
  return(sqrt(mean((predictions - targets) ** 2))) 
} 
cat("final answer rmse: ", rmse(d, p), "\n") 

Which prints:

Inputs are:
d is:  0, 0.166, 0.333 
p is:  0, 0.254, 0.998 
===RMSE readout of intermediate steps:===
the errors: (predictions - targets) is:  0, -0.088, -0.665 
the squares: (predictions - targets) ** 2 is:  0, 0.007744, 0.442225 
the means: (mean((predictions - targets) ** 2)) is:  0.149989666666667 
the square root: (sqrt(mean((predictions - targets) ** 2))) is:  0.387284994115014 
final answer rmse:  0.387285 

The mathematical notation:

RMSE = sqrt( (1/n) * Σ (prediction_i − target_i)² )

RMSE isn't the most accurate line fitting strategy, total least squares is:

Root mean squared error measures the vertical distance between the point and the line, so if your data is shaped like a banana, flat near the bottom and steep near the top, then RMSE will report greater distances to points near the top but shorter distances to points near the bottom, when in fact the distances are equivalent. This causes a skew where the line prefers to be closer to high points than to low points.

If this is a problem, the total least squares method fixes it: https://mubaris.com/posts/linear-regression/
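A minimal sketch of total least squares in base R, using the first principal component (my choice of construction here, not taken from the linked post):

```r
# Total least squares fit of y on x via the first principal component.
# Unlike ordinary least squares, this minimizes perpendicular distances
# to the line rather than vertical distances.
tls_fit <- function(x, y) {
  pc <- prcomp(cbind(x, y))  # centers the data, finds the principal axes
  slope <- pc$rotation[2, 1] / pc$rotation[1, 1]
  intercept <- mean(y) - slope * mean(x)
  c(intercept = intercept, slope = slope)
}

x <- 1:10
y <- 2 * x + 1
tls_fit(x, y)  # recovers intercept 1, slope 2 on noiseless data
```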

Gotchas that can break this RMSE function:

If there are nulls or infinities in either input list, then the output RMSE value is not going to make sense. There are three strategies to deal with nulls / missing values / infinities in either list: ignore that component, zero it out, or add a best guess or uniform random noise to all timesteps. Each remedy has its pros and cons depending on what your data means. In general, ignoring any component with a missing value is preferred, but this biases the RMSE toward zero, making you think performance has improved when it really hasn't. Adding random noise on top of a best guess could be preferred if there are lots of missing values.

In order to guarantee relative correctness of the RMSE output, you must eliminate all nulls/infinites from the input.
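One way to implement the "ignore that component" strategy is to drop pairs where either input is NA or non-finite before computing RMSE (a sketch, not the only option):

```r
# RMSE over only the pairs where both inputs are finite numbers.
rmse_pairwise <- function(predictions, targets) {
  keep <- is.finite(predictions) & is.finite(targets)  # drops NA, NaN, Inf
  if (!any(keep)) return(NA_real_)
  sqrt(mean((predictions[keep] - targets[keep])^2))
}

rmse_pairwise(c(1, NA, 3, Inf), c(1, 2, 4, 4))  # uses only positions 1 and 3
```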

RMSE has zero tolerance for outlier data points which don't belong

Root mean squared error relies on all data being correct, and all points are counted as equal. That means one stray point that's way out in left field is going to totally ruin the whole calculation. To handle outlier data points and dismiss their tremendous influence after a certain threshold, see robust estimators, which build in a threshold for dismissal of outliers.
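A quick demonstration of how a single outlier dominates RMSE, while MAE is affected far less in relative terms (illustrative numbers):

```r
rmse <- function(e) sqrt(mean(e^2))
mae  <- function(e) mean(abs(e))

errors         <- c(0.1, -0.2, 0.1, 0.0, -0.1)
errors_outlier <- c(errors, 50)  # one stray point way out in left field

rmse(errors); rmse(errors_outlier)  # RMSE explodes
mae(errors);  mae(errors_outlier)   # MAE grows much less, relatively
```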

You can either write your own function or use the package hydroGOF, which also has an RMSE function. http://www.rforge.net/doc/packages/hydroGOF/rmse.html

Regarding your y_pred: you first need a model that produced those predictions, otherwise why would you want to calculate RMSE?

You can also use library(mltools) in R, which has the method

rmse(preds = NULL, actuals = NULL, weights = 1, na.rm = FALSE)

Reference: http://search.r-project.org/library/mltools/html/rmse.html

You could also use summary() for your linear model:

mod <- lm(dependent ~ independent, data)

then:

mod.error <- summary(mod)
mod.error$sigma

Note that sigma is the residual standard error, which divides the sum of squared residuals by the residual degrees of freedom (n − p) rather than by n, so it is slightly larger than the RMSE of the residuals.
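To see how sigma relates to the raw RMSE of the residuals, using the built-in swiss dataset:

```r
mod <- lm(Fertility ~ ., data = swiss)
e <- residuals(mod)
n <- length(e)
p <- length(coef(mod))  # number of estimated coefficients

rmse  <- sqrt(mean(e^2))     # divides squared residuals by n
sigma <- summary(mod)$sigma  # divides by n - p (residual degrees of freedom)

all.equal(sigma, rmse * sqrt(n / (n - p)))  # TRUE
```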
