Floats have their own min and max methods that handle NaN consistently (a NaN operand is ignored in favor of the other argument), so you can fold over the iterator:
use std::f64;

fn main() {
    let x = [2.0, 1.0, -10.0, 5.0, f64::NAN];
    let min = x.iter().fold(f64::INFINITY, |a, &b| a.min(b));
    println!("{}", min);
}
This prints -10.
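This works because f64::min returns the non-NaN operand when exactly one side is NaN; a quick check of that behavior:

```rust
fn main() {
    // f64::min returns the non-NaN operand when exactly one side is NaN
    assert_eq!(f64::NAN.min(1.0), 1.0);
    assert_eq!(1.0_f64.min(f64::NAN), 1.0);
    // only when both sides are NaN does the result stay NaN
    assert!(f64::NAN.min(f64::NAN).is_nan());
}
```

That is why folding from f64::INFINITY skips any NaN in the data rather than returning it.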
If you want different NaN handling, you can use PartialOrd::partial_cmp. For example, to propagate NaNs, fold with:
use std::f64;
use std::cmp::Ordering;

fn main() {
    let x = [2.0, 1.0, -10.0, 5.0, f64::NAN];
    let min = x.iter().fold(f64::INFINITY, |a, &b| {
        match PartialOrd::partial_cmp(&a, &b) {
            None => f64::NAN,
            Some(Ordering::Less) => a,
            Some(_) => b,
        }
    });
    println!("{}", min);
}
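The same fold pattern extends naturally to computing min and max in one pass by threading a tuple through the accumulator (a sketch, not part of the original answer):

```rust
fn main() {
    let x = [2.0, 1.0, -10.0, 5.0];
    // carry (running minimum, running maximum) through the fold
    let (min, max) = x
        .iter()
        .fold((f64::INFINITY, f64::NEG_INFINITY), |(lo, hi), &b| {
            (lo.min(b), hi.max(b))
        });
    println!("min = {min}, max = {max}"); // min = -10, max = 5
}
```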
If you know your data does not contain NaNs, then assert that fact by unwrapping the comparison:
fn example(x: &[f64]) -> Option<f64> {
    x.iter()
        .cloned()
        .min_by(|a, b| a.partial_cmp(b).expect("Tried to compare a NaN"))
}
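Exercising that function (repeated here so the snippet compiles on its own): it returns the minimum for clean data, None for an empty slice, and panics loudly if a NaN sneaks in:

```rust
fn example(x: &[f64]) -> Option<f64> {
    x.iter()
        .cloned()
        .min_by(|a, b| a.partial_cmp(b).expect("Tried to compare a NaN"))
}

fn main() {
    // NaN-free data: the minimum comes back wrapped in Some
    assert_eq!(example(&[2.0, 1.0, -10.0, 5.0]), Some(-10.0));
    // an empty slice yields None rather than panicking
    assert_eq!(example(&[]), None);
}
```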
If your data may contain NaNs, you need to handle that case specifically. One solution is to declare that all NaN values (16,777,214 distinct bit patterns in an f32, and vastly more in an f64) are equal to each other and always sort greater than, or always less than, every other number:
use std::cmp::Ordering;

fn example(x: &[f64]) -> Option<f64> {
    x.iter()
        .cloned()
        .min_by(|a, b| {
            // all NaNs are greater than regular numbers
            match (a.is_nan(), b.is_nan()) {
                (true, true) => Ordering::Equal,
                (true, false) => Ordering::Greater,
                (false, true) => Ordering::Less,
                _ => a.partial_cmp(b).unwrap(),
            }
        })
}
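Under this ordering the minimum of a mixed slice ignores NaNs, while an all-NaN slice still yields a NaN (a usage sketch, with the function repeated so it is self-contained):

```rust
use std::cmp::Ordering;

fn example(x: &[f64]) -> Option<f64> {
    x.iter().cloned().min_by(|a, b| {
        // all NaNs are greater than regular numbers
        match (a.is_nan(), b.is_nan()) {
            (true, true) => Ordering::Equal,
            (true, false) => Ordering::Greater,
            (false, true) => Ordering::Less,
            _ => a.partial_cmp(b).unwrap(),
        }
    })
}

fn main() {
    // the NaN sorts above everything, so it never wins "minimum"
    assert_eq!(example(&[f64::NAN, 2.0, -10.0]), Some(-10.0));
    // a slice containing only NaNs still produces a value (a NaN)
    assert!(example(&[f64::NAN]).unwrap().is_nan());
}
```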
There are numerous crates available (ordered-float, for example) that can give you whichever semantics your code needs.
You should not use partial_cmp(b).unwrap_or(Ordering::Equal), because it produces unstable results when NaNs are present while leading the reader to believe they are handled:
use std::cmp::Ordering;
use std::f64;

fn example(x: &[f64]) -> Option<f64> {
    x.iter()
        .cloned()
        .min_by(|a, b| a.partial_cmp(b).unwrap_or(Ordering::Equal))
}

fn main() {
    println!("{:?}", example(&[f64::NAN, 1.0]));
    println!("{:?}", example(&[1.0, f64::NAN]));
}
This prints:

Some(NaN)
Some(1.0)

Because min_by keeps the first of two "equal" elements, which value you get depends on where the NaN happens to appear.
A built-in total-ordering comparison method for floats named .total_cmp() is available on nightly, and should be on stable within a couple of months, barring any surprising issues. (The vote to stabilize the feature recently passed.) It implements the total ordering defined in IEEE 754, with every possible f64 bit value sorted distinctly, including positive and negative zero and all of the possible NaNs. Be aware that some NaNs sort above infinity and some sort below negative infinity, so the "maximum" value may be surprising in the presence of NaN, but it will be consistent.
#![feature(total_cmp)]

fn main() {
    let mut a: Vec<f64> = vec![2.0, 2.5, -0.5, 1.0, 1.5];
    // max_by (not max_by_key) takes a comparator like total_cmp
    let maximum = a.iter().copied().max_by(f64::total_cmp).unwrap();
    println!("The maximum value was {maximum}.");
    a.sort_by(f64::total_cmp);
}
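For reference, total_cmp has since been stabilized (in Rust 1.62), so on a recent compiler the feature attribute can be dropped. A few spot checks of the total order it guarantees:

```rust
use std::cmp::Ordering;

fn main() {
    // the standard (positive, quiet) NaN constant sorts above infinity
    assert_eq!(f64::NAN.total_cmp(&f64::INFINITY), Ordering::Greater);
    // negative zero sorts strictly below positive zero
    assert_eq!((-0.0_f64).total_cmp(&0.0), Ordering::Less);
    // and the comparator slots straight into sort_by
    let mut v = vec![2.0, f64::NAN, -0.5];
    v.sort_by(f64::total_cmp);
    assert!(v.last().unwrap().is_nan());
}
```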
Perhaps like this?

use std::cmp::Ordering;

fn main() {
    let mut x = [2.0, 1.0, -10.0, 5.0];
    x.sort_by(|a, b| a.partial_cmp(b).unwrap_or(Ordering::Equal));
    println!("min in x: {:?}", x[0]);
}
One thing I struggled with is that sort_by mutates the vector in place, so you can't use it directly in a method chain.
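One workaround (a sketch, assuming a copy is acceptable) is to wrap the in-place sort in a small helper that returns the sorted vector, which restores chainability:

```rust
use std::cmp::Ordering;

// hypothetical helper: sorts a copy so the caller can keep chaining
fn sorted(x: &[f64]) -> Vec<f64> {
    let mut v = x.to_vec();
    v.sort_by(|a, b| a.partial_cmp(b).unwrap_or(Ordering::Equal));
    v
}

fn main() {
    // the first element of the sorted copy is the minimum
    let min = sorted(&[2.0, 1.0, -10.0, 5.0]).first().copied();
    assert_eq!(min, Some(-10.0));
}
```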