
Finding the newest file in a directory

I need to find the newest file in the directory.
The line that I'm currently using is:

cd /mypath/ && ls -t | head -n1

The problem is that I'm constantly creating files, and sometimes two files are created at the "same" time, meaning that when I execute the line above I'll miss one of them.

Can somebody please help?
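The collision is easy to reproduce by giving two files identical timestamps (a throwaway temp directory, purely for illustration):

```shell
#!/bin/bash
# Reproduce the problem: two files share one mtime, but ls -t | head -n1
# can only ever report a single name. (Temp directory for illustration only.)
dir=$(mktemp -d)
touch -d '2024-01-01 12:00:00' "$dir/a.log" "$dir/b.log"  # identical timestamps
newest=$(cd "$dir" && ls -t | head -n1)
echo "$newest"   # one of a.log / b.log -- the other is silently missed
```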

Sorry for the original mix-up. Here is a short script that finds the newest file in a directory (given as the first argument). It uses stat -c %Y to get that file's modification time in seconds since the epoch, then compares it against every other file in the directory; if the time difference (td) is less than 10 seconds, the filename is output (you can adjust the threshold to meet your needs).

Give this a try:

#!/bin/bash

path="${1:-.}"  ## set path (default .)

[ "${path:(-1):1}" = '/' ] && path="${path%/}"  ## strip trailing '/'

oldifs="$IFS"   ## save old IFS, set to break on newline
IFS=$'\n'
fnarray=( $(ls -rt "$path" | tail -n5) )        ## read newest 5 into array
IFS="$oldifs"

for ((i=$((${#fnarray[@]} - 1)); i >= 0; i--)); do  ## make sure newest is file
    [ -f  "${path}/${fnarray[i]}" ] && fn="${path}/${fnarray[i]}" && break
done
[ -d "$fn" ] && { echo "error: no file found in newest 5 entries"; exit 1; }

for i in "${path}"/*; do                ## for each file in path
    [ -f "$i" ] || continue             ## if not a file skip
    td=$(( $(stat -c %Y "$i") - $(stat -c %Y "$fn") ))  ## time difference in sec
    (( td < 0 )) && td=$(( td * -1 ))   ## takes abs value
    (( td < 10 )) && echo "$i"          ## if less than 10 seconds, output file
done

exit 0

Note: the original issue I had was that the newest entry in the directory could be a sub-directory (which was not wanted). This was awkward to handle because of the way ls .. | tail .. returns results, so taking the last 5 entries is basically a hack. (Properly, you would start with an initial try of 5 and, if fn is still a directory, double the count and try again until you have exhausted all entries in the directory, or run a simple find /path/ -type f first to ensure a file is present.) Sigh, but doing that would have completely obscured the newest-file test in the code.

The other thing I don't like about this solution is that calling stat on every file is not fast in bash, so it won't be lightning quick on large directories (it is liveable with fewer than ~1000 entries). Returning all of the latest files that share the exact same timestamp is a good challenge with no quick off-the-shelf solution that I could find.
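One way around the per-file stat cost (a sketch assuming GNU find, not part of the answer above) is a single find pass that prints every epoch mtime up front and leaves the 10-second filter to awk:

```shell
#!/bin/bash
# Sketch, assuming GNU find: print each regular file's epoch mtime once,
# sort by time, and let awk keep every file within 10 seconds of the newest.
# Filenames containing newlines are not handled.
newest_group() {
    find "$1" -maxdepth 1 -type f -printf '%T@ %p\n' |
        sort -n |
        awk '{ t[NR] = $1; sub(/^[^ ]+ /, ""); f[NR] = $0 }
             END { for (i = 1; i <= NR; i++) if (t[NR] - t[i] < 10) print f[i] }'
}

# Example: two files share the newest mtime; a third is an hour older.
d=$(mktemp -d)
touch -d '2024-01-01 12:00:00' "$d/a" "$d/b"
touch -d '2024-01-01 11:00:00' "$d/c"
newest_group "$d"   # prints $d/a and $d/b; $d/c is filtered out
```

This forks stat zero times per file, so it stays fast even on directories with thousands of entries.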


full script

A full implementation that iteratively tests the entries in the target directory, working backwards from the newest, to find the first regular file (not a directory) to compare times against would look something like the following. Note: the code has been tested, but there may still be corner cases that need fixing:

#!/bin/bash

path="${1:-.}"          ## set path (default .)

[ "${path:(-1):1}" = '/' ] && path="${path%/}"  ## strip trailing '/'

[ -d "$path" ] || {     ## validate path
    echo "error: invalid path provided. '$path' not found."
    exit 1;
}

declare -i n=5      ## initial entries to read into array if fn is directory
declare -i oldn=1   ## save index for increased loop efficiency

## get number of files in target path
nfiles=$(wc -l < <( find "$path" -maxdepth 1 -type f ) )

(( $nfiles < 1 )) && {  ## test files present in path or exit
    echo "error: no files found in target directory"
    exit 1
}

oldifs="$IFS"   ## save old IFS, set to break on newline
IFS=$'\n'

fn="${path}/$(ls -rt "$path" | tail -n1)"        ## find most recent file

while [ -d "$fn" ]; do                           ## while fn is a directory
    fnarray=( $(ls -rt "$path" | tail -n$n) )    ## read newest n entries into array
    for ((i=$((${#fnarray[@]} - oldn)); i >= 0; i--)); do  ## make sure newest is file
        [ -f  "${path}/${fnarray[i]}" ] && fn="${path}/${fnarray[i]}" && break
    done
    (( n == nfiles )) && break                  ## if all entries tried, bail
    oldn=$n                                     ## update oldn to skip next try
    [ -f "$fn" ] && break || n=$(( n * 2 ))     ## if file, done, else n=n*2
    (( n > nfiles )) && n=nfiles                ## if n > nfiles, n=nfiles
done
[ -d "$fn" ] && {
    echo "error: something bad happened, no files found in target directory"
    exit 1
}
IFS="$oldifs"

for i in "${path}"/*; do                ## for each file in path
    [ -f "$i" ] || continue             ## if not a file skip
    td=$(( $(stat -c %Y "$fn") - $(stat -c %Y "$i") ))  ## time difference in sec
    (( td < 0 )) && td=$(( td * -1 ))   ## takes abs value
    (( td < 10 )) && echo "$i"          ## td less than X seconds, output filename
done

exit 0
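As an aside, the whole directory-skipping loop can be avoided in pure bash with the -nt (newer-than) test; this is a sketch, not the script above, and it simply walks every entry once, ignoring anything that isn't a regular file:

```shell
#!/bin/bash
# Sketch (not the script above): find the newest regular file with bash's
# [ -nt ] newer-than test; directories are skipped, so no retry loop is needed.
newest_file() {
    local i fn=
    for i in "$1"/*; do
        [ -f "$i" ] || continue                      ## skip non-files
        if [ -z "$fn" ] || [ "$i" -nt "$fn" ]; then  ## first file, or newer
            fn="$i"
        fi
    done
    [ -n "$fn" ] && printf '%s\n' "$fn"
}

# Example: the newest entry is a directory, but it is ignored.
d=$(mktemp -d)
touch -d '2024-01-01 11:00:00' "$d/old"
touch -d '2024-01-01 12:00:00' "$d/new"
mkdir "$d/ignored.d"        # newest entry of all, but a directory
newest_file "$d"            # prints $d/new
```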

Somehow this seems like overkill...

Seems you want not just the newest file: you want to report each new file exactly once. The following script does that by keeping a $reftime file that tracks what it reported last time:

cd "$1"
reftime=/tmp/newest-file-name

next_file=
prev_file=$(cat "$reftime" 2> /dev/null)
while true; do
  read -r filename
  if [[ "$prev_file" ]]; then
    [[ "$filename" = "$prev_file" ]] && break # we've gotten to reported files
  else
    [[ -z "$filename" ]] && break # we've read past the last file
  fi
  next_file="$filename"
done < <(ls -t)

if [[ "$next_file" ]]; then
    # Update file $reftime with time and name of the file we're about to offer
    echo "$next_file" > "$reftime"
    touch -r "$next_file" "$reftime"

    echo "$next_file"
fi
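To see the once-only behaviour, here is the snippet copied verbatim into a temp helper script, except that $reftime is taken as a second argument (an assumption for the demo only, so it doesn't touch the real /tmp/newest-file-name); repeated runs report each file exactly once:

```shell
#!/bin/bash
# Demo of the reftime approach: the answer's snippet, copied to a temp file
# (with $reftime taken as a second argument so the demo is self-contained),
# reports each newly created file exactly once across repeated runs.
helper=$(mktemp)
cat > "$helper" <<'EOF'
cd "$1" || exit 1
reftime="$2"
next_file=
prev_file=$(cat "$reftime" 2> /dev/null)
while true; do
  read -r filename
  if [[ "$prev_file" ]]; then
    [[ "$filename" = "$prev_file" ]] && break # we've gotten to reported files
  else
    [[ -z "$filename" ]] && break # we've read past the last file
  fi
  next_file="$filename"
done < <(ls -t)
if [[ "$next_file" ]]; then
  echo "$next_file" > "$reftime"
  touch -r "$next_file" "$reftime"
  echo "$next_file"
fi
EOF

d=$(mktemp -d); ref=$(mktemp -u)
touch -d '2024-01-01 12:00:00' "$d/first"
bash "$helper" "$d" "$ref"    # prints "first"
touch -d '2024-01-01 12:05:00' "$d/second"
bash "$helper" "$d" "$ref"    # prints "second"
bash "$helper" "$d" "$ref"    # prints nothing: "second" was already reported
```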
