
Grep a file containing strings from another file

I have two files, X1 and X2, on a Linux PC. X1 contains 10000 unique ids (like 1001, 5287, 6589). Each line of X2 may contain one of these ids at a fixed location, say positions 125 to 128. I want to select the lines of X2 that contain an id from X1 at positions 125 to 128 (and not at any other position) and write them to a third file, X3. An awk command or a Perl script would both be fine.

grep can read its search patterns from a file (the -f option), so you can use this:

grep -f X1 X2 > X3

To anchor the match at position 125, prefix each pattern with an expression that matches the first 124 characters, i.e. ^.{124}. You can use sed, for example, to create the modified pattern file:

sed -e 's/^/^.{124}/' X1 > X1.patterns
grep -E -f X1.patterns X2 > X3
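Since the question explicitly asks for awk, here is a sketch of the equivalent awk approach: load the ids from X1 into an array, then print only the X2 lines whose characters at positions 125-128 form one of those ids (NR==FNR is true only while the first file is being read).

```shell
# Keep only X2 lines whose columns 125-128 hold an id listed in X1.
awk 'NR==FNR { ids[$1]; next } substr($0, 125, 4) in ids' X1 X2 > X3
```

Unlike the grep -f version, this reads each file only once and compares exactly the 4-character slice, so an id appearing elsewhere on the line cannot cause a false match.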
for id in $(cat X1); do grep "$id" X2; done > X3
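Note that this loop matches each id anywhere on the line and rescans X2 once per id, so it is slow for 10000 ids. A sketch that keeps the per-id loop but anchors the match to columns 125-128, using the same ^.{124} idea as above:

```shell
# Per-id loop, anchored so the id must start at column 125.
while IFS= read -r id; do
    grep -E "^.{124}${id}" X2
done < X1 > X3
```

Output is grouped by id rather than in X2's original line order, which may or may not matter for your use case.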
