I am very new to bash scripting. I attempted to write a script that merges several JSON files. For example:
File 1:
{
    "file1": {
        "foo": "bar"
    }
}
File 2:
{
    "file2": {
        "lorem": "ipsum"
    }
}
Merged File:
{
    "file1": {
        "foo": "bar"
    },
    "file2": {
        "lorem": "ipsum"
    }
}
This is what I came up with:
awk 'BEGIN{print "{"} FNR > 1 && last_file == FILENAME {print line} FNR == 1 {line = ""} FNR==1 && FNR != NR {printf ","} FNR > 1 {line = $0} {last_file = FILENAME} END{print "}"}' json_files/* > json_files/all_merged.json
It works but I feel there is a better way of doing this. Any ideas?
Handling JSON with awk is not a good idea: arbitrary changes in semantically meaningless whitespace will break your code. Instead, use jq; it is made for exactly this sort of thing. To combine two objects, use the * operator. For two files:
jq -s '.[0] * .[1]' file1.json file2.json
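For instance, recreating the question's two sample files (the /tmp/jq_demo paths here are just illustrative) and merging them:

```shell
# Recreate the two sample files from the question (paths are illustrative)
mkdir -p /tmp/jq_demo
printf '{"file1": {"foo": "bar"}}\n' > /tmp/jq_demo/file1.json
printf '{"file2": {"lorem": "ipsum"}}\n' > /tmp/jq_demo/file2.json

# -c prints compact single-line output; drop it for pretty-printed JSON
jq -sc '.[0] * .[1]' /tmp/jq_demo/file1.json /tmp/jq_demo/file2.json
# → {"file1":{"foo":"bar"},"file2":{"lorem":"ipsum"}}
```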
And for arbitrarily many files, use reduce to apply it sequentially to all of them:
jq -s 'reduce .[] as $item ({}; . * $item)' json_files/*
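Note that * performs a recursive (deep) merge, so files that share a top-level key are combined key by key rather than one overwriting the other. A quick sketch of the difference from the shallow + operator:

```shell
# * deep-merges nested objects key by key;
# + would replace the whole value on a top-level key collision
echo '{"cfg":{"x":1}} {"cfg":{"y":2}}' | jq -sc '.[0] * .[1]'
# → {"cfg":{"x":1,"y":2}}
echo '{"cfg":{"x":1}} {"cfg":{"y":2}}' | jq -sc '.[0] + .[1]'
# → {"cfg":{"y":2}}
```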
The -s (--slurp) switch makes jq read the contents of all the JSON files into one large array before handling them.
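If the files are large, you can avoid building that array by letting jq read the files one at a time with inputs (an equivalent variant, assuming jq 1.5 or later, which introduced inputs):

```shell
# -n starts with no input; `inputs` then yields each file's object in turn,
# so only one document is held per reduce step instead of slurping them all
jq -n 'reduce inputs as $item ({}; . * $item)' json_files/*
```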