I'm using Postgres as the database in my application.
I have a jsonb column in which I store JSON data like this:
{
    "id": "manohar",
    "array": [
        {
            "status": "active",
            "date": "13/12/2022"
        },
        {
            "status": "InActive",
            "date": "13/12/2021"
        }
    ]
}
Every time I want to select a field based on the maximum date inside the array, I have to write a subquery. The problem is that each JSON object has hundreds of fields, so I have to write a separate subquery for each field, and the resulting query performs poorly.
Here is the query I'm using:
select
    (select sub.item ->> 'status'
     from tbl_name d
     cross join lateral (
         select e.item
         from jsonb_array_elements(d.column_name -> 'array') as e(item)
         -- dates are stored as DD/MM/YYYY text, so parse them explicitly
         where to_date(e.item ->> 'date', 'DD/MM/YYYY') <= current_date
         order by to_date(e.item ->> 'date', 'DD/MM/YYYY') desc
         limit 1
     ) sub
     where d.key = t.primaryKey) as status
    -- ...and I have to repeat a subquery like this for each of the hundreds of fields
from tbl_name t
where t.id = '3'
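For reference, the per-field repetition comes from re-deriving the latest array element once per field. A single lateral join per row can pick the latest element once and expose it whole, so each field becomes a plain `->>` extraction. This is just a sketch reusing the table and column names above (`tbl_name`, `column_name`), which are placeholders from the question:

```sql
select latest.item ->> 'status' as status,
       latest.item ->> 'date'   as status_date
       -- ...one ->> extraction per field, instead of one subquery per field
from tbl_name t
cross join lateral (
    select e.item
    from jsonb_array_elements(t.column_name -> 'array') as e(item)
    -- dates are stored as DD/MM/YYYY text, so parse them explicitly
    where to_date(e.item ->> 'date', 'DD/MM/YYYY') <= current_date
    order by to_date(e.item ->> 'date', 'DD/MM/YYYY') desc
    limit 1
) latest
where t.id = '3';
```

Because the lateral subquery runs once per outer row, all hundred fields share the same `limit 1` scan instead of each repeating it.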
I don't want to use views for this requirement.
Any better approach or suggestions would be helpful.
Thanks.
You do have to use a lateral join
to access the JSON. But storing and querying JSON in the database is not one of the best practices in SQL.
In your case, I can only advise you to change your data model. Storing the JSON itself is one thing; you can keep it in a text/blob
column, that should be fine. But at the same time you write it, you should also explode its content into tables/columns/rows. You can then access its content much more easily.
The idea is the following.
This is just an example of what you could do; you probably need to find names that reflect the business meaning of each of your objects.
table "id"

| ID | NAME    |
|----|---------|
| 1  | manohar |

table "id_status"

| id | status   | date       |
|----|----------|------------|
| 1  | InActive | 13/12/2021 |
| 1  | Active   | 13/12/2022 |
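A minimal sketch of that model in Postgres, using the hypothetical names from the tables above (rename them to match your business objects). Once the rows are exploded, the "latest status" lookup is an ordinary query with no JSON functions involved:

```sql
create table id (
    id   serial primary key,
    name text not null
);

create table id_status (
    id     integer references id(id),
    status text not null,
    date   date not null
);

insert into id (name) values ('manohar');
insert into id_status (id, status, date) values
    (1, 'InActive', date '2021-12-13'),
    (1, 'Active',   date '2022-12-13');

-- latest status per id, without touching any JSON
select distinct on (id) id, status, date
from id_status
where date <= current_date
order by id, date desc;
```

With a real `date` column you also get a sortable, indexable type instead of DD/MM/YYYY text, and an index on `(id, date desc)` would make the lookup cheap.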