Say that I have a table with columns like this:

user_id | username | updated_at | data...

user_id and username are not unique in the set, so you can have something like this:
user_id | username | updated_at | data...
------------------------------------------
1       | test     | 140****** | ...
4       | test2    | 140****** | ...
1       | test     | 139****** | ...
7       | meh      | 140****** | ...
But I would like to remove the duplicate occurrences. I tried GROUP BY, but it gives me something unexpected: a lot of rows are removed (I guess they appear later in the set, since the query has a LIMIT in it).
If you need to select all the data, first of all you should decide how to handle the updated_at and data columns. If you want the data concatenated and the latest updated_at, you can do:
SELECT user_id, username, MAX(updated_at), GROUP_CONCAT(data SEPARATOR ',')
FROM table_name
GROUP BY user_id, username
ORDER BY user_id, username
LIMIT X
In that case your result will be ordered by user_id and username.
Note: it is not clear from your question whether you want to remove the duplicates from the table itself or only from the result set.
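If you do want to delete the older duplicates from the table itself, one common MySQL pattern is a self-join delete that keeps only the row with the latest updated_at per (user_id, username). This is a sketch assuming your table is called table_name, as above; note that rows tied on updated_at would both survive:

```sql
-- Delete every row for which a newer row with the same
-- (user_id, username) exists, keeping only the latest one.
DELETE t1
FROM table_name t1
JOIN table_name t2
  ON  t1.user_id    = t2.user_id
  AND t1.username   = t2.username
  AND t1.updated_at < t2.updated_at;
```

Run it on a backup first, since a multi-table DELETE like this cannot be limited or previewed directly.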
Is this what you want?
SELECT DISTINCT user_id, username
FROM table_name t;
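If you need the other columns from the latest row as well, not just the distinct key pair, a window function can rank the rows per key. This is a sketch for MySQL 8+ (ROW_NUMBER is not available in older versions), again assuming the table is called table_name:

```sql
-- Pick the full latest row per (user_id, username):
-- rank rows by updated_at descending within each key,
-- then keep only the first-ranked row of each group.
SELECT user_id, username, updated_at, data
FROM (
  SELECT t.*,
         ROW_NUMBER() OVER (PARTITION BY user_id, username
                            ORDER BY updated_at DESC) AS rn
  FROM table_name t
) ranked
WHERE rn = 1;
```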