I'm not sure if this is the best approach, but here's my code so far. Currently it keeps the first duplicate and deletes the others from the table. I want it to keep the row with the largest OrderId and delete the rest. I've tried Take instead of Skip but can't seem to get it working properly.
var duplicateRow = (from o in db.Orders
                    group o by new { o.CustomerId } into results
                    select results.Skip(1)
                   ).SelectMany(a => a);
db.Orders.DeleteAllOnSubmit(duplicateRow);
db.SubmitChanges();
Since you don't use OrderBy, which rows get skipped is arbitrary.

I want it to keep the last row with the largest 'OrderId' number

Then use:
var duplicateRows = db.Orders
    .GroupBy(x => x.CustomerId)
    .SelectMany(g => g.OrderByDescending(o => o.OrderId).Skip(1));
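To see which rows that query selects, here is a small self-contained sketch against in-memory data (the tuple values are made up for illustration; with LINQ to SQL the same query shape runs against db.Orders instead):

```csharp
using System;
using System.Linq;

public class Program
{
    public static void Main()
    {
        // (CustomerId, OrderId) pairs; customer 1 has two orders.
        var data = new[]
        {
            (CustomerId: 1, OrderId: 1),
            (CustomerId: 2, OrderId: 1),
            (CustomerId: 3, OrderId: 1),
            (CustomerId: 1, OrderId: 2)
        };

        // Within each customer group, sort by OrderId descending and
        // skip the first (largest) row; what remains are the duplicates.
        var duplicates = data
            .GroupBy(x => x.CustomerId)
            .SelectMany(g => g.OrderByDescending(o => o.OrderId).Skip(1));

        foreach (var d in duplicates)
            Console.WriteLine(d); // only customer 1's OrderId 1 row remains
    }
}
```

Only the (1, 1) row is printed: the largest OrderId per customer survives the Skip(1), so deleting the returned rows keeps exactly the row you want.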
You can use Reverse():
using System;
using System.Linq;

public class Program
{
    public class O
    {
        public int CustomerId;
        public int OrderId;

        public O(int c, int o)
        {
            CustomerId = c;
            OrderId = o;
        }

        public override string ToString()
        {
            return string.Format("(C_ID: {0}, O_ID:{1})", CustomerId, OrderId);
        }
    }

    public static void Main()
    {
        var data = new O[] { new O(1,1), new O(2,1), new O(3,1), new O(1,2) };

        var duplicateRow = (from o in data
                            group o by new { o.CustomerId } into results
                            select results.Reverse().Take(1) // reverse, then take the first
                           ).SelectMany(a => a);

        foreach (var o in duplicateRow)
            Console.WriteLine(o);
        Console.ReadLine();
    }
}
Output:
(C_ID: 1, O_ID:2)
(C_ID: 2, O_ID:1)
(C_ID: 3, O_ID:1)
This works because GroupBy preserves the order of elements within each group (it is stable); if your input can contain a larger OrderId earlier in the sequence, you need an OrderBy as well.