Problem description
I have a List<Map> data structure built as follows:
List<Map<String, Object>> list = new ArrayList<>();
Map<String, Object> map1 = new HashMap<>();
map1.put("order_no", "123");
map1.put("quantity", 10);
map1.put("amount", 100);
Map<String, Object> map2 = new HashMap<>();
map2.put("order_no", "223");
map2.put("quantity", 15);
map2.put("amount", 150);
Map<String, Object> map3 = new HashMap<>();
map3.put("order_no", "123");
map3.put("quantity", 5);
map3.put("amount", 50);
Map<String, Object> map4 = new HashMap<>();
map4.put("order_no", "124");
map4.put("quantity", 6);
map4.put("amount", 60);
Map<String, Object> map5 = new HashMap<>();
map5.put("order_no", "223");
map5.put("quantity", 7);
map5.put("amount", 70);
list.add(map1);
list.add(map2);
list.add(map3);
list.add(map4);
list.add(map5);
The requirement is to determine whether the list contains entries that are duplicates by the value of the Map key order_no, and to extract those duplicate items. For the example above, we should end up with the two duplicated order groups, order_no=123 and order_no=223. My current way of writing it is:
// list2 is a full copy of list, used only for the inner occurrence count
List<Map<String, Object>> list2 = new ArrayList<>();
list2.addAll(list);
List<Map<String, Object>> collect = list.stream()
        .filter(x -> {
            // count how many times this order_no occurs in the copy
            long count = list2.stream()
                    .filter(x2 -> x2.get("order_no").equals(x.get("order_no")))
                    .count();
            return count > 1; // keep only duplicated order_no values
        })
        .collect(Collectors.groupingBy(x -> x.get("order_no")))
        .entrySet().stream()
        .map(x -> {
            Map<String, Object> tmp = new HashMap<>();
            tmp.put("key_order", x.getKey());
            tmp.put("order_list", x.getValue());
            return tmp;
        })
        .collect(Collectors.toList());
Although this works, with tens of thousands of orders or more the nested count makes it O(n²), and copying the whole list into list2 just to drive that count is crude and inefficient. Is there a more concise, efficient, and elegant way to achieve this?
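For reference, one direction worth considering: the O(n²) pre-filter and the list2 copy can both be dropped by grouping first and then keeping only groups of size > 1, which is a single O(n) pass over the data. A sketch under that idea (the class name DedupOrders and the helper methods findDuplicates/sample are mine, not from the original code):

```java
import java.util.*;
import java.util.stream.*;

// Hypothetical helper class; the stream pipeline in findDuplicates is the point.
public class DedupOrders {

    // Single pass: group by order_no, then keep only groups with more than one entry.
    public static List<Map<String, Object>> findDuplicates(List<Map<String, Object>> list) {
        return list.stream()
                .collect(Collectors.groupingBy(x -> x.get("order_no")))
                .entrySet().stream()
                .filter(e -> e.getValue().size() > 1) // duplicates only
                .map(e -> {
                    Map<String, Object> tmp = new HashMap<>();
                    tmp.put("key_order", e.getKey());
                    tmp.put("order_list", e.getValue());
                    return tmp;
                })
                .collect(Collectors.toList());
    }

    // Sample data matching the question's five maps.
    public static List<Map<String, Object>> sample() {
        List<Map<String, Object>> list = new ArrayList<>();
        String[][] rows = {
                {"123", "10", "100"}, {"223", "15", "150"}, {"123", "5", "50"},
                {"124", "6", "60"}, {"223", "7", "70"}
        };
        for (String[] r : rows) {
            Map<String, Object> m = new HashMap<>();
            m.put("order_no", r[0]);
            m.put("quantity", Integer.valueOf(r[1]));
            m.put("amount", Integer.valueOf(r[2]));
            list.add(m);
        }
        return list;
    }

    public static void main(String[] args) {
        // prints 2: the duplicated groups for order_no 123 and 223
        System.out.println(findDuplicates(sample()).size());
    }
}
```

Since filtering happens on the grouped map rather than the raw list, each element is visited once for grouping and each group once for the size check, so no second copy of the list is needed.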