shadestreet
I have about 4,500 different SKUs listed in rows in a file. (An "SKU"
stands for Stock Keeping Unit, and is basically a numeric code that
identifies a product, FYI.)
Each of these SKUs has several characteristics listed in columns, such
as cases per pallet, weight per case, dollars per unit, etc.
It turns out that some of these SKU numbers are identical, so I want to
merge any identical rows together. Basically, I need to delete each
duplicate row except for its value in column G, which is the number of
units sold; that value needs to be added to column G of the original
row.
The only solution I can think of is to use Subtotals, set so that at
each change in SKU it adds the units sold. Then I could find any
subtotal that covers more than one row, manually fix the value in
column G, and delete the duplicate row. After doing this for all of
them, I could re-sort to remove the subtotals. This solution is
tedious. Can anyone think of a faster and more efficient solution?
Thanks
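One possible non-manual approach, if the sheet can be exported or loaded outside Excel: group the rows by SKU, sum column G, and keep the first value of every other column. A minimal sketch in Python with pandas is below; the column names (`SKU`, `CasesPerPallet`, `UnitsSold`) and the sample values are hypothetical stand-ins for the real file.

```python
import pandas as pd

# Hypothetical stand-in for the ~4,500-row SKU file.
# "UnitsSold" plays the role of column G, the only value to be summed.
df = pd.DataFrame({
    "SKU": [1001, 1002, 1001, 1003],
    "CasesPerPallet": [40, 60, 40, 24],
    "UnitsSold": [100, 250, 50, 75],
})

# Sum UnitsSold per SKU; keep the first occurrence of every other column,
# which mirrors "delete the duplicate row except for column G".
agg = {col: "first" for col in df.columns if col not in ("SKU", "UnitsSold")}
agg["UnitsSold"] = "sum"
merged = df.groupby("SKU", as_index=False).agg(agg)
```

Staying inside Excel, a similar effect can be had with a helper column of `SUMIF(SKU_range, SKU, G_range)` followed by removing duplicate SKU rows, which avoids the manual subtotal cleanup.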