Quicker Counting


ReportSmith

A 2-part question. I have code that will sort a dataset (with multiple
rows and columns) by the data in column "J" (a STATE field).

Then, the code will cycle through the sorted list, and every time the state
changes, blank lines will be entered to pad to the next 1,000th row (so if
'AK' has 700 rows, 300 blank lines will be entered before the 'AL' rows,
which will start on row 1001; if 'AK' has 1100 rows, 900 blank lines will be
entered before the 'AL' rows, which will start on row 2001; etc.).

The code works, but not as fast as I would like to see. Does anyone have
any suggestions for a faster way?

Sub Test()
Cells.Select
Selection.Sort Key1:=Range("J2"), Order1:=xlAscending, Header:=xlYes, _
OrderCustom:=1, MatchCase:=False, Orientation:=xlTopToBottom, _
DataOption1:=xlSortNormal

'go to the State column, row 2
Application.Goto reference:="R2C10"
PrevCell = ActiveCell.Value

RowNum = 1
Do Until PrevCell = ""
'find the next state
Do Until ActiveCell.Value <> PrevCell
Application.Goto reference:="R" & RowNum + 1 & "C10"
RowNum = RowNum + 1
Loop

'find the next 1000th line
NewRow = Application.WorksheetFunction.Ceiling(RowNum, 1000)
'do until the next 1000th line
Do While RowNum <= NewRow
Rows(RowNum).Select
'select the row
Selection.Insert Shift:=xlDown
'insert blank line
RowNum = RowNum + 1 'go to next row
Loop

'move 1 row down and column J
Application.Goto reference:="R" & NewRow + 1 & "C10"
PrevCell = ActiveCell.Value
Loop
End Sub
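
For reference, the WorksheetFunction.Ceiling call above is what implements
the "next 1,000th" rule - e.g., in the Immediate window:

? Application.WorksheetFunction.Ceiling(702, 1000) 'returns 1000 (AK in rows 2-701, AL lands on 1001)
? Application.WorksheetFunction.Ceiling(1102, 1000) 'returns 2000 (AK in rows 2-1101, AL lands on 2001)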

Thanks in advance for any/all suggestions.
 

Don Guillett

You may consider using FIND, starting from the bottom and going up, to get p1,
and then p2 using one cell less. Then count the difference and do your thing.
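
A rough sketch of the FIND idea (untested; the "AK" literal and the sheet
layout are just for illustration). Searching backwards from the top wraps
around to the bottom, so one call returns the last row of a state without
stepping through every cell:

Sub FindLastOfState()
Dim lastCell As Range
'search column J from the bottom up for the last "AK" row
Set lastCell = Columns("J").Find(What:="AK", After:=Range("J1"), _
LookIn:=xlValues, LookAt:=xlWhole, SearchDirection:=xlPrevious)
If Not lastCell Is Nothing Then Debug.Print "last AK row: " & lastCell.Row
End Sub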
 

Jim Cone

I would probably do it differently, but the following should speed up your existing code.
It is untested and the row values may need adjusting up or down by 1.
--
Jim Cone
San Francisco, USA
http://www.realezsites.com/bus/primitivesoftware
(Excel Add-ins / Excel Programming)

'-------------
Sub Test()
Dim PrevCell As Variant
Dim RowNum As Long
Dim NewRow As Long

Cells.Sort Key1:=Range("J2"), Order1:=xlAscending, _
Header:=xlYes, OrderCustom:=1, MatchCase:=False, _
Orientation:=xlTopToBottom, DataOption1:=xlSortNormal

'go to the State column, row 2
Application.Goto reference:="R2C10"
PrevCell = ActiveCell.Value

Do Until PrevCell = ""
'find the next state
Do Until ActiveCell.Value <> PrevCell
ActiveCell.Offset(1, 0).Select
Loop
RowNum = ActiveCell.Row

'find the next 1000th line
NewRow = Application.WorksheetFunction.Ceiling(RowNum, 1000)
'do until the next 1000th line
Do While RowNum <= NewRow
Rows(RowNum).Insert Shift:=xlDown
RowNum = RowNum + 1 'go to Next Row
Loop

'move 1 row down and column J
Cells(NewRow + 1, 10).Select
PrevCell = ActiveCell.Value
Loop
End Sub
'-----------
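
A rough, untested sketch of "doing it differently": suspend screen updating
and recalculation for the duration, and insert each state's padding as one
block instead of one row at a time. The Sub name is made up; as above, the
data is assumed to start in row 2 with states in column J (column 10).

'-------------
Sub TestBlockInsert()
Dim RowNum As Long
Dim NewRow As Long
Dim PrevCell As Variant

Application.ScreenUpdating = False
Application.Calculation = xlCalculationManual

Cells.Sort Key1:=Range("J2"), Order1:=xlAscending, _
Header:=xlYes, MatchCase:=False, _
Orientation:=xlTopToBottom, DataOption1:=xlSortNormal

RowNum = 2
PrevCell = Cells(RowNum, 10).Value
Do Until PrevCell = ""
'walk down to the first row of the next state
Do Until Cells(RowNum, 10).Value <> PrevCell
RowNum = RowNum + 1
Loop
'round up to the next multiple of 1,000 and insert the whole gap at once
NewRow = Application.WorksheetFunction.Ceiling(RowNum, 1000)
Rows(RowNum).Resize(NewRow - RowNum + 1).Insert Shift:=xlDown
RowNum = NewRow + 1
PrevCell = Cells(RowNum, 10).Value
Loop

Application.Calculation = xlCalculationAutomatic
Application.ScreenUpdating = True
End Sub
'-------------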



"ReportSmith"
wrote in message
A 2-part question. I have code that will sort a a dataset (with multiple
rows and columns) by the data in column "J" (a STATE field).

Then, the code will cycle through the sorted list and everytime a state
changes, new lines will be entered to the next 1,000th (so if 'AK' has 700
rows, 300 blank lines will be entered before the 'AL' rows (which will start
on row 1001; if 'AK' has 1100 rows, 900 blank lines willbe entered before the
'AL' rows (which will start on row 2001, etc).

The code works, but not as fast as I would like to see. Does anyone have
any suggestions for a faster way?

Sub Test()
Cells.Select
Selection.Sort Key1:=Range("J2"), Order1:=xlAscending, Header:=xlYes, _
OrderCustom:=1, MatchCase:=False, Orientation:=xlTopToBottom, _
DataOption1:=xlSortNormal

'go to the State column, row 2
Application.Goto reference:="R2C10"
PrevCell = ActiveCell.Value

RowNum = 1
Do Until PrevCell = ""
'find the next state
Do Until ActiveCell.Value <> PrevCell
Application.Goto reference:="R" & RowNum + 1 & "C10"
RowNum = RowNum + 1
Loop
'find the next 1000th line
NewRow = Application.WorksheetFunction.Ceiling(RowNum, 1000)
'do until the next 1000th line
Do While RowNum <= NewRow
Rows(RowNum).Select
'select the row
Selection.Insert Shift:=xlDown
'insert blank line
RowNum = RowNum + 1 'go to next row
Loop
'move 1 row down and column J
Application.Goto reference:="R" & NewRow + 1 & "C10"
PrevCell = ActiveCell.Value
Loop
End Sub
Thanks in advance for any/all suggestions.
 

Jim Thomlinson

This should be close...

Sub test()
Dim rng As Range
Dim lng As Long

Set rng = Cells(Rows.Count, "J").End(xlUp)
Range(rng, Range("J2")).Sort Key1:=Range("J2"), Order1:=xlAscending, _
Header:=xlNo
Set rng = Cells(Rows.Count, "J").End(xlUp)
lng = 0
Do While rng.Row > 2
Set rng = rng.Offset(-1, 0)
If rng.Value <> rng.Offset(1, 0).Value Then
Do While rng.Value = rng.Offset(-lng)
lng = lng + 1
Loop
rng.Offset(1, 0).Resize(1000 - lng).EntireRow.Insert
lng = 0
End If
Loop
rng.End(xlDown).Offset(1, 0).EntireRow.Delete
End Sub
 

Jim Thomlinson

Oops... I just noticed mine is only sorting column J... Try this...

Sub test()
Dim rng As Range
Dim lng As Long

Set rng = Cells(Rows.Count, "J").End(xlUp)
Range(rng, Range("J2")).EntireRow.Sort Key1:=Range("J2"), _
Order1:=xlAscending, Header:=xlNo
Set rng = Cells(Rows.Count, "J").End(xlUp)
lng = 0
Do While rng.Row > 2
Set rng = rng.Offset(-1, 0)
If rng.Value <> rng.Offset(1, 0).Value Then
Do While rng.Value = rng.Offset(-lng)
lng = lng + 1
Loop
rng.Offset(1, 0).Resize(1000 - lng).EntireRow.Insert 'pad the group to 1,000 rows
lng = 0
End If
Loop
rng.End(xlDown).Offset(1, 0).EntireRow.Delete
End Sub
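
One caveat with the Resize(1000 - lng) insert (an untested suggestion): if a
state has 1,000 or more rows, the size comes out zero or negative and the
Insert raises an error. Basing it on lng Mod 1000 would match the original
"next 1,000th" rule:

If lng Mod 1000 <> 0 Then
'e.g. 1,100 rows: 1100 Mod 1000 = 100, so insert 900 blanks
rng.Offset(1, 0).Resize(1000 - (lng Mod 1000)).EntireRow.Insert
End If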
 

ReportSmith

Thanks for the suggestions. I will try them out as time allows (as always,
time is of the essence). In the meantime, I will go with the code I have.

Thanks again.
 

Don Guillett

Using FIND should be quicker than stepping through each row. Adapt to suit.

Sub PlaceBlankRowsBetweenValues()
'Columns(1).SpecialCells(xlCellTypeBlanks).Delete
lr = Cells(Rows.count, "a").End(xlUp).Row
With Range("A1:A" & lr)
..AdvancedFilter Action:=xlFilterCopy, _
CopyToRange:=.Range("C1"), Unique:=True
End With

'pick an UNused column
lrc = Cells(Rows.count, "c").End(xlUp).Row
mylist = Application.Transpose(Range("C2:C" & lrc))
Columns("c").ClearContents

On Error GoTo nomo
For i = 1 To lr
p1 = Cells.Find(mylist(i), After:=Range("a1"), LookIn:=xlValues, LookAt:= _
xlPart, SearchOrder:=xlByRows, SearchDirection:=xlNext).Row
'MsgBox p1
p2 = Cells.Find(mylist(i + 1), After:=Cells(p1, 1), LookIn:=xlValues, _
LookAt:=xlPart, SearchOrder:=xlByRows, SearchDirection:=xlNext).Row
'MsgBox p2
dif = Application.CountIf(Columns(1), mylist(i))
'MsgBox dif
If dif < 10 Then Cells(p2, 1).Resize(10 - dif).EntireRow.Insert 'Select

Next i
nomo:
End Sub
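
The 10 here looks like a demo value; for the stated requirement (pad each
state out to the next multiple of 1,000) the size test would become something
like this (untested):

pad = 1000 - (dif Mod 1000)
If pad < 1000 Then Cells(p2, 1).Resize(pad).EntireRow.Insert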
 

ReportSmith

Thanks Don. I tried it out, but got stuck with the following line when
compiling:

..AdvancedFilter Action:=xlFilterCopy, CopyToRange:=.Range("C1"), Unique:=True

Not quite sure what the "..AdvancedFilter" means.
 

ReportSmith

Disregard the first response. I got it. The "..AdvancedFilter" should be
".AdvancedFilter".
 

Don Guillett

A little cleanup and adapting for you

Sub PlaceBlankRowsBetweenValues()
'Columns("J").SpecialCells(xlCellTypeBlanks).Delete
lr = Cells(Rows.count, "j").End(xlUp).Row
With Range("j1:j" & lr)
.AdvancedFilter Action:=xlFilterCopy, _
CopyToRange:=.Range("k1"), Unique:=True
End With

'pick an UNused column instead of K
lrk = Cells(Rows.count, "k").End(xlUp).Row
mylist = Application.Transpose(Range("k2:k" & lrc))
Columns("k").ClearContents

On Error GoTo nomo
For i = 1 To lrk
p1 = Cells.Find(mylist(i), After:=Range("j1"), LookIn:=xlValues, LookAt:= _
xlWhole, SearchOrder:=xlByRows, SearchDirection:=xlNext).Row
'MsgBox p1
p2 = Cells.Find(mylist(i + 1), After:=Cells(p1, 1), LookIn:=xlValues, _
LookAt:=xlWhole, SearchOrder:=xlByRows, SearchDirection:=xlNext).Row
'MsgBox p2
dif = Application.CountIf(Columns("J"), mylist(i))
'MsgBox dif
If dif < 10 Then Cells(p2, "J").Resize(10 - dif).EntireRow.Insert 'Select

Next i
nomo:
End Sub
 

Jim Thomlinson

This is not directed solely at you, but I feel the need to vent a bit. If
there are parameters, such as that you only want minor tweaks to your existing
code and are not interested in other solutions, then please specify that ahead
of time. When you do not even take the time to test my solution, you have
completely wasted my time. When you waste the time of the people who are
providing the solutions, it does not bode well for getting help in the future.
If someone has taken considerable time to provide a solution, the least you
can do is test it and give them some feedback.
 

Don Guillett

Jim, this happens to all of us often.
I tested your solution and it does the same as mine.
I only felt that FIND would be a bit quicker on a large data field.
 

Jim Thomlinson

What gets me is that if you had to pay for the advice, you would not dismiss
the solution based on time constraints. Even more basic than that, if you had
to ask someone face to face for help, you would not dismiss their help with
so little regard...

Just my 2 cents...
 

Jim Cone

Jim,
It seems to me that the more time spent on a solution the greater the chance
that it will be ignored. <g>
Maybe the complexity increases with the time spent and the answer becomes unintelligible to the poster?

Lacking worthwhile pursuits, a while back I started a list of posters who do
not acknowledge responses - just those posters whose questions I attempted to
answer and who ignored all responses from me or others.

The list is up to about 1200 now.
--
Regards,
Jim Cone
San Francisco, USA
http://www.realezsites.com/bus/primitivesoftware
(Excel Add-ins / Excel Programming)



"Jim Thomlinson"
wrote in message
What gets me is if you had to pay for the advice you would not dismiss the
solution based on time constraints. Even more basic than that if you had to
ask someone face to face for help you would not dismiss their help with such
little regard...
Just my 2 cents...
 

ReportSmith

Guys, guys, guys,
To clarify...I did not dismiss the posted solutions/suggestions. I have
multiple responsibilities and need to attend to other matters as well (there
are only so many hours in a day). My reply was a quick one just so I could
acknowledge the responders. So, just the opposite of what Jim said.

To continue my original thought... I will test out the suggested solutions
and compare the time differentials - and go with the one that makes the most
sense and is quickest.

Once again, thanks for the solutions and suggestions (that will not go
ignored).
 

ReportSmith

Don,
It's 6 PM here and I finally got a chance to test your code. It is pretty
quick, but it does not take into account a few things, although I did see how
to get a count of records in the dataset (I can definitely use that).

The code enters lines from row 1, and with each iteration of the For...Next
loop, more lines are added from the top. It does not find the next state and
pad to the next 1,000th 'ceiling'. Maybe I modified the code wrong.

Also, I found a variable <lrc> that wasn't initialized, but used in the
following line(s):
......
lr = Cells(Rows.Count, "j").End(xlUp).Row
With Range("j1:j" & lr)
.AdvancedFilter Action:=xlFilterCopy, CopyToRange:=.Range("aj1"), _
Unique:=True
End With

'pick an UNused column instead of K
lrk = Cells(Rows.Count, "aj").End(xlUp).Row
mylist = Application.Transpose(Range("aj2:aj" & lrk)) '<----- was <lrc> - mistake?
......

I used column "AJ" instead of "K" (I have data from col "A" to col "AA").
Like I said, maybe I modified the code incorrectly. Any suggestions?

Thanks again.
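
(On the <lrc> question: it does look like a leftover from the earlier
column-C version. A minimal, untested patch, assuming the AJ layout described
above - use lrk in the Transpose and stop the loop one short so that
mylist(i + 1) stays in range:

lrk = Cells(Rows.Count, "aj").End(xlUp).Row
mylist = Application.Transpose(Range("aj2:aj" & lrk))
......
For i = 1 To lrk - 2 'mylist holds lrk - 1 states; mylist(i + 1) needs i <= lrk - 2

The last state then gets no padding, which matches what the On Error bail-out
was already doing.)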
 

Jim Thomlinson

If the question is important enough for you to request my immediate time, then
it should be important enough for you to devote your immediate time... If you
don't have the time right now, then don't request my time right now.

Just so we are all on the same page, I do not know of anyone around here who
withholds answering questions based on who is asking, but there are a few
people I recall who burned their bridges... oddly enough, they never posted
again.

Jim Cone... only 1200? When did you start the list? Last month... <bg>
 

Don Guillett

Further testing with Jim's and mine actually shows Jim's to be a bit
quicker. I wonder why, and also whether the same would still be the case on a
large database?
 

Jim Thomlinson

Yours has a fair bit of overhead before it really gets going... Once it gets
going it may well be faster, but you have to get past the advanced filter and
such first. I would suspect that on large datasets it might be faster.
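
For what it's worth, a quick way to settle it on real data (an untested
harness; the Sub name stands in for whichever version is under test):

Sub CompareTimings()
Dim t As Single
t = Timer 'seconds since midnight
Call PlaceBlankRowsBetweenValues 'or the other version under test
Debug.Print "Elapsed: "; Timer - t; " seconds"
End Sub

Run each version against an identical copy of the data so the rows inserted
by one run don't skew the next.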
 
