C# & GC

Maxim Kazitov

Hi,

I am writing an application which transforms huge XML files (~150 MB) into CSV files,
and I am facing a strange problem. The first 1000 rows are parsed in 1 sec; after 20,000
rows the speed drops to 100 rows per sec, and after 70,000 rows it drops to 20 rows
per sec (I need to parse ~2,500,000 rows).

To me it looks like a GC problem, but I have no idea how to fix it :(

Any ideas are welcome.
 
Jon Shemitz

Maxim said:
I am writing an application which transforms huge XML files (~150 MB) into CSV files,
and I am facing a strange problem. The first 1000 rows are parsed in 1 sec; after 20,000
rows the speed drops to 100 rows per sec, and after 70,000 rows it drops to 20 rows
per sec (I need to parse ~2,500,000 rows).

To me it looks like a GC problem, but I have no idea how to fix it :(

If you do this transform by reading the whole file into a
representation of the XML and then generating the CSV, you are
imposing serious memory pressure. If you can read an XML element and
write a CSV record, without each iteration adding (much, if at all)
to your working set, you might go much faster.

If you do need to build a representation of the whole file, and each
XML attribute name and value is a distinct string, you can often save
a lot by "interning" string values, eliminating duplicate string
values.
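For example, something along these lines (untested; the helper name is made up) keeps only one copy of each repeated attribute value:

using System;

class StringPool
{
    // Returns the process-wide interned copy of the value, so thousands of
    // rows that repeat the same attribute text all share one string instance.
    // Note that interned strings stay alive for the life of the process.
    public static string Dedupe(string value)
    {
        return value == null ? null : String.Intern(value);
    }
}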

It's also entirely possible that this has nothing to do with the GC.
What you describe is compatible with some code that's walking a linked
list that keeps growing ....
 
Maxim Kazitov

Hi Jon,

I use XmlTextReader, so I don't read all of the XML at once. During parsing
I build small XML documents (one XmlDocument per row) and apply a set of
XPath queries to each document. I have a couple of Hashtables in my code, but they
are pretty small.


Thanks,
Max
 
Fred Hirschfeld

Are you creating a new XmlDocument each time, or reusing the same one? You should
make sure you create just one and load each row's XML string into that same
instance.

I ran into memory issues when I created a lot of XmlDocument instances.
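In outline, that looks something like this (the method and parameter names are just placeholders):

using System.Xml;

class RowProcessor
{
    private readonly XmlDocument doc = new XmlDocument();

    public void ProcessRow(string rowXml)
    {
        doc.LoadXml(rowXml);   // replaces the previous row's contents in place
        // ... run the XPath queries against doc and emit the CSV line ...
    }
}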

Maxim Kazitov said:
Hi Jon,

I use XmlTextReader, so I don't read all of the XML at once. During parsing
I build small XML documents (one XmlDocument per row) and apply a set of
XPath queries to each document. I have a couple of Hashtables in my code, but they
are pretty small.


Thanks,
Max
 
Cor Ligthert

Maxim,

The reason is probably the way you build your CSV files.
If you create them as long strings in memory first, then the problem is
clear.

Can you show that code?

Cor
 
Christoph Nahr

I use XmlTextReader, so I don't read all of the XML at once. During parsing
I build small XML documents (one XmlDocument per row) and apply a set of
XPath queries to each document. I have a couple of Hashtables in my code, but they
are pretty small.

1. Make sure that you "let go" of each XmlDocument when you no longer
use it. Every reference must have gone out of scope, been set to null,
or been reassigned to the new XmlDocument. The old documents
must not stay around in memory.

2. Call System.GC.Collect() immediately before you create a new
XmlDocument. Microsoft pretends this can't happen, but I have seen for
myself that the garbage collector's performance can completely break
down if you repeatedly allocate large pools of objects without manual
Collect calls in between.
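Put together, the loop would look roughly like this (untested sketch; "rows" stands in for however the row XML strings are produced):

using System;
using System.Collections;
using System.Xml;

class GcPerRow
{
    static void Convert(IEnumerable rows)
    {
        foreach (string rowXml in rows)
        {
            GC.Collect();                       // point 2: manual collection between rows
            XmlDocument doc = new XmlDocument();
            doc.LoadXml(rowXml);
            // ... apply the XPath queries and write the CSV line ...
            doc = null;                         // point 1: let go of the document
        }
    }
}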
 
Willy Denoyette [MVP]

Maxim Kazitov said:
Hi,

I am writing an application which transforms huge XML files (~150 MB) into CSV
files, and I am facing a strange problem. The first 1000 rows are parsed in 1 sec;
after 20,000 rows the speed drops to 100 rows per sec, and after 70,000 rows it
drops to 20 rows per sec (I need to parse ~2,500,000 rows).

To me it looks like a GC problem, but I have no idea how to fix it :(

Any ideas are welcome.

I could be wrong, but it looks like you are using more memory than is physically
available, and as a result the system starts paging and finally starts
thrashing. That would mean you are holding references to objects that could
otherwise be collected by the GC, so it's not a GC problem, it's a design
problem.
I suggest you start by looking at the memory consumption using PerfMon (the GC Gen
0, 1 and 2 memory counters) and at the paging activity.
If it looks like I'm right, you should check your object allocation pattern and
check whether you are holding references that could otherwise be released;
for instance, references stored in arrays/collections that are no longer
needed should be set to null.
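For instance, something like this (the collection name is invented) releases entries once they are finished with:

using System.Collections;

class Cleanup
{
    // Entries kept in a live collection keep their objects reachable,
    // so null them out (or clear the list) once they are processed.
    static void ReleaseProcessedRows(ArrayList processedRows)
    {
        for (int i = 0; i < processedRows.Count; i++)
        {
            processedRows[i] = null;   // the GC can now reclaim each entry
        }
        processedRows.Clear();         // or drop the whole list if it is no longer needed
    }
}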

Willy.
 
Pat A

You should also use a StringBuilder to build your output string. If
you are using string concatenation, you are creating many string
instances, and that is very inefficient. If you are concatenating a
string in a loop, always use a StringBuilder.
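For example, a CSV line built this way allocates one growing buffer instead of a new string per "+" (the field array is a made-up example):

using System.Text;

class CsvLine
{
    static string Build(string[] fields)
    {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < fields.Length; i++)
        {
            if (i > 0) sb.Append(',');   // separator between fields
            sb.Append(fields[i]);
        }
        return sb.ToString();
    }
}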
 
Jay B. Harlow [MVP - Outlook]

Maxim,
Do you need the XmlDocument? Have you considered using the XPathDocument class
instead? I don't know whether it's more memory-friendly than XmlDocument, but I do
know it is faster than XmlDocument...
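Untested, but reading a row with XPathDocument would look something like this (the element names and query are invented):

using System.IO;
using System.Xml.XPath;

class XPathRow
{
    static string ReadField(string rowXml)
    {
        XPathDocument doc = new XPathDocument(new StringReader(rowXml));
        XPathNavigator nav = doc.CreateNavigator();
        XPathNodeIterator it = nav.Select("/row/customer/@id");   // example query
        return it.MoveNext() ? it.Current.Value : null;
    }
}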

Have you used PerfMon or the CLR Profiler to see what the lifetime of your
objects is? I would use PerfMon first, as Willy suggests, and if it suggests a
memory problem, then use the CLR Profiler to identify specific problems...

Info on the CLR Profiler:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnpag/html/scalenethowto13.asp

http://msdn.microsoft.com/library/d...y/en-us/dndotnet/html/highperfmanagedapps.asp

Hope this helps
Jay


Maxim Kazitov said:
Hi Jon,

I use XmlTextReader, so I don't read all of the XML at once. During parsing
I build small XML documents (one XmlDocument per row) and apply a set of
XPath queries to each document. I have a couple of Hashtables in my code, but they
are pretty small.


Thanks,
Max
 
Maxim Kazitov

I create the CSV file row by row and push each row to the file, so the application
doesn't use any huge strings.
 
Maxim Kazitov

XPathDocument is read-only... I need an object that can add/remove nodes,
and XmlDocument is the only one I know.




Jay B. Harlow said:
Maxim,
Do you need the XmlDocument? Have you considered using the XPathDocument class
instead? I don't know whether it's more memory-friendly than XmlDocument, but I do
know it is faster than XmlDocument...

Have you used PerfMon or the CLR Profiler to see what the lifetime of your
objects is? I would use PerfMon first, as Willy suggests, and if it suggests a
memory problem, then use the CLR Profiler to identify specific problems...

Info on the CLR Profiler:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnpag/html/scalenethowto13.asp

http://msdn.microsoft.com/library/d...y/en-us/dndotnet/html/highperfmanagedapps.asp

Hope this helps
Jay
 
Jon Shemitz

Maxim said:
I already use StringBuilder

Well, that's probably the chief source of your slowdown. Appending to
a StringBuilder is a lot faster than repeatedly doing BigString +=
SmallString, but it still periodically reallocates its internal
buffer and copies the data from the old buffer to the new one.
That's slow in several ways:

* Copying a big buffer runs at bus speeds AND purges the cache.
* Allocating a large object forces a garbage collection.
* Large objects are allocated on the Large Object Heap, a traditional
  linked-list heap, not a compacted heap.

Can you write your CSV to a file, line by line? That would keep your
working set from growing, and would tend to make your algorithm's cost
linear in the number of rows read.
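Roughly like this (the file name and line content are placeholders):

using System.IO;

class LineByLine
{
    static void Convert(string outputPath)
    {
        using (StreamWriter writer = new StreamWriter(outputPath))
        {
            // for each parsed row: build one short line and write it out
            // immediately, so nothing accumulates in memory between rows
            writer.WriteLine("field1,field2,field3");
        }
    }
}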
 
Guest

Hi Maxim,
I think you're going to have to post a code snippet of what you're doing. My general
rule is to use the XmlTextReader to read node by node. DON'T create
XmlDocument objects, because they are heavy. DON'T do XPath. DON'T do XSLT.

My recommendation, without seeing what you are doing, is to read the XML
document node by node and write to a file stream for each node.
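Something along these lines (untested; the element names "row", "name" and "amount" are made up for the example):

using System.IO;
using System.Xml;

class StreamingConvert
{
    static void Convert(string xmlPath, string csvPath)
    {
        using (StreamWriter writer = new StreamWriter(csvPath))
        {
            XmlTextReader reader = new XmlTextReader(xmlPath);
            string name = null, amount = null;
            while (reader.Read())
            {
                if (reader.NodeType == XmlNodeType.Element)
                {
                    if (reader.Name == "name")        name   = reader.ReadString();
                    else if (reader.Name == "amount") amount = reader.ReadString();
                }
                else if (reader.NodeType == XmlNodeType.EndElement && reader.Name == "row")
                {
                    writer.WriteLine(name + "," + amount);   // one CSV line per row element
                    name = amount = null;
                }
            }
            reader.Close();
        }
    }
}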

Maxim Kazitov said:
Hi Jon,

I use XmlTextReader, so I don't read all of the XML at once. During parsing
I build small XML documents (one XmlDocument per row) and apply a set of
XPath queries to each document. I have a couple of Hashtables in my code, but they
are pretty small.


Thanks,
Max
 
Guest

Hi, Maxim

I think you have several issues in your code. When you try to do things that
are way off from normal usage, you need to handle a lot of low-level stuff
yourself instead of trusting the nice DOM and XPath objects.

Here are some rules of thumb for performance when parsing large files:
1. Avoid in-memory parsing. If your input file is 150 MB on disk you will
need A LOT of memory, and this will put unreasonable pressure on the entire
machine.
2. Avoid heavy object creation. You wrote that you create an XmlDocument for
every row; this will be VERY heavy.
3. Avoid XPath. I know it is a great technology, but you are way out of its
normal range and therefore need to handle everything down to the "metal".
4. Use XmlTextReader.
5. Combine FileStream, StreamWriter and StringBuilder to create a solution
that behaves well. Don't create the entire file in a StringBuilder; instead,
build parts of it in the StringBuilder and flush these to the file to keep
memory consumption down (see the sketch below).
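A rough sketch of rule 5, with invented names and an arbitrary batch size:

using System.IO;
using System.Text;

class ChunkedWriter
{
    const int RowsPerFlush = 5000;   // flush every few thousand rows

    static void Convert(string csvPath)
    {
        StringBuilder buffer = new StringBuilder();
        int pending = 0;
        using (StreamWriter writer = new StreamWriter(new FileStream(csvPath, FileMode.Create)))
        {
            // for each converted row:
            buffer.Append("field1,field2,field3\r\n");
            if (++pending == RowsPerFlush)
            {
                writer.Write(buffer.ToString());
                buffer.Length = 0;   // reset the builder so it never grows unbounded
                pending = 0;
            }

            // after the last row, flush whatever is left:
            writer.Write(buffer.ToString());
        }
    }
}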

// Daniel
 
Steve Drake

Have you tried running a profiling tool such as ANTS?

Also, could you comment out some of the code to try to isolate the problem?
And look at PerfMon's "% Time in GC" counter; this should be around 20%.

Steve
 
Philipp Schumann

You can feed the reader itself into the XSLT processor; is that an option in
your app design? This essentially means XSLT itself iterates through the
reader (and outputs to a stream).

Untested, but perhaps this gives you better results, unless you explicitly
need to do something specific between each intermediate step. (In which
case, you could also have XSLT invoke a specified callback, BTW.)
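A sketch of that wiring, assuming .NET 2.0's XslCompiledTransform (the file names and the stylesheet are placeholders):

using System.IO;
using System.Xml;
using System.Xml.Xsl;

class XsltStreaming
{
    static void Convert(string xmlPath, string xsltPath, string csvPath)
    {
        XslCompiledTransform xslt = new XslCompiledTransform();
        xslt.Load(xsltPath);   // a stylesheet that emits CSV text

        using (XmlReader input = XmlReader.Create(xmlPath))
        using (StreamWriter output = new StreamWriter(csvPath))
        {
            xslt.Transform(input, null, output);   // null = no XsltArgumentList
        }
    }
}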

Maxim Kazitov said:
Hi Jon,

I use XmlTextReader, so I don't read all of the XML at once. During parsing
I build small XML documents (one XmlDocument per row) and apply a set of
XPath queries to each document. I have a couple of Hashtables in my code, but they
are pretty small.


Thanks,
Max
 
Nick Malik [Microsoft]

Hello Maxim,

I looked at each of your responses. Here is what you appear to be doing:
You read a very large XML document using XmlTextReader.

You apply XPath queries... but you want to use XmlDocument (for some reason)
because you want to change the nodes.

Some folks feel that you are using XSLT, but, reading the messages in the
microsoft.public.dotnet.framework newsgroup, I don't see you saying anything
about XSLT. Perhaps you posted a response to only one NG? Or did others
read that in?

If your input is XML and your output is CSV, and you are using
XmlTextReader, there is no reason to ever use XmlDocument. You can load the
data from the XML into a class, manipulate the data via methods and
properties, and write it out as CSV, without ever using XmlDocument.

In fact, I'm wondering about something. How complex is the node structure
used to generate a single CSV record? Are we talking about hundreds
of attributes and tightly wound rules (as with a HIPAA XML transaction), or
are we talking about a sales invoice (with a few dozen fields and some
repeated columns)? If the latter, then use the XmlTextReader to get the
text for each CSV record, extract the InnerXml, and parse it by hand. You
are very likely to get performance that is acceptable and that you can
understand and optimize.
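A loose sketch of that "by hand" parsing (the field names, the helper, and the markup layout are all invented for illustration):

class SalesRow
{
    public string Customer;
    public string Amount;

    // Build a row from the markup returned by XmlTextReader.ReadInnerXml()
    // when the reader is positioned on one row element.
    public static SalesRow FromInnerXml(string innerXml)
    {
        SalesRow row = new SalesRow();
        row.Customer = Between(innerXml, "<customer>", "</customer>");
        row.Amount   = Between(innerXml, "<amount>", "</amount>");
        return row;
    }

    public string ToCsvLine()
    {
        return Customer + "," + Amount;
    }

    // Crude substring search; fine when the row markup is simple and regular.
    static string Between(string text, string open, string close)
    {
        int start = text.IndexOf(open);
        if (start < 0) return "";
        start += open.Length;
        int end = text.IndexOf(close, start);
        return end < 0 ? "" : text.Substring(start, end - start);
    }
}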

I hope this helps,

--
--- Nick Malik [Microsoft]
MCSD, CFPS, Certified Scrummaster
http://blogs.msdn.com/nickmalik

Disclaimer: Opinions expressed in this forum are my own, and not
representative of my employer.
I do not answer questions on behalf of my employer. I'm just a
programmer helping programmers.
 
