Ali-R
Hi,
Is there a parser which parses CSV files?
Thanks for your help.
Reza
Dan Bass said: There are loads of CSV parsers out there:
http://www.google.co.uk/search?sour...rls=GGLD,GGLD:2005-06,GGLD:en&q=csv+parser+c#
Good luck.
You said to use ADO.NET to read/write a CSV file via, say, a generic text ODBC driver or the Excel driver, and a FieldInfo property consisting of an ArrayList of FieldInfo structs that defines the field name/data type of each field.
1) What do you mean by a generic text ODBC driver or the Excel driver?
2) I didn't understand what the usage of the FieldInfo property is.
3) I need to validate each record value against some complicated rules; what have you done about that?
4) Can you send me that example by any chance? My email is (e-mail address removed) (remove Fake from the beginning of my userid).
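For reference, the "generic text ODBC driver" being asked about in (1) usually means the Jet/ODBC text driver: you point a connection at the folder that holds the file and query the file itself as if it were a table. A minimal sketch of that approach is below; the folder, file name, and connection-string details are placeholders, not anything taken from the thread.

using System;
using System.Data;
using System.Data.OleDb;

class TextDriverDemo
{
    static void Main()
    {
        // Placeholder: C:\Data\customers.csv with a header row.
        // The Jet text driver treats the folder as the "database"
        // and each file in it as a table.
        string connStr =
            @"Provider=Microsoft.Jet.OLEDB.4.0;" +
            @"Data Source=C:\Data\;" +
            @"Extended Properties=""text;HDR=Yes;FMT=Delimited""";

        using (OleDbConnection conn = new OleDbConnection(connStr))
        using (OleDbDataAdapter adapter =
                   new OleDbDataAdapter("SELECT * FROM customers.csv", conn))
        {
            DataTable table = new DataTable();
            adapter.Fill(table);            // pulls the whole file into memory

            foreach (DataRow row in table.Rows)
                Console.WriteLine(row[0]);  // first column of each record
        }
    }
}

The Excel driver mentioned alongside it works the same way, just with a different provider/connection string.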
Unless you want all of your fields to be strings and don't care to handle
/ validate field name header records that are part of some CSV files, you
need some mechanism to define field names, field order, field data types,
and perhaps width and basic formatting instructions. You just create a
struct or class that defines these items.
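Bob doesn't post his actual FieldInfo type, but from the description (a name, a data type, perhaps a width and a format) a bare-bones version might look something like the sketch below. All of the member names are guesses for illustration, not his code.

using System;
using System.Collections;

// One guess at a per-field schema record: name, expected .NET type,
// optional fixed width and format string.
public struct FieldInfo
{
    public string Name;
    public Type   DataType;   // e.g. typeof(string), typeof(decimal), typeof(DateTime)
    public int    Width;      // 0 = no fixed width
    public string Format;     // e.g. "yyyyMMdd" for dates, "" if unused

    public FieldInfo(string name, Type dataType, int width, string format)
    {
        Name = name;
        DataType = dataType;
        Width = width;
        Format = format;
    }
}

// The reader/writer would then carry an ArrayList of these, in file order
// (ArrayList rather than a generic list, in keeping with the era).
public class CsvSchema
{
    public ArrayList Fields = new ArrayList();

    public void Add(string name, Type dataType)
    {
        Fields.Add(new FieldInfo(name, dataType, 0, ""));
    }
}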
Bob Grommes said: Field names can become important when you have a header record in the CSV
and need to check that the header record does not change from what's
expected, since change in field order is significant, and change in field
names *may* be significant. It has been my experience that data from
outside sources can change at any time without notice, no matter whether
the provider of the data guarantees they will or will not. So it pays to
check that you're getting what you expect before you go and put the wrong
data into the wrong fields or something.
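A header check along those lines can be as small as comparing the first record's field names, case-insensitively and in order, against what the schema expects; anything unexpected stops the load before bad data goes anywhere. This is only an illustration, not Bob's code.

using System;

public class HeaderCheck
{
    // Throw if the header record doesn't match the expected names in order.
    // Assumes an unquoted, comma-delimited header line.
    public static void ValidateHeader(string headerLine, string[] expectedNames)
    {
        string[] actual = headerLine.Split(',');

        if (actual.Length != expectedNames.Length)
            throw new ApplicationException("Header has " + actual.Length +
                " fields, expected " + expectedNames.Length);

        for (int i = 0; i < expectedNames.Length; i++)
        {
            if (String.Compare(actual[i].Trim(), expectedNames[i], true) != 0)
                throw new ApplicationException("Field " + i + " is '" +
                    actual[i].Trim() + "', expected '" + expectedNames[i] + "'");
        }
    }
}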
In any case I like to name all the fields just for debugging purposes,
even when there is no header record. Field names can also be useful for
populating default data grid headers and the like, in the UI, though I've
never actually needed to do it. Finally you could create indexers or
methods that would return field values based on field name rather than
just field index. Again, something I haven't actually needed to do, but
it could be handy at times.
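The indexer idea would look roughly like this: a parsed record that can be addressed by position or by the name the schema defines. Again a sketch with invented names, not code from the thread.

using System;
using System.Collections;

public class CsvRecord
{
    private string[]  values;
    private Hashtable nameToIndex;   // field name -> column position

    public CsvRecord(string[] fieldNames, string[] fieldValues)
    {
        values = fieldValues;
        nameToIndex = new Hashtable(fieldNames.Length);
        for (int i = 0; i < fieldNames.Length; i++)
            nameToIndex[fieldNames[i]] = i;
    }

    // Access by position.
    public string this[int index]
    {
        get { return values[index]; }
    }

    // Access by field name; throws if the name was never defined.
    public string this[string name]
    {
        get
        {
            object index = nameToIndex[name];
            if (index == null)
                throw new ArgumentException("Unknown field: " + name);
            return values[(int)index];
        }
    }
}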
My FieldInfo concept is not magical; your parsing/reading/writing class
has to use that info to do the appropriate transformations and enforce
rules or raise errors. It's just a container for some limited schema
information. There are probably lots of ways you could structure it;
this is just the way I chose.
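In other words, the reader is the piece that actually applies the schema: it takes each raw string, tries to turn it into the declared type, and turns a failure into an error that says which record and field broke. Something like the sketch below, where FieldInfo means the kind of struct sketched earlier, not a framework type.

using System;
using System.Globalization;

public class FieldConverter
{
    // Convert one raw field string according to its FieldInfo, or raise a
    // descriptive error naming the record and field that failed.
    public static object ConvertField(string raw, FieldInfo field, int recordNumber)
    {
        try
        {
            if (field.DataType == typeof(string))
                return raw;
            if (field.DataType == typeof(DateTime) && field.Format.Length > 0)
                return DateTime.ParseExact(raw, field.Format, CultureInfo.InvariantCulture);
            return Convert.ChangeType(raw, field.DataType, CultureInfo.InvariantCulture);
        }
        catch (FormatException ex)
        {
            throw new ApplicationException("Record " + recordNumber + ", field '" +
                field.Name + "': cannot convert '" + raw + "' to " +
                field.DataType.Name, ex);
        }
    }
}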
One thing I have found is that although I built in support for converting
fields to and from specific .NET data types as part of reading and writing
CSVs, in practice I haven't used that nearly as much as I thought I would,
and tend to just define all fields as strings. For the work I do anyway,
it seems better to let client applications of these text file I/O classes
be responsible for whatever scrubbing and conversion they need to do. If
you wanted to be brutally simplistic you could just design a class that
treats all fields as strings and is just responsible for correctly
separating out each field string in order, and doesn't care about names,
data types, or anything else but perhaps how many fields to expect. The
decision here is a matter of where you want to place the responsibility
for data integrity and how you want to handle data that doesn't conform to
expectations.
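The "brutally simplistic" version is essentially just a splitter: take one line, return its field strings in order, and handle nothing beyond quoting and embedded commas. A sketch of that minimum:

using System.Collections;
using System.Text;

public class SimpleCsv
{
    // Split one CSV line into its raw field strings, honoring double quotes,
    // embedded commas, and "" as an escaped quote inside a quoted field.
    // No names, no types, no validation.
    public static string[] SplitLine(string line)
    {
        ArrayList fields = new ArrayList();
        StringBuilder current = new StringBuilder();
        bool inQuotes = false;

        for (int i = 0; i < line.Length; i++)
        {
            char c = line[i];

            if (inQuotes)
            {
                if (c == '"')
                {
                    if (i + 1 < line.Length && line[i + 1] == '"')
                    {
                        current.Append('"');   // escaped quote
                        i++;
                    }
                    else
                    {
                        inQuotes = false;      // closing quote
                    }
                }
                else
                {
                    current.Append(c);
                }
            }
            else if (c == '"')
            {
                inQuotes = true;               // opening quote
            }
            else if (c == ',')
            {
                fields.Add(current.ToString());
                current.Length = 0;            // start the next field
            }
            else
            {
                current.Append(c);
            }
        }

        fields.Add(current.ToString());        // last field on the line
        return (string[])fields.ToArray(typeof(string));
    }
}

A caller that only cares about field count can simply check SplitLine(line).Length against the expected number and reject anything else.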
You can always drop me a note. I can't guarantee I'll have time to
respond, at least right away, but I will try.
Best,
--Bob