Import with Schema.ini and delimiter other than ,

Ross Cox

I'm trying to write code that will import a delimited text file with an unknown
number of fields and a delimiter other than a comma. The import code is:

infile = CurrentProject.Path & "\tmpfile.csv"
DoCmd.TransferText acImportDelim, , "tblImport", infile, Me.chkHeader

The schema.ini file is in the same directory as tmpfile.csv, and is
customized with whatever the user's delimiter happens to be (and
ColNameHeader set to TRUE or FALSE depending on Me.chkHeader). Like:

[tmpfile.csv]
Format=Delimited(|)
ColNameHeader = TRUE
MaxScanRows = 0
CharacterSet = ANSI

This code works perfectly if I use Format=Delimited(,), but does not seem to
work with any other delimiter. I tried * and | and others. I end up with
just a couple of fields imported, with the data not separated where the
delimiter is. A typical field in a record ends up looking like this:

DL52579106|Jankowsky|Ann|Maureen McKee Eidson?|||||||Ebbtides Nite-Lite
UD|03/04/1994|Emmett & Ann Atkinson|Levi Docker of Heatherhill|Ebbtide Get
Real|Australian Shepherd|Dog

I tried CharacterSet=OEM with the same results. The challenge here is that
I'm importing data from a user's file in which the number of fields, the
delimiter, and whether the first row is a header are all variable. The data
may contain commas. I have a routine to strip out the single and double
quotes before the import runs.

Any thoughts?
 

John Nurick

Hi Ross,

I have a nasty feeling that the text file import system will only handle
files with unspecified fields if they are comma-separated.

IMO the simplest approach is to write code to read the file line by line
and use Split() to separate it into fields on whatever delimiter is
chosen.

Having read and split the first line, you have a list of field names and
can then create the table. Then work through the remainder of the file,
either using recordset operations or building and executing SQL
single-record append queries to create the records.
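
Untested air code, just to show the shape of it (assumes the first line
holds the field names, the delimiter is in sDelim, and everything is
imported as text; it won't cope with delimiters inside quoted fields, but
since you strip the quotes first that shouldn't matter):

Dim db As DAO.Database
Dim rs As DAO.Recordset
Dim aFields() As String
Dim sLine As String, sSQL As String, sDelim As String
Dim j As Long, f As Integer

Set db = CurrentDb
sDelim = "|"    'whatever the user chose

f = FreeFile
Open CurrentProject.Path & "\tmpfile.csv" For Input As #f

'first line: use the field names to build the table
Line Input #f, sLine
aFields = Split(sLine, sDelim)
sSQL = "CREATE TABLE tblImport ("
For j = 0 To UBound(aFields)
    sSQL = sSQL & "[" & Trim(aFields(j)) & "] TEXT(255), "
Next j
sSQL = Left(sSQL, Len(sSQL) - 2) & ")"
db.Execute sSQL, dbFailOnError

'remaining lines: one record per line
Set rs = db.OpenRecordset("tblImport", dbOpenDynaset)
Do While Not EOF(f)
    Line Input #f, sLine
    If Len(sLine) > 0 Then
        aFields = Split(sLine, sDelim)
        rs.AddNew
        For j = 0 To UBound(aFields)
            rs.Fields(j) = aFields(j)
        Next j
        rs.Update
    End If
Loop

rs.Close
Close #f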

Post back here if this doesn't make sense.


Ross Cox

Yes, I've seen code that uses Split(), and was considering using it. I just
wasn't ready to give up on the 'easy' way yet. The MS documentation on the
schema.ini file isn't very verbose, and seemed to indicate that my code
should work. I guess I will write the Split() code now.

Thanks.

Ross Cox

Ross Cox

On his website, Mr. Larry Rebich has provided the best answer to this
problem. See http://www.buygold.net/v05n09/v05n09.html.

The trick is to use the MS text driver and an ADO recordset (Jet4). You
still put the schema.ini file in the same directory as the input file, and
you can set the delimiter to whatever you want. The | character worked fine.

' I added the code between the asterisks.
'****************
Dim rs As ADODB.Recordset
Dim cnn As ADODB.Connection
Dim sCon As String
'---------------------------------
Set rs = New ADODB.Recordset
Set cnn = New ADODB.Connection
'********************

' build the connection string
' (sPath, sExt and sFile hold the folder, the extension list and the file name)
sCon = "Driver={Microsoft Text Driver (*.txt; *.csv)}; "
sCon = sCon & "DEFAULTDIR=" & sPath & "; "
sCon = sCon & "Extensions=" & sExt & "; "
sCon = sCon & "Persist Security Info=False"

cnn.Open sCon    'open the connection

' read and parse into a recordset
rs.Open "Select * from [" & sFile & "]", cnn, adOpenStatic, adLockReadOnly, adCmdText
'----------------------------------
rs now contains the imported file. This even works with a field like

"Me, You, Us"

with or without the quotes; even with embedded commas (inside quotes, of
course), it still works! WOO HOO!
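
To actually land the data in tblImport I then just walk the recordset. A
rough sketch of my own plumbing (not from Larry's article), assuming the
table doesn't already exist:

' create tblImport with one text column per field in rs, then copy the rows
Dim db As DAO.Database
Dim rsOut As DAO.Recordset
Dim sSQL As String
Dim j As Long

Set db = CurrentDb

sSQL = "CREATE TABLE tblImport ("
For j = 0 To rs.Fields.Count - 1
    sSQL = sSQL & "[" & rs.Fields(j).Name & "] TEXT(255), "
Next j
sSQL = Left(sSQL, Len(sSQL) - 2) & ")"
db.Execute sSQL, dbFailOnError

Set rsOut = db.OpenRecordset("tblImport", dbOpenDynaset)
Do While Not rs.EOF
    rsOut.AddNew
    For j = 0 To rs.Fields.Count - 1
        rsOut.Fields(j) = rs.Fields(j).Value
    Next j
    rsOut.Update
    rs.MoveNext
Loop
rsOut.Close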

Thank you, Larry Rebich.

Ross Cox


david epsom dot com dot au

This is very odd: you are using the text ISAM of the Jet database
engine, which is exactly the same as what you were doing before.
I wonder what exactly TransferText is playing at??????

(david)




Ross Cox

I don't know why it works. I know that the ISAM text driver is used in both
cases, but it seems to (does) work with ADO recordsets. Sounds like a bug.

GEE, who would have thought that there were bugs in Microsoft applications?

Anyway, I guess it is very helpful to use non-Microsoft sources to find out
how things really work. This little piece of code will replace a complicated
(and therefore bug-prone) Split()-based function. I've done a bunch of
testing with this, and it seems to work in all cases.

A correction to what I originally typed: rs should be dim'd as an
ADODB.Recordset. I had typed ASO, which would be the other type of ODBC
driver, an Ass Hol.... well

Again, WOO HOO!

Ross Cox

