How to read HTML files AS IS. The encoding seems to change the characters.

Zoro

My task is to read HTML files from disk and save them into a SQL Server
database field. I have created an nvarchar(max) field to hold them.
The problem is that some characters, particularly HTML entities and
French/German special characters, are lost and/or replaced by a
question mark.
This is really frustrating. I have tried using StreamReader with ALL
the encodings available and none work correctly. Each encoding handles
some characters but loses others. I also tried reading into a byte
array, but as soon as I converted the array to a string the encoding
ruined the text.
Maybe the solution is not to convert to a string? But then how will I
save it to the database?

Is there a way to get this html text AS IS - with no encoding and no
changes into the database?
I could do it in Delphi and many other pre-.NET languages, so there must be a way
in C# too - surely?!

Thanks a lot for your help,
zoro.
 
Eugene Mayevski

Hello!
You wrote on 30 Mar 2007 10:43:07 -0700:

Z> Is there a way to get this html text AS IS - with no encoding and no
Z> changes into the database?

Of course there's a way - use byte[] as a container, and use BLOB in the DB.
Don't use strings, as you will get all sorts of problems (as you already
do).

With best regards,
Eugene Mayevski
http://www.SecureBlackbox.com - the comprehensive component suite for
network security
 
Zoro

Hello!
You wrote on 30 Mar 2007 10:43:07 -0700:

Z> Is there a way to get this html text AS IS - with no encoding and no
Z> changes into the database?

Of course there's a way - use byte[] as a container, and use BLOB in the DB.
Don't use strings, as you will get all sorts of problems (as you already
do).

With best regards,
Eugene Mayevski
http://www.SecureBlackbox.com - the comprehensive component suite for
network security

Thanks for that Eugene.
Unfortunately, I was since told that the data has to be stored in text
format, so that it may be cut and pasted into a text editor directly from
the database, and also so that other applications can retrieve it.
I also found that the problem is purely with C# reading the string -
the nvarchar field doesn't affect it.

Is it possible to read the file into a byte[] and INSERT the array
directly into a nvarchar field? if so how?

I am still intrigued about how to do it in a string though. How come
Delphi and VB6 can read a file into a string without changing its
characters? There must be a way to do it in C#!?

Thanks.
zoro.
 
Mihai N.

Is it possible to read the file into a byte[] and INSERT the array
directly into a nvarchar field? if so how?

I am still intrigued about how to do it in a string though. How come
Delphi and VB6 can read a file into a string without changing its
characters? There must be a way to do it in C#!?

No, and nobody can do it - not in Delphi or VB either.

nvarchar is Unicode text. HTML files are also text, but the encoding is
not always known. You can determine it if the file is properly tagged with
a meta http-equiv tag (for example <meta http-equiv="Content-Type"
content="text/html; charset=utf-8" />).

Thing is, you cannot take bytes and store them as nvarchar (or varchar),
because you will get junk. It might seem to work sometimes, but fail
spectacularly other times (depending on the content and encoding of
your html).
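Mihai's meta-tag point can be sketched in code. The following is a simplified illustration (my own, not from any post in this thread, and not a full HTML parser - real pages may also declare the charset in HTTP headers): read the raw bytes, decode them with ISO-8859-1 just to search for a charset declaration (Latin-1 maps every byte to a character, so the probe never loses the ASCII markup), then decode the same bytes again with the declared encoding.

```csharp
using System;
using System.Text;
using System.Text.RegularExpressions;

static class CharsetSniffer
{
    // Looks for a declaration such as
    //   <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    // Returns null when no charset declaration is present.
    public static string SniffCharset(byte[] raw)
    {
        // ISO-8859-1 (code page 28591) maps every byte value to a
        // character, so this probe pass never throws or drops markup.
        string probe = Encoding.GetEncoding(28591).GetString(raw);
        Match m = Regex.Match(probe, @"charset\s*=\s*[""']?([\w-]+)",
                              RegexOptions.IgnoreCase);
        return m.Success ? m.Groups[1].Value : null;
    }

    public static string DecodeHtml(byte[] raw)
    {
        string charset = SniffCharset(raw);
        // Fall back to ISO-8859-1, the historical default for text/html.
        Encoding enc = charset != null
            ? Encoding.GetEncoding(charset)
            : Encoding.GetEncoding(28591);
        return enc.GetString(raw);
    }
}
```

When the file carries no declaration at all (as in Zoro's case), the fallback is only a guess, which is exactly Mihai's caveat.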
 
Zoro

Thing is, you cannot take bytes and store them as nvarchar (or varchar),
because you will get junk. It might seem to work sometimes, but fail
spectacularly other times (depending on the content and encoding of
your html).

Thanks for that Mihai. So do you agree with Eugene that the best way
forward for me is to read the file into byte[] and store in a
varbinary field?
If so, I will try to convince my management that it is the best way to
go forward, but I have a couple of questions about it:

What is the best (safest/fastest) method to read html file into byte[]
and store this array into a varbinary field?
How should the data be retrieved from the varbinary field, in order to
save it back as an html file - (must be identical to the original)?

Thank you,
zoro.
 
Mihai N.

What is the best (safest/fastest) method to read html file into byte[]
and store this array into a varbinary field?
How should the data be retrieved from the varbinary field, in order to
save it back as an html file - (must be identical to the original)?

Sorry, but I am not familiar enough with the way .NET works with databases
to tell you exactly how to do this.

Thanks for that Mihai. So do you agree with Eugene that the best way
forward for me is to read the file into byte[] and store in a
varbinary field?

Yes. A binary field is the only way to 100% maintain a file unchanged.
But this is not going to be very useful for any kind of processing.
So, if the main goal is storing the htmls, then this is ok.
If you need searching, indexing, etc., then the db should know that it is text,
and we are back to square one.
 
Göran Andersson

Zoro said:
My task is to read html files from disk and save them onto SQL Server
database field. I have created an nvarchar(max) field to hold them.
The problem is that some characters, particularly html entities, and
French/German special characters are lost and/or replaced by a
question mark.
This is really frustrating. I have tried using StreamReader with ALL
the encodings available and none work correctly.

Have you really tried with ALL available encodings, or just the ones
that are predefined in the Encoding class? I counted to 140 supported
encodings in the documentation, of which 36 are supported on all
systems. You can create an Encoding object for any of those.

Examples:

Encoding windows1252 = Encoding.GetEncoding(1252);
Encoding dosWesternEuropean = Encoding.GetEncoding(850);
Encoding iso8859_1 = Encoding.GetEncoding(28591);

ISO-8859-1 is the default encoding for the MIME type text/html. That is
the content type usually used for web pages, so my first guess would be
that the files are encoded that way.

Each encoding handles
some characters but loses others. I also tried reading into byte
array, but as soon as I converted the array to string the encoding
ruined the text.
Maybe the solution is not to convert to string? but then how will I
save it to the database?

Is there a way to get this html text AS IS - with no encoding and no
changes into the database?

As the file is not text at all, but only binary data, there is no way of
turning the data into text without decoding it.

I could do it in Delphi and many other pre-.NET languages, so there must be a way
in C# too - surely?!

Of course there is. You just have to find out what encoding was used to
create the file, then you can decode it.
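A quick way to see why the choice matters: the same byte value decodes to different characters under different encodings. A minimal sketch of my own (using only encodings available without extra setup; code pages like 1252 and 850 may require a code-pages provider on newer runtimes):

```csharp
using System;
using System.Text;

class EncodingDemo
{
    static void Main()
    {
        byte[] raw = { 0xE9 };  // the single byte for 'é' in ISO-8859-1

        // ISO-8859-1 decodes 0xE9 to 'é' (U+00E9).
        string latin1 = Encoding.GetEncoding(28591).GetString(raw);

        // As UTF-8, a lone 0xE9 byte is an invalid sequence, so the
        // decoder substitutes the replacement character U+FFFD.
        string utf8 = Encoding.UTF8.GetString(raw);

        Console.WriteLine(latin1);        // é
        Console.WriteLine((int)utf8[0]);  // 65533 (U+FFFD)
    }
}
```

This is exactly the "question mark" symptom: decode with the wrong encoding once, and the original character is gone for good.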
 
Zoro

Thanks for the help Goran.
Have you really tried with ALL available encodings, or just the ones
that are predefined in the Encoding class? I counted to 140 supported
encodings in the documentation, of which 36 are supported on all
systems. You can create an Encoding object for any of those.

Examples:

Encoding windows1252 = Encoding.GetEncoding(1252);
Encoding dosWesternEuropean = Encoding.GetEncoding(850);
Encoding iso8859_1 = Encoding.GetEncoding(28591);

You were right - I only tried the predefined - I didn't know about the
others. But I have now, and none seem to result in the same file.
I am running a very simple test:

I run the following code:
Encoding enc = Encoding.GetEncoding(28591);
string html = File.ReadAllText("test.html", enc);
File.WriteAllText("test-1.html", html);
return;

I then compare test.html with test-1.html in the ExamDiff text comparison
utility and I am getting many differences. All I want is to be able to
read text and write it back UNCHANGED.

Unfortunately, there is no encoding meta-data in the html but it has
words like "Ménière" in it and I lose those accented characters
between the files.

You are saying that the file is really binary rather than a text
file. This is true, but my C# is not good enough to do what is required
as a result:
1. reading it as binary
2. storing it as binary in the database and, most importantly,
3. restoring it from the database to an UNCHANGED file on demand.

Can you help me with these tasks?

Thanks a lot,
zoro.
 
Göran Andersson

Zoro said:
Thanks for the help Goran.


You were right - I only tried the predefined - I didn't know about the
others. But I have now, and none seem to result in the same file.
I am running a very simple test:

I run the following code:
Encoding enc = Encoding.GetEncoding(28591);
string html = File.ReadAllText("test.html", enc);
File.WriteAllText("test-1.html", html);
return;

I then compare test.html with test-1.html in the ExamDiff text comparison
utility and I am getting many differences. All I want is to be able to
read text and write it back UNCHANGED.

You are writing it back as UTF-8, as you are not specifying any encoding
in the WriteAllText method call. Can the ExamDiff utility compare files
with different encoding?

Unfortunately, there is no encoding meta-data in the html but it has
words like "Ménière" in it and I lose those accented characters
between the files.

You are saying that the file is really binary rather than a text
file. This is true, but my C# is not good enough to do what is required
as a result:
1. reading it as binary
2. storing it as binary in the database and, most importantly,
3. restoring it from the database to an UNCHANGED file on demand.

Use File.ReadAllBytes to read the file into a byte array. You can save
that in an image field in the database. When you retrieve the data, you
get a byte array that you can save to a file using File.WriteAllBytes.
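The pair Göran names is enough for a byte-identical roundtrip, because no text decoding ever happens. A minimal sketch (mine, not from the thread; the database step is only indicated in comments, the table and column names are illustrative, and on current SQL Server a varbinary(max) column would replace the deprecated image type):

```csharp
using System.IO;

static class BinaryRoundtrip
{
    // Copies a file byte-for-byte; no text encoding is involved anywhere.
    public static void Copy(string source, string destination)
    {
        byte[] raw = File.ReadAllBytes(source);

        // In the real application the byte[] would detour through the
        // database instead, e.g. with a parameterized command
        // (illustrative names only):
        //   INSERT INTO Pages (Html) VALUES (@html)
        //   cmd.Parameters.Add("@html", SqlDbType.VarBinary).Value = raw;
        // and be read back into a byte[] before writing the file.

        File.WriteAllBytes(destination, raw);
    }
}
```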
 
Zoro

Thanks again Goran for your help.
You are writing it back as UTF-8, as you are not specifying any encoding
in the WriteAllText method call.

It looks like I may be able to do it with a string after all.
It looks like the problem with my test was - as you suggested - that
I didn't specify the write encoding. When I do, as long as I use the
same encoding when reading and writing, it works with all 3 encodings you
suggested (but not with any of the built-in ones - e.g. UTF-n!).

I am still not clear on how it's going to work away from the test -
using the database situation, but I am HOPING it would work as
follows:
1. I will use one of these encodings to read the file
2. then store the string in an nvarchar field and add a note informing
users of the encoding I used
3. specify the same encoding when creating the file, after reading the
string from the db.

Do you think this would work?
Thanks again,
zoro.
 
Göran Andersson

Zoro said:
Thanks again Goran for your help.


It looks like I may be able to do it with a string after all.
It looks like the problem with my test was - as you suggested - that
I didn't specify the write encoding. When I do, as long as I use the
same encoding when reading and writing, it works with all 3 encodings you
suggested (but not with any of the built-in ones - e.g. UTF-n!).

Then you have successfully decoded the file into text, as you are not
losing any characters.

If you save the file using utf-8, all the characters will still be
there, as strings are Unicode and utf-8 can store any Unicode
character. The reason your test did not succeed with the Unicode
encodings is that the utility you are using doesn't support
Unicode. You would need the "Pro" version for that.

I am still not clear on how it's going to work away from the test -
using the database situation, but I am HOPING it would work as
follows:
1. I will use one of these encodings to read the file
2. then store the string in an nvarchar field and add a note informing
users of the encoding I used
3. specify the same encoding when creating the file, after reading the
string from the db.

Do you think this would work?
Thanks again,
zoro.

As you successfully decoded the file to a string, you can store that in
a nvarchar/ntext field and you are done. You can also store the encoding
used if you like to recreate the file exactly, but you can create a file
using any encoding that supports the characters in the text.

One advantage with using utf-8 encoding is that it places a BOM (byte
order mark) at the beginning of the file, that can be used to identify
the encoding used. If you use the File.ReadAllText to read a file that
contains a BOM, it will read the file correctly, even if you specify a
completely different encoding.
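Göran's BOM claim can be checked directly: File.ReadAllText looks for a byte order mark before falling back to the encoding you pass in, so a UTF-8 file written with a BOM survives even a deliberately wrong guess. A small self-contained sketch:

```csharp
using System;
using System.IO;
using System.Text;

class BomDemo
{
    static void Main()
    {
        string path = Path.GetTempFileName();
        string text = "Ménière \u2013 test";  // accents plus an en dash

        // new UTF8Encoding(true) prepends the EF BB BF byte order mark.
        File.WriteAllText(path, text, new UTF8Encoding(true));

        // Deliberately "wrong" encoding: the BOM is detected first,
        // so the file is still decoded as UTF-8 and nothing is lost.
        string roundtrip = File.ReadAllText(path, Encoding.GetEncoding(28591));

        Console.WriteLine(roundtrip == text);  // True
    }
}
```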
 
Zoro

As you successfully decoded the file to a string, you can store that in
a nvarchar/ntext field and you are done. You can also store the encoding
used if you like to recreate the file exactly, but you can create a file
using any encoding that supports the characters in the text.

This doesn't seem to work. When I tried reading the files using
the 28591 encoding and writing using 65001 (UTF-8), I seem to have lost the
dashes (-) in the text. I am not sure why this happens, but it looks
like I must save the text with the same encoding I read from the file.
btw, this is not only observed in the utility; I also opened the file
in a browser and the browser interpreted it differently too.

Thanks again,
zoro.
 
Göran Andersson

Zoro said:
This doesn't seem to work. When I tried reading the files using
the 28591 encoding and writing using 65001 (UTF-8), I seem to have lost the
dashes (-) in the text. I am not sure why this happens, but it looks
like I must save the text with the same encoding I read from the file.
btw, this is not only observed in the utility; I also opened the file
in a browser and the browser interpreted it differently too.

Thanks again,
zoro.

That is strange. A dash is a regular ASCII character, which should work
in any encoding.

However, the browser might misinterpret the encoding, have you tried to
open the file in Notepad?
 
Zoro

That is strange. A dash is a regular ASCII character, which should work
in any encoding.

However, the browser might misinterpret the encoding, have you tried to
open the file in Notepad?

I did open them in Notepad. In the original file it's a dash and in
the new file it's a question mark (?) or missing char - depending on
the encoding.
However, I suspect this is because they are en-dashes or em-dashes -
characters 150 and 151 in Windows-1252 rather than plain ASCII.
Thanks,
zoro.
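Zoro's suspicion is close: 150 and 151 are not ASCII (which stops at 127) but the windows-1252 positions for the en dash and em dash, i.e. Unicode U+2013 and U+2014. ISO-8859-1 has no slot for either, so saving with code page 28591 substitutes a fallback character (typically '?') and the original is lost. A quick check of my own:

```csharp
using System;
using System.Text;

class DashDemo
{
    static void Main()
    {
        string dashes = "\u2013\u2014";  // en dash, em dash (1252 bytes 150, 151)

        // ISO-8859-1 cannot represent either character, so the encoder
        // falls back (to '?', or a best-fit '-' on some runtimes) and
        // the original characters cannot be recovered afterwards.
        var latin1 = Encoding.GetEncoding(28591);
        string back = latin1.GetString(latin1.GetBytes(dashes));
        Console.WriteLine(back == dashes);  // False - the dashes were lost

        // UTF-8 can represent them, three bytes each.
        Console.WriteLine(Encoding.UTF8.GetBytes(dashes).Length);  // 6
    }
}
```

This also explains the original symptom: the source files were almost certainly windows-1252, not ISO-8859-1.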
 
Mihai N.

As you successfully decoded the file to a string, you can store that in
a nvarchar/ntext field and you are done. You can also store the encoding
used if you like to recreate the file exactly, but you can create a file
using any encoding that supports the characters in the text.

One problem with loading with one encoding and saving with another is that
if the original encoding was not correct, you get junk.
Imagine loading a Greek file using the Russian codepage (cp1251), then
saving it as utf-8.

The second problem with this is that some files may in fact have encoding
meta-data in the head (or will start having it, at some point).
This means that you end up writing a file as UTF-8 while the meta says
something else (the original encoding).

Other than this, it should be ok.
 
