SQL Server Data Access - Conversion to Unicode?

epigram

If we have a SQL Server 2000 database that does not use unicode data types
(nchar, nvarchar, ntext), but instead uses character data types (char,
varchar, text), does ADO.NET go through some conversion process when it, for
example, retrieves these values to store in a .NET string? In particular,
is there a significant performance hit that would NOT take place if we were
using the unicode data types instead? My assumption here is that no
conversion would have to take place to go from SQL Server unicode data
types to .NET unicode types (e.g. the string class). I'm concerned about
the performance implications of this type of conversion happening
constantly.

Thanks.
 
Pablo Castro [MS]

ADO.NET will convert all string data to unicode before returning it to your
application (we simply use the Encoding class from the .NET Framework). So
there is some performance hit there. I wouldn't be too concerned about this
unless you have an extreme case of it. The performance hit is in the client,
not in the server, and it's a CPU-bound process, so today's fast CPUs do it
in a snap.
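For illustration, this is roughly where that conversion happens in client
code. A minimal sketch, assuming a varchar column named CustomerName in a
table named Customers (the connection string and names are made up):

    // Sketch only - connection string, table, and column names are
    // hypothetical; substitute your own.
    using System;
    using System.Data.SqlClient;

    class VarcharReadDemo
    {
        static void Main()
        {
            using (SqlConnection conn = new SqlConnection(
                "Server=(local);Database=Sample;Integrated Security=SSPI"))
            {
                conn.Open();
                SqlCommand cmd = new SqlCommand(
                    "SELECT CustomerName FROM Customers", conn); // varchar column
                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // GetString always returns a .NET string (UTF-16).
                        // For a varchar column the provider converts from the
                        // column's code page on the client; for an nvarchar
                        // column the bytes are already Unicode.
                        string name = reader.GetString(0);
                        Console.WriteLine(name);
                    }
                }
            }
        }
    }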

--
Pablo Castro
Program Manager - ADO.NET Team
Microsoft Corp.

This posting is provided "AS IS" with no warranties, and confers no rights.
 
epigram

Makes sense. We're actually about to begin designing our database, and we
were asking ourselves whether we should use unicode data types for all
character-based columns, or only where they might truly be needed. The
debate is between a larger database (i.e. using unicode data types
everywhere) and possible conversion/CPU overhead (i.e. using unicode data
types only where required). It sounds like you might lean toward the
latter, correct?
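For what it's worth, the size half of the trade-off is easy to see with
DATALENGTH. A minimal sketch (the connection string and the literal value
are just examples):

    // Sketch: compare the bytes used by the same text stored as varchar
    // vs. nvarchar. The connection string is hypothetical.
    using System;
    using System.Data.SqlClient;

    class DataLengthDemo
    {
        static void Main()
        {
            using (SqlConnection conn = new SqlConnection(
                "Server=(local);Database=Sample;Integrated Security=SSPI"))
            {
                conn.Open();
                SqlCommand cmd = new SqlCommand(
                    "SELECT DATALENGTH('hello')  AS varchar_bytes, " +
                    "       DATALENGTH(N'hello') AS nvarchar_bytes", conn);
                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    reader.Read();
                    // Typically prints "5 vs 10": nvarchar uses 2 bytes per
                    // character, varchar uses 1 on single-byte code pages.
                    Console.WriteLine("{0} vs {1}",
                        reader.GetInt32(0), reader.GetInt32(1));
                }
            }
        }
    }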

Thanks!
 
IPGrunt

I don't know. I'm in the habit of using N'...' literals (though I still
get confused for a second or two looking at db statistics until I realize
I'm storing double-byte characters).

I figure whatever happens in the view layer, the backend will support it.
Porting the application to Chinese? No problem!
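
To make that concrete, here is a minimal sketch of writing non-Latin text
through ADO.NET, assuming a made-up nvarchar(50) column named ProductName
(the table, column, and connection string are hypothetical):

    // Sketch only - table, column, and connection string are made up.
    // An nvarchar column plus an NVarChar parameter (the ADO.NET counterpart
    // of an N'...' literal) round-trips non-Latin text without code-page loss.
    using System;
    using System.Data;
    using System.Data.SqlClient;

    class UnicodeInsertDemo
    {
        static void Main()
        {
            using (SqlConnection conn = new SqlConnection(
                "Server=(local);Database=Sample;Integrated Security=SSPI"))
            {
                conn.Open();
                SqlCommand cmd = new SqlCommand(
                    "INSERT INTO Products (ProductName) VALUES (@name)", conn);
                cmd.Parameters.Add("@name", SqlDbType.NVarChar, 50).Value = "中文";
                cmd.ExecuteNonQuery();
            }
        }
    }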

My tables aren't very large, though. If I were storing wide tables,
I'd think twice about each atom of storage.

So the answer is, as always, it depends.

-- ipgrunt
 
