Mark Rae said:
In what way is it more efficient...?
In just about every possible way, to be honest.
It doesn't create extra intermediate strings for no good reason. It
also doesn't use Encoding.ASCII for no particularly good reason. (Both
will give bad results for non-ASCII data.) I've only just noticed
you're encoding the whole string for *every* iteration!
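To make the non-ASCII problem concrete, here's a quick sketch (the string literal is just an example I picked):

```csharp
using System;
using System.Text;

class AsciiDemo
{
    static void Main()
    {
        string text = "café"; // 'é' is U+00E9 (233), outside ASCII
        // Encoding.ASCII replaces anything above 127 with '?' (63)
        byte[] bytes = Encoding.ASCII.GetBytes(text);
        Console.WriteLine((int)bytes[3]); // 63 - the accent is lost
        // Casting the char directly keeps the real code point
        Console.WriteLine((int)text[3]);  // 233
    }
}
```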
Here's a little benchmark encoding a string of 10,000 characters 100
times with each method:
using System;
using System.Text;

class Test
{
    static void Main()
    {
        string test = new string('a', 10000);
        if (Mark(test) != Austin(test))
        {
            throw new Exception("Can't test - results aren't the same");
        }
        {
            DateTime start = DateTime.Now;
            for (int i = 0; i < 100; i++)
            {
                Mark(test);
            }
            DateTime end = DateTime.Now;
            Console.WriteLine("Mark: {0}", end - start);
        }
        {
            DateTime start = DateTime.Now;
            for (int i = 0; i < 100; i++)
            {
                Austin(test);
            }
            DateTime end = DateTime.Now;
            Console.WriteLine("Austin: {0}", end - start);
        }
    }

    public static string Mark(string pstrEvent)
    {
        string strReturn = "";
        for (int intPos = 0; intPos < pstrEvent.Length; intPos++)
        {
            strReturn += "&#" +
                ((int)Encoding.ASCII.GetBytes(pstrEvent)[intPos]).ToString() + ";";
        }
        return strReturn;
    }

    public static string Austin(string str)
    {
        StringBuilder sb = new StringBuilder();
        foreach (char c in str)
        {
            sb.Append("&#");
            sb.Append((int)c);
            sb.Append(";");
        }
        return sb.ToString();
    }
}
And the results:
Mark: 00:02:34.7968750
Austin: 00:00:00.7031250
For this particular data, Austin's version is about 220 times faster
than yours. For longer strings, that would go up - you'd be creating
more (and longer) intermediate strings, and you'd be encoding a longer
string every iteration.
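As a rough back-of-the-envelope model (my own sketch, not part of the benchmark above): each `+=` copies the whole result built so far, and `GetBytes` re-encodes all n characters on every pass, so the work grows roughly with the square of the input length, while the StringBuilder version grows linearly:

```csharp
using System;

class CostModel
{
    static void Main()
    {
        // For an all-'a' input, each entity is "&#97;" - 5 chars
        const long entityLength = 5;
        long n = 10000;
        // Concatenation: iteration i copies the ~5*i chars built so far,
        // and GetBytes re-encodes all n chars every iteration
        long touched = 0;
        for (long i = 1; i <= n; i++)
        {
            touched += entityLength * i + n;
        }
        Console.WriteLine("Concatenation, chars/bytes touched: {0}", touched);
        // StringBuilder: each output char is written (amortised) once
        Console.WriteLine("StringBuilder, chars touched: {0}", entityLength * n);
    }
}
```

That's hundreds of millions of character copies versus fifty thousand appends, which is why the gap widens so quickly as the string grows.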