Why do you use XML?

jehugaleahsa

Hello, everyone:

I have a simple request to anyone interested. We at work are aware of
the (so-called) importance of XML. We have laboriously attempted to
find practical uses of XML. However, in our environment, working with
XML seems like a complete waste of time. For instance, we pull data
from a database and send it to a database.

Recently I attempted to pull a space separated value file into an XML
format and used XSLT to convert it to a comma separated value file.
Well, XSLT doesn't work naturally for anything but XML. It was a waste
of time and it would have been easier to just send the SSV to a CSV
directly with some minimal coding.
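The direct route really is minimal; a sketch in Python, assuming one record per line and no embedded spaces in fields (ssv_to_csv is a hypothetical helper name):

```python
import csv
import io

def ssv_to_csv(ssv_text):
    """Convert space-separated records to CSV, one record per line.
    Assumes fields themselves contain no spaces (the simple case
    described above); csv.writer handles any quoting needed."""
    out = io.StringIO()
    writer = csv.writer(out)
    for line in ssv_text.splitlines():
        writer.writerow(line.split())
    return out.getvalue()

print(ssv_to_csv("a b c\n1 2 3"))
```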

Since we work primarily with databases and nothing in-between, I am
really starting to wonder if knowing anything about XML is really all
that useful to my life right now. That is why I would like for
everyone reading to tell me your opinions and practical experiences
with XML. Do you use XML as regularly as the hype seems to say you should?
Do you use it in an environment that rarely goes outside of database
interactivity? Do you think XML is all that important to a developer
who can generally write code to perform the same task, potentially
with less effort?

I personally take the position that XML is a rarely needed tool, outside
of the occasional configuration file. And even then, all these tools
like DTDs, XPath, XPointer, XSLT, etc., seem even less used. I feel
like there is this enormous push for XML but see it as only useful in
environments where data is being passed out and about. Am I completely
missing the benefits of XML or is it just my environment?

I can see how it is useful to save data out so it can be used later,
especially when a database is not involved. And XML is easier to read
than some pre-XML formats, but it also requires an enormous amount
more space. Think about an SSV file that contains over 4MB of data -
converting that document to XML usually triples (at least) the amount
of memory needed. And usually, here, software expects a non-XML
format. For me, it is like putting an extra level of code that not
only consumes an enormous amount of runtime, memory and development
time, but it also ends up not having any benefit at all in the end. I
suppose the world would be a better place if we all started writing
our applications to accept and output XML. I really need a way to
determine, from a developer's point of view, whether XML is a good
decision or just something that sounds good.

What do you think?

Thanks,
Travis
 
Jon Skeet [C# MVP]

What do you think?

I think it's a good idea to use the right format for the job, but XML
has significant benefits in many places:

1) Reasonable character encoding support
2) No need to manually escape and unescape data (did your SSV file have
no values with spaces in?)
3) Natural way of representing hierarchies, or multiple tables etc -
SSV/CSV really only describes a single rectangular table easily
4) Reasonably self-descriptive, with element and attribute names
5) Well-defined standard - if you give me an XML file, I may need to
ask you for enough information to *understand* the data, but I don't
need anything more in order to *load* it. Compare that with
finding out which particular brand of SSV/CSV/etc you're using
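Point 2 in concrete terms: XML's escaping is standardized and mechanical, while delimited formats need a per-file agreement on quoting. A small Python illustration (the value is made up):

```python
from xml.sax.saxutils import escape

value = 'Smith, John & Sons <est. 1999>'

# XML has one standard escaping rule, applied the same way everywhere:
print('<name>%s</name>' % escape(value))
# -> <name>Smith, John &amp; Sons &lt;est. 1999&gt;</name>

# A delimited file, by contrast, needs an out-of-band agreement on how
# commas or spaces inside values are quoted - and there are several dialects.
```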
 
Peter Duniho

Well, Jon already made almost all the points I might have, plus some extra
ones. :)

That leaves only this...

[...]
I can see how it useful to save data out so it can be used later,
especially when a database is not involved. And XML is easier to read
than some pre-XML formats, but it also requires an enormous amount
more space. Thinking about a SSV file that contains over 4MB of data -
converting that document to XML usually triples (at least) the amount
of memory needed.

One thing that I find myself doing occasionally is compressing the XML
data. Gzip libraries are common, built into .NET and other platforms, and
they are usually as simple to use as wrapping a regular file stream in a
deflate/inflate stream, as appropriate.

If you put this support into the application, then the data need not have
a large footprint on disk (with all those repeated tags, it compresses
very well :) ). It can even make i/o faster, since even with compression
in the chain of data transfer, the disk i/o is the bottleneck. At the
same time, anyone who wants to get a human-readable version of the data
can easily unzip the file and look at it.
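In .NET this is GZipStream around a file stream; the same idea sketched in Python (the sample document is made up, and the ratio is illustrative, not a measurement):

```python
import gzip

# XML's repeated tags make it an ideal candidate for compression.
xml_bytes = ('<rows>' +
             ''.join('<row id="%d" name="widget"/>' % i for i in range(10000)) +
             '</rows>').encode('utf-8')

compressed = gzip.compress(xml_bytes)
print(len(compressed), '/', len(xml_bytes))  # a small fraction of the original

# And anyone who wants the human-readable form just unzips it:
assert gzip.decompress(compressed) == xml_bytes
```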

Of course, you can do this with any data format, and text formats respond
well to compression. But as you point out, XML can have a fairly large
footprint compared to other data formats, and this goes a long way toward
alleviating that one drawback (one of the few drawbacks of XML, IMHO).

That's mostly an aside though. I think the most important thing is to
remember to use the right tool for the job. If using XML is slowing you
down, overcomplicating things, bloating your data, without any benefit
then of course you should not be using it.

But it is still useful in many other contexts.

Basically, the computer industry has this habit of hyping whatever the
"latest thing" is. Some people are misled into thinking that they should
apply that tool to everything they can, when in fact it's just one more
tool made available for use _when needed_. (Actually, this isn't unique
to the computer industry, but I think the pace of innovation in the
computer industry tends to exacerbate the problem somewhat).

For the most part, computer technologies don't "make the world a better
place". They just help software engineers implement better solutions, or
implement solutions better or faster. Don't make the mistake of putting
all your hopes and dreams on a single technology. :)

Pete
 
Marc Gravell

For raw tabular data, things like delimited files work great;
personally I prefer TSV over CSV just because of the international
issues associated with comma/period.

Xml is absolutely *not* the right tool for every job; but a very
useful tool to have available. For structured / semi-structured data,
xml has many benefits; in addition to what Jon has mentioned, there
are also a powerful set of tools for manipulating data without having
to write any framework code - such as xsd for validation (also
incorporating standard known formats for common data types), and xslt
for transformation (either to a different structure of xml, or to
render as html etc).
all these tools like DTDs, XPath, XPointer, XSLT, etc., seem even
less used
No; rest assured, xsd (over dtd), xpath and xslt are all *heavily*
used tools.

Additionally; for cases where you can't know the structure in advance,
the DB support in SQL Server 2005 etc is a far more appealing option
than doing an "inner platform" antipattern in the database (table
rows, linking to table cells (0:*) to get the name/value pairs, etc) -
especially if it isn't purely tabular; you can store, index and query
elements simply via xpath, such as "/order/lines[1]/@part" (or
whatever)
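As a rough illustration of querying stored XML by path (the document and element names here are hypothetical; SQL Server's xml type and full XPath engines can select the attribute directly with "/@part"):

```python
import xml.etree.ElementTree as ET

order = ET.fromstring(
    '<order id="42">'
    '<lines><line part="A100" qty="2"/><line part="B200" qty="1"/></lines>'
    '</order>')

# ElementTree supports a useful subset of XPath; the attribute value
# is read off the selected element.
first_line = order.find('./lines/line[1]')
print(first_line.get('part'))  # -> A100
```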
Thinking about a SSV file that contains over 4MB of data -
converting that document to XML usually triples (at least) the amount
of memory needed.

Yes, xml can be a little bulky; you can alleviate that a bit by
preferring attributes over elements for simple values, but it also
compresses *very* nicely with things like GZIP. Note that for large
documents you should ideally be using streaming tools (XmlReader etc),
not buffered tools (XmlDocument) to read the data where possible.
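The streaming point matters for the multi-megabyte files mentioned earlier; the Python analogue of XmlReader-style streaming is iterparse (illustrative sketch, document made up):

```python
import io
import xml.etree.ElementTree as ET

big_doc = '<rows>' + '<row v="1"/>' * 100000 + '</rows>'

# Streaming: handle each element as it arrives and clear it, so parsed
# elements don't pile up in memory (the same idea as XmlReader vs.
# a fully buffered XmlDocument in .NET).
total = 0
for event, elem in ET.iterparse(io.BytesIO(big_doc.encode('utf-8'))):
    if elem.tag == 'row':
        total += int(elem.get('v'))
        elem.clear()  # release the element's contents as we go
print(total)  # -> 100000
```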
And usually, here, software expects a non-XML format.

My experience is the opposite; when dealing with externals etc, in
most (not all) cases, the "other side" is happy to talk xml, since it
is easy to define, platform independent, and simple to work with
(contrast to binary formats). The only exceptions I have seen are (as
in your case) delimited tabular files; fine: I'll just use the same
data-processing code, but simply stick the right xslt / etc on the end
to fit the consumer.

Marc
 
Christopher Ireland

Marc,
Very valid point; I think the trick is to know "just enough" about
each available technology (i.e. that it exists, and what problem it
solves), so that when you *see* a similar problem you know what to
consider; and what not to.

I think that would make a great subject for a blog post, or even a book. An
overview of all available technologies and the problems they were designed to
solve.

Is there one already available? Any takers if there isn't?

--
Thank you,

Christopher Ireland
http://shunyata-kharg.doesntexist.com

"Only the curious will learn, only the resolute overcome the obstacles to
learning. The Quest quotient has always excited me more than the
intelligence quotient."
Eugene S. Wilson
 
Marc Gravell

Don't make the mistake of putting all your hopes and dreams on a
single technology. :)

Very valid point; I think the trick is to know "just enough" about
each available technology (i.e. that it exists, and what problem it
solves), so that when you *see* a similar problem you know what to
consider; and what not to.

Marc
 
Lew

Peter said:
Basically, the computer industry has this habit of hyping whatever the
"latest thing" is. Some people are misled into thinking that they
should apply that tool to everything they can, when in fact it's just
one more tool made available for use _when needed_. (Actually, this
isn't unique to the computer industry, but I think the pace of
innovation in the computer industry tends to exacerbate the problem
somewhat).

For the most part, computer technologies don't "make the world a better
place". They just help software engineers implement better solutions,
or implement solutions better or faster. Don't make the mistake of
putting all your hopes and dreams on a single technology. :)

In addition to Peter's and others' excellent points, notice that XML started
making a splash about nine or ten years ago now, and in that time it grew from
an interesting offshoot of SGML to the basis for a boatload of systems and
applications. Consider why the industry embraced XML so wholeheartedly. The
fact that it did indicates that there must be *some* value to it.

Compared to data interchange formats like CSV or semantically-determined
layout (e.g., ID number in columns 11-19), XML is easier to read for humans,
easier to maintain, editable in normal text editors, extensible (hence the "X"
in "XML"), semantically void, and yes, quite verbose.

Consider that programs like Excel that import CSV files easily typically have
an option to skip the first row so that a human can put column labels there.
Human readability is a major asset to a file format. Hence the burgeoning
popularity of XML for configuration files and deployment descriptors.

The cost of maintaining artifacts is often larger than the cost of developing
them. XML is much easier to maintain than other formats.
 
Jon Skeet [C# MVP]

Christopher Ireland said:
I think that would make a great subject for a blog post, or even a book. An
overview of all available technologies and the problem it was designed to
solve.

Is there one already available? Any takers if there isn't?

A book would be out of date as soon as the proofs were sent to the
printers. Might be a good idea for a Wiki though...
 
Christopher Ireland

Jon,
A book would be out of date as soon as the proofs were sent to the
printers. Might be a good idea for a Wiki though...

Yes, it might be. With your depth and breadth of knowledge, maybe the MVP
collective could consider creating such a wiki for the benefit of us all!

--
Thank you,

Christopher Ireland
http://shunyata-kharg.doesntexist.com

"I went to a restaurant that serves 'breakfast at any time.' So I ordered
French Toast during the Renaissance."
Steven Wright
 
Jon Skeet [C# MVP]

Christopher Ireland said:
Yes, it might be. With your depth and breadth of knowledge, maybe the MVP
collective could consider creating such a wiki for the benefit of us all!

Can't say I've got any experience of running a Wiki (beyond a swiki a
long time ago) but I'd be happy to help. Bit busy until the book comes
out, but maybe sometime after that...
 
Christopher Ireland

Jon,
Can't say I've got any experience of running a Wiki (beyond a swiki a
long time ago) but I'd be happy to help. Bit busy until the book comes
out, but maybe sometime after that...

Well, let's see if anybody else is interested...

--
Thank you,

Christopher Ireland
http://shunyata-kharg.doesntexist.com

"If the phone doesn't ring, it's me."
Jimmy Buffett
 
christery

So OP, XML comes from the interesting world of SGML (or GML) somewhere
in the 60s, from Charles Goldfarb - not so new.
I think HTML is likewise derived from (a subset of) that... I would say
that HTML has helped a bit, but not without gif, jpeg and so on... the
right tool for the right work... BUT as they say: "for a man with a
hammer, the world is full of nails" ;) I like that it translates
objects to text, conserving namespaces and all; it might be UTF-8/16/32
and I can normally read the info without the protocol spec.

I mean, delimiting:
00034/ 42/XyZ
or something without delimiters:
242hjh24123j412g3212g2
(where pos 2-5 is something, and so on)

ok with csv/tsv, but try sending values containing , or ;... XML
handles that with a handful of reserved chars under ascii 127 - & <
and > (plus quotes in attributes); all others go by the encoding.

And the XSD sorta works for building the c# object, so it gets a bit
like IDL/WSDL in a SOA app...

//CY
 
Kevin Spencer

I use XML for object serialization quite frequently. Not only is it fairly
easy to do, but the serialized objects can then be used by a variety of
applications without issues, for the reasons Jon outlined, as well as being
able to transport them easily between systems, since XML is agnostic to the
system using it, and is self-describing.
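In .NET that job is typically done by XmlSerializer from the class definition; a hand-rolled Python sketch of the same idea (to_xml and Customer are hypothetical names):

```python
import xml.etree.ElementTree as ET

def to_xml(obj, tag):
    """Serialize an object's fields as child elements - a toy version
    of what XmlSerializer derives automatically from a class."""
    elem = ET.Element(tag)
    for name, value in vars(obj).items():
        child = ET.SubElement(elem, name)
        child.text = str(value)
    return ET.tostring(elem, encoding='unicode')

class Customer:
    def __init__(self, name, city):
        self.name = name
        self.city = city

print(to_xml(Customer('Acme', 'Oslo'), 'customer'))
# -> <customer><name>Acme</name><city>Oslo</city></customer>
```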

--
HTH,

Kevin Spencer
Chicken Salad Surgeon
Microsoft MVP
 
Bob Powell [MVP]

#1 Human readability of the data. This is still of considerable importance
if you have a problem that just won't quit.

#2 Cross platform portability. The data is expressed in the same way on all
machines. Binary formats have issues such as "endianism" to deal with.

#3 There are lots and lots and lots of great tools that understand it.

#4 If the herd is going west, don't try to walk east.

--
--
Bob Powell [MVP]
Visual C#, System.Drawing

Ramuseco Limited .NET consulting
http://www.ramuseco.com

Find great Windows Forms articles in Windows Forms Tips and Tricks
http://www.bobpowell.net/tipstricks.htm

Answer those GDI+ questions with the GDI+ FAQ
http://www.bobpowell.net/faqmain.htm

All new articles provide code in C# and VB.NET.
Subscribe to the RSS feeds provided and never miss a new article.
 
Anthony Jones

Hello, everyone:
What do you think?


Others have well described a balance of XML advantages. I'll describe how I
actually use it.

In the 'business layer' (I don't like the term but it's reasonably well
understood) I pull data from a database as XML; the data access layer is
simple because it only does XML.

Still at the business end on the server, that XML can be manipulated and
expanded to include, for example, human-readable text for lookup IDs etc.
This XML then represents a specific complex object. It can be cached,
reducing the load on the DB to rebuild it.

It's then sent to the client (IE or Firefox). The client can then use XSL to
transform it in various ways (slow-moving data can be placed in XML resources
that are cacheable at the client). AJAX (a term that I hate but again
growing in common understanding) based code on the client can manipulate
this XML, posting portions back to the server to go through the 'business'
layer.

With SQL Server being the main persistent store, the XML can be forwarded
into it and all updates to the various tables that make up the hierarchy of
data representing the object can be performed in a single local transaction.

Unusually for .NET development, I don't bother to serialise and de-serialise
to objects; I just leave it as XML (although access is often wrapped by
classes). Since most of the time data is modified and created by the user
on the client and simply ends up stored in the database, this model works
fine.

This is a very specific example of where there is really little competition
to XML (although JSON is gaining ground). It is really a function of the
limitations present in the front end use of a browser. Given more flexible
technology I would think more carefully about it. Apart from the browser,
the ability to pass hierarchical data into SQL Server and have a single
local transaction perform all data changes is also a bonus.
 
jehugaleahsa

I think it's a good idea to use the right format for the job, but XML
has significant benefits in many places:

1) Reasonable character encoding support

How often is this a real concern? I assume you are talking about
Unicode, UTF8, etc. How hard is it to generate a Unicode file? How
often can't you figure that out?
2) No need to manually escape and unescape data (did your SSV file have
   no values with spaces in?)

SSV files don't care where the spaces are. The position and length of
fields are static. XML doesn't need to limit the size of a field;
but then, neither does CSV. A quick regular expression can break a CSV
file apart in no time (it is slower, though).
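A hedged sketch of the regex approach described here, assuming the common double-quote dialect (Python's csv module handles this out of the box; the pattern is just to show the idea):

```python
import re

# A field is either a quoted run (with "" as an escaped quote)
# or a run of non-comma characters.
FIELD = re.compile(r'(?:^|,)("(?:[^"]|"")*"|[^,]*)')

def parse_csv_line(line):
    fields = FIELD.findall(line)
    # Strip surrounding quotes and unescape doubled quotes.
    return [f[1:-1].replace('""', '"') if f.startswith('"') else f
            for f in fields]

print(parse_csv_line('name,"Smith, John",42'))
# -> ['name', 'Smith, John', '42']
```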
3) Natural way of representing hierarchies, or multiple tables etc -
   SSV/CSV really only describes a single rectangular table easily

I agree this is a good use of XML. However, a long time ago, you
handled hierarchies with a relational schema and had the entities in
separate files, searching on index (which was usually the line number),
which was typically fast and easy to code to.
4) Reasonably self-descriptive, with element and attribute names

I agree about this too. However, again, most other formats support a
header line that has the name of the column. Usually that is all that
is needed. Using my example above, attributes translate to values and
child elements to relations. It can be equally represented.
5) Well-defined standard - if you give me an XML file, I need to
   ask you enough information to *understand* the data, but I don't
   need more information in order to *load* it. Compare that with
   finding out which particular brand of SSV/CSV/etc you're using

This is another good point. Again, even XML formats are typically
application specific, not global such as XHTML. To some degree the
developer must always know what he is working with. In my opinion, it
takes as long to understand an SSV as it does to understand an XML. A DTD
surely helps, and having the nesting directly in the file is also
beneficial.


But in what situations have you found XML useful? I am mostly concerned
with knowing a good time to use XML. Like I said, I hear the hype all
the time, but have rarely found a practical application of it.
 
jehugaleahsa

For raw tabular data, things like delimited files work great;
personlly I prefer TSV over CSV just because of the international
issus associated with comma/period.

Xml is absolutely *not* the right tool for every job; but a very
useful tool to have available. For structured / semi-structured data,
xml has many benefits; in addition to what Jon has mentioned, there
are also a powerful set of tools for manipulating data without having
to write any framework code - such as xsd for validation (also
incorporating standard known formats for common data types), and xslt
for transformation (either to a different structure of xml, or to
render as html etc).
all these tools like DTDs, XPath, XPointer, XSLT, etc., seem even
less used

No; rest assured, xsd (over dtd), xpath and xslt are all *heavily*
used tools.

Additionally; for cases where you can't know the structure in advance,
the DB support in SQL Server 2005 etc is a far more appealing option
than doing an "inner platform" antipattern in the database (table
rows, linking to table cells (0:*) to get the name/value pairs, etc) -
especially if it isn't purely tabular; you can store, index and query
elements simply via xpath, such as "/order/lines[1]/@part" (or
whatever)
Thinking about a SSV file that contains over 4MB of data -
converting that document to XML usually triples (at least) the amount
of memory needed.

Yes, xml can be a little bulky; you can alleviate that a bit by
preferring attributes over elements for simple values, but it also
compresses *very* nicely with things like GZIP. Note that for large
documents you should ideally be using streaming tools (XmlReader etc),
not buffered tools (XmlDocument) to read the data where possible.
And usually, here, software expects a non-XML format.

My experience is the opposite; when dealing with externals etc, in
most (not all) cases, the "other side" is happy to talk xml, since it
is easy to define, platform independent, and simply to work bith
(contrast to binary formats). The only exceptions I have seen are (as
in your case) delimited tabular files; fine: I'll just use the same
data-processing code, but simply stick the right xslt / etc on the end
to fit the consumer.

See, I had an enormous amount of difficulty converting XML to CSV. The
problem was that XSLT was inserting newlines and spaces. I had to make
my XSLT extremely ugly to meet my requirements. I gave up after seeing
the right data pumped out but being unable to get it to look right. It
seemed to be an enormous waste of time, self-training and code.
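For the record, the stray newlines and spaces described here usually come from XSLT's default XML output mode and the built-in templates copying whitespace-only text nodes; a stylesheet fragment along these lines (element names hypothetical) is the usual cure:

```xml
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Emit plain text, not XML, and drop insignificant whitespace. -->
  <xsl:output method="text"/>
  <xsl:strip-space elements="*"/>

  <xsl:template match="row">
    <xsl:for-each select="*">
      <xsl:value-of select="."/>
      <xsl:if test="position() != last()">,</xsl:if>
    </xsl:for-each>
    <xsl:text>&#10;</xsl:text>
  </xsl:template>
</xsl:stylesheet>
```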
 
Jon Skeet [C# MVP]

How often is this a real concern? I assume you are talking about
Unicode, UTF8, etc. How hard is it to generate a Unicode file? How
often can't you figure that out?

Often. You'd be amazed at how often people guess at encodings and get
it wrong.
SSV files don't care where the spaces are. The position and length of
fields are static.

That doesn't sound like a space-separated value file. That sounds like
a fixed record file to me. Different kettle of fish, IMO.
XML doesn't need to limit the size of a field,
however, neither does CSV. A quick regular expression can break a CSV
file apart in no time (it is slower though).

Except you've got to know exactly which flavour of CSV is being used.
I agree this is a good use of XML. However, a long time ago, you
handled heirarchies with a relational schema and had the entities in
separate files, searching on index(which was the line number usually),
which was typically fast and easy to code to.

Searching on line numbers only works with fixed-length records (which
has limitations) *and* if you're using a fixed-width encoding - so
UCS-2 is okay, but UTF-8 isn't, for example. It also means having to
manage more files.
I agree about this too. However, again, most other formats support a
header line that has the name of the column. Usually that is all that
is needed. Using my example above, attributes translate to values and
child elements to relations. It can be equally represented.

When you say "usually" - out of what range of uses? Most of my data
sources aren't represented in just a single table form - so I've
either got to have multiple files, or use a richer format such as XML.
This is another good point. Again, even XML formats are typically
application specific, not global such as XHTML. To some degree the
developer must always know what he is working with. In my opinion, it
takes as long to understand a SSV as it does to understand an XML. DTD
surely helps and having the nesting direclty in the file is also
beneficial.

There's a difference between understanding the syntax and
understanding the semantics. If it's an application-specific file,
you'll have to have some description of the semantics whatever format
you use - but with XML you don't need to have anything extra to
describe the syntax (beyond the option of using a DTD, which the
developer doesn't need to understand - they just give to the XML
parser).
But what experiences have you found XML useful in. I mostly concerned
with knowing a good time to use XML. Like I said, I hear the hype all
the time, but have rarely found a practical application of it.

I use XML all the time, in many, many situations. Persistence,
serialization, project files for Visual Studio, configuration files
for applications, RSS feeds... whereas I can't remember the last time
I had to write code to deal with a fixed size record file.

Jon
 
jehugaleahsa

Well, Jon already made almost all the points I might have, plus some extra 
ones.  :)

That leaves only this...

[...]
I can see how it useful to save data out so it can be used later,
especially when a database is not involved. And XML is easier to read
than some pre-XML formats, but it also requires an enormous amount
more space. Thinking about a SSV file that contains over 4MB of data -
converting that document to XML usually triples (at least) the amount
of memory needed.

One thing that I find myself doing occasionally is compressing the XML  
data.  Gzip libraries are common, built into .NET and other platforms, and  
they are usually as simple to use as wrapping a regular file stream in a  
deflate/inflate stream, as appropriate.

If you put this support into the application, then the data need not have  
a large footprint on disk (with all those repeated tags, it compresses  
very well :) ).  It can even make i/o faster, since even with compression  
in the chain of data transfer, the disk i/o is the bottleneck.  At the  
same time, anyone who wants to get a human-readable version of the data  
can easily unzip the file and look at it.

Of course, you can do this with any data format, and text formats respond  
well to compression.  But as you point out, XML can have a fairly large  
footprint compared to other data formats, and this goes a long way toward  
alleviating that one drawback (one of the few drawbacks of XML, IMHO).

That's mostly an aside though.  I think the most important thing is to  
remember to use the right tool for the job.  If using XML is slowing you 
down, overcomplicating things, bloating your data, without any benefit  
then of course you should not be using it.

But it is still useful in many other contexts.

Basically, the computer industry has this habit of hyping whatever the  
"latest thing" is.  Some people are misled into thinking that they should  
apply that tool to everything they can, when in fact it's just one more  
tool made available for use _when needed_.  (Actually, this isn't unique 
to the computer industry, but I think the pace of innovation in the  
computer industry tends to exacerbate the problem somewhat).

For the most part, computer technologies don't "make the world a better  
place".  They just help software engineers implement better solutions, or  
implement solutions better or faster.  Don't make the mistake of putting 
all your hopes and dreams on a single technology.  :)

Pete

That is how I am feeling about it. I feel like I did go toward XML
because of all the hype. I mean, I bought O'Reilly's XML Nutshell
book, which is very good. However, in the last 2 years of owning it, I
haven't used it once. I felt that my project was a perfect utilization
(finally!) and realized in the end it wasn't that great after all. I
mean, it would have been nice since the output format is liable to
change every year or so. I would have preferred, especially since the
turnaround time is usually only a month, to be able to change an XSLT
and be good to go. But sometimes writing some code and recompiling is
just easier. I don't know.
 
jehugaleahsa

Often. You'd be amazed at how often people guess at encodings and get
it wrong.

I suppose. I did recently mistake an ASCII file for Unicode and
had all sorts of trouble. Thank goodness StreamReader is smarter than
I am.
That doesn't sound like a space-separated value file. That sounds like
a fixed record file to me. Different kettle of fish, IMO.

Indeed you are right. The standard method for handling embedded
spaces (or even commas in CSVs) is to use quotes or some other
character. Again, Regex handles this very well.
Except you've got to know exactly which flavour of CSV is being used.

It is just the same as not knowing which XML format you are using. XML
can represent the same data in many (unlimited?) different ways. If code
is written correctly, the amount of code aware of the data source is
minimal. Unless you are working with an extremely developer-
unfriendly format, I say it is just as easy to pull data out no matter
what. The only benefit to XML is that someone already did most of the
parsing work for you, via XPath for example. However, writing a custom
parser can be as simple as a regex, so why force your input
into XML when SSV or CSV is readily available?
Searching on line numbers only works with fixed length records (which
has limitations) *and* if you're using a fixed with encoding - so
UCS-2 is okay, but UTF-8 isn't, for example. It also means having to
manage more files.

They don't "have" to be fixed-length records. There just needs to be
an agreement that there is one record per line. The additional benefit
is that you aren't forced to send a 20MB file if all your user wants
are the child records, which is a common scenario.

I'm not arguing with you. I am just pointing out that it is a matter
of the environment, and so far what you have said doesn't entice me to
rely on XML. I still don't see how XML can benefit me *here*.
When you say "usually" - out of what range of uses? Most of my data
sources aren't represented in just a single table form - so I've
either got to have multiple files, or use a richer format such as XML.

In my environment, we work primarily with CSV and SSV. Relationships
are typically formed on the database or are fixed. Again, XML repeats
the data definition every time. I would call this the biggest waste
generator when using XML. It's just not that practical in my
environment.
There's a difference between understanding the syntax and
understanding the semantics. If it's an application-specific file,
you'll have to have some description of the semantics whatever format
you use - but with XML you don't need to have anything extra to
describe the syntax (beyond the option of using a DTD, which the
developer doesn't need to understand - they just give to the XML
parser).

You make it sound like your parser "knows" what to do with what it
extracts. What does it extract, where does it go? How does it
magically make your code know what to do? At some point you the
developer must know what the data *means*. You need to know where to
find the data. You need to know where to put it. That requires knowing
the syntax *and* the semantics. Unless you have found a way to
overcome this need for developer intervention? Did ya? 'Cause that
would be something I would like to have in my possession!
I use XML all the time, in many, many situations. Persistence,
serialization, project files for Visual Studio, configuration files
for applications, RSS feeds... whereas I can't remember the last time
I had to write code to deal with a fixed size record file.

Jon

But if you think about it, all of these applications of XML are due to
someone deciding to jump on the XML train. They didn't have to use
XML. What about the "in-between"? Anyone can send a format over a
network. Does XML provide a benefit that couldn't have been achieved
otherwise? Or do these applications use XML because it makes their
code seem more up-to-date? Or do they use XML because of the pre-built
tools available? I would venture that someone could create a CSV
standard document format and the tools to work with it that could
rival any XML-based platform. Create the tools and you will have
people who use your format. Especially if you can convince them that
it is "the way" to do data transfer.

When was the last time you had to create your own XML format and use
it in your own code? Are we just working with tools that make our
lives easier, and it is just coincidental that they used XML? I can't
say. I feel as though there is some reason for all the hype. I just
wish I knew how to utilize it.
 
