completely simple question about .NET distributed code


jason

we have developed a .NET class library that defines a family of
reusable business objects. the class library is developed in C#, and
works quite well.

one problem, however, is that as more and more applications are being
developed to consume this class library, we are running into deployment
and version control issues.

i recall a lot of this kind of thing being solved with distributed
application models, such as the EJB specification, which i worked with
in the past. from my reading, it seems like the distributed model for
.NET code is "Web Services". my questions then are threefold:

1) is this assumption correct? that the distributed code model for .NET
is using web services?

2) are .NET objects transportable across the wire through the XML
messaging? for example can a web service return a datatable?

3) since web services use XML as the interprocess message format, are
there any significant issues with message throughput? for example, if
one method in one class of my class library returns hundreds of rows of
data as part of a reporting method, would there be significant overhead
encapsulating that information in the XML message?

please feel free to correct any false assumptions my questions rest
on, or answer the ones that are on track. i would appreciate any
help with this basic distribution question,

jason
 

Nicholas Paldino [.NET/C# MVP]

Jason,

Web services are one of the models for distributed code. Web services
are used typically when the boundary between client and service requires
interoperability (such as .NET to java, or some other technology).

There are other technologies as well, namely, Enterprise Services (COM+)
and remoting. Also on the horizon is the Windows Communication Foundation
(which does everything, quite honestly, and if you have the time, I would
target that for development).

Moving on to your second question, yes, .NET objects are transportable
across web services. XML Serialization is used for that serialization
across the wire, and your types are subject to those limitations.
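For example, under .NET 2.0 a DataTable can come back from an ASMX web service directly, since DataTable implements IXmlSerializable there. A minimal sketch (the service, method, and column names here are made up):

```csharp
using System.Data;
using System.Web.Services;

// Hypothetical ASMX service illustrating a DataTable return value.
public class ReportService : WebService
{
    [WebMethod]
    public DataTable GetSales()
    {
        DataTable table = new DataTable("Sales");
        table.Columns.Add("Region", typeof(string));
        table.Columns.Add("Amount", typeof(decimal));
        table.Rows.Add("East", 1250.00m);
        return table; // serialized to XML by the ASMX plumbing
    }
}
```

The usual XML serialization limitations apply to your own types: a public parameterless constructor, public read/write members, and no circular references.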

Remoting can use SOAP formatting or Binary serialization, both of which
are "true" serialization technologies. WCF is flexible in how the instances
are serialized on the wire.
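On the remoting side, the formatter typically follows the channel: TcpChannel defaults to the binary formatter, HttpChannel to the SOAP formatter. A sketch of a server-side host (the type, URI, and port are invented):

```csharp
using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;

// Hypothetical remotable type; calls execute in the host's app domain.
public class ReportEngine : MarshalByRefObject
{
    public int RowCount() { return 0; } // placeholder business method
}

class RemotingHost
{
    static void Main()
    {
        // Binary serialization over TCP; use HttpChannel for SOAP instead.
        ChannelServices.RegisterChannel(new TcpChannel(8085), false);
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(ReportEngine),
            "ReportEngine.rem",
            WellKnownObjectMode.SingleCall);
        Console.ReadLine(); // keep the host process alive
    }
}
```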

To answer your "second" second question, yes, there is going to be
overhead. In general, XML is a verbose way of describing data (text-based,
self-describing), so as the amount of data grows larger, the time to
serialize that data grows. Binary formats used in remoting, WCF, and COM+
are faster, but ultimately, there is a transformation from an instance to
the format on the wire, and that is always subject to the amount of data in
the instance. It's just that some encodings are faster than others.
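A rough way to see the size difference for yourself is to serialize the same rows both ways and compare the streams (the Row type here is invented for illustration; actual numbers depend on your data):

```csharp
using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;
using System.Xml.Serialization;

[Serializable]
public class Row
{
    public int Id;
    public string Region;
    public decimal Amount;
}

class SizeDemo
{
    static void Main()
    {
        Row[] rows = new Row[500];
        for (int i = 0; i < rows.Length; i++)
        {
            rows[i] = new Row();
            rows[i].Id = i;
            rows[i].Region = "East";
            rows[i].Amount = 100.25m;
        }

        // Same data, two encodings.
        MemoryStream xml = new MemoryStream();
        new XmlSerializer(typeof(Row[])).Serialize(xml, rows);

        MemoryStream bin = new MemoryStream();
        new BinaryFormatter().Serialize(bin, rows);

        // The XML stream is typically several times the size of the binary one.
        Console.WriteLine("XML: {0} bytes, binary: {1} bytes",
            xml.Length, bin.Length);
    }
}
```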

Another thing to think about is the fact that web services are
stateless, while the other options all expose a number of options when it
comes to session state.

Also, do you have other requirements? Who is going to use these? Do
you have to authenticate and authorize access? Are your clients running
.NET or not? Do you require session state?

These questions are just as important as the encoding question. If you
answer them, then we can give you a better idea of what path to head down.

Hope this helps.
 

jason

yes, this helps quite a bit, not only answering the questions i did
have, but helping me formulate other questions i should be asking. i
very much appreciate it.

everything we are writing is on the .NET platform, so it sounds like
the primary advantage of the web services would not be something we
immediately would benefit from. hopefully that means we will be able to
choose one of the other distributed technologies you mention, with the
faster transmission.

but to answer questions you raised that might influence the final
choice:

the objects that the library provides are consumed primarily by several
web applications (one classic ASP about to be converted to .NET, the
other is already .NET), but also by several command line applications
(also .NET) and in the future probably some service applications (also
.NET). there are no conventional clients, only this handful of
applications, all but one of which are .NET now, with the exception to
be converted to .NET soon.

we control state at the application presentation level, and information
access at the data row level (and a little bit at the application
level), so i'm not sure that we would need to authenticate and
authorize access to the libraries themselves. though if there's an easy
way to register subscribing applications to the use of the library,
that would be pretty spiffy. i don't think we would need session
state at the library level either, as it is only used for
atomic data interactions. session state is currently handled by the
consuming applications.

does this information influence the choices? thanks again for the
advice already given,

jason
 

Nicholas Paldino [.NET/C# MVP]

Jason,

Well, they are all conventional clients. They are looking to consume a
service, right? It doesn't matter what they are, as long as they consume,
they are the client.

When I say session state, I mean in the sense of, if you get a reference
to an object, do the calls to one method influence the calls to other
methods on that object? Do you have a need for identity on that instance,
or no? It sounds like the most likely answer to this is no.

Personally, I wouldn't use remoting for this, as it is a technology
which is going to get little in the way of improvements once WCF is
released.

    I would recommend going with Enterprise Services, which will provide an
easier upgrade path to WCF when the time comes.
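A bare-bones serviced component looks something like this (the names are invented; note the interface kept separate from the implementation, which is exactly the habit that eases a later move to WCF):

```csharp
using System.EnterpriseServices;

// Contract defined apart from the implementation.
public interface IOrderProcessor
{
    void Submit(int orderId);
}

[Transaction(TransactionOption.Required)]
public class OrderProcessor : ServicedComponent, IOrderProcessor
{
    [AutoComplete] // votes to commit the COM+ transaction if no exception escapes
    public void Submit(int orderId)
    {
        // business logic here
    }
}
```

The assembly has to be strong-named and registered (regsvcs.exe) before COM+ will host it.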
 

jason

yes, quite right about the client definition.

and regarding session state, i think there actually would be some need
for it. quite frequently we instantiate an object from the class
library, load it with data from the database, manipulate that data in
the object, and then save the data back to the database.

i think i was assuming that the distributed code would basically
transmit the object instance to the consuming client, which could
perform all of the operations on its local instance. now that i think
about it, that might not be at all how the distributed code models
work. let me know if i need to alter my impressions on this regard.

i will begin to research enterprise services, though. thank you very
much!

jason
 

Nicholas Paldino [.NET/C# MVP]

Jason,

You are right, your thoughts on this need to be adjusted somewhat.

In the .NET world, the service lives outside of the app domain of the
current client. This means that you could have a service in another app
domain in the same process, or another process on the same machine, or
another machine altogether. It doesn't matter. The way it works is that
you get a reference to a proxy, and then make calls through that proxy to be
executed in the service app domain.
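The two marshaling behaviors are easy to confuse, so a sketch may help (types invented): deriving from MarshalByRefObject gives the client a proxy and keeps execution in the service app domain, while marking a type [Serializable] sends a copy by value into the client's app domain.

```csharp
using System;

// Marshal by reference: the client holds a proxy; this code runs server-side.
public class Calculator : MarshalByRefObject
{
    public int Add(int a, int b) { return a + b; }
}

// Marshal by value: a serialized copy travels to the client app domain.
[Serializable]
public class Result
{
    public int Value;
}
```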

You could send an object back into your client's app domain, and then
have functions performed on it, but that kind of defeats the purpose. It
makes you ask what benefit you gain from having a service return an
object which is going to do the work in the client space to begin
with.
 

jason

thanks for setting me straight, this explanation makes sense.

as for what the benefit is, the original impetus for this is simply to
have a central runtime version of the reusable code, instead of a copy
of the dll on every machine that wants to use it. so in that sense
getting a copy of the object still has some benefits in a distributed
model. does this make sense, or am i still way off the mark? i'm sort
of imagining a 'baby step' to distributed code. right now we have
copies of dll's everywhere and we instantiate all the objects in the
consuming applications.

it seems like we could benefit by moving those dll's to a single
runtime copy that consuming applications can get an instance from to
interact with just how they do today?

though of course i'm curious to learn more about the benefits of
running the object on the separate object server too. is there some
information i could digest to learn about the advantages /
disadvantages of doing this kind of thing in the future perhaps? once
we baby step off the dll's, it seems like we could plan another
(larger) effort to move all the runtime manipulation as well?

thanks again for all the valuable info,

jason
 

Nicholas Paldino [.NET/C# MVP]

Jason,

Yes, there is a big difference here. What you have is a distribution
issue.

What I would do is strong name your assembly, and maintain the version
numbers. This way, you can have a better idea of who is running what.

You want to use a distributed approach when you have a need to. Perhaps
you have a machine that has access to a certain resource, which others do
not, or perhaps that machine is super powerful, and can perform some
calculations in much quicker time than it would take locally. These are the
reasons you have for remote calls.

Yes, there is an overhead in marshaling the call from one machine to
another, but usually, the savings in the difference between local and remote
computing time justify it, or the resource is only accessible from that
other machine.

And anyways, the way that you are framing your solution, you still have
to move an object back to the local machine, which means that if they are
going to run code on that machine, they have to have a copy of the assembly
anyways (to access the type information), which defeats the whole purpose!
 

jason

ahhh, i see, so distributed code models aren't a solution for
deployment / version control issues at all. well that is good to know,
even if it's a shame.

thanks for all the info. should we ever have an actual distribution
need, i'll have a list of things to look into!

jason
 

Brant Estes

If I might add my two cents to the conversation...

To learn more about distributed business objects, technologies such as
remoting, web services, and Enterprise Services, and how they work in
the real world with those business objects, you might want to pick up a
copy of Rockford Lhotka's Expert C# Business Objects (Apress). What
the book contains is information about these technologies, the
difference between passing business objects to an application server by
value (serialized) or by reference (marshalled). In the book, he
implements a business object framework that will abstract these
technologies and concepts, leaving the business object developer to
focus purely on business logic, and not the mundane transport logic
under the covers.

He's due to release his next version of the book most likely sometime
in March, which really extends the framework a great deal, to take
advantage of the new features in the version 2.0 .NET framework. I'd
highly suggest looking into it when it comes out.
 

j-integra_support

A couple of comments before you write off "Remoting" completely as a
solution...

Implementing a .NET Remoting solution over an approach which uses Web
Services may be beneficial for the following reasons:

1) .NET Remoting is faster and more optimized than Web Services (this
is true today, at least - who knows in the future, with the speed of
Web Services improving in leaps and bounds).

2) .NET Remoting will provide an easier route forward should you decide
to migrate to Indigo/WCF in the future, regardless of whether you
switch to Web Services or Indigo/WCF services.

If you properly design today for a .NET Remoting application then the
migration path forward for Indigo/WCF services will be clearer and have
more support. Here is a page talking about the future of .NET
remoting...

http://blogs.msdn.com/richardt/archive/2004/03/05/84771.aspx

Web Services may not provide such a supported or easy way forward. Then
again, perhaps it will. But don't discount .NET Remoting just yet.


Shane Sauer
J-Integra Interoperability Solutions
http://j-integra.intrinsyc.com/
high performance interop middleware for java, corba, com & .net
 

Nicholas Paldino [.NET/C# MVP]

Yes, remoting is faster and more optimized than web services, but it is
not significantly faster than enterprise services.

    .NET remoting will NOT provide an easier route forward to WCF. A
properly designed Enterprise Service component (meaning, defining interface
separately from implementation, etc, etc) will actually provide the easiest
upgrade path. The same thing could be said of remoting to some degree, but
the emphasis on proper design (separation of interface from implementation)
is not as prevalent as it is with ES.

    Finally, remoting pretty much doesn't have a leg to stand on. As a
distributed object technology, it is not going to be extended, and it
currently has some gaping holes in it (such as authentication and
authorization, transaction support, reliable messaging, etc, etc). From
what I have seen, there are currently no plans to extend Remoting to provide
this, instead, that will be left to WCF.
 
