IoC


Frank Rizzo

I don't mean to start a flame war (this topic always does), but I've implemented a
small project using IoC and I am beginning to question whether it really
adds flexibility to the application or just adds another useless layer of
indirection.

Case in point: there is a class called OrderProcessor that takes IOrder
and IOrderRules in the constructor. Of course, both IOrder and IOrderRules
are defined in the IoC container. Now, at this point and forever going
forward, there is only one implementation of IOrder (StandardOrder) and
exactly one implementation of IOrderRules (CompanyRules). That's all there
will ever be; otherwise the entire IoC setup breaks apart.

Sure, OrderProcessor does not have a dependency on the hardwired
StandardOrder and CompanyRules classes, but all the same, it has a
dependency on IOrder and IOrderRules. I simply don't see what this level of
indirection gives me other than more complicated debugging, having to
explain the whole IoC paradigm to newbies (and why it's useful), more files
in the solution, and so on.

If anyone has experience implementing a successful IoC based application,
please enlighten me as to what was gained (or not) by adding IoC into the
app.

Regards
 

Peter Morris

Frank said:
Case in point: there is a class called OrderProcessor that takes IOrder
and IOrderRules in the constructor. Of course, both IOrder and IOrderRules
are defined in the IoC container. Now, at this point and forever going
forward, there is only one implementation of IOrder (StandardOrder) and
exactly one implementation of IOrderRules (CompanyRules). That's all there
will ever be; otherwise the entire IoC setup breaks apart.

It's not the lack of purpose in IoC that's the problem here, it's your use
of it.

What you should be implementing here is a "service", though it is quite
possible that what you actually need here is simply a method
StandardOrder.Process(). You should really only implement a service if the
operation doesn't work on a single instance or on a logical aggregate of
instances.
For example

public interface IOrderProcessingService
{
    void ProcessOrders(IEnumerable<StandardOrder> orders);
}

but even then this would just iterate the enumeration and call
StandardOrder.Process() on each item.


A better example would be something like a Repository.

public interface IStandardOrderRepository
{
    StandardOrder GetByNumber(int orderNumber);
}

Now, if you have some other code somewhere which retrieves an order from the
DB via the repository, you would pass in a mocked repository instead of the
real one, saving you from having to have data in your DB, or any DB
at all for that matter.

var testOrder = {some method to create your test order};
var mockOrderRepository = MockRepository.GenerateMock<IStandardOrderRepository>();
mockOrderRepository.Expect(x => x.GetByNumber(1)).Return(testOrder);

var serviceToTest = new SomeOtherServiceThatUsesOrderRepository(mockOrderRepository);
serviceToTest.TestSomething();

// Do some asserts here.


As you can see you are now testing the "serviceToTest" in isolation, giving
it predictable results to work with.


In addition, I don't see any reason why you should add interfaces to those
domain objects. Sure it means you can mock them, but that just seems like a
*lot* of work to me. What I do is have a TestObjectFactory class in my test
project which creates test data objects and I reuse those.

If you write your tests first you will see where it is appropriate to use
IoC.
 

Peter Morris

Something I forgot to mention.

public class OrderProcessor
{
    public OrderProcessor(StandardOrder order, CompanyRules rules)
    {
        ...
    }
}

I'd be more inclined to define this as

public class OrderProcessor : IOrderProcessor
{
    public void Process(StandardOrder order)
    {
        ...
    }
}

The reason is that if you use a dependency injection container such as Unity
you can create the service with a ContainerControlledLifetimeManager and
have the same instance capable of processing multiple requests on different
threads because it has no state. Your current implementation would require
a transient object (one instance per request).
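Peter's lifetime point can be sketched with Unity's registration API. This is only an illustrative fragment: UnityContainer, RegisterType and ContainerControlledLifetimeManager come from Microsoft's Unity library, and IOrderProcessor/OrderProcessor are the types from the post above.

```csharp
// Register the stateless service as a container-controlled (singleton)
// instance. Because OrderProcessor keeps no per-request state, one
// instance can safely serve many requests on different threads.
var container = new UnityContainer();
container.RegisterType<IOrderProcessor, OrderProcessor>(
    new ContainerControlledLifetimeManager());

// Every Resolve call now returns the same shared instance.
IOrderProcessor processor = container.Resolve<IOrderProcessor>();
```

Had OrderProcessor instead taken StandardOrder in its constructor, it would need a transient registration: one instance per request.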

You'd then only have other services injected into the constructor, for
example...


public class OrderProcessor : IOrderProcessor
{
    readonly ICustomerRepository CustomerRepository;

    public OrderProcessor(ICustomerRepository customerRepository)
    {
        // Check customerRepository != null
        CustomerRepository = customerRepository;
    }

    public void Process(StandardOrder order)
    {
        // Check order != null
        Customer customer = CustomerRepository.GetByID(order.CustomerID);
        ...
    }
}

Here you can see that when you test your OrderProcessor you would mock the
CustomerRepository that you pass in to it, and then call Process passing the
instance you would like to process.
 

Peter Morris

What's IoC?

Inversion of Control. There is also Dependency Injection you might find
interesting.
 

Frans Bouma [C# MVP]

Frank said:
Don't mean to start a flame as it always does, however, I've implemented a
small project using IoC and I am beginning to question whether it really
adds flexibility to the application or just adds another useless layer of
indirection.

Case in point: there is a class called OrderProcessor that takes IOrder
and IOrderRules in the constructor. Of course, both IOrder and IOrderRules
are defined in the IoC container. Now, at this point and forever going
forward, there is only one implementation of IOrder (StandardOrder) and
exactly one implementation of IOrderRules (CompanyRules). That's all there
will ever be; otherwise the entire IoC setup breaks apart.

I then don't really understand why you use an interface. I mean: if you
pass in an interface and you program against the interface, wouldn't passing
in another implementation of the interface through DI work without breaking things?
Sure, OrderProcessor does not have a dependency on the hardwired
StandardOrder and CompanyRules classes, but, all the same, it has a
dependency on IOrder and IOrderRules. I simply don't see what this level of
indirection gives me other than more complicated debugging, having to
explain to newbies the whole IoC paradigm (and why it's useful), more files
in the solution, etc...

If anyone has experience implementing a successful IoC based application,
please enlighten me as to what was gained (or not) by adding IoC into the
app.

IoC is a very formal concept, and dependency injection is used to
implement it. With that in mind, you can use these two concepts (which
are very tightly coupled, pun intended) to separate the decision WHICH
implementation of a given type is used inside a given class C.

As an example I'll give you auditing, authorization and validation,
three concepts which are implemented through IoC with dependency
injection in our O/R mapper LLBLGen Pro.

The entity classes don't know anything about the logic of auditing,
authorization and validation; they just call out to an auditor,
authorizer or validator if these objects are available inside the entity
object. This also means that an entity class doesn't have code inside
itself which does things like:

this.ValidatorToUse = new CustomerValidator();

Instead, the validator is injected through dependency injection. Which
validator class instance is injected is the concern of the IoC system.
This allows for separate development of these concerns and also separate
maintenance of them. The bonus is that, because DI works with discovery
and configs, it is possible to have a finished system which doesn't use
auditing; you then place an assembly with auditors in the bin folder,
set up a simple config setting in the application's config file, and
after the app restarts it has auditing functionality inside the entities.
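The injection Frans describes might be sketched like this. The names below (IValidator, CustomerEntity) are hypothetical illustrations, not LLBLGen Pro's actual API.

```csharp
// Hypothetical sketch of a validator injected into an entity.
public interface IValidator
{
    void Validate(object entity);
}

public class CustomerEntity
{
    // Set by the DI system from config; the entity never constructs
    // a concrete validator itself.
    public IValidator ValidatorToUse { get; set; }

    public void Save()
    {
        // Only validate when a validator was actually injected; with no
        // validator assembly deployed, the entity still works.
        if (ValidatorToUse != null)
        {
            ValidatorToUse.Validate(this);
        }
        // ... persist the entity ...
    }
}
```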

This is useful because there can be multiple implementations (and maybe
no implementation at all) of a given interface. In YOUR situation,
it's different: you have a dependency between class A and class B and that
dependency is always there. In that case, don't use an interface and
don't use IoC, as it's overkill and makes things complicated. In other words:
don't use tools just because some guy on a blog said you should use them
otherwise your code is crap; use tools because they solve a problem. If
you don't have problem ABC, don't use a fix for ABC. Interfaces are
for multiple type inheritance; IoC is for defining which dependencies
there are outside the depending class. If you already know there is just
one dependency, it's pure overkill to use IoC. Yes, I know the TDD / Agile
mob will now lynch me and say that IoC and interfaces make TDD much
easier, but that doesn't make it less overkill and less overhead, and
overkill/overhead always leads to a less maintainable system.

FB

--
------------------------------------------------------------------------
Lead developer of LLBLGen Pro, the productive O/R mapper for .NET
LLBLGen Pro website: http://www.llblgen.com
My .NET blog: http://weblogs.asp.net/fbouma
Microsoft MVP (C#)
------------------------------------------------------------------------
 

Frans Bouma [C# MVP]

Peter said:
It's not the lack of purpose in IoC that's the problem here, it's your
use of it.

What you should be implementing here is a "service", whereas it is quite
possible that what you actually need here is a method
StandardOrder.Process(). You should really only implement a service if
it doesn't work on a single instance or on a logical aggregate of
instances. For example

public interface IOrderProcessingService
{
    void ProcessOrders(IEnumerable<StandardOrder> orders);
}

but even then this would just iterate the enumeration and call
StandardOrder.Process();

That's not always possible. Process on StandardOrder can only work if
StandardOrder doesn't need to access any objects other than its own. That
might not be the case, and therefore a processor class is required which
has the ability to consult other objects/classes in the process. Calling
deeper into a hierarchy has advantages, but only if decisions deeper in
the hierarchy are always possible to make. If you need information
deep inside the hierarchy which isn't available at that point, you can't
do the processing there.

What you propose doesn't make things easier to read nor easier to
maintain, as it doesn't solve a complexity problem:
OrderProcessor.Process is also a single point where processing takes
place with the advantage that it might consult other objects not inside
StandardOrder / CompanyRules: your setup for example requires that the
rules are part of StandardOrder, which is IMHO silly.
A better example would be something like a Repository.

public interface IStandardOrderRepository
{
    StandardOrder GetByNumber(int orderNumber);
}

Now if you have some other code somewhere which retrieves an order from
the DB via the repository you would pass in a mocked repository instead
of the real one, saving you from having to have data in your DB,
or any DB at all for that matter.

but increasing the level of complexity in the application. Maintenance,
it's so utterly important and so easily overlooked. The vast majority of
software engineers in the world are doing maintenance and the amount of
maintenance work only increases. Making things more complex is not doing
anyone a favor. If test data is a concern, you can easily solve that in
a database.
var testOrder = {some method to create your test order};
var mockOrderRepository = MockRepository.GenerateMock<IStandardOrderRepository>();
mockOrderRepository.Expect(x => x.GetByNumber(1)).Return(testOrder);

var serviceToTest = new SomeOtherServiceThatUsesOrderRepository(mockOrderRepository);
serviceToTest.TestSomething();

// Do some asserts here.


As you can see you are now testing the "serviceToTest" in isolation,
giving it predictable results to work with.

No, you test your service for predictable results in the test
environment. In production there are many more variables at play:
multiple threads with users doing transactions on the db, locking rows,
blocking your transaction. Will it succeed? You won't know.

FB

 

Peter Morris

You said:
That's not always possible. Process on StandardOrder can only work if
StandardOrder doesn't need to access any other objects but its own.

Which to me looks like exactly the same thing. An aggregate should be able
to access other instances within the same logical group, so it can easily have
access to everything it needs. If the functionality doesn't "belong" to a
single object then by all means create a service, but if it makes sense to
have it on an object, don't make a service. I wouldn't make a
TelevisionSwitcherOnnerOfferService, I'd just implement something on the TV
:)

your setup for example requires that the rules are part of StandardOrder,
which is IMHO silly.

How on Earth can you possibly know that? Neither of us have any idea what
this person's domain model looks like. The rules might be aggregated parts
of an order, they might belong to the customer, they might be global and
shared by multiple customers. To say my suggestion is silly without having
any knowledge of the domain is a big leap. I am merely posing ideas that
the poster may consider along with their full knowledge of the problem in
order to see if any of them fit.



but increasing the level of complexity in the application.

You think that whenever you have code that needs to get an order by its ID
the following code complicates it?

StandardOrder order = StandardOrderRepository.GetByNumber(number);

Doesn't look complicated to me.

If test data is a concern, you can easily solve that in a database.

That's integration testing, not unit testing. If you have a method that
works out the total value of a purchase order why would you want to ensure
the database works? When you test something you should isolate it as much
as possible, that way when 1 thing fails you get failed tests for that 1
thing only. If you create tests that read test data from a DB then you not
only test that the current method works but also the persistence layer and
the connection to the DB. If you have no connection to the DB most of your
tests will fail, rather than only the tests which test for DB connectivity.
When you test everything together you are testing if it all works or it all
fails, that's not "unit" testing.

no, you test your service for predictable results in the test environment.
In production, there are many more variables at play: multiple threads
with users doing transactions on the db, locking rows, blocking your
transaction, will it succeed? you won't know.

If you want to check that X works with multiple threads then write a test that
runs multiple threads to test it. Unit testing is easier with Inversion of
Control, and Inversion of Control is easy to set up with Dependency Injection.
Just because you have an army of unit tests doesn't mean that you shouldn't
run your app and test it too; it just means that you can modify your source
code and introduce fewer new bugs.
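As a rough illustration of "write a test that runs multiple threads": the SomeService counter below is a hypothetical stand-in for whatever X you want to exercise, and the pass/fail check is plain C# rather than a real test framework.

```csharp
using System;
using System.Threading;

class SomeService
{
    int count;
    public void Increment() { Interlocked.Increment(ref count); }
    public int Count { get { return count; } }
}

class MultiThreadedTest
{
    static void Main()
    {
        var service = new SomeService();
        var threads = new Thread[10];

        // Hammer the service from 10 threads at once.
        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(() =>
            {
                for (int j = 0; j < 1000; j++)
                    service.Increment();
            });
            threads[i].Start();
        }
        foreach (var t in threads)
            t.Join();

        // 10 threads x 1000 increments should give exactly 10000
        // if, and only if, the increment is thread-safe.
        Console.WriteLine(service.Count == 10000 ? "PASS" : "FAIL");
    }
}
```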
 

Peter Morris

As an example I'll give you auditing, authorization and validation, three
concepts which are implemented through IoC with dependency injection in
our O/R mapper LLBLGen Pro.

And when you wrote your unit tests didn't you mock those interfaces and then
pass the mocked interfaces into the relevant points to ensure that the
interfaces are notified of certain things under specific circumstances?

Yes, I know the TDD / Agile mob will now lynch me and say that IoC and
interfaces make TDD much easier, but it doesn't make it less overkill and
less overhead, and overkill/overhead always leads to a less maintainable
system.

In my experience it forces me to split my code up into more logical units.
When I am writing code I often realise "Why should a class that converts XML
to a binary file know how to zip and unzip files?" When this happens I end
up with code like this:

public class XmlToBinaryFileProcessor : IFileProcessor
{
    readonly IZipService ZipService;

    public XmlToBinaryFileProcessor(IZipService zipService)
    {
        // Check zipService != null
        ZipService = zipService;
    }

    void IFileProcessor.Process(string archiveFileName, string outputFileName)
    {
        ZipService.ExtractFile(someExpectedFileNameWithinTheArchive, archiveFileName);
        CreateBinaryFile(/* the file name of the extracted file */);
        ZipService.CreateZipForFile(someResultingBinaryFileName, outputFileName);
    }
}

Rather than littering my methods with ZIP/UNZIP code.
 

Frans Bouma [C# MVP]

Chopping up posts isn't really making things readable, Peter. Now I
can't see which sentences I replied to with my texts.

Peter said:
How on Earth can you possibly know that? Neither of us have any idea
what this person's domain model looks like.

Sure, but your proposal of calling a process routine on StandardOrder
risks running into the drawback of OO programming: you are inside an
object and need to do something, but you need information that's outside
the object, so you effectively have to either hold a reference to another
object (bad) or move the code upwards in the call chain.
You think that whenever you have code that needs to get an order by its
ID the following code complicates it?

StandardOrder order = StandardOrderRepository.GetByNumber(number);

Doesn't look complicated to me.

without any context (you cut that away) it's impossible to answer.
That's integration testing, not unit testing.

Oh dear...

Peter, who gives a flying **** what it is called. Only the true Agile
fetishists who have the Agile Manifesto tattooed on their foreheads
think it's important to note that you have different kinds of testing
which apparently require different approaches.
If you have a method that
works out the total value of a purchase order why would you want to
ensure the database works?

Because clean-room, 'it looks great in theory' kind of testing is
bullshit: it's not useful because at runtime in production, the
environment and thus the rules are different.
When you test something you should isolate
it as much as possible, that way when 1 thing fails you get failed tests
for that 1 thing only.

Which doesn't mean anything. If I have an algorithm which has to do 3
things, and all 3 things separately work, I have no guarantee that
executing thing 2 after thing 1 will work.

Oh wait, that was 'integration testing', right? No: the algorithm is,
for example, a routine used in yet another routine.

Anyway, I hate whining threads about TDD related crap, so have fun Peter,

FB

 

Frans Bouma [C# MVP]

Peter said:
And when you wrote your unit tests didn't you mock those interfaces and
then pass the mocked interfaces into the relevant points to ensure that
the interfaces are notified of certain things under specific circumstances?

No, mocking is stupid in an O/R mapper scenario, as there is an unlimited
number of use-cases and the code is very complex (read: you can't get away
with isolating silly small routines and trusting it will all work together
as well).

I use algorithm-proving techniques and code reviews to verify algorithm
implementations. I can tell you that that works very well. The upside is
that you also identify edge cases and important areas during these
processes, and you can then write unit tests to verify those edge cases
and important areas so you can verify that the code doesn't break.
In my experience it forces me to split my code up into more logical
units. When I am writing code I often realise "Why should a file that
converts XML to a Binary file know how to zip and unzip files?"

what does that have to do with interfaces and IoC? It's just common
sense software engineering.
When this happens I end up with code like this

public class XmlToBinaryFileProcessor : IFileProcessor
{
    readonly IZipService ZipService;

    public XmlToBinaryFileProcessor(IZipService zipService)
    {
        // Check zipService != null
        ZipService = zipService;
    }

    void IFileProcessor.Process(string archiveFileName, string outputFileName)
    {
        ZipService.ExtractFile(someExpectedFileNameWithinTheArchive, archiveFileName);
        CreateBinaryFile(/* the file name of the extracted file */);
        ZipService.CreateZipForFile(someResultingBinaryFileName, outputFileName);
    }
}


Rather than littering my methods with ZIP/UNZIP code.

Or, you could have looked at it in a more formal way and recognized that
you have a pipeline of processing elements, and thus made it so that
processing element X processes the input to output O for processing
element Y, which processes it further. No IoC needed and, even more
important, no notion of 'zip' inside the XmlToBinaryFileProcessor.

After all, you're not creating just a binary file, but a compressed file
AND a binary file. More importantly, you're creating a compressed form
of the XML data and storing it in a file, so there is more than one
concern in the same class.
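The pipeline Frans describes might be sketched as follows; all names here are illustrative, not taken from any real library.

```csharp
// Each element transforms its input and knows nothing about the
// other stages; composition happens entirely outside the elements.
public interface IPipelineElement<TIn, TOut>
{
    TOut Process(TIn input);
}

public class UnzipElement : IPipelineElement<string, string>
{
    // Archive path in, extracted file path out.
    public string Process(string archivePath)
    {
        /* unzip here */
        return "extracted.xml";
    }
}

public class XmlToBinaryElement : IPipelineElement<string, string>
{
    // XML file path in, binary file path out. Note: no notion of
    // "zip" anywhere inside this element.
    public string Process(string xmlPath)
    {
        /* convert here */
        return "output.bin";
    }
}

// Wiring the stages together:
// var binaryPath = new XmlToBinaryElement().Process(
//     new UnzipElement().Process("input.zip"));
```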

Nothing wrong with that; the paper
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.43.7026
already proved that it's impossible to get SoC without AOP. Though as
soon as you recognize that you have more than one dominant concern,
testing it through code might be more cumbersome than you initially
would think.

FB

 

Peter Morris

Sure, but your proposal of calling a process routine on StandardOrder has
the risk of running into the drawback of OO programming: that you are
inside an object and need to do something but you need information that's
outside the object

I usually obtain support for other stuff using services. If it's not a good
solution in this case then it's probably not a good idea to add a Process
method to StandardOrder.

without any context (you cut that away) it's impossible to answer.

I'll try to include more in future, like that :)

Oh dear...

Peter, who gives a flying **** what it is called. Only the true Agile
fetishists who have the Agile Manifesto tatooted on their foreheads think
it's important to notice that you have different kinds of testing which
apparently require different approaches.

I'm not arguing terminology. I must confess to being the last person to
give a damn what stuff is called; I just use what I find works and dump what
I find doesn't. I was merely pointing out that testing a single unit is
completely different from testing that multiple units all work together.

Which doesn't mean anything. If I have an algorithm which has to do 3
things, and all 3 things separately work, I have no guarantee that
executing thing 2 after thing 1 will work.

Personally I try to avoid implementing my code in such a way that I must
call public methods in a specific order. If things need to happen in a
specific order I try to make them private and then expose a public method
that calls them in the correct order. If there is a scenario where there is
no required sequence, and method 2 puts the object into a specific state that
method 3 has a problem with, then either:

A: Method 2 is leaving the object in an illogical state, or
B: Your tests on Method 3 don't cover all logical variations.
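Peter's point about hiding order-sensitive steps might be sketched like this (ReportBuilder and its steps are hypothetical examples):

```csharp
public class ReportBuilder
{
    // The only public entry point runs the steps in the one correct
    // order, so callers cannot invoke them out of sequence.
    public string Build()
    {
        LoadData();
        Transform();
        return Render();
    }

    void LoadData() { /* step 1 */ }
    void Transform() { /* step 2: must follow LoadData */ }
    string Render() { /* step 3 */ return "report"; }
}
```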

Anyway, I hate whining threads about TDD related crap, so have fun Peter,

I'm not whining in the slightest. To be honest I don't even know if what I
do could be considered TDD. All I know is I write tests first, then
implement after. I use dependency injection to help me separate my code.
I've only been using this approach for about 6 months, and while I am
really enjoying it I am always willing to see flaws in something because it
might help me to avoid problems in the future or even find a better way of
doing it. So far I find it is saving me a lot of time, and it has saved me
at least 10 times from introducing very obscure bugs such as

if (hello)
...
instead of

if (!hello)
...

or

if (a < b)
...

instead of

if (a <= b)
...
 

Peter Morris

You said:

what does that have to do with interfaces and IoC? It's just common sense
software engineering.

Before using IoC I'd probably have just unzipped where I was, and then
refactored the code out only when I needed it somewhere else. Now I tend to
write code and, as I do, think "actually, unzipping is not the
responsibility of this class" and place it elsewhere. I'm just saying that
I've started to do this more since using IoC, simply because I needed to pass
a mocked IZipService when running unit tests and I didn't want to need
specific ZIP files on my hard disk etc. for the tests to pass.


I'll get around to that as soon as I am not knackered :)
 
