

Previously on "Overarchitected Projects"


  • firestarter
    replied
    Originally posted by SpontaneousOrder View Post
    Well... I did say *if*. And I certainly wouldn't "unit test" the same code I'd "test driven" as another extra activity, unless it were to focus on some particular hairy algorithm.

    Which is fine for testing a few input/output cases, but you'll never achieve the same level of granularity that unit testing allows for, nor will you be able to verify exactly how your classes are interacting together.

    Simple example - we take our Calculator class and add component B to provide some caching logic on top. TDD will allow you to verify the result, but you won't be able to verify how the result was provided. Is B dipping into the cache or forcing a recalculation each time?

    Originally posted by SpontaneousOrder View Post
    If you test them together, and you test drive your code, you'll never get unexpected nulls or garbage. That's the point.

    Isolating everything as an overarching rule invites the problem of never knowing exactly how your collaborating classes may behave under any given circumstance, and therefore you write many tests that *should* be redundant - but are necessary for peace of mind.
    You also write tests which cover both valid scenarios and invalid ones - which is wasteful and worsens the signal/noise ratio. This is why you can't really do proper TDD (as far as I'm concerned) while isolating every class.

    Can you guarantee that these classes are only ever going to be used with each other? Is it possible someone might come along and lift one of these components out to be used elsewhere?

    A solid set of unit tests can ensure that a class conforms to an exact set of behaviours, regardless of external factors. So a test that is potentially redundant today might prevent a bug next month when other developers start trampling over the code.

    Originally posted by SpontaneousOrder View Post
    But the principle is still the same. It is far better, where possible, to test behaviour rather than implementation, and behaviour is an emergent property of multiple collaborators - not a single class.

    And better still, to recognise that different types of testing are tools to achieve different things, and often it's not an either/or situation.
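    The Calculator-plus-cache question above ("is B dipping into the cache or forcing a recalculation?") is exactly what a mock-based interaction test answers. A minimal sketch - Python's unittest.mock stands in for a .NET mocking library, and CachingCalculator is an invented name:

```python
from unittest.mock import Mock

class CachingCalculator:
    """Hypothetical component B: caches results from a wrapped calculator."""
    def __init__(self, calculator):
        self._calculator = calculator
        self._cache = {}

    def add(self, a, b):
        key = (a, b)
        if key not in self._cache:
            self._cache[key] = self._calculator.add(a, b)
        return self._cache[key]

# Interaction test: a mock lets us assert *how* the result was produced,
# not just what it was.
inner = Mock()
inner.add.return_value = 5
calc = CachingCalculator(inner)

assert calc.add(2, 3) == 5
assert calc.add(2, 3) == 5               # second call answered from the cache
inner.add.assert_called_once_with(2, 3)  # the wrapped calculator was hit once
```

    A state-only test would pass whether or not the cache was working; the `assert_called_once_with` line is what verifies the caching behaviour itself.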



  • SpontaneousOrder
    replied
    Originally posted by firestarter View Post
    TDD and Unit Testing are not the same thing, and doing one doesn't mean you can't do the other.
    Well... I did say *if*. And I certainly wouldn't "unit test" the same code I'd "test driven" as another extra activity, unless it were to focus on some particular hairy algorithm.

    Originally posted by firestarter View Post
    Agreed that in some cases, where there is a natural coupling between classes, it makes sense to test the classes together (the classic example would be the CalculatorInputParser and Calculator). I'd argue this isn't the norm though.

    A good example for isolating classes might be if B is a UserPreferenceService, and A reads out a preference which is then used to drive some further logic in A.

    How can you verify that A behaves correctly for all the possible return values of this preference?
    How can you confirm A behaves gracefully if B throws an exception, returns null, returns garbage, or doesn't return at all?
    If you test them together, and you test drive your code, you'll never get unexpected nulls or garbage. That's the point.

    Isolating everything as an overarching rule invites the problem of never knowing exactly how your collaborating classes may behave under any given circumstance, and therefore you write many tests that *should* be redundant - but are necessary for peace of mind.
    You also write tests which cover both valid scenarios and invalid ones - which is wasteful and worsens the signal/noise ratio. This is why you can't really do proper TDD (as far as I'm concerned) while isolating every class.

    If A invokes service B for info that then drives more logic in A, while service B - as services tend to do - basically orchestrates datasources C & D, then you should be able to, on the majority of occasions, test A & B together while mocking C & D.

    I.e. all use cases for our 'unit' under test (in this case some component consisting of A & B) that exercise B originate through A (or A & A2 if there are more consumers of that service).

    Of course it depends on the exact nature of the service and its consumer. If the service in turn collaborates with validators and filters and whatnot, then you're now adding more permutations if you include those too, and adding more gaps if you don't. It is obviously a judgement call when it comes to granularity & overlap.

    But the principle is still the same. It is far better, where possible, to test behaviour rather than implementation, and behaviour is an emergent property of multiple collaborators - not a single class.

    I suspect, though, that this will vary depending on the kind of software being developed. So I don't want to make it sound like a one-size-fits-all approach.


    Obviously, none of this has anything to do with DI frameworks though.
    Last edited by SpontaneousOrder; 3 March 2016, 22:01.
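    The "test A & B together while mocking C & D" arrangement described above might look something like this (all names are invented for illustration, and Python's unittest.mock is used for brevity in place of a .NET mocking library):

```python
from unittest.mock import Mock

class PricingService:
    """Hypothetical service B: orchestrates two datasources (C and D)."""
    def __init__(self, rates, discounts):
        self._rates = rates
        self._discounts = discounts

    def price(self, sku):
        return self._rates.rate_for(sku) - self._discounts.discount_for(sku)

class Basket:
    """Hypothetical consumer A: drives further logic off B's answer."""
    def __init__(self, pricing):
        self._pricing = pricing

    def total(self, skus):
        return sum(self._pricing.price(sku) for sku in skus)

# Sociable test: real A and real B collaborate; only the datasources
# C and D at the edge are mocked.
rates, discounts = Mock(), Mock()
rates.rate_for.return_value = 10
discounts.discount_for.return_value = 2
basket = Basket(PricingService(rates, discounts))

assert basket.total(["a", "b"]) == 16  # behaviour of A and B verified together
```

    The test exercises the emergent behaviour of A and B as one unit, so refactoring the boundary between them doesn't break any tests - only the contract with C and D is pinned down.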



  • Hobosapien
    replied
    Sounds like a way of telling whether a system should be scrapped or not.

    We could offer it as an evaluation service to government projects, but the current pattern of massive failure at massive cost is a way for us to get back (by the day rate) some of the taxes we get fleeced for via IR35 and the like.



  • firestarter
    replied
    Sounds like the problem here with the team (as with a lot of the industry) is a complete lack of common sense.

    An over-engineered system such as the one you describe will only be made worse by the addition of DI.

    A properly designed one will only benefit.



  • Jaws
    replied
    Originally posted by firestarter View Post
    Read through some of the blog posts. Sounds like the general idea is to have a BaseCommand which is actually just a big ballbag of mutable dependencies, which are all (hopefully) set prior to execution, with each command implementation able to access any of them.
    "Hopefully set" from the one single place that executes them, which presumably has unit tests to ensure this happens? The point of the series of articles was not really about dependency injection (where, in fact, the dependencies are injected at command execution time) but about reducing the number of different abstractions [and therefore "dependencies"] a project actually needs.

    I see overzealous defensive programming all the time, but the real productivity drain for developers has never seemed to be which property they inadvertently set to null - it's which of the 100 classes (that don't need to be there) they need to make the change to.

    Originally posted by firestarter View Post
    I could only ever see this working for small projects.
    Now this is the real problem: a great many projects that could be small end up being large due to the architectural decisions of the latest person who swallowed the Domain-Driven Design book and is using the current project to test how it works. Fine, I've made this mistake myself in the past, but probably less than 10% of projects actually need it.

    For example, with the current "greenfield" project I'm working on, after being given the solution by the lead developer, I see 12 projects in Visual Studio. The system is a series of forms that result in data being stored in a database or service calls being made (now where have I seen this before?). How could this possibly need more than a few general abstractions (or assemblies for that matter)?
    Last edited by Jaws; 2 March 2016, 07:15.



  • firestarter
    replied
    Originally posted by SpontaneousOrder View Post
    Let's suppose you're doing the TDD thing... if you isolate every class you're throwing away 50% of the value of TDD, as TDD is all about emerging relationships between classes & components.
    TDD and Unit Testing are not the same thing, and doing one doesn't mean you can't do the other.


    If you have complicated logic in A coupled to complicated logic in B, then most of the time you've got a bad design. If you have complicated logic in A coupled to more trivial logic in B, then testing A & B together with a mocked C is often even better.

    There is a time and a place, which is the point being made. Getting more benefit from isolating a single class, as opposed to from a small gaggle of cohesive classes operating together is, in my opinion, fairly rare. Hence I'd expect to see things like factories which return pre-assembled components being injected more than I would expect to see every single dependency being wired up in some layer of indirection.
    Agreed that in some cases, where there is a natural coupling between classes, it makes sense to test the classes together (the classic example would be the CalculatorInputParser and Calculator). I'd argue this isn't the norm though.

    A good example for isolating classes might be if B is a UserPreferenceService, and A reads out a preference which is then used to drive some further logic in A.

    How can you verify that A behaves correctly for all the possible return values of this preference?
    How can you confirm A behaves gracefully if B throws an exception, returns null, returns garbage, or doesn't return at all?
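    One way to answer those two questions is to isolate A behind a stubbed B, as suggested. A rough sketch (Python's unittest.mock for brevity; ReportGenerator, the "page_size" key, and the fallback of 50 are all invented for illustration):

```python
from unittest.mock import Mock

DEFAULT_PAGE_SIZE = 50  # invented fallback value, for illustration only

class ReportGenerator:
    """Hypothetical A: its behaviour branches on a preference read from B."""
    def __init__(self, preferences):
        self._preferences = preferences

    def page_size(self):
        try:
            value = self._preferences.get("page_size")
        except Exception:
            return DEFAULT_PAGE_SIZE     # B threw: degrade gracefully
        if not isinstance(value, int) or value <= 0:
            return DEFAULT_PAGE_SIZE     # null or garbage: fall back
        return value

prefs = Mock()  # stands in for the UserPreferenceService (B)

prefs.get.return_value = 25
assert ReportGenerator(prefs).page_size() == 25   # valid preference honoured

prefs.get.return_value = None
assert ReportGenerator(prefs).page_size() == 50   # null handled gracefully

prefs.get.side_effect = RuntimeError("service down")
assert ReportGenerator(prefs).page_size() == 50   # exception handled gracefully
```

    Because B is stubbed, every awkward return value and failure mode can be forced on demand - something a sociable test with the real preference service can't easily do.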



  • firestarter
    replied
    Originally posted by Jaws View Post
    ....
    Read through some of the blog posts. Sounds like the general idea is to have a BaseCommand which is actually just a big ballbag of mutable dependencies, which are all (hopefully) set prior to execution, with each command implementation able to access any of them.

    A complete antipattern and a bizarre attempt at avoiding DI. I could only ever see this working for small projects.



  • DimPrawn
    replied
    Originally posted by firestarter View Post
    Never seen so much hate for DI frameworks. Is there a similar disdain for loose coupling and unit testing as well?
    No hate for DI frameworks here, but they just add yet another layer of abstraction/indirection to cope with when debugging and troubleshooting. Throw in 12 layers, 47 microservices and 12 VMs to process a record in a database and it starts to wear a bit thin.

    But who cares, Bob in India can sort it all out soon for $10 an hour.



  • SpontaneousOrder
    replied
    Originally posted by firestarter View Post
    So ComponentA has a bunch of logic in it, including calling ComponentB.

    You don't see any value in being able to test ComponentA in isolation?
    Let's suppose you're doing the TDD thing... if you isolate every class you're throwing away 50% of the value of TDD, as TDD is all about emerging relationships between classes & components.

    If you have complicated logic in A coupled to complicated logic in B, then most of the time you've got a bad design. If you have complicated logic in A coupled to more trivial logic in B, then testing A & B together with a mocked C is often even better.

    There is a time and a place, which is the point being made. Getting more benefit from isolating a single class, as opposed to from a small gaggle of cohesive classes operating together is, in my opinion, fairly rare. Hence I'd expect to see things like factories which return pre-assembled components being injected more than I would expect to see every single dependency being wired up in some layer of indirection.



  • SpontaneousOrder
    replied
    Originally posted by firestarter View Post
    I'd be interested to know what the alternative to DI is.

    Would it be new'ing things up on the fly or just making everything static?
    The hate hasn't been for DI. It's been for DI frameworks.

    Although I did mention that people take DI unnecessarily far too.



  • Jaws
    replied
    Originally posted by firestarter View Post
    Interesting idea but I see a few potential problems:
    -Consumers are still tightly coupled with the commands they rely on
    Yes although I can't think where I've ever needed to replace the command with something else. If it became necessary to have that level of flexibility, sure I'd take a different approach - but _only_ when it became necessary. Due to the way the commands are executed elsewhere, things can still be tested easily.

    Originally posted by firestarter View Post
    -Consumers still need to construct each command (including supplying of dependencies). If those dependencies also have dependencies then your code will get messy very quickly
    Actually, the dependencies are supplied by the command runner. The class creating the command just provides its basic parameters; there is a little bit of logic in the runner to supply dependencies where needed (such as data access components). Typically there will not be many different types of dependencies, so this logic is very straightforward. It just depends on how you have set up your application.

    Originally posted by firestarter View Post
    -If an I*Service would normally expose 2+ methods, then this would presumably be 2+ commands? How would state that would have been encapsulated by said service be encapsulated across multiple commands?
    Hm, the state is really held in whatever the commands are operating on, so most likely they will have a shared dependency. If you are describing a transaction spanning multiple calls to your service, then this entire transaction could be encapsulated either within a command or just within the method that creates these commands, depending on the level of reuse required.

    Originally posted by firestarter View Post
    -Can't easily restrict the number of instances of a particular command
    You don't need this with the commands - they are extremely lightweight. You don't need to share state or anything like that; they are disposed of as soon as they are executed. The commands are more like delegates, but with the benefit that they all follow the same basic interface and can have dependencies injected into them just before they are executed.

    The dependencies might be longer lived and I expect these to be managed by a factory or IoC container but the commands themselves represent tiny bits of logic to perform updates and are not holding onto resources any longer than their execute method takes to run.

    Although I mention dependencies being injected into commands you still end up with a much simpler configuration because you don't have to deal with entity specific services like IUserUpdateService, IUserDeletionService etc, just a few abstractions like ICommand, IDatabase (or data context), IEmailService which are more general.

    It's worked for me on a few projects and they're all easy to maintain looking back. Like with all things though it depends on your requirements. For my own projects it's mostly about keeping the number of layers to a minimum and the number of abstractions per layer to a minimum as well.

    Edit: by the way this isn't something I've come up with myself, it's actually from a series of blog posts by Ayende: h ttps://ayende.com/blog/154209/limit-your-abstractions-refactoring-toward-reduced-abstractions - which I believe is well worth the read if you haven't already.
    Last edited by cojak; 28 February 2016, 19:52. Reason: Removed link
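    A minimal sketch of the runner-supplies-dependencies idea described above (invented names; Python rather than C# for brevity, and the attribute-based wiring is just one possible convention for "a little bit of logic in the runner"):

```python
class SendWelcomeEmail:
    """Hypothetical command: the caller supplies only its basic parameters."""
    def __init__(self, user_id):
        self.user_id = user_id
        self.email_service = None        # dependency filled in by the runner

    def execute(self):
        self.email_service.send(self.user_id, "Welcome!")

class CommandRunner:
    """Supplies dependencies just before execution, then runs the command."""
    def __init__(self, email_service):
        self._email_service = email_service

    def run(self, command):
        if hasattr(command, "email_service"):
            command.email_service = self._email_service
        command.execute()

class RecordingEmailService:
    """Stand-in dependency so the sketch is runnable without real email."""
    def __init__(self):
        self.sent = []
    def send(self, user_id, body):
        self.sent.append((user_id, body))

emails = RecordingEmailService()
CommandRunner(emails).run(SendWelcomeEmail(42))
assert emails.sent == [(42, "Welcome!")]
```

    The command itself stays a tiny, short-lived bit of logic; only the runner (and the longer-lived dependencies it hands out) needs any container or factory configuration.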



  • firestarter
    replied
    Originally posted by Jaws View Post
    One alternative, rather than having lots of I*Service implementations and injecting those, is to make extensive use of the command pattern. Newing up command instances is still quite testable if you provide the means for the developer to intercept where the command is actually executed. You could do this by passing in an ICommandRunner-style implementation (DI - oops) or by using a static CommandRunner class with, say, a static delegate property which you can override while testing. A variation could be used for queries also. (To test, you just check how the command object has been configured and don't actually execute it; you can write separate tests to test the command in isolation.)

    Personally I don't have too much of a problem with dependency injection, but thought I'd present an alternative I use myself which ends up with a very small amount of DI container configuration (or none at all).
    Interesting idea but I see a few potential problems:

    -Consumers are still tightly coupled with the commands they rely on

    -Consumers still need to construct each command (including supplying of dependencies). If those dependencies also have dependencies then your code will get messy very quickly

    -If an I*Service would normally expose 2+ methods, then this would presumably be 2+ commands? How would state that would have been encapsulated by said service be encapsulated across multiple commands?

    -Can't easily restrict the number of instances of a particular command



  • Jaws
    replied
    Originally posted by firestarter View Post
    I'd be interested to know what the alternative to DI is.

    Would it be new'ing things up on the fly or just making everything static?
    One alternative, rather than having lots of I*Service implementations and injecting those, is to make extensive use of the command pattern. Newing up command instances is still quite testable if you provide the means for the developer to intercept where the command is actually executed. You could do this by passing in an ICommandRunner-style implementation (DI - oops) or by using a static CommandRunner class with, say, a static delegate property which you can override while testing. A variation could be used for queries also. (To test, you just check how the command object has been configured and don't actually execute it; you can write separate tests to test the command in isolation.)

    Personally I don't have too much of a problem with dependency injection, but thought I'd present an alternative I use myself which ends up with a very small amount of DI container configuration (or none at all).
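    The intercept-the-execution trick described above can be sketched like this (invented names; Python for brevity, with the static delegate property becoming an overridable class attribute):

```python
class DeleteUser:
    """Hypothetical command object."""
    def __init__(self, user_id):
        self.user_id = user_id

    def run(self):
        raise NotImplementedError("would hit the real database")

class CommandRunner:
    """All execution funnels through one overridable hook."""
    execute = staticmethod(lambda command: command.run())

# Under test, swap the hook so nothing actually executes, then assert on
# how the command object was configured.
captured = []
CommandRunner.execute = staticmethod(captured.append)

CommandRunner.execute(DeleteUser(7))
assert len(captured) == 1
assert captured[0].user_id == 7
```

    In production the default hook simply runs the command; the test never touches the database, yet still verifies that the right command was built with the right parameters.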



  • firestarter
    replied
    Originally posted by cojak View Post
    This isn't LinkedIn BS corner. We tell it as we see it.
    I'd be interested to know what the alternative to DI is.

    Would it be new'ing things up on the fly or just making everything static?



  • BrilloPad
    replied
    I am working on one now. It is purely a PnL system. Take data out of source systems. Slice and dice.

    You would think they were building a nuclear war simulation.

    Why do banks subscribe to these white elephants?

    Of course, on the plus side, I can invoice lots. I suppose I shouldn't bite the hand that feeds me swan's wings and champagne....

