
Overarchitected Projects


    #51
    Originally posted by SpontaneousOrder View Post
    Let's suppose you're doing the TDD thing... if you isolate every class you're throwing away 50% of the value of TDD, as TDD is all about emerging relationships between classes & components.
    TDD and Unit Testing are not the same thing, and doing one doesn't mean you can't do the other.


    Originally posted by SpontaneousOrder View Post
    If you have complicated logic in A coupled to complicated logic in B, then most of the time you've got a bad design. If you have complicated logic in A coupled to more trivial logic in B, then testing A & B together with a mocked C is often even better.

    There is a time and a place, which is the point being made. Getting more benefit from isolating a single class than from a small gaggle of cohesive classes operating together is, in my opinion, fairly rare. Hence I'd expect to see things like factories which return pre-assembled components being injected, more than I would expect to see every single dependency being wired up in some layer of indirection.
    Agreed that in some cases, where there is a natural coupling between classes, it makes sense to test the classes together (the classic example would be the CalculatorInputParser and Calculator). I'd argue this isn't the norm though.

    A good example for isolating classes might be if B is a UserPreferenceService, and A reads out a preference which is then used to drive some further logic in A.

    How can you verify that A behaves correctly for all the possible return values of this preference?
    How can you confirm A behaves gracefully if B throws an exception, returns null, returns garbage, or doesn't return at all?
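
    To make that concrete, a mocked B makes those cases trivial to pin down. A rough sketch of the sort of tests I mean (class and method names are invented, and I'm assuming A takes B behind an interface, using Moq and NUnit):

    Code:
    using System;
    using Moq;
    using NUnit.Framework;

    public interface IUserPreferenceService
    {
        string GetPreference(string key);
    }

    [TestFixture]
    public class WidgetFormatterTests // "A" in the example above (hypothetical class)
    {
        [TestCase("compact", "1,2,3")]
        [TestCase("verbose", "1, 2, 3")]
        public void Formats_list_according_to_the_stored_preference(string pref, string expected)
        {
            // Drive A through each interesting return value of B.
            var prefs = new Mock<IUserPreferenceService>();
            prefs.Setup(p => p.GetPreference("list.format")).Returns(pref);

            var formatter = new WidgetFormatter(prefs.Object);

            Assert.AreEqual(expected, formatter.Format(new[] { 1, 2, 3 }));
        }

        [Test]
        public void Falls_back_to_the_default_format_when_the_service_throws()
        {
            // B blowing up is a one-liner to simulate with a mock.
            var prefs = new Mock<IUserPreferenceService>();
            prefs.Setup(p => p.GetPreference(It.IsAny<string>()))
                 .Throws(new TimeoutException());

            var formatter = new WidgetFormatter(prefs.Object);

            Assert.AreEqual("1,2,3", formatter.Format(new[] { 1, 2, 3 }));
        }
    }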



      #52
      Originally posted by firestarter View Post
      Read through some of the blog posts. Sounds like the general idea is to have a BaseCommand which is actually just a big ballbag of mutable dependencies, which are all (hopefully) set prior to execution, with each command implementation able to access any of them.
      "Hopefully set" from the one single place that executes them, which presumably has unit tests to ensure this happens? The point of the series of articles was not really dependency injection (in fact the dependencies are injected at command execution time) but reducing the number of different abstractions [and therefore "dependencies"] a project actually needs - I've sketched the rough shape below for anyone who hasn't read them.

      I see overzealous defensive programming all the time, but it has always seemed to me that where developers lose productivity is not in working out which property they inadvertently set to null, but in working out which of the 100 classes (that don't need to be there) they need to change.
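
      The shape, as I understood it, is roughly this - the interface and class names here are my own invention, not the author's - with a single executor being the one place that sets the dependencies:

      Code:
      public interface IDatabase { void Save(object entity); }
      public interface IEmailSender { void Send(string to, string subject, string body); }

      public abstract class BaseCommand
      {
          // The "ballbag" of dependencies: set just before execution
          // rather than constructor-injected into every command.
          public IDatabase Database { get; set; }
          public IEmailSender EmailSender { get; set; }

          public abstract void Execute();
      }

      public class CommandExecutor
      {
          private readonly IDatabase _database;
          private readonly IEmailSender _emailSender;

          public CommandExecutor(IDatabase database, IEmailSender emailSender)
          {
              _database = database;
              _emailSender = emailSender;
          }

          // The one place that "hopefully" sets everything -
          // which is exactly what its unit tests should pin down.
          public void Execute(BaseCommand command)
          {
              command.Database = _database;
              command.EmailSender = _emailSender;
              command.Execute();
          }
      }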

      Originally posted by firestarter View Post
      I could only ever see this working for small projects.
      Now this is the real problem: a great many projects that could be small end up being large due to the architectural decisions of the latest person who swallowed the Domain-Driven Design book and is using the current project to test how it works. Fine, I've made this mistake myself in the past, but probably less than 10% of projects actually need it.

      For example, with the current "greenfield" project I'm working on, after being given the solution by the lead developer, I see 12 projects in Visual Studio. The system is a series of forms that result in data being stored in a database or service calls being made (now where have I seen this before?). How could this possibly need more than a few general abstractions (or assemblies for that matter)?
      Last edited by Jaws; 2 March 2016, 07:15.



        #53
        Sounds like the problem here with the team (as with a lot of the industry) is a complete lack of common sense.

        An over-engineered system such as the one you describe will only be made worse by the addition of DI.

        A properly designed one will only benefit.



          #54
          Sounds like a way of telling whether a system should be scrapped or not.

          We could offer it as an evaluation service to government projects, but the current pattern of massive failure at massive cost is a way for us to get back (by the day rate) some of the taxes we get fleeced for via IR35 and the like.
          Maybe tomorrow, I'll want to settle down. Until tomorrow, I'll just keep moving on.



            #55
            Originally posted by firestarter View Post
            TDD and Unit Testing are not the same thing, and doing one doesn't mean you can't do the other.
            Well... I did say *if*. And I certainly wouldn't "unit test" the same code I'd "test driven" as another extra activity, unless it were to focus on some particularly hairy algorithm.

            Originally posted by firestarter View Post
            Agreed that in some cases, where there is a natural coupling between classes, it makes sense to test the classes together (the classic example would be the CalculatorInputParser and Calculator). I'd argue this isn't the norm though.

            A good example for isolating classes might be if B is a UserPreferenceService, and A reads out a preference which is then used to drive some further logic in A.

            How can you verify that A behaves correctly for all the possible return values of this preference?
            How can you confirm A behaves gracefully if B throws an exception, returns null, returns garbage, or doesn't return at all?
            If you test them together, and you test drive your code, you'll never get unexpected nulls or garbage. That's the point.

            Isolating everything as an overarching rule invites the problem of never knowing exactly how your collaborating classes may behave under any given circumstance, and therefore you write many tests that *should* be redundant - but are necessary for peace of mind.
            You also write tests which cover both valid and invalid scenarios - which is wasteful and worsens the signal/noise ratio. This is why you can't really do proper TDD (as far as I'm concerned) while isolating every class.

            If A invokes service B for info that then drives more logic in A, while service B - as services tend to do - basically orchestrates datasources C & D, then you should be able to, on the majority of occasions, test A & B together while mocking C & D.

            I.e. all use cases for our 'unit' under test (in this case some component consisting of A & B) that exercise B originate through A (or A & A2 if there are more consumers of that service).
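
            Something like this, roughly (all the names are invented; B is the real service, C & D are mocked at the edge, and the assertion is on A's visible behaviour):

            Code:
            using Moq;
            using NUnit.Framework;

            [TestFixture]
            public class OrderProcessingTests
            {
                [Test]
                public void Order_is_flagged_for_review_when_the_customer_is_over_their_limit()
                {
                    // C & D - the datasources at the edge, mocked.
                    var customers = new Mock<ICustomerStore>();
                    var orders = new Mock<IOrderStore>();
                    customers.Setup(c => c.GetCreditLimit("cust-1")).Returns(100m);
                    orders.Setup(o => o.GetOutstandingTotal("cust-1")).Returns(95m);

                    // B - the real service, wired up to the mocked datasources.
                    var creditService = new CreditService(customers.Object, orders.Object);

                    // A - the consumer whose behaviour we actually care about.
                    var processor = new OrderProcessor(creditService);

                    var result = processor.Process(new Order(customerId: "cust-1", amount: 10m));

                    Assert.IsTrue(result.FlaggedForReview);
                }
            }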

            Of course it depends on the exact nature of the service and its consumer. If the service in turn collaborates with validators and filters and whatnot, then you're now adding more permutations if you include those too, and adding more gaps if you don't. It is obviously a judgement call when it comes to granularity & overlap.

            But the principle is still the same. It is far better, where possible, to test behaviour rather than implementation, and behaviour is an emergent property of multiple collaborators - not a single class.

            I suspect, though, that this will vary depending on the kind of software being developed. So I don't want to make it sound like a one-size-fits-all approach.


            Obviously, none of this has anything to do with DI frameworks though.
            Last edited by SpontaneousOrder; 3 March 2016, 22:01.



              #56
              Originally posted by SpontaneousOrder View Post
              Well... I did say *if*. And I certainly wouldn't "unit test" the same code I'd "test driven" as another extra activity, unless it were to focus on some particularly hairy algorithm.
              Which is fine for testing a few input/output cases, but you'll never achieve the same level of granularity that unit testing allows for, nor will you be able to verify exactly how your classes are interacting.

              Simple example - We take our Calculator class and add component B to provide some caching logic on top. TDD will allow you to verify the result but you won't be able to verify how the result was provided. Is B dipping into the cache or forcing a recalculation each time?
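
              With a mock you can pin that down directly. A rough sketch (assuming an ICalculator interface and a CachingCalculator decorator standing in for "B", with Moq doing the verification):

              Code:
              using Moq;
              using NUnit.Framework;

              [TestFixture]
              public class CachingCalculatorTests
              {
                  [Test]
                  public void Second_identical_request_is_served_from_the_cache()
                  {
                      var inner = new Mock<ICalculator>();
                      inner.Setup(c => c.Add(2, 3)).Returns(5);

                      var cached = new CachingCalculator(inner.Object);

                      Assert.AreEqual(5, cached.Add(2, 3));
                      Assert.AreEqual(5, cached.Add(2, 3));

                      // The result alone can't tell us this; the interaction can.
                      inner.Verify(c => c.Add(2, 3), Times.Once());
                  }
              }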

              Originally posted by SpontaneousOrder View Post
              If you test them together, and you test drive your code, you'll never get unexpected nulls or garbage. That's the point.

              Isolating everything as an overarching rule invites the problem of never knowing exactly how your collaborating classes may behave under any given circumstance, and therefore you write many tests that *should* be redundant - but are necessary for peace of mind.
              You also write tests which cover both valid and invalid scenarios - which is wasteful and worsens the signal/noise ratio. This is why you can't really do proper TDD (as far as I'm concerned) while isolating every class.
              Can you guarantee that these classes are only ever going to be used with each other? Is it possible someone might come along and lift out one of these components to be used elsewhere?

              A solid set of unit tests can ensure that a class conforms to an exact set of behaviours, regardless of external factors. So a test that is potentially redundant today might prevent a bug next month when other developers start trampling over the code.

              Originally posted by SpontaneousOrder View Post
              But the principle is still the same. It is far better, where possible, to test behaviour rather than implementation, and behaviour is an emergent property of multiple collaborators - not a single class.
              And better still, to recognise that different types of testing are tools to achieve different things and often it's not an either/or situation.

