
contracting in great depression II


    #21
    Originally posted by TimberWolf View Post
    There's a lot of science in testing and unfortunately a lot of it is carp, and I've forgotten much of it. At what stage do you stop testing? E.g. when you don't find a bug at all for x amount of time, or when you only find x number of bugs in a certain amount of time? With huge software products I expect the latter approach is more common, and a certain number of bugs is assumed and almost mathematically certain to exist.
    In practice you often stop when the bolshy git of a project manager can con the users into accepting the product, however tulip it is. The trouble with statistical approaches is that you have to know precisely what units make up the statistics. A 'bug', loosely classified as 'showstopper, blocking, serious, minor, cosmetic' or something like that, is actually difficult to quantify in a consistent manner, and even when you quantify it in terms of financial impact you might find that doing the impact analysis is more expensive and time-consuming than just solving it or living with it. The same goes for test-case-based statistics; a test case can range from 'click with the left mouse button on field x; the cursor should be active in field x' to 'run batch y and compare output to last month's batch y; any differences should be explainable'. That means that statistics like 'there are 50 findings for every 1000 test cases' are often meaningless.

    Really it's about assessing risk and agreeing among a group of users, admins, builders and testers that it's a responsible risk to take the product into production or ship it to the client. One way is to organise testing into exploratory sessions, where an experienced tester and an experienced user work together according to a charter describing what areas of the product they're going to test, what kind of critical behaviour they'll be observing and how long they're going to work. You can then look at the progress through several rounds of test sessions to see how many bugs you find per session or per hour; when this number has fallen significantly and starts to flatten out, or shows no more serious findings, you can agree to move on. Imprecise, but experience suggests it can be very effective. Again, however, you can ask 'how experienced is the tester/user? Is the tester on form today? Have they become jaded from repeating the same thing again and again?' But this is what makes it a profession; it involves judgment calls based on knowledge AND experience.
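
    As a rough illustration of that 'bugs per session' curve, here's a minimal sketch in Python; the Session record, the looks_safe_to_stop check, and the window and threshold numbers are all invented for the example, not a real tool or anyone's actual process.

    Code:
    from dataclasses import dataclass

    @dataclass
    class Session:
        charter: str      # what area/behaviour the pair agreed to test
        hours: float      # time-boxed length of the session
        bugs_found: int   # serious findings logged during the session

    def discovery_rates(sessions):
        """Bugs found per hour, in the order the sessions were run."""
        return [s.bugs_found / s.hours for s in sessions]

    def looks_safe_to_stop(sessions, window=3, threshold=1.0):
        """Crude flattening check: the last `window` sessions each found
        fewer than `threshold` bugs per hour. The numbers are judgment
        calls, not science; that's exactly the point made above."""
        rates = discovery_rates(sessions)
        return len(rates) >= window and all(r < threshold for r in rates[-window:])

    sessions = [
        Session("invoice entry, boundary values", 2.0, 7),
        Session("batch y vs last month's batch y", 3.0, 4),
        Session("invoice entry, regression of fixes", 2.0, 1),
        Session("user admin screens, broad sweep", 2.0, 1),
        Session("end-to-end happy path", 2.0, 0),
    ]

    print(discovery_rates(sessions))     # falls and flattens: 3.5, 1.33, 0.5, 0.5, 0.0
    print(looks_safe_to_stop(sessions))  # True: time to agree whether to move on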
    Last edited by Mich the Tester; 5 December 2008, 10:59.
    And what exactly is wrong with an "ad hominem" argument? Dodgy Agent, 16-5-2014



      #22
      "You can prove bugs exist, you cannot prove they don't exist"

      Just a little something I like to say now and again.



        #23
        Originally posted by Mich the Tester View Post
        In practice you often stop when the bolshy git of a project manager can con the users into accepting the product, however tulip it is. The trouble with statistical approaches is that you have to know precisely what units make up the statistics. A 'bug', loosely classified as 'showstopper, blocking, serious, minor, cosmetic' or something like that, is actually difficult to quantify in a consistent manner, and even when you quantify it in terms of financial impact you might find that doing the impact analysis is more expensive and time-consuming than just solving it or living with it. The same goes for test-case-based statistics; a test case can range from 'click with the left mouse button on field x; the cursor should be active in field x' to 'run batch y and compare output to last month's batch y; any differences should be explainable'. That means that statistics like 'there are 50 findings for every 1000 test cases' are often meaningless.

        Really it's about assessing risk and agreeing among a group of users, admins, builders and testers that it's a responsible risk to take the product into production or ship it to the client.
        Ah, well it sounds like you know your stuff.



          #24
          Originally posted by Mich the Tester View Post
          In practice you often stop when the bolshy git of a project manager can con the users into accepting the product, however tulip it is.
          Would such products be 'testees'?



            #25
            Originally posted by Doggy Styles View Post
            Would such products be 'testees'?
            I'm afraid that joke is so old it's expired and starting to smell. Still worth a chuckle though.
            And what exactly is wrong with an "ad hominem" argument? Dodgy Agent, 16-5-2014



              #26
              Originally posted by Mich the Tester View Post
              I'm afraid that joke is so old it's expired and starting to smell. Still worth a chuckle though.
              Oh. Well, not being a tester I've not heard it before. Or, more than likely I have but my brain is addled.



                #27
                Originally posted by BrilloPad View Post
                Will we rename the great depression or find a new name for this one?
                The over-inflated depression...?
                Gas masks don't fit snails...



                  #28
                  Originally posted by BrilloPad View Post
                  But the original great depression of 1873 was renamed the long depression when the great depression came along.

                  Will we rename the great depression or find a new name for this one?
                  How about the mother of all depressions?

                  The BBC have been calling it "the downturn". I think Alastair Campbell told them to.



                    #29
                    C++

                    C++ has got to be the most tulip language of all time, and the number of very tulip systems needing constant patching is never-ending. It nearly killed me last time I went back to C++, but I'll do it again if things get really dire.
                    That's because you never learnt to code properly in more advanced languages.
                    Stay with Visual Basic and Java for the time being.



                      #30
                      How are we defining a code monkey? Is it anyone who just does programming? Or someone who only does simple, repetitive tasks?
                      Originally posted by MaryPoppins
                      I'd still not breastfeed a nazi
                      Originally posted by vetran
                      Urine is quite nourishing

