Previously on "contracting in great depression II"

  • d000hg
    replied
    Originally posted by rootsnall:
    C++ has got to be the most tulip language of all time, and the number of very tulip systems needing constant patching is never-ending. It nearly killed me last time I went back to C++, but I'll do it again if things get really dire.
    I have a soft spot for C++. It's the language I've used the most and the one I used when I was programming games as a hobby. Sure, it's hard-core, lacks many niceties and doesn't tolerate fools, but when you know it well those qualities give a certain satisfaction - being good at something difficult is always good for the ego.

  • d000hg
    replied
    How are you defining a code monkey? Is it anyone who just does programming, or someone who only does simple, repetitive tasks?

  • sappatz
    replied
    C++

    C++ has got to be the most tulip language of all time, and the number of very tulip systems needing constant patching is never-ending. It nearly killed me last time I went back to C++, but I'll do it again if things get really dire.
    That's because you never learnt to code properly in more evolved languages.
    Stay with Visual Basic and Java for the time being.

  • Doggy Styles
    replied
    Originally posted by BrilloPad:
    But the original great depression of the 1870s was renamed the long depression when the great depression came along.

    Will we rename the great depression or find a new name for this one?
    How about the mother of all depressions?

    The BBC have been calling it "the downturn". I think Alastair Campbell told them to.

  • BrianSnail
    replied
    Originally posted by BrilloPad:
    Will we rename the great depression or find a new name for this one?
    The over-inflated depression...?

  • Doggy Styles
    replied
    Originally posted by Mich the Tester:
    I'm afraid that joke is so old it's expired and starting to smell. Still worth a chuckle though.
    Oh. Well, not being a tester, I've not heard it before. Or, more than likely, I have but my brain is addled.

  • Mich the Tester
    replied
    Originally posted by Doggy Styles:
    Would such products be 'testees'?
    I'm afraid that joke is so old it's expired and starting to smell. Still worth a chuckle though.

  • Doggy Styles
    replied
    Originally posted by Mich the Tester:
    In practice you often stop when the bolshy git of a project manager can con the users into accepting the product, however tulip it is.
    Would such products be 'testees'?

  • TimberWolf
    replied
    Originally posted by Mich the Tester:
    In practice you often stop when the bolshy git of a project manager can con the users into accepting the product, however tulip it is. The trouble with statistical approaches is that you have to know precisely what units make up the statistics. A 'bug', loosely classified as 'showstopper, blocking, serious, minor, cosmetic' or something like that, is actually difficult to quantify in a consistent manner, and even when you quantify it in terms of financial impact you might find that doing the impact analysis is more expensive and time-consuming than just solving it or living with it. The same goes for test-case-based statistics; a test case can range from 'click with the left mouse button on field x; the cursor should be active in field x' to 'run batch y and compare output to last month's batch y; any differences should be explainable'. That means that statistics like 'there are 50 findings for every 1000 test cases' are often meaningless.

    Really it's about assessing risk and agreeing among a group of users, admins, builders and testers that it's a responsible risk to take the product into production or ship it to the client.
    Ah, well it sounds like you know your stuff.

  • minestrone
    replied
    "You can prove bugs exist, you cannot prove they don't exist"

    Just a little something I like to say now and again.
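
    The aphorism is easy to demonstrate. A minimal C++ sketch (the midpoint function and its test values are invented for illustration, not from the thread): every assertion below passes, yet a bug still exists for an input nobody tried, so a green test run proves nothing about the bugs that remain.

        #include <cassert>
        #include <climits>

        // Hypothetical example: intended to return the midpoint of a and b.
        int midpoint(int a, int b) {
            return (a + b) / 2;  // overflows when a + b exceeds INT_MAX
        }

        int main() {
            // Every test here passes, which only proves that no bug shows
            // up for these particular inputs; it proves nothing more.
            assert(midpoint(2, 4) == 3);
            assert(midpoint(-10, 10) == 0);
            // Untried input: midpoint(INT_MAX, INT_MAX) overflows, which is
            // undefined behaviour. The bug exists even though every test is green.
            return 0;
        }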

  • Mich the Tester
    replied
    Originally posted by TimberWolf:
    There's a lot of science in testing; unfortunately a lot of it is carp, and I've forgotten much of it. At what stage do you stop testing? For example, when you don't find a bug at all for x amount of time, or when you only find x bugs in a certain amount of time? With huge software products I expect the latter approach is more common, and a certain number of bugs are assumed and almost mathematically certain to exist.
    In practice you often stop when the bolshy git of a project manager can con the users into accepting the product, however tulip it is. The trouble with statistical approaches is that you have to know precisely what units make up the statistics. A 'bug', loosely classified as 'showstopper, blocking, serious, minor, cosmetic' or something like that, is actually difficult to quantify in a consistent manner, and even when you quantify it in terms of financial impact you might find that doing the impact analysis is more expensive and time-consuming than just solving it or living with it. The same goes for test-case-based statistics; a test case can range from 'click with the left mouse button on field x; the cursor should be active in field x' to 'run batch y and compare output to last month's batch y; any differences should be explainable'. That means that statistics like 'there are 50 findings for every 1000 test cases' are often meaningless.

    Really it's about assessing risk and agreeing among a group of users, admins, builders and testers that it's a responsible risk to take the product into production or ship it to the client. One way is to organise testing into exploratory sessions, where an experienced tester and an experienced user work together according to a charter describing what areas of the product they're going to test, what kind of critical behaviour they'll be observing and how long they're going to work. You can then look at the progress through several rounds of test sessions to see how many bugs you find per session or per hour; when this number has fallen significantly and starts to flatten out, or shows no more serious findings, you can agree to move on. Imprecise, but experience suggests it can be very effective. Again, however, you can ask 'how experienced is the tester/user, is the tester on form today, have they become jaded from repeating the same thing again and again?'. But this is what makes a profession; it involves judgment calls based on knowledge AND experience.
    Last edited by Mich the Tester; 5 December 2008, 10:59.
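
    A minimal C++ sketch of that session-based heuristic (the session figures, tolerance and function name are invented, not from any testing tool): track findings per hour across exploratory sessions and flag when the discovery rate has fallen and then levelled off.

        #include <iostream>
        #include <vector>

        // Hypothetical sketch: findingsPerHour[i] is the finding rate of the
        // i-th exploratory test session. Judge the last three sessions: the
        // rate should have come down and then more or less stopped falling.
        bool rateHasFlattened(const std::vector<double>& findingsPerHour,
                              double flatTolerance) {
            std::size_t n = findingsPerHour.size();
            if (n < 3) return false;                // need a trend to judge
            double a = findingsPerHour[n - 3];
            double b = findingsPerHour[n - 2];
            double c = findingsPerHour[n - 1];
            bool falling = (a >= b) && (b >= c);    // rate has come down
            bool flat = (b - c) <= flatTolerance;   // ...and has levelled off
            return falling && flat;
        }

        int main() {
            std::vector<double> rate = {6.0, 3.5, 1.2, 0.9, 0.8};  // invented figures
            std::cout << (rateHasFlattened(rate, 0.2) ? "agree to move on\n"
                                                      : "keep testing\n");
            return 0;
        }

    As in the post above, the numbers alone don't decide anything; they just give the group of users, admins, builders and testers something concrete to agree a judgment call on.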

  • TimberWolf
    replied
    Originally posted by Mich the Tester:
    Of course. Until then, I'll keep on invoicing.
    There's a lot of science in testing; unfortunately a lot of it is carp, and I've forgotten much of it. At what stage do you stop testing? For example, when you don't find a bug at all for x amount of time, or when you only find x bugs in a certain amount of time? With huge software products I expect the latter approach is more common, and a certain number of bugs are assumed and almost mathematically certain to exist.
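
    Those two stopping rules are easy to sketch. A minimal C++ illustration (the window figures, threshold and function name are invented): with a threshold of one bug the rule is "stop when a window finds no bugs at all"; a larger threshold gives "stop when a window finds fewer than x bugs".

        #include <iostream>
        #include <vector>

        // Hypothetical sketch: bugsPerWindow[i] is the number of bugs found
        // in successive, equal-length test windows (e.g. one day each).
        // threshold = 1 means "stop when a window finds no bugs at all";
        // larger thresholds mean "stop when a window finds fewer than x bugs".
        bool shouldStopTesting(const std::vector<int>& bugsPerWindow, int threshold) {
            if (bugsPerWindow.empty()) return false;  // no data yet: keep testing
            return bugsPerWindow.back() < threshold;
        }

        int main() {
            std::vector<int> found = {12, 7, 5, 2, 0};  // invented figures
            std::cout << (shouldStopTesting(found, 1) ? "stop testing\n"
                                                      : "keep testing\n");
            return 0;
        }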

  • Mich the Tester
    replied
    Originally posted by Doggy Styles:
    That perfect code would be written to a perfect spec, written to a perfect analysis of a perfect requirement.
    Of course. Until then, I'll keep on invoicing.

  • Doggy Styles
    replied
    Originally posted by Mich the Tester:
    You guys have one means of putting me out of work for good. Write perfect code and run it through a perfect compiler to work on a perfect OS.
    That perfect code would be written to a perfect spec, written to a perfect analysis of a perfect requirement.

  • Mich the Tester
    replied
    Originally posted by bobhope:
    Ahh Quality Center, now there's a product. If only it could go more than a couple of hours without crashing the browser.
    Yeah, and you know what a big part of the problem is? It gets so damn busy because of people entering bulltulip findings accompanied by miles of screen dumps. Testers do this either because they lack the self-investigation skills to check their own findings and investigate possible causes, or because they're not allowed to talk with the developers, who are hidden somewhere in India behind a legion of managers. The way I like to work is to show the developer what I've found and how I've found it, then ask him what he thinks about it, reserving judgement as to whether it's a bug; after all, he might have got it right and I might have got it wrong, or we might both have got it wrong, in which case we both learn something valuable from the finding. Ideally nothing should be entered into the findings tool until it's been discussed by the tester and the developer so that both understand what the problem is.

    Amazingly, in all the testing courses, we're told that testers should have good communication skills to avoid making a developer feel personally attacked by a finding, but the methodologies and tools used by large organisations seem designed to prevent that kind of cooperative working.

    In short; let me talk with the developer and show him what's going on, and that way we can work together for a better product. When we've agreed that there's an issue we can put it in QC so that the managers can have their meaningless reports.

    Sorry guys, I could go on about this for months, but I'll spare you. Just wait until my first testing courses go on the market. Alternatively, hire me and I'll tell you how testing really should be done. I'm not cheap, but as Red Adair said, 'if you think hiring a professional is expensive, see what it costs when you hire an amateur'.
