
Previously on "Anyone come across Requirements Based Testing?"


  • cojak
    replied
    Originally posted by zara_backdog
    Not in the spec = Change Request - that way they are not logged as defects/issues.
    WSS...

    That's what agreed Defect Lists* are for..., that way it's been agreed that the supplier can fix certain defects (within the warranty period) without them clogging up Problem Management.

    No system can be completely bug-free (minor bugs, that is) before Go-Live unless they have no concept of "In-time, In-budget".

    (Apart from the Nuclear and Aircraft industries, but that's why they shovel money at testing in those industries...)

    *managed by Problem Management, obv...



  • cojak
    replied
    Originally posted by CheeseSlice
    In my experience of these projects (I'm not a tester by any means), if something is broken within a product/software but it wasn't defined in the spec/requirements to behave in a particular way, then tough bananas -> 'won't-fix'
    "That'll be another £xxxx, please!"

    Kerching!



  • zara_backdog
    replied
    Originally posted by CheeseSlice
    Apart from when the user doesn't like it; but if the requirements-based tests still pass, then the product passes IMO

    In my experience of these projects (I'm not a tester by any means), if something is broken within a product/software but it wasn't defined in the spec/requirements to behave in a particular way, then tough bananas -> 'won't-fix'
    Not in the spec = Change Request - that way they are not logged as defects/issues.



  • CheeseSlice
    replied
    Originally posted by original PM
    However, if we are talking about an internal dev team then (think about it) the operational profit the company makes pays these guys' wages, and for them to turn around with this sort of shoddy 'it's not been signed for, you cannot have it' attitude means the internal dev team will not last for very long.
    Agree strongly with that for internal teams.

    It is usually large external solution teams with fixed-price contracts that behave like this, and rightly so. The customer will have a UAT team that literally flags hundreds of defects per day to the supplier, all highest severity, all must be fixed now. Many of these may just be spelling typos, a disliked colour, a mis-read spec, etc., with maybe 10% that actually need to be fixed. This is exacerbated by the fact that the UAT team are also external suppliers, justifying their own existence/head count.

    To survive all of this you need a bug council/triage made of teflon, or you go bankrupt.
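    The triage gate described above can be sketched as a filter over incoming UAT reports. The field names, sample defects, and the single in-spec rule below are hypothetical illustrations, not any real defect tracker's schema:

    ```python
    # Hypothetical sketch of a defect triage gate: only behaviour that the
    # signed-off spec defines counts as a defect; everything else is routed
    # to change control. Field names and sample data are illustrative only.

    defects = [
        {"id": "D-1", "summary": "typo on welcome page", "severity": "highest", "in_spec": False},
        {"id": "D-2", "summary": "payment total wrong", "severity": "highest", "in_spec": True},
        {"id": "D-3", "summary": "dislike the button colour", "severity": "highest", "in_spec": False},
    ]

    def triage(defect):
        """Re-classify a raw UAT report before the supplier commits to a fix."""
        return "fix" if defect["in_spec"] else "change-request"

    for d in defects:
        print(d["id"], triage(d))
    # D-1 change-request
    # D-2 fix
    # D-3 change-request
    ```

    In practice the rule set would be richer (severity, contract clauses, warranty status), but the principle is the same: the raw severity the UAT team assigns is an input to triage, not its output.
    
    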



  • original PM
    replied
    In my experience of these projects (I'm not a tester by any means), if something is broken within a product/software but it wasn't defined in the spec/requirements to behave in a particular way, then tough bananas -> 'won't-fix'
    I just love this attitude.

    We did a rather large clear out of our IT department for doing exactly this.

    Yes I do agree you should cover all aspects of the required developments in the start up and requirements gathering stage and as such this should never happen.

    But in real life it does - it is very difficult for people to remember everything they want a system to do, and this will normally get found out in UAT - if not before, when someone has a brainwave at 3 in the morning.

    However, if we are talking about an internal dev team then (think about it) the operational profit the company makes pays these guys' wages, and for them to turn around with this sort of shoddy 'it's not been signed for, you cannot have it' attitude means the internal dev team will not last for very long.

    Which it didn't.

    Never forget who your customer is or you may find yourself without any.



  • CheeseSlice
    replied
    Originally posted by Bluebird
    isn't that a fancy name for UAT?
    Apart from when the user doesn't like it; but if the requirements-based tests still pass, then the product passes IMO

    In my experience of these projects (I'm not a tester by any means), if something is broken within a product/software but it wasn't defined in the spec/requirements to behave in a particular way, then tough bananas -> 'won't-fix'



  • Bluebird
    replied
    Originally posted by speedo
    Hi,

    I am a BA who has come across a testing methodology called Requirements Based Testing. Has anyone worked in an environment which uses Requirements Based Testing? I have checked docs online and I have some understanding, however I have a few questions:

    1. Are the testers/developers involved during the requirements stage in order to help make the requirements clear and easier to test/code (i.e. expected results)?

    2. Can someone who has experience please provide me with a flow of how things work if using Requirements Based Testing? I am thinking it will flow something like this:

    2.1 Requirements Gathering - BA/PM
    2.2 Testers/developers/business review - Testers/developers to remove ambiguity
    2.3 Update Requirements with feedback from above
    2.4 Get sign-off from the business
    2.5 Handover to the developers to code
    2.6 Handover to the testers once the BA has completed system testing.

    Obviously there might be a few iterations of points 2.2 - 2.3

    Any help appreciated!

    Thanks in advance!
    isn't that a fancy name for UAT?



  • speedo
    replied
    Originally posted by foritisme
    Morning Speedo

    Point 2.6 - I would have thought the testers would be involved with the integration / system test before the BA gets involved with business testing / UAT, but using the BA to clarify any points raised in system testing.
    Hi Foritisme
    It depends where you work - I have worked in places where, as a BA, I have been the only tester and needed to conduct the cycle of tests, and other places where testing is managed by a dedicated test team.



  • speedo
    replied
    Thanks for the responses... the reason I asked was because I had seen this on a job spec (currently on the bench). Having said that, after viewing some documents on the web I have come to the following conclusion. RBT uses two techniques:

    1. Ambiguity checks - bringing the testers in early at the requirements stage and getting their input on how each requirement will be tested. If there is any ambiguity in a requirement - say, 'when you click the Help link it opens up a new standard-sized window' - the ambiguity is what counts as 'standard'. Once the BA gets the dimensions, these can be added into the requirement, and therefore testing it becomes easier.

    2. Cause-Effect Graph approach - this is where you put down the inputs and the expected outcomes; where tests overlap, they can be removed. Removing these tests doesn't mean that coverage is reduced. Having said that, if there is a brand new system with many inputs, the cause-effect part will take ages, but then who am I to suggest this does not work.

    As someone pointed out, I think the main purpose is that requirements/test cases are transparent.

    Once again thanks for the help!!!
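    The reduction idea in point 2 can be sketched crudely in Python. The rule below ((a AND b) OR c) and the single-flip sensitivity heuristic are illustrative assumptions for the sketch, not the formal Cause-Effect Graphing technique in full:

    ```python
    from itertools import product

    # Illustrative cause-effect rule (an assumption, not from the thread):
    # the effect fires when (a AND b) OR c, for three boolean causes.
    def effect(a, b, c):
        return (a and b) or c

    # Exhaustive testing would need 2^3 = 8 input combinations.
    all_cases = list(product([False, True], repeat=3))

    def is_sensitive(case):
        """Keep a case only if flipping some single input changes the effect."""
        base = effect(*case)
        for i in range(len(case)):
            flipped = list(case)
            flipped[i] = not flipped[i]
            if effect(*flipped) != base:
                return True
        return False

    reduced = [c for c in all_cases if is_sensitive(c)]
    print(len(all_cases), len(reduced))  # 8 7: (True, True, True) is redundant
    ```

    Dropping (True, True, True) loses nothing here because no single input change flips its outcome; every boundary it sits on is already exercised by a kept case. That is the sense in which removing overlapping tests need not reduce coverage.
    
    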



  • Ardesco
    replied
    And the moral of the story is: let a BA do a BA's work, and let testers do testers' work!!!

    The number of times I have seen people try to blur the roles - usually because some deadline is coming up and they need to find a way to frig it all together in time to hit it - and it has gone tits up, is just not funny.

    People being forced into roles they are not supposed to be doing (be it lack of a person who can do it, or to hit deadlines) is just a way to screw a decent project up...



  • cojak
    replied
    Actually, writing requirements in a clear and concise way so that testers understand what they need to test is rare in my experience, Gonzo...

    Sadly, some of the projects that I've been on in an ITIL capacity often have junior BAs writing up the requirements, as the task is seen as unimportant/easy (go figure). Even when I voiced concerns, this situation didn't change.

    That's when I know to take the money and run...

    (PS - yes, get the testers in at an early stage, but the BA is the person running the show. I've seen one project go belly up when they thought that they could dispense with the BA by getting the testers to write the requirements - what a fook up that turned out to be...)



  • Gonzo
    replied
    Originally posted by speedo
    Hi,

    I am a BA who has come across a testing methodology called Requirements Based Testing. Has anyone worked in an environment which uses Requirements Based Testing? I have checked docs online and I have some understanding, however I have a few questions:

    1. Are the testers/developers involved during the requirements stage in order to help make the requirements clear and easier to test/code (i.e. expected results)?

    2. Can someone who has experience please provide me with a flow of how things work if using Requirements Based Testing? I am thinking it will flow something like this:

    2.1 Requirements Gathering - BA/PM
    2.2 Testers/developers/business review - Testers/developers to remove ambiguity
    2.3 Update Requirements with feedback from above
    2.4 Get sign-off from the business
    2.5 Handover to the developers to code
    2.6 Handover to the testers once the BA has completed system testing.

    Obviously there might be a few iterations of points 2.2 - 2.3

    Any help appreciated!

    Thanks in advance!
    Some people have these crazy ideas surrounding software development that Business Analysts should map out how a new piece of software should behave, and Testers should make sure that the finished software does behave as specified.

    Where I have seen anyone attempt "Requirements Based Testing" what they have done is link all the tests in the test plan back to each of the requirements in the business spec. The test frameworks are therefore produced after the requirements spec with the Business Analyst's input (at this stage it is not always possible to create concrete test plans if the software is still vapourware).

    The testers will also usually review the requirements from a "how the f*k am I supposed to validate that?" point of view.

    Really, all it is is a fancy-arse way of saying "make sure that testing covers everything in the requirements specs"; it is not a new form of testing alchemy.

    Wait, you are all going to tell me that testing against requirements is actually quite a rare thing for anyone to do now aren't you?
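    Linking every test in the plan back to a requirement, as described above, amounts to a traceability matrix. A minimal sketch, with made-up requirement and test IDs:

    ```python
    # Hypothetical requirement and test-case IDs, for illustration only.
    requirements = {
        "REQ-001": "Help link opens a standard-sized window",
        "REQ-002": "Login rejects blank passwords",
        "REQ-003": "Report exports to CSV",
    }

    # Each test case declares which requirement(s) it traces back to.
    test_cases = {
        "TC-01": ["REQ-001"],
        "TC-02": ["REQ-002"],
        "TC-03": ["REQ-002"],
    }

    # The coverage check: any requirement no test traces back to is a gap.
    covered = {req for reqs in test_cases.values() for req in reqs}
    untested = sorted(set(requirements) - covered)
    print(untested)  # ['REQ-003'] - no test exercises the CSV export
    ```

    Real projects keep this mapping in a test management tool rather than a script, but the check is the same: coverage means every requirement has at least one test pointing at it, not that the tests are somehow better.
    
    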



  • blacjac
    replied
    Ahh, got you.

    Yes I completely agree.



  • foritisme
    replied
    Originally posted by blacjac
    Yeah it can be useful on internal projects.

    But I would never ever ever implement risk-based testing on a software product or bespoke system that you are selling.

    To the customer, who is paying for the software, any failure of the system, no matter how small, gets to be an issue and affects confidence in the system if it happens regularly.
    The point I was trying to make is that risk-based testing does not mean you don't test low-risk areas, and what is low risk on one project is not low risk on the next.



  • blacjac
    replied
    Yeah it can be useful on internal projects.

    But I would never ever ever implement risk-based testing on a software product or bespoke system that you are selling.

    To the customer, who is paying for the software, any failure of the system, no matter how small, gets to be an issue and affects confidence in the system if it happens regularly.

