
So who did BA outsource their IT to?


    Originally posted by Mordac View Post
    Well plagiarised Mr Troll.
    To be fair, I've worked alongside contractors on here who have worked for the same clients, so if something like that happened, and I had sufficient information from my sources, I would copy their response.
    "You’re just a bad memory who doesn’t know when to go away" JR



      Originally posted by BigRed View Post
      I'm 5 years out of date with bank DRs, but you used to be lucky if the switch happened within 4 hrs of the decision being made, which itself took 3-4 hrs. Given the chances of it actually being needed and the cost of regular testing, I doubt it has improved significantly. I did work for one of the largest banks, and we were pushing the limits of the technology even when everything worked; worst-case DR would seriously stress it.

      DR testing often has a period of checking that everything is in place and worked correctly before commencement, such as checking that all the backups actually worked before DR is invoked.

      I do recall one incident when standalone testing of the UPS generators blew everything because they weren't isolated.
      This.

      In many big organisations (including one that wasn't able to hive off a certain portion of itself), they know that their DR plan is not worth the paper it's written on. The infrastructure is too old and too poorly architected and designed (and sometimes not even that: servers are just shoved wherever there's space) to cope with a real event. Fixing it would cost far too much money for shareholders' liking.

      They're too scared to run a live DR test and most times it's a paper shuffling exercise to get past the auditor.

      The manager's only hope is that it's not actually needed until they've left the company.
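
      As an aside, the "check that the backups actually worked before DR is invoked" point in the quote above is exactly the sort of thing that can be scripted rather than left as a paper exercise. A minimal sketch, purely illustrative; the manifest layout, paths and 24-hour freshness threshold here are my own assumptions, not anything any bank or BA actually runs:

      Code:
      # Hypothetical pre-DR readiness check (illustration only): confirm that
      # recent backups exist and still match their recorded checksums before
      # anyone declares a failover "go".
      import hashlib
      import json
      import time
      from pathlib import Path

      MAX_BACKUP_AGE_SECS = 24 * 3600  # assumption: daily backups

      def backup_is_usable(manifest_path: Path) -> bool:
          """True if the backup described by this manifest looks restorable."""
          manifest = json.loads(manifest_path.read_text())
          backup_file = Path(manifest["file"])
          if time.time() - manifest["created_at"] > MAX_BACKUP_AGE_SECS:
              return False                      # too old to trust
          if not backup_file.exists():
              return False                      # backup file has gone missing
          digest = hashlib.sha256(backup_file.read_bytes()).hexdigest()
          return digest == manifest["sha256"]   # detect silent corruption

      def ready_for_failover(manifest_dir: Path) -> bool:
          manifests = list(manifest_dir.glob("*.manifest.json"))
          return bool(manifests) and all(backup_is_usable(m) for m in manifests)

      if __name__ == "__main__":
          print("DR go/no-go:", ready_for_failover(Path("/backups/manifests")))

      In practice the restore itself would be tested too (into a scratch environment), but even a check this crude catches the "backups have been silently failing for months" case before it becomes the DR plan's problem.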
      "I can put any old tat in my sig, put quotes around it and attribute to someone of whom I've heard, to make it sound true."
      - Voltaire/Benjamin Franklin/Anne Frank...



        It's interesting that the Chief Exec says it's nothing to do with the offshoring as far as the DR is concerned.

        He rebuffed a claim from the GMB that the situation had been worsened by the outsourcing of IT jobs to India. “I can confirm that all the parties involved around this particular event have not been involved in any type of outsourcing in any foreign country. They have all been local issues around a local data centre who has been managed and fixed by local resources,” he said.
        However, I will be interested to hear why DR really had to be invoked in the first instance.
        The Chunt of Chunts.



          When I was working in air traffic control it was typical for many of the redundant systems to operate in hot standby, or in parallel operation. As I recall, an interruption of service of greater than 90 seconds per annum was unacceptable. We even went to the extent of ensuring that components used in parallel operating systems avoided nearby serial numbers, so as to avoid manufacturing flaws occurring at the same time.

          This is probably all Dutch to other industries. The practices above are indeed EU law; it'll be interesting to see if they are maintained when the UK is no longer part of the EU. Knowing West Drayton as I did years ago...
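
          For context, that 90-seconds-per-annum budget works out at roughly 99.9997% availability, tighter than the usual "five nines". A quick back-of-the-envelope check, using only the figure quoted above:

          Code:
          # Rough arithmetic behind the "90 seconds per annum" figure above.
          SECONDS_PER_YEAR = 365 * 24 * 3600   # 31,536,000
          ALLOWED_DOWNTIME = 90                # seconds per year, per the post

          availability = 1 - ALLOWED_DOWNTIME / SECONDS_PER_YEAR
          print(f"Implied availability: {availability:.6%}")  # ~99.999715%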
          "Never argue with stupid people, they will drag you down to their level and beat you with experience". Mark Twain



            I'm not resigning. It's only £200 million and everybody else is as bad

            Originally posted by cojak View Post
            This.

            In many big organisations (including one that wasn't able to hive off a certain portion of itself), they know that their DR plan is not worth the paper it's written on.
            I was part of the transfer of systems from Churchill to Direct Line (i.e. RBS). Whilst I totally agree with your assessment, it was ironic that I would have put my faith in Churchill's systems in the event of a major outage (prior to RBS taking everything back to the Stone Age).

            Other big companies I have worked for had recovery that was known to fail. At least one of them had thought through what might have to happen: it was FMCG manufacturing, and recovery would have involved moving the product out into the car park and marking things with Post-it notes. This is the one I advised not to go to SAP. A few years later they duly migrated to SAP and suffered a major outage, one so major that the company had to be bought out.

            The point, however, is that even though there will be problems, you should be able to restore partial and increasing functionality. I was at Standard Chartered when the IRA blew up Dashwood House in the City. It took us a couple of weeks to restore the services from destroyed infrastructure, but the business carried on right from the start.

            What seems to be lacking at BA is people managing the situation. You could argue that (like their aircraft and their Quick Reference Manuals) there should have been multiple redundancy and carefully developed recovery procedures to cover all likely scenarios, but in the absence of such basic professionalism I would be looking at under-resourcing and haphazard outsourcing as the fatal management flaw here, leading to an inability to intervene and hand-hold the systems back into some sort of functioning.

            I do hope and pray that one day the IT boss will get sacked for taking bonuses by running big risks, big risks that backfire on innocent customers.
            "Don't part with your illusions; when they are gone you may still exist, but you have ceased to live" Mark Twain



              Originally posted by Mordac View Post
              Well plagiarised Mr Troll.
              Just a random coincidence; I hadn't checked others' responses.
              How fortunate for governments that the people they administer don't think



                I bet you the use of a blockchain would have prevented this failure in the first place. With multiple copies of the ledger, backups become irrelevant. The UK needs to get on board with this tech pronto.
                "Never argue with stupid people, they will drag you down to their level and beat you with experience". Mark Twain



                  Originally posted by Cirrus View Post
                  The point, however, is that even though there will be problems, you should be able to restore partial and increasing functionality. I was at Standard Chartered when the IRA blew up Dashwood House in the City. It took us a couple of weeks to restore the services from destroyed infrastructure, but the business carried on right from the start.
                  +1 (we seem to have a mutual appreciation society going on here!).

                  I was working for a major finance support company when the Bishopsgate IRA bomb went off in the 1990s. Their business continuity plan included a clear-desk policy that was maintained to disciplinary standards. When all of the windows blew out of the offices, theirs was the only company not to have vital company documents blowing in the wind, and they were up and running in leased accommodation by the end of the second day: all staff knew where they had to go and had been contacted by phone to confirm.

                  That CEO needs to be fired for not ensuring this kind of thing was in place at BA.
                  "I can put any old tat in my sig, put quotes around it and attribute to someone of whom I've heard, to make it sound true."
                  - Voltaire/Benjamin Franklin/Anne Frank...



                    Originally posted by scooterscot View Post
                    I bet you the use of a blockchain would have prevented this failure in the first place. With multiple copies of the ledger, backups become irrelevant. The UK needs to get on board with this tech pronto.
                    As Sas would say about most people on here, a little knowledge is a dangerous thing. And the above statement shows once again that you have limited knowledge...

                    Hint: I'm not saying that blockchain doesn't have a purpose for auditing, but it's not a quick solution...
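
                    To put the auditing point concretely: the useful property of a blockchain-style structure is tamper evidence on an append-only log, which says nothing about keeping the systems that write to it running, and it is no substitute for backups or DR. A minimal sketch of a hash-chained log, purely illustrative and not any real product:

                    Code:
                    # Minimal hash-chained audit log (illustration only): each entry
                    # commits to the previous one, so later tampering is detectable.
                    import hashlib
                    import json

                    def add_entry(chain: list, record: dict) -> None:
                        prev_hash = chain[-1]["hash"] if chain else "0" * 64
                        body = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
                        chain.append({"prev": prev_hash, "record": record,
                                      "hash": hashlib.sha256(body.encode()).hexdigest()})

                    def verify(chain: list) -> bool:
                        prev_hash = "0" * 64
                        for entry in chain:
                            body = json.dumps({"prev": prev_hash, "record": entry["record"]},
                                              sort_keys=True)
                            if entry["prev"] != prev_hash:
                                return False
                            if entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
                                return False
                            prev_hash = entry["hash"]
                        return True

                    log: list = []
                    add_entry(log, {"event": "DR invoked", "by": "duty manager"})
                    add_entry(log, {"event": "power restored", "by": "ops"})
                    print(verify(log))                    # True
                    log[0]["record"]["by"] = "someone else"
                    print(verify(log))                    # False: tampering detected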
                    merely at clientco for the entertainment



                      Originally posted by scooterscot View Post
                      When I was working in air traffic control it was typical for many of the redundant systems to operate in hot standby, or in parallel operation. As I recall, an interruption of service of greater than 90 seconds per annum was unacceptable. We even went to the extent of ensuring that components used in parallel operating systems avoided nearby serial numbers, so as to avoid manufacturing flaws occurring at the same time.

                      This is probably all Dutch to other industries. The practices above are indeed EU law; it'll be interesting to see if they are maintained when the UK is no longer part of the EU. Knowing West Drayton as I did years ago...
                      It's not WD anymore; it all moved down to Swanwick (near Fareham) about 10 years ago. Let's hope they didn't outsource that lot to TCS...
                      His heart is in the right place - shame we can't say the same about his brain...

