
BA contractor screw up


    #41
    Originally posted by sasguru View Post
    Interesting, thanks for the explanation, this not being my area.
    But doesn't all this assume that BA is paying for state-of-the-art kit and processes, as opposed to the cost-cutting they've been doing for years?
    If their system was set up in the 80s and they've not upgraded, could it be they've not been following best practice?
    It's not the first data problem they've had, which suggests their infrastructure is seriously crap.
    Not really. Most of the things I am explaining are standard runbook items for any DC provider. Even if the batteries are from a U-boat, they will have testers in regularly and everyone will know their condition.

    I think the DC service management company have also called BS on the WW statement.



      #42
      "The system was designed and installed there in the mid-1980s. "



        #43
        Originally posted by AtW View Post
        "The system was designed and installed there in the mid-1980s. "
        Does that put Suity out of the frame?
        The Chunt of Chunts.



          #44
          BA to blame computer meltdown on IT engineer | Daily Mail Online

          The Uninterruptible Power Supply (UPS) system that broke down was at Boadicea House at Heathrow - a building built for state-owned BOAC. The system was designed and installed there in the mid-1980s.

          It failed on Saturday at around 8.30am.
          It appears that alternative power sources including batteries and a diesel generator may also have failed.
          BA's emergency procedures say that the power would then be restored 'gradually' with its other data centre at Heathrow - Comet House - taking 'up the slack'.
          But a source told the Telegraph that power 'resumed in an uncontrolled fashion', damaging servers containing all sorts of data about flights, passengers and even flight paths.



            #45
            Originally posted by AtW View Post
            "The system was designed and installed there in the mid-1980s. "
            So were most nuclear bunkers? I thought most of them still required 5.25-inch floppy drives?



              #46
              Originally posted by AtW View Post
              "The system was designed and installed there in the mid-1980s. "
              "A people that elect corrupt politicians, imposters, thieves and traitors are not victims, but accomplices," George Orwell



                #47
                Originally posted by BrilloPad View Post
                So were most nuclear bunkers? I thought most of them still required 5.25 inch floppy drives?
                I reinstalled <A Large Telecom Provider's> Production Reporting system reports (200+) from a (possibly illegal for me to have) personal USB backup once; their backup tape machine had "broken".

                The IT director was extremely grateful.

                The Chunt of Chunts.



                  #48
                  El Reg has more detail.

                  https://www.theregister.co.uk/2017/0...configuration/

                  Bill Francis, Head of Group IT at BA's owner International Airlines Group (IAG), has sent an email to staff saying an investigation so far had found that an Uninterruptible Power Supply to a core data centre at Heathrow was overridden on Saturday morning. He said: "This resulted in the total immediate loss of power to the facility, bypassing the backup generators and batteries. This in turn meant that the controlled contingency migration to other facilities could not be applied.

                  "After a few minutes of this shutdown of power, it was turned back on in an unplanned and uncontrolled fashion, which created physical damage to the system, and significantly exacerbated the problem."

                  EPO (Emergency Power Off)??

                  Apparently the UPS units were recent ones from Socomec:

                  BoHo's uninterruptible power supplies (UPSes) were replaced three years ago with equipment from electrical firm Socomec, which refused to comment for this article.
                  Always forgive your enemies; nothing annoys them so much.

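                  For what it's worth, here is a minimal sketch of the gap between the "controlled contingency migration" that email describes and the "unplanned and uncontrolled" power-on that actually happened. This is not BA's actual procedure; every group name, timing and function below is hypothetical.

                  import time

                  # Hypothetical rack groups in dependency order: core network first,
                  # then storage, then databases, then application servers. Energising
                  # everything at once risks a large inrush current and servers booting
                  # before the things they depend on are back.
                  POWER_UP_ORDER = [
                      ("network-core", 120),        # seconds to settle before the next stage
                      ("storage-arrays", 300),
                      ("database-servers", 300),
                      ("application-servers", 60),
                  ]

                  def staged_power_up(switch_on, health_check):
                      """Bring groups up one stage at a time, checking each stage before
                      energising the next. switch_on and health_check are stand-ins for
                      whatever the building management system actually exposes."""
                      for group, settle_seconds in POWER_UP_ORDER:
                          switch_on(group)
                          time.sleep(settle_seconds)    # let inrush current subside
                          if not health_check(group):
                              raise RuntimeError(f"{group} failed post-power checks; halting")

                  # The "uncontrolled fashion" in the report amounts to skipping all of
                  # the above: every circuit re-energised at once, with no settle time
                  # and no checks between stages.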


                    #49
                    Bobspud describes a modern DC well.
                    Diseasex may be nearer the mark if indeed the site is positively ancient, but I can't imagine why it would be.

                    I am not convinced an "IT Engineer" would be responsible, though.
                    JCI (now CBRE, I think) would actually use an FM plus electrical staff/contractors for any HV/PDU/UPS work, probably a small herd of them.
                    They are also at pains to go through method statements, H&S docs etc. ad nauseam before even picking their noses.

                    So whatever happened I don't think we are seeing the full story yet.
                    No criminal act has occurred and share prices need protecting, so I think a suitable story is being spun.

                    However, maybe, just maybe, they had a big red physical or virtual button!

                    An ex-client had a big red button on the wall in their DC in Henley.
                    It was alongside a ramp up to a higher level in the room; it was not a clever place to put it.

                    One day, apparently, someone 'protected' it with a polystyrene cup.
                    If you hit a polystyrene cup it squashes up, doesn't it.
                    A few days later, yep, a visiting copier engineer gave it a wallop!
                    Room went dark.

                    Same client, this time in London.
                    JCI FM, who should have known better, let in a bunch of sparks to carry out an earth bonding exercise on some new racks in a very large machine room.
                    Everything went fine until, oops.
                    Room went very dark.

                    Then there is the total dick scenario, which I bet you have all seen.
                    I worked for a while for F***x at Stansted.
                    All their comms and local NetWare 4.1 servers ran via multiple extension leads dangling over a metal separation wall to a three-way adapter using a single 13A socket.
                    Before it died it had smouldered for probably years, and you could see how warm the Bakelite had been getting.
                    It went pop on my day one!

                    You only need one Cockwomble to create an Outage.
                    So now I am worried, am I being deceived, just how much sugar is really in a spoonful!



                      #50
                      Originally posted by vetran View Post
                      El Reg has more detail.

                      https://www.theregister.co.uk/2017/0...configuration/

                      EPO (Emergency Power Off)??

                      Apparently the UPS units were recent ones from Socomec
                      So in effect they tried to turn it off and on again to fix a bug, and failed.

                      I imagine someone logged a call saying x was not working and the first-line guy asked if they had tried turning it off and on again to see if that solved the problem.

                      👀

