With databases, SY, it is normally necessary to update the live server eventually. But when you do, the process is: back up right before, notify and disconnect all users, take down the application services, apply the update, then bring it all back up, and only after you've run through the whole process on test.
Anyway, that's what I made one client attempt this week. Their test box is still down because they can't work out how to restart the service. I'm bloody glad I didn't go gung-ho and try my changes on live first.
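The sequence above can be sketched as a runbook script. To be clear, this is a minimal illustration, not anyone's actual procedure: the database name, service name, and commands (`pg_dump`, `systemctl`, `wall`) are all hypothetical stand-ins, and the script defaults to a dry run that only prints each step.

```shell
#!/bin/sh
# Illustrative runbook for the live-update process described above.
# All names (appdb, app.service) and commands are hypothetical placeholders.
# DRY_RUN=1 (the default) prints each step instead of executing it.
set -eu

DRY_RUN="${DRY_RUN:-1}"
STEPS=""

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "WOULD RUN: $*"
        STEPS="$STEPS $1"    # record the command name for inspection
    else
        "$@"
    fi
}

# 1. Back up right before the change.
run pg_dump appdb -f /backups/appdb-pre-update.sql

# 2. Notify and disconnect all users (the mechanism is site-specific).
run wall "appdb maintenance starting - please log off now"

# 3. Take down application services so nothing writes mid-update.
run systemctl stop app.service

# 4. Apply the update (already rehearsed end-to-end on the test box).
run psql appdb -f update.sql

# 5. Bring it all back up.
run systemctl start app.service
```

Running it as-is just prints the five `WOULD RUN:` lines; the point is the ordering, and that step 4 only happens on live after the same script has been exercised against test.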
You patronising b******
HTH BIDI
Last edited by suityou01; 13 November 2010, 12:42.
I recall when our DB server stopped responding. The failover wouldn't trigger, because the controller was trying to do a clean shutdown and couldn't, as a process was hanging. So we told our good friend Bob to force a shutdown of the DB server, which Bob was reluctant to do, as it wasn't in the SOP. Eventually the ops manager persuaded him: "Pull the bloody plug out if you have to - JFDI".
Only Bob didn't. He shut down ALL the servers.
Oh, and naturally, when we tried to bring it all back, the failover DB server wouldn't come up. Bob just couldn't work it out. Fortunately, Scotty worked out that one of the network cards had failed, knew how to get in via one of the others, and got it back online somewhat faster than the four hours the datacentre were quoting.
13,000 users around the world, unable to log on for an hour. How we laughed.