
SQL Server 2005 - Maintenance Job on Production Table


    SQL Server 2005 - Maintenance Job on Production Table

    Hi all,

    Wonder if anyone can suggest an answer to my problem.

    Currently got a huge Session table in SQL Server 2005, and I need to perform some kind of maintenance job periodically to start reducing the number of rows. However, I need to leave the system fully functional and avoid any table locks, e.g. delete from Session where date < today.

    I have thought of a job that copies the table to a temp table, does the processing, and identifies the top 100 rows which it can then delete by the primary key, thus avoiding a table lock.
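
    Roughly what I had in mind, with made-up names purely for illustration (SessionID as an INT primary key, LastAccessed as the date column):

    -- 1) snapshot the keys to get rid of into a temp table (NOLOCK so the scan doesn't block anyone)
    SELECT SessionID
    INTO   #ToDelete
    FROM   dbo.Session WITH (NOLOCK)
    WHERE  LastAccessed < DATEADD(dd, DATEDIFF(dd, 0, GETDATE()), 0)   -- i.e. before today

    -- 2) work through the keys 100 at a time, deleting by primary key only
    CREATE TABLE #Batch (SessionID INT PRIMARY KEY)

    WHILE EXISTS (SELECT 1 FROM #ToDelete)
    BEGIN
        INSERT #Batch (SessionID)
        SELECT TOP 100 SessionID FROM #ToDelete

        DELETE s
        FROM   dbo.Session s
        JOIN   #Batch b ON b.SessionID = s.SessionID

        DELETE t
        FROM   #ToDelete t
        JOIN   #Batch b ON b.SessionID = t.SessionID

        TRUNCATE TABLE #Batch
    END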

    Is there a better way? Will table partitioning help?

    Cheers in advance
    whats the lowest you can do this for?

    #2
    I had a similar issue in the past when dealing with a CODA database.

    I did it close to how you're planning to do it:

    copy rows to keep to a temp table
    truncate main table
    copy rows back from temp to main
    truncate temp

    The reason for doing it that way was to minimise writing to the transaction log when deleting a million+ rows of session-specific crap.
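
    In T-SQL it was roughly this (made-up names to match your example, not the real CODA ones):

    -- keep only the rows we still want
    SELECT *
    INTO   #Keep
    FROM   dbo.Session
    WHERE  LastAccessed >= DATEADD(dd, DATEDIFF(dd, 0, GETDATE()), 0)

    TRUNCATE TABLE dbo.Session      -- minimally logged, unlike a huge DELETE

    INSERT INTO dbo.Session
    SELECT * FROM #Keep             -- if there's an identity column you'll need SET IDENTITY_INSERT and a column list

    DROP TABLE #Keep
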
    Coffee's for closers



      #3
      Use the rename feature - build the new version of the main table as a temp one, then rename main into main_old, then rename temp into main, and finally (if it all works) you can drop main_old. It's easy to roll back in case of a mistake, and a lot less copying takes place; if you have indices then it makes sense to build them after the first mass copy has taken place.
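
      A sketch with made-up names (any constraints, triggers and permissions on main have to be recreated on the new table too):

      -- build the cut-down copy under a temporary name
      SELECT *
      INTO   dbo.Session_tmp
      FROM   dbo.Session
      WHERE  LastAccessed >= DATEADD(dd, DATEDIFF(dd, 0, GETDATE()), 0)

      -- create the indices on Session_tmp here, after the mass copy

      -- swap the tables over - a rename is metadata only, so it is quick
      EXEC sp_rename 'dbo.Session', 'Session_old'
      EXEC sp_rename 'dbo.Session_tmp', 'Session'

      -- once you are happy nothing broke:
      -- DROP TABLE dbo.Session_old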

      If you are constantly getting new session info into the session table then the following algorithm will be pretty optimal:

      1) stop server
      2) rename sessions to sessions_old
      3) create new empty sessions tables
      4) start server

      The above (steps 1-4) will be very fast because all you do is renaming - no locks etc.

      5) hack sessions_old any way you want - can be done offline on another server, just BCP out (see the bcp example after this list)
      6) either BCP the cleaned data back into sessions_old, or it will already contain the cleaned data if you hacked it in place
      7) stop server
      8) copy NEW data from sessions into sessions_old - this will be fast since in all probability you won't have as many new sessions
      9) rename sessions to sessions_new_old
      10) rename sessions_old (with updated data) into sessions
      11) start server
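
      For steps 5 and 6 the bcp calls would be something along these lines (database and server names are made up; -n = native format, -T = trusted connection):

      bcp MyDb.dbo.sessions_old out sessions_old.dat -n -T -S MYSERVER
      bcp MyDb.dbo.sessions_old in  sessions_old.dat -n -T -S MYSERVER
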
      Last edited by AtW; 10 May 2007, 15:50.



        #4
        Originally posted by Spacecadet
        I had a similar issue in the past when dealing with a CODA database.

        I did it close to how you currently do it

        copy rows to keep to a temp table
        truncate main table
        copy rows back from temp to main
        truncate temp

        The reason for doing it that way was to minimise writing to the transaction log when deleting a million+ rows of session-specific crap.

        Hi Space,
        Cheers for the reply,

        During the truncation and move, did this cause full table locks? If so, how did you keep the app up and running without timing out, or did you have some app_offline time?
        whats the lowest you can do this for?



          #5
          can't you use lock hints?
          --------------------------------------------------------------------------------

          SA - Is it like a dragons nostril?



            #6
            Originally posted by kramer
            can't you use lock hints?
            Yes, but they are exactly that - hints, not the actual implementation.
            whats the lowest you can do this for?



              #7



                #8
                Originally posted by AtW

                I thought if you passed in a hint it would override it if needed. Will try the lock hints; seems better than the stop/rename/delete/copy/start approach, which would cause downtime for some users.
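
                Was thinking of something along these lines (made-up names again; ROWLOCK/READPAST are only requests, as you say):

                -- delete in small batches, asking for row locks and skipping
                -- rows currently locked by live sessions (READPAST)
                DELETE TOP (100)
                FROM   dbo.Session WITH (ROWLOCK, READPAST)
                WHERE  LastAccessed < DATEADD(dd, DATEDIFF(dd, 0, GETDATE()), 0)
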
                whats the lowest you can do this for?



                  #9
                  Originally posted by HankWangford
                  Hi Space,
                  Cheers for the reply,

                  During the truncation and move, did this cause full table locks? If so, how did you keep the app up and running without timing out, or did you have some app_offline time?
                  I always did it out of office hours; there was some usage but nothing significant. The table would have been locked while the rows were being copied back, but as the transaction took only a second or so, users never noticed and the applications didn't complain.

                  AtW's method could well be faster for larger amounts of retained data. Personally I'd like to test that sometime; I'm not 100% sure it would be faster, as the drop command does mean writing to the transaction log (hence the ability to roll it back) - I'm just not sure what data is written to it.
                  Time to build some incredibly large testing tables

                  edit:
                  Tested with 5.5 million rows, keeping 10,000. No difference between mine and AtW's methods.
                  Last edited by Spacecadet; 11 May 2007, 09:47.
                  Coffee's for closers



                    #10
                    Originally posted by Spacecadet
                    I always did it out of office hours; there was some usage but nothing significant. The table would have been locked while the rows were being copied back, but as the transaction took only a second or so, users never noticed and the applications didn't complain.

                    AtW's method could well be faster for larger amounts of retained data. Personally I'd like to test that sometime; I'm not 100% sure it would be faster, as the drop command does mean writing to the transaction log (hence the ability to roll it back) - I'm just not sure what data is written to it.
                    Time to build some incredibly large testing tables

                    edit:
                    Tested with 5.5 million rows, keeping 10,000. No difference between mine and AtW's methods.

                    Cheers Space, gone with your method and lock hints.
                    whats the lowest you can do this for?

