Previously on "SQL Server 2005 - Maintenance Job on Production Table"
I always did it out of office hours; there was some usage but nothing significant. The table would have been locked while the rows were being copied back, but as the transaction took only a second or so, users never noticed and the applications didn't complain.
AtW's method could well be faster for larger amounts of retained data. Personally I'd like to test that sometime. I'm not 100% sure it would be faster, as the DROP command does mean writing to the transaction log (hence the ability to roll it back); I'm just not sure what data is written to it.
Time to build some incredibly large testing tables
edit:
Tested with 5.5 million rows, keeping 10,000. No difference between mine and AtW's methods.
Cheers Space, gone with your method and lock hints.
I thought if you passed in a hint it would override it if needed. Will try the lock hints; that seems better than the stop/rename/delete/copy/start approach, which would cause downtime for some users.
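For reference, this is roughly what a delete with a lock hint looks like; a minimal sketch, assuming a Session table with a SessionDate column (both names are made up, and ROWLOCK is just one example of a hint):

-- ROWLOCK asks the engine to take row locks rather than escalating
-- to a table lock; TOP limits each statement to a small batch.
DELETE TOP (100) FROM Session WITH (ROWLOCK)
WHERE SessionDate < DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0);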
I had a similar issue in the past when dealing with a CODA database.
I did it close to how you currently do it:
1) copy the rows to keep to a temp table
2) truncate the main table
3) copy the rows back from temp to main
4) truncate the temp table
The reason for doing it that way was to minimise writing to the transaction log when deleting a million+ rows of session-specific crap.
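Roughly, as a sketch (the table and column names, Session and SessionDate, are assumptions standing in for the real schema):

BEGIN TRANSACTION;

-- 1) copy the rows worth keeping into a temp table
SELECT * INTO #keep
FROM Session
WHERE SessionDate >= DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0);

-- 2) TRUNCATE is minimally logged, unlike a big DELETE
TRUNCATE TABLE Session;

-- 3) copy the retained rows back (if Session has an IDENTITY column
--    you'd need SET IDENTITY_INSERT ON and an explicit column list)
INSERT INTO Session
SELECT * FROM #keep;

COMMIT TRANSACTION;

-- 4) done with the temp table
DROP TABLE #keep;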
Hi Space,
Cheers for the reply.
Did the truncation and move cause full table locks? If so, how did you keep the app up and running without timing out, or did you have some app_offline time?
Use the rename feature: build the new version of the main table as temp, then rename main to main_old, then rename temp to main, and finally (if it all works) you can drop main_old. It's easy to roll back in case of a mistake, and a lot less copying takes place. If you have indices, it makes sense to build them after the first mass copy has taken place.
If you're constantly getting new session info into the session table, then the following algorithm will be pretty optimal (a rough sketch of the renames follows the list):
1) stop server
2) rename sessions to sessions_old
3) create new empty sessions tables
4) start server
(Steps 1-4 will be very fast because all you're doing is renaming; no locks etc.)
5) hack sessions_old any way you want - can be done offline on another server, just BCP out
6) either BCP the new sessions_old back, or it will just contain the cleaned data already
7) stop server
8) copy NEW data from sessions into sessions_old - this will be fast since in all probability you won't have as many new sessions
9) rename sessions to sessions_new_old
10) rename sessions_old (with updated data) into sessions
11) start server
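As a sketch of the rename steps in T-SQL (sp_rename is the standard SQL Server way to rename a table; the column list below is a made-up placeholder, in practice you'd script out the real table definition):

-- Steps 2-3: swap the live table out
EXEC sp_rename 'sessions', 'sessions_old';
CREATE TABLE sessions (
    SessionID int IDENTITY(1,1) PRIMARY KEY,  -- placeholder columns
    UserID int NOT NULL,
    LastActivity datetime NOT NULL
);

-- Steps 9-10: after cleaning, swap the updated table back in
EXEC sp_rename 'sessions', 'sessions_new_old';
EXEC sp_rename 'sessions_old', 'sessions';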
SQL Server 2005 - Maintenance Job on Production Table
Hi all,
Wonder if anyone can suggest an answer to my problem.
I've currently got a huge Session table in SQL Server 2005, and I need to run some kind of periodic maintenance job to start reducing the number of rows. However, I need to leave the system fully functional and avoid any table locks, e.g. delete from Session where date < today.
I have thought of a job that copies the table to a temp table, does the processing, and identifies the top 100 rows, which it can then delete by primary key, thus avoiding a table lock.
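Roughly what I have in mind, as a sketch (SessionID and SessionDate are made-up stand-ins for the real columns):

-- Identify a small batch of expired rows by primary key
DECLARE @batch TABLE (SessionID int PRIMARY KEY);

INSERT INTO @batch (SessionID)
SELECT TOP (100) SessionID
FROM Session
WHERE SessionDate < DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0);

-- Deleting by primary key keeps the locks narrow
DELETE s
FROM Session s
JOIN @batch b ON b.SessionID = s.SessionID;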
Is there a better way? Will table partitioning help?