
DBA question - UPDATE STATISTICS


    DBA question - UPDATE STATISTICS

    FAO: A DBA

    Any DBA wizards on CUK?

    Could use a quick summary and tips on what to look for in a SQL query execution plan, without getting blinded by science.

    I understand index seek and scan but this topic goes over my head.

    Many thanks in advance for any help.
    one day at a time

    #2
    In general for a query plan I look for positioning at the start of the table. If there are 2 of those it's generally trouble.

    Comment


      #3
      Okay folks, I was up most of last night - I'm getting my head around it now (I have to meet with a DBA later...)

      Would appreciate if anyone could verify my basic understanding.

      Updating statistics allows the query optimiser to decide whether to use an index seek or a scan. If a ropey maintenance plan is in place, statistics may become stale.

      UPDATE STATISTICS stores a histogram and density information against each index.

      In this extreme example I try and prove the point:-

      Say in the database I have 1 record in postcode district NE1 before a bulk insert of 100,000 NE records. If statistics aren't updated after the bulk insert, the optimiser may still choose an index seek, whereas after UPDATE STATISTICS it may choose an index scan instead. If it's trying to read 100,000 of 150,000 records, an index scan would be more efficient.
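      A sketch of what that looks like in T-SQL, assuming a made-up table dbo.Addresses with an index IX_Addresses_Postcode (both names hypothetical, for illustration only):

      ```sql
      -- Inspect the histogram and density vector the optimiser will use
      -- for this index (hypothetical table/index names):
      DBCC SHOW_STATISTICS ('dbo.Addresses', 'IX_Addresses_Postcode');

      -- After the bulk insert, refresh the statistics so the histogram
      -- actually reflects the 100,000 new NE rows:
      UPDATE STATISTICS dbo.Addresses IX_Addresses_Postcode WITH FULLSCAN;
      ```

      Comparing the DBCC SHOW_STATISTICS output before and after the bulk insert shows exactly the staleness being described: the old histogram still says there is 1 NE row.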
      Last edited by oscarose; 12 February 2014, 11:11. Reason: edited in error

      Comment


        #4
        https://www.simple-talk.com/sql/perf...er-statistics/

        link removed in error

        Comment


          #5
          It might help if you mentioned which database you are talking about.

          From a "rules of thumb" perspective: finding a single record in a unique index is usually more efficient than finding a range of records in an index, which is usually more efficient than finding a range of records in the table directly. Statistics let the optimiser fine-tune things, so that on those occasions where it's actually more efficient to do the "less efficient" thing, that's what it will do.

          In your example, retrieving 100,000 rows out of 100,001, unless the index contains all the fields the query wants, avoiding the index altogether and scanning the table would likely be more efficient.

          If there are joins involved there are different ways to go about them, and some are more efficient than others. Which ones are available and which one is best depends on the row sources and their relative cardinality. Statistics are again used to estimate and optimise this.
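          The point about the index containing all the fields the query wants is what SQL Server calls a covering index. A sketch, with hypothetical table and column names:

          ```sql
          -- If the query only needs Postcode and CustomerId, a covering index
          -- means the optimiser never has to touch the base table at all:
          CREATE INDEX IX_Addresses_Postcode_Covering
              ON dbo.Addresses (Postcode) INCLUDE (CustomerId);

          -- This query can be satisfied entirely from the index pages
          -- (no key lookups), even for a large range:
          SELECT Postcode, CustomerId
          FROM dbo.Addresses
          WHERE Postcode LIKE 'NE%';
          ```

          Without the INCLUDE column, each matching index entry would need an extra lookup into the table, which is what makes the "less efficient" table scan win at high row counts.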
          While you're waiting, read the free novel we sent you. It's a Spanish story about a guy named 'Manual.'

          Comment


            #6
            Originally posted by doodab View Post
            It might help if you mentioned which database you are talking about.

            From a "rules of thumb" perspective, finding a single record in a unique index is usually more efficient than finding a range of records in an index which is usually more efficient than finding a range of records in a table directly. Statistics let the optimiser fine tune things so that on those occasions where it's actually more efficient to do the "less efficient" thing that's what it will do.

            In your example, retrieving 100,000 rows of 100,001, unless the index contains all the fields the query wants avoiding the index altogether and scanning the table would likely be more efficient.

            If there are joins involved there are different ways to go about it and some are more efficient than others. Which ones are available and which one is best depends on the row sources and their relative cardinality. Statistics will again be used to estimate & optimise this.
            Thanks for the assistance. SQL Server.

            Still reading through various things, it's making more sense - but a highly complex area. Just looking for 'rules of thumb' level of understanding.

            The example was reading 100,001 of 150,000 records, so probably still quicker to use an index scan. Various articles mention various % thresholds for when a scan is used rather than a seek.
            Last edited by oscarose; 12 February 2014, 11:24.

            Comment


              #7
              I know very complex SQL runs quicker when broken down into smaller logical chunks via the use of temporary tables. This logically makes sense.

              However, what are the mechanics in the query optimiser that facilitate this?

              Thanks in advance.

              Comment


                #8
                Originally posted by oscarose View Post
                Thanks for assistance. SQLServer.

                Still reading through various things, it's making more sense - but a highly complex area. Just looking for 'rules of thumb' level of understanding.

                The example was reading 100,001/150,000 records. Probably still quicker to use index scan. Various articles mention various % thresholds in terms of whether scan or seek is used.
                It depends on the size of the records and hence the number of data blocks that need to be read. If the number of index blocks to be read + number of table blocks to be retrieved + computational overhead of using an index comes out as more than the number of blocks that would be read during a table scan, the table scan is going to be more efficient overall.
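                You can see the block (page) counts being described directly in SQL Server, by forcing each plan and comparing logical reads. Hypothetical table and index names again:

                ```sql
                -- Show page-read counts for each statement in the Messages tab:
                SET STATISTICS IO ON;

                -- Force a full scan (INDEX(0) = scan the heap/clustered index):
                SELECT * FROM dbo.Addresses WITH (INDEX(0))
                WHERE Postcode LIKE 'NE%';

                -- Force the optimiser to go via the postcode index instead:
                SELECT * FROM dbo.Addresses WITH (INDEX(IX_Addresses_Postcode))
                WHERE Postcode LIKE 'NE%';

                SET STATISTICS IO OFF;
                ```

                The "logical reads" figures show which plan touches fewer pages; for something like 100,000 of 150,000 rows via a non-covering index, each matching row can cost an extra page read for the lookup, so the scan usually comes out ahead.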

                Comment


                  #9
                  Originally posted by oscarose View Post
                  I know very complex SQL runs quicker when broken down into smaller logical chunks via the use of temporary tables. This logically makes sense.

                  However, what are the mechanics in the query optimiser that facilitate this?

                  Thanks in advance.
                  I'm not really a SQL Server expert TBH, more of an Oracle guy, but I would guess the use of a temp table simplifies the job of figuring out the join order, or saves repeated executions of a subquery. IIRC you have to explicitly write your SQL to use them though, don't you?
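                  A sketch of the kind of rewrite being discussed, with hypothetical tables. Materialising an intermediate result into a temp table gives the optimiser real row counts and statistics for it, rather than a cardinality estimate buried inside a big query:

                  ```sql
                  -- Materialise the intermediate result once:
                  SELECT CustomerId, SUM(Amount) AS Total
                  INTO #CustomerTotals
                  FROM dbo.Orders
                  GROUP BY CustomerId;

                  -- Then join against it. SQL Server creates statistics on the
                  -- temp table, so join order and join type are chosen from
                  -- actual row counts rather than an estimate:
                  SELECT c.Name, t.Total
                  FROM dbo.Customers c
                  JOIN #CustomerTotals t ON t.CustomerId = c.CustomerId;

                  DROP TABLE #CustomerTotals;
                  ```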

                  Comment


                    #10
                    Originally posted by doodab View Post
                    It depends on the size of the records and hence the number of data blocks that need to be read. If the number of index blocks to be read + number of table blocks to be retrieved + computational overhead of using an index comes out as more than the number of blocks that would be read during a table scan, the table scan is going to be more efficient overall.
                    I get that. Good explanation.

                    Originally posted by doodab View Post
                    I'm not really a SQL server expert TBH, more of an oracle guy, but I would guess the use of a temp table simplifies the job of figuring out the join order or saves repeated executions of a subquery. IIRC you have to explicitly write your SQL to use them though don't you?
                    Yes, that's correct. But it boils down to writing good SQL in the first instance and not relying on the optimiser to sort out any old 'spaghetti' SQL you throw at it.

                    Thanks doodab, I'm DBAed out now...

                    All good stuff, much appreciated.

                    Comment
