Previously on "SQL Server Caching"
-
Originally posted by DimPrawn: Good question.
Now, if SQL Server is on the same box as your app/webserver, then the overhead of accessing data cached by SQL Server is going to be small, but still larger than a local cache, due to opening a TCP or named pipe connection, creating the TDS datastream to SQL Server, SQL Server looking up the query plan and retrieving the cached data, converting the results back to the TDS protocol, getting them into ADO.NET objects and processing the data readers to get the data out.
If the SQL Server instance is on another box on the network, then you can add network latency to this too.
So, the answer is: hitting the database is always going to be slower, and perhaps much slower, than accessing a memory cache in the same address space as the IIS process, and for a heavily loaded website this is going to be significant.
Also, a cache can be scaled using a distributed cache, whilst a SQL Server database is a bottleneck that scales up rather than out.
This is why products such as Alachisoft's NCache (distributed caching for .NET) exist, otherwise everyone would just hit the database all the time.
-
Originally posted by fullyautomatix: Are you saying the results are cached, or the query execution plan?
I am not hot on DBA stuff, but I recall that SQL Server loads a lot of data pages into RAM for quick access, and the size of this cache in RAM depends on how big the RAM is.
SQL Server will, however, use whatever memory it has available to keep recently used data in memory, so it's easy to get confused. The best documentation I can find on the life cycle of a cached query plan states that it becomes invalid if the table is changed or a lot of data is inserted, so I suspect the real answer is that the core of the query is what's stored in memory.
Either way, the original argument for relying on SQL Server is irrelevant. If you want to cache information you need to do it as close to the system that consumes the data as possible. When Stack Overflow first started, they discovered that simply caching all requested information for two seconds was enough to more than halve the load on the single server they used for everything at the time.
Last edited by eek; 16 May 2011, 12:51.
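A minimal sketch of that two-second trick, assuming a .NET app with System.Runtime.Caching available; the key name and the loader delegate are invented for illustration:

using System;
using System.Runtime.Caching;

// Micro-caching: everyone gets the same answer for up to 2 seconds,
// so a hammered page touches the database at most once per 2 seconds.
static class MicroCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static T GetOrLoad<T>(string key, Func<T> load) where T : class
    {
        var hit = (T)Cache.Get(key);
        if (hit != null) return hit;

        T fresh = load(); // the expensive database call (must not return null here)
        Cache.Set(key, fresh, DateTimeOffset.Now.AddSeconds(2));
        return fresh;
    }
}

Usage would be something like var model = MicroCache.GetOrLoad("front-page", LoadFrontPageFromDb); where LoadFrontPageFromDb is whatever currently hits the database on every request.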
-
Originally posted by MarillionFan: I was always under the impression that the cache is created in SQL Server after the query has run. So, for example, select count(*) from dbo.Users where username = 'Mad as a hatter' will run normally the first time, and the result is then cached. Run the query again and, because the result is cached, the response is instant. Now run the same query with username = 'MF is cool': the first time, it needs to run and be cached, and so on.
For BI reporting we tend to set up an automated caching routine for the most common requests, so when a user runs one they get a fast response.
So effectively the first user takes the hit unless you implement a proactive caching plan. (That's how I understand it to work.)
I am not hot on DBA stuff, but I recall that SQL Server loads a lot of data pages into RAM for quick access, and the size of this cache in RAM depends on how big the RAM is.
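On that "first user takes the hit" point: if you do cache app-side, you can at least make sure only one request pays for the load. A hedged sketch, not from the thread, with LazyThreadSafetyMode doing the work and the loader again a stand-in:

using System;
using System.Collections.Concurrent;
using System.Threading;

// Per-key lazy cache: the first caller for a key runs the query;
// concurrent callers block on the same Lazy rather than stampeding the DB.
static class FirstHitCache
{
    private static readonly ConcurrentDictionary<string, Lazy<object>> Entries =
        new ConcurrentDictionary<string, Lazy<object>>();

    public static T Get<T>(string key, Func<T> load)
    {
        Lazy<object> entry = Entries.GetOrAdd(
            key,
            _ => new Lazy<object>(() => (object)load(),
                     LazyThreadSafetyMode.ExecutionAndPublication));
        return (T)entry.Value;
    }
}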
-
PS.
All DBAs tell you their database server is brilliant and you don't need anything else (hell, why do you need a webserver? A database is much better than a web server).
They tell you all queries are instantaneous, you can process a trillion transactions per second, and it won't even use any resources, it's that good.
You go along with this, write a heavily loaded web app, and the database server is brought to its knees.
The DBAs then tell you you can't hit the DB like that, don't be a moron, what sort of idiot developer are you?
-
Originally posted by pacharan: An old chestnut, this one. I have the argument in almost every contract but still don't know the definitive answer.
In a scenario where a web app, say, references data that is fairly static in nature, I always propose some caching strategy to avoid unnecessary round trips to the database.
This always prompts some database nut (there's always one) to challenge my plan by saying there is no need to implement this caching because SQL Server caches everything and there is no unnecessary disk IO.
In that case, why does a round trip to the DB perform so badly compared to the method that uses caching when analysed with a performance monitoring tool? Let's assume the DB is on the same box and there are no network issues.
Now, if SQL Server is on the same box as your app/webserver, then the overhead of accessing data cached by SQL Server is going to be small, but still larger than a local cache, due to opening a TCP or named pipe connection, creating the TDS datastream to SQL Server, SQL Server looking up the query plan and retrieving the cached data, converting the results back to the TDS protocol, getting them into ADO.NET objects and processing the data readers to get the data out.
If the SQL Server instance is on another box on the network, then you can add network latency to this too.
So, the answer is: hitting the database is always going to be slower, and perhaps much slower, than accessing a memory cache in the same address space as the IIS process, and for a heavily loaded website this is going to be significant.
Also, a cache can be scaled using a distributed cache, whilst a SQL Server database is a bottleneck that scales up rather than out.
This is why products such as http://www.alachisoft.com/ exist, otherwise everyone would just hit the database all the time.
Last edited by DimPrawn; 16 May 2011, 10:22.
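To put code to that list of steps, here is a cache-aside sketch, assuming System.Runtime.Caching and ADO.NET; the connection string, table name and ten-minute expiry are all invented. The first branch skips every step in the list above; the second pays for all of them.

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Runtime.Caching;

static class CountryLookup
{
    // Illustrative connection string; substitute your own.
    private const string ConnStr = "Server=.;Database=AppDb;Integrated Security=SSPI";
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static IList<string> GetCountries()
    {
        // Fast path: the data is already in the IIS worker process's address space.
        var cached = (IList<string>)Cache.Get("countries");
        if (cached != null) return cached;

        // Slow path: open a connection, send TDS, let SQL Server find the plan,
        // stream TDS results back, and materialise them through a data reader.
        var result = new List<string>();
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand("SELECT Name FROM dbo.Country", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    result.Add(reader.GetString(0));
        }

        Cache.Set("countries", result, DateTimeOffset.Now.AddMinutes(10));
        return result;
    }
}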
-
Originally posted by pacharan: An old chestnut, this one. I have the argument in almost every contract but still don't know the definitive answer.
In a scenario where a web app, say, references data that is fairly static in nature, I always propose some caching strategy to avoid unnecessary round trips to the database.
This always prompts some database nut (there's always one) to challenge my plan by saying there is no need to implement this caching because SQL Server caches everything and there is no unnecessary disk IO.
In that case, why does a round trip to the DB perform so badly compared to the method that uses caching when analysed with a performance monitoring tool? Let's assume the DB is on the same box and there are no network issues.
For BI reporting we tend to set up an automated caching routine for the most common requests, so when a user runs one they get a fast response.
So effectively the first user takes the hit unless you implement a proactive caching plan. (That's how I understand it to work.)
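A sketch of what such a proactive routine might look like app-side; the report names, intervals and the runReport delegate are all made up:

using System;
using System.Runtime.Caching;
using System.Threading;

// Cache warmer: re-runs the most common reports on a timer so the
// results are always hot and no real user takes the first-run hit.
class ReportCacheWarmer
{
    private static readonly MemoryCache Cache = MemoryCache.Default;
    private readonly Func<string, object> _runReport; // executes the real query
    private readonly Timer _timer;

    public ReportCacheWarmer(Func<string, object> runReport)
    {
        _runReport = runReport;
        _timer = new Timer(Refresh, null, TimeSpan.Zero, TimeSpan.FromMinutes(15));
    }

    private void Refresh(object state)
    {
        foreach (var report in new[] { "sales-by-region", "top-customers" })
            Cache.Set(report, _runReport(report), DateTimeOffset.Now.AddMinutes(20));
    }
}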
-
Originally posted by pacharan: An old chestnut, this one. I have the argument in almost every contract but still don't know the definitive answer.
In a scenario where a web app, say, references data that is fairly static in nature, I always propose some caching strategy to avoid unnecessary round trips to the database.
This always prompts some database nut (there's always one) to challenge my plan by saying there is no need to implement this caching because SQL Server caches everything and there is no unnecessary disk IO.
In that case, why does a round trip to the DB perform so badly compared to the method that uses caching when analysed with a performance monitoring tool? Let's assume the DB is on the same box and there are no network issues.
There is still the round trip to the database, the milliseconds it takes for the database to return the static data, and the additional code on the website to handle the database query to consider.
The way IIS handles caching may also be a significant advantage, but as you don't explicitly state how you are caching the data, it's irrelevant to the basic argument.
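If the IIS/ASP.NET route is what's meant, a sketch of declarative output caching in an MVC controller; the controller, action and duration are illustrative:

using System.Web.Mvc;

public class ProductsController : Controller
{
    // ASP.NET serves the rendered response from its output cache for
    // 60 seconds; during that window requests never reach this action,
    // let alone the database.
    [OutputCache(Duration = 60, VaryByParam = "none")]
    public ActionResult Index()
    {
        return View(); // would normally render a model loaded from the database
    }
}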
-
If your app is on a separate box to your SQL Server, it makes sense to cache on the app server, doesn't it?
-
I know diddly squat about this area of SQL Server, but I have come across this argument before on non-MS platforms.
Generally speaking, those who assume that stuff that should be cached is getting cached are wrong. The only way to find out for your particular workload mix is some good old-fashioned monitoring.
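In that spirit, a crude way to measure rather than assume, using Stopwatch; the fake loader is a stand-in for whatever query you are arguing about:

using System;
using System.Diagnostics;
using System.Runtime.Caching;

static class CacheVsDbTimer
{
    static void Main()
    {
        // Stand-in for the real database call you want to compare against.
        Func<object> loadFromDatabase = () => "pretend this ran a query";

        var cache = MemoryCache.Default;
        cache.Set("key", loadFromDatabase(), DateTimeOffset.Now.AddMinutes(5));

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < 100000; i++) cache.Get("key");
        sw.Stop();
        Console.WriteLine("100,000 cache reads: {0} ms", sw.ElapsedMilliseconds);

        sw.Restart();
        for (int i = 0; i < 100; i++) loadFromDatabase(); // swap in the real call
        sw.Stop();
        Console.WriteLine("100 'database' calls: {0} ms", sw.ElapsedMilliseconds);
    }
}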
-
SQL Server Caching
An old chestnut, this one. I have the argument in almost every contract but still don't know the definitive answer.
In a scenario where a web app, say, references data that is fairly static in nature, I always propose some caching strategy to avoid unnecessary round trips to the database.
This always prompts some database nut (there's always one) to challenge my plan by saying there is no need to implement this caching because SQL Server caches everything and there is no unnecessary disk IO.
In that case, why does a round trip to the DB perform so badly compared to the method that uses caching when analysed with a performance monitoring tool? Let's assume the DB is on the same box and there are no network issues.