I disagree; it's not very elegant, but it can be the best all-round option if read performance is really important and you can tolerate dirty reads, which applies to many busy websites.
Going from MS SQL to MySQL - How difficult?
Originally posted by TheFaQQer View Post:
Reading uncommitted data is a bad idea. The only time you can rely on uncommitted data is within the same session, before you either commit it or roll back. If there are other sessions relying on your uncommitted data, which is then rolled back, you need to be careful about what users are presented with: are they going to see data that never gets committed? If that uncommitted data is then used to make decisions (either programmatically or manually), then those decisions are based on duff data. MySQL will read the uncommitted data from your own session, but cross-session uncommitted reads are a bad idea.
I don't agree with defining tables on the fly: define the schema, document the schema, then code. Define a temporary table in the schema and use that, rather than creating and dropping objects in code. MySQL doesn't allow it for this very reason.

In MS SQL you can do a dirty read, as it's called, but that's a feature which is only going to be useful in very limited circumstances and is something which, for good reason, you have to explicitly enable within a query.
If you have a high-volume transaction table which is only ever written to, and you can live with the occasional row being read which is subsequently rolled back, then reading uncommitted data has its uses.
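The session-scoping point above can be sketched with a small script. This is a hypothetical illustration using SQLite rather than MS SQL or MySQL (the table name and file path are made up), but the behaviour it shows is the same default: rows inserted inside an open transaction are visible to the session that wrote them, and invisible to every other session until commit.

```python
import os
import sqlite3
import tempfile

# Illustrative SQLite sketch: uncommitted rows are visible only to their own session.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path)
writer.execute("CREATE TABLE orders (id INTEGER)")
writer.commit()

# Insert inside an open transaction -- deliberately no commit yet.
writer.execute("INSERT INTO orders VALUES (1)")

reader = sqlite3.connect(path)
own_view = writer.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
other_view = reader.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(own_view, other_view)  # 1 0 -- the writer sees its own dirty row; the reader does not

# Roll back: as far as any reader is concerned, the row never existed.
writer.rollback()
after_rollback = reader.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(after_rollback)  # 0
```

A dirty read, as discussed above, is precisely opting out of that isolation: another session would see the row before commit, and would have acted on it even after the rollback.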
Likewise, temporary tables are great if you know what you're doing with them. You can get some very impressive performance benefits when working with large volumes of data.
Of course, if the database has been set up by an idiot and the temp DB is stored on the C drive, then you'll run into problems.
You could define the table in the schema, but this means cluttering up the schema with (IMO) unnecessary objects, potentially left with a lot of data in them, which SQL Server will then want to do maintenance on and which will be included in the backups, including log shipping if you're doing any.

Coffee's for closers
Also, re: temp tables. If you have one defined permie (lol) table in the schema to act as a temporary table, and you execute a procedure multiple times at the same time, presumably doing inserts and deletes in this pseudo-temp table to simulate a temp table, you're potentially going to run into performance issues.
Originally posted by doodab View Post:
It has a command line tool as well. It doesn't have anything like the SQL Server GUI tools, at least not that I am aware of. You can use things like the Eclipse database tools, although they aren't the greatest.

Yes it does. Google "MySQL tools". I use MySQL Administrator and MySQL Query Manager. The MySQL Workbench is an abomination, though.
Another option is Navicat, which is free in the "Lite" version for Mac, Linux, or Windows, as well as having a paid version which adds a few extra capabilities.
Originally posted by Robinho View Post:
Also, re: temp tables. If you have one defined permie (lol) table in the schema to act as a temporary table, and you execute a procedure multiple times at the same time, presumably doing inserts and deletes in this pseudo-temp table to simulate a temp table, you're potentially going to run into performance issues.

Depends how you define it. This:
Code:
SQL> create global temporary table temp_table_session
  2  on commit preserve rows
  3  as
  4  select * from scott.emp where 1=0
  5  /

Table created.

would preserve the rows until you log off, while this:

Code:
SQL> create global temporary table temp_table_transaction
  2  on commit delete rows
  3  as
  4  select * from scott.emp where 1=0
  5  /

Table created.

would preserve the rows until you commit, at which point the table gets emptied.
If you define it correctly, then you're not going to have performance issues - or define more than one temporary table?
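The contention worry only applies to a single shared permanent table; a real temporary table is private per session, so concurrent executions don't collide. A minimal sketch of that isolation, using SQLite's TEMP tables as a loose analogy for the session-scoped global temporary table above (the table and path names are invented for illustration):

```python
import os
import sqlite3
import tempfile

# Illustrative SQLite sketch: a TEMP table is private to the connection that
# created it, and is dropped automatically when that connection closes --
# loosely analogous to a session-scoped global temporary table.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

session_a = sqlite3.connect(path)
session_b = sqlite3.connect(path)

session_a.execute("CREATE TEMP TABLE work (n INTEGER)")
session_a.execute("INSERT INTO work VALUES (42)")

a_sees = session_a.execute("SELECT COUNT(*) FROM work").fetchone()[0]  # 1

# session_b has its own temp namespace: its "work" table does not exist,
# so two sessions running the same procedure never touch each other's rows.
try:
    session_b.execute("SELECT COUNT(*) FROM work")
    b_sees_table = True
except sqlite3.OperationalError:
    b_sees_table = False

session_a.close()  # the temp table is dropped automatically here
print(a_sees, b_sees_table)
```

This is why the "one permie table shared by everyone" approach hits contention that proper temp tables avoid by design.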
Originally posted by NickFitz View Post:
Yes it does. Google "MySQL tools". I use MySQL Administrator and MySQL Query Manager. The MySQL Workbench is an abomination though.
Another option is Navicat, which is free in the "Lite" version for Mac, Linux, or Windows as well as having a paid version which adds a few extra capabilities.

From the MySQL Administrator documentation:

MySQL Administrator has reached EOL. See the EOL notice. Please upgrade to MySQL Workbench.

Behold the warranty -- the bold print giveth and the fine print taketh away.
Originally posted by Spacecadet View Post:
In MS SQL you can do a dirty read as it's called but that's a feature which is only going to be useful in very limited circumstances and is something which, for good reason, you have to explicitly enable within a query.
If you have a high volume transaction table which is only ever written to and you can live with the occasional row being read which is subsequently rolled back then reading uncommitted data has its uses.

If the table is only ever written to, then it's not an issue for any reads, whether committed, uncommitted or made up.
Originally posted by Spacecadet View Post:
Likewise, temporary tables are great if you know what you're doing with them. You can get some very impressive performance benefits when working with large volumes of data.

I agree.
Originally posted by Spacecadet View Post:
Of course if the database has been set up by an idiot and the temp DB is stored on the C drive then you'll run into problems.

Yes.
Originally posted by Spacecadet View Post:
You could define the table in the schema but this means cluttering up the schema with IMO unnecessary objects, potentially left with a lot of data in them which SQL Server will then want to do maintenance on and will be included in the backups including log shipping if you're doing any.

Well, if you define the table correctly, then it's not going to fill up: either keep the data until you commit, or keep the data until you log off. Either way, unless there is no concept of logging off, there shouldn't be too many data problems.
Originally posted by TheFaQQer View Post:
Depends how you define it: ... If you define it correctly, then you're not going to have performance issues - or define more than one temporary table?

Scratches head... that's a temporary table.
Originally posted by TheFaQQer View Post:
Well, if you define the table correctly, then it's not going to fill up - either keep the data until you commit, or keep the data until you log off. Either way, unless there is no concept of logging off, there shouldn't be too many data problems.

Tables don't fill up.