
Going from MS SQL to MySQL - How difficult?


    #11
    I disagree. It's not very elegant, but it can be the best all-round option if read performance is really important and you can tolerate dirty reads - which is applicable to many busy websites.



      #12
      Originally posted by TheFaQQer View Post
      Reading uncommitted data is a bad idea. The only time you can rely on uncommitted data is within the same session before you either commit it or roll back. If there are other sessions which are relying on your uncommitted data, which is then rolled back, you need to be careful about what users are presented with - are they going to see data that never gets committed? If that uncommitted data is then used to make decisions (either programmatically or manually) then that decision is based on duff data. MySQL will read the uncommitted data from your own session, but cross-session uncommitted reads are a bad idea.

      I don't agree with defining tables on the fly - define the schema, document the schema, code. Define a temporary table in the schema and use that, rather than creating and dropping objects in code. MySQL doesn't allow it for this very reason.
      In MS SQL you can do what's called a dirty read, but that's a feature which is only going to be useful in very limited circumstances and is something which, for good reason, you have to explicitly enable within a query.
      If you have a high-volume transaction table which is only ever written to, and you can live with the occasional row being read which is subsequently rolled back, then reading uncommitted data has its uses.
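
      For anyone following along, a dirty read in T-SQL is enabled per query with a table hint, or per session with an isolation level. A minimal sketch - the table name here is made up:

```sql
-- Per-query table hint: this SELECT alone reads uncommitted rows.
SELECT COUNT(*)
FROM dbo.PageHits WITH (NOLOCK);

-- Or switch the whole session to dirty reads:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT COUNT(*) FROM dbo.PageHits;
```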

      Likewise, temporary tables are great if you know what you're doing with them. You can get some very impressive performance benefits when working with large volumes of data.
      Of course, if the database has been set up by an idiot and tempdb is stored on the C drive then you'll run into problems.
      You could define the table in the schema, but that means cluttering up the schema with (IMO) unnecessary objects, potentially leaving a lot of data in them which SQL Server will then want to do maintenance on, and which will be included in the backups - including log shipping, if you're doing any.
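
      A minimal sketch of the alternative being argued for here - a SQL Server local temp table, which lives in tempdb rather than the user schema and is cleaned up automatically (table and column names are hypothetical):

```sql
-- Local temp table: private to this session, stored in tempdb,
-- dropped automatically when the session (or creating proc) ends,
-- and never part of the user database's backups or log shipping.
CREATE TABLE #staging (
    order_id INT,
    amount   DECIMAL(10, 2)
);

INSERT INTO #staging (order_id, amount)
SELECT order_id, amount
FROM dbo.orders
WHERE processed = 0;

-- ... heavy processing against #staging here ...

DROP TABLE #staging;  -- optional; happens automatically anyway
```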
      Coffee's for closers



        #13
        Also, re: temp tables. If you have one permanent ("permie", lol) table defined in the schema to act as a temporary table, and you execute a procedure multiple times concurrently, with each execution doing inserts and deletes in this pseudo-temp table to simulate a temp table, you're potentially going to run into performance issues.
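
        To illustrate the pattern being criticised (all names hypothetical): concurrent executions share one permanent table and have to tag their rows with a session key, so every caller's inserts and deletes contend for the same pages and indexes:

```sql
-- One shared permanent table standing in for a temp table.
CREATE TABLE work_queue (
    session_id CHAR(36)     NOT NULL,  -- distinguishes concurrent callers
    item_id    INT          NOT NULL,
    payload    VARCHAR(255)
);

-- Each execution inserts under its own key...
INSERT INTO work_queue (session_id, item_id, payload)
SELECT 'a1b2c3-session-key', id, data
FROM source_rows;

-- ...and must remember to clean up, or the table just grows.
DELETE FROM work_queue
WHERE session_id = 'a1b2c3-session-key';
```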



          #14
          Originally posted by doodab View Post
          It has a command line tool as well. It doesn't have anything like the SQL Server GUI tools, at least not that I am aware of. You can use things like the Eclipse database tools, although they aren't the greatest.
          Yes it does. Google "MySQL tools". I use MySQL Administrator and MySQL Query Manager. The MySQL Workbench is an abomination though.

          Another option is Navicat, which is free in the "Lite" version for Mac, Linux, or Windows as well as having a paid version which adds a few extra capabilities.



            #15
            Originally posted by Robinho View Post
            Also, re: temp tables. If you have one permanent ("permie", lol) table defined in the schema to act as a temporary table, and you execute a procedure multiple times concurrently, with each execution doing inserts and deletes in this pseudo-temp table to simulate a temp table, you're potentially going to run into performance issues.
            Depends how you define it:

            Code:
            SQL> create global temporary table temp_table_session
                 on commit preserve rows
                 as
                 select * from scott.emp where 1=0;

            Table created.
            would preserve the rows until you log off.

            Code:
            SQL> create global temporary table temp_table_transaction
                 on commit delete rows
                 as
                 select * from scott.emp where 1=0;

            Table created.
            would preserve the rows until you commit, at which point the table gets emptied.

            If you define it correctly, then you're not going to have performance issues - or define more than one temporary table?
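
            For comparison, since the thread is about moving to MySQL: the rough MySQL equivalent is CREATE TEMPORARY TABLE, which (unlike the Oracle global temporary tables above) is created on the fly per session, is invisible to other connections, and is dropped when the connection closes. A sketch, with a made-up source table:

```sql
-- MySQL: per-session temporary table, copying structure but no rows.
CREATE TEMPORARY TABLE temp_emp AS
SELECT * FROM emp WHERE 1 = 0;
```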
            Best Forum Advisor 2014
            Work in the public sector? You can read my FAQ here
            Click here to get 15% off your first year's IPSE membership



              #16
              Originally posted by NickFitz View Post
              Yes it does. Google "MySQL tools". I use MySQL Administrator and MySQL Query Manager. The MySQL Workbench is an abomination though.

              Another option is Navicat, which is free in the "Lite" version for Mac, Linux, or Windows as well as having a paid version which adds a few extra capabilities.
              From MySQL Administrator documentation

              MySQL Administrator has reached EOL. See the EOL notice. Please upgrade to MySQL Workbench.
              Behold the warranty -- the bold print giveth and the fine print taketh away.



                #17
                Originally posted by Spacecadet View Post
                In MS SQL you can do what's called a dirty read, but that's a feature which is only going to be useful in very limited circumstances and is something which, for good reason, you have to explicitly enable within a query.
                If you have a high-volume transaction table which is only ever written to, and you can live with the occasional row being read which is subsequently rolled back, then reading uncommitted data has its uses.
                If the table is only ever written to, then it's not an issue for any reads, whether committed, uncommitted or made up.

                Originally posted by Spacecadet View Post
                Likewise, temporary tables are great if you know what you're doing with them. You can get some very impressive performance benefits when working with large volumes of data.
                I agree

                Originally posted by Spacecadet View Post
                Of course, if the database has been set up by an idiot and tempdb is stored on the C drive then you'll run into problems.
                Yes
                Originally posted by Spacecadet View Post
                You could define the table in the schema, but that means cluttering up the schema with (IMO) unnecessary objects, potentially leaving a lot of data in them which SQL Server will then want to do maintenance on, and which will be included in the backups - including log shipping, if you're doing any.
                Well, if you define the table correctly, then it's not going to fill up - either keep the data until you commit, or keep the data until you log off. Either way, unless there is no concept of logging off, there shouldn't be too many data problems.



                  #18
                  Originally posted by NickFitz View Post
                  The MySQL Workbench is an abomination though.
                  Oracle will fix that. The same way they "fixed" Oracle Enterprise Manager.



                    #19
                    Originally posted by Sysman View Post
                    They still work, though.



                      #20
                      Originally posted by TheFaQQer View Post
                      Depends how you define it:

                      Code:
                      SQL> create global temporary table temp_table_session
                           on commit preserve rows
                           as
                           select * from scott.emp where 1=0;

                      Table created.
                      would preserve the rows until you log off.

                      Code:
                      SQL> create global temporary table temp_table_transaction
                           on commit delete rows
                           as
                           select * from scott.emp where 1=0;

                      Table created.
                      would preserve the rows until you commit, at which point the table gets emptied.

                      If you define it correctly, then you're not going to have performance issues - or define more than one temporary table?
                      Scratches head... that's a temporary table


                      Originally posted by TheFaQQer View Post
                      Well, if you define the table correctly, then it's not going to fill up - either keep the data until you commit, or keep the data until you log off. Either way, unless there is no concept of logging off, there shouldn't be too many data problems.
                      Tables don't fill up

