
Previously on "Azure function, slooooow."


  • mudskipper
    replied
    Originally posted by woohoo
    The Azure function timing reduced itself from 6 mins to 17 seconds without changing anything.
    Most unsatisfactory. The black art of databases.



  • woohoo
    replied
    Just in case anyone is interested. The Azure function timing reduced itself from 6 mins to 17 seconds without changing anything.

    I didn't get a chance to deploy the change to the Azure function to SET ARITHABORT ON for the login session, but if it helps someone else, here is the code I pinched and tested.

    Code:
    // Register a handler that fires whenever the connection's state changes,
    // then open the connection as usual.
    conn.StateChange += new StateChangeEventHandler(OnStateChange);
    conn.Open();
    Code:
    // When the connection transitions from Closed to Open, set ARITHABORT ON
    // for the session so it matches the SSMS default (the original post had
    // OFF here, which contradicts the stated intent of setting it ON).
    private static void OnStateChange(object sender, StateChangeEventArgs args)
    {
        if (args.OriginalState == ConnectionState.Closed
            && args.CurrentState == ConnectionState.Open)
        {
            using (SqlCommand cmd = ((SqlConnection)sender).CreateCommand())
            {
                cmd.CommandType = CommandType.Text;
                cmd.CommandText = "SET ARITHABORT ON;";

                cmd.ExecuteNonQuery();
            }
        }
    }



  • woohoo
    replied
    I've read the article before, thanks.

    It's just as easy to change the Azure function. I'm having a day off and will take a look at it on Sunday.



  • mudskipper
    replied
    Originally posted by woohoo

    I did try SET ARITHABORT ON in the stored procedure, but it made no difference. From what I've read, though, the ADO.NET connection settings trump the stored procedure settings. So I'm going to change the code in the Azure function to set this on for the connection. I'm working with live data, so I'll make the change and let it run in the morning.
    At the risk of stating the bleedin' obvious, you could try setting it to OFF in the SP and seeing if it runs slowly in SSMS - that would at least indicate whether it's worth pursuing.

    https://docs.microsoft.com/en-us/sql...t-transact-sql
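    A session-level sketch of that test, for anyone following along (the procedure name is a placeholder, not from the thread):

    ```sql
    -- Run in SSMS: mimic the ADO.NET session default before calling the proc.
    -- If the call is now slow in SSMS as well, the difference is a cached plan
    -- keyed on ARITHABORT rather than anything in the Azure function itself.
    SET ARITHABORT OFF;
    EXEC dbo.MyUpdateProc;  -- placeholder; substitute the real procedure
    ```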



  • woohoo
    replied
    Originally posted by mudskipper
    There's also the parameter sniffing stuff. I changed my SPs to assign the passed parameters to local variables and use them in the query. Can't be sure whether it made a difference - my comment said " -- This might help... "
    Hah, yeah - I'm currently reading this about parameter sniffing...

    Slow in the Application, Fast in SSMS?



  • mudskipper
    replied
    Originally posted by woohoo
    I think it's very likely to be something related to the statistics/dodgy plan.

    I think Mudskipper might be onto something with SET ARITHABORT ON. I've read a couple of answers on SO where people say it sorted their problem out, while others have pointed out that it may be a red herring and it's actually a dodgy plan.

    I did try SET ARITHABORT ON in the stored procedure, but it made no difference. From what I've read, though, the ADO.NET connection settings trump the stored procedure settings. So I'm going to change the code in the Azure function to set this on for the connection. I'm working with live data, so I'll make the change and let it run in the morning.

    I'm limited on time at the moment, so I'm going for the low-hanging fruit. But my feeling is it's actually not related to the function at all and, as you have said, it's the db.
    There's also the parameter sniffing stuff. I changed my SPs to assign the passed parameters to local variables and use them in the query. Can't be sure whether it made a difference - my comment said " -- This might help... "
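    The local-variable change described above might look something like this; the procedure, parameter, and table names are all hypothetical:

    ```sql
    -- Copy the parameter into a local variable so the optimizer estimates from
    -- average density instead of sniffing the first caller's value.
    CREATE PROCEDURE dbo.UpdateCustomerOrders  -- placeholder name
        @CustomerId INT
    AS
    BEGIN
        DECLARE @LocalCustomerId INT = @CustomerId;  -- This might help...

        UPDATE dbo.Orders                            -- placeholder table
        SET    Processed = 1
        WHERE  CustomerId = @LocalCustomerId;
    END;
    ```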



  • woohoo
    replied
    Originally posted by BlueSharp
    The only other thing I can think of is a non-Azure-specific issue, such as a bad execution plan (try OPTION (RECOMPILE) on the proc the Azure function calls) or corruption in the buffer.

    Edit: Are the database indexes and statistics fresh?

    https://docs.microsoft.com/en-us/sql...s-transact-sql

    Failing that, try turning SQL Server off and on again if you can get a window to do that.
    I think it's very likely to be something related to the statistics/dodgy plan.

    I think Mudskipper might be onto something with SET ARITHABORT ON. I've read a couple of answers on SO where people say it sorted their problem out, while others have pointed out that it may be a red herring and it's actually a dodgy plan.

    I did try SET ARITHABORT ON in the stored procedure, but it made no difference. From what I've read, though, the ADO.NET connection settings trump the stored procedure settings. So I'm going to change the code in the Azure function to set this on for the connection. I'm working with live data, so I'll make the change and let it run in the morning.

    I'm limited on time at the moment, so I'm going for the low-hanging fruit. But my feeling is it's actually not related to the function at all and, as you have said, it's the db.



  • BlueSharp
    replied
    The only other thing I can think of is a non-Azure-specific issue, such as a bad execution plan (try OPTION (RECOMPILE) on the proc the Azure function calls) or corruption in the buffer.

    Edit: Are the database indexes and statistics fresh?

    https://docs.microsoft.com/en-us/sql...s-transact-sql

    Failing that, try turning SQL Server off and on again if you can get a window to do that.
    Last edited by BlueSharp; 22 March 2018, 22:47.
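    The two suggestions above can be sketched as follows; the table, column, and parameter names are placeholders:

    ```sql
    -- Refresh possibly stale statistics on the table the proc touches:
    UPDATE STATISTICS dbo.Orders;  -- placeholder table name

    -- Or sidestep a bad cached plan by compiling a fresh plan per execution:
    UPDATE dbo.Orders
    SET    Processed = 1
    WHERE  CustomerId = @CustomerId
    OPTION (RECOMPILE);
    ```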



  • DimPrawn
    replied
    Move it all to AWS Lambda, M$ suck.



  • Einstein Jnr
    replied
    What if you run a Logic App with the SQL connector instead of the function?



  • woohoo
    replied
    Originally posted by mudskipper
    Have you looked at the query plans?

    I had a similar issue calling a stored proc from .NET - it turns out the .NET Framework always runs with ARITHABORT off, whereas in SSMS it's on by default. Sticking SET ARITHABORT ON at the top of my SP fixed the issue.

    Dunno if Azure functions is the same, but worth a try perhaps.
    That's given me an option to look at. I'm just reading about it on Stack Overflow now, though there's some argument about whether it makes a difference in itself or it's really down to dodgy cached plans.

    Nice one.



  • mudskipper
    replied
    Originally posted by woohoo
    Thanks both.

    It could well be that it's a cold start that is causing part of the delay. I've read a few things on GitHub and Stack Overflow about people warming up the function. But the saving in time does not match the results I'm seeing.

    I've had issues with the function failing because of the SQL connection timing out. I've increased the timeout, so now I can see the function is taking around 6 minutes to complete. But it's the timeout of the running stored procedure that indicates it's not just a slow start.

    I've improved the performance of the proc and it takes less than 10 seconds. So I'm finding it hard to reconcile the difference in time.
    Really, all the function is doing is calling a proc; I'm just doing an update, so there's no real data to return or processing in the function.
    Have you looked at the query plans?

    I had a similar issue calling a stored proc from .NET - it turns out the .NET Framework always runs with ARITHABORT off, whereas in SSMS it's on by default. Sticking SET ARITHABORT ON at the top of my SP fixed the issue.

    Dunno if Azure functions is the same, but worth a try perhaps.
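    The SP-level fix described above is a one-line addition; the procedure name here is a placeholder:

    ```sql
    ALTER PROCEDURE dbo.MyUpdateProc  -- placeholder name
    AS
    BEGIN
        -- Match the SSMS default so the proc picks up the same cached plan
        -- whether it's called from SSMS or from ADO.NET:
        SET ARITHABORT ON;

        -- ... existing procedure body unchanged ...
    END;
    ```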



  • woohoo
    replied
    Originally posted by BlueSharp
    The auto scaling we use is for message queue subscribers; the timer job we have is on the traditional "always on" plan as it runs for over ten minutes. You can see the number of instances spun up in the portal: click on your Azure function and then on Monitor. Also worth sticking in some trace writer events and writing the logs to blob storage when debugging these things.
    So, there is no mention of instances in Monitor, so I assume there is only one instance spinning up.
    The logs I have are not duplicated or anything, so again I assume there is only one instance.

    The logging I have shows that opening the connection is near-instant and all of the time is spent executing the stored procedure. I have logging before and after the ExecuteNonQuery call, and it's this that is taking all the time.

    When I get a bit of time I'm going to run the function from visual studio and see how it compares. Unfortunately, the client does not have a test system so I have to be careful.



  • BlueSharp
    replied
    Originally posted by woohoo
    The scale out is interesting. I'm having trouble finding much about scaling when the function is triggered by a timer. I can't seem to find this information in the Portal (it might be in front of me but I can't see it).

    What I can find about scaling is relevant to handling lots of requests or when the CPU hits max. But I can't see calling a stored proc as something that is CPU intensive for the function. It's the db doing the work.

    Thanks for another avenue to look down.
    The auto scaling we use is for message queue subscribers; the timer job we have is on the traditional "always on" plan as it runs for over ten minutes. You can see the number of instances spun up in the portal: click on your Azure function and then on Monitor. Also worth sticking in some trace writer events and writing the logs to blob storage when debugging these things.



  • woohoo
    replied
    Originally posted by BlueSharp
    6 minutes seems a long time for a warm-up. Azure Functions will auto scale out - is the function scaling out and causing a locking issue, i.e. multiple instances being run at the same time? If it's only run once a day, it might be worth looking at an Azure Logic App.
    The scale out is interesting. I'm having trouble finding much about scaling when the function is triggered by a timer. I can't seem to find this information in the Portal (it might be in front of me but I can't see it).

    What I can find about scaling is relevant to handling lots of requests or when the CPU hits max. But I can't see calling a stored proc as something that is CPU intensive for the function. It's the db doing the work.

    Thanks for another avenue to look down.

