Previously on "How would you design a background C# application to accept external commands?"

  • BlueSharp
    replied
    Originally posted by woohoo View Post
    I've not read it properly but it annoyed me in the past that azure functions had a low timeout, especially as I used them much like scheduled jobs.
    Azure Functions only have low timeouts when hosted on the consumption-based plan. Host them in an App Service plan and there is no timeout.
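    As a sketch of where this is configured (from memory, worth checking against the current docs): the timeout being discussed is the `functionTimeout` setting in the function app's host.json.

    ```json
    {
      "version": "2.0",
      "functionTimeout": "-1"
    }
    ```

    `-1` (no timeout) is only honoured on App Service / Premium plans; the Consumption plan caps the timeout at around ten minutes.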

  • woohoo
    replied
    I've not read it properly but it annoyed me in the past that azure functions had a low timeout, especially as I used them much like scheduled jobs.

  • minestrone
    replied
    Durable Functions Overview - Azure | Microsoft Docs

  • minestrone
    replied
    Originally posted by woohoo View Post
    Fair enough, but honestly, it seems a very simple approach given the requirements. It doesn't seem any more complex than writing Azure Functions to start processes; does the timeout on Azure Functions not limit this approach?
    You know, after I posted that I realised that it sounded over-critical of your suggestion, which it wasn't, sorry for that; it was more the conditions of the requirements that have been set.

  • woohoo
    replied
    Originally posted by minestrone View Post
    I've written pretty much this exact system, although it was java threads and CORBA waiting on multiple other systems connecting.

    It was about 15 years ago though and I wouldn't even entertain that kind of architecture now. If pushed and as said I would be more inclined to fire off new processes as at least they are viewable on the OS.
    Fair enough, but honestly, it seems a very simple approach given the requirements. It doesn't seem any more complex than writing Azure Functions to start processes; does the timeout on Azure Functions not limit this approach?
    Last edited by woohoo; 30 September 2019, 18:09.

  • minestrone
    replied
    Originally posted by woohoo View Post
    My first thought would be to write a Windows service and have it spawn multiple threads that connect to each source. I would then have a config table with the source details and whether each is disabled or not.
    I've written pretty much this exact system, although it was java threads and CORBA waiting on multiple other systems connecting.

    It was about 15 years ago though and I wouldn't even entertain that kind of architecture now. If pushed and as said I would be more inclined to fire off new processes as at least they are viewable on the OS.

  • minestrone
    replied
    I'm not sure I find the idea of hosting a server that takes HTTP commands to open ports that appealing.

  • Freewill
    replied
    I think I would embed an HTTP server inside the app running on a specific port, and then when you want to update the configuration you could just POST commands to it using curl.

    If you're feeling a bit more fancy you could serve up a static index.html page with some inputs and a button which posts the right commands for you. You would access this page using your browser at e.g. http://localhost:9000/index.html

    And if you're feeling even more fancy than that then you could build a complete React app and serve it up.

    I haven't used it but something like this might be suitable: GitHub - unosquare/embedio: A tiny, cross-platform, module based web server for .NET
    Last edited by Freewill; 27 September 2019, 18:29.
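    Freewill's embedded-server idea can be sketched with the built-in `System.Net.HttpListener`, with no external packages. The port (9000) and the "verb target" command format are illustrative assumptions, not anything from the thread:

    ```csharp
    using System;
    using System.IO;
    using System.Net;
    using System.Threading.Tasks;

    // Minimal sketch: an HTTP endpoint embedded in the background app,
    // accepting POSTed commands such as "disable source-3".
    class CommandListener
    {
        // Split "disable source-3" into ("disable", "source-3").
        public static (string Verb, string Target) ParseCommand(string body)
        {
            string[] parts = body.Split(new[] { ' ' }, 2);
            return (parts[0], parts.Length > 1 ? parts[1] : "");
        }

        public static async Task RunAsync()
        {
            var listener = new HttpListener();
            listener.Prefixes.Add("http://localhost:9000/");
            listener.Start();
            while (true)
            {
                HttpListenerContext ctx = await listener.GetContextAsync();
                using (var reader = new StreamReader(ctx.Request.InputStream))
                {
                    var (verb, target) = ParseCommand(await reader.ReadToEndAsync());
                    Console.WriteLine($"Command: {verb} {target}");
                    // apply the change to the in-memory config / config table here
                }
                ctx.Response.StatusCode = 200;
                ctx.Response.Close();
            }
        }
    }
    ```

    You could then drive it exactly as described: `curl -X POST http://localhost:9000/ -d "disable source-3"`.
    
    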

  • woohoo
    replied
    Originally posted by d000hg View Post
    My current understanding is that first of all this is all raw TCP/IP, you build a message byte-by-byte in a very exact way and then send it over ethernet. This message says "I want to receive a datastream". The remote device then will start tossing data at you each time something of interest happens; as mentioned above it's not clear if the connection remains open and I haven't done this low-level stuff for a long time - but it IS pretty low level. I believe the underlying hardware is really working over serial interface, etc, with an ethernet adapter (can't go into any more details for NDA etc).
    Right, got yer.

    So my guess is: create a TCP listener server that runs on the server (console app or Windows service etc). In an infinite loop, just have the server listening for a TCP connection and spawn a thread to handle each connection.

    As you said, have another thread running to check the config file or table, whatever, and kill the relevant thread. Though, if the thread dealing with the connection is going to update a DB anyway, I don't know why it can't check the config table itself and end.
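    The accept loop described above can be sketched with `System.Net.Sockets.TcpListener`: one listener, one thread per incoming connection. The port (5000) is an illustrative assumption, and the byte counter is only there to make the handler's progress observable:

    ```csharp
    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Threading;

    // One listening socket; each accepted connection gets its own worker thread.
    class TcpAcceptLoop
    {
        public static long TotalBytes;   // running count of bytes received

        public static void Run()
        {
            var listener = new TcpListener(IPAddress.Any, 5000);
            listener.Start();
            while (true)
            {
                TcpClient client = listener.AcceptTcpClient();   // blocks
                var worker = new Thread(() => Handle(client)) { IsBackground = true };
                worker.Start();
            }
        }

        public static void Handle(TcpClient client)
        {
            using (client)
            using (NetworkStream stream = client.GetStream())
            {
                var buffer = new byte[4096];
                int read;
                while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                {
                    Interlocked.Add(ref TotalBytes, read);
                    // parse the device's byte-level protocol and update the DB here
                }
            }
        }
    }
    ```
    
    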

  • CheeseSlice
    replied
    Originally posted by d000hg View Post
    My current understanding is that first of all this is all raw TCP/IP, you build a message byte-by-byte in a very exact way and then send it over ethernet. This message says "I want to receive a datastream". The remote device then will start tossing data at you each time something of interest happens; as mentioned above it's not clear if the connection remains open and I haven't done this low-level stuff for a long time - but it IS pretty low level. I believe the underlying hardware is really working over serial interface, etc, with an ethernet adapter (can't go into any more details for NDA etc).
    To deal with that low level stuff, have a look at IoT gateways. Device sits at the remote location talking RS232 or whatever, and forwards telemetry etc onto the IoT hub,... if you were considering that architecture.
    All of this of course depends on how many devices (a few, thousands?), what you need to do with the data, what capabilities you want, etc, etc.

  • d000hg
    replied
    My current understanding is that first of all this is all raw TCP/IP, you build a message byte-by-byte in a very exact way and then send it over ethernet. This message says "I want to receive a datastream". The remote device then will start tossing data at you each time something of interest happens; as mentioned above it's not clear if the connection remains open and I haven't done this low-level stuff for a long time - but it IS pretty low level. I believe the underlying hardware is really working over serial interface, etc, with an ethernet adapter (can't go into any more details for NDA etc).

  • BlueSharp
    replied
    Do the machines push their data on an event, and can you control the protocol the event is triggered on? If you can, I would have each device push to a queue (MSMQ or RabbitMQ) and have a listener on a server pull each event from the queue and process it.

    In the cloud, IoT Hub is perfect for this. I would then have an Azure Function to process the data from the IoT Hub queue and store it in a database.

    If you are pulling the data: a web service (an Azure Function on a timer ;-) which runs every n seconds, scans a config database, and connects to each device to pull the data via whatever communication protocol (named pipes, WCF, HTTP). Do the devices call back with a message saying "hey, I have data to process, please connect"? You could then push the data to a queue on the machine for further processing, to ensure the receive-data and process-data services are decoupled.
    Last edited by BlueSharp; 27 September 2019, 08:12.
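    BlueSharp's "function on a timer" idea can be approximated in plain .NET with `System.Threading.Timer` (no Azure SDK needed for a sketch). The dictionary stands in for the config table and `PullFrom` for the device call; both are assumptions of this sketch:

    ```csharp
    using System;
    using System.Collections.Concurrent;
    using System.Threading;

    // Every interval, scan the config store and pull from each enabled source.
    class PollingScheduler
    {
        public static readonly ConcurrentDictionary<string, bool> Config =
            new ConcurrentDictionary<string, bool>();   // source name -> enabled?
        public static int PullCount;                    // observable progress counter

        public static Timer Start(TimeSpan interval)
        {
            return new Timer(_ =>
            {
                foreach (var entry in Config)
                    if (entry.Value) PullFrom(entry.Key);
            }, null, TimeSpan.Zero, interval);
        }

        static void PullFrom(string source)
        {
            Interlocked.Increment(ref PullCount);
            // connect to the device and pull its data here
        }
    }
    ```

    Disposing the returned `Timer` stops the polling, which mirrors disabling the function.
    
    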

  • d000hg
    replied
    The open connection thing may very well be my rustiness on low-level network coding; possibly it just sends stuff to our monitored port, which does sound a bit more sensible.

    Why there are multiple is simpler: we're communicating with external specialist devices in several locations. There is no central server, so we're communicating independently with each one. Now, I suppose that does mean we could launch a whole load of processes, one per external device, instead of a multi-threaded application. But that seems messy.

  • woohoo
    replied
    Originally posted by d000hg View Post
    I am still deciphering the communication documentation but I think we maintain an open port to each remote source and it pushes updates to us
    The last time I did anything similar to this, and I really don't know if it helps, was for a profiler. It would inject a static class into the C# code, at the IL level, before it was compiled. I then had an app that acted as a WCF server. I used NetNamedPipeBinding for fast communication on the same machine (probably not applicable to you).

    The reason I mention this is because of the communication documentation you mentioned, and that you're not sure how it works yet. It would be much more common for the remote source to hit your WCF server (for example) and pass in the information for you to then log. I'm not entirely sure why a remote source would keep an open connection, or why you would have several of these open connections at the same time.

    But I'm genuinely interested, so do let me know more information, or the approach you take, either way.

  • d000hg
    replied
    Originally posted by woohoo View Post
    My first thought would be to write a Windows service and have it spawn multiple threads that connect to each source. I would then have a config table with the source details and whether each is disabled or not.

    Each thread could connect to the remote connection and, after updating the DB with data from the remote source, check the config table; if the source is disabled, then disconnect. The thread would then have some kind of timer to check the DB to see if its source is enabled again, and if so, initiate the connection again. That way you could disable sources via a config table.

    Not sure how your remote connections work: do you poll the remote source via the open connection for data and then update the DB, or does the remote source initiate the call via the open connection?
    Thanks. I am still deciphering the communication documentation but I think we maintain an open port to each remote source and it pushes updates to us. Which probably would mean a worker thread waiting on each open connection anyway. So then if we take your approach, I guess either a single thread could keep checking the config data and create/destroy connection threads as needed OR we still have one thread per source and that thread has a separate worker.

    Now there's also the case we add a new source rather than just temporarily disabling existing configured ones, but TBH I think we'd be happier just restarting the system in such cases anyway!

    It'd be neater if the thing was event driven rather than polling the config file/DB but I don't particularly want to design the entire architecture around that when in reality it makes no difference to anyone if it takes a minute for config changes to take effect.
    Last edited by d000hg; 25 September 2019, 10:15.
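    The per-source worker design converged on above can be sketched as follows. The shared dictionary stands in for the config table, and the one-second poll matches the point that config changes need not take effect instantly; all names here are illustrative:

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Threading;

    // One worker per source: connect while enabled, idle while disabled,
    // stop cleanly on cancellation.
    class SourceWorker
    {
        static readonly Dictionary<string, bool> Enabled = new Dictionary<string, bool>();
        static readonly object Gate = new object();

        public static void SetEnabled(string source, bool enabled)
        {
            lock (Gate) Enabled[source] = enabled;
        }

        public static bool IsEnabled(string source)
        {
            lock (Gate) return Enabled.TryGetValue(source, out bool on) && on;
        }

        public static void RunWorker(string source, CancellationToken token)
        {
            while (!token.IsCancellationRequested)
            {
                if (IsEnabled(source))
                {
                    // connect, block on the open socket, write updates to the DB...
                }
                Thread.Sleep(1000);   // re-check the config once a second
            }
        }
    }
    ```

    A supervising thread (or the service's start-up code) would call `SetEnabled` as the config table changes and cancel the token on shutdown.
    
    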
