
Sending a request to an ASP.NET Core Web API running on a specific node in a Service Fabric cluster

I am working on a Service Fabric application that contains a number of ASP.NET Core Web APIs. When I run the application on my local Service Fabric cluster, which is configured with 5 nodes, it runs successfully and I can send POST requests to the exposed Web APIs. What I actually want is to send different POST requests to the APIs exposed on one particular cluster node, so that they hit the code running on that same node.

To explain further: for example, there is an API exposed on node '0' that accepts a POST request and executes a job, and there is also an API that aborts the running job. When I request that a job be executed, it starts executing on node '0', but when I try to abort the job, the Service Fabric cluster forwards the request to a different node, say node '1'. As a result, I cannot abort the running job, because there is no running job on node '1'. I don't know how to handle this situation.

As for state, I am using a stateless service of type ASP.NET Core Web API, running on all 5 nodes of my local Service Fabric cluster.

Please suggest what should be the best approach.

Your problem arises because you are using your APIs to do a worker's task.

You should use your API only to schedule the work in a background process/worker and return a token or operation id to the user. The user then uses this token to request the status of the task or to cancel it.

The first step: when your API is called the first time, generate a GUID (or insert a row in a database), put a message in a queue (e.g. Service Bus), and then return the GUID to the caller.
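This first step can be sketched language-agnostically. The Python below is only an illustration of the pattern (the real implementation would be a C# controller action); the in-memory `queue.Queue` stands in for a broker such as Service Bus, and `submit_job` is a hypothetical handler name:

```python
import queue
import uuid

# In-memory queue standing in for a real broker such as Azure Service Bus.
job_queue: "queue.Queue[dict]" = queue.Queue()

def submit_job(payload: dict) -> str:
    """API handler sketch: assign an operation id, enqueue the work,
    and return the id to the caller immediately (no work is done here)."""
    job_id = str(uuid.uuid4())
    job_queue.put({"job_id": job_id, "payload": payload})
    return job_id
```

The key point is that the HTTP request finishes as soon as the message is enqueued; which node later processes the job no longer matters to the caller, who only holds the id.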

The second step: a worker process runs in your cluster, listening for messages from this queue and processing them as they arrive. You can make this a single-threaded service that processes one message at a time in a loop, or a multi-threaded service that processes multiple messages using one thread per message. Which you choose depends on how complex you want it to be:

  • With a single-threaded listener, to scale your application you spawn multiple instances so that multiple tasks run in parallel. In Service Fabric you can do that with a simple scale command, and SF will distribute the service instances across your available nodes.

  • In a multi-threaded version you have to manage the concurrency yourself for good performance; you might have to consider memory, CPU, disk and so on, otherwise you risk putting too much load on a single node.
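The single-threaded listener described above can be sketched as a simple loop (again a language-agnostic Python illustration, not the actual Service Fabric listener; `worker_loop` and its parameters are hypothetical names):

```python
import queue
import threading

def worker_loop(q: queue.Queue, handler, stop_event: threading.Event) -> None:
    """Single-threaded listener: take one message at a time, process it,
    and go back to waiting. Scaling means running more instances of this
    loop, not adding threads inside it."""
    while not stop_event.is_set():
        try:
            msg = q.get(timeout=0.1)  # poll so the stop flag is re-checked
        except queue.Empty:
            continue
        handler(msg)
        q.task_done()
```

Because each instance processes strictly one message at a time, parallelism comes from the number of instances, which Service Fabric can place across nodes for you.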

The third step, the cancellation: the cancellation process is straightforward, and there are many approaches:

  • Use a similar approach and enqueue a cancellation message.
    • Your service listens for the cancellation message in a separate thread and cancels the running task (if it is running).
    • Using a separate queue for the cancellation messages is better.
    • If you run multiple listener instances, consider a topic instead of a queue.
  • Use a cache key to store the job status and check on every iteration whether cancellation has been requested.
  • Use a table with the job status, checked on every iteration just as you would with the cache key.
  • Create a remoting endpoint to make a direct call to the service and trigger a cancellation token.
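The "check a shared flag on every iteration" variants above share one shape, sketched here in Python (an illustration only; in the real system the `cancel_flags` dictionary would be a shared cache or table reachable from every node, and the names are hypothetical):

```python
import threading

# Registry of cancellation flags keyed by job id. In a cluster this must
# live in shared storage (cache/table), since the abort request may land
# on a different node than the one running the job.
cancel_flags: dict = {}

def request_cancel(job_id: str) -> None:
    """Called by the abort API: raise the flag for this job."""
    cancel_flags.setdefault(job_id, threading.Event()).set()

def run_job(job_id: str, steps: int):
    """Worker side: do the job in small units, checking the flag
    on every iteration."""
    flag = cancel_flags.setdefault(job_id, threading.Event())
    done = 0
    for _ in range(steps):
        if flag.is_set():          # cancellation requested?
            return ("Aborted", done)
        done += 1                  # ... one unit of real work here ...
    return ("Completed", done)
```

The cooperative style matters: the job is only cancellable at the points where it checks the flag, so long-running work should be broken into small iterations.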

There are many approaches; these are simple ones, and you can combine several of them for better control of your tasks.

You'll need some storage to do that.

Create a table (e.g. JobQueue). Before starting to process the job, store a row in the database with the status (e.g. Running; it could be an enum), and then return the ID to the caller. When you need to abort/cancel the job, call the abort method of the API, sending the ID of the job you want to abort. In the abort method, you just update the status of the job to Aborting. Inside the first method (which runs the job), check this table once in a while; if the status is Aborting, stop the job (and update the status to Aborted). Or you could simply delete the row from the database once the job has been aborted or finished.

Alternatively, if you want the data to be temporary, you could use a sixth server as a cache server and store the data there. This cache server could be clustered as well, but then you would need to use something like Redis.

