In one of my previous blog posts, I discussed building background services that can scale by tenant. There are, however, scenarios where you want the system to scale by load (e.g. by the number of messages to be processed from a queue). In such scenarios you often want control over how the load is generated, to avoid redundancy, while still processing each message as soon as it arrives. You also want to maximize CPU utilization during processing to minimize the cost of scaling out. An effective worker role design helps you achieve this efficiency in background services. The following figure illustrates one such design using the OnStart and Run methods of the RoleEntryPoint class.
You can use the OnStart method to create instances of scheduled services, using a utility such as the Quartz.NET cron scheduler, that run at a pre-defined interval and populate designated queues with messages to be processed. Typically, you want only one of the configured instances to write into the queue, to avoid duplicate messages being processed. The following code shows a typical cron schedule. The configured service uses the lease-based approach (discussed in the previous blog post) to ensure a single instance schedules the messages in the queue.
public override bool OnStart()
{
    UnityContainer = UnityHelper.ConfigureUnity();
    QueueProvider = UnityContainer.Resolve&lt;IQueueProvider&gt;();
    LogService = UnityContainer.Resolve&lt;ILogService&gt;();
    ScheduledServices();
    return base.OnStart();
}
The code inside the ScheduledServices method could look like:
// Compute the next even-minute boundary as the trigger start time.
DateTimeOffset runTime = DateBuilder.EvenMinuteDate(DateTime.Now);
// JobScheduler is the application's own wrapper around the Quartz.NET scheduler.
JobScheduler scheduler = new JobScheduler();
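For context, a wrapper like JobScheduler would ultimately wire a job to a cron trigger through Quartz.NET. A minimal sketch, assuming the Quartz.NET 2.x synchronous API and an illustrative ScheduleMessagesJob with a five-minute cron expression (both are examples, not from the original design), might look like:

```csharp
using Quartz;
using Quartz.Impl;

// Hypothetical job that acquires the lease and enqueues work items.
public class ScheduleMessagesJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        // Only the instance holding the lease writes to the queue (see previous post).
    }
}

// Wiring the job to a cron trigger:
IScheduler scheduler = new StdSchedulerFactory().GetScheduler();
scheduler.Start();

IJobDetail job = JobBuilder.Create<ScheduleMessagesJob>()
    .WithIdentity("scheduleMessages")
    .Build();

ITrigger trigger = TriggerBuilder.Create()
    .StartAt(DateBuilder.EvenMinuteDate(DateTimeOffset.UtcNow))
    .WithCronSchedule("0 0/5 * * * ?") // every five minutes
    .Build();

scheduler.ScheduleJob(job, trigger);
```

The cron expression and job identity are placeholders; the important part is that the job body runs only on the instance that holds the lease.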
These fragments illustrate the different kinds of cron-driven services that Quartz.NET runs on the defined schedule.
The following code illustrates the implementation inside the Run method of a worker role that uses the Task Parallel Library to process multiple queues at the same time.
public override void Run()
{
    while (true)
    {
        try
        {
            ProcessMessages&lt;ISiteConfigurationManager, MaintenanceScheduleItem&gt;(Constants.QueueNames.SiteConfigurationQueue, (m, e) =&gt; m.CreateSiteConfiguration(e));
            var hasMessages = ProcessMessages&lt;IAggregationManager, QueueMessage&gt;(Constants.QueueNames.PacketDataQueue, null, (m, e) =&gt; m.ComputeSiteMetrics(e));
        }
        catch (Exception ex)
        {
            // Log the exception and keep looping; if Run ever returns, Azure restarts the role instance.
        }
    }
}
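The ProcessMessages helper itself isn't shown in the post. A minimal sketch, assuming a hypothetical IQueueProvider exposing GetMessages/DeleteMessage and manager resolution through Unity (all names modeled on the OnStart code above, not confirmed by the original), could be:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

// Hypothetical sketch of ProcessMessages; the IQueueProvider members and
// the Unity resolution are assumptions, not the original implementation.
private bool ProcessMessages<TManager, TMessage>(string queueName, Action<TManager, TMessage> handler)
{
    var messages = QueueProvider.GetMessages<TMessage>(queueName).ToList(); // hypothetical API
    if (!messages.Any())
        return false;

    var manager = UnityContainer.Resolve<TManager>();
    // Fan the batch out across cores with the Task Parallel Library.
    Parallel.ForEach(messages, message =>
    {
        handler(manager, message);
        QueueProvider.DeleteMessage(queueName, message); // hypothetical API
    });
    return true;
}
```

The boolean return matches the `var hasMessages = ...` usage above, letting the caller back off (e.g. sleep briefly) when a queue is empty.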
This design can scale to as many instances as you need, depending on the load on the queue and the expected throughput. Parallel processing ensures that the CPU in the worker role is optimally utilized, and the Run method keeps the instance continuously processing items from the queue. You can also use the auto-scale configuration to add or remove instances automatically based on load.
There is one known issue you must be aware of in this design regarding data writes to Azure Table storage. Since multiple instances will be writing to the table, if you are running updates there is a chance that the data was modified between the time you read the record and the time you wrote it back after processing. Azure, by default, rejects such operations. You can, however, force an update by setting the table entity's ETag property to "*". The following code illustrates a generic table entity save with forced updates.
public void Save&lt;T&gt;(T entity, bool isUpdate = false) where T : ITableEntity, new()
{
    TableName = typeof(T).Name;
    // "*" sends an unconditional If-Match header, forcing the update through.
    entity.ETag = "*";
    operations.Value.Add(isUpdate ? TableOperation.Replace(entity) : TableOperation.Insert(entity));
}
A word of caution, though: this may not be the design to pursue if the system you are building cannot tolerate any lost updates, since a forced update overwrites concurrent changes made by other instances (last writer wins).
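If lost updates are unacceptable, one alternative is to keep the ETag returned by the read and retry the read-modify-write cycle whenever the replace is rejected with 412 Precondition Failed. The method below is an illustrative sketch of that pattern, not part of the original design:

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

// Optimistic-concurrency alternative (sketch): preserve the ETag from the read
// so Azure rejects the write if another instance changed the row, then retry.
public void SafeUpdate<T>(CloudTable table, string partitionKey, string rowKey, Action<T> modify)
    where T : TableEntity, new()
{
    while (true)
    {
        var entity = (T)table.Execute(TableOperation.Retrieve<T>(partitionKey, rowKey)).Result;
        modify(entity);
        try
        {
            // Replace sends If-Match with the ETag captured by the read.
            table.Execute(TableOperation.Replace(entity));
            return;
        }
        catch (StorageException ex) when (ex.RequestInformation.HttpStatusCode == 412)
        {
            // Another writer got there first; re-read and try again.
        }
    }
}
```

This trades a retry loop for correctness; under heavy contention you may want a capped retry count or backoff instead of looping indefinitely.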
Parallel processing, Azure, cron, software design