I am a junior developer writing an update for software that receives data from a third-party solution, stores it in a database, and then conditions that data for use by another third-party solution. Our software runs as a Windows service.
Looking at the code from a previous version, I see this:
static Object _workerLocker = new object();
static int _runningWorkers = 0;
int MaxSimultaneousThreads = 5;

foreach (int SomeObject in ListOfObjects)
{
    lock (_workerLocker)
    {
        while (_runningWorkers >= MaxSimultaneousThreads)
        {
            Monitor.Wait(_workerLocker);
        }
    }

    // check to see if the service has been stopped. If yes, then exit
    if (this.IsRunning() == false)
    {
        break;
    }

    lock (_workerLocker)
    {
        _runningWorkers++;
    }

    ThreadPool.QueueUserWorkItem(SomeMethod, SomeObject);
}
The logic seems clear: wait for room in the thread pool, make sure the service hasn't been stopped, then increment the thread counter and queue the work. _runningWorkers is decremented inside SomeMethod(), in a lock statement that then calls Monitor.Pulse(_workerLocker).
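For reference, the relevant part of SomeMethod() is roughly this (the actual processing is omitted):

static void SomeMethod(object state)
{
    // ... do the actual work on the object ...

    lock (_workerLocker)
    {
        _runningWorkers--;
        Monitor.Pulse(_workerLocker);
    }
}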
My question is: is there any benefit to grouping all of the code inside a single lock, like this:
static Object _workerLocker = new object();
static int _runningWorkers = 0;
int MaxSimultaneousThreads = 5;

foreach (int SomeObject in ListOfObjects)
{
    // Is doing all the work inside a single lock better?
    lock (_workerLocker)
    {
        // wait for room in ThreadPool
        while (_runningWorkers >= MaxSimultaneousThreads)
        {
            Monitor.Wait(_workerLocker);
        }

        // check to see if the service has been stopped.
        if (this.IsRunning())
        {
            ThreadPool.QueueUserWorkItem(SomeMethod, SomeObject);
            _runningWorkers++;
        }
        else
        {
            break;
        }
    }
}
It seems like it may cause a little more waiting for other threads, but locking repeatedly within a single logical block also seems somewhat time-consuming. However, I'm new to multithreading, so I'm assuming there are other concerns here that I'm unaware of.
The only other places where _workerLocker gets locked are in SomeMethod(), solely to decrement _runningWorkers, and outside the foreach, to wait for _runningWorkers to go to zero before logging and returning.
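That final wait looks roughly like this (simplified):

lock (_workerLocker)
{
    // wait until every queued work item has finished
    while (_runningWorkers > 0)
    {
        Monitor.Wait(_workerLocker);
    }
}
// log completion and return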
Thanks for any help.
EDIT 4/8/15
Thanks to @delnan for the recommendation to use a semaphore. The code becomes:
static int MaxSimultaneousThreads = 5;
static Semaphore WorkerSem = new Semaphore(MaxSimultaneousThreads, MaxSimultaneousThreads);

foreach (int SomeObject in ListOfObjects)
{
    // wait for an available thread
    WorkerSem.WaitOne();

    // check if the service has stopped
    if (this.IsRunning())
    {
        ThreadPool.QueueUserWorkItem(SomeMethod, SomeObject);
    }
    else
    {
        break;
    }
}
WorkerSem.Release() is called inside SomeMethod().
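Simplified, SomeMethod() now ends up like this (I've put the Release() in a finally block here so a slot is always freed, even if the work throws; the real method does the actual processing):

static void SomeMethod(object state)
{
    try
    {
        // ... do the actual work on the object ...
    }
    finally
    {
        // free a slot so the main loop can queue another work item
        WorkerSem.Release();
    }
}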