# Getting Started
When you configure an SQS queue as a trigger for a Lambda function, AWS automatically polls the queue and invokes the function. It starts with as few as five polling processes, monitors the size of the queue, and scales the number of processes up or down in response to changes in that size.
When the Lambda function consumes messages from a single queue, this setup is very efficient. AWS's scaling activity ensures messages are processed as fast as possible, and cost stays minimal since the number of polling processes decreases when the queue is small.
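As a concrete sketch of how such a trigger is wired up, the event source mapping can be created through the Lambda API. The helper below just builds the parameters; the queue ARN, function name, and batch size are illustrative examples, not values from this document:

```python
# Build the parameters for an SQS -> Lambda event source mapping.
# The ARN and function name used later are hypothetical examples.
def build_event_source_mapping_params(queue_arn, function_name, batch_size=10):
    return {
        "EventSourceArn": queue_arn,   # the SQS queue to poll
        "FunctionName": function_name, # the Lambda to invoke
        "BatchSize": batch_size,       # messages per invocation (SQS default is 10)
        "Enabled": True,
    }

params = build_event_source_mapping_params(
    "arn:aws:sqs:us-east-1:123456789012:jobs-queue",  # hypothetical queue
    "monolith-function",                              # hypothetical function
)

# To apply it for real (requires boto3 and AWS credentials):
#   import boto3
#   boto3.client("lambda").create_event_source_mapping(**params)
```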
However, monoliths weren't considered when AWS designed the SQS-Lambda integration, so it lacks some crucial configuration capabilities. In a monolith, the same codebase inside the same Lambda function handles jobs from several queues. With such a setup, the function may get bombarded with invocations from all the AWS polling processes consuming jobs from all the queues.
# The Problem
The default SQS-Lambda integration starts several pools of processes, where each pool consumes jobs from a specific queue. Processes within a single pool coordinate so they don't scale too fast or too big, but the different pools don't communicate or align with each other in any way. As a result, the concurrency slots consumed by the queue function may grow so large that other functions in the same AWS account struggle to allocate any slots, and invocations of those functions start to fail.
And if reserved concurrency is used, the queue function itself may throttle invocations. When an SQS message is throttled, its ApproximateReceiveCount is incremented even though the message wasn't actually processed. Eventually those throttled messages may end up in the dead-letter queue.
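To see the effect concretely, a handler can inspect ApproximateReceiveCount on each record. The record shape below follows the standard SQS event format delivered to Lambda; the helper itself and its threshold are a hypothetical sketch:

```python
# Hypothetical sketch: messages that were throttled and redelivered several
# times arrive with a high ApproximateReceiveCount even though they were
# never actually processed.
REDRIVE_THRESHOLD = 3  # illustrative; would match a DLQ maxReceiveCount of 3

def count_near_dead_letter(event, threshold=REDRIVE_THRESHOLD):
    """Count records that are at or past the redrive threshold."""
    near = 0
    for record in event.get("Records", []):
        receives = int(record["attributes"]["ApproximateReceiveCount"])
        if receives >= threshold:
            near += 1
    return near
```

A monitoring hook like this makes the throttling visible before messages silently land in the dead-letter queue.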
This graph shows what happens when too many polling processes try to invoke the Lambda. The blue graph (on the left) shows the number of throttled messages, and the orange graph (on the right) shows concurrent executions. When things get really busy, messages may be lost to throttling.
Moreover, if one queue fills up with low-priority messages, it may consume all available concurrency slots of the function and prevent more important messages from being processed in a timely manner.
Controlling prioritization & concurrency is crucial in such monolithic functions. Workerless was built to deal with this problem.
# The Fix
Workerless is a single lightweight process that you can start anywhere inside your AWS account. It starts several workers that coordinate with each other to consume jobs from multiple queues. You can configure each queue's priority as well as the maximum concurrency you allow.
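The coordination idea can be sketched in a few lines. This is NOT Workerless's actual implementation, just a minimal model of the two guarantees it provides: higher-priority queues are drained first, and a shared cap limits how many jobs run at once:

```python
import threading

def drain(queues, max_concurrency, run_job):
    """Drain queues in priority order, running at most `max_concurrency`
    jobs concurrently. `queues` is a list of (priority, jobs) pairs."""
    slots = threading.Semaphore(max_concurrency)
    threads = []

    def worker(job):
        try:
            run_job(job)
        finally:
            slots.release()  # free the slot once the job finishes

    # Higher-priority queues are drained first, so a backlog of
    # low-priority jobs can't starve the important ones.
    for _, jobs in sorted(queues, key=lambda q: q[0], reverse=True):
        for job in jobs:
            slots.acquire()  # blocks while all concurrency slots are busy
            t = threading.Thread(target=worker, args=(job,))
            t.start()
            threads.append(t)

    for t in threads:
        t.join()
```

In the real system the queues are SQS queues and the cap keeps the function's invocations well below the account's concurrency limit; this sketch only shows how a single shared semaphore replaces the uncoordinated per-queue pools described above.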
Concept →