<div dir="ltr">Currently working with an architecture as follows:<br><br>1) A topic exchange is bound to 3 consistent hash exchanges with routing keys that match 3 distinct types of messages (e.g. type-a, type-b, type-c). <br>
<br>2) Each of these type-specific consistent hash exchanges is, in turn, bound to 16 type-specific queues (e.g. type-a-01, ... , type-a-16) with a weight of 100.<br><br>3) Three consumers are configured, one per type, to consume messages of that type - each consumer connects to all 16 queues for its type.<br>
<br>4) Producers connect to the broker and send messages to the topic exchange with a routing key that matches the type of message they are sending.<div><br></div><div>5) Only one type is active at this time.<br><div><br>
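For concreteness, here is a rough sketch of the binding plan described above (exchange and queue names are illustrative, not our actual names). Note that with the consistent hash exchange plugin, the routing key used when binding a queue is the weight:

```python
# Sketch of the topology above: one topic exchange fans out by routing
# key to three x-consistent-hash exchanges, each spreading messages
# across 16 weighted queues. Names here are illustrative placeholders.
MESSAGE_TYPES = ("type-a", "type-b", "type-c")
QUEUES_PER_TYPE = 16
WEIGHT = "100"  # for x-consistent-hash, the binding key is the weight

def build_binding_plan():
    """Return (exchange_bindings, queue_bindings) for the topology."""
    exchange_bindings = []  # (source, destination, routing_key)
    queue_bindings = []     # (hash_exchange, queue, binding_key=weight)
    for msg_type in MESSAGE_TYPES:
        hash_ex = f"{msg_type}-hash"
        # Topic exchange routes each type to its own hash exchange.
        exchange_bindings.append(("events-topic", hash_ex, msg_type))
        for i in range(1, QUEUES_PER_TYPE + 1):
            queue_bindings.append((hash_ex, f"{msg_type}-{i:02d}", WEIGHT))
    return exchange_bindings, queue_bindings

ex_bindings, q_bindings = build_binding_plan()
```

In our setup these declarations are applied to the broker (3 exchange-to-exchange bindings, 48 queue bindings in total); since every queue carries the same weight, messages should hash (relatively) evenly across the 16 queues per type.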
</div><div>The reason we went down this path was to mitigate the 1 queue : 1 CPU core limitation within RabbitMQ (or Erlang in general?).</div><div><br></div><div>We *are* seeing (relatively) even message distribution across the queues for the one message type that is currently active (yay); however, I am only seeing utilization on between 4 and 6 (depending on the time) of the 16 total cores (the others are 100% idle). This wasn't the behavior in my initial testing, where I saw relatively equal load across all cores. <br>
<br>Can someone confirm that in the architecture outlined above I would expect to see (relatively) equal distribution across all available cores? Furthermore, if my expectation is correct, are there Erlang / RabbitMQ specific configurations that would impact things like core distribution / affinity? I am also currently pursuing the idea that it might be some hypervisor-specific setting that is causing this (no good info on that as of yet).</div>
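One diagnostic I am planning to run is comparing how many Erlang scheduler threads were started versus how many are online (a sketch using rabbitmqctl eval; whether scheduler settings are actually the culprit here is just my assumption at this point):

```shell
# Sketch: inspect the Erlang VM's scheduler counts on the broker node.
# If schedulers_online is lower than the machine's core count, the VM
# (or the hypervisor's presented vCPU topology) is limiting how many
# cores can be used, regardless of how many queues are busy.
rabbitmqctl eval 'erlang:system_info(schedulers).'
rabbitmqctl eval 'erlang:system_info(schedulers_online).'
```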
</div><div><br></div><div>Thank you in advance for any assistance.</div><div><br></div><div>Regards,</div><div><br></div><div>Richard Raseley</div></div>