Most likely (IMHO), what is actually happening to you right now is a phenomenon known as
thrashing. (Link here.) The fact that your system is leaning on a swap area that's almost half again as big as your RAM strongly suggests this. That is to say: "the problem isn't your swap area – it's how you use it."
At the risk of getting too technical, virtual memory relies on the concept of
locality of reference: "the next memory address requested by the CPU is likely to be close by to an address that the CPU has requested recently." As long as this holds true, virtual memory works quite well and the running process does not experience excessive delays due to so-called "page faults." But it holds true only
up to a point. Beyond that point, as they say, "it hits the wall."
Process performance degrades almost instantly: a basically linear curve suddenly turns
exponential.
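To make that concrete, here's a toy sketch of my own (the page size and "resident pages" numbers are made-up for illustration, not your system's actual values). It replays two access patterns – one with good locality, one without – against a tiny LRU cache standing in for RAM, and counts how many accesses would fault:

```python
from collections import OrderedDict
import random

PAGE_SIZE = 4096       # bytes per simulated page (illustrative assumption)
RESIDENT_PAGES = 8     # pages that fit in our toy "RAM" (illustrative assumption)

def count_page_faults(addresses, resident=RESIDENT_PAGES):
    """Replay a sequence of byte addresses against a tiny LRU page cache,
    counting accesses that miss (i.e., would have to go out to swap)."""
    cache = OrderedDict()   # page number -> None, kept in LRU order
    faults = 0
    for addr in addresses:
        page = addr // PAGE_SIZE
        if page in cache:
            cache.move_to_end(page)        # mark as recently used
        else:
            faults += 1
            cache[page] = None
            if len(cache) > resident:
                cache.popitem(last=False)  # evict least-recently-used page
    return faults

N = 100_000
# Good locality: walk an array of 8-byte slots, front to back.
sequential = list(range(0, N * 8, 8))
# Bad locality: touch the same region at random.
rng = random.Random(42)
scattered = [rng.randrange(0, N * 8) for _ in range(N)]

seq_faults = count_page_faults(sequential)   # one fault per page: 196
rnd_faults = count_page_faults(scattered)    # nearly every access faults
print(seq_faults, rnd_faults)
```

Same number of memory accesses, same amount of data – but the sequential walk faults once per page while the scattered walk faults almost every single time. That cliff is what "hitting the wall" looks like.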
To cite one example from my long-ago past, I discovered that if
fewer than three engineering jobs were running at the same time, each one of them would reliably complete in under 45 seconds. But if
six were running, each one might take well over an hour.
(I became quite famous for diverting them into a separate job queue and placing a limit on that queue, back in the day.) Yes, the difference is "
that extreme."
Programmers often get into this mess when they over-use hash-table-based "in-memory" storage methods when they ought to be using some kind of indexed
file. Hash algorithms are designed to produce a random distribution of hash keys in order to minimize the number of nanoseconds needed to locate a value ... which is just fine as long as "memory access is cost-free." But they can play hell with virtual-memory algorithms, and your process is the one that pays the price. You have to
re-design the application to fix it, unless you can simply "throw sand (silicon ...) at it" by adding more RAM.
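To see why hashing and locality are fundamentally at odds, here's another toy illustration of my own (the shuffled "positions" and the stride metric are assumptions for the sketch, not any real hash table's layout): a hash table deliberately scatters records, while an indexed file keeps them in key order, and the difference shows up directly in how far apart consecutive memory touches land.

```python
import random

N = 10_000
rng = random.Random(1)

hash_order = list(range(N))
rng.shuffle(hash_order)           # stand-in for hash-bucket placement: a uniform scatter
index_order = sorted(hash_order)  # stand-in for a sorted / indexed file

def mean_stride(positions):
    """Average distance between consecutive memory touches;
    big strides mean poor locality of reference."""
    jumps = [abs(b - a) for a, b in zip(positions, positions[1:])]
    return sum(jumps) / len(jumps)

print(mean_stride(index_order))  # 1.0 -- each touch lands right next door
print(mean_stride(hash_order))   # roughly N/3 -- every touch is a long jump
```

With memory access "cost-free," the scattered layout wins on lookup speed. Once the working set spills into swap, those long jumps turn into page faults, and the hash table's greatest strength becomes its greatest liability.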