In addition to the type of batch system, it can also depend on how that batch system is configured.
Most batch systems let you define queues that specify the number of CPUs available (say 32), and many sites assign only one queue to a server - thus multiple jobs on one compute server are not possible. This is usually done to prevent overloading a particular compute resource. If the queue were instead configured for two concurrent jobs, each job would typically be restricted to 16 CPUs. This arrangement gives the fastest throughput.
Now, on a low-priority queue NOT configured for high throughput (for testing jobs, for example), the configuration COULD allow two concurrent jobs on a compute server with only 32 CPUs, even when each job uses 32 CPUs - at the cost of potential thrashing.
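As a concrete sketch, here is roughly how that split might look in a Slurm-style configuration. The original doesn't name a particular batch system, so the partition names, node name, and exact options here are hypothetical, and details vary between Slurm versions:

```
# slurm.conf fragment (hypothetical) - two ways to carve up a 32-CPU node

# Exclusive queue: one job gets the whole node, no sharing
PartitionName=whole_node Nodes=node01 OverSubscribe=EXCLUSIVE State=UP

# Shared queue: jobs may land on the node together; per-job CPU caps
# (e.g. 16 CPUs each) would typically be enforced with a QOS limit
PartitionName=shared     Nodes=node01 OverSubscribe=YES State=UP
```

The trade-off described above falls out of which partition a job is routed to: the exclusive queue avoids thrashing, the shared one trades isolation for availability.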
The user's desires may conflict with site policy and the available queue structures.
Where I worked, we had high-priority queues, low-priority queues, and a "whatever is left over" queue. The machine was a 16-CPU Cray mainframe. It had a high-priority queue with 4 CPUs (only one job possible); a medium-priority queue with 4 more CPUs, but allowing two concurrent jobs; and a production queue with 8 CPUs. For very high priority work there was a queue with all 16 CPUs, but it was normally idled: jobs could be submitted, but they wouldn't run, and using it required special permission, since operations had to stop all the other queues before enabling it.
The "whatever was left" queue was low priority (it ran only if none of the higher-priority queues had a job) and was limited to 2 CPUs, though it allowed 4 concurrent jobs. Its purpose was just testing.
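That "only runs if the higher queues are empty" rule is just strict-priority selection. A minimal sketch in Python - the queue names and selection loop are hypothetical illustrations, not the Cray's actual scheduler:

```python
# Toy strict-priority queue selection: the low-priority "leftover"
# queue is only consulted when every higher-priority queue is empty.
from collections import deque

def pick_next_job(queues):
    """queues: list of (name, deque_of_jobs), ordered high to low priority.
    Returns (queue_name, job) for the first non-empty queue, or None."""
    for name, jobs in queues:
        if jobs:
            return name, jobs.popleft()
    return None

queues = [
    ("high",     deque()),                      # 4 CPUs, one job at a time
    ("medium",   deque()),                      # 4 CPUs, two jobs allowed
    ("prod",     deque(["big-run"])),           # 8 CPUs
    ("leftover", deque(["test-1", "test-2"])),  # 2 CPUs, testing only
]

print(pick_next_job(queues))  # ('prod', 'big-run') - higher queue wins
print(pick_next_job(queues))  # ('leftover', 'test-1') - nothing left above it
```

A real scheduler would also track CPU counts and per-queue concurrency limits, but the priority ordering alone is enough to show why the leftover queue starves whenever production work exists.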