As noted above (and in your own statement), 200M sounds definitely wrong. I'd also be surprised if it's that difficult to fix; sounds more like laziness to me (and I've worked with RDBMSes for many years).
In any case, the DB is likely IO bound as mentioned above. Check the process statuses.
You can use top to get page fault counts http://linux.die.net/man/1/top
or use iostat http://linux.die.net/man/1/iostat
for more IO-specific details.
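A quick sketch of what to look at on a Linux box (assumes `/proc` is mounted; exact counter names can vary slightly by kernel version):

```shell
# Major page faults mean the kernel had to go to disk -- a fast-growing
# pgmajfault counter is a strong hint the box is paging / IO bound.
grep -E '^(pgfault|pgmajfault) ' /proc/vmstat

# Processes stuck in "D" (uninterruptible sleep) are usually
# waiting on disk IO; a persistent crowd of them points the same way.
ps -eo stat,comm | awk '$1 ~ /^D/ {print}'
```

Run the `grep` a couple of times a few seconds apart; it's the rate of change of `pgmajfault`, not the absolute value, that matters.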
Given the scenario described, I'd also have a look at the SQL code; they may not be using the correct indexes, or there may not be an index that matches their needs.
How much RAM do you have? A large SGA may help to cache the data they keep asking for.
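Before growing the SGA, it's worth checking how much headroom the box actually has (a sketch, assuming Linux; `MemAvailable` is roughly what could be handed to a larger SGA without pushing the machine into swap):

```shell
# Total vs. available memory -- if MemAvailable is already small,
# a bigger SGA will just trade cache hits for swapping.
grep -E '^(MemTotal|MemAvailable):' /proc/meminfo

# Same picture in MB, if procps is installed.
free -m
```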
If it's a lot of large queries over many new rows, a rewrite sounds good, as you won't be able to take advantage of caching.