Quote:
Originally Posted by chrism01
Yeah, you'll need stuff like this to monitor the SQLs that run against your database; when you see a big consumer, take note of it and contact whoever submitted it.
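For example, in MySQL you can spot the current big consumers straight from the server itself (the 10-second threshold below is just an illustration, pick whatever counts as "big" for you):

```sql
-- Quick look at everything running right now:
SHOW FULL PROCESSLIST;

-- Or list only queries that have been running longer than 10 s,
-- longest first, so you know who to contact:
SELECT id, user, host, db, time, info
FROM information_schema.PROCESSLIST
WHERE command <> 'Sleep' AND time > 10
ORDER BY time DESC;
```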
But again, that's a bad way of doing things: the users will of course keep trying to get the maximum out of your DB, and you'll be running around trying to stop people from doing whatever they're doing without really achieving anything.
As dugan basically wrote, you should first of all go through my.cnf and make sure MySQL gets all the resources it's supposed to get, so that the users get the maximum out of that DB. Then it's in the users' hands to write their queries so that they don't run too long.
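Something along these lines in my.cnf is a reasonable starting point. The numbers here are only examples, you have to size them to your own hardware and workload:

```ini
# /etc/my.cnf -- example values only, size these to YOUR box
[mysqld]
innodb_buffer_pool_size = 4G     # often 50-70% of RAM on a dedicated DB server
innodb_log_file_size    = 256M
max_connections         = 200
tmp_table_size          = 64M
max_heap_table_size     = 64M

# log queries slower than 5 s so you (and the users) can see what needs fixing
slow_query_log          = 1
slow_query_log_file     = /var/log/mysql/slow.log
long_query_time         = 5
```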
Of course things get more complex if you have 2 or more "types" of users, for example user type "A" who runs small queries which are supposed to return almost immediately and user type "B" which trashes your DB every time s/he sneezes.
In such a case you'll need to separate them and assign them to different physical servers or VMs. With VMs you then configure, on the host that runs them, how many resources (RAM, disk, CPU, network) each single VM is allowed to use, so that the small type "A" users always get a result back quickly even if there are 100 type "B" users trying to screw up their DB.
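On a libvirt/KVM host that could look roughly like this (the VM name and numbers are made up, adjust them to your setup):

```shell
# Cap the "type B" VM so it can't starve the latency-sensitive type "A" VMs.
virsh memtune  typeB-vm --hard-limit 8388608    # hard RAM ceiling (~8 GiB, in KiB)
virsh schedinfo typeB-vm --set cpu_shares=512   # half the default CPU weight (1024)
virsh blkiotune typeB-vm --weight 200           # lower disk I/O priority (100-1000)
```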
Once you have optimized everything and your monitoring no longer turns up any improvement potential, you shouldn't worry about read/write I/O any more: when a type "B" user tries to retrieve 10 TB of data, you won't be able to stop that I/O from happening anyway.
And by the way, to come back to your original question: you cannot set up a DB to "drop all SQLs that will read/write more than e.g. 10 MB", because the DB doesn't know before executing a statement how much I/O it will perform; that depends on many factors. Such a feature wouldn't be impossible to build, but as of 2014 neither MySQL, SQLite, PostgreSQL nor Oracle has it.
The one that comes nearest is Oracle, which kills the SQL AFTER such a limit is reached, but that's an ugly way of dealing with the problem, and as far as I know the other DBs have nothing like it.
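In Oracle that after-the-fact kill is done with a resource profile. Note the limit is counted in logical block reads, not megabytes, and the profile name and threshold below are just illustrative (you also need RESOURCE_LIMIT = TRUE set in the instance parameters for it to be enforced):

```sql
-- Sketch: abort any single call once it has read ~1M blocks.
CREATE PROFILE heavy_users LIMIT
  LOGICAL_READS_PER_CALL 1000000;

-- Attach the profile to the "type B" account:
ALTER USER type_b_user PROFILE heavy_users;
```

The statement gets an error only after it has already burned through the limit, which is exactly the ugliness described above: the I/O has happened by the time Oracle steps in.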