Originally Posted by David the H.
awk's rand function outputs a floating-point number between zero and one, printed by default with six decimal places (e.g. "0.566305"). Multiplying the result by 10000 shifts it so there are four digits in front of the decimal point, which is then truncated to an integer by the %d (decimal integer) printf conversion. You could use any multiplier of 100 or greater, really.
Right. In general, you can use int(rand() * N) to produce a random integer between 0 and N-1 inclusive; the value will always be less than N. This works for N up to at least a million. Remember to call srand() first, though, to set a new random seed based on the system time; otherwise you may get the same sequence on every run (depending on the awk variant and version).
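To make that concrete, here is a minimal sketch of the pattern (the multiplier 10000 and the count of five are arbitrary choices for illustration):

```shell
# Print five random integers; srand() seeds from the current time,
# so omitting it would give the same sequence on every run.
awk 'BEGIN {
    srand()
    for (i = 0; i < 5; i++)
        printf "%d\n", int(rand() * 10000)   # int() truncates, so results fall in 0..9999
}'
```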
Originally Posted by David the H.
I think NA's point with the separate config file is that the script itself should hold only the processing code while things that are user-specified should be set as inputs to the script in some fashion, and in principle I agree.
Exactly. For something as simple as this script it is not necessary, but it is an important principle.
(As an aside, I think two different example solutions are always better than just one. It gives the reader multiple viewpoints to the problem, and may eventually lead to a better solution than either one alone. I think it is a very good thing you showed a different approach, David the H. Thank you.)
I know of several largeish Linux clusters that use a script to maintain firewall settings, with the settings contained in the script itself. Roughly once a year somebody manages to lock up a cluster by mis-editing the script. I did write a replacement script that uses separate config files, and cancels the changes unless the user expressly and interactively confirms the changes, but it was rejected: apparently the rare firewall lock-up is less of a disruption than changing something that 'works'.
Which is kind of my point: learn the way to do things efficiently in the long term. You never know which script ends up being used for the next decade or so. Separating configuration from the script is one of those ways: it lowers the probability of typos creeping in when settings change. For simple variables, sourcing a config file (like in my example above) works well, and you can use
sh -c '. path/to/config.file >/dev/null 2>/dev/null' || echo "Error in config file!"
to check the syntax of the configuration file. (The above command does the sourcing in a separate shell, so the current shell is unaffected by any variables set or errors raised by the config file.)
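Putting the two halves together, a script using this pattern might look like the following sketch (the config contents and the INTERVAL/LOGFILE variable names are made up for illustration; here the sample config is written to a temp file just so the example is self-contained):

```shell
# Create a sample config file for the demonstration.
CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
INTERVAL=60
LOGFILE=/var/log/myscript.log
EOF

# Validate the config in a throwaway shell first; only source it into
# the current shell if that separate shell could parse it cleanly.
if sh -c ". '$CONFIG'" >/dev/null 2>&1 ; then
    . "$CONFIG"
    echo "Config OK, INTERVAL=$INTERVAL"
else
    echo "Error in config file '$CONFIG'!" >&2
fi
rm -f "$CONFIG"
```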
If you have the settings within the script, the only way you can test it is by running it. If you were to expand your script into a full service, you could do the check before reload -- like Apache does. Then, when an administrator modifies the configuration and issues sudo service this-service reload, the configuration is checked first. If there is a problem in the configuration, the service is not taken offline; the administrator only gets a warning that there is a b0rk in the config, and no changes are applied. (It still surprises me how many admins use restart instead of reload -- and then do a headdesk when their buggy changes stop the service from working, and they get an angry call from their boss and/or client. As you can see from my examples above, avoiding stuff like that does not require that much more effort.)
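The check-before-reload idea can be sketched as a reload handler in an init-style script. Everything here is hypothetical: the check_config helper, the /etc/myservice.conf path, and the pidfile location are assumptions for illustration, not code from any real service:

```shell
# Returns success only if the config file parses cleanly in a throwaway shell.
check_config() {
    sh -c ". '$1'" >/dev/null 2>&1
}

# Hypothetical reload handler: verify the config first, and only then
# signal the running daemon to re-read it. On error, the old (working)
# configuration stays in effect and the service is never taken offline.
reload() {
    if check_config /etc/myservice.conf ; then
        kill -HUP "$(cat /var/run/myservice.pid)"
    else
        echo "Config error: keeping the old configuration, service stays up." >&2
        return 1
    fi
}
```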
There is a secondary reason why I wrote the example script the way I did. I suspect you will eventually need to change the script from a standalone script that runs continuously into either a script run regularly via cron, or a full-blown service. The former is trivial: just omit the loop and the sleep $INTERVAL bit. For the latter, I'd personally write a small C program instead, mainly to keep resource use minimal. The logic would follow the script very closely, though.
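The standalone-to-cron conversion really is just dropping the loop. A sketch of the difference (do_work and INTERVAL are placeholder names, not from the original script):

```shell
# Standalone daemon-style script: loop forever, pausing between runs.
standalone() {
    while true ; do
        do_work
        sleep "$INTERVAL"
    done
}

# Cron version: do the work exactly once; cron supplies the scheduling,
# e.g. via a crontab line like:  */5 * * * * /usr/local/bin/myscript
cron_version() {
    do_work
}
```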
(The typical reason for moving to cron or a service script is ease of maintenance, for example through a web-based interface. Standalone scripts tend to be a pain to manage automatically -- each one needs its own management code -- while cron scripts and services follow very simple rules and are therefore much easier to manage: you only need one interface to manage all possible cron scripts, and one to manage all service scripts, regardless of the script contents.)