Old 04-20-2008, 10:47 PM   #1
ta0kira
Senior Member
 
Registered: Sep 2004
Distribution: FreeBSD 9.1, Kubuntu 12.10
Posts: 3,078

Rep: Reputation: Disabled
bash command substitution performed "as encountered"?


I realized this after hours of trying to test my program (looking in the wrong places), unable to get my output to go to the terminal. Basically, the program logs output to a log file, so I was trying to provide `tty` as the log file argument while also reading from piped input.

Apparently bash performs command substitution as it's encountered, which means it runs in the context of its associated command-line element. Take the outputs of the two lines below as an example:
Code:
bash> echo `tty`
/dev/pts/3

bash> true | echo `tty`
not a tty
I guess I always thought of command substitution as a sort of pre-processor. It's stuff like the code below that made me think that in the first place:
Code:
bash> `echo exec` sleep 1
Is this a standard way for substitutions to be made among *nix shells? Is there a way to get the substitution to take place before anything else on the line is executed? Thanks.
ta0kira
 
Old 04-21-2008, 01:58 AM   #2
bigearsbilly
Senior Member
 
Registered: Mar 2004
Location: england
Distribution: Mint, Armbian, NetBSD, Puppy, Raspbian
Posts: 3,515

Rep: Reputation: 239
The pseudo-filename for the terminal of a process is usually /dev/tty.
Is it 'eval' you are looking for?
I don't quite get the problem.
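If /dev/tty is the route you take, a minimal sketch of why it helps: a redirection to /dev/tty reaches the controlling terminal even when the command sits inside a pipeline.
Code:
bash> true | { echo "to the terminal" > /dev/tty; }
to the terminal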
 
Old 04-21-2008, 06:32 AM   #3
ta0kira
Senior Member
 
Registered: Sep 2004
Distribution: FreeBSD 9.1, Kubuntu 12.10
Posts: 3,078

Original Poster
Rep: Reputation: Disabled
It isn't really a problem. I was just wondering whether this is consistent behavior that I can count on (not the tty substitution itself, but the context of its execution). I can obviously just type tty and then substitute manually, but I'm more concerned with whether making command substitutions in their own respective command-line contexts is standard across Bourne shells.
ta0kira
 
Old 04-21-2008, 10:48 AM   #4
bgoodr
Member
 
Registered: Dec 2006
Location: Oregon
Distribution: RHEL[567] x86_64, Ubuntu 17.10 x86_64
Posts: 221

Rep: Reputation: 36
Quote:
Originally Posted by ta0kira
Basically, the program logs output to a log file, so I was trying to provide `tty` as the log file argument while also reading from piped input.
Hi ta0kira,

I wouldn't recommend emitting output to a logfile using the tty command. Does your Bourne shell script need to manage a subprocess that writes to a logfile at the same time as (in parallel with) the operation of the script? If so, use the & operator to spawn a subprocess and manage that subprocess from the calling shell, as sketched below.
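A minimal sketch of that approach (the task name is a hypothetical stand-in):
Code:
#!/bin/sh
# "longRunningTask" stands in for whatever the subprocess does
longRunningTask >> /tmp/task.log 2>&1 &
taskPid=$!
# ... the calling shell continues its own work here ...
wait $taskPid   # collect the subprocess before exiting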

Presuming it is not that complicated a situation: as for reading input in a Bourne shell, there may be various ways to do so, but the one I know of is the read command. Here is a small script to demonstrate its use:

Code:
#!/bin/sh
# -*-mode: Shell-script; indent-tabs-mode: nil; -*-

logFile=/tmp/myLogFile.txt
while read someValue
do
    echo "someValue is <$someValue>"
    if [ "$someValue" = "q" ]
    then
        # quitting
        break
    fi
    echo "$someValue" >> "$logFile"
done
echo "User has quit. User input lines were:"
cat "$logFile"
# for this example, get rid of the file:
rm -f "$logFile"
Here is a transcript of its use:

Code:
/tmp$ /tmp/script1.sh
foo
someValue is <foo>
bar
someValue is <bar>
baz
someValue is <baz>
q
someValue is <q>
User has quit. User input lines were:
foo
bar
baz
$
Note that I'm emitting the output into some logfile. The script blocks waiting for user input from standard input in the call to read that is the subcommand of the while command, and so there are no child processes running "in the background" with respect to that script.

AFAIK, the read command is portable between different Bourne shell implementations (such as those on Solaris, Linux, Cygwin, and MKS). And Bourne shell is a good choice for portability (stay away from Korn and C shell for portability reasons).

bgoodr
 
Old 04-21-2008, 08:49 PM   #5
ta0kira
Senior Member
 
Registered: Sep 2004
Distribution: FreeBSD 9.1, Kubuntu 12.10
Posts: 3,078

Original Poster
Rep: Reputation: Disabled
Thanks for the suggestions. The log-to-tty thing is just for debugging. Normally I'll have a separate terminal open for scrolling log output as I run the program in the first terminal. The process does spawn child processes, but they all use the same log file format and they all inherit the same log file; the system is designed so that all of them can share it. The real issue isn't the tty, however; that's just what made me notice it. There isn't really an issue that needs to be solved, either. I'm just trying to get an idea of whether all shells make command substitutions like this, or whether some actually do it as a first-pass preprocessor.
ta0kira
 
Old 04-21-2008, 09:13 PM   #6
osor
HCL Maintainer
 
Registered: Jan 2006
Distribution: (H)LFS, Gentoo
Posts: 2,450

Rep: Reputation: 78
This is indeed an interesting dilemma. It is even hard to find in the standard, but here’s what it says:
Quote:
Originally Posted by SUSv3 2.12 Shell Execution Environment
Utilities other than the special built-ins (see Special Built-In Utilities) shall be invoked in a separate environment that consists of the following. The initial value of these objects shall be the same as that for the parent shell, except as noted below.
  • Open files inherited on invocation of the shell, open files controlled by the exec special built-in plus any modifications, and additions specified by any redirections to the utility
  • Current working directory
  • File creation mask
  • If the utility is a shell script, traps caught by the shell shall be set to the default values and traps ignored by the shell shall be set to be ignored by the utility; if the utility is not a shell script, the trap actions (default or ignore) shall be mapped into the appropriate signal handling actions for the utility
  • Variables with the export attribute, along with those explicitly exported for the duration of the command, shall be passed to the utility environment variables
<SNIP>

A subshell environment shall be created as a duplicate of the shell environment, except that signal traps set by that shell environment shall be set to the default values. Changes made to the subshell environment shall not affect the shell environment. Command substitution, commands that are grouped with parentheses, and asynchronous lists shall be executed in a subshell environment. Additionally, each command of a multi-command pipeline is in a subshell environment; as an extension, however, any or all commands in a pipeline may be executed in the current environment. All other commands shall be executed in the current shell environment.
So it seems the behavior you are encountering is “each command of a multi-command pipeline is in a subshell”. In your example, the subshell executes the following (with stdin from the pipe):
Code:
echo `tty`
Then the command substitution runs in its own subshell, which inherits stdin from the first subshell.

Notice, however, that the standard has an extension in which all commands in a pipeline may be executed without a subshell, in which case you would get the behavior you expected.

So I guess the answer is: if you want to remain portable, avoid the construct, because its behavior is ambiguous depending on whether or not the shell is using that extension.
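One workaround sketch (reusing the terminal name from the earlier examples): perform the substitution on a line of its own first, so it runs in the current shell environment, and then use the result in the pipeline.
Code:
bash> term=`tty`
bash> true | echo "$term"
/dev/pts/3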
 
Old 04-22-2008, 12:38 AM   #7
bgoodr
Member
 
Registered: Dec 2006
Location: Oregon
Distribution: RHEL[567] x86_64, Ubuntu 17.10 x86_64
Posts: 221

Rep: Reputation: 36
Quote:
Originally Posted by ta0kira
The process does spawn child processes, but they all use the same log file format and they all inherit the same log file.
I would not rely upon multiple subprocesses using that same inherited logfile without colliding eventually (presuming you aren't using some file locking); the main symptom would be output lines from process #1 written half-way into the logfile, with output from process #2 starting right in the middle. But then again, you said this was for debugging, so maybe a little messy output doesn't matter in this case.

As for the question about context for when `tty` is executed, I normally expect there to be a separate UNIX process taken up by each command in the pipeline:

Code:
command1 | command2 | command3 ...
meaning a separate process for command1, another separate process for command2, yet a third for command3, and so on.
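One way to observe this (a sketch; the exact listing varies by shell and system):
Code:
bash> sleep 10 | sleep 10 &
bash> ps -o pid,ppid,comm
# both sleep commands appear as separate processes,
# each with the interactive shell as its parent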

You might think of command substitution as a sort of pre-processor, and that model works for environment variable expansion, for example, but it is probably not helpful in the case of the backtick operator. Per the examples in your original post:
Code:
bash> echo `tty`
/dev/pts/3
In this command pipeline there is only one builtin command, echo, but there is a child shell that invokes the tty command, which is not a builtin AFAIK. That tty command's immediate parent is the interactive shell you are typing the command into. Interactive shells have terminals associated with them, and so the tty command reports that terminal's device file path.

For the second command pipeline example you provided:

Code:
bash> true | echo `tty`
not a tty
Here, the true command executes in its own subshell, and its parent process is the interactive shell. But that interactive shell also invokes a non-interactive shell to house the second command, and it does so in order to hook the standard output of the first command up to the standard input of that second subshell for the echo `tty` command. In that second shell there is yet a third child process for the tty call, but this time, because of the intervening pipe, the standard input that tty examines is the pipe rather than the terminal. tty(1) reports on its standard input, so it is not surprising that this latter, third process reports "not a tty".
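A quick sketch showing that it is standard input that matters (the device name will vary):
Code:
bash> true | tty
not a tty
bash> true | tty < /dev/tty
/dev/pts/3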

Here is a little diagram of the process hierarchy to hopefully clarify the above description, with the vertical lines indicating the parent/child process relationship:

Code:
echo `tty`   executes as:

    interactive_shell
        |
        +-- tty

true | echo `tty`    executes as:

    interactive_shell
        |
        +-- true
        |
        +-- non_interactive_shell
                |
                +-- tty
Notice that I did not include the "echo" commands here, because I believe echo is a shell builtin and not a separate process. I don't know if "echo" is a builtin for all Bourne shell implementations, but I wouldn't write your scripts to rely upon it being a builtin.
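One way to check in a given shell (bash shown; other shells may answer differently):
Code:
bash> type echo
echo is a shell builtin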

Perhaps the easiest thing to do is to communicate the terminal to use through an environment variable that all of your child processes inherit:

Code:
MY_LOGFILE=`tty`; export MY_LOGFILE
# ... call a bunch of background processes that reference $MY_LOGFILE to log debugging output ...
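A child script would then simply append to it, for example:
Code:
echo "debug message" >> "$MY_LOGFILE"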
bgoodr
 
Old 04-22-2008, 01:01 PM   #8
ta0kira
Senior Member
 
Registered: Sep 2004
Distribution: FreeBSD 9.1, Kubuntu 12.10
Posts: 3,078

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by bgoodr
I would not rely upon multiple subprocesses using that same inherited logfile without colliding eventually (presuming you aren't using some file locking), the main symptom being output lines from process #1 being half-way written into the logfile, and then an output from process #2 starting right in the middle. But then again, you said this was for debugging, so maybe a little messy output doesn't matter to you in this case.
The file is opened line-buffered, so all output segments are written at once. File locks aren't necessary for appending line output where each line can stand alone. A write system call will write the entire segment at one time (within reason; at least up to the size of a block, I assume), and while it's doing that nothing else can happen unless there's a second core or processor. I don't know for sure, but I assume that a kernel supporting "guaranteed append" mode wouldn't allow a second write system call on the same file in the middle of another one.

Additionally, I need to see the time correlation between actions of all processes. Yes, I use timestamps, but I don't have time to interleave 10+ different log files to see in what order operations took place. It's sort of like a transcript of numerous conversations held between a server and multiple clients.
Code:
//compile and call using one argument to denote a log file

//_GNU_SOURCE must be defined before any header, for TEMP_FAILURE_RETRY
#define _GNU_SOURCE

#include <errno.h>
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define CHILDREN      128
#define TRANSMISSIONS 64


static void write_output(const char*);


int main(int argc, char *argv[])
{
    if (argc != 2)
    {
        fprintf(stderr, "call with one file name\n");
        exit(1);
    }


    //open the file and duplicate the descriptor

    int old_fd = -1;
    if ( (old_fd = open(argv[1], O_RDWR | O_CREAT | O_APPEND, 0644)) < 0 ||
         dup2(old_fd, STDOUT_FILENO) < 0 )
    {
        fprintf(stderr, "could not open or create file '%s': %s\n", argv[1], strerror(errno));
        exit(1);
    }

    close(old_fd);
    ftruncate(STDOUT_FILENO, 0);

#ifdef BUFFERED
    //this is necessary in case the default isn't line buffering
    setlinebuf(stdout);
#endif


    unsigned int number_left = CHILDREN;
    pid_t new_child = -1, first_child = -1;

    fprintf(stderr, "parent process: %i\n", (int) getpid());


    //fork numerous processes into a single process group

    while (number_left && (new_child = fork()) > 0)
    {
        fprintf(stderr, "new child started: %i\n", (int) new_child);
        if (number_left-- == CHILDREN) first_child = new_child;
        setpgid(new_child, first_child);
    }


    if (new_child < 0)
    {
        fprintf(stderr, "new child error: %s\n", strerror(errno));
        killpg(first_child, SIGKILL);
    }


    //CHILD

    if (!new_child)
    {
        //stop here; the parent resumes the whole group at once
        raise(SIGSTOP);

        unsigned int transmissions_left = TRANSMISSIONS;

        while (transmissions_left--)
        {
            write_output("this is line 1");
            write_output("this is line 2");
            write_output("this is line 3");
            write_output("this is line 4");
            write_output("this is line 5");
            write_output("this is line 6");
            write_output("this is line 7");
            write_output("this is line 8");
        }

        _exit(0);
    }


    //PARENT

    else
    {
        sleep(1);
        killpg(first_child, SIGCONT);

        int status = 0;
        while (waitpid(-first_child, &status, 0) > 0)
            if (status != 0) fprintf(stderr, "child error: %i\n", status);
    }

    exit(0);
}


//output function

static void write_output(const char *dData)
{
    if (dData)
    {
#ifdef BUFFERED
        printf("{%i} %s %s %s %s %s %s %s %s\n", (int) getpid(),
          dData, dData, dData, dData, dData, dData, dData, dData);
#else
        static char format_buffer[256];

        snprintf(format_buffer, 256, "{%i} %s %s %s %s %s %s %s %s\n",
          (int) getpid(), dData, dData, dData, dData, dData, dData, dData, dData);

        if (TEMP_FAILURE_RETRY( write(STDOUT_FILENO, format_buffer, strlen(format_buffer)) ) == (ssize_t) -1)
            fprintf(stderr, "{%i} write error: %s\n", (int) getpid(), strerror(errno));
#endif
    }
}
I think the example above adequately illustrates how unlikely it is that one write system call will be interrupted by another. Compile with -DBUFFERED to demonstrate FILE-buffered output, and without it to demonstrate direct writes.
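For reference, hypothetical compile-and-run commands (the source file name is assumed):
Code:
$ cc -DBUFFERED -o logtest logtest.c   # FILE-buffered (line-buffered stdout) variant
$ cc -o logtest logtest.c              # direct write(2) variant
$ ./logtest /tmp/test.log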
ta0kira

Last edited by ta0kira; 04-22-2008 at 03:41 PM.
 
Old 04-22-2008, 03:39 PM   #9
ta0kira
Senior Member
 
Registered: Sep 2004
Distribution: FreeBSD 9.1, Kubuntu 12.10
Posts: 3,078

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by osor
This is indeed an interesting dilemma. It is even hard to find in the standard, but here’s what the standard says:
Thanks! That's very helpful.
ta0kira
 
Old 04-22-2008, 10:52 PM   #10
bgoodr
Member
 
Registered: Dec 2006
Location: Oregon
Distribution: RHEL[567] x86_64, Ubuntu 17.10 x86_64
Posts: 221

Rep: Reputation: 36
Quote:
Originally Posted by ta0kira
The file is opened line-buffered, so all output segments are written at once. File locks aren't necessary for appending line output where each line can stand alone.
That might be true from within the same process, in the same thread, using the same file descriptor, but that isn't the case in your earlier description, which involved only Bourne shell scripts, not C code. I don't believe it holds when multiple processes are writing into the file using different file descriptors. I also do not believe you could arrange your Bourne shell scripts to coordinate to use the same file descriptor in a portable way (but if you or anyone knows the portable Bourne shell syntax for that, please do tell).

Quote:
Originally Posted by ta0kira
Additionally, I need to see the time correlation between actions of all processes. Yes, I use timestamps, but I don't have time to interleave 10+ different log files to see in what order operations took place.

Have you thought about concatenating and then sorting all of the log files together, with a separate log file for each separate thread in each separate process you are monitoring? You could do so after your run completes (though not dynamically, if you want to watch the output live). That is how I do it. Granted, if we are talking about a lot of data, then you have to worry about the capacity of the sort command, which would foil that approach.
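A hypothetical post-run merge, assuming one logfile per process and timestamps that sort lexically:
Code:
cat /tmp/mylogs/*.log | sort > /tmp/mylogs/merged.log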

Anyhow, good luck!
bgoodr
 
Old 04-23-2008, 11:42 AM   #11
ta0kira
Senior Member
 
Registered: Sep 2004
Distribution: FreeBSD 9.1, Kubuntu 12.10
Posts: 3,078

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by bgoodr
That might be true from within the same process, in the same thread, using the same file descriptor, but that isn't the case from your earlier description involving only Bourne shell scripts, and not C code. I don't believe that is true when multiple processes are writing into the file using different file descriptors.
My example of 128 processes trying to write 512 lines of their own to the same file at the same time didn't convince you? It convinced me, so I think I'll stick with it.
Quote:
Originally Posted by bgoodr
Have you thought about concatenating and then sorting all of the log files together, where you would have a separate log file for each separate thread in each separate process you are monitoring? You could do so after your run completes (of course, not dynamically if you want to watch the output)
The events occur within microseconds of each other and my timestamps aren't that accurate. Right now the timestamps are already too long and they'd probably have to be 20 digits to be able to cat/sort them.

Here's actual log output from a test. This is extremely short, and is probably the absolute least amount of logging for a single legitimate test run.
Code:
[20080422142605 svr: '' ()] /n/ new log file started
[20080422142605 svr: '' (rservr)] /n/ server name changed: '' -> 'server'
[20080422142605 svr: 'server' (rservr)] /n/ server started
[20080422142605 svr: 'server' (rservr)] /m/ server creating a new session
[20080422142605 svr: 'server' (rservr)] /m/ reading config from standard input (blocking further standard input)
[20080422142605 svr: 'server' (rservr)] /n/ starting new system client: rservrd -dx
[20080422142605 svr: 'server' (rservr)] /n/ new client added: rservrd -dx (27600)
[20080422142605 cli: '(none)' (rservrd / 27756)] /n/ initializing client
[20080422142605 cli: '(none)' (rservrd / 27756)] /m/ inherited process ID used (27600)
[20080422142605 cli: '(none)' (rservrd / 27600)] /n/ client initialized
[20080422142605 cli: '(none)' (rservrd / 27600)] {rsvp-netcntl} loading plug-in commands
[20080422142605 cli: '(none)' (rservrd / 27600)] {rsvp-netcntl} plug-in commands loaded
[20080422142605 cli: '(none)' (rservrd / 27600)] {rsvp-rqconfig} loading plug-in commands
[20080422142605 cli: '(none)' (rservrd / 27600)] {rsvp-rqconfig} plug-in commands loaded
[20080422142605 cli: '(none)' (rservrd / 27600)] /n/ internal command plug-ins loaded
[20080422142605 svr: 'server' (rservr)] /n/ server loop entered
[20080422142605 svr: 'server' (rservr)] /n/ client 'rservrd' allowed full server control (27600)
[20080422142605 svr: 'server' (rservr)] /n/ client 'rservrd' registered as type 0x0028 (27600)
[20080422142605 cli: '(none)' (rservrd / 27600)] /n/ timing table received from 'server'
[20080422142605 cli: '(none)' (rservrd / 27600)] /n/ timing table processed
[20080422142605 cli: '(none)' (rservrd / 27600)] /n/ new timing settings compiled
[20080422142605 cli: 'rservrd' (rservrd / 27600)] /n/ client registered: 'rservrd' (0x0028)
[20080422142641 svr: 'server' (rservr)] /n/ server exit requested by 'rservrd' (27600)
[20080422142641 svr: 'server' (rservr)] /n/ server exiting
[20080422142641 svr: 'server' (rservr)] /n/ server loop exited
[20080422142641 svr: 'server' (rservr)] /m/ normal server termination (27480)
That output is taken directly from a run of Resourcerver. Basically, a single application consists of the server plus all of the processes it forks. Though there are a lot of processes involved, they're all part of that same application.
ta0kira
 
Old 04-23-2008, 05:27 PM   #12
bgoodr
Member
 
Registered: Dec 2006
Location: Oregon
Distribution: RHEL[567] x86_64, Ubuntu 17.10 x86_64
Posts: 221

Rep: Reputation: 36
To be fair, I was a bit confused: I thought this was originally limited to shell script programs, given your original post. Now you have set me straight, and we are talking about C-based clients and servers. My apologies.

Quote:
Originally Posted by ta0kira
My example of 128 processes trying to write 512 lines of their own to the same file at the same time didn't convince you? It convinced me, so I think I'll stick with it.
That would be OK as long as you plan on deploying only on Linux and you write those log files locally, not out to an NFS server (see Note 2 below, since you are using O_APPEND in your open system call). Not writing the logfiles to NFS-mounted partitions would be a reasonable restriction IMO, since writing to the logfiles should probably be done fairly quickly so as not to disrupt the relative timing between clients and servers too much (no matter what you do, there is bound to be some distortion).

In your C-based example, the child processes inherit the same open file descriptor rather than opening new logfiles (see Note 1 below). If each child process had opened a brand-new file descriptor, then I believe you might have seen the file-resource collision I warned about; even then, you might not see it until much later, when the client and server are under heavy load. Things may or may not change in your real program (versus the C example), where the exec system call is brought into play in the child process(es).

Quote:
Originally Posted by ta0kira
The events occur within microseconds of each other and my timestamps aren't that accurate. Right now the timestamps are already too long and they'd probably have to be 20 digits to be able to cat/sort them.
I suspect that writing the log into a big chunk 'o' memory, and then dumping that memory out to the real logfile at the very end of the run, would not be out of the question if performance is important. Doing so might also let you get finer granularity on your timestamps, depending upon the size of each diagnostic output "line" you are recording. I don't know the system call API offhand, but there should be a way to get finer-grained timestamps as long as the logfile doesn't have to conform to some pretty-printed timestamp format.

Perhaps for debugging/diagnostics you don't care about the exact date and time of events, but rather their occurrence relative to other events (i.e., what happened in the client relative to what happened in the server, how long the wait between those events was, etc.). In that case, why not just store some long-enough number as the timestamp, in microseconds (or in clock ticks, presuming you are talking about a single machine and not distributed computing)? That might shorten the logfiles, mean less I/O going out, and mean less chance for the diagnostics to distort the timing you are presumably trying to debug.

Quote:
Originally Posted by ta0kira
Basically how it works is a single application consists of the server plus all of the processes it forks. Though there are a lot of processes involved, they're all a part of that same application.
ta0kira
Agreed. I presumed there was something more complex going on than a bag-o'-forks situation. Perhaps exec system calls are not even being done in the child processes, which throws additional complexity into the mix.

bgoodr

Notes:

Note 1: Refer to the fork(2) manpage for the kinds of resources that are inherited by the child during a fork system call. In your example, all open file descriptors are inherited, which includes the dup'ed logfile that you are writing into via STDOUT.

Note 2: Refer to the O_APPEND section of the open(2) manpage (from a Debian web site, but I think it loosely applies to other POSIX platforms). That section refers to NFS being a potential problem:
Code:
       O_APPEND
              The file is opened in append mode.  Before  each  write(2),  the
              file  offset  is  positioned  at the end of the file, as if with
              lseek(2).  O_APPEND may lead to corrupted files on NFS file sys-
              tems  if  more  than one process appends data to a file at once.
              This is because NFS does not support appending to a file, so the
              client  kernel has to simulate it, which can't be done without a
              race condition.
 
Old 04-23-2008, 09:20 PM   #13
osor
HCL Maintainer
 
Registered: Jan 2006
Distribution: (H)LFS, Gentoo
Posts: 2,450

Rep: Reputation: 78
Quote:
Originally Posted by bgoodr
I do not believe you could arrange your Bourne shell scripts to coordinate to use the same file descriptor either in a portable way (but if you or anyone knows the portable Bourne shell syntax for that, please do tell).
For what it’s worth, it is possible to do this. In a parent script, you could do:
Code:
exec 3>logfile
In any child scripts, just do:
Code:
echo "Message" >&3
 
Old 04-24-2008, 01:16 AM   #14
bgoodr
Member
 
Registered: Dec 2006
Location: Oregon
Distribution: RHEL[567] x86_64, Ubuntu 17.10 x86_64
Posts: 221

Rep: Reputation: 36
Quote:
Originally Posted by osor
For what it’s worth it is possible to do this.
Thanks osor. I always wanted to know the reason for that syntax. I believe I saw it used in autoconf-generated scripts (./configure), and now I know why.
 
  

