Old 10-12-2008, 12:41 AM   #16
gnashley
Amigo developer
 
Registered: Dec 2003
Location: Germany
Distribution: Slackware
Posts: 4,763

Rep: Reputation: 471

You could reduce the verbosity by having it echo only when ping fails. And as far as google not being available, if ping fails for the first domain, have it try an alternate domain before echoing the error (see the sketch below). I see nothing at all wrong with what you are doing, though that may not be much comfort coming from me...
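A minimal sketch of that idea (untested; the fallback hosts are just examples):

Code:
#!/bin/bash
# Stay quiet unless every host fails; try an alternate domain
# before declaring the connection dead.
net_up() {
	for HOST in google.com yahoo.com ; do
		ping -c1 -w1 "$HOST" &> /dev/null && return 0
	done
	echo "No internet connection." >&2
	return 1
}

net_up || exit 1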
 
Old 10-12-2008, 08:39 AM   #17
jong357
Senior Member
 
Registered: May 2003
Location: Columbus, OH
Distribution: DIYSlackware
Posts: 1,914

Original Poster
Rep: Reputation: 52
Thanks Gnashley. Turns out it ain't gonna work anyway. It takes way too long for ping to time out if the host is unreachable. I was hoping the deadline option would do the trick, but in order for -w to work, the address you're pinging has to be available. Remarkably stupid IMO.

There doesn't seem to be any general timeout for ping, unless I'm overlooking it.
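Maybe a watchdog would do it, killing ping by hand since -w apparently only counts once the address resolves. An untested sketch:

Code:
#!/bin/bash
# Run ping in the background and kill it ourselves after 5 seconds,
# covering the DNS-lookup hang that -w doesn't.
ping -c1 google.com &> /dev/null &
PING_PID=$!
( sleep 5 ; kill $PING_PID 2> /dev/null ) &
WATCHDOG=$!
if wait $PING_PID ; then
	echo "online"
else
	echo "offline"
fi
kill $WATCHDOG 2> /dev/null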
 
Old 10-12-2008, 10:47 AM   #18
gnashley
Amigo developer
 
Registered: Dec 2003
Location: Germany
Distribution: Slackware
Posts: 4,763

Rep: Reputation: 471
I get this right away if the connection is dead:
connect: Network is unreachable
You might use wget to try to download a known file from somewhere and check for 404 errors... not very elegant...
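A quick-and-dirty sketch of that (untested; the URL is only a placeholder):

Code:
# wget returns non-zero on 404s, DNS failures and timeouts alike,
# so the exit status alone answers whether we are online.
if wget -q --tries=1 --timeout=10 -O /dev/null http://www.google.com/ ; then
	echo "online"
else
	echo "offline"
fi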

Other ideas: use 'lynx --dump'. Maybe there's something useful here:

Code:
#!/bin/bash
# Copyright 2008 GilbertAshley <amigo@ibiblio.org>
# BashTrix wget is a minimal implementation of wget
# written in pure BASH, with only a few options.
# The original idea and basic code for this are Copyright 2006 Ainsley Pereira.
# The idea for verify_url is from code which is Copyright 2007 Piete Sartain
# But the above code fragments both still used 'cat'.
# Copyright 2008 Noam Postavsky worked out how to
# get rid of 'cat' and provided other improvements

VERSION=0.2
# Minimum number of arguments needed by this program
MINARGS=1

show_usage() {
echo "Usage: ${0#*/} [OPTIONS] URL"
echo "${0#*/} [-hiOqV] URL"
echo ""
echo "  -i FILE --input-file=FILE		read filenames from FILE"
echo "  -o FILE --output-document=FILE	concatenate output to FILE"
echo "  -q --quiet				Turn off wget's output"
echo "  -h --help				Show this help page"
echo "  -V --version				Show BashTrix wget version"
echo
exit
}

show_version() {
echo "BashTrix: wget $VERSION"
echo "BashTrix wget is a minimal implementation of wget"
echo "written in pure BASH, with only a few options."
exit
}

# show usage if '-h' or  '--help' is the first argument or no argument is given
case $1 in
	""|"-h"|"--help") show_usage ;;
	"-V"|"--version") show_version ;;
esac

# get the number of command-line arguments given
ARGC=${#}

# check to make sure enough arguments were given or exit
if [[ $ARGC -lt $MINARGS ]] ; then
 echo "Too few arguments given (Minimum:$MINARGS)"
 echo
 show_usage
fi

# process command-line arguments
for WORD in "$@" ; do
	case $WORD in
		-*)  true ;
			case $WORD in
				--debug) [[ $DEBUG ]] && echo "Long Option"
					DEBUG=1
					shift ;;
				--input-file=*) [[ $DEBUG ]] && echo "Long FIELD Option using '='"
					INPUT_FILE=${WORD:13}
					shift ;;
				-i) [[ $DEBUG ]] && echo "Short split FIELD Option"
					if [[ ${2:0:1} != "-" ]] ; then
					 INPUT_FILE=$2
					 shift 2
					else
					 echo "Missing argument"
					 show_usage
					fi ;;
				-i*) [[ $DEBUG ]] && echo "Short FIELD Option range -Bad syntax"
					echo "Bad syntax. Did you mean this?:"
					echo "-i ${WORD:2}"
					 show_usage
					shift ;;
				--output-document=*) [[ $DEBUG ]] && echo "Long FIELD Option using '='"
					DEST=${WORD:18}
					shift ;;
				-O) [[ $DEBUG ]] && echo "Short split FIELD Option"
					if [[ ${2:0:1} != "-" ]] ; then
					 DEST=$2
					 shift 2
					else
					 echo "Missing argument"
					 show_usage
					fi ;;
				-O*) [[ $DEBUG ]] && echo "Short FIELD Option range -Bad syntax"
					echo "Bad syntax. Did you mean this?:"
					echo "-i ${WORD:2}"
					 show_usage
					shift ;;
				-q|--quiet) BE_QUIET=1
					shift;;
			esac
		;;
	esac
done

# Starts reading from ${HOST}/${URL}. Throws away HTTP headers so
# page contents can be read from file descriptor "$1"
fetch-page()
{
    # eval's are necessary so that bash parses expansion of $1<> as a single token
    eval "exec $1<>/dev/tcp/${HOST}/80"
    eval "echo -e 'GET ${URL} HTTP/0.9\r\n\r\n' >&$1"
    # read and throw away HTTP headers, the end of headers is
    # indicated by an empty line (all lines are terminated \r\n)
    OLD_IFS="$IFS"
    IFS=$'\r'$'\n'
    while read -u$1 i && [ "${i/$'\r'/}" != "" ]; do : ; done
    IFS="$OLD_IFS"
}

# puts contents of ${HOST}/${URL} into ${DEST}
get_it()
{
# make sure $DEST starts empty
: > $DEST
fetch-page 3
fetch-page 4
# clear IFS, otherwise the bytes in it would read as empty
OLD_IFS="$IFS"
IFS=
# we read a single byte at a time from 3 with delimiter 'A',
# and from 4 with delimiter 'B'.
while read -r -n1 -dA -u3 A && read -r -n1 -dB -u4 B ; do
    # Now $A is the empty string if the true byte is 'A' or NULL, and
    # $B is the empty string if the true byte is 'B' or NULL.
    # Therefore if either $A or $B is not empty they have the true byte
    if [ -n "$B" ] ; then
        echo -n "$B" >> $DEST
    elif [ -n "$A" ] ; then
        echo -n "$A" >> $DEST
    else
        # both are empty so the true byte is NULL
	echo -en '\0' >> $DEST
    fi
done
# restore IFS
IFS="$OLD_IFS"
}

verify_url() {
exec 3<>"/dev/tcp/${HOST}/80"
echo -e "GET ${URL} HTTP/0.9\r\n\r\n" >&3
read -u3 i
if [[ $i =~ "200 OK" ]]; then
	echo 1
else
	echo 0
fi
}

strip_url() {
# remove the http:// or ftp:// from the RAW_URL
RAW_URL=$1
if [[ ${RAW_URL:0:7} = "http://" ]] ; then
	URL=${RAW_URL:7}
elif [[ ${RAW_URL:0:6} = "ftp://" ]] ; then
	URL=${RAW_URL:6}
else
	URL=${RAW_URL}
fi
}

show_error_404() {
if ! [[ $BE_QUIET ]] ; then
	echo "${HOST}/${URL}:"
	echo "ERROR 404: Not Found."
fi
}

if [[ $INPUT_FILE ]] ; then
	for RAW_URL in $(cat $INPUT_FILE) ; do
		# remove the http:// or ftp:// from the RAW_URL
		strip_url $RAW_URL
		# the HOST is the base name of the website
		HOST=${URL%%/*}
		# the URL is the remaining path to the file (plus the leading '/')
		URL=/${URL#*/}
		# if the --output-file is not being used, then the DEST is $(basename $URL)
		if [[ $DEST = "" ]] ; then
			DEST=${URL##*/}
		fi
		# make sure the URL exists
		if [[ "$(verify_url)" = 1  ]] ; then
			[[ $DEBUG ]] && echo "${HOST}/${URL} - ${GREEN}found."
			get_it
		else
			show_error_404
		fi
	done
else
	RAW_URL="$@"
	# this is the same as above, but for single files
	strip_url $RAW_URL
	HOST=${URL%%/*}
	URL=/${URL#*/}
	if [[ $DEST = "" ]] ; then
		DEST=${URL##*/}
	fi
	if [[ "$(verify_url)" = "1" ]] ; then
		get_it
	else
		show_error_404
	fi
fi
And here's something called 'lastbash' doing a similar thing that I just found yesterday:
http://sourceforge.net/project/showf...roup_id=182040

The pertinent lines from it look something like the code above:
Code:
# Function: HTTP GET method: builtin implementation {{{1
#-----------------------------------------------------------------------------
function http_get_builtin()
{
	# $1 - output file
	# $2 - host
	# $3 - port
	# $4 - path

	local REQUEST RET="0"

	# Construct the request
	REQUEST="GET $4 HTTP/1.0\r\nHost: $2:$3\r\nUser-Agent: ${PROG_TITLE}/${PROG_VER}\r\nConnection: Close\r\n\r\n"

	# Map the FD no.5 to tcp connection
	if exec 5<>/dev/tcp/$2/$3
	then
		# Send the request
		if echo -ne "${REQUEST}" >&5 2>/dev/null
		then
			# Save the response to the temporary file
			if dd <&5 >"${1}" 2>/dev/null
			then
				# Close the connection
				if exec 5>&-
				then
					RET="0"
				else
					RET="4"
				fi
			else
				RET="3"
			fi
		else
			RET="2"
		fi
	else
		RET="1"
	fi

	# Return the status code
	return "${RET}"
}
Both of the above use bash's builtin support for TCP connections. That may be easier to control than other tools, and it's certainly a much cooler way to do it. I'm a bit crazy about bash-only stuff, even if I'm no shell guru.

Of course, you've made me have a look at how I'm doing that in src2pkg. I just directly try to download the thing I'm after. I've not had any problems with it, but I nearly always have a reliable connection.
Code:
download_url() {
	if ! [[ $DOWNLOADER ]] ; then
		if [[ $(which wget) ]] ; then
			DOWNLOADER=wget
		elif [[ $(which rsync) ]] ; then
			DOWNLOADER=rsync
		elif [[ $(which curl) ]] ; then
			DOWNLOADER=curl
		elif [[ $(which lynx) ]] ; then
			DOWNLOADER=lynx
		else
			DOWNLOADER=""
		fi
	fi

	case $DOWNLOADER in
		wget)	echo $BLUE"Downloading ${URL_TYPE} with wget from: "$NORMAL"$URL_ADDRESS" ;
			wget --tries=3 --timeout=15 -O "${URL_DEST}" "${URL_ADDRESS}" &> /dev/null ;;
		rsync)	echo $BLUE"Downloading ${URL_TYPE} with rsync from: "$NORMAL"$URL_ADDRESS"
			# rsync takes the destination as an argument; it doesn't write to stdout
			rsync "${URL_ADDRESS}" "${URL_DEST}" 2> /dev/null ;;
		curl)	echo $BLUE"Downloading ${URL_TYPE} with curl from: "$NORMAL"$URL_ADDRESS" ;
			# silence stderr only; '&> /dev/null' would clobber the redirect into ${URL_DEST}
			curl -s "${URL_ADDRESS}" > "${URL_DEST}" 2> /dev/null ;;
		lynx)	echo $BLUE"Downloading ${URL_TYPE} with lynx from: "$NORMAL"$URL_ADDRESS" ;
			lynx -source "${URL_ADDRESS}" > "${URL_DEST}" 2> /dev/null ;;
		*)	echo "No downloader available." ;
			exit ;;
	esac
}
 
Old 10-12-2008, 11:46 AM   #19
jong357
Senior Member
 
Registered: May 2003
Location: Columbus, OH
Distribution: DIYSlackware
Posts: 1,914

Original Poster
Rep: Reputation: 52
Quote:
Originally Posted by gnashley View Post
I get this right away if the connection is dead:
connect: Network is unreachable
Using an IP with ping, that happens. Specifying google.com, it hangs for up to 30 seconds. I guess it tries to resolve the domain name first but can't, because there's no internet connection.

Thanks for the tidbits. I'm sure I'll come up with something sooner or later.
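Pinging a bare IP skips the DNS lookup entirely, so a dead connection fails right away. A sketch (the address is just an example of a well-known public IP):

Code:
# No name to resolve, so failure is immediate instead of a 30-second hang.
if ping -c1 -w1 4.2.2.2 &> /dev/null ; then
	echo "online"
else
	echo "offline"
fi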
 
Old 10-12-2008, 01:51 PM   #20
Woodsman
Senior Member
 
Registered: Oct 2005
Distribution: Slackware 14.1
Posts: 3,482

Rep: Reputation: 534
Quote:
Using an IP with ping, that happens. Specifying google.com it hangs for upwards to 30 seconds.
Place google.com and the IP address in /etc/hosts, and make sure the resolver checks the hosts file before querying DNS servers (strictly speaking, that lookup order comes from the hosts line in /etc/nsswitch.conf). My /etc/resolv.conf looks like this:

Code:
# /etc/resolv.conf
# used to resolve DNS server look-ups

# first query this computer's hosts file
search localhost

# next search this computer for a caching nameserver (e.g., dnsmasq)
nameserver 127.0.0.1

# next query a local network nameserver
nameserver 192.168.1.1

# next query outside DNS servers, usually automatically assigned by ISP
# search local.isp.net
I use a ping test in my rc.ntpd script. I ping to verify the pool servers are available; otherwise the script does not start ntpd. I placed the pool servers in /etc/hosts, so if the ping fails the result is immediate. Similarly, my main box, which runs ntpd, is the time server for my remaining boxes. They ping my main box before trying to sync time. All of my boxes use static addresses, and I have the names and IP addresses in /etc/hosts.
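The gist of that check, as a paraphrased sketch (assuming the pool server name is already in /etc/hosts):

Code:
# Only start ntpd if a pool server answers; the name resolves from
# /etc/hosts, so the lookup itself can't hang.
if ping -c1 -w1 pool.ntp.org &> /dev/null ; then
	/usr/sbin/ntpd -g
else
	echo "Time servers unreachable, not starting ntpd."
fi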

Regardless, use ping -w1 to limit the wait time.

My boxes are behind a router. My router is connected to a wireless sending unit (similar to a cable "modem"). Therefore grepping ifconfig would not tell me whether the internet is actually available. A ping test to the outside world is the only valid approach.

Do the Google people care that you ping their servers often? Probably not. Nonetheless, I had the same concern several years ago when I was on dial-up. I had a script that contained a list of a dozen search engines that I used for a ping test. In the script I randomly selected one of the dozen servers to "spread the load." The script chose a second address from the dozen when the first attempt failed.
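A rough reconstruction of that approach (the host list here is purely illustrative):

Code:
# Pick a random host from the list; if the first ping fails,
# try one more randomly chosen host before giving up.
HOSTS=(google.com yahoo.com ask.com altavista.com)
PICK=${HOSTS[$((RANDOM % ${#HOSTS[@]}))]}
if ! ping -c1 -w1 "$PICK" &> /dev/null ; then
	PICK=${HOSTS[$((RANDOM % ${#HOSTS[@]}))]}
	ping -c1 -w1 "$PICK" &> /dev/null || echo "No connection." >&2
fi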

The -c option limits the number of pings. I have a system alias for ping on all of my boxes that substitutes ping -c3 for ping. For online testing purposes, -c1 is sufficient. Give me a ping, Vasily. One ping only please.
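The alias amounts to this:

Code:
# e.g. in /etc/profile or ~/.bashrc
alias ping='ping -c3'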
 
Old 10-12-2008, 02:02 PM   #21
jong357
Senior Member
 
Registered: May 2003
Location: Columbus, OH
Distribution: DIYSlackware
Posts: 1,914

Original Poster
Rep: Reputation: 52
Thanks. All good info, but I can't likely modify people's config files without ruffling a lot of feathers, nor would I want to in the first place.

Also, the deadline option does nothing when a domain name is used instead of a static IP.

I would probably do as you suggest were it my own box, however. ping -c1 IP is what I'll roll with for the time being. It works out quite nicely, however lame and unimaginative it may be...

Last edited by jong357; 10-12-2008 at 02:03 PM.
 
Old 10-14-2008, 04:29 AM   #22
rg3
Member
 
Registered: Jul 2007
Distribution: Slackware Linux
Posts: 512

Rep: Reputation: Disabled
Maybe I'm too late for the party, but I'd check for the existence of a default route. If you have a default route, it usually means you have some way of accessing the internet. IMHO, it's the simplest and best option for something that works in 99% of the cases. This code, for example, is used in rc.inet1 to check if it should set the default route:

Code:
# Function to bring up the gateway if there is not yet a default route:
gateway_up() {
  if ! /sbin/route -n | grep "^0.0.0.0" 1> /dev/null ; then
    if [ ! "$GATEWAY" = "" ]; then
      echo "/etc/rc.d/rc.inet1:  /sbin/route add default gw ${GATEWAY} metric 1" | $LOGGER
      /sbin/route add default gw ${GATEWAY} metric 1 2>&1 | $LOGGER
    fi
  fi
}
The key here is the 'if ! /sbin/route -n | grep "^0.0.0.0" 1> /dev/null ; then' which reads "if there's no default route, then".
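Pulled out on its own, the test would look something like this:

Code:
# A routing-table line starting with 0.0.0.0 is the default route.
if /sbin/route -n | grep "^0.0.0.0" 1> /dev/null ; then
	echo "default route present, probably online"
else
	echo "no default route, offline"
fi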

Last edited by rg3; 10-14-2008 at 04:37 AM. Reason: Grammar
 
Old 10-14-2008, 04:58 AM   #23
keefaz
Senior Member
 
Registered: Mar 2004
Distribution: Slackware
Posts: 4,614

Rep: Reputation: 136
Checking the route does not take care of DNS errors, though, and for a 100% working internet connection you need a working DNS setup, IMHO.
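You could combine the two, something like this sketch ('host' is from bind-utils; nslookup would do just as well):

Code:
# Check the default route first, then do an actual lookup to catch DNS breakage.
if /sbin/route -n | grep -q "^0.0.0.0" && host google.com > /dev/null 2>&1 ; then
	echo "route and DNS both OK"
else
	echo "no default route, or DNS is broken"
fi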
 
Old 10-14-2008, 01:29 PM   #24
rg3
Member
 
Registered: Jul 2007
Distribution: Slackware Linux
Posts: 512

Rep: Reputation: Disabled
Having a default route and a bad DNS configuration falls in the other 1% of cases. =)
 
Old 10-14-2008, 01:48 PM   #25
keefaz
Senior Member
 
Registered: Mar 2004
Distribution: Slackware
Posts: 4,614

Rep: Reputation: 136
Well, that depends on network topology, I guess. In my case it is the ADSL modem/router that accounts for 99% of my internet problems.
 
Old 10-14-2008, 03:57 PM   #26
bassmadrigal
Member
 
Registered: Nov 2003
Location: Newport News, VA
Distribution: Slackware
Posts: 246

Rep: Reputation: 52
Maybe this is a little late, but I did up a quick script that would do this as well. Mine checks for an external IP from whatismyip.org, then uses a regexp to verify that it is an IP (I do realize that 999.999.999.999 would show as valid too, but since we got it back from the internet, the connection is working anyway). From there you can change the script to do whatever you want it to.

Keep in mind, though, that whatismyip.org doesn't like to be queried more than 3 times in 60 seconds; in fact, it will dump out an error. So if you will be checking from the same network more than 3 times in 60 seconds, this script won't work.

Code:
#!/bin/bash

# plain -s (--silent) is what's wanted here; "-silent -i" would
# include the HTTP headers in $result and break the regexp below
result=$(curl -s www.whatismyip.org)

if [[ $result =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
        echo "yes"
else
        echo "no"
fi
The nice thing is that since this gets information from online and checks it, you can be assured you are online whenever it says yes. It has to get an IP address back from that site, otherwise it prints no (the downfall being that if whatismyip goes down, well, you are in the same boat, but that is why they have redundant servers).

Last edited by bassmadrigal; 10-14-2008 at 03:59 PM.
 
Old 10-14-2008, 04:51 PM   #27
raconteur
Member
 
Registered: Dec 2007
Location: Slightly left of center
Distribution: slackware
Posts: 276
Blog Entries: 2

Rep: Reputation: 44
Quote:
Originally Posted by bassmadrigal View Post
[...]Keep in mind, though, that whatismyip.org doesn't like to be queried more than 3 times in 60 seconds.[...]
Most time servers don't mind frequent queries.
 
Old 10-14-2008, 05:07 PM   #28
bassmadrigal
Member
 
Registered: Nov 2003
Location: Newport News, VA
Distribution: Slackware
Posts: 246

Rep: Reputation: 52
Quote:
Originally Posted by raconteur View Post
Most time servers don't mind frequent queries.
Well, whatismyip isn't a time server, and this way you get something back that proves you are connected to the internet.
 
  

