Old 12-09-2020, 08:59 AM   #1
camerabambai
Member (Registered: Mar 2010, Distribution: Slackware, Posts: 408)
A little difficult: real ZFS raid


On Solaris 11 it is really easy to obtain a real ZFS RAID: you can lose one hard disk and the system still boots.

This is what I did on Solaris 11:


a)Check the partitions
Code:
zpool status rpool
format
b)Copy the partition table to the second disk
Code:
prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
c)I attach the second disk to the first for mirroring
Code:
zpool attach -f rpool c0t0d0s0 c0t1d0s0
d)Wait until resilvering is complete
Code:
zpool status
e)Configure the boot loader
Code:
#on x86 with Solaris <= 11.3
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
#on SPARC
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0
#on x86 with Solaris >= 11.4
bootadm install-bootloader /dev/rdsk/c1t1d0s0
f)Reboot: everything works fine.
If you remove one of the disks, the system boots without problems.

Now back on Linux: it is easy (but long) to install ZFS on Debian starting from a live CD, and I have successfully booted from a raidz(!!), with only one problem: the bootpool and rootpool are on raidz, so they are redundant.
But the EFI partition was a single disk! So if you lose that disk you have to boot from a live CD, recreate the EFI partition, recreate the GRUB structures, etc. Very boring, and not as immediate as Solaris.
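To give an idea, that manual recovery looks roughly like this (a sketch only, with illustrative device names; here /dev/sdb survives and /dev/sda is the replacement):
Code:
# boot a live CD with ZFS support
sgdisk -R /dev/sda /dev/sdb           # copy the partition table from the surviving disk
sgdisk -G /dev/sda                    # give the new disk fresh GUIDs
mkdosfs -F 32 -s 1 -n EFI /dev/sda1   # recreate the EFI system partition
zpool import -f -R /mnt rpool         # then mount and reinstall GRUB:
mount /dev/sda1 /mnt/boot/efi
chroot /mnt grub-install --target=x86_64-efi --efi-directory=/boot/efi /dev/sda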

So I had an idea: try booting from a RAID EFI partition. I don't know if it is possible; here is what I did.

a)The system is a virtual machine: Debian Linux host, Debian Linux guest.
I start with the Xfce live CD and two qcow2 disks. RAM is 4 GB for the guest.

b)I start the live CD and install ssh to control it from a remote shell
Code:
sudo su -
apt -y update
apt -y install openssh-server vim
systemctl start ssh
echo -e 'password\npassword\n' | passwd user   # set a password for the live CD user
c)from my shell..
Code:
ssh -l user debianzfs
sudo su -
d)On the live CD we install git, python3-pip, and the requirements for ZFS
Code:
apt -y install git python3-pip
git clone https://github.com/openzfs/openzfs-docs
cd openzfs-docs
pip3 install -r docs/requirements.txt
PATH=$HOME/.local/bin:$PATH
e)Edit the apt sources
Code:
cat > /etc/apt/sources.list <<EOF
deb http://deb.debian.org/debian buster main contrib
deb http://deb.debian.org/debian buster-backports main contrib
EOF
f)install zfs on livecd
Code:
apt -y update
apt install -y cryptsetup
apt install -y debootstrap gdisk dkms dpkg-dev linux-headers-$(uname -r)
apt install -y -t buster-backports --no-install-recommends zfs-dkms mdadm gdisk
modprobe zfs
apt install -y -t buster-backports zfsutils-linux
g)setup the disks
Code:
apt -y install mdadm
mdadm --stop /dev/md*
mdadm --zero-superblock /dev/vda*... etc
Code:
wipefs -a /dev/vda
wipefs -a /dev/vdb

h)Partitioning (I use type FD00 here; the system also failed to boot with EF00)
Code:
sgdisk -n 1:2048:+512M -t1:FD00 -c1:EFI /dev/vda
sgdisk -n 2:`sgdisk -F /dev/vda`:+1G -t2:BF01 -c2:BOOTPOOL /dev/vda
sgdisk -n 3:`sgdisk -F /dev/vda`:0 -t3:BF00 -c3:ROOTPOOL /dev/vda

sgdisk -n 1:2048:+512M -t1:FD00 -c1:EFI2 /dev/vdb
sgdisk -n 2:`sgdisk -F /dev/vdb`:+1G -t2:BF01 -c2:BOOTPOOL2 /dev/vdb
sgdisk -n 3:`sgdisk -F /dev/vdb`:0 -t3:BF00 -c3:ROOTPOOL2 /dev/vdb
i)creating the bootpool
Code:
zpool create -f -o ashift=12 -d \
    -o feature@async_destroy=enabled \
    -o feature@bookmarks=enabled \
    -o feature@embedded_data=enabled \
    -o feature@empty_bpobj=enabled \
    -o feature@enabled_txg=enabled \
    -o feature@extensible_dataset=enabled \
    -o feature@filesystem_limits=enabled \
    -o feature@hole_birth=enabled \
    -o feature@large_blocks=enabled \
    -o feature@lz4_compress=enabled \
    -o feature@spacemap_histogram=enabled \
    -o feature@zpool_checkpoint=enabled \
    -O acltype=posixacl -O canmount=off -O compression=lz4 \
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \
    -O mountpoint=/boot -R /mnt \
    bpool mirror /dev/disk/by-partlabel/BOOTPOOL /dev/disk/by-partlabel/BOOTPOOL2
m)Check the pool status
Code:
zpool list
zpool status
o)now I create the rootpool
Code:
zpool create \
    -o ashift=12 \
    -O acltype=posixacl -O canmount=off -O compression=lz4 \
    -O dnodesize=auto -O normalization=formD -O relatime=on \
    -O xattr=sa -O mountpoint=/ -R /mnt rpool mirror /dev/disk/by-partlabel/ROOTPOOL /dev/disk/by-partlabel/ROOTPOOL2
p)creating zfs filesets
Code:
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=off -o mountpoint=none bpool/BOOT
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/debian
zfs mount rpool/ROOT/debian
zfs create -o mountpoint=/boot bpool/BOOT/debian
zfs create rpool/home
zfs create -o mountpoint=/root rpool/home/root
zfs create -o canmount=off rpool/var
zfs create -o canmount=off rpool/var/lib
zfs create rpool/var/log
zfs create rpool/var/spool
Code:
zfs create -o com.sun:auto-snapshot=false rpool/var/cache
zfs create -o com.sun:auto-snapshot=false rpool/var/tmp
chmod 1777 /mnt/var/tmp
Code:
zfs create                                 rpool/opt
zfs create -o canmount=off                 rpool/usr
zfs create                                 rpool/usr/local
zfs create                                 rpool/var/snap
zfs create -o com.sun:auto-snapshot=false  rpool/var/lib/nfs
zfs create -o com.sun:auto-snapshot=false  rpool/tmp
chmod 1777 /mnt/tmp
s)Install Debian on /mnt
Code:
debootstrap buster /mnt
t)I have working DNS here, so there is no need to edit /etc/hosts, /etc/hostname, and so on.
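(For reference, the usual step from the openzfs HOWTO would be something along these lines, using the debianzfs hostname from above:)
Code:
echo debianzfs > /mnt/etc/hostname
echo "127.0.1.1 debianzfs" >> /mnt/etc/hosts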

u)configure apt/sources on mnt
Code:
cat > /mnt/etc/apt/sources.list<<EOF
deb http://deb.debian.org/debian buster main contrib
deb-src http://deb.debian.org/debian buster main contrib

deb http://security.debian.org/debian-security buster/updates main contrib
deb-src http://security.debian.org/debian-security buster/updates main contrib

deb http://deb.debian.org/debian buster-updates main contrib
deb-src http://deb.debian.org/debian buster-updates main contrib
EOF

cat > /mnt/etc/apt/sources.list.d/buster-backports.list <<EOF
deb http://deb.debian.org/debian buster-backports main contrib
deb-src http://deb.debian.org/debian buster-backports main contrib
EOF

cat > /mnt/etc/apt/preferences.d/90_zfs <<EOF
Package: libnvpair1linux libuutil1linux libzfs2linux libzfslinux-dev libzpool2linux python3-pyzfs pyzfs-doc spl spl-dkms zfs-dkms zfs-dracut zfs-initramfs zfs-test zfsutils-linux zfsutils-linux-dev zfs-zed
Pin: release n=buster-backports
Pin-Priority: 990
EOF
v)chroot!
Code:
mount --rbind /dev  /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys  /mnt/sys
chroot /mnt  bash --login
z)In the chroot I install some packages

Code:
ln -s /proc/self/mounts /etc/mtab
apt -y update
apt -y install console-setup locales vim bash-completion network-manager apt-file
z1)I reconfigure some of them..
Code:
dpkg-reconfigure locales tzdata keyboard-configuration console-setup
z2)I install zfs on chroot
Code:
apt install --yes dpkg-dev linux-headers-amd64 linux-image-amd64 mdadm
apt install --yes zfs-initramfs
echo REMAKE_INITRD=yes > /etc/dkms/zfs.conf
z3)Install GRUB and mdadm, create the EFI RAID1, and configure fstab
Code:
apt -y install dosfstools mdadm
mdadm --stop /dev/md127
mdadm --create /dev/md0 --level=1 --name debianzfs:0 --raid-devices=2 /dev/disk/by-partlabel/EFI /dev/disk/by-partlabel/EFI2 
mkdosfs -F 32 -s 1 -n EFI /dev/md0
mkdir /boot/efi
echo /dev/disk/by-label/EFI \
   /boot/efi vfat nofail,x-systemd.device-timeout=1 0 1 >> /etc/fstab
mount /boot/efi
apt install --yes grub-efi-amd64 shim-signed
z4)I remove os-prober
Code:
dpkg --purge os-prober
z5)I set the root password on chroot
Code:
echo -e 'yourpass\nyourpass\n'|passwd root
z6)I create the zfs-import service for systemd
Code:
cat >/etc/systemd/system/zfs-import-bpool.service <<EOF
[Unit]
DefaultDependencies=no
Before=zfs-import-scan.service
Before=zfs-import-cache.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/zpool import -N -o cachefile=none bpool

[Install]
WantedBy=zfs-import.target
EOF

systemctl enable zfs-import-bpool.service

z8)We need some extra initramfs modules for RAID and EFI
Code:
vim /etc/initramfs-tools/modules
raid1
fat
vfat
z9)Reconfigure GRUB, the initramfs, and mdadm.conf. First check that grub-probe /boot returns zfs.

Code:
vim /etc/default/grub
GRUB_CMDLINE_LINUX="rd.md.uuid=4e9080bf:37029179:6f75e264:95e69e28 root=ZFS=rpool/ROOT/debian mds=full,nosmt mitigations=auto console=tty1 console=ttyS0,115200 quiet"
GRUB_TERMINAL="console serial"
Code:
vim /etc/mdadm/mdadm.conf
ARRAY /dev/md/0 metadata=1.2 UUID=4e9080bf:37029179:6f75e264:95e69e28 name=debianzfs:0

update-initramfs -c -k all
update-grub
z10)Now install GRUB on both disks of the RAID1

Code:
for i in vda vdb ;do grub-install --removable --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=debian --recheck --no-floppy /dev/$i;done
z11)configure zed

Code:
mkdir /etc/zfs/zfs-list.cache
touch /etc/zfs/zfs-list.cache/bpool
touch /etc/zfs/zfs-list.cache/rpool
ln -s /usr/lib/zfs-linux/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d
zed -F &
z12)Set canmount and check that the zfs-list.cache files get populated
Code:
zfs set canmount=on bpool/BOOT/debian
zfs set canmount=noauto rpool/ROOT/debian

cat /etc/zfs/zfs-list.cache/bpool
cat /etc/zfs/zfs-list.cache/rpool
z13)ONLY if the cat commands in z12 return something (not empty lines), terminate zed
Code:
%        # bring the backgrounded zed to the foreground
CTRL+C   # then stop it
z14)remove /mnt from the cache files
Code:
sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/*
z15)check that /mnt is removed

Code:
cat /etc/zfs/zfs-list.cache/bpool
cat /etc/zfs/zfs-list.cache/rpool
z16)Install openssh-server in the chroot

Code:
apt -y install openssh-server
z17)Create the startup.nsh

Code:
cat > /boot/efi/startup.nsh <<EOF
\EFI\debian\grubx64.efi
EOF
z18)Finally we can reboot
Code:
exit
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
    xargs -i{} umount -lf {}
zpool export -a
reboot
But the result is not good:

https://images2.imgbox.com/cb/f2/SR7SBZCV_o.png

What, in your opinion, doesn't work? Is it that the EFI firmware cannot read md0?
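(One suspect I have not verified: mdadm's default metadata 1.2 puts the RAID superblock at the start of the partition, so the firmware would not see a plain FAT filesystem there. Metadata 1.0 stores the superblock at the end and leaves the FAT readable; a variant that might behave differently:)
Code:
# untested: metadata 1.0 keeps the superblock at the END of the partition
mdadm --create /dev/md0 --level=1 --metadata=1.0 --name=debianzfs:0 \
      --raid-devices=2 /dev/disk/by-partlabel/EFI /dev/disk/by-partlabel/EFI2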

Last edited by camerabambai; 12-09-2020 at 09:04 AM.
 
Old 12-09-2020, 11:11 AM   #2
camerabambai
Member (Original Poster)
Workaround found: instead of EFI, use legacy BIOS booting. It is easier, with no need for mdadm or other complicated stuff. Tested just now: it boots fine without the primary disk.
Code:
zpool status
  pool: bpool
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
	invalid.  Sufficient replicas exist for the pool to continue
	functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-4J
  scan: none requested
config:

	NAME                    STATE     READ WRITE CKSUM
	bpool                   DEGRADED     0     0     0
	  mirror-0              DEGRADED     0     0     0
	    921713157437892977  UNAVAIL      0     0     0  was /dev/disk/by-partlabel/BOOTPOOL
	    BOOTPOOL2           ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
	invalid.  Sufficient replicas exist for the pool to continue
	functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-4J
  scan: none requested
config:

	NAME                      STATE     READ WRITE CKSUM
	rpool                     DEGRADED     0     0     0
	  mirror-0                DEGRADED     0     0     0
	    10688473980588619828  UNAVAIL      0     0     0  was /dev/disk/by-partlabel/ROOTPOOL
	    ROOTPOOL2             ONLINE       0     0     0

errors: No known data errors
root@debian:~#

If you are interested, here is the guide.

a)The system is a virtual machine: Debian Linux host, Debian Linux guest.
I start with the Xfce live CD and two qcow2 disks. RAM is 4 GB for the guest.

b)I start the live CD and install ssh to control it from a remote shell
Code:
sudo su -
apt -y update
apt -y install openssh-server vim
systemctl start ssh
c)from a remote shell connect to the livecd
Code:
ssh -l user debianzfs
sudo su -
d)We install git and python3-pip, then the requirements for ZFS, on the live CD
Code:
apt -y install git python3-pip
git clone https://github.com/openzfs/openzfs-docs
cd openzfs-docs
pip3 install -r docs/requirements.txt
PATH=$HOME/.local/bin:$PATH
e)editing the apt sources in livecd
Code:
cat > /etc/apt/sources.list <<EOF
deb http://deb.debian.org/debian buster main contrib
deb http://deb.debian.org/debian buster-backports main contrib
EOF
f)install zfs on livecd
Code:
apt -y update
apt install -y cryptsetup
apt install -y debootstrap gdisk dkms dpkg-dev linux-headers-$(uname -r)
apt install -y -t buster-backports --no-install-recommends zfs-dkms mdadm gdisk
modprobe zfs
apt install -y -t buster-backports zfsutils-linux
g)We need mdadm to remove any old md RAID metadata
Code:
apt -y install mdadm
mdadm --stop /dev/md*
mdadm --zero-superblock /dev/vda*... etc
To remove other filesystem signatures, use wipefs:
Code:
wipefs -a /dev/vda
wipefs -a /dev/vdb
h)Partition the two disks for the mirror; this time we use a BIOS boot partition
Code:
sgdisk -a1 -n1:24K:+1000K -t1:EF02 /dev/vda
sgdisk -n 2:`sgdisk -F /dev/vda`:+1G -t2:BF01 -c2:BOOTPOOL /dev/vda
sgdisk -n 3:`sgdisk -F /dev/vda`:0 -t3:BF00 -c3:ROOTPOOL /dev/vda

sgdisk -a1 -n1:24K:+1000K -t1:EF02 /dev/vdb
sgdisk -n 2:`sgdisk -F /dev/vdb`:+1G -t2:BF01 -c2:BOOTPOOL2 /dev/vdb
sgdisk -n 3:`sgdisk -F /dev/vdb`:0 -t3:BF00 -c3:ROOTPOOL2 /dev/vdb
As a sanity check, verify that each disk now has three partitions
Code:
fdisk -l /dev/vda
fdisk -l /dev/vdb
i)create the bootpool
Code:
zpool create -f -o ashift=12 -d \
    -o feature@async_destroy=enabled \
    -o feature@bookmarks=enabled \
    -o feature@embedded_data=enabled \
    -o feature@empty_bpobj=enabled \
    -o feature@enabled_txg=enabled \
    -o feature@extensible_dataset=enabled \
    -o feature@filesystem_limits=enabled \
    -o feature@hole_birth=enabled \
    -o feature@large_blocks=enabled \
    -o feature@lz4_compress=enabled \
    -o feature@spacemap_histogram=enabled \
    -o feature@zpool_checkpoint=enabled \
    -O acltype=posixacl -O canmount=off -O compression=lz4 \
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \
    -O mountpoint=/boot -R /mnt \
    bpool mirror /dev/disk/by-partlabel/BOOTPOOL /dev/disk/by-partlabel/BOOTPOOL2
m)check
Code:
zpool list
zpool status
n)Create the rootpool; using partition labels is nice and easy
Code:
zpool create -f \
    -o ashift=12 \
    -O acltype=posixacl -O canmount=off -O compression=lz4 \
    -O dnodesize=auto -O normalization=formD -O relatime=on \
    -O xattr=sa -O mountpoint=/ -R /mnt rpool mirror /dev/disk/by-partlabel/ROOTPOOL /dev/disk/by-partlabel/ROOTPOOL2
o)create the filesets
Code:
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=off -o mountpoint=none bpool/BOOT
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/debian
zfs mount rpool/ROOT/debian
zfs create -o mountpoint=/boot bpool/BOOT/debian
zfs create rpool/home
zfs create -o mountpoint=/root rpool/home/root
zfs create -o canmount=off rpool/var
zfs create -o canmount=off rpool/var/lib
zfs create rpool/var/log
zfs create rpool/var/spool
p)create other filesets
Code:
zfs create -o com.sun:auto-snapshot=false rpool/var/cache
zfs create -o com.sun:auto-snapshot=false rpool/var/tmp
chmod 1777 /mnt/var/tmp
zfs create                                 rpool/opt
zfs create -o canmount=off                 rpool/usr
zfs create                                 rpool/usr/local
zfs create                                 rpool/var/snap
zfs create -o com.sun:auto-snapshot=false  rpool/var/lib/nfs
zfs create -o com.sun:auto-snapshot=false  rpool/tmp
chmod 1777 /mnt/tmp
r)Install Debian on the ZFS filesystems mounted on /mnt
Code:
debootstrap buster /mnt
s)I have working DNS on my test lab. You can use NetworkManager (installed later) or edit
/etc/network/interfaces for a classical Debian configuration, as sketched below.
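A minimal static stanza for the classical configuration could look like this (the interface name and addresses are just examples):
Code:
cat >> /mnt/etc/network/interfaces <<EOF
auto ens3
iface ens3 inet static
    address 192.168.122.50
    netmask 255.255.255.0
    gateway 192.168.122.1
EOF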

t)configure apt/sources
Code:
cat > /mnt/etc/apt/sources.list<<EOF
deb http://deb.debian.org/debian buster main contrib
deb-src http://deb.debian.org/debian buster main contrib

deb http://security.debian.org/debian-security buster/updates main contrib
deb-src http://security.debian.org/debian-security buster/updates main contrib

deb http://deb.debian.org/debian buster-updates main contrib
deb-src http://deb.debian.org/debian buster-updates main contrib
EOF

cat > /mnt/etc/apt/sources.list.d/buster-backports.list <<EOF
deb http://deb.debian.org/debian buster-backports main contrib
deb-src http://deb.debian.org/debian buster-backports main contrib
EOF

cat > /mnt/etc/apt/preferences.d/90_zfs <<EOF
Package: libnvpair1linux libuutil1linux libzfs2linux libzfslinux-dev libzpool2linux python3-pyzfs pyzfs-doc spl spl-dkms zfs-dkms zfs-dracut zfs-initramfs zfs-test zfsutils-linux zfsutils-linux-dev zfs-zed
Pin: release n=buster-backports
Pin-Priority: 990
EOF
u)Chroot into /mnt
Code:
mount --rbind /dev  /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys  /mnt/sys
chroot /mnt  bash --login
v)Install some packages in the chroot; network-manager is not a requirement, but I use it
Code:
ln -s /proc/self/mounts /etc/mtab
apt -y update
apt -y install console-setup locales vim bash-completion network-manager apt-file
z1)reconfigure some packages
Code:
dpkg-reconfigure locales tzdata keyboard-configuration console-setup
z2)install zfs tools on chroot
Code:
apt install --yes dpkg-dev linux-headers-amd64 linux-image-amd64
apt install --yes zfs-initramfs
echo REMAKE_INITRD=yes > /etc/dkms/zfs.conf
z3)install grub on chroot
Code:
apt install --yes grub-pc
z4)remove os-prober
Code:
dpkg --purge os-prober
z5)set root password
Code:
echo -e 'yourpass\nyourpass\n'|passwd root
z6)create systemd service to import the zfs pools
Code:
cat >/etc/systemd/system/zfs-import-bpool.service <<EOF
[Unit]
DefaultDependencies=no
Before=zfs-import-scan.service
Before=zfs-import-cache.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/zpool import -N -o cachefile=none bpool

[Install]
WantedBy=zfs-import.target
EOF

systemctl enable zfs-import-bpool.service
z8)configure grub and initramfs
Code:
grub-probe /boot
This must return zfs. Then edit /etc/default/grub:
Code:
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/debian mds=full,nosmt mitigations=auto console=tty1 console=ttyS0,115200 quiet"
GRUB_TERMINAL="console serial"
Code:
update-initramfs -c -k all
update-grub
z9)install grub on both disks
Code:
for i in vda vdb ;do grub-install /dev/$i;done
z10)configure zed
Code:
mkdir /etc/zfs/zfs-list.cache
touch /etc/zfs/zfs-list.cache/bpool
touch /etc/zfs/zfs-list.cache/rpool
ln -s /usr/lib/zfs-linux/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d
zed -F &
z11)Check whether the zfs-list.cache files are updated
Code:
cat /etc/zfs/zfs-list.cache/bpool
cat /etc/zfs/zfs-list.cache/rpool
If they return nothing, run these commands:
Code:
zfs set canmount=on bpool/BOOT/debian
zfs set canmount=noauto rpool/ROOT/debian
wait 10 seconds and retry
Code:
cat /etc/zfs/zfs-list.cache/rpool
cat /etc/zfs/zfs-list.cache/bpool
They must now return some lines, not empty output.

z12)stop zed
Code:
%        # bring the backgrounded zed to the foreground
CTRL+C   # then stop it
z13)remove "/mnt" from cache files
Code:
sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/*
z14)Check: /mnt must be gone from the paths (/home is OK, /mnt/home is not)
Code:
cat /etc/zfs/zfs-list.cache/bpool
cat /etc/zfs/zfs-list.cache/rpool
z15)install openssh-server on chroot
Code:
apt -y install openssh-server
z16)Exit the chroot, unmount everything, and reboot

Code:
exit
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
    xargs -i{} umount -lf {}
zpool export -a
reboot
z17)Connect to the virtual machine using virsh console --domain nameofdomain.
The system must boot with both disks, and also with only one of them (simulating a
disk failure).
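To then replace the failed disk, the procedure mirrors the Solaris steps from the first post. A sketch, assuming the replacement shows up as /dev/vda (the GUIDs come from the zpool status output above):
Code:
# recreate the three partitions on the new disk with the sgdisk commands from step h), then:
zpool replace bpool 921713157437892977 /dev/disk/by-partlabel/BOOTPOOL
zpool replace rpool 10688473980588619828 /dev/disk/by-partlabel/ROOTPOOL
grub-install /dev/vda   # reinstall the boot loader on the new disk
zpool status            # wait for the resilver to complete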

z18)Don't forget to create the swap after the first login
Code:
zfs create -V 8G rpool/swap
mkswap -L swap /dev/zvol/rpool/swap
echo "/dev/zvol/rpool/swap swap swap defaults 0 0" >> /etc/fstab
swapon -a
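(The openzfs HOWTO suggests a few extra properties on the swap zvol so it behaves better under memory pressure; a variant along those lines, worth double-checking against the guide:)
Code:
zfs create -V 8G -b $(getconf PAGESIZE) -o compression=zle \
    -o logbias=throughput -o sync=always \
    -o primarycache=metadata -o secondarycache=none \
    -o com.sun:auto-snapshot=false rpool/swap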

Last edited by camerabambai; 12-09-2020 at 11:25 AM.
 
  


Tags: raid1, solved, zfs


