LinuxQuestions.org

LinuxQuestions.org (http://www.linuxquestions.org/questions/index.php)
-   Slackware (http://www.linuxquestions.org/questions/forumdisplay.php?f=14)
-   -   Slackware64-Current Bug in lxc 0.7.5 (http://www.linuxquestions.org/questions/showthread.php?t=4175422987)

Jack128 08-20-2012 12:21 AM

Slackware64-Current Bug in lxc 0.7.5
 
Hello,

I just installed a fresh Slackware64 14RC2 on a spare system for testing
Linux Containers. I built a container and got it running on an older
Slackware install with lxc 0.7.4; everything runs smoothly.

So I copied this container over to the spare system with 14RC2 installed
and tried the same with lxc 0.7.5. There I get:

Code:

lxc-start -n system1
lxc-start: Device or resource busy - failed to remove previous cgroup '/cgroup/system1'
lxc-start: failed to spawn 'system1'
lxc-start: No such file or directory - failed to remove cgroup '/cgroup/system1'

Code:

mount
/dev/sda1 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
/dev/sda4 on /home type ext4 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /cgroup type cgroup (rw)

I have tried this both with the stock huge kernel (3.2.27) and with a custom
3.4.9; the situation is the same on both. So I recompiled lxc 0.7.4 on the
14RC2 system and downgraded to it. Now the container boots up properly.

Hope this helps.
Greetings Jack.

Alien Bob 08-20-2012 04:54 PM

I could not reproduce this. Perhaps it is an issue with having created the container with an older version of LXC? I tested rather simplistically, since I have no real knowledge of using containers and no real-life template:

Code:

# mkdir /dev/mqueue
# mount -t mqueue none /dev/mqueue
# lxc-create -n foo -t sshd -f ~/lxc.conf
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /var/lib/lxc/foo/rootfs/etc/ssh/ssh_host_rsa_key.
Your public key has been saved in /var/lib/lxc/foo/rootfs/etc/ssh/ssh_host_rsa_key.pub.
The key fingerprint is:
5b:d0:5b:b0:13:6c:38:42:3d:2f:00:09:4f:b7:b6:38 root@alienteepee
The key's randomart image is:
 <lots of initialization stuff snipped>
'sshd' template installed
'foo' created

# lxc-start -n foo
/usr/lib64/lxc/lxc-init is /usr/lib64/lxc/lxc-init
sshd is /usr/sbin/sshd

Eric

Jack128 08-22-2012 12:48 AM

I've tried it with 0.7.5 again and tested your procedure.

Code:

$ lxc-create -n foo -t sshd -f ~/lxc.conf
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /var/lib/lxc/foo/rootfs/etc/ssh/ssh_host_rsa_key.
Your public key has been saved in /var/lib/lxc/foo/rootfs/etc/ssh/ssh_host_rsa_key.pub.
The key fingerprint is:
b5:ee:00:84:a5:60:eb:6d:29:ac:7a:29:1a:8a:89:a2 root@Slackware-Node
The key's randomart image is:
<-snip->
'sshd' template installed
'foo' created

$ lxc-start -n foo
lxc-start: Device or resource busy - failed to remove previous cgroup '/cgroup/foo'
lxc-start: failed to spawn 'foo'
lxc-start: No such file or directory - failed to remove cgroup '/cgroup/foo'

Code:

$ mount
/dev/sda1 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
/dev/sda4 on /home type ext4 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /cgroup type cgroup (rw)
none on /dev/mqueue type mqueue (rw)

Code:

$ cat ~/lxc.conf
lxc.utsname = foo

lxc.mount = /var/lib/lxc/foo/rootfs/etc/fstab
lxc.rootfs = /var/lib/lxc/foo/rootfs

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.hwaddr = 00:aa:11:bb:22:ff
lxc.network.ipv4 = 10.0.1.3/24
lxc.network.name = eth0

lxc.tty = 4
lxc.pts = 1024
lxc.rootfs = /var/lib/lxc/foo/rootfs

lxc.cgroup.devices.deny = a
# /dev/null and zero
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
# consoles
lxc.cgroup.devices.allow = c 5:1 rwm
lxc.cgroup.devices.allow = c 5:0 rwm
lxc.cgroup.devices.allow = c 4:0 rwm
lxc.cgroup.devices.allow = c 4:1 rwm
# /dev/{,u}random
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 136:* rwm
lxc.cgroup.devices.allow = c 5:2 rwm
# rtc
lxc.cgroup.devices.allow = c 254:0 rwm

# we don't trust root user in the container, better safe than sorry.
# comment out only if you know what you're doing.
lxc.cap.drop = sys_module mknod
lxc.cap.drop = mac_override  kill sys_time
lxc.cap.drop = setfcap setpcap sys_boot

# if you want to be even more restrictive with your container's root
# user comment the three lines above and uncomment the following one
# lxc.cap.drop=sys_admin

I've even tried with a minimal config and get the same error:
Code:

lxc.utsname = foo

lxc.mount = /var/lib/lxc/foo/rootfs/etc/fstab
lxc.rootfs = /var/lib/lxc/foo/rootfs

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.hwaddr = 00:aa:11:bb:22:ff
lxc.network.ipv4 = 10.0.1.3/24
lxc.network.name = eth0

The same happens in a freshly installed virtual machine.

audriusk 08-22-2012 03:50 AM

Can't reproduce this on lxc 0.7.5 either, with a container created using 0.7.4, although I don't set up cgroups and capabilities in its config file, since it was created for testing purposes and I'm the only one using it.

Jack128 08-23-2012 12:39 AM

Without a cgroup mountpoint I can start it with 0.7.5 too. Could you please test this with a cgroup mountpoint as well?

audriusk 08-24-2012 03:45 PM

Yep, when I did
Code:

mkdir /cgroup
mount -t cgroup none /cgroup

I got the same error message as you did.

I'll try to investigate this further tomorrow; I'm feeling too tired and sleepy right now, time to go to bed.

audriusk 08-25-2012 06:02 AM

I recalled there was something cgroups-related in /etc/rc.d/rc.S, and there sure is:
Code:

# Mount Control Groups filesystem interface:
if grep -wq cgroup /proc/filesystems ; then
  if [ -d /sys/fs/cgroup ]; then
    mount -t cgroup cgroup /sys/fs/cgroup
  else
    mkdir -p /dev/cgroup
    mount -t cgroup cgroup /dev/cgroup
  fi
fi

So Slackware already mounts the cgroup filesystem; there's no need to mount it again on /cgroup. That could be the reason it's failing in your case.
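In case it helps future readers, the check boils down to looking for a duplicate cgroup mount and removing it. A sketch using the paths from this thread (run as root):

```shell
# List every mountpoint where a cgroup filesystem is mounted.
# On Slackware 14, rc.S already provides one at /sys/fs/cgroup
# (or /dev/cgroup on older setups):
awk '$3 == "cgroup" { print $2 }' /proc/mounts

# If a second entry such as /cgroup shows up, unmount it so that
# lxc-start only sees the mount rc.S created:
umount /cgroup

# Also remove the corresponding line from /etc/fstab (if any),
# so the duplicate mount does not come back on reboot.
```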

Jack128 08-25-2012 12:53 PM

Wow, after unmounting the /cgroup mountpoint the container starts properly.
And in /sys/fs/cgroup everything looks OK; a new "system1" directory was created and populated.

Thank you very much audriusk and Alien Bob for your help!
Problem solved.
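For anyone landing here later, here's a small sketch (my own, not part of the lxc tools) that warns about the duplicate-mount situation described above:

```shell
#!/bin/sh
# Count mounted cgroup filesystems; more than one was exactly the
# situation that made lxc-start 0.7.5 fail in this thread.
count=$(awk '$3 == "cgroup"' /proc/mounts | wc -l)
if [ "$count" -gt 1 ]; then
  echo "warning: $count cgroup mounts found, lxc 0.7.5 may fail:"
  awk '$3 == "cgroup" { print "  " $2 }' /proc/mounts
fi
```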

