Linux - Software
This forum is for Software issues. Having a problem installing a new program? Want to know which application is best for the job? Post your question in this forum.
07-31-2014, 08:12 PM | #1 | LQ Newbie | Registered: Jul 2014 | Posts: 8
Problems with properly loading device partitions, and LVM devices
I have Fedora 20, 64-bit, installed on LVM (separate logical volumes for /, /home, swap, and /boot; one volume group; two disks: a 1TB Seagate 7200rpm HDD and a 250GB Samsung 840 EVO SSD). I have GRUB installed on both disks and was previously able to boot into the system from either one. I have Windows 7 installed on the SSD, and an NTFS partition for Windows storage on the HDD. My /boot, /, and swap are all on the SSD's LVM partition (sda1), and /home is on the HDD (sdb1). I have a third, unused hard drive plugged in as sdc. Using SeaTools on Windows, I have tried upgrading firmware, running disk self-checks, etc., and the disks are in working order. My problem is that, when booting, my system hangs on the systemd device unit
dev-disk-by\x2duuid-(UUID of disk).device
and sits there for a minute until I get a recovery console. fdisk -l lists both disks and all of their partitions just fine. But if I attempt to run
fdisk /dev/sdb1
it says "error, no device /dev/sdb1", which is odd, since
fdisk /dev/sdb
lists 3 partitions. If I then run
sfdisk -R
then attempt to run
fdisk /dev/sdb1
again, it works just fine. My /etc/fstab uses the partitions' UUIDs to mount the filesystems at boot. So I see why the boot fails: LVM does not see /dev/sdb1 at boot, so it does not activate that part of the LVM (it gives me errors about a missing disk when running any commands, such as pvdisplay). My question is why my system is not loading /dev/sdb[1-3]. I've had this system for several years and do not wish to do any type of reinstall. Also, I noticed that my /dev/sdb disk is named ST31000528AS in the BIOS, and blkid shows that ST31000528AS, ST31000528ASp1, ST31000528ASp2, etc. are listed as dm-0, dm-1, and so on. Is the device mapper somehow "stealing" the hard drive?
08-01-2014, 01:52 PM | #2 | Senior Member | Registered: Aug 2009 | Distribution: Rocky Linux | Posts: 4,815
Quote:
Originally Posted by splib
fdisk -l lists both disks and all of their partitions just fine. But if I were to attempt to use
fdisk /dev/sdb1
it would say "error, no device /dev/sdb1" which is odd, since
fdisk /dev/sdb
lists 3 partitions. If I then run
sfdisk -R
then attempt to run
fdisk /dev/sdb1
again, it works just fine.
Did you really mean to say "fdisk" in those two places? fdisk is run on the whole drive, not on a partition. If you somehow did manage to install another partition table inside partition sdb1, all bets are off regarding what, if anything, will work.
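As a quick sanity check before pointing fdisk at anything, you can test whether the kernel has actually registered a block-device node for the partition. A minimal sketch; the device paths here are examples only, not taken from the thread:

```shell
#!/bin/sh
# Report whether a path is a block device the kernel knows about.
# Substitute your own paths (/dev/sdb1, etc.); this one deliberately
# does not exist, to show the negative case.
check_dev() {
    if [ -b "$1" ]; then
        echo "$1: block device present"
    else
        echo "$1: no such block device"
    fi
}
check_dev /dev/nosuchdisk1   # -> "/dev/nosuchdisk1: no such block device"
```

On a system like the OP's, `[ -b /dev/sdb1 ]` failing while `[ -b /dev/sdb ]` succeeds would confirm that only the partition nodes are missing, not the disk itself.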
08-05-2014, 09:07 AM | #3 | LQ Newbie (Original Poster) | Registered: Jul 2014 | Posts: 8
Hello,
My apologies for how scattered that post was, and for the long time to reply; I only just got access to another computer again. Running fdisk on the single partition was just to show that the device did not exist; I would not have saved the partition table even if the command HAD worked. But I think that this is now beside the point:
I have found that, despite the odd mappings (/dev/mapper/DISKNAMEp3 instead of /dev/sdb3), the devices are still usable by LVM. It appears that what has actually happened is that my LVM metadata has somehow become corrupted. I ran pvcreate --uuid --restorefile on the drive, and now it is no longer missing. However, lvs -o +devices lists the device for my lv_home as pvmove0, which is "missing", yet pvdisplay -m shows pvmove0 as being on my /dev/mapper/DISKNAMEp3 partition. But I have not pvmoved anything in a while, and the extents where pvmove0 is located should really belong to lv_home. I cannot run vgcfgrestore because it says that one PV is missing (despite both of my disks being read now), and I cannot run commands like pvmove because there is one PV missing (which is pvmove0).
08-05-2014, 09:53 AM | #4 | Senior Member | Registered: Aug 2009 | Distribution: Rocky Linux | Posts: 4,815
I've got no experience with pvmove, but this sounds like either a move that was never completed or else a mix of LVM metadata from while the pvmove was running, perhaps due to a "--restorefile" that referenced the wrong file. Perhaps if you would post the output from
Code:
grep '^description =' /etc/lvm/archive/*
and indicate which file you used for the "pvcreate --restorefile" it would help.
I fear that any "fix" that I suggest now might just make matters worse, but you might just need to run vgcfgrestore with the appropriate file to restore the LVM metadata. The manpage for pvcreate suggests that the "--restorefile" option just ensures that the volume reserves the same space for LVM metadata but does not actually restore that metadata to the drive.
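Since the archive files are plain text and sequentially numbered, the most recent archive mentioning a given operation can be pulled out with standard tools. A sketch, using a scratch directory with fabricated contents as a stand-in for /etc/lvm/archive:

```shell
#!/bin/sh
# Find the newest archive file whose description mentions "pvmove".
# For archives of the same VG, the zero-padded sequence number makes
# lexical sort order match chronological order.
dir=$(mktemp -d)
printf 'description = "Created *before* executing lvcreate ..."\n' > "$dir/vg_00001.vg"
printf 'description = "Created *before* executing pvmove ..."\n'  > "$dir/vg_00002.vg"
grep -l 'pvmove' "$dir"/*.vg | sort | tail -n 1    # prints the vg_00002.vg path
rm -rf "$dir"
```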
08-05-2014, 04:05 PM | #5 | LQ Newbie (Original Poster) | Registered: Jul 2014 | Posts: 8
Thank you for the help. I clearly have not had my head about me lately; I should have provided this already, and that first post really should not have been made. Anyway:
Code:
/etc/lvm/archive/vg_bryan_00010-629101172.vg:description = "Created *before* executing 'pvresize -v /dev/sda1'"
/etc/lvm/archive/vg_bryan_00011-1713428701.vg:description = "Created *before* executing 'pvmove /dev/sda1:377-1976 /dev/sdc3'"
/etc/lvm/archive/vg_bryan_00012-1845202801.vg:description = "Created *before* executing 'pvmove /dev/sda1:0-376 /dev/sdc3'"
/etc/lvm/archive/vg_bryan_00013-1621926969.vg:description = "Created *before* executing 'pvmove /dev/sdc3:13240-14839 /dev/sda1:0'"
/etc/lvm/archive/vg_bryan_00014-1877123404.vg:description = "Created *before* executing 'pvmove /dev/sdc3:13240-14839 /dev/sda1:0-1599'"
/etc/lvm/archive/vg_bryan_00015-382380061.vg:description = "Created *before* executing 'pvmove /dev/sdc3:32-407 /dev/sda1:1600-1975'"
/etc/lvm/archive/vg_bryan_00016-606168654.vg:description = "Created *before* executing 'pvmove /dev/sdc3:0-31 /dev/sda1:2322-2353'"
/etc/lvm/archive/vg_bryan_00017-630026241.vg:description = "Created *before* executing 'lvcreate -L 500M -n lv_boot vg_bryan'"
/etc/lvm/archive/vg_bryan_00018-452060913.vg:description = "Created *before* executing 'pvmove /dev/sdc3:0-15 /dev/sda1:2306-2321'"
/etc/lvm/archive/vg_bryan_00019-754123587.vg:description = "Created *before* executing 'lvremove /dev/vg_bryan/lv_boot'"
/etc/lvm/archive/vg_bryan_00020-500846505.vg:description = "Created *before* executing 'lvcreate -L 1000M -n lv_boot vg_bryan'"
/etc/lvm/archive/vg_bryan_00021-236603171.vg:description = "Created *before* executing 'pvmove /dev/sdc3:0-31 /dev/sda1:2299-2321'"
/etc/lvm/archive/vg_bryan_00022-1639843090.vg:description = "Created *before* executing 'pvmove /dev/sdc3:0-31 /dev/sda1:2291-2321'"
/etc/lvm/archive/vg_bryan_00023-520897391.vg:description = "Created *before* executing 'pvmove /dev/sdc3:0-31 /dev/sda1:2290-2321'"
/etc/lvm/archive/vg_bryan_00024-1523377253.vg:description = "Created *before* executing 'vgextend vg_bryan /dev/sdb1'"
/etc/lvm/archive/vg_bryan_00025-1636259526.vg:description = "Created *before* executing 'pvmove /dev/mapper/ST31000528AS_6VPFP747p3:440-13239 /dev/sdb1'"
/etc/lvm/archive/vg_bryan_00026-1293694134.vg:description = "Created *before* executing 'vgreduce --removemissing vg_bryan'"
/etc/lvm/archive/vg_bryan_00027-1888450000.vg:description = "Created *before* executing 'vgextend vg_bryan /dev/sdc3'"
/etc/lvm/archive/vg_bryan_00028-1836579185.vg:description = "Created *before* executing 'vgreduce --removemissing vg_bryan'"
/etc/lvm/archive/vg_bryan_00029-475335032.vg:description = "Created *before* executing 'vgreduce --removemissing vg_bryan'"
/etc/lvm/archive/vg_bryan_00030-1751079670.vg:description = "Created *before* executing 'vgreduce --removemissing vg_bryan --force'"
/etc/lvm/archive/vg_bryan_00031-1255756354.vg:description = "Created *before* executing 'vgreduce --removemissing vg_bryan'"
/etc/lvm/archive/vg_bryan_00032-1554294873.vg:description = "Created *before* executing 'vgreduce --removemissing vg_bryan --force'"
/etc/lvm/archive/vg_bryan_00033-1693842333.vg:description = "Created *before* executing 'vgreduce --removemissing vg_bryan --force'"
is what you were looking for. I already tried vgcfgrestore, but got the error
Code:
Cannot restore Volume Group vg_bryan with 1 PVs marked as missing.
which I assume is due to the PV "pvmove0" not being found.
And about those descriptions: at one point I was going to move my /home from the 1TB disk to the 750GB disk (the 750GB was /dev/sdb, the 1TB was /dev/sdc), but I cancelled that operation (numbers 00024 and 00025), and everything appeared normal afterwards... but I am going to guess that is what messed it up. 00026 and onwards all failed, as part of lv_home was on the "missing" disk.
The output for pvdisplay
Code:
--- Physical volume ---
PV Name /dev/mapper/ST31000528AS_6VPFP747p3
VG Name vg_bryan
PV Size 585.94 GiB / not usable 31.00 MiB
Allocatable yes
PE Size 32.00 MiB
Total PE 18749
Free PE 5949
Allocated PE 12800
PV UUID GGc1Ri-vPNZ-WJ9R-elyk-CdQ9-QGzq-Ce8yLD
--- Physical volume ---
PV Name /dev/sda1
VG Name vg_bryan
PV Size 73.56 GiB / not usable 0
Allocatable yes
PE Size 32.00 MiB
Total PE 2354
Free PE 314
Allocated PE 2040
PV UUID vp59JC-oT9V-QIyE-Heo2-1Xjn-e4kB-JPyt76
--- Physical volume ---
PV Name /dev/mapper/ST3750330AS_9QK23HVG
VG Name vg_bryan
PV Size 698.64 GiB / not usable 11.00 MiB
Allocatable yes
PE Size 32.00 MiB
Total PE 22356
Free PE 9556
Allocated PE 12800
PV UUID 3qj5FD-zXt1-9q2V-aHc0-WbNd-3AnC-3n19B3
and pvdisplay -m
Code:
PV Name /dev/mapper/ST31000528AS_6VPFP747p3
VG Name vg_bryan
PV Size 585.94 GiB / not usable 31.00 MiB
Allocatable yes
PE Size 32.00 MiB
Total PE 18749
Free PE 5949
Allocated PE 12800
PV UUID GGc1Ri-vPNZ-WJ9R-elyk-CdQ9-QGzq-Ce8yLD
--- Physical Segments ---
Physical extent 0 to 439:
FREE
Physical extent 440 to 13239:
Logical volume /dev/vg_bryan/pvmove0
Logical extents 0 to 12799
Physical extent 13240 to 18748:
FREE
--- Physical volume ---
PV Name /dev/sda1
VG Name vg_bryan
PV Size 73.56 GiB / not usable 0
Allocatable yes
PE Size 32.00 MiB
Total PE 2354
Free PE 314
Allocated PE 2040
PV UUID vp59JC-oT9V-QIyE-Heo2-1Xjn-e4kB-JPyt76
--- Physical Segments ---
Physical extent 0 to 1975:
Logical volume /dev/vg_bryan/lv_root
Logical extents 0 to 1975
Physical extent 1976 to 2289:
FREE
Physical extent 2290 to 2321:
Logical volume /dev/vg_bryan/lv_boot
Logical extents 0 to 31
Physical extent 2322 to 2353:
Logical volume /dev/vg_bryan/lv_swap
Logical extents 0 to 31
--- Physical volume ---
PV Name /dev/mapper/ST3750330AS_9QK23HVG
VG Name vg_bryan
PV Size 698.64 GiB / not usable 11.00 MiB
Allocatable yes
PE Size 32.00 MiB
Total PE 22356
Free PE 9556
Allocated PE 12800
PV UUID 3qj5FD-zXt1-9q2V-aHc0-WbNd-3AnC-3n19B3
--- Physical Segments ---
Physical extent 0 to 12799:
Logical volume /dev/vg_bryan/pvmove0
Logical extents 0 to 12799
Physical extent 12800 to 22355:
FREE
and pvdisplay -v
Code:
Scanning for physical volume names
There are 1 physical volumes missing.
There are 1 physical volumes missing.
There are 1 physical volumes missing.
--- Physical volume ---
PV Name /dev/mapper/ST31000528AS_6VPFP747p3
VG Name vg_bryan
PV Size 585.94 GiB / not usable 31.00 MiB
Allocatable yes
PE Size 32.00 MiB
Total PE 18749
Free PE 5949
Allocated PE 12800
PV UUID GGc1Ri-vPNZ-WJ9R-elyk-CdQ9-QGzq-Ce8yLD
There are 1 physical volumes missing.
There are 1 physical volumes missing.
--- Physical volume ---
PV Name /dev/sda1
VG Name vg_bryan
PV Size 73.56 GiB / not usable 0
Allocatable yes
PE Size 32.00 MiB
Total PE 2354
Free PE 314
Allocated PE 2040
PV UUID vp59JC-oT9V-QIyE-Heo2-1Xjn-e4kB-JPyt76
There are 1 physical volumes missing.
There are 1 physical volumes missing.
--- Physical volume ---
PV Name /dev/mapper/ST3750330AS_9QK23HVG
VG Name vg_bryan
PV Size 698.64 GiB / not usable 11.00 MiB
Allocatable yes
PE Size 32.00 MiB
Total PE 22356
Free PE 9556
Allocated PE 12800
PV UUID 3qj5FD-zXt1-9q2V-aHc0-WbNd-3AnC-3n19B3
Thank you again for the help
P.S. I commented out my /home entry in /etc/fstab, which allowed me to boot into my system and post this log information correctly.
08-05-2014, 10:31 PM | #6 | Senior Member | Registered: Aug 2009 | Distribution: Rocky Linux | Posts: 4,815
It's going to take me a while to digest that. I think the contents of /etc/lvm/backup/vg_bryan would be helpful.
When you cancelled that pvmove, did you run "pvmove --abort" to abort the operation, or just kill the process? Note that I do not recommend trying to do the "--abort" now.
Have you done anything on the 1TB drive that might have affected the LVM metadata there after aborting the move?
Last edited by rknichols; 08-05-2014 at 10:33 PM.
08-07-2014, 12:39 AM | #7 | LQ Newbie (Original Poster) | Registered: Jul 2014 | Posts: 8
Well, dang. I just used Ctrl-C to abort. I knew pvmove copied the data to the new location before erasing the original, so I figured that it would be fine. Stupid, stupid mistake on my part. I do not believe that I have done anything that would have affected the metadata. I'll post the backup in a bit.
08-07-2014, 08:40 AM | #8 | Senior Member | Registered: Aug 2009 | Distribution: Rocky Linux | Posts: 4,815
That explains a lot. pvmove can be continued after being interrupted. The correct action would have been to interrupt with Ctrl-C and then run "pvmove --abort" to clean up the partially completed operation. Since stuff has happened since then, I'll need to take a look at that metadata backup file to see if it looks safe to run "pvmove --abort" now.
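For readers landing here later: the reason "interrupt, then --abort" is safe is that pvmove mirrors extents onto the destination and only retires the source copy once the move completes, so discarding an incomplete destination loses nothing. A toy simulation of that idea with plain files; nothing below is a real LVM command:

```shell
#!/bin/sh
# Toy model of "interrupt pvmove, then pvmove --abort": the source stays
# authoritative until the move completes, so cleaning up a partial
# destination copy loses no data. Plain files stand in for extents.
set -e
src=$(mktemp); dst=$(mktemp)
printf 'extent data that must survive' > "$src"
dd if="$src" of="$dst" bs=4 count=1 2>/dev/null   # "interrupted": only 4 bytes copied
rm -f "$dst"                                      # the "--abort": drop the partial copy
[ "$(cat "$src")" = 'extent data that must survive' ] && echo 'source intact'
rm -f "$src"
```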
08-14-2014, 03:53 PM | #9 | LQ Newbie (Original Poster) | Registered: Jul 2014 | Posts: 8
My six-year-old Mediacom modem burnt out. The new one arrived, so here we go.
The backup file contains:
Code:
# Generated by LVM2 version 2.02.106(2) (2014-04-10): Tue Aug 5 03:34:16 2014
contents = "Text Format Volume Group"
version = 1
description = "Created *after* executing 'vgreduce --removemissing vg_bryan'"
creation_host = "bryan" # Linux bryan 3.15.6-200.fc20.x86_64 #1 SMP Fri Jul 18 02:36:27 UTC 2014 x86_64
creation_time = 1407227656 # Tue Aug 5 03:34:16 2014
vg_bryan {
id = "eYqeIf-uxCP-zzU1-6S8F-yz7h-h5cQ-egjEBt"
seqno = 56
format = "lvm2" # informational
status = ["RESIZEABLE", "READ", "WRITE"]
flags = []
extent_size = 65536 # 32 Megabytes
max_lv = 0
max_pv = 0
metadata_copies = 0
physical_volumes {
pv0 {
id = "GGc1Ri-vPNZ-WJ9R-elyk-CdQ9-QGzq-Ce8yLD"
device = "/dev/mapper/ST31000528AS_6VPFP747p3" # Hint only
status = ["ALLOCATABLE"]
flags = []
dev_size = 1228797952 # 585.937 Gigabytes
pe_start = 2048
pe_count = 18749 # 585.906 Gigabytes
}
pv1 {
id = "vp59JC-oT9V-QIyE-Heo2-1Xjn-e4kB-JPyt76"
device = "/dev/sda1" # Hint only
status = ["ALLOCATABLE"]
flags = []
dev_size = 154273792 # 73.5635 Gigabytes
pe_start = 2048
pe_count = 2354 # 73.5625 Gigabytes
}
pv2 {
id = "3qj5FD-zXt1-9q2V-aHc0-WbNd-3AnC-3n19B3"
device = "/dev/mapper/ST3750330AS_9QK23HVG" # Hint only
status = ["ALLOCATABLE"]
flags = ["MISSING"]
dev_size = 1465145344 # 698.636 Gigabytes
pe_start = 2048
pe_count = 22356 # 698.625 Gigabytes
}
}
logical_volumes {
lv_swap {
id = "qnTuSC-ywDj-OPgg-vEKZ-Cnkw-fylf-gQ5NmW"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
segment_count = 1
segment1 {
start_extent = 0
extent_count = 32 # 1024 Megabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv1", 2322
]
}
}
lv_home {
id = "mBe3A4-NgMm-WOS3-SJHF-5UkH-Uvg4-nmtYW4"
status = ["READ", "WRITE", "VISIBLE", "LOCKED"]
flags = []
segment_count = 1
segment1 {
start_extent = 0
extent_count = 12800 # 400 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pvmove0", 0
]
}
}
lv_root {
id = "yoZV2j-znNL-6AKa-gn32-cGL0-ZNOj-Wesgff"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
segment_count = 1
segment1 {
start_extent = 0
extent_count = 1976 # 61.75 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv1", 0
]
}
}
lv_boot {
id = "QeMWXu-fDDC-sn3b-rCeW-EFXI-4geV-2YNZk7"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_host = "bryan"
creation_time = 1406357392 # 2014-07-26 01:49:52 -0500
segment_count = 1
segment1 {
start_extent = 0
extent_count = 32 # 1024 Megabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv1", 2290
]
}
}
pvmove0 {
id = "9Fu3hM-9Nu1-YM6o-RKPp-G6ee-tqWe-Ts4eRP"
status = ["READ", "WRITE", "PVMOVE", "LOCKED"]
flags = []
creation_host = "bryan"
creation_time = 1406363707 # 2014-07-26 03:35:07 -0500
allocation_policy = "contiguous"
segment_count = 1
segment1 {
start_extent = 0
extent_count = 12800 # 400 Gigabytes
type = "mirror"
mirror_count = 2
extents_moved = 0 # 0 Kilobytes
mirrors = [
"pv0", 440,
"pv2", 0
]
}
}
}
}
Alright, just let me know if you need anything else. Thank you.
08-14-2014, 05:00 PM | #10 | Senior Member | Registered: Aug 2009 | Distribution: Rocky Linux | Posts: 4,815
I was wondering what happened to you.
The good news is that the file shows that no extents were moved. The first thing I would do is back up the current LVM metadata from each of the 3 PVs. It is the first 2048 sectors of each PV.
Code:
dd if=/dev/mapper/ST31000528AS_6VPFP747p3 of=pv0.head count=2048
dd if=/dev/sda1 of=pv1.head count=2048
dd if=/dev/mapper/ST3750330AS_9QK23HVG of=pv2.head count=2048
You should put those files somewhere outside of the LVM structure, perhaps on a USB flash drive, so that you have a way to restore to the current state without relying on LVM commands or even the current OS.
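Those header files can also be written back later without any LVM tooling at all: dd with conv=notrunc overwrites the saved sectors in place. A round-trip sketch on a scratch file; on the real system the targets would be the PV devices named above:

```shell
#!/bin/sh
# Save / damage / restore round-trip on a scratch file standing in for a PV.
set -e
pv=$(mktemp); head=$(mktemp)
printf 'LABELONE-fake-lvm-metadata' > "$pv"               # stand-in for the PV header
dd if="$pv" of="$head" count=2048 2>/dev/null             # save, as in the commands above
printf 'XXXXXXXX' | dd of="$pv" conv=notrunc 2>/dev/null  # simulate damage to the header
dd if="$head" of="$pv" conv=notrunc 2>/dev/null           # restore in place, no truncation
[ "$(cat "$pv")" = 'LABELONE-fake-lvm-metadata' ] && echo 'header restored'
rm -f "$pv" "$head"
```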
Next, I would try running "pvmove --abort". That might be enough.
If that fails, I would look at /etc/lvm/archive/vg_bryan_00025-1636259526.vg (the file stored just before the interrupted pvmove) and verify that the stripes shown for lv_home were
Code:
stripes = [
"pv0", 440
]
matching the first mirror shown for LV pvmove0 in the current metadata backup, and that the data for lv_swap, lv_root, and lv_boot match what is in the current metadata backup file. If those match, then you should be able to run
Code:
vgcfgrestore -f /etc/lvm/archive/vg_bryan_00025-1636259526.vg vg_bryan
to restore the metadata to the state it was in before that interrupted pvmove.
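The "do the segments match" check can be scripted rather than eyeballed, e.g. by extracting the stripes entries from both metadata files and comparing them. A sketch with fabricated miniature files standing in for the real archive and backup copies:

```shell
#!/bin/sh
# Compare the "stripes = [...]" entries of two LVM metadata text files.
# Tiny fabricated files stand in for /etc/lvm/archive/... and /etc/lvm/backup/vg_bryan.
old=$(mktemp); cur=$(mktemp)
printf 'lv_home {\nstripes = [\n"pv0", 440\n]\n}\n' > "$old"
printf 'lv_home {\nstripes = [\n"pv0", 440\n]\n}\n' > "$cur"
grep -A1 'stripes' "$old" > "$old.s"
grep -A1 'stripes' "$cur" > "$cur.s"
if cmp -s "$old.s" "$cur.s"; then echo 'stripes match'; else echo 'stripes differ'; fi
rm -f "$old" "$cur" "$old.s" "$cur.s"
```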
1 member found this post helpful.
08-14-2014, 06:02 PM | #11 | LQ Newbie (Original Poster) | Registered: Jul 2014 | Posts: 8
Small problem:
Code:
pvmove --abort
Missing device /dev/mapper/ST3750330AS_9QK23HVG reappeared, updating metadata for VG vg_bryan to version 56.
Device still marked missing because of allocated data on it, remove volumes and consider vgreduce --removemissing.
Cannot change VG vg_bryan while PVs are missing.
Consider vgreduce --removemissing.
Skipping volume group vg_bryan
The stripes, etc., are all correct, so
Code:
vgcfgrestore -f /etc/lvm/archive/vg_bryan_00025-1636259526.vg vg_bryan
Restored volume group vg_bryan
And now it's time to reboot and see what happens!
08-14-2014, 08:28 PM | #12 | Member | Registered: Nov 2011 | Distribution: Fedora | Posts: 72
your name looks like mine!
08-14-2014, 08:31 PM | #13 | LQ Newbie (Original Poster) | Registered: Jul 2014 | Posts: 8
It worked! Thank you very, very much rknichols.
08-14-2014, 08:40 PM | #14 | LQ Newbie (Original Poster) | Registered: Jul 2014 | Posts: 8
On a side note:
I still do not have /dev/sdb# and /dev/sdc# partition nodes; they are /dev/mapper/(disk ID)p# instead. I noticed that this used to happen with SOME USB drives; now it appears to be happening with all devices (except /dev/sda).
Did something with pvmove somehow cause this?
08-14-2014, 10:29 PM | #15 | Moderator | Registered: Apr 2002 | Location: earth | Distribution: slackware by choice, others too :} ... android. | Posts: 23,067
Quote:
Originally Posted by bplis*
your name looks like mine!
You came out of hibernation to make a fairly random, non-technical contribution to this thread? Seriously?
Can you please make such comments via a more private channel next time?
Thanks,
Tink