lvmscan not showing appropriate information
Hello,
I will try to be as clear as possible. I'm building Oracle servers. The filesystem requirements are:

/opt/oracle        - 30GB
/usr/logsdb/redo0  - 20GB
/usr/logsdb/redo1  - 20GB
/usr/logsdbarchives - 40GB
/usr/backupdb      - 300GB

The SAN team has allocated the storage, but they have asked us to use individual SAN LUNs to create the above partitions. The LUNs provided are:

For /opt/oracle - 30GB:
Code:
Array  LDEV   Size  LUN
43514  83:14  10GB  0A
43514  83:15  10GB  0E
43514  83:16  10GB  0F

For /usr/logsdb/redo0 - 20GB and /usr/logsdb/redo1 - 20GB:
Code:
Array  LDEV   Size  LUN
43514  83:0E  20GB  0B
43514  83:09  20GB  0D

Like this they have provided individual LUNs and asked us to create the filesystems on the specified LUNs.

There is a script from the senior team that has this content:

Script Name: script1
Code:
fdisk /dev/mapper/$1 << EOF
n
p
l
p
t
8e
p
w
EOF
kpartx -a /dev/mapper/$1
fi    #End of disk

And the above script is run like this:
Code:
for i in $( ls mpath*); do script1.sh $i; done

When my senior ran this script on another server, it created the PVs and displayed the LUN info like this:
Code:
#lvmscan
MP_F_NAME  MP_PART_NAME  DEV_NAME1  DEV_PATH1  ...  SERIAL  LDEV  VOL_GRP  SIZE      MULTIPATH
N/A        N/A           c0d0       000:05:00       N/A     N/A   vg00     34876     N/A
mpath0     mpath0p1      sda        2:0:0:1         28264   6502  vg01     10240.31  YES
mpath1     mpath1p1      sdb        2:0:0:2         28264   6503  vg01     10240.31  YES
mpath2     mpath2p1      sdc        2:0:0:1         28264   6504  vg01     10240.31  YES

When he handed the server to me like this, I used the LDEV numbers (say 6502 and 6503) to identify the names mpath0p1 and mpath1p1, and I was able to create vg01, extend it, and lvcreate the required filesystems. Then I ran the same script on another new server, assuming that once I got those alias names (like mpath0p1) I could vgcreate and then lvcreate, but I broke down :doh: before anything went further.
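For reference, with its line breaks restored, the quoted script appears intended to do something like the sketch below. This is a hedged reconstruction, not the original: the lone `l` among the fdisk answers is assumed to be the partition number `1`, the two blank lines accept fdisk's default start/end sectors, and the stray `fi` looks like leftover from an edited-out `if` block, so it is dropped here.

```shell
#!/bin/bash
# Hypothetical reconstruction of script1.sh -- a sketch, not the original.
# Creates one full-size primary partition of type 8e (Linux LVM) on a
# multipath device, then maps the new partition with kpartx.
partition_lun() {
    local dev="/dev/mapper/$1"
    if [ ! -e "$dev" ]; then
        echo "no such device: $dev" >&2
        return 1
    fi
    fdisk "$dev" <<EOF
n
p
1


t
8e
p
w
EOF
    kpartx -a "$dev"    # creates /dev/mapper/${1}p1
}
```

Run from inside /dev/mapper, the driving loop would then be `for i in mpath1 mpath2; do partition_lun "$i"; done`. Note that the posted `for i in $( ls mpath*)` only works if the current directory is /dev/mapper.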
Here is the output of lvmscan after I ran that script on the new server, which is built exactly like the server above:

Code:
#lvmscan
MP_F_NAME  MP_PART_NAME  DEV_NAME1  DEV_PATH1  ...  SERIAL   LDEV     VOL_GRP  SIZE  MULTIPATH
(Standardin) 1: parse error
mpath1     mpath1p1      unknown    unknown         unknown  unknown  free           NO
(Standardin) 1: parse error
mpath1     mpath2p1      unknown    unknown         unknown  unknown  free           NO

Note: I am able to see the partitions mpath1p1, mpath2p1 and several others when I type fdisk -l.

It's sad :( that I am unable to handle this. If any noble geek could shed some light on this and help me get my mpaths back with no errors, I'd give them huge kudos... Kudos in advance too :hattip: |
austinbravo, you haven't supplied the content of 'lvmscan', so we can't tell what it is complaining about. ('lvmscan' is not an lvm command but appears to be something written locally in your shop.)
It may be that the 'lvmscan' script expects things to be named in a certain manner, and possibly what you named them is not what 'lvmscan' expects.
Try doing a 'pvs' on both systems (the good "reference" system and your new system) and compare them. You can verify that your new partitions were added correctly to LVM as physical volumes. Then do a 'vgs' on both systems for comparison, and finally an 'lvs', again for comparison.

Essentially the process for the 30GB /opt/oracle filesystem would be like this (assuming it will be a separate new VG). Partition the device,
Code:
fdisk /dev/mapper/mpathX

Format the new filesystem (assuming ext3 for this example; yours might be different),
Code:
mkfs.ext3 /dev/mapper/oraclevg-oraclelv

Create a mount point for it,
Code:
mkdir -p /opt/oracle

A 'mount /dev/mapper/oraclevg-oraclelv /opt/oracle' would mount it, and you can verify the filesystem is OK. You'll, of course, lose this mount with a reboot; with the "fstab/mount -a" method you don't have to remember this step later. Finally, don't forget to change ownership and mode as appropriate with 'chown' and 'chmod'.

Let us know how you make out. |
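Spelled out end to end, the recipe above might look like the following sketch, wrapped in a function so nothing runs by accident. The VG/LV names oraclevg/oraclelv come from the mkfs command in the reply; the mpath partition names, the `-l 100%FREE` sizing, and the fstab line are assumptions to adapt to your own LDEVs.

```shell
#!/bin/bash
# Sketch: build the 30GB /opt/oracle from three 10GB LUN partitions.
# The partition paths passed in are placeholders -- match them to the
# LDEVs the SAN team assigned (e.g. via your lvmscan output).
make_oracle_fs() {
    local parts="$*"                 # e.g. /dev/mapper/mpathXp1 ...
    pvcreate $parts                  # label each partition as an LVM PV
    vgcreate oraclevg $parts         # one VG spanning all the PVs
    lvcreate -l 100%FREE -n oraclelv oraclevg   # one LV using all the space
    mkfs.ext3 /dev/mapper/oraclevg-oraclelv
    mkdir -p /opt/oracle
    echo '/dev/mapper/oraclevg-oraclelv /opt/oracle ext3 defaults 1 2' >> /etc/fstab
    mount -a                         # mounts it now and on every reboot
}
```

A call would look like `make_oracle_fs /dev/mapper/mpathXp1 /dev/mapper/mpathYp1 /dev/mapper/mpathZp1` (run as root, on the correct devices).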
p.s.
The company I work for does full-volume PVs for SAN disk. That is, we don't put a partition table on the mpath device. At work, I would do this instead,
Code:
pvcreate /dev/mapper/mpathX /dev/mapper/mpathY /dev/mapper/mpathZ |
Hi Tommy,
Thank you very much for your help. Your explanation was very detailed and I was able to understand the background concept. Yes, I did what you mentioned, and as of now the partitions are done and I was able to create all the filesystems. Perfecto!

I also have another question, if you are able to answer, please. I'm trying to build a RAC server. I have 5 IPs assigned, like this:

LAN IP
TSM IP
ISO1 (Heartbeat 1)
ISO2 (Heartbeat 2)
and iLO & VIP

The physical cards are the first four, so I thought I could check the IPs of all the NICs with ifconfig. But when I do #ifconfig -a I see only the eth0 (LAN) and eth1 (TSM) IPs; I don't see any other information. Any idea how to find the IPs for all of the physical NIC cards like ISO1 and ISO2?

Thank you |
I have little RAC experience, but I did check on one of our servers with RAC and an 'ifconfig -a' did display all of the interfaces on that system. Not all were up, but they were all listed.
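As a cross-check, these standard commands (sysfs, iproute2, and procfs, assumed available on the asker's distro) list every interface the kernel has registered, whether or not it is up or has an address assigned. If ISO1/ISO2 don't appear even here, the driver for those NICs is probably not loaded at all.

```shell
# Every interface the kernel has registered, including ones that are DOWN
# and ones with no address yet:
ls /sys/class/net
# The same list with link state and MAC details:
ip -o link show
# Any registered NIC also shows up in the kernel's traffic counters:
cat /proc/net/dev
```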
You could post this as a new question and maybe a RAC literate person can get you an answer. Glad your LVM stuff worked out and glad to help. |