Forums > Other *NIX Forums > Solaris / OpenSolaris

Solaris / OpenSolaris This forum is for the discussion of Solaris and OpenSolaris. General Sun, SunOS and SPARC related questions also go here.
Old 06-25-2009, 03:02 AM   #1
LQ Newbie
Registered: Feb 2006
Posts: 8

Global to Local Zone Solaris

Hello. Here comes a big one. :P

System : SunOS 5.10 sun4v sparc SUNW,SPARC
I have a Solaris 10 global zone on a server, with no external storage and no local zones. I want to "convert" that global zone into a local zone and transfer it to another server.

I have tried creating a flash archive with flarcreate, plus a ufsdump file, on that server and moving them to another server. But in order to install these files into a local zone I had to install Solaris Containers (BrandZ), and at zonecfg creation I had to set brand=solaris10.

Afterwards, I ran zoneadm -z zonename install -v -a flarchive.flar and it installed the zone.
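For reference, the branded-zone attempt above looks roughly like this (the zone name, zonepath, NIC, address and archive path are all placeholders, not values from the post, and it assumes the improvised solaris10 brand is already in place):

```shell
# Configure the zone with the (improvised) solaris10 brand.
# All names below are placeholders -- adapt to your layout.
zonecfg -z myzone10 <<'EOF'
create -b
set brand=solaris10
set zonepath=/zones/myzone10
set autoboot=false
add net
set physical=ce0
set address=192.168.1.50
end
commit
EOF

# Install the zone from the flash archive taken on the source server,
# then boot it (a forced boot was needed, as noted above).
zoneadm -z myzone10 install -v -a /export/flarchive.flar
zoneadm -z myzone10 boot -f
```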

While this works overall (though I had to boot the zone with zoneadm -z zonename boot -f, i.e. forced), I would like to make that zone native rather than branded.

The problem is, if I don't set the brand and just try to install the zone directly from the file with zoneadm -z zonename install -v -a flarchive.flar, it starts outputting errors, not recognizing -v and -a as parameters for a native zone.

The only other solution I have at hand, which I am trying to work out at the moment, is to create a full zone on another server and try a ufsrestore of the dump file I created previously into that local zone.

So if anyone has any other ideas, or can help me with the ones I had, please give me a heads-up. Thank you.


Update: As I said, I was trying to work out a ufsrestore of the dump file into a local zone. At first glance this seems to work just great. The zone booted without any problems at all and seemed to work just fine.

I am still puzzled, though, about how I could install directly into a NATIVE local zone from a flar file or a ufsdump.

Last edited by AzraelShade; 06-25-2009 at 05:55 AM.
Old 06-25-2009, 04:07 AM   #2
Registered: Feb 2004
Location: Outside Paris
Distribution: Solaris 11.3, Oracle Linux, Mint
Posts: 9,669

I'm not sure I understand how you managed to complete the first solution, as, AFAIK, there are no Solaris 10 branded zones (yet) like there are for Solaris 8 and 9, i.e. there is no SUNWsolaris10.xml file in /etc/zones to handle the brand=solaris10 setting. Did you create one?

The second one, involving ufsrestore, is unsupported and would probably need some housekeeping in the /etc files afterwards, but should otherwise work.

The closest way to properly achieve what you want is currently at the project stage.
Old 06-26-2009, 03:33 AM   #3
LQ Newbie
Registered: Feb 2006
Posts: 8

Original Poster
Hello again, and thanks for the reply.

Yes jlliagre, I kinda created one. I installed the SUNWsolaris9 packages and copied some of the files for the native Solaris 10 zone.

I'm shaping up the second method at the moment on a test server. First I will redo it by just doing a ufsrestore on top of a standard whole-root zone and carefully analyze whether everything is OK.

I will also have a second try on another whole-root zone, but this time deleting all the installed files in the zone first and afterwards doing a "clean" ufsrestore, since the files related to the local zone are stored in the global zone.

Will see how it goes.

Update: Done with the tests. Doing a ufsrestore on top of a whole-root zone that already has files is a somewhat bad idea, starting with the fact that any symbolic link will be dead, as (I presume) they can't be reconnected on the new server because they will point to different inodes on the slice.

So, IMO, the way to go at the moment for a physical-to-virtual migration should be:
1. Create a whole-root zone on the future hosting server (with separate partitions: add fs)
2. ufsdump / and any other slices you might have on the server you want to migrate
3. Mount each of the whole-root zone partitions and run newfs on them
4. Run ufsrestore on each of the whole-root zone partitions with their respective dump files
5. If the old system had a different NIC, like hme0, and the new one has something like ce0, remember to rename the corresponding files in the local zone's /etc/
6. Now you can (hopefully) boot the new zone with no errors and it will work as intended.
7. Check for services that are enabled but not running (svcs -x) and do something about them. :P
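A rough sketch of the steps above, assuming a single extra /var slice; every device name, path and zone name below is a placeholder to be adapted:

```shell
# 1. On the target server: whole-root zone with a separate /var (add fs)
zonecfg -z migrated <<'EOF'
create
set zonepath=/zones/migrated
add fs
set dir=/var
set special=/dev/dsk/c1t1d0s4
set raw=/dev/rdsk/c1t1d0s4
set type=ufs
end
commit
EOF
zoneadm -z migrated install

# 2. On the source server: level-0 dump of each slice
ufsdump 0f /export/dumps/root.dump /
ufsdump 0f /export/dumps/var.dump /var

# 3-4. Back on the target: newfs the zone slices, then restore the dumps
newfs /dev/rdsk/c1t1d0s4
mount /dev/dsk/c1t1d0s4 /zones/migrated/root/var
(cd /zones/migrated/root && ufsrestore rf /export/dumps/root.dump)
(cd /zones/migrated/root/var && ufsrestore rf /export/dumps/var.dump)

# 5. NIC name changed (e.g. hme0 -> ce0): rename the hostname file
mv /zones/migrated/root/etc/hostname.hme0 \
   /zones/migrated/root/etc/hostname.ce0

# 6-7. Boot and check for broken services
zoneadm -z migrated boot
zlogin migrated svcs -x
```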

Good luck, and if you've got any other suggestions, fire away.

Last edited by AzraelShade; 06-26-2009 at 07:31 AM.
Old 07-06-2009, 12:02 PM   #4
LQ Newbie
Registered: Feb 2006
Posts: 8

Original Poster
Hello again.

After my last post, I left point 7 a bit up in the air, since svcs -x will just show you an unusable system.

The reason for that is svc:/system/sysevent, which cannot run in a local zone but runs in a global zone. Since the newly migrated system doesn't magically become aware that it is now a local zone, it still has sysevent as a dependency for fc-fabric, which in turn is a dependency for milestone-devices, and so on.

The solution to this problem was a bit obscure: you have to modify a few XML files in /var/svc/manifest and exclude sysevent as a dependency so the services can start. Those files are:
./system/sysevent.xml (comment out the whole dependent part)
./system/device/devices-fc-fabric.xml (comment out the whole sysevent dependency part)
./system/picl.xml (comment out the service_fmri value='svc:/system/sysevent')
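To illustrate what "commenting out" means here, this is roughly what a disabled sysevent dependency would look like in devices-fc-fabric.xml (the element and attribute names follow the standard SMF manifest format; the exact content of the real file may differ):

```xml
<!-- sysevent cannot run in a local zone, so the dependency is disabled:
<dependency
    name='sysevent'
    grouping='require_all'
    restart_on='none'
    type='service'>
    <service_fmri value='svc:/system/sysevent:default' />
</dependency>
-->
```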

That should be all of them. If any other file turns up I will update this post (I'm not sure I remember them all at the moment).

After all this you have to import the new configurations by issuing:
/usr/sbin/svccfg import /var/svc/manifest/system/sysevent.xml
and so on for all the files.
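The three imports can be done in one loop, e.g. (paths as listed above):

```shell
# Re-import the edited manifests so SMF picks up the changes
for m in system/sysevent.xml \
         system/device/devices-fc-fabric.xml \
         system/picl.xml
do
    /usr/sbin/svccfg import /var/svc/manifest/$m
done
```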

Reboot, and after running svcs -x you should have only sysevent, fmd and scheduler in maintenance mode.

sysevent, as I said earlier, cannot run in a local zone.
scheduler should say in its log file that it cannot run in a non-global zone.
fmd should not be able to run in a local zone either, since it's the Fault Management Daemon, which needs access to I/O bus events, CPU, memory, etc.

You can go ahead and disable the above services.
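Something like the following should do it (svcadm accepts abbreviated FMRIs; double-check the exact instance names with svcs -a first):

```shell
# Disable the three services that cannot run in a local zone
svcadm disable sysevent
svcadm disable fmd
svcadm disable scheduler
```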

Hope this has been helpful.
Old 07-06-2009, 03:12 PM   #5
Registered: Feb 2004
Location: Outside Paris
Distribution: Solaris 11.3, Oracle Linux, Mint
Posts: 9,669

Thanks for the update.

