Live Partition Mobility<br />
Initial Experiences<br />
Session ID: pAI50<br />
2008 Systems Technical Conference<br />
Steven Knudson<br />
sjknuds@us.ibm.com<br />
May 12 – 15, 2008 – Los Angeles, California<br />
© 2008 <strong>IBM</strong> Corporation
Agenda<br />
<strong>IBM</strong> Training - 2008 Systems Technical Conference<br />
► Overview<br />
► Prerequisites<br />
► Validation<br />
► Migration<br />
► Effects<br />
► Demo<br />
► Supplemental Material<br />
2 20-Aug-08<br />
Overview<br />
► Live Partition Mobility moves a running logical partition from one POWER6 server to another without disrupting the operation of the operating system or applications<br />
► Network applications may see a brief (~2 sec) suspension toward the end of the migration, but connectivity will not be lost<br />
3 20-Aug-08<br />
Overview<br />
► Live Partition Mobility is useful for<br />
– Server consolidation<br />
– Workload balancing<br />
– Preparing for planned maintenance<br />
• e.g., planned hardware maintenance or upgrades<br />
• In response to a warning of an impending hardware failure<br />
4 20-Aug-08<br />
Overview<br />
► Inactive partition migration moves a powered-off partition from one system to another<br />
► The validation process is less restrictive because the migrated partition will boot on the target machine; no running state needs to be transferred<br />
5 20-Aug-08<br />
Overview<br />
► Live Partition Mobility is not a replacement for HACMP<br />
– Planned moves only – everything must be functional<br />
– It is not automatic on a failure event<br />
– Partitions cannot be migrated from failed machines<br />
– It moves a single OS; there is no redundant failover OS in which an HACMP resource group is restarted<br />
► It is not a disaster recovery solution<br />
– Migration across long distances is not supported in the first release because of SAN and LAN considerations<br />
6 20-Aug-08<br />
Prerequisites<br />
From the Fix Central website, Partition Mobility:<br />
http://www14.software.ibm.com/webapp/set2/sas/f/pm/component.html<br />
7 20-Aug-08<br />
Prerequisites<br />
► Two POWER6 systems managed by a single HMC, or IVM on each server<br />
► Advanced POWER Virtualization Enterprise Edition<br />
► VIOS 1.5.1.1 (VIO 1.5.0.0 plus Fixpack 10.1) plus interim fixes:<br />
IZ08861.071116.epkg.Z – Partition Mobility fix<br />
642758_vio.080208.epkg.Z – VIO MPIO fix<br />
AX059907_3.080314.epkg.Z – USB optical drive fix<br />
IZ16430.080327.epkg.Z – various QLogic/Emulex FC fixes<br />
Retrieve the interim fixes and place them in the VIO server at /home/padmin/interim_fix<br />
# emgr -d -e IZ16430.080327.epkg.Z -v3 (as root, to see the description)<br />
$ updateios -dev /home/padmin/interim_fix -install -accept (install as padmin)<br />
► VIOS 1.5.2.1 (VIO 1.5.0.0 plus Fixpack 11.1) rolls up all interim fixes – preferred<br />
► Virtualized SAN storage (rootvg and application VGs)<br />
► Virtualized Ethernet (Shared Ethernet Adapter)<br />
8 20-Aug-08<br />
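Before running the updateios step above, it is worth confirming that every fix actually landed in the staging directory. A minimal sketch: the fix names come from this slide, but the directory is simulated with a temp dir (and one fix deliberately left out) so the check is runnable anywhere.

```shell
# Verify the expected interim fixes are staged before running
# "updateios -dev /home/padmin/interim_fix -install -accept".
set -u
STAGE=$(mktemp -d)                      # stand-in for /home/padmin/interim_fix
FIXES="IZ08861.071116.epkg.Z 642758_vio.080208.epkg.Z \
AX059907_3.080314.epkg.Z IZ16430.080327.epkg.Z"

# Simulate a staging directory that is missing one fix
touch "$STAGE/IZ08861.071116.epkg.Z" "$STAGE/642758_vio.080208.epkg.Z" \
      "$STAGE/IZ16430.080327.epkg.Z"

missing=""
for f in $FIXES; do
    [ -f "$STAGE/$f" ] || missing="$missing $f"
done
if [ -n "$missing" ]; then
    echo "missing:$missing"
else
    echo "all fixes staged; safe to run updateios"
fi
rm -rf "$STAGE"
```

On a real VIO server you would point the check at /home/padmin/interim_fix and only run updateios when nothing is reported missing.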
Prerequisites<br />
► All systems that will host a mobile partition must be on the same subnet and managed by a single HMC<br />
– POWER6 blades are managed by IVM instances<br />
► All systems must be connected to shared physical disks (LUNs) in a SAN subsystem with no SCSI reserve<br />
SDDPCM, SVC, or RDAC-based LUN –<br />
$ chdev -dev hdisk8 -attr reserve_policy=no_reserve<br />
PowerPath CLARiiON LUN –<br />
$ chdev -dev hdiskpower8 -attr reserve_lock=no<br />
► No LVM-based virtual disks – no virtual disk logical volumes carved in the VIO server<br />
► All resources must be shared or virtualized prior to migration (e.g., vscsi, virtual Ethernet)<br />
9 20-Aug-08<br />
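Clearing the reserve on every backing LUN by hand is error-prone; the chdev commands above can be generated from the lspv output instead. A dry-run sketch, parsing captured output (the disk names are illustrative; on a real VIO server you would feed in `ioscli lspv` and, for PowerPath disks, emit reserve_lock=no instead):

```shell
# Generate the chdev commands that clear the SCSI reserve on the
# candidate backing LUNs. In this setup, disks outside any volume
# group (VG column == "None") are the mobile-LPAR backing LUNs.
lspv_out="hdisk0 00c23c9f9a1f1da3 rootvg active
hdisk6 00c23c9f291cc30b None
hdisk7 00c23c9f291cc438 None"

cmds=$(echo "$lspv_out" | awk '$3 == "None" {
    print "chdev -dev " $1 " -attr reserve_policy=no_reserve"
}')
echo "$cmds"
```

Review the generated commands before running them as padmin; rootvg and other in-use volume groups are skipped automatically.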
Prerequisites<br />
► The pHypervisor automatically manages migration of CPU and memory<br />
► Dedicated I/O adapters must be de-allocated before migration<br />
► cd0 in the VIO server may not be attached to the mobile LPAR as a virtual optical device<br />
► The operating system and applications must be migration-aware or migration-enabled<br />
10 20-Aug-08<br />
Validation – High Level<br />
► Active partition migration capability and compatibility check<br />
► Resource Monitoring and Control (RMC) check<br />
► Partition readiness<br />
► System resource availability<br />
► Virtual adapter mapping<br />
► Operating system and application readiness check<br />
11 20-Aug-08<br />
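These checks can also be driven from the HMC command line with migrlpar; a sketch (not run here), where the managed-system names (mercury, zeus) and partition name (bmark26) are taken from the demo environment later in this deck:

```shell
# Validate only (no migration is performed); run on the HMC.
migrlpar -o v -m mercury -t zeus -p bmark26

# If validation is clean, -o m performs the actual migration.
migrlpar -o m -m mercury -t zeus -p bmark26
```

The GUI wizard shown in the demo runs the same validation pass before enabling the Finish button.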
Validation<br />
► System properties support Partition Mobility<br />
– Inactive and Active Partition Mobility Capable = True<br />
► Mover Service Partitions on both systems<br />
– VIO servers with the VASI device defined, and MSP enabled<br />
12 20-Aug-08<br />
© 2008 <strong>IBM</strong> Corporation
Migration<br />
<strong>IBM</strong> Training - 2008 Systems Technical Conference<br />
� If validation passes, “finish” button starts migration<br />
� From this point, all state changes are rolled back if an error occurs<br />
Mobile<br />
Partition<br />
1 2<br />
POWER Hypervisor<br />
MSP MSP<br />
VASI VASI<br />
Mobile<br />
Partition<br />
POWER Hypervisor<br />
Source System Target System<br />
Partition State Transfer Flow<br />
13 20-Aug-08<br />
3<br />
4 5<br />
Migration Steps<br />
► The HMC creates a shell partition on the destination system<br />
► The HMC configures the source and destination Mover Service Partitions (MSPs)<br />
– MSPs connect to PHYP through the Virtual Asynchronous Services Interface (VASI)<br />
► The MSPs set up a private, full-duplex channel to transfer partition state data<br />
► The HMC sends a Resource Monitoring and Control (RMC) event to the mobile partition so it can prepare for migration<br />
► The HMC creates the virtual target devices and virtual SCSI adapters in the destination MSP<br />
► The MSP on the source system starts sending the partition state to the MSP on the destination server<br />
14 20-Aug-08<br />
Migration Steps<br />
► The source MSP keeps copying memory pages to the target in successive phases until the number of modified pages has been reduced to near zero<br />
► The MSP on the source instructs the PHYP to suspend the mobile partition<br />
► The mobile partition confirms the suspension by suspending threads<br />
► The source MSP copies the latest modified memory pages and state data<br />
► Execution resumes on the destination server and the partition re-establishes its operating environment<br />
► The mobile partition recovers I/O on the destination server and retries all I/O operations that were uncompleted during the suspension<br />
– It also sends gratuitous ARP requests on all VLAN adapters (MAC addresses are preserved)<br />
15 20-Aug-08<br />
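The successive-phase pre-copy described above can be sketched as a toy loop: each pass re-copies only the pages dirtied during the previous pass, and the partition is suspended once the remaining working set is small enough to send in one final burst. The numbers are purely illustrative, not PHYP internals.

```shell
# Toy model of iterative pre-copy convergence.
pages=100000      # modified pages at the start of migration
threshold=100     # "near zero": suspend when the working set is this small
passes=0
while [ "$pages" -gt "$threshold" ]; do
    # assume ~1/10th of the copied pages are dirtied again during each pass
    pages=$((pages / 10))
    passes=$((passes + 1))
done
echo "suspend after $passes passes with $pages pages left to copy"
```

If the dirty rate were too high to converge, real implementations fall back to suspending earlier; here the model converges in three passes.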
Migration Steps<br />
► When the destination server receives the last modified pages, the migration is complete<br />
► In the final steps, all resources are returned to the source and destination systems and the mobile partition is restored to its fully functional state<br />
► The channel between the MSPs is closed<br />
► The VASI channel between each MSP and PHYP is closed<br />
► VSCSI adapters on the source MSP are removed<br />
► The HMC informs the MSPs that the migration is complete and all migration data can be removed from their memory tables<br />
► The mobile partition and all its profiles are deleted from the source server<br />
► You can now add dedicated adapters to the mobile partition via DLPAR as needed, or put it in an LPAR workload group<br />
16 20-Aug-08<br />
Effects<br />
► Server properties<br />
• The affinity characteristics of the logical memory blocks may change<br />
• The maximum number of potential and installed physical processors may change<br />
• The L1 and/or L2 cache size and association may change<br />
• This is not a functional issue, but it may affect performance characteristics<br />
► Console<br />
• Any active console sessions are closed when the partition is migrated<br />
• Console sessions must be re-opened on the target system by the user after migration<br />
► LPAR<br />
• The uname output will change, and the partition ID may change; the IP address and MAC address will not change<br />
17 20-Aug-08<br />
Effects<br />
► Network<br />
– A temporary network outage of a few seconds is expected as part of suspending the partition<br />
• Temporary network outages may be visible to application clients, but it is assumed that these are inherently recoverable<br />
► VSCSI server adapters<br />
– Adapters that are configured with the remote partition set to the migrating partition will be removed<br />
• Adapters that are configured to allow any partition to connect will be left configured after the migration<br />
• Any I/O operations that were in progress at the time of the migration will be retried once the partition is resumed<br />
– As long as unused virtual slots exist on the target VIO server, the necessary VSCSI controllers and target devices are created automatically<br />
18 20-Aug-08<br />
Effects<br />
► Error logs<br />
– When a partition migrates, all of the error logs that the partition had received appear on the target system<br />
– All of the error logs contain the machine type, model, and serial number, so it is possible to correlate an error with the system that detected it<br />
► Partition time<br />
– When a partition is migrated, its Time of Day and timebase values are migrated<br />
– The Time of Day of the partition is recalculated, ensuring that the partition timebase value increases monotonically and accounting for any delays in migration<br />
19 20-Aug-08<br />
DEMO<br />
20 20-Aug-08<br />
Environment<br />
► Two POWER6 servers<br />
– 8-way Mercury<br />
• 01EM320_31<br />
– 16-way Zeus<br />
• 01EM320_31<br />
► Single HMC managing both servers<br />
– HMC V7.3.3.0<br />
► Mobile partition<br />
– bmark26<br />
• OS: AIX 6.1 6100-00-01-0748<br />
• Shared processor pool Test1<br />
• CPU entitlement: Min 0.20, Des 0.20, Max 2.00<br />
• Mode: Uncapped<br />
• Virtual processors: Min 1, Des 2, Max 4<br />
• Disks: SAN LUN<br />
21 20-Aug-08<br />
Supplemental Material<br />
22 20-Aug-08<br />
Initial Configuration<br />
► RAID5 LUNs carved in storage, zoned to 4 FC adapters in the two VIO servers<br />
► LUNs appear in each VIO server as hdisk6, 7<br />
► No SCSI reserve set on hdisk6, 7 in each VIO server. Also, with two fcs adapters in a VIO server, change the algorithm to round_robin for hdisk1. SDDPCM, RDAC, or PowerPath driver installed in each VIO server<br />
► hdisk6 and 7 in each VIO server attached to a vscsi server adapter as a raw disk<br />
► Client sees one hdisk – with two MPIO paths: lspath -l hdisk0<br />
► Paths are fail_over only. No load balancing in client MPIO<br />
► Client hdisk0: set hcheck_interval to 300 before reboot<br />
(Diagram: dual VIO servers running SDDPCM; the LUN is also zoned into the two VIO LPARs on the other POWER6 server)<br />
23 20-Aug-08<br />
Initial Configuration (continued)<br />
► “Source” POWER6 server mercury has dual VIO LPARs, ec01 and ec02. SEA Failover primary is ec01, backup is ec02<br />
► “Destination” POWER6 server zeus has dual VIO LPARs, sq17 and sq18. SEA Failover primary is sq17, backup is sq18<br />
► The profile for client partition bmark29_mobile has virtual SCSI client adapter IDs 8 and 9, connecting to ec01 (39) and ec02 (39) respectively. Do NOT expect the server adapter IDs to remain the same after the partition move<br />
24 20-Aug-08<br />
Initial Configuration (continued)<br />
► In VIO LPARs ec01 and ec02, hdisk6 and hdisk7 are the LUNs we use for the bmark26 and bmark29 mobile LPARs<br />
► $ lspv<br />
NAME PVID VG STATUS<br />
hdisk0 00c23c9f9a1f1da3 rootvg active<br />
hdisk1 00c23c9f9f5993e5 clientvg active<br />
hdisk2 00c23c9f2fb9e5a9 clientvg active<br />
hdisk3 00c23c9fb60af645 None<br />
hdisk4 none None<br />
hdisk5 none None<br />
hdisk6 00c23c9f291cc30b None<br />
hdisk7 00c23c9f291cc438 None<br />
► Without putting LUN hdisk7 in a volume group, we put a PVID on it<br />
$ chdev -dev hdisk7 -attr pv=yes -perm<br />
25 20-Aug-08<br />
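The PVID stamped on the LUN in the VIO server is how you later confirm the client LPAR is seeing the same physical disk. A sketch of that cross-check, parsing the captured lspv lines from this deck (on live systems you would feed in `ioscli lspv` on the VIO server and `lspv` in the client):

```shell
# Confirm the PVID the VIO server stamped on the LUN is the one the
# client LPAR sees on its vscsi disk.
vio_line="hdisk7 00c23c9f291cc438 None"             # from the VIO server
client_line="hdisk0 00c23c9f291cc438 rootvg active" # from the client LPAR

vio_pvid=$(echo "$vio_line" | awk '{print $2}')
client_pvid=$(echo "$client_line" | awk '{print $2}')

if [ "$vio_pvid" = "$client_pvid" ]; then
    echo "PVID match: $vio_pvid"
else
    echo "PVID MISMATCH: vio=$vio_pvid client=$client_pvid"
fi
```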
Initial Configuration (continued)<br />
► DS4300 / RDAC LUNs can be identified by the IEEE volume name<br />
► $ cat sk_lsdisk<br />
for d in `ioscli lspv | awk '{print $1}'`<br />
do<br />
echo $d `ioscli lsdev -dev $d -attr | grep ieee | awk '{print $1" "$2}'`<br />
done<br />
$ sk_lsdisk<br />
NAME<br />
hdisk0<br />
hdisk1<br />
hdisk2<br />
hdisk3<br />
hdisk4<br />
hdisk5<br />
hdisk6 ieee_volname 600A0B800016954000001C7646F142A6<br />
hdisk7 ieee_volname 600A0B8000170BC10000142846F124AD<br />
► We have found that the ieee_volname is not visible in the client LPAR<br />
26 20-Aug-08<br />
Initial Configuration (continued)<br />
► CLARiiON PowerPath LUNs can be identified by the Universal Identifier (UI)<br />
► $ cat sk_clariion<br />
for d in `ioscli lspv | grep hdiskpower | awk '{print $1}'`<br />
do<br />
ioscli lsdev -dev $d -vpd | grep UI | awk '{print $1" "$2}'<br />
done<br />
27 20-Aug-08<br />
Initial Configuration (continued)<br />
► In both VIO LPARs on the “source” POWER6 server mercury, hdisk7 is attached to virtual SCSI server adapter ID 39<br />
► $ cat sk_lsmap<br />
#!/usr/bin/rksh<br />
# sk_lsmap<br />
#<br />
#PATH=/usr/ios/cli:/usr/ios/utils:/home/padmin:<br />
for v in `ioscli lsdev -virtual | grep vhost | awk '{print $1}'`<br />
do<br />
ioscli lsmap -vadapter $v -fmt : | awk -F: '{print $1" "$2" "$4" "$7" "$10}'<br />
done<br />
$ sk_lsmap<br />
vhost0 U9117.MMA.1023C9F-V1-C11 vt_ec04 client2lv<br />
vhost1 U9117.MMA.1023C9F-V1-C12 vt_ec03 nimclientlv<br />
vhost2 U9117.MMA.1023C9F-V1-C15 vt_ec05 client3lv<br />
vhost3 U9117.MMA.1023C9F-V1-C32 vt_ec07 hdisk3<br />
vhost4 U9117.MMA.1023C9F-V1-C20 vt_bmark26 hdisk6<br />
vhost5 U9117.MMA.1023C9F-V1-C13<br />
vhost6 U9117.MMA.1023C9F-V1-C14 vtscsi0 hdisk6<br />
vhost7 U9117.MMA.1023C9F-V1-C16<br />
vhost8 U9117.MMA.1023C9F-V1-C21<br />
vhost9 U9117.MMA.1023C9F-V1-C39 vt_bmark29 hdisk7<br />
28 20-Aug-08<br />
Initial Configuration (continued)<br />
► The client LPAR was activated and booted to SMS; Remote IPL was set up to boot on the virtual Ethernet adapter from the NIM master<br />
► Target disk selection – Option 77, alternative disk attributes…<br />
>>> 1 hdisk0 00c23c9f291cc438<br />
(The PVID from the VIO server shows up in the client netboot)<br />
► Option 77 again…<br />
>>> 1 hdisk0 U9117.MMA.1023C9F-V9-C8-T1-L8100000000000<br />
(There is no MPIO in the network boot image, so the disk only shows up on the first vscsi client adapter, ID 8)<br />
29 20-Aug-08<br />
Initial Configuration (continued)<br />
► NIM install completes. One command is included in the NIM script resource, running at the end of the install and before boot:<br />
chdev -l hdisk0 -a hcheck_interval=300 -P<br />
► This sets MPIO to test failed and non-active paths every 5 minutes, bringing them online if available<br />
► The newly installed and booted LPAR has two vscsi client adapters<br />
# lsdev -Cc adapter -F "name physloc" | grep vscsi<br />
vscsi0 U9117.MMA.1023C9F-V9-C8-T1<br />
vscsi1 U9117.MMA.1023C9F-V9-C9-T1<br />
► Two MPIO paths to hdisk0<br />
# lspath -l hdisk0<br />
Enabled hdisk0 vscsi0<br />
Enabled hdisk0 vscsi1<br />
► The PVID we expected does come through from the VIO server to the client LPAR<br />
# lspv<br />
hdisk0 00c23c9f291cc438 rootvg active<br />
► The table is now set for Live Partition Mobility<br />
30 20-Aug-08<br />
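Before attempting a migration, it is worth scripting the check that both vscsi paths are actually Enabled, since a failed path removes the redundancy the dual-VIO setup is meant to provide. A sketch, parsing the captured lspath output above (run the real `lspath -l hdisk0` in the client LPAR):

```shell
# Check that the client sees two Enabled vscsi paths to hdisk0.
lspath_out="Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi1"

enabled=$(echo "$lspath_out" | grep -c '^Enabled')
if [ "$enabled" -ge 2 ]; then
    echo "redundant paths OK ($enabled Enabled)"
else
    echo "WARNING: only $enabled Enabled path(s) to hdisk0"
fi
```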
Starting Mobility<br />
31 20-Aug-08<br />
Starting Mobility<br />
32 20-Aug-08<br />
Starting Mobility<br />
33 20-Aug-08<br />
If you specify a new profile name, your initial profile will be saved. But do NOT assume it is bootable or usable on return to the “source” server; VIO mappings will change.<br />
Starting Mobility<br />
34 20-Aug-08<br />
There might be more than one destination server to choose from<br />
Starting Mobility<br />
… then …<br />
35 20-Aug-08<br />
Starting Mobility<br />
36 20-Aug-08<br />
I selected the pair that were both SEA Failover primary, but any pair should do here<br />
Starting Mobility<br />
37 20-Aug-08<br />
Verify that the required (possibly tagged) VLAN is available<br />
Starting Mobility<br />
38 20-Aug-08<br />
These are my client LPAR vscsi adapter IDs, matched to the destination VIO LPARs<br />
Starting Mobility<br />
39 20-Aug-08<br />
You may select from different shared pools on the destination server<br />
Starting Mobility<br />
40 20-Aug-08<br />
Left to default<br />
Starting Mobility<br />
41 20-Aug-08<br />
The moment we’ve waited for…<br />
42 20-Aug-08<br />
As migration starts, in the “All Partitions” view we see the LPAR residing on both POWER6 servers<br />
43 20-Aug-08<br />
Further along in the migration, we see the LPAR in “Migrating-Running” status<br />
Migration Complete<br />
44 20-Aug-08<br />
The migrated LPAR resides solely on the new server<br />
Migration Complete<br />
► Migration preserved my old profile, and created a new one<br />
45 20-Aug-08<br />
(Same client adapter IDs, but different VIO server adapter IDs)<br />
Device Mapping after Migration<br />
► Migration used new VIO server adapter IDs, even when the same adapter IDs were available<br />
$ hostname<br />
sq17<br />
$ sk_lsmap<br />
vhost0 U9117.MMA.109A4AF-V1-C15<br />
vhost1 U9117.MMA.109A4AF-V1-C16<br />
vhost2 U9117.MMA.109A4AF-V1-C39<br />
vhost3 U9117.MMA.109A4AF-V1-C14 vtscsi0 hdisk7<br />
(Migration did not use ID 39 in the destination VIO LPARs)<br />
46 20-Aug-08<br />
► When you migrate back, do not expect to be back on your original VIO server adapter IDs. Your old client LPAR profile is historical, but will likely not be usable without some reconfiguration. It is best to create a new profile on the way back over.<br />
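Since slot IDs move around, scripts should locate a mapping by its backing device rather than by a remembered adapter ID. A sketch, parsing captured sk_lsmap-style output (field layout follows the sk_lsmap script earlier in the deck; on a live VIO server you would feed in its real output):

```shell
# After a move, find which vhost adapter now maps a given backing LUN,
# instead of assuming the old slot ID.
lsmap_out="vhost0 U9117.MMA.109A4AF-V1-C15
vhost3 U9117.MMA.109A4AF-V1-C14 vtscsi0 hdisk7"

vhost=$(echo "$lsmap_out" | awk '$4 == "hdisk7" {print $1}')
slot=$(echo "$lsmap_out" | awk '$4 == "hdisk7" {print $2}')
echo "hdisk7 is mapped by $vhost in slot $slot"
```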
Device Mapping after Migration<br />
► Back on the “source” server, the device mappings for your client LPAR have been completely removed from the VIO LPARs<br />
► $ hostname<br />
ec01<br />
$ sk_lsmap<br />
vhost0 U9117.MMA.1023C9F-V1-C11 vt_ec04 client2lv<br />
vhost1 U9117.MMA.1023C9F-V1-C12 vt_ec03 nimclientlv<br />
vhost2 U9117.MMA.1023C9F-V1-C15 vt_ec05 client3lv<br />
vhost3 U9117.MMA.1023C9F-V1-C32 vt_ec07 hdisk3<br />
vhost4 U9117.MMA.1023C9F-V1-C20 vt_bmark26 hdisk6<br />
vhost5 U9117.MMA.1023C9F-V1-C13<br />
vhost6 U9117.MMA.1023C9F-V1-C14 vtscsi0 hdisk6<br />
vhost7 U9117.MMA.1023C9F-V1-C16<br />
vhost8 U9117.MMA.1023C9F-V1-C21<br />
(There is no longer a vhost adapter with ID 39 – compare with the sk_lsmap listing on page 28)<br />
47 20-Aug-08<br />
Interpartition Logical LAN, inside one Power6<br />
► Migration can preserve an internal, LPAR-to-LPAR network<br />
► The LPAR to migrate has a virtual Ethernet adapter<br />
► Added this adapter to the profile<br />
► DLPAR the same adapter into the running LPAR<br />
► We added Ethernet adapter ID 5, on a different VLAN – 5 (the new adapter is on VLAN 5)<br />
48 20-Aug-08<br />
Interpartition Logical LAN, inside one Power6<br />
► cfgmgr in the running AIX LPAR; the DLPAR’d adapter is in<br />
# lsdev -Cc adapter -F "name physloc" | grep ent[0-9]<br />
ent0 U9117.MMA.109A4AF-V9-C2-T1<br />
ent1 U9117.MMA.109A4AF-V9-C5-T1<br />
► smitty chinet, configure the en1 interface<br />
# netstat -in<br />
Name Mtu Network Address Ipkts Ierrs Opkts Oerrs Coll<br />
en0 1500 link#2 4e.c4.31.a8.cf.2 540066 0 46426 0 0<br />
en0 1500 9.19.51 9.19.51.229 540066 0 46426 0 0<br />
en1 1500 link#3 4e.c4.31.a8.cf.5 0 0 3 0 0<br />
en1 1500 192.168.16 192.168.16.1 0 0 3 0 0<br />
lo0 16896 link#1 301 0 318 0 0<br />
lo0 16896 127 127.0.0.1 301 0 318 0 0<br />
lo0 16896 ::1 301 0 318 0 0<br />
► Perform the migration again, back to the “source” server mercury<br />
49 20-Aug-08<br />
Interpartition Logical LAN, inside one Power6<br />
► We do get an “error” reported: there is no support in the source VIO servers for VLAN 5<br />
► The VIO LPARs on the source and destination servers must have a virtual adapter on VLAN 5, and this adapter must be “joined” into the SEA<br />
50 20-Aug-08<br />
DLPAR new virtual Ethernet adapter into VIO LPARs<br />
► Do the DLPAR of the adapter into both source VIO LPARs, and both destination VIO LPARs<br />
(The new VLAN ID <strong>MUST</strong> trunk to join the SEA, and the priority <strong>MUST</strong> match the existing trunked SEA virtual adapter)<br />
51 20-Aug-08<br />
Adapter DLPAR’d into VIOs, but not joined to SEA<br />
► Slightly different error – mkvdev the new virtual adapter onto the SEA<br />
52 20-Aug-08<br />
Which adapter to join?<br />
► Do this in each of the four VIO LPARs – adapter numbers might not be the same<br />
► $ lsdev -type adapter -field name physloc | grep ent[0-9]<br />
ent0 U789D.001.DQDXYCW-P1-C10-T1<br />
ent1 U9117.MMA.109A4AF-V2-C11-T1<br />
ent2 U9117.MMA.109A4AF-V2-C12-T1<br />
ent3 U9117.MMA.109A4AF-V2-C13-T1<br />
ent4<br />
$ cfgdev<br />
$ lsdev -type adapter -field name physloc | grep ent[0-9]<br />
ent0 U789D.001.DQDXYCW-P1-C10-T1<br />
ent1 U9117.MMA.109A4AF-V2-C11-T1<br />
ent2 U9117.MMA.109A4AF-V2-C12-T1<br />
ent3 U9117.MMA.109A4AF-V2-C13-T1<br />
ent4<br />
ent5 U9117.MMA.109A4AF-V2-C18-T1 (the newly DLPAR’d-in virtual adapter)<br />
$ chdev -dev ent4 -attr virt_adapters=ent1,ent5 (ent1 and ent5 are both trunked virtual adapters)<br />
ent4 changed<br />
53 20-Aug-08<br />
Which adapter to join? Possible errors on chdev<br />
► Forgot to check the “external access” checkbox on the new virtual adapter:<br />
chgsea: Ioctl NDD_SEA_MODIFY returned error 64 for device ent4<br />
► Trunk priority on the new virtual adapter did not match the existing trunked virtual adapter:<br />
chgsea: Ioctl NDD_SEA_MODIFY returned error 22 for device ent4<br />
54 20-Aug-08<br />
Now in the Validation before Migration…<br />
► Both VLAN IDs show up in both destination VIO servers<br />
55 20-Aug-08<br />
Ready to Finish…<br />
56 20-Aug-08<br />
Another potential error…<br />
► Error configuring virtual adapter in slot 23 – we had no vhost in slot 23<br />
► The virtual optical device vtopt0 (cd0) cannot be attached to a vhost adapter of the migrating LPAR – not obvious from the message<br />
► rmdev -l cd0 -d (in the client LPAR)<br />
► rmdev -dev vtopt0 (in the VIO server)<br />
► Repeat the validation<br />
57 20-Aug-08<br />
Original SEA redundancy – Can Client Migrate? Yes<br />
(Diagram: client LPAR with EtherChannel NIB over two virtual adapters; source VIO1 ec01 bridges VLAN 45 and source VIO2 ec02 bridges VLAN 55, each SEA with its own physical adapter to the switch)<br />
► Before SEA Failover, we used EtherChannel Network Interface Backup in the client<br />
► SEA in each VIO server, with external-access virtual adapters, each on a different VLAN<br />
► The client LPAR gets a virtual adapter on each VLAN, with EtherChannel NIB on top<br />
► Trunk priority on the SEA-bridged virtual adapters does not matter; they are on different internal VLANs<br />
► No control channel; this is SEA, but not SEA Failover<br />
58 20-Aug-08<br />
Original SEA redundancy – Can Client Migrate? Yes<br />
59 20-Aug-08<br />
► Both VLANs in the client LPAR (45, 55) were found<br />
– in the SEAs<br />
– in the VIO servers<br />
– on the destination POWER6<br />
► It is going to migrate<br />
Reference<br />
► Live Partition Mobility Redbook<br />
http://www.redbooks.ibm.com/redbooks/pdfs/sg247460.pdf<br />
60 20-Aug-08<br />
Trademarks<br />
The following are trademarks of the International Business Machines Corporation in the United States, other countries, or both.<br />
Not all common law marks used by <strong>IBM</strong> are listed on this page. Failure of a mark to appear does not mean that <strong>IBM</strong> does not use the mark nor does it mean that the product is not<br />
actively marketed or is not significant within its relevant market.<br />
Those trademarks followed by ® are registered trademarks of <strong>IBM</strong> in the United States; all others are trademarks or common law marks of <strong>IBM</strong> in the United States.<br />
For a complete list of <strong>IBM</strong> Trademarks, see www.ibm.com/legal/copytrade.shtml:<br />
*, AS/400®, e-business (logo)®, DB2, ESCON, eServer, FICON, <strong>IBM</strong>®, <strong>IBM</strong> (logo)®, iSeries®, MVS, OS/390®, pSeries®, RS/6000®, S/390, VM/ESA®, VSE/ESA,<br />
WebSphere®, xSeries®, z/OS®, zSeries®, z/VM®, System i, System i5, System p, System p5, System x, System z, System z9®, BladeCenter®<br />
The following are trademarks or registered trademarks of other companies.<br />
Adobe, the Adobe logo, PostScript, and the PostScript logo are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States, and/or other countries.<br />
Cell Broadband Engine is a trademark of Sony Computer Entertainment, Inc. in the United States, other countries, or both and is used under license therefrom.<br />
Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.<br />
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.<br />
Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel<br />
Corporation or its subsidiaries in the United States and other countries.<br />
UNIX is a registered trademark of The Open Group in the United States and other countries.<br />
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.<br />
ITIL is a registered trademark, and a registered community trademark of the Office of Government Commerce, and is registered in the U.S. Patent and Trademark Office.<br />
IT Infrastructure Library is a registered trademark of the Central Computer and Telecommunications Agency, which is now part of the Office of Government Commerce.<br />
* All other products may be trademarks or registered trademarks of their respective companies.<br />
Notes:<br />
Performance is in Internal Throughput Rate (ITR) ratio based on measurements and projections using standard <strong>IBM</strong> benchmarks in a controlled environment. The actual throughput that any user will<br />
experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed.<br />
Therefore, no assurance can be given that an individual user will achieve throughput improvements equivalent to the performance ratios stated here.<br />
<strong>IBM</strong> hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply.<br />
All customer examples cited or described in this presentation are presented as illustrations of the manner in which some customers have used <strong>IBM</strong> products and the results they may have achieved. Actual<br />
environmental costs and performance characteristics will vary depending on individual customer configurations and conditions.<br />
This publication was produced in the United States. <strong>IBM</strong> may not offer the products, services or features discussed in this document in other countries, and the information may be subject to change without<br />
notice. Consult your local <strong>IBM</strong> business contact for information on the product or services available in your area.<br />
All statements regarding <strong>IBM</strong>'s future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.<br />
Information about non-<strong>IBM</strong> products is obtained from the manufacturers of those products or their published announcements. <strong>IBM</strong> has not tested those products and cannot confirm the performance,<br />
compatibility, or any other claims related to non-<strong>IBM</strong> products. Questions on the capabilities of non-<strong>IBM</strong> products should be addressed to the suppliers of those products.<br />
Prices subject to change without notice. Contact your <strong>IBM</strong> representative or Business Partner for the most current pricing in your geography.<br />
© 2008 <strong>IBM</strong> Corporation