User Community for HCL Informix

Administration Made Easier with HCL Informix 12.10.xC10

12/19/2017

Updated: 8/30/18

Primary/Mirror Chunk Swapping
This feature lets you quickly migrate anything from a single HCL Informix chunk up to an entire instance from your current set of disks to a newer and (presumably) faster set of disks, with no downtime.


Say you have an HCL Informix instance with a large space that contains a lot of user data. I'll call it "userdbs1." This space consists of 20 chunks, all of which are located on a set of disk drives that you’d like to retire. You have a new, faster set of drives mounted on the machine, and you want to migrate the data in userdbs1 to those new drives. 
 
In versions prior to 12.10.xC10, the quickest way to achieve this migration involved some pre-planning and downtime: 

1. When creating your chunks, use symbolic links. For example, the path name given to HCL Informix (e.g. /dev/informix/chunk12) would be a link to the actual chunk on your old disk drive. 

2. Run "onspaces -m" to mirror all the chunks in the space, using symbolic links for the mirrors. For example, you would add /dev/informix/chunk57 as a mirror for /dev/informix/chunk12, where /dev/informix/chunk57 is a symbolic link to the actual chunk file on the new, faster disk drive. HCL Informix is very efficient at creating a mirror: in less than a minute you can have an identical copy of even a very large chunk. 

3. Shut down the instance and switch the symbolic links, so that /dev/informix/chunk12 now points to the mirror chunk on the fast disk drive, and /dev/informix/chunk57 now points to the original primary chunk on the slow drive.

4. Start up the instance and drop the mirrors from the space (onspaces -r). Now each chunk's primary and only copy is on the fast disk drive. 
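The steps above can be sketched as follows. This is a hedged illustration, not a script to run as-is: the space name (userdbs1), chunk paths, offsets, and underlying file locations are all assumptions for the example.

```shell
# 2. Mirror an existing chunk; the mirror path is a symlink to a file
#    on the new, faster disks (repeat for each chunk in the space).
onspaces -m userdbs1 -p /dev/informix/chunk12 -o 0 \
         -m /dev/informix/chunk57 0

# 3. Shut down the instance, then swap the symlinks so the primary
#    path now resolves to the copy on the fast disks.
onmode -ky
ln -sf /fastdisk/chunk_file /dev/informix/chunk12
ln -sf /slowdisk/chunk_file /dev/informix/chunk57

# 4. Restart and turn mirroring off for the space; each chunk's
#    primary and only copy is now on the fast disks.
oninit
onspaces -r userdbs1
```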

That symbolic-link-switching trick is no longer necessary in 12.10.xC10; even the symbolic links themselves are unnecessary. At any time you may now tell HCL Informix to swap a mirror chunk for a primary chunk, even with the instance on-line and users performing I/O on the chunk. 

There are two new sysadmin task() commands for this purpose. Note: for a single chunk the command is "swap_mirror" (singular), and for an entire space it is "swap_mirrors" (plural). 
 
A chunk cannot be swapped if either its primary or mirror is down. No damage will be done if you try—the operation will simply be disallowed. 
 
This feature will work on any space and any chunk, including the ROOT chunk. If the ROOT chunk is swapped, HCL Informix automatically updates ROOTPATH, MIRRORPATH, ROOTOFFSET, and MIRROROFFSET in the configuration file. 
 
The feature will work in a replicated environment as well. A new record (LG_CHKSWAP) is logged for each swap, which is rolled forward on secondaries and applied. 
 
So now, to migrate data to the new disk drives, you don't need to use any symbolic links or shut down the server. You simply do the following: 

1. In the space you want to migrate, add mirrors to all the chunks (onspaces -m), putting the mirrors on the new disks. The server will "recover" the mirrors by quickly copying all pages from the primaries, even while those pages are being modified. 

2. Execute "modify space swap_mirrors" to swap the primaries for the mirrors. A checkpoint is written for each swapped chunk, but otherwise the operation is instantaneous. 

3. Drop the mirrors (the original primaries)—another very quick on-line operation. 
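Put together, the xC10 migration can be sketched like this, again with the space name, chunk paths, and task() argument forms as illustrative assumptions:

```shell
# 1. Add a mirror on the new disks for each chunk in userdbs1; the
#    server recovers each mirror by copying all pages from the primary.
onspaces -m userdbs1 -p /dev/informix/chunk12 -o 0 \
         -m /fastdisk/chunk12_mirror 0

# 2. Swap primaries and mirrors for the whole space, on-line.
echo 'EXECUTE FUNCTION task("modify space swap_mirrors", "userdbs1");' \
    | dbaccess sysadmin

# 3. Drop the mirrors (the original chunks on the old disks).
onspaces -r userdbs1
```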

Index “Last Access” Time 

If you’ve ever wondered whether you’re actually making use of all your indexes over the course of a day or week, this new feature should help. oncheck -p[tT] will now indicate the last time each index fragment was used for a query: 

  Index jcind fragment partition rootdbs in DBspace rootdbs 
This access time is stored on the partition page on disk, so it will survive an instance restart. Armed with this information you can then decide whether a seldom-used index is worth its overhead and footprint. 
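For example, to inspect index usage on one table (the database and table names below are placeholders, and the exact report layout may vary by version):

```shell
# Print the TBLspace disk-utilization report for a table; in
# 12.10.xC10 each index fragment's section includes the time the
# fragment was last used by a query.
oncheck -pT stores_demo:customer
```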
 
This information is currently not available via an SMI query, but will be in a future version. 
 
John (JC) Lengyel
Lead Engineer at HCL

Connect with me on LinkedIn

Informix is a trademark of IBM Corporation in at least one jurisdiction and is used under license.

4 Comments
Fernando Nunes
12/19/2017 12:56:10 pm

I appreciate this feature. I believe it was an old request. But I have some concerns:

- The last access time is mainly useful for indexes that are rarely used. But if we see usage every day at roughly the same time, we don't know how many times the index was used... 1, 100, 1M? Some sort of counter could be interesting.

- For heavily used indexes, won't this cause a performance overhead, and eventually many writes of otherwise unchanged partition headers during checkpoints?

Regards

Reply
Gertjan Thomasse
12/29/2017 02:30:23 am

You did not have to shut down the instance.
One could enable mirroring as stated, then bring down the primary chunks (onspaces -s ... -D),
then relink the primary chunks to the desired location,
then bring up the primary chunks again (onspaces -s ... -O).

I have used this a number of times...

Reply
jc
1/2/2018 05:47:15 pm

Clever. I like it :) I'll update the article to use your steps as the preferred pre-xC10 chunk-migration method.

Reply
Javier Weinmeister
1/3/2018 04:51:08 pm

Kind of off-topic: are there real performance benefits to using mirroring? How much?
Thanks

Reply
