IIUG Insider (Issue #209) November 2017

Highlights: Informix 12.10 xC10 has been released

Welcome to the International Informix Users Group (IIUG) Insider! Designed for IIUG members and Informix user group leaders, this publication contains timely and relevant information for the IBM Informix community.



There is going to be a very interesting announcement in the near future. I still cannot talk about it, but I hope it will be the highlight of the next issue.

So far we see progress in many aspects. Informix 12.10 xC10 has been released with lots of cool stuff in it.

There are new blogs and the support corner is back with content regarding the last release.

We want and need much more. But I am optimistic. I believe we will see progress in all aspects regarding our beloved product.

Gary Ben-Israel
IIUG Insider Editor
IIUG Board of Directors


What’s new in Informix 12.10.xC10?

The latest fixpack of Informix was released last month and here is a list of all the new functionality and enhancements included in this version.


Backup to Cloud

  • Store and retrieve backups directly with Softlayer and Amazon S3 object store using Informix Primary Storage Manager (PSM).

Administration made Easier

Switch Primary and Mirror Chunks

  • The ability to toggle primary and mirror chunks, making the original primary the new mirror, and the original mirror the new primary.

Onstat Enhancements

  • onstat -g ses shows complete host and user names for each client session.

Performance enhancement for blocking oninit

  • Increases the speed of onmode -c block / unblock.

Identify Unused Indexes

  • Provides the user with the last time an index was used.

Easier Session Tracking

  • CSDK: New CLIENT_LABEL environment variable so client sessions can tag themselves for monitoring purposes (to be used with onstat -g ses).
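A minimal sketch of how this might be used, assuming a CSDK-based client; the label name is illustrative:

```shell
# Set the label in the client's environment before connecting
# ("nightly_etl" is just an example name).
export CLIENT_LABEL="nightly_etl"

# Run any CSDK-based client as usual; the session is now tagged.
# On the server, look for the label in the session output:
onstat -g ses
```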

TRUNCATE TABLE keep statistics

  • There are cases where a user may want to truncate a table but keep the low statistics (nrows). An extension to the TRUNCATE TABLE command now allows the user to achieve this.
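As a hedged sketch (the database and table names are illustrative; check the 12.10.xC10 SQL syntax documentation for the exact keyword form), the extension might be used like this:

```shell
dbaccess mydb <<+
TRUNCATE TABLE orders KEEP STATISTICS;
+
```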

Smart Trigger session survival:

  • This builds on the 12.10.xC9 feature, enhancing the ability of client sessions to receive events after reconnecting to the server.

  • Audit functionality support: the ability to obtain the user ID and session ID that caused the event.

More Compression Saves More Storage

Compression support for BLOBspace Blobs

  • Partition blobs in a compressed table will now be compressed during insert and those inserts will be replicated by ER.

  • Partition blobs will now be compressed by the auto-compress operation for a table that is created compressed.

  • BLOBspace blobs are now compressed, and will replicate with ER.

  • When auto-compress is triggered, the result will be that all rows in the table are compressed.


TimeSeries compression on timeseries sub-type containing string fields

  • Adds the ability to compress TimeSeries data in a row that contains strings, improving compression ratio for those rows.

Support for Geodetic and projection systems other than WGS 84

  • Useful for tracking parcels in a warehouse that may use an x,y grid system, or for trucks that feed their GPS coordinates into a state plane system (different for each state).

Functionality Changes

Deprecated Features no longer available in xC10:

  • DataBlade Developers Kit (DBDK)

  • onsnmp subagent

  • Blade Manager

Separated from server/client installs in xC10:

Now available at https://www.ibm.com/analytics/us/en/technology/informix

RFEs resolved in Informix 12.10 xC10


Description: onmode_mon 1 second communication interval should be changed to speed up ‘onmode -c block/unblock’

Use Case: The speed of communication between the server processes and the utilities onmode and onstat (in the case of onstat only when database auditing is enabled) is limited by a 1 second poll interval. This limits the speed at which onstat commands can be run in succession when auditing is switched on, breaking many monitoring scripts, and also limits the speed of ‘onmode -c block/unblock’.
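For context, the block/unblock pair is commonly scripted around storage-level snapshots, so its speed matters. A typical pattern looks like this (the snapshot step is a placeholder for your storage tooling):

```shell
onmode -c block      # quiesce write activity; a checkpoint is taken
# ... take the storage-level snapshot of the chunks here ...
onmode -c unblock    # resume normal write activity
```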


Description: A need exists to clean up unused indexes so they can be dropped.

Use case: We suspect that there are dozens of indexes in our environments that are never accessed. How can we identify the last time an index was used? In the mainframe world this information is easily available, and I have been tasked to find similar information about the Informix indexes.


Description: Enable switching of primary/mirror chunks with two new sysadmin task() commands

execute function task("modify chunk swap_mirror", <chunk_num>);

execute function task("modify space swap_mirrors", "<space_name>");

Use case: Migrating Informix to new storage conveniently and quickly

Conference corner

IIUG WORLD 2018 – Save the Date

Don’t forget to save the date for the premier Informix Event.

This event will be held October 28 – November 1, 2018 at the Renaissance Arlington Capital View Hotel, Washington D.C.

This is a beautiful hotel and convenient to Reagan National Airport. It is also convenient to many historical sites, museums and other local attractions Washington D.C. is noted for. To learn more about this venue check out www.marriott.com/waspy.

Stay tuned for more information as it becomes available.

Hope to see you there.

IIUG 2018 Planning Committee

RFE corner

Just in case you are not aware, some time ago IBM created a public website to collect the requests for new features directly from users. The RFE (Requests For Enhancements) website is included in developerWorks. You can access it here.

Once you have logged in with your usual IBM ID, choose “Information Management” in the Brand dropdown box and “Informix Servers” or “Client Products” in the Products dropdown box.

The interesting thing is that any request, including yours if you place one, is submitted to be voted on. This means the RFEs that receive more votes have a greater chance of being considered by the architecture and development teams. In other words, this IS your opportunity to provide enhancement ideas even if you are not the biggest IBM customer on planet earth.

Some RFEs will be of great interest, others will not seem useful to you. This is why your opinion is important. Do not hesitate to vote and place comments!

The idea of the RFE corner is to provide a digest of new Informix RFEs and make those RFEs more visible and accessible to the community, so that you can vote for them in a faster and easier way. Through your active participation on this website, IBM will have solid and useful input from the customer base for introducing new functionality to the Informix product.

Also on the IBM website, a new feature has been released: MyNotifications. You will want to register on this webpage in order to receive the notifications of your choice (new product defects, new patch releases, new versions, etc.) at the frequency of your choice (daily, weekly). I have registered and will definitely remain registered, due to the value of the information delivered.

Check it out here.

New RFEs for November 2017

We had some activity in November, with some interesting new RFEs.

Add ability to grant ‘connect’ to database to a role instead of just an individual

I would like to see the ability to grant ‘connect’ on the database to a role as you can in Oracle and DB2. It seems counterproductive to ease administration by allowing object grants to a role and then forcing individual grants to connect to the database for all the users under that role.

Request to add the possibility to grant access to a role Votes: 3

Support of CDC against Informix HDR and RSS instances

My customer wants to use CDC on RSS instances.

They can connect CDC to an RSS instance and see the contents of a table, but it is impossible to see changes to the table during the connection.

Request to allow use of the CDC API against an RSS server Votes: 2

Implement callback functions in J/Foundation for transaction events

The J/Foundation documentation was checked as well, and it seems that, unlike C UDRs, there is no API to register a local callback function when a transaction event (such as a rollback) happens.

Implementing this callback function registration in J/Foundation will provide transaction control to opaque type implementations done with J/Foundation.

Request to implement callback functions in J/foundation for transaction events Votes: 1

Methods to get Database and Table names in J/Foundation SQLDATA Interface

In J/Foundation, for opaque type implementations, we need methods in the SQLData interface implementation to get the current database and the table where the opaque type column resides.

Request to implement methods to get database and table names in J/Foundation Votes: 1

Top 14 RFEs

Abstract Status Votes Progr.
In-Place Alter for varchar, lvarchar and boolean Under Consideration 59 0
Backup from RSS or HDR Secondaries using ontape, onunload, onbar, dbexport Under Consideration 58 +1
SQL interface to obtain the temporary space usage (tables, hash, sorts…) Submitted 50 0
Obtain the query plan of a running query Under Consideration 48 0
Request to track and save a time stamp for last time an index was used. Nee… Submitted 40 0
Backup Individual database, not entire instance Submitted 34 0
ALTER owner of database objects after creation Submitted 29 0
Implement CREATE OR REPLACE option for stored procedures Under Consideration 27 0
Informix should be able to change owner on tables and any other objects Submitted 21 +2
Ability to re-create views and procedures without dependent objects being dropped (34762) Under Consideration 20 +2
Need an onmode option to terminate orphaned, tightly coupled global transac… Under Consideration 19 +1
Allow triggers install/updates without taking an outage for the box (57792) Under Consideration 18 +1
Allow “group commit” as other RDBMS (45166) Under Consideration 17 +1
Allow column ALIAS in HAVING clause (46670) Submitted 16 +1

Hot RFEs

This new list gives visibility to hot RFEs that are not yet in the Top 14 nor among the new RFEs.

Abstract Status Votes
Allow repack of TBLSpace extents Submitted 7 (+5)
Table level restore Submitted 7 (+1)
onmode_mon 1 second communication interval should be changed to speed up ‘onmode -c block/unblock’ Planned for Future Release 5 (+1)
Show session_id – port relationship Submitted 5 (+1)

Do not forget to vote for one or several of those RFEs if they fit your requirements.

You can access each RFE by clicking on the above links. At the bottom of each RFE page you will find a hyperlink to vote for it. You will see the Request stats, including number of votes for this request, on the right side of the request page. The more votes, the greater the chance an enhancement will be addressed by the Development Team, taking into consideration the general interest.

Take some time to examine the full list and vote for the enhancement you would like to see implemented.

Eric Vercelletto

Informix corner

Informix Client Software Development Kit: CSDK_4.10.UC10_SOL_x86_32_Fixpack

Informix CSDK 4.10.UC10 for Solaris running on x86-based systems

https://www-945.ibm.com/support/fixcentral/swg/selectFixes?product=ibm%2FInformation+Management%2FInformix+Client+Software+Development+Kit&fixids=CSDK_4.10.UC10_SOL_x86_32_Fixpack&source=myna&myns=swgimgmt&mynp=OCSSVT2J&mync=R&cm_sp=swgimgmt-_-OCSSVT2J-_-R&function=fixId&parent=ibm/Information Management

Fix list for Informix Client SDK 4.10.xC10

IBM Informix Client SDK provides periodic fixes for release 4.10. The following list is the most recent report of fixes for customer-reported defects in version 4.10.xC10.


Support corner

Administration Made Easier with Informix 12.10.xC10

Primary/Mirror Chunk Swapping

This feature allows you to quickly migrate anything from one INFORMIX chunk up to an entire instance from your current set of disks to a newer and (presumably) faster set of disks, with no downtime.
Say you have an Informix instance with a large space that contains a lot of user data. I’ll call it “userdbs1.” This space consists of 20 chunks, all of which are located on a set of disk drives that you’d like to retire. You have a new, faster set of drives mounted on the machine, and you want to migrate the data in userdbs1 to those new drives.
In versions prior to 12.10.xC10 the quickest way to achieve this migration involves some pre-planning and downtime:

  1. When creating your chunks, use symbolic links. For example, the path name given to INFORMIX (e.g. /dev/informix/chunk12) would be a link to the actual chunk on your old disk drive.
  2. Run "onspaces -m" to mirror all the chunks in the space, using symbolic links for the mirrors. For example, you would add /dev/informix/chunk57 as a mirror for
    /dev/informix/chunk12. /dev/informix/chunk57 would be a symbolic link to the actual chunk file on the new, faster disk drive. INFORMIX is really efficient at creating a mirror. In less than a minute you can have an identical copy of even a very large chunk.
  3. Shut down the instance and switch the symbolic links, so that
    /dev/informix/chunk12 now points to the mirror chunk on the fast disk drive, and
    /dev/informix/chunk57 now points to the original primary chunk on the slow drive.
  4. Start up the instance and drop the mirrors from the space (onspaces -r). Now each chunk’s primary and only copy is on the fast disk drive.

That symbolic-link-switching trick is no longer necessary in 12.10.xC10. Even the symbolic links are unnecessary. At any time you may now tell INFORMIX to swap a mirror chunk for a primary chunk, even with the instance on-line, and even with users performing I/O on the chunk.

There are two new sysadmin task() commands for this purpose:

execute function task("modify chunk swap_mirror",<chunk number>);

execute function task("modify space swap_mirrors", "<space name>");

Note: for a single chunk the command is "swap_mirror" (singular) and for a space it is "swap_mirrors" (plural).

A chunk cannot be swapped if either its primary or mirror is down. No damage will be done if you try—the operation will simply be disallowed.

This feature will work on any space and any chunk, including the ROOT chunk. If the ROOT chunk is swapped, INFORMIX automatically updates ROOTPATH, MIRRORPATH, ROOTOFFSET, and MIRROROFFSET in the config file.

The feature will work in a replicated environment as well. A new record (LG_CHKSWAP) is logged for each swap, which is rolled forward on secondaries and applied.

So now, to migrate data to the new disk drives, you don’t need to use any symbolic links or shut down the server. You simply do the following:

  1. In the space you want to migrate, add mirrors to all the chunks (onspaces -m), putting the mirrors on the new disks. The server will “recover” the mirrors by quickly copying all pages from the primaries, even while those pages are being modified.
  2. Execute “modify space swap_mirrors” to swap the primaries for the mirrors. A checkpoint will be written for each swapped chunk but otherwise the operation is instantaneous.
  3. Drop the mirrors (the original primaries)—another very quick on-line operation.
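Put together, the new procedure might look like the following sketch, assuming a space named userdbs1 and illustrative chunk paths (repeat the onspaces -m step for each chunk in the space):

```shell
# 1. Mirror a chunk of userdbs1 onto the new disks
onspaces -m userdbs1 -p /old_disks/userdbs1_chk1 -o 0 -m /new_disks/userdbs1_chk1 0

# 2. Swap every primary/mirror pair in the space at once
dbaccess sysadmin <<+
execute function task("modify space swap_mirrors", "userdbs1");
+

# 3. Drop the mirrors (now the chunks on the old disks)
onspaces -r userdbs1 -y
```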

Index “Last Access” Time

If you’ve ever wondered whether you’re actually making use of all your indexes over the course of a day or week, this new feature should help. oncheck -p[tT] will now indicate the last time each index fragment was used for a query:

Index jcind fragment partition rootdbs in DBspace rootdbs

Physical Address               1:50184
Creation date                  11/10/2017 07:01:26
TBLspace Flags                 801        Page Locking
                                          TBLspace use 4 bit bit-maps
Maximum row size               5
Number of special columns      0
Number of keys                 1
Number of extents              1
Current serial value           1
Current SERIAL8 value          1
Current BIGSERIAL value        1
Current REFID value            1
Pagesize (k)                   2
First extent size              17
Next extent size               17
Number of pages allocated      17
Number of pages used           3
Number of data pages           0
Number of rows                 0
Partition partnum              1049065
Partition lockid               1049064
Last Lookup/Scan               Fri Nov 10 07:07:58 2017

Logical Page     Physical Page        Size Physical Pages
           0            1:3121          17             17

This access time is stored on the partition page on disk, so it will survive an instance restart. Armed with this information you can then decide whether a seldom-used index is worth its overhead and footprint.

This information is currently not available via an SMI query, but will be in a future version.

Pradeep Natarajan

Informix Backup in Cloud

Direct Backups to Amazon S3 Using PSM

With the release of Informix Dynamic Server 12.10.xC10 we are adding a capability in our Primary Storage Manager to store/retrieve backups directly to the ecosystem of selected cloud providers, namely Amazon S3 and IBM Softlayer Object Storage.

Amazon S3

Amazon S3 is a very powerful offering for storing vast amounts of data in a cost-effective manner. Informix Dynamic Server has good integration with this service, using our ontape utility for offsite copies.

In Informix Dynamic Server 12.10 we have expanded our capability to take direct backups to Amazon S3 using our Primary Storage Manager and On-Bar. This document walks through the configuration step by step.

In order to use S3 you will, of course, need an account with Amazon Web Services. Some of the steps to get this working will be done in the AWS console, some of them can be accomplished directly from a console with Informix.

In this tutorial we are granting our user full, unrestricted access to S3, which in all likelihood is not what a secure, production-level system will need. You will need to design your own permissions and access according to your company’s policies and needs. This is just a simple example of how to get a user ready to be used with the Primary Storage Manager.
Groups and Security in AWS
The first step is to log into the AWS console. The objective is to create a user that will be used to transfer data in and out of S3, starting with a group of users that has enough permissions to access S3.
Create a Group to access S3
Enter the Groups tab and then click “Create New Group”

Name the Group and then click “Next Step”

The next step will ask you to attach a policy to the group. In this case we will select “AmazonS3FullAccess”. As its name implies, this policy will allow any member of this group to do everything in all containers in S3.
Again, this is probably not what you want if you use S3 for other purposes or for multiple instances. You can change this by going in to the “Policies” tab before creating the group and creating a customized policy that suits your needs.

The next screen will ask you to review your choices; if everything looks correct click “Create Group”.

Create a User to Access S3
The next step is to create a user that belongs to this group, so click on the “Users” tab.

Then click “Create New Users”, provide a user name for the new user, in this case “ifmx_s3_user”, and click “Create”

The screen will display two strings: one is the Access Key and the other is the Secret Access Key. Store these strings in a safe location and do not let anybody who is not authorized to access your data have them. These strings are the equivalent of a username and password that can be used to store and retrieve data from S3 programmatically using APIs. Anybody with access to these strings can retrieve/steal your information.
Additionally, you can download these credentials into a text file that you can store in a safe location. The file is NOT encrypted, therefore you must be very careful where you place it.
This is the only place where you will be able to download or copy these credentials. If you do not save them, or if you lose them, you will need to create a new user, as the former one will be unusable.
Then click “Close”.

You will be sent back to the user screen, where you will see the user you just created:

Assign your user to the group
Click the check box next to the user, then click “User Actions” and select “Add User to Groups”

Add the user to the group we first created by clicking the check box, then click “Add to Groups”.

After this we will have a user named “ifmx_s3_user” that belongs to the group “ifx_s3_group” that has unrestricted access to all the S3 resources.
The next step is to create a place to put the Informix backups. This place is called a “Bucket” in S3 and is the equivalent of a directory in a regular file system.
The bucket name will be needed later when creating a PSM device using the parameter “--container”.
The next step is to go to “Services”, then “Amazon S3”, and then click “Create Bucket”

Give the bucket a name and then select the appropriate zone, in this case US Standard. Then click “Create”
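If you prefer a terminal, the same bucket can be created with the AWS CLI (the bucket name matches the one used later in this article; the region is an assumption):

```shell
# Create the backup bucket; us-east-1 corresponds to US Standard
aws s3 mb s3://ifmx-s3-dev --region us-east-1
```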

Create a Primary Storage Manager (PSM) device
At this point the configuration in S3 is done, you just need to configure the Primary Storage Manager to use the Bucket and Credentials you just created.
In order to use the PSM with the S3 Bucket you just created you need to create a device in PSM of type CLOUD using S3 as provider.
In this command line:

  1. AMAZON1 is the arbitrary name you will give to this device. With FILE type devices this is actually the full path of the directory that will store the data, but in the case of CLOUD type devices it is just any name that will help you organize your devices. This name plus the pool (DBSPOOL in this case) must be unique.
  2. ‘-t CLOUD’ is the device type that tells PSM to store/retrieve the data to/from a CLOUD infrastructure.
  3. ‘--provider S3’ is the target cloud provider (Amazon S3 in this case). At this moment only S3 and SWIFT (OpenStack SWIFT) are supported.
  4. ‘--url https://ifmx-s3-dev.s3.amazonaws.com’ is the URL where your backups will go. In the specific case of S3, the bucket name is part of the URL provided.
  5. ‘--user AKIAIT1111155555X4PA’, for S3 this is the Access Key provided to you when your user was created.
  6. ‘--password A2nB21111155555nvTI0X9ZxGzUJNJivoBQY9MrD’, for S3 this is the Secret Key provided to you when the user was created.
  7. ‘--container ifmx-s3-dev’ is the Amazon bucket.
  8. ‘--max_part_size 25600’ will fragment your objects into 25 MB pieces; in the case of S3 we suggest a size between 25 and 100 MB.
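The command line itself is not reproduced here, so the following is a reconstruction from the parameter list above, using the sample values from the text; verify the exact onpsm device-creation syntax against your version’s documentation before relying on it:

```shell
# Reconstructed sketch: add a CLOUD device named AMAZON1 to the DBSPOOL pool
onpsm -D add AMAZON1 -g DBSPOOL -t CLOUD \
      --provider S3 \
      --url https://ifmx-s3-dev.s3.amazonaws.com \
      --user AKIAIT1111155555X4PA \
      --password A2nB21111155555nvTI0X9ZxGzUJNJivoBQY9MrD \
      --container ifmx-s3-dev \
      --max_part_size 25600
```

Listing the configured devices afterwards (for example with onpsm -D list) should show the new CLOUD device in the DBSPOOL pool.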

Check the device was created:

Take a Level Zero Backup
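With the device in place, a standard whole-instance level-0 backup through On-Bar is enough; PSM routes the data to the cloud device:

```shell
# Level-0 (full) backup of the instance via On-Bar
onbar -b -L 0
```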

Check Backup Data is in your Bucket
The backup data is organized by INFORMIXSERVER; for example, in the next picture there is backup data from 6 different Informix instances: cloudbkp, gaccurl_lin32, gaccurl_shm, gaccurl_tcp, ol_informix1210_2 and win32curl:

Pradeep Natarajan

Works For Me

In this section I will write about things that help me in my day-to-day work. Most DBAs probably have their own ways to perform these tasks, which may be different from the way I do them. So, if you find an error or can think of a better way, please let me know. If not, feel free to use these tips as is or modify them to fit your needs.
This time this corner is offered by Eric Vercelletto.

Thank you Eric.

When new applications are released, it is not rare for session-locking phenomena to show up. Those moments are mostly annoying, and if no DBA is available at the time, tracking down the troublemakers is neither easy nor fast.

It is even harder to identify the culprit when this kind of situation occurs by 2:30 AM, while sessions with massive and critical data updates are launched, then crash after a few minutes.

The science of Informix locks is a hybrid science: a smooth and trouble-free application life is one of the benefits of collaboration between DBAs and developers who share common knowledge in this science.

Nevertheless, having a discussion about the locking policy of this or that application is barely achievable at 2:00 AM when the invoice creation batch is hanging for an unidentified reason. This is why I would like to share a couple of tools that will help you prevent (or at least limit) such cases.

  1. The first query will help you identify sessions that have been idle for a certain amount of time. You may want, for instance, to kill any session that has been idle for more than 4 hours. Here is how to identify the sessions idle for 4 hours or more. It provides the session id, user name, session start time, session cpu time (how many seconds it worked), and idle time.
    The returned sessions may or may not have a transaction pending.

dbaccess sysmaster <<+
select s.sid as session_id, s.username, dbinfo("utc_to_datetime", connected) as session_started_at,
cpu_time, dbinfo("utc_current") - last_run_time as idle_time
from systcblst t, sysrstcb r, syssessions s
where t.tid = r.tid
and s.sid = r.sid
and current - dbinfo("utc_to_datetime", last_run_time) > "0 04:00:00"
order by 4 desc;
+

  2. The second query will help you identify the sessions that have had a transaction pending for more than x time (here 15 minutes), whether they block other users or not. It also shows on which tables the session has set locks.

select r.sid, r.username, dbsname, tabname, dbinfo("utc_current") - last_run_time as idle_for_seconds, count(*) as locks_held_on_table
from systcblst t, sysrstcb r, sysmaster:syslocks l, sysmaster:syssessions s
where t.tid = r.tid and r.sid = l.owner and l.type = 'X' and s.sid = l.owner
and current - dbinfo("utc_to_datetime", last_run_time) > "0 00:15:00"
group by 1,2,3,4,5
order by 5 desc;

  3. The third query is similar to the second one, but it only keeps sessions in transaction that are BLOCKING other sessions.

select r.sid, r.username, dbsname, tabname, dbinfo("utc_current") - last_run_time as idle_for_seconds, count(*) as locks_held_on_table
from systcblst t, sysrstcb r, sysmaster:syslocks l, sysmaster:syssessions s
where t.tid = r.tid and r.sid = l.owner and l.type = 'X' and s.sid = l.owner
and current - dbinfo("utc_to_datetime", last_run_time) > "0 00:15:00"
group by 1,2,3,4,5
having sum(waiter) > 0
order by 5 desc;

Remember that the use of SET LOCK MODE can become an enemy of smooth operation: if you do not specify a number of seconds, sessions attempting to access locked data may hang more or less forever. You can track inappropriate use of SET LOCK MODE by running onstat -g sql and looking in the ‘Lock Mode’ column.
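A bounded wait avoids the hang described above; for example (the database name and timeout are illustrative):

```shell
dbaccess mydb <<+
SET LOCK MODE TO WAIT 10;  -- wait at most 10 seconds for a lock, then fail
+
```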

Feel free to modify, adapt or enhance these queries according to your needs.

Have fun.

Eric Vercelletto

Informix resources

IBM Informix home page
www.informix.com or directly at: http://www-01.ibm.com/software/data/informix/

Informix Blogs and Wikis

Blogs and Wikis that have been updated during the last month

More Blogs and Wikis

Social media

Linkedin: https://www.linkedin.com/groups/25049
Twitter : https://twitter.com/IBM_Informix
Facebook : https://www.facebook.com/IBM.Informix
YouTube : https://ibm.biz/BdH2nb
Informix IoT Channel : https://ibm.biz/BdH2nm

Forums, Groups, Videos, and Magazines

  • The IIUG forums at /forums/technical.php
  • Informix Marketing channel on YouTube http://www.youtube.com/user/informixmarketing?feature=results_main
  • IBM DATABASE MAGAZINE at http://www.ibmdatabasemag.com
  • Credentials, the IBM Certification Newsletter at http://www-03.ibm.com/certify/email/201307credentials.shtml
  • The Informix Zone at http://www.informix-zone.com
  • There is now an Informix group on LinkedIn. The group is called “Informix Supporter”, so anyone loving Informix can join, from current IBM employees, former Informix employees, to users. It will also be a good occasion to get in touch with others or long-time-no-seen friends. If you fancy showing the Informix logo on your profile, join. To join, simply go to: http://www.linkedin.com/e/gis/25049/5E4B2048E558

Closing and Credits

The International Informix Users Group (IIUG) is an organization designed to enhance communications between its worldwide user community and IBM. The IIUG’s membership database now exceeds 25,000 entries and enjoys the support and commitment of IBM’s Information Management division. Key programs include local user groups and special interest groups, which we promote and assist from launch through growth.

Sources: IIUG Board of Directors
IBM Corp.

Editor: Gary Ben-Israel

For comments, please send an email to gary@iiug.org


By Vicente Salvador

Board member since 2014, a user since 1989, and an Informix fan. I'm a software architect, which allows me to combine technical and business skills.