IIUG Insider (Issue #211) January 2018

Highlights: Next IIUG board meeting

Welcome to the International Informix Users Group (IIUG) Insider! Designed for IIUG members and Informix user group leaders, this publication contains timely and relevant information for the IBM Informix community.

Contents:

Editorial

Informix development is doing OK. 12.10.xC10 looks great. I hope 12.10.xC11 will be even better.

However, we have lost the Informix documentation. It seems no one owns it anymore.

IBM dropped it and HCL didn’t pick it up. The last documented release was 12.10.xC8.

It is great having many new features, but it is hard to use them without documentation.

I hope this problem will be addressed in the near future.

You can find all the documentation at:

https://www.ibm.com/support/knowledgecenter/en/SSGU8G_12.1.0/com.ibm.welcome.doc/welcome.htm

It is well organized. However, if you look at the manuals, the latest version documented is still 12.10.xC8.

Gary Ben-Israel
IIUG Insider Editor
IIUG Board of Directors
gary@iiug.org

Highlights

Next IIUG Board Meeting

The IIUG Board will meet at the IBM Think conference in Las Vegas March 19-22.

If you are attending the conference we will be delighted to meet you there.

https://www.ibm.com/events/think/

Conference Corner

IIUG WORLD 2018 – Call for Speakers Opening Soon

IIUG World 2018 will be hosted by the Renaissance Arlington Capital View Hotel, Washington D.C. and will be held October 28th – November 1st, 2018.

This is a beautiful hotel and convenient to Reagan National Airport. It is also convenient to many historical sites, museums and other local attractions Washington D.C. is noted for. To learn more about this venue check out www.marriott.com/waspy.

We are updating our website in preparation for IIUG World 2018 speaker submissions. Our goal is to have speaker registration open no later than April 1, 2018.

We will send an email announcing that the submission website is open, with instructions on where to submit a brief overview of your topic. The email will also describe the benefits of becoming an IIUG World 2018 presenter. We would like to thank our past speakers, who helped make our IIUG events the best Informix events offered, and we would be happy to have you return. We are also always looking for new speakers with a passion for Informix and/or related software to share their knowledge. So start thinking of topics you are interested in sharing with our IIUG attendees and help us make this event the best ever.

IIUG World 2018 registration should open shortly after the call for speakers.

Looking for information or sessions from past IIUG events? We have it! We have updated our website to make this information available to you for 2008 – 2017 events.

It works like this: just go to www.iiug.org/iiugworld/YYYY, where YYYY is the year you are looking for and where that year’s documentation will be found. For example, the 2008 sessions and documents are at www.iiug.org/iiugworld/2008. By changing the year in the URL you can access documentation from 2008 to 2017. We hope you find what you are looking for. If what you want is missing, it is likely the speaker did not provide us their presentation to share with you.

Information regarding the current IIUG event, 2018, will be found at www.iiug.org/iiugworld. This includes everything you will need to know about the current event. This site is updated as new information is ready for release.

Hope to see you there.
IIUG World 2018 Planning Committee

RFE Corner

Just in case you are not aware, some time ago IBM created a public website to collect requests for new features directly from users. The RFE (Requests For Enhancements) website is part of developerWorks. You can access it here.

Once you have logged in with your usual IBM ID, choose “Information Management” in the Brand dropdown box and “Informix Servers” or “Client Products” in the Products dropdown box.

The interesting thing is that every request, including yours if you place one, is put up for voting. This means the RFEs that receive more votes have a greater chance of being considered by the architecture and development teams. In other words, this IS your opportunity to provide enhancement ideas even if you are not the biggest IBM customer on planet earth.

Some RFEs will be of great interest, others will not seem useful to you. This is why your opinion is important. Do not hesitate to vote and place comments!

The idea of the RFE Corner is to provide a digest of new Informix RFEs and make those RFEs more visible and accessible to the community, so that you can vote for them in a faster and easier way. If the community participates actively in this website, IBM will have solid and useful input from the customer base for introducing new functionality to the Informix product.

Also on the IBM website, a new feature has been released: My Notifications. You will want to register on this webpage in order to receive the notifications of your choice (new product defects, new patch releases, new versions, etc.) on the frequency of your choice (daily, weekly). I have registered and will definitely remain registered, due to the value of the information delivered.

Check it out here.

New RFEs for January 2018
We had some activity in January, although moderate.

onbar – add Encryption At Rest (EAR) keystore to critical files to be backed up

Request to add the possibility to encrypt onbar backups on systems using Encryption At Rest (EAR). Votes: 4

Need the ability to increase the number of logical log buffers from 3 to handle peak transaction loads.
Currently there are three logical log buffers. When the transaction load increases, sometimes all three logical log buffers are being flushed, especially in an unbuffered logging database environment. In such an environment, increasing the size of the logical logs often does not alleviate the problem. The ability to specify that the server use 4 or more log buffers would improve application performance. Votes: 7

Top 14 RFEs

Abstract Status Votes Progr.
In-Place Alter for varchar, lvarchar and boolean Under Consideration 59 0
Backup from RSS or HDR Secondaries using ontape, onunload, onbar, dbexport Under Consideration 58 0
SQL interface to obtain the temporary space usage (tables, hash, sorts…) Under Consideration 50 0
Obtain the query plan of a running query Under Consideration 48 0
Request to track and save a time stamp for last time an index was used. Nee… Delivered 40 0
Backup Individual database, not entire instance Submitted 34 0
ALTER owner of database objects after creation Submitted 29 0
Implement CREATE OR REPLACE option for stored procedures Under Consideration 27 0
Informix should be able to change owner on tables and any other objects Submitted 23 0
Ability to re-create views and procedures without dependent objects being dropped (34762) Under Consideration 20 0
Need an onmode option to terminate orphaned, tightly coupled global transac… Under Consideration 19 0
Allow triggers install/updates without taking an outage for the box (57792) Under Consideration 18 0
Allow “group commit” as other RDBMS (45166) Under Consideration 18 +1
Allow column ALIAS in HAVING clause (46670) Submitted 16 0

Hot RFEs

This new list gives visibility to hot RFEs that are not yet in the Top 14 or among the new RFEs.

Abstract Status Votes
allow repack of TBLSpace extents Submitted 15 (+8)
Implement Oracle “Flashback Database and Restore Points” Submitted 10 (+10)
Table level restore Submitted 8 (+1)
Need the ability to increase the number of logical log buffers from 3 Planned for Future Release 7 (+7)
onbar – add Encryption At Rest (EAR) keystore to critical files to be back Submitted 4 (+4)

Do not forget to vote for one or several of these RFEs if they fit your requirements.

You can access each RFE by clicking the links above. At the bottom of each RFE page you will find a hyperlink to vote for it, and the request stats, including the number of votes for the request, appear on the right side of the page. The more votes, the greater the chance the enhancement will be addressed by the development team.

Take some time to examine the full list and vote for the enhancement you would like to see implemented.

Eric Vercelletto

Support Corner

Easier Client Sessions Tracking

This feature was introduced in the Informix 12.10.xC10 fixpack release. It allows you to assign a character string to your CSDK or JDBC client session and identify that character string on the database server. This is useful in environments where the same user ID runs multiple instances of the same application and there is a need to distinguish one session from another.

This feature is implemented with a new CLIENT_LABEL environment variable that is set in your CSDK 4.10.xC10 or JDBC 4.10.JC10 client and is detectable on your IDS 12.10.xC10 database instance.

CLIENT_LABEL is set in the application environment and is then viewable on the database side with two methods: onstat -g env <sesID> and the sysmaster table sysenvses.

(Note that the SQL SET ENVIRONMENT statement will also support CLIENT_LABEL in 12.10.xC11.)
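
For example, once on 12.10.xC11 you should be able to label the current session from SQL. A minimal sketch, assuming CLIENT_LABEL follows the generic SET ENVIRONMENT form:

-- Assumed 12.10.xC11 syntax; the label value is illustrative
SET ENVIRONMENT CLIENT_LABEL 'nightly_batch_01';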

CSDK Example:

Here we set CLIENT_LABEL to two different strings and execute the same esqlc program. In this example, the session ids are 43 and 201.

bash-3.2$ export CLIENT_LABEL='csdk_client1'
bash-3.2$ ./myesqlc
bash-3.2$ export CLIENT_LABEL='csdk_client2'
bash-3.2$ ./myesqlc
onstat:
onstat -g env 43

IBM Informix Dynamic Server Version 12.10.FC10 -- On-Line -- Up 5 days 23:01:39 -- 210712 Kbytes

Environment for session 43:

Variable Value [values-list]
CLIENT_LABEL csdk_client2
CLIENT_LOCALE en_US.8859-1
CLNT_PAM_CAPABLE 1

onstat -g env 201

IBM Informix Dynamic Server Version 12.10.FC10 -- On-Line -- Up 5 days 23:02:41 -- 210712 Kbytes

Environment for session 201:

Variable Value [values-list]
CLIENT_LABEL csdk_client1
CLIENT_LOCALE en_US.8859-1
CLNT_PAM_CAPABLE 1

sysmaster:

select * from sysenvses where envses_name = 'CLIENT_LABEL'

envses_sid 201
envses_id 9
envses_name CLIENT_LABEL

envses_value csdk_client1

envses_sid 43
envses_id 9
envses_name CLIENT_LABEL

envses_value csdk_client2

2 row(s) retrieved.

Database closed.

JDBC example:
Here we set CLIENT_LABEL to two different strings in the JDBC connection URL and execute the same JDBC program. In this example, the session ids are 232 and 234.

java myjdbc "jdbc:informix-sqli://myhost:52220:user=myuser;password=mypasswd;CLIENT_LABEL=jdbc_client1"
java myjdbc "jdbc:informix-sqli://myhost:52220:user=myuser;password=mypasswd;CLIENT_LABEL=jdbc_client2"

onstat:

onstat -g env 232

IBM Informix Dynamic Server Version 12.10.FC10 -- On-Line -- Up 6 days 00:56:26 -- 210712 Kbytes

Environment for session 232:

Variable Value [values-list]
CLIENT_LABEL jdbc_client1
CLIENT_LOCALE en_US.8859-1
CLNT_PAM_CAPABLE 1

onstat -g env 234

IBM Informix Dynamic Server Version 12.10.FC10 -- On-Line -- Up 6 days 00:56:59 -- 210712 Kbytes

Environment for session 234:

Variable Value [values-list]
CLIENT_LABEL jdbc_client2
CLIENT_LOCALE en_US.8859-1
CLNT_PAM_CAPABLE 1

sysmaster:

Database selected.

select * from sysenvses where envses_name = 'CLIENT_LABEL'

envses_sid 234
envses_id 9
envses_name CLIENT_LABEL
envses_value jdbc_client2
envses_sid 232
envses_id 9
envses_name CLIENT_LABEL
envses_value jdbc_client1
2 row(s) retrieved.
Database closed.
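
If many labeled sessions are connected, you can join sysenvses to syssessions to see who owns each label. A minimal sketch, assuming the usual sysmaster columns sid, username, and hostname:

-- Map each CLIENT_LABEL to the session's user and host
select s.sid, s.username, s.hostname, e.envses_value label
from sysenvses e, syssessions s
where e.envses_sid = s.sid
and e.envses_name = 'CLIENT_LABEL';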

In summary, this is a simple yet powerful way to more easily track sessions in your environment.
Jeffrey McMahon

Informix Warehouse Accelerator for Cloud (3)

In this post we look at methods to define data marts in the Informix Warehouse Accelerator (IWA). IWA is a software appliance for the Informix database server, using in-memory, columnar data storage for business intelligence applications doing OLAP. For such workloads IWA can typically run queries 10 – 100 times faster than the Informix database server can on its own.

Part 3: Defining IWA Data Marts

In the last post we used the IWA Example Database to demonstrate the creation and administration of a data mart. This was quite easy because we used a ready-made data mart definition. Additionally, the example database itself is pretty simple. It serves well for explaining and understanding some basic concepts, but real use case scenarios are much more complex. While the administration steps for data marts always remain simple enough, finding the proper definition for a data mart can be quite difficult. And if the data mart is to accelerate a report that consists of hundreds of queries, this does not make things easier. Not all of the queries may be acceleratable. Figuring out which ones are, and from there the tables and columns needed in the data mart, can be a herculean task. Sometimes such reports have ‘grown’ over many years and there may be nobody around who knows all the details.

Brute Force

One approach is to include all tables with all their columns in the data mart – or at least all columns of supported data types. However, this is probably not optimal and may not even be possible because the resulting data mart may be too big. Often there are some lengthy description columns never used by the typical OLAP queries. Such columns are prime candidates for omission from a data mart, making the data mart smaller and the data loading faster. Identifying these columns is a problem, especially when there are hundreds or even thousands of tables in the database.

Workload Analysis

Workload analysis attacks the problem from the other side: analyze a given workload (e.g. one or several reports), filter out the queries that can be accelerated, and determine just the tables and columns that these queries need. For this task we use built-in functionality of the Informix database server called “query probing”. Query probing does the filtering of acceleratable queries and automatically collects all the information necessary for building a data mart definition, including the references between tables. This is especially useful when no primary/foreign key constraints are defined, which is often the case in a data warehouse.

We can further combine query probing with the SQL Tracing feature, which also collects timing information on the individual queries in a workload. Obviously, we need to actually execute the workload in order to get the timings. Once we have all the information from query probing and SQL Tracing, we can see which of the long-running queries can be accelerated. We may even be able to identify long-running queries that cannot be accelerated and perhaps change them in some way to make them acceleratable.
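
To collect those timings, SQL Tracing can be switched on through the SQL administration API before running the workload. A minimal sketch, assuming DBSA rights on the “sysadmin” database (the trace count and size here are illustrative, not recommendations):

$ echo "execute function task('set sql tracing on', '1000', '2', 'low', 'global');" \
| dbaccess sysadmin -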

Finally, we use the integrated function “ifx_probe2mart()” to automatically create a data mart definition from the collected query probing data. From there it is straightforward to create the data mart itself. The data mart load step after the creation will show how big the data mart is and whether it fits into IWA’s memory at all. If the data mart turns out to be too big, we can scrutinize the data mart definition for a column or table that we can remove as a compromise. We can also search the collected SQL Trace statements for that table to see which queries need the table or column we would like to remove and what run time they had. After all, we may not want to remove something that is needed by the longest running query, thus making it non-acceleratable.
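
To find the traced statements that touch a candidate table, a query against the “syssqltrace” table in the sysmaster database can help. A sketch, using a hypothetical table name “mytable”:

$ echo "select sql_id, sql_runtime, sql_statement \
  from syssqltrace where sql_statement matches '*mytable*';" \
| dbaccess sysmaster -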

Workload Analysis Example

Let’s put workload analysis to work in a very simple example. We again use the IWA Example Database named “iwadb” from the last post. As workload we use the three example queries in the files “q1.sql”, “q2.sql” and “q3.sql” that come with it. Following are the steps:

  0. Preparation: Update statistics and create a logging database named “martdb” for the data mart definition:

$ echo "update statistics low;" \
| dbaccess iwadb -
$ echo "create database martdb with log;" \
| dbaccess - -

  1. Clear probing data possibly left over from previous probings, turn on query probing, and run the workload without actually executing it:

$ echo "set environment use_dwa 'probe_cleanup';" \
| dbaccess iwadb -
$ ( echo "set environment use_dwa 'probe start';" ; \
echo "set explain avoid_execute;" ; \
cat q[123].sql ) \
| dbaccess iwadb -

  2. Check for collected probing data:

$ onstat -g probe

  3. From the collected probing data create a data mart definition for a data mart with name “new_mart” in the logging database “martdb”:

$ echo "execute procedure ifx_probe2Mart('iwadb', 'new_mart');" \
| dbaccess martdb -

  4. Extract the data mart definition to an XML file named “new_mart.xml”:

$ echo "select lotofile(ifx_genMartDef('new_mart'),'new_mart.xml"'!'"','client') \
from iwa_marts where martname='new_mart';" \
| dbaccess martdb -

A few thoughts on the above example:

  • We need to perform the preparation step (0) only once, even if we later want to adapt the data mart definition to a changed query workload, e.g. when new queries get added to the workload.
  • Probing data is kept in memory only. After a restart of the Informix server all previously collected probing data will be gone.
  • For the actual probing, we run the statements that set the environment and the actual workload in the same dbaccess session. This is necessary because the environment settings shown are valid only for the duration of the session. Otherwise, it is also possible to set the variable “use_dwa” for a specific (existing) session. Alternatively, the setting can be placed in a specific “sysdbopen()” procedure (see the sketch after this list).

  • As we probe the workload without really executing it, we do not use SQL Tracing in this example; SQL Tracing would not produce timing information anyway.

  • The command “onstat -g probe” basically shows the raw probing information with little formatting to make it ‘readable’. The command is useful to check whether any probing data was collected at all; in-depth interpretation of the data is something for Tech Support.
  • To perform the procedure “ifx_probe2Mart()”, we need to be connected to the logging database that we created in the preparation step. If a definition for the data mart already exists, any newly probed tables or columns will be added to the existing definition.

Therefore, if we want to get rid of an existing data mart definition in the logging database, we have to delete it manually with an SQL statement like the following:

$ echo "delete from iwa_marts where martname='new_mart';" \o | dbaccess martdb -o

  • When extracting the data mart definition as an XML file, we use the ‘!’ to prevent the function “lotofile()” from appending a string that makes the file name unique between repeated runs.
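
Regarding the “sysdbopen()” alternative mentioned in the notes above: the probing switch can be placed in such a procedure so that every new session starts probing automatically. A minimal sketch, assuming DBA privileges and the public.sysdbopen() convention:

$ echo "create procedure public.sysdbopen()
  set environment use_dwa 'probe start';
end procedure;" \
| dbaccess iwadb -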

In the last blog post I promised to touch on more conceptual details and administration tasks for data marts in IWA. Today’s topic of defining a data mart does not exactly fit my promise, but at least it is an important aspect of the overall concept. I will certainly return to more details in a future blog post.


Martin Fuerderer

High Availability Made Easy: ifxclone

Hopefully you have an established routine of local or cloud backups, and you should be sleeping a little better (when you get a chance to sleep). And like fire drills or other emergency procedures, you are probably practicing a restore of your backups on another server. You have probably discovered that, depending on the size of your data and the speed of the system, this restore may take hours. Moving from ontape to onbar helped by enabling parallel backup and restore of your dbspaces, but while backups can happen with the lights on, during a restore the server won’t be accessible until the restore is complete. You may not be able to afford for your applications to be down that long.

You don’t have to just react to a failure; you can proactively reduce downtime by utilizing one or more of the Informix replication technologies discussed in the “Informix Replication Technology” blog by Nagaraju Inturi. With High-availability Data Replication (HDR), your database operations can switch to a secondary server in the event of a hardware or disk failure on your primary system. Each of the “for-purchase” licenses authorizes the use of at least one high-availability server. If you are not using any of the “for-purchase” editions, or want to follow along without touching your actual system, the HDR and RSS technology is available between two instances of the Informix Developer Edition. This blog shows how to implement an HDR secondary server using the “ifxclone” command, after making a few simple changes to your current server’s configuration, without stopping it (this is, of course, about availability).

Overview of HDR Components

One server plays the role of “primary”, and all updates made to logged database tables are recorded in the logical log. The primary sends those log updates to each of the “secondary” servers, which apply the changes to their own disks. This continually keeps the secondary’s data consistent with the data on the primary.

If the connection between the primary and the secondary is too slow or unstable, this can impact performance on the primary. For these situations, creating a Remote Standalone Secondary (RSS) server allows the primary to use asynchronous communication with the secondary, the trade-off being a risk that a primary failure could happen before some committed transactions are recorded on the RSS machine. Therefore, HDR is your high-availability option, and RSS is better suited for fast disaster recovery. Some “for-purchase” licenses entitle the use of HDR and RSS simultaneously.

Limitations/Restrictions
Informix HDR and RSS replication technology uses the logical logs to keep the secondary system up to date. Therefore, for data to be replicated, it must be stored in logging databases and in spaces with logging. It cannot be in BLOB spaces (no logging is done for BLOB data), non-logging smartblob spaces, or external storage. Other restrictions on the configuration of the servers are in the Administrator’s Guide.

Prerequisites

  1. HARDWARE:
    A second machine with:

    1. Sufficient storage capacity to match the storage allocated on the original machine
    2. Adequate memory and processing capacity to handle your workload in the event of the primary server failing or needing to be shut down.
  2. SOFTWARE:
    1. Install the same version of Informix software on the second machine as the original machine.
    2. Install all User-defined types, user-defined routines, and DataBlade modules that are installed on the original/primary machine.
      (They do not need to be registered on the secondary server).
  3. Customized scripts:
    1. alarmprogram.sh
    2. Any other customized scripts from the original machine (etc.)
  4. Ownership and Permission of storage paths.
    1. Owner and group should match the primary server (informix:informix)
    2. Directories containing cooked files must have 770 permissions.

Common Setup (from Source Server)

Some changes and files will be the same on both servers. We will make the changes to the files on “host1” (our “server1” machine), and then “ifxclone” will copy them to the new target host (host2). We are avoiding naming our systems “primary” and “secondary” because these are roles that can switch between the servers over time, and perhaps for long periods of time.

Trusted Server Connection

The primary server must trust the connection from the new database server, and we need mutual trust for when the roles of the two servers reverse. While you can do this at the OS level, we will assume you want this trust limited to the database service. This can be done by creating a file in the $INFORMIXDIR/etc directory and identifying it in the ONCONFIG file’s REMOTE_SERVER_CFG parameter. For this example, we will create a file named “trusted.hosts” in the “$INFORMIXDIR/etc” directory. Using REMOTE_SERVER_CFG also allows Informix tools to update the trust information later.

Create an empty “trusted.hosts” file, and set the file permissions so that only the instance owner (informix) can write it and the group (informix) can read it:

cp /dev/null $INFORMIXDIR/etc/trusted.hosts
chmod 640 $INFORMIXDIR/etc/trusted.hosts

Now configure the server to use the new file:

onmode -wf REMOTE_SERVER_CFG=trusted.hosts

Enable “ifxclone” and the “sysadmin:admin” function to set up the connectivity information on all servers in the cluster (for now, it is just one):

onmode -wf CDR_AUTO_DISCOVER=1

You could manually edit the file the first time, but instead we will execute the “admin” SQL function in the “sysadmin” database using dbaccess to update the REMOTE_SERVER_CFG file on all servers in the cluster. (We also add our current server as trusted, so either server can initiate a connection to the other; this trust information will eventually be copied to our new server too.)

execute function sysadmin:admin('cdr add trustedhost', 'host2 informix, host1 informix');

HDR ONCONFIG Changes

Next, disable temp table logging and enable the snapshot copy to be made by the “ifxclone” command:
onmode -wf TEMPTAB_NOLOG=1
onmode -wf ENABLE_SNAPSHOT_COPY=1

NOTE:

  1. The TEMPTAB_NOLOG setting is only really required on the secondary. However, when the roles are reversed, leaving this enabled would affect applications that depend on rollback of TEMP tables. So either plan to correct it immediately after promoting a new primary, or be consistent and don’t rely on TEMP table transactions.
  2. If you are going to set up an RSS server instead of an HDR secondary, we need to enable the logging of index builds. This causes all index creation operations to go through the logs (hope you have automatic log backups configured):

onmode -wf LOG_INDEX_BUILDS=1

INFORMIXSQLHOSTS

The “ifxclone” command will attempt to add the new server to the INFORMIXSQLHOSTS file, and if the entry already exists, it will not properly modify the file to define a server group the way we want. If you already have an entry for the new server, delete it or comment it out for now:

server1 onsoctcp host1 9088
# server2 onsoctcp host2 9088

Database Logging Mode

Only databases using logging will be replicated. Databases using buffered logging can lose transactions if the log has not been flushed to disk when the server fails, and this same window of vulnerability extends to the replica as well. If all databases are logging, or you don’t need the non-logging databases to be replicated, you can move on to the next step. You can list the non-logging databases, and those using buffered logging, by running the following query against the “sysmaster” database table “sysdatabases” using “dbaccess”:

select name, is_buff_log from sysmaster:sysdatabases where is_logging = 0 or is_buff_log = 1;

If no rows are returned, then all databases are using unbuffered logging. For each database you want replicated that is currently non-logging (or using buffered logging), you can convert it to unbuffered logging mode during the ontape system backup by using the “ontape” option “-U” with the list of database names to change. For example, changing “db1” and “db2” to unbuffered logging during the Level 0 system backup requires the following ontape command:

ontape -s -L 0 -U db1 db2

Notes:

  1. If you are not using ontape for backups, then you must use the “ondblog” command to change the logging mode, and then onbar for the Level 0 backup.
  2. A logging mode change requires an exclusive lock on the DB, so plan for the DB to be unavailable.
  3. If you later create a non-logging database, it will appear to exist on the secondary, but it will not be usable there.
  4. Changing a non-logging DB to logging after Data Replication has been started is a pain; you must:
    1. Stop the secondary servers
    2. Change the logging mode and take a Level 0 backup.
    3. Restore the secondary from backup (or re-clone).

With Data Replication (DR), you should create all new databases with some form of logging (ANSI, unbuffered, or buffered).

Chunk Path/Device Information

While we are still on server1, collect the path information for the chunks, since these paths need to exist on the new server. You can collect this with the “onstat -d” command; we will discuss the output during the setup of server2.

Target Setup
Most, if not all, of the new server configuration can be handled by “ifxclone”:

  1. Copies the REMOTE_SERVER_CFG trusted host information.
  2. Copies the ONCONFIG. Due to the number of configuration parameters that need to be the same, it is probably easiest to let “ifxclone” copy the ONCONFIG from the original server. Otherwise you need to verify that each of the ONCONFIG parameters which must be identical (listed in the Administrator’s Reference) has been correctly set. If needed, you can have “ifxclone” override some settings using the “-c” option.
  3. Sets up the source and local INFORMIXSQLHOSTS files.

The “ifxclone” command cannot set up the symlinks to raw devices, or chunk files in directories that do not exist.

Create Needed Directories and Symlinks to Raw Devices

All dbspace chunk paths used on the primary must be the same on our new server. You will need to create the matching symlinks to the corresponding physical devices. For cooked files, “ifxclone” can create the missing chunk files automatically, even for the root dbspace, but it will not create the parent directories. The parent directories for those paths must already exist for “ifxclone” to successfully create the chunk files.

You can find the paths in the “onstat -d” output from “server1”. For this example, “onstat -d” produced the following output, showing that the chunk paths are all under the “/chunkdir” directory:

IBM Informix Dynamic Server Version 12.10.FC10 -- On-Line --
Up 02:22:13 -- 185676 Kbytes

Dbspaces

address number flags fchunk nchunks pgsize flags owner name
4478b028 1 0x20001 1 2 2048 N BA informix rootdbs
4611bca8 2 0x2001 3 1 2048 N TBA informix tmpspace

2 active, 2047 maximum

Chunks

address chunk/dbs offset size free bpages flags pathname
4478b268 1 1 0 150000 83029 PO-B-- /chunkdir/rootspace
4613a028 2 1 0 500 497 PO-B-- /chunkdir/rootspace_chk2
460b8028 3 2 0 500 447 PO-B-- /chunkdir/tmpspace

3 active, 32766 maximum
NOTE: The values in the “size” and “free” columns for DBspace chunks are displayed in terms of the “pgsize” of the DBspace to which they belong.

Expanded chunk capacity mode: always

Then, for each of those paths, make sure the parent directories exist with adequate permissions. The above example only needs the “/chunkdir” directory to exist. It must also have the right ownership and permissions, so you will need to do these operations as root/administrator:

mkdir /chunkdir
chown informix:informix /chunkdir
chmod 770 /chunkdir

Under those directories you can manually create the needed raw-device symlinks (these need 660 permissions, the same as any cooked files you choose to create manually).

If the list of chunks is long, you can collect the paths via the following command on “server1” and use the output to create a script that handles the above steps for creating the required parent directories:

onstat -d | awk '$NF ~ /\// { print $NF; }'
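
For example, a small pipeline can turn that list into a script of the required mkdir/chown/chmod commands. This is a sketch; it assumes cooked files under ordinary directories, and the generated make_chunk_dirs.sh is then run as root on host2:

# Derive the unique parent directories of all chunk paths on server1
# and emit the commands needed to create them on the new host.
onstat -d | awk '$NF ~ /\// { print $NF; }' | xargs -n1 dirname | sort -u |
while read dir
do
    echo "mkdir -p $dir && chown informix:informix $dir && chmod 770 $dir"
done > make_chunk_dirs.sh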

Run “ifxclone”

“ifxclone” will make “server1” a primary, perform a “fake” backup of “server1”, and restore it to “server2” as a new HDR secondary. All changes to logged spaces on “server1” will be continually applied to “server2”. “ifxclone” will also add the secondary to the INFORMIXSQLHOSTS file on the “source” server, and copy the “source” server’s ONCONFIG information to the target:

ifxclone --source=server1 --sourceIP=host1 --sourcePort=9088 \
--target=server2 --targetIP=host2 --targetPort=9088 \
--trusted \
--createchunkfile \
--disposition=HDR \
--autoconf

See “The ifxclone utility” section of the Administrator’s Reference for complete information on ifxclone.

Check the Progress

To monitor the Data Replication Information and message log, run the following (Ctrl-C to break out):

onstat -g dri -m -r 1

The server state at the top of each output burst should change from “Initialization” to “Fast Recovery”, then eventually to “Read-Only (Sec)”, and show that it is paired with server1 and that the Data Replication state is “on”:

IBM Informix Dynamic Server Version 12.10.FC10 -- Read-Only (Sec) -- Up 00:01:37 -- 193868 Kbytes

Data Replication at 0x45a68028:

Type State Paired server Last DR CKPT (id/pg) Supports Proxy Writes
HDR Secondary on server1 52 / 72 N

DRINTERVAL 0
DRTIMEOUT 30
DRAUTO 0
DRLOSTFOUND /INFORMIXDIR/etc/dr.lostfound
DRIDXAUTO 0
ENCRYPT_HDR 0

Backlog 0
Last Send 2017/11/29 10:47:05
Last Receive 2017/11/29 10:47:05
Last Ping 2017/11/29 10:46:58
Last log page applied(log id,page): 0,0

Server Failed to Initialize?

If the server failed to initialize, check the message log for errors. Correct any missing paths or permissions on the filesystem, or make any other changes needed to the ONCONFIG file. Then rerun the “ifxclone” command, but you must add “--useLocal” so you don’t overwrite the changes you made in the local ONCONFIG.

INFORMIXSQLHOSTS was Updated

The “autoconf” option added a “group” to the server INFORMIXSQLHOSTS file and assigned each server to the group using the “g=” option. The group name is the name of the source server with “g_” prepended. The INFORMIXSQLHOSTS file from our example now contains three lines:

g_server1 group - - i=1
server1 onsoctcp host1 9088 g=g_server1
server2 onsoctcp host2 9088 g=g_server1

Verify Backup Device Configuration

Verify that the ONCONFIG LTAPEDEV and TAPEDEV values (copied from server1 to server2) exist and have the correct permissions (rwx for owner and group) on the new server. This will ensure that any automatic and manual backups run as intended. If you need to change them on the secondary later, you can use the “onmode -wf” command while the server is running to update the value.
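
For example, to point the secondary’s logical log backups at a directory of its own (the path here is hypothetical):

onmode -wf LTAPEDEV=/backups/server2/logs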

Copying the alarmprogram.sh ensures that the same automatic log backup configuration is used on server2. In the event server2 becomes the primary, the log backups are already configured for operation.

Manual Failover

In the event that server1 is unavailable, or you need to shut it down, the secondary can be made the primary. On the secondary server, run the following command, which will both shut down the current primary and make the secondary the new primary (“force” is needed if the primary is already unavailable):

onmode -d make primary server2 force

Until you run the command to make the secondary the primary, server2 is read-only (even when configured as an updatable secondary, which forwards updates to the primary).

Making a new Secondary from old Primary
Once you have “server1” working again, and it didn’t lose any data, it can become the “secondary” for the current primary (server2). Just restart server1 with “oninit -PHY” (it pretends it was physically restored and is waiting for logical recovery), make it the secondary for our current primary so that it recovers logical logs from the primary, and monitor the message log until the server is running as a secondary:

oninit -PHY
onmode -d secondary server2
onstat -m -r

If the server wasn’t down long, your cluster is complete again, with “server1” as your secondary. However, if any of the needed logical logs are no longer available directly from the primary, the server will remain in the “Fast Recovery (Sec)” state, and the message log will indicate that you need to do a recovery from tape:

13:46:00 DR: Secondary server needs failure recovery
13:46:01 DR: Start failure recovery from tape ...

In this event, you need to restore the missing logs from the log backups of server2. For example, if you are using “ontape”, you could copy the server2 logical log backups to host1 and place them in server1’s configured LTAPEDEV location. Since the backup names all start with a prefix indicating “host2” and server number “0”, we provide this information via the IFX_ONTAPE_FILE_PREFIX environment variable and then run the “ontape” command to restore the logical logs:

( export IFX_ONTAPE_FILE_PREFIX="host2_0"; ontape -v -l -d )

If you lost the data disk, you can re-clone from “server2” by following the “Target Setup” steps again, but this time the target is “server1” and the source is “server2”.

Client Failover to Secondary

When planning for server failover in the event of a primary failure, the clients connecting to this cluster should be prepared to connect to whichever server is the current primary accepting transactions. The traditional way of enabling this, without utilizing the Connection Manager, is to use sqlhosts groups instead of server names when connecting.

Connect to Server Group

Client applications configured to connect to the “g_server1” INFORMIXSERVER group will automatically connect to the first server in the group; if it is unavailable, they will try the next server in the group until they connect to the current primary. If the application automatically attempts to reconnect following a connection failure, the switch to a different server is transparent to users. Otherwise, restarting the application to force a new connection will find the current primary.

Client applications using a JDBC URL or JDBC DataSource to connect to the server group need to include options that specify the SQLHOSTS file path (as either a local file or a URL from which to retrieve the file) and the type of that path; the information can also be retrieved via LDAP.

Here is an example JDBC URL specifying INFORMIXSERVER as the server group, plus SQLH_TYPE and SQLH_FILE (the SQLHOSTS file is “/local/sqlhosts”):

jdbc:informix-sqli://INFORMIXSERVER=g_server1;SQLH_TYPE=FILE;SQLH_FILE=/local/sqlhosts;USER=my_user_name;PASSWORD=my_password

You can see this and additional details and options for connecting to HA servers via JDBC in the IBM Informix JDBC Driver Programmer’s Guide.

Further Reading: Connection Manager

The Connection Manager can monitor primary server availability and automatically force a secondary to become the primary. Clients connecting through the Connection Manager will be directed to the primary server. The Connection Manager can also implement rules to determine the best connection for clients.

See the Administrator’s Guide and the section on “Connection management through the Connection Manager”.

Kevin Mayfield

Informix News

More than €5m worth of direct orders awarded by six government entities in a year

The highest direct order, worth €552,272, was awarded by MITA for “IBM Informix Subscription and Support Contract.”

http://www.independent.com.mt/articles/2018-01-31/local-news/More-than-5m-worth-of-Direct-Orders-awarded-by-six-government-entities-in-a-year-6736184290

Informix Resources

IBM Informix home page www.informix.com or directly at: http://www-01.ibm.com/software/data/informix/

Informix Blogs and Wikis

Blogs and Wikis that have been updated during the last month

More Blogs and Wikis

Social Media

Linkedin: https://www.linkedin.com/groups/25049
Twitter : https://twitter.com/IBM_Informix
Facebook : https://www.facebook.com/IBM.Informix
YouTube : https://ibm.biz/BdH2nb
Informix IoT Channel : https://ibm.biz/BdH2nm

Forums, Groups, Videos, and Magazines

  • The IIUG forums at /forums/technical.php
  • Informix Marketing channel on YouTube http://www.youtube.com/user/informixmarketing?feature=results_main
  • IBM DATABASE MAGAZINE at http://www.ibmdatabasemag.com
  • Credentials, the IBM Certification Newsletter at http://www-03.ibm.com/certify/email/201307credentials.shtml
  • The Informix Zone at http://www.informix-zone.com
  • There is now an Informix group on LinkedIn. The group is called “Informix Supporter”, so anyone who loves Informix can join, from current IBM employees and former Informix employees to users. It is also a good occasion to get in touch with others or with long-time-no-see friends. If you fancy showing the Informix logo on your profile, join. To join, simply go to: http://www.linkedin.com/e/gis/25049/5E4B2048E558

Closing and Credits

The International Informix Users Group (IIUG) is an organization designed to enhance communications between its worldwide user community and IBM. The IIUG’s membership database now exceeds 25,000 entries and enjoys the support and commitment of IBM’s Information Management division. Key programs include local user groups and special interest groups, which we promote and assist from launch through growth.

Sources: IIUG Board of Directors
IBM Corp.
Editor: Gary Ben-Israel

For comments, please send an email to gary@iiug.org


By Vicente Salvador

Board member since 2014, a user since 1989, and an Informix fan. I am a software architect, which allows me to combine technical and business skills.