The 'ioo' command manages the AIX system tunable parameters related to input/output (I/O).

The '-a' option displays all parameters and their current values.

The '-o' option displays or modifies the value of a single parameter.

The '-L' option lists the current, default and boot values of each parameter, together with its range, unit and type.
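
For example (values illustrative; maxpgahead is one of the tunables listed in the output below), a single parameter can be read and then changed with '-o':

/# ioo -o maxpgahead
maxpgahead = 8
/# ioo -o maxpgahead=16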

 

/# ioo -a
j2_atimeUpdateSymlink = 0
j2_dynamicBufferPreallocation = 16
j2_inodeCacheSize = 400
j2_maxPageReadAhead = 128
j2_maxRandomWrite = 0
j2_maxUsableMaxTransfer = 512
j2_metadataCacheSize = 400
j2_minPageReadAhead = 2
j2_nBufferPerPagerDevice = 512
j2_nPagesPerWriteBehindCluster = 32
j2_nRandomCluster = 0
j2_nonFatalCrashesSystem = 0
j2_syncModifiedMapped = 1
j2_syncPageCount = 0
j2_syncPageLimit = 256
j2_syncdLogSyncInterval = 1
j2_unmarkComp = 0
jfs_clread_enabled = 0
jfs_use_read_lock = 1
lvm_bufcnt = 9
maxpgahead = 8
maxrandwrt = 0
memory_frames = 1310720
minpgahead = 2
numclust = 1
numfsbufs = 196
pd_npages = 65536
pgahd_scale_thresh = 0
pv_min_pbuf = 512
sync_release_ilock = 0
vcadinet1:/# ioo -L
NAME                      CUR    DEF    BOOT   MIN    MAX    UNIT           TYPE
DEPENDENCIES
--------------------------------------------------------------------------------
j2_atimeUpdateSymlink     0      0      0      0      1      boolean           D
--------------------------------------------------------------------------------
j2_dynamicBufferPreallocation
16     16     16     0      256    16K slabs         D
--------------------------------------------------------------------------------
j2_inodeCacheSize         400    400    400    1      1000                     D
--------------------------------------------------------------------------------
j2_maxPageReadAhead       128    128    128    0      64K    4KB pages         D
--------------------------------------------------------------------------------
j2_maxRandomWrite         0      0      0      0      64K    4KB pages         D
--------------------------------------------------------------------------------
j2_maxUsableMaxTransfer   512    512    512    1      4K     pages             M
--------------------------------------------------------------------------------
j2_metadataCacheSize      400    400    400    1      1000                     D
--------------------------------------------------------------------------------
j2_minPageReadAhead       2      2      2      0      64K    4KB pages         D
--------------------------------------------------------------------------------
j2_nBufferPerPagerDevice  512    512    512    512    256K                     M
--------------------------------------------------------------------------------
j2_nPagesPerWriteBehindCluster
32     32     32     0      64K                      D
--------------------------------------------------------------------------------
j2_nRandomCluster         0      0      0      0      64K    16KB clusters     D
--------------------------------------------------------------------------------
j2_nonFatalCrashesSystem  0      0      0      0      1      boolean           D
--------------------------------------------------------------------------------
j2_syncModifiedMapped     1      1      1      0      1      boolean           D
--------------------------------------------------------------------------------
j2_syncPageCount          0      0      0      0      64K    iterations        D
--------------------------------------------------------------------------------
j2_syncPageLimit          256    256    256    1      64K    iterations        D
--------------------------------------------------------------------------------
j2_syncdLogSyncInterval   1      1      1      0      4K     iterations        D
--------------------------------------------------------------------------------
j2_unmarkComp             0      0      0      0      1      boolean           D
--------------------------------------------------------------------------------
jfs_clread_enabled        0      0      0      0      1      boolean           D
--------------------------------------------------------------------------------
jfs_use_read_lock         1      1      1      0      1      boolean           D
--------------------------------------------------------------------------------
lvm_bufcnt                9      9      9      1      64     128KB/buffer      D
--------------------------------------------------------------------------------
maxpgahead                8      8      8      0      4K     4KB pages         D
minpgahead
--------------------------------------------------------------------------------
maxrandwrt                0      0      0      0      512K   4KB pages         D
--------------------------------------------------------------------------------
memory_frames             1280K         1280K                4KB pages         S
--------------------------------------------------------------------------------
minpgahead                2      2      2      0      4K     4KB pages         D
maxpgahead
--------------------------------------------------------------------------------
numclust                  1      1      1      0      2G-1   16KB/cluster      D
--------------------------------------------------------------------------------
numfsbufs                 196    196    196    1      2G-1                     M
--------------------------------------------------------------------------------
pd_npages                 64K    64K    64K    1      512K   4KB pages         D
--------------------------------------------------------------------------------
pgahd_scale_thresh        0      0      0      0      1M     4KB pages         D
--------------------------------------------------------------------------------
pv_min_pbuf               512    512    512    512    2G-1                     D
--------------------------------------------------------------------------------
sync_release_ilock        0      0      0      0      1      boolean           D
--------------------------------------------------------------------------------

n/a means parameter not supported by the current platform or kernel

Parameter types:
S = Static: cannot be changed
D = Dynamic: can be freely changed
B = Bosboot: can only be changed using bosboot and reboot
R = Reboot: can only be changed during reboot
C = Connect: changes are only effective for future socket connections
M = Mount: changes are only effective for future mountings
I = Incremental: can only be incremented
d = deprecated: deprecated and cannot be changed

Value conventions:
K = Kilo: 2^10       G = Giga: 2^30       P = Peta: 2^50
M = Mega: 2^20       T = Tera: 2^40       E = Exa: 2^60
vcadinet1:/# man ioo |cat

Commands Reference, Volume 3, i - m

ioo Command

Purpose

Manages Input/Output tunable parameters.

Syntax

ioo [ -p | -r ] [ -y ] { -o Tunable [=NewValue] }

ioo [ -p | -r ] [-y] {-d Tunable}

ioo [ -p | -r ] [-y] -D

ioo [ -p | -r ] -a

ioo -h [ Tunable ]

ioo -L [ Tunable ]

ioo -x [ Tunable ]

Note: Multiple -o, -d, -x, and -L flags are allowed.

Description
Note: The ioo command can only be executed by root.

The ioo command configures Input/Output tuning parameters. This command sets or displays current or next boot values
for all Input/Output tuning parameters. This command can also make permanent changes or defer changes until the next
reboot. Whether the command sets or displays a parameter is determined by the accompanying flag. The -o flag
performs both actions. It can either display the value of a parameter or set a new value for a parameter.

If a process appears to be reading sequentially from a file, the values specified by the minpgahead parameter
determine the number of pages to be read ahead when the condition is first detected. The value specified by the
maxpgahead parameter sets the maximum number of pages that are read ahead, regardless of the number of preceding
sequential reads.
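
For example, since multiple -L flags are allowed, both read-ahead tunables and their mutual dependency can be inspected in one call (output follows the format shown under the -L flag below):

ioo -L minpgahead -L maxpgahead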

The operating system allows tuning of the number of file system bufstructs (numfsbufs) and the amount of data
processed by the write-behind algorithm (numclust).

Understanding the Effect of Changing Tunable Parameters

Misuse of the ioo command can cause performance degradation or operating-system failure. Before experimenting with
ioo, you should be thoroughly familiar with Performance overview of the Virtual Memory Manager.

Before modifying any tunable parameter, you should first carefully read about all its characteristics in the Tunable
Parameters section below, and follow any Refer To pointer, in order to fully understand its purpose.

You must then make sure that the Diagnosis and Tuning sections for this parameter truly apply to your situation and
that changing the value of this parameter could help improve the performance of your system.

If the Diagnosis and Tuning sections both contain only "N/A", you should probably never change this parameter unless
specifically directed by AIX development.

Flags
-a
Displays current, reboot (when used in conjunction with -r) or permanent (when used in conjunction with -p)
value for all tunable parameters, one per line in pairs tunable = value. For the permanent option, a value is
only displayed for a parameter if its reboot and current values are equal. Otherwise NONE displays as the value.
-d Tunable
Resets Tunable to its default value. If a Tunable needs to be changed (that is it is currently not set to its
default value) and is of type Bosboot or Reboot, or if it is of type Incremental and has been changed from its
default value, and -r is not used in combination, it is not changed but a warning displays.
-D
Resets all tunables to their default value. If tunables needing to be changed are of type Bosboot or Reboot, or
are of type Incremental and have been changed from their default value, and -r is not used in combination, they
are not changed but a warning displays.
-h [Tunable]
Displays help about the Tunable parameter if one is specified. Otherwise, displays the ioo command usage
statement.
-L [Tunable]
Lists the characteristics of one or all tunables, one per line, using the following format:

NAME                      CUR    DEF    BOOT   MIN    MAX    UNIT           TYPE
DEPENDENCIES
--------------------------------------------------------------------------------
minpgahead                2      2      2      0      4K     4KB pages         D
maxpgahead
--------------------------------------------------------------------------------
maxpgahead                8      8      8      0      4K     4KB pages         D
minpgahead
--------------------------------------------------------------------------------
pd_npages                 64K    64K    64K    1      512K   4KB pages         D
--------------------------------------------------------------------------------
maxrandwrt                0      0      0      0      512K   4KB pages         D
--------------------------------------------------------------------------------
numclust                  1      1      1      0             16KB/cluster      D
--------------------------------------------------------------------------------
numfsbufs                 196    196    196                                    M
--------------------------------------------------------------------------------
...
where:
CUR = current value
DEF = default value
BOOT = reboot value
MIN = minimal value
MAX = maximum value
UNIT = tunable unit of measure
TYPE = parameter type: D (for Dynamic), S (for Static), R (for Reboot),
B (for Bosboot), M (for Mount), I (for Incremental), C (for Connect), and d (for Deprecated)
DEPENDENCIES = list of dependent tunable parameters, one per line
-o Tunable [=NewValue ]
Displays the value or sets Tunable to NewValue. If a Tunable needs to be changed (the specified value is
different than current value), and is of type Bosboot or Reboot, or if it is of type Incremental and its
current value is bigger than the specified value, and -r is not used in combination, it is not changed but a
warning displays.

When -r is used in combination without a NewValue, the nextboot value for tunable displays. When -p is used in
combination without a NewValue, a value displays only if the current and next boot values for the Tunable are
the same. Otherwise NONE displays as the value.
-p
Specifies that the changes apply to both the current and reboot values when used in combination with the -o, -d
or -D flags. Turns on the updating of the /etc/tunables/nextboot file in addition to the updating of the
current value. These combinations cannot be used on Reboot and Bosboot type parameters, their current value
cannot be changed.

When used with -a or -o without specifying a new value, the values display only if the current and next boot
values for a parameter are the same. Otherwise NONE displays as the value.
-r
Makes changes apply to reboot values when used in combination with the -o, -d or -D flags. That is, turns on
the updating of the /etc/tunables/nextboot file. If any parameter of type Bosboot is changed, the user is
prompted to run bosboot.

When used with -a or -o without specifying a new value, next boot values for tunables display instead of
current values.
-x [Tunable]
Lists characteristics of one or all tunables, one per line, using the following (spreadsheet) format:

tunable,current,default,reboot,min,max,unit,type,{dtunable }
where:
current = current value
default = default value
reboot = reboot value
min = minimal value
max = maximum value
unit = tunable unit of measure
type = parameter type: D (for Dynamic), S (for Static), R (for Reboot),
B (for Bosboot), M (for Mount), I (for Incremental),
C (for Connect), and d (for Deprecated)
dtunable = space separated list of dependent tunable parameters
-y
Suppresses the confirmation prompt before executing the bosboot command.

Any change (with -o, -d or -D) to a parameter of type Mount results in a message being displayed warning you that
the change is only effective for future mountings.

Any change (with -o, -d or -D flags) to a parameter of type Connect will result in inetd being restarted, and a
message being displayed to warn the user that the change is only effective for future socket connections.

Any attempt to change (with -o, -d or -D) a parameter of type Bosboot or Reboot without -r, will result in an error
message.

Any attempt to change (with -o, -d or -D but without -r) the current value of a parameter of type Incremental with a
new value smaller than the current value, will result in an error message.
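
To illustrate these combinations, using maxpgahead purely as an example of a Dynamic tunable:

ioo -o maxpgahead=16     # change the current value only
ioo -p -o maxpgahead=16  # change the current value and /etc/tunables/nextboot
ioo -r -o maxpgahead=16  # change only the next-boot value
ioo -r -o maxpgahead     # display the next-boot value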

Tunable Parameters Type

All the tunable parameters manipulated by the tuning commands (no, nfso, vmo, ioo, raso, and schedo) have been
classified into these categories:
Dynamic
If the parameter can be changed at any time
Static
If the parameter can never be changed
Reboot
If the parameter can only be changed during reboot
Bosboot
If the parameter can only be changed by running bosboot and rebooting the machine
Mount
If changes to the parameter are only effective for future file systems or directory mounts
Incremental
If the parameter can only be incremented, except at boot time
Connect
If changes to the parameter are only effective for future socket connections
Deprecated
If changing this parameter is no longer supported by the current release of AIX.

For parameters of type Bosboot, whenever a change is performed, the tuning commands automatically prompt the user to
ask if they want to execute the bosboot command. For parameters of type Connect, the tuning commands automatically
restart the inetd daemon.

Note that the current set of parameters managed by the ioo command only includes Static, Dynamic, Mount and
Incremental types.
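
Because the -x output is comma separated with the type in the eighth field, the tunables of a given type can be
extracted with a one-line filter, for example (illustrative) the Mount-type parameters:

ioo -x | awk -F, '$8 == "M"'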

Compatibility Mode

When running in pre 5.2 compatibility mode (controlled by the pre520tune attribute of sys0, see Performance tuning
enhancements for AIX 5.2 in the Performance management), reboot values for parameters, except those of type Bosboot,
are not really meaningful because in this mode they are not applied at boot time.

In pre 5.2 compatibility mode, setting reboot values to tuning parameters continues to be achieved by imbedding
calls to tuning commands in scripts called during the boot sequence. Parameters of type Reboot can therefore be set
without the -r flag, so that existing scripts continue to work.

This mode is automatically turned ON when a machine is MIGRATED to AIX 5.2. For complete installations, it is turned
OFF and the reboot values for parameters are set by applying the content of the /etc/tunables/nextboot file during
the reboot sequence. Only in that mode are the -r and -p flags fully functional. See Kernel Tuning in AIX 5L Version
5.3 Performance Tools Guide and Reference for more information.
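
The compatibility mode can be verified through the pre520tune attribute of sys0, for instance (command shown as an
illustration):

lsattr -E -l sys0 -a pre520tune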

Tunable Parameters
j2_dynamicBufferPreallocation
Purpose:
Specifies the number of 16k slabs to preallocate when the filesystem is running low on bufstructs.
Values:
*    Default: 16 (256k worth)
*    Range: 0 to 256
*    Type: Dynamic
Diagnosis:
N/A
Tuning:
The value is in 16k slabs, per filesystem. The filesystem does not need remounting. The bufstructs for
Enhanced JFS are now dynamic; the number of buffers that start on the paging device is controlled by
j2_nBufferPerPagerDevice, but buffers are allocated and destroyed dynamically past this initial value.
If the value of external pager filesystem I/Os blocked with no fsbuf (from vmstat -v) increases, the
j2_dynamicBufferPreallocation should be increased for that filesystem, as the I/O load on the filesystem
could be exceeding the speed of preallocation. A value of 0 (zero) disables dynamic buffer allocation
completely.
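For example (values illustrative), the counter quoted above can be checked and the preallocation raised in two
commands:
vmstat -v | grep "filesystem I/Os blocked with no fsbuf"
ioo -o j2_dynamicBufferPreallocation=32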
j2_inodeCacheSize
Purpose:
Controls the amount of memory Enhanced JFS will use for the inode cache.
Values:
*    Default: 400
*    Range: 1 to 1000
*    Type: Dynamic
Diagnosis:
Tuning this value is useful when accessing large numbers of files causes excessive I/O as inodes are
recycled.
Tuning:
This tunable does not explicitly indicate the amount that will be used, but is instead a scaling factor.
It is used in combination with the size of the main memory to determine the maximum memory usage for the
inode cache. The valid values for this tunable are between 1 and 1000, inclusive. This value represents
a maximum size. The system may not reach the maximum size. If the tunable is lowered, a best effort will
be made to lower the size. It may not be possible to lower the size immediately, so shortly after tuning
the size of the cache may be higher than the maximum. It is not recommended to set the values above 400,
but the interface is provided in case it helps certain workloads. Values above 400 may exhaust the
kernel heap. Similarly, low values (values below 100) may be too few depending on the workload and
demands on the system. This may result in errors such as "File table full" being returned to the
application. Also, on the 32-bit kernel, the ideal maximum may never be reached due to a restricted
kernel heap. If this value is changed, the value for metadata_cache_size may need to be reconsidered if
tuning for a specific workload. The inode cache controls the inode data stored in memory, so if the
workload uses a large number of files, increasing the maximum size of the inode cache may help. If the
workload uses few files, but the files are large, increasing the maximum size of the metadata cache may
help; use the metadata_cache_size tunable for that. Because the inode cache is pinned memory, the cache
size can be decreased if the workload uses few files. This frees the memory for use elsewhere.
j2_maxPageReadAhead
Purpose:
Specifies the maximum number of pages to be read ahead when processing a sequentially accessed file on
Enhanced JFS.
Values:
*    Default: 128
*    Range: 0 to 65536 (64 K)
*    Type: Dynamic
Diagnosis:
N/A
Tuning:
The difference between minfree and maxfree should always be equal to or greater than
j2_maxPageReadAhead. If run time decreases with higher a j2_maxPageReadAhead value, observe other
applications to ensure that their performance has not deteriorated.
Refer To:
Sequential read performance tuning
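For instance (values illustrative), the minfree/maxfree gap mentioned above can be checked with the vmo command
before raising the read-ahead maximum:
vmo -o minfree -o maxfree
ioo -o j2_maxPageReadAhead=256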
j2_maxRandomWrite
Purpose:
Specifies a threshold for random writes to accumulate in RAM before subsequent pages are flushed to disk
by the Enhanced JFS's write-behind algorithm. The random write-behind threshold is on a per-file basis.
Values:
*    Default: 0
*    Range: 0 to 65536 (64 K)
*    Type: Dynamic
Diagnosis:
N/A
Tuning:
Useful if too many pages are flushed out by syncd.
j2_maxUsableMaxTransfer
Purpose:
Specifies the maximum LTG (Logical Track Group) size, in pages, that Enhanced JFS will gather into a
single bufstruct. Defaults to 512, or a 2 megabyte LTG in a single bufstruct.
Values:
*    Default: 512
*    Range: 1 to 4096
*    Type: Mount
Diagnosis:
N/A
Tuning:
The value is in pages. It is a mount tunable. The range is 1 to 4096. The filesystem must be remounted.
This tunable is not applicable on the 32-bit kernel due to heap constraints. On the 64-bit kernel, this
value is the maximum size of the gather list of pages that can be collected into a single buf struct.
The actual size of the gather list depends on the LTG size of the filesystem, this tunable only
specifies a maximum size that Enhanced JFS will use to construct the bufstructs. Kernel heap exhaustion
may occur due to the size of Enhanced JFS bufstructs. It is best to increment this value slowly,
observing overall system performance after each change, to avoid kernel heap exhaustion.
j2_metadataCacheSize
Purpose:
Controls the amount of memory Enhanced JFS will use for the metadata cache.
Values:
*    Default: 400
*    Range: 1 to 1000
*    Type: Dynamic
Diagnosis:
Tuning this value is useful when accessing large amounts of file metadata causes excessive I/O.
Tuning:
This tunable does not explicitly indicate the amount that will be used, but is instead a scaling factor;
it is used in combination with the size of the main memory to determine the maximum memory usage for the
inode cache. The valid values for this tunable are between 1 and 1000, inclusive. This value represents
a maximum size. The system may not reach the maximum size. If the tunable is lowered, a best effort will
be made to lower the size. It may not be possible to lower the size immediately, so shortly after tuning
the size of the cache may be higher than the maximum. It is not recommended to set the values above 400,
but the interface is provided in case it helps certain workloads. Values above 400 may exhaust the
kernel heap. Similarly, low values (values below 100) may be too few depending on the workload and
demands on the system. This may result in extremely slow access times. Also, on the 32-bit kernel, the
ideal maximum may never be reached due to a restricted kernel heap. If this value is changed, the value
for inode_cache_size may need to be reconsidered if tuning for a specific workload. The inode cache
controls the inode data stored in memory, so if the workload uses a large number of files, increasing
the maximum size of the inode cache may help; use the inode_cache_size tunable for that. If the workload
uses few files, but the files are large, increasing the maximum size of the metadata cache may help.
Because the metadata cache is pinned memory, the cache size can be decreased if the workload uses small
files. This frees the memory for use elsewhere.
j2_minPageReadAhead
Purpose:
Specifies the minimum number of pages to be read ahead when processing a sequentially accessed file on
Enhanced JFS.
Values:
*    Default: 2
*    Range: 0 to 65536 (64 K)
*    Type: Dynamic
Diagnosis:
N/A
Tuning:
Useful to increase if there are lots of large sequential accesses. Observe other applications to ensure
that their performance has not deteriorated. Value of 0 may be useful if I/O pattern is purely random.
Refer To:
Sequential read performance tuning
j2_nBufferPerPagerDevice
Purpose:
Specifies the minimum number of file system bufstructs for Enhanced JFS.
Values:
*    Default: 512
*    Range: 512 to 262144 (256 K)
*    Type: Mount
Diagnosis:
Using vmstat -v, look for the "external pager filesystem I/Os blocked with no fsbuf". If the kernel must
wait for a free bufstruct, it puts the process on a wait list before the start I/O is issued and will
wake it up once a bufstruct has become available.
Tuning:
This tunable specifies the number of bufstructs that start on the paging device. Enhanced JFS will
allocate more dynamically. Ideally, this value should not be tuned, and instead
j2_dynamicBufferPreallocation should be tuned. However, it may be appropriate to change this value if,
when using vmstat -v, the value of "external pager filesystem I/Os blocked with no fsbuf" increases
rapidly and j2_dynamicBufferPreallocation tuning has already been attempted. It may be appropriate to
increase if striped logical volumes or disk arrays are being used.
j2_nonFatalCrashesSystem
Purpose:
Turns on the j2_nonFatalCrashesSystem flag to crash the system when Enhanced JFS corruption occurs.
Values:
*    Default: 0
*    Range: 0 or 1
*    Type: Mount
Diagnosis:
N/A
Tuning:
N/A
j2_nPagesPerWriteBehindCluster
Purpose:
Specifies the number of pages per cluster processed by Enhanced JFS's write behind algorithm.
Values:
*    Default: 32
*    Range: 0 to 65536 (64 K)
*    Type: Dynamic
Diagnosis:
N/A
Tuning:
Useful to increase if there is a need to keep more pages in RAM before scheduling them for I/O when the
I/O pattern is sequential. May be appropriate to increase if striped logical volumes or disk arrays are
being used.
j2_nRandomCluster
Purpose:
Specifies the distance apart (in clusters) that writes have to exceed in order for them to be considered
as random by the Enhanced JFS's random write behind algorithm.
Values:
*    Default: 0
*    Range: 0 to 65536 (64 K)
*    Type: Dynamic
Diagnosis:
N/A
Tuning:
Useful to increase if there is a need to keep more pages in RAM before scheduling them for I/O when the
I/O pattern is random and random write behind is enabled (j2_maxRandomWrite).
j2_syncModifiedMapped
Purpose:
Syncs files that are modified through a mapping of shmat or mmap by using either the sync command or
sync daemon. If set to 0, these files are skipped by the sync command and the sync daemon and must be
synced using fsync.
Values:
*    Default: 1
*    Range: 0 or 1
*    Type: Dynamic
Diagnosis:
N/A
Tuning:
N/A
jfs_clread_enabled
Purpose:
This tunable controls whether JFS uses clustered reads on all files.
Values:
*    Default: 0
*    Range: 0 or 1
*    Type: Dynamic
Diagnosis:
N/A
Tuning:
In general, this option is not needed, but it may benefit certain workloads that have relatively random
read access patterns.
jfs_use_read_lock
Purpose:
Controls whether JFS uses a shared lock when reading from a file. When this option is on, multiple
processes can read the same file at the same time without blocking one another.
Values:
*    Default: 1
*    Range: 0 or 1
*    Type: Dynamic
Diagnosis:
N/A
Tuning:
Certain workloads may benefit from this.
lvm_bufcnt
Purpose:
Specifies the number of LVM buffers for raw physical I/Os.
Values:
*    Default: 9
*    Range: 1 to 64
*    Type: Dynamic
Diagnosis:
Applications performing large writes to striped raw logical volumes are not obtaining the desired
throughput rate.
Tuning:
LVM splits large raw I/Os into multiple buffers of 128 KB apiece. The default value of 9 means that about 1
MB I/Os can be processed without waiting for more buffers. If a system is configured to have striped raw
logical volumes and is doing writes greater than 1.125 MB, increasing this value may help the throughput
of the application. If performing larger than 1 MB raw I/Os, it might be useful to increase this value.
Refer To:
File system buffer tuning
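For example, for raw I/Os of roughly 2 MB to striped logical volumes, the buffer count could be raised (value
illustrative) to 16, that is 16 x 128 KB:
ioo -o lvm_bufcnt=16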
maxpgahead
Purpose:
Specifies the maximum number of pages to be read ahead when processing a sequentially accessed file.
Values:
*    Default: 8 (the default should be a power of two and should be greater than or equal to
minpgahead)
*    Range: 0 to 4096
*    Type: Dynamic
Diagnosis:
Observe the elapsed execution time of critical sequential-I/O-dependent applications with the time
command.
Tuning:
Because of limitations in the kernel, do not exceed 512 as the maximum value used. The difference
between minfree and maxfree should always be equal to or greater than maxpgahead. If execution time
decreases with higher maxpgahead, observe other applications to ensure that their performance has not
deteriorated.
Refer To:
Sequential page read ahead
maxrandwrt
Purpose:
Specifies a threshold (in 4 KB pages) for random writes to accumulate in RAM before subsequent pages are
flushed to disk by the write-behind algorithm. The random write-behind threshold is on a per-file basis.
Values:
*    Default: 0
*    Range: 0 to largest_file_size_in_pages
*    Type: Dynamic
Diagnosis:
vmstat shows page out and I/O wait peaks on regular intervals (usually when the sync daemon is writing
pages to disk).
Tuning:
Useful to set this value to 1 or higher if too much I/O occurs when syncd runs. Default is to have
random writes stay in RAM until a sync operation. Setting maxrandwrt ensures these writes get flushed to
disk before the sync operation has to occur. However, this could degrade performance because the file is
then being flushed each time. Tune this option to favor interactive response time over throughput. After
the threshold is reached, all subsequent pages are then immediately flushed to disk. The pages up to the
threshold value stay in RAM until a sync operation. A value of 0 disables random write-behind.
Refer To:
Sequential and random write behind
minpgahead
Purpose:
Specifies the number of pages with which sequential read-ahead starts.
Values:
*    Default: 2
*    Range: 0 to 4096 (should be a power of two)
*    Type: Dynamic
Diagnosis:
Observe the elapsed execution time of critical sequential-I/O-dependent applications with time command.
Tuning:
Useful to increase if there are lots of large sequential accesses. Observe other applications to ensure
that their performance has not deteriorated. Value of 0 may be useful if I/O pattern is purely random.
Refer To:
Sequential page read ahead
numclust
Purpose:
Specifies the number of 16 k clusters processed by the sequential write-behind algorithm of the VMM.
Values:
*    Default: 1
*    Range: 0 to any positive integer
*    Type: Dynamic
Diagnosis:
N/A
Tuning:
Useful to increase if there is a need to keep more pages in RAM before scheduling them for I/O when the
I/O pattern is sequential. May be appropriate to increase if striped logical volumes or disk arrays are
being used.
Refer To:
Sequential and random write behind
numfsbufs
Purpose:
Specifies the number of file system bufstructs.
Values:
*    Default: 196 (value is dependent on the size of the bufstruct)
*    Type: Mount
Diagnosis:
A default numfsbufs is calculated based on the running kernel and the memory configuration of the
machine. This value can be increased from the default value to a max of 2G-1. However, increasing the
numfsbufs to a value close to 2G may cause kernel heap exhaustion. It is best to tune the numfsbufs
incrementally, observing overall system performance as each change is made.
Tuning:
If the VMM must wait for a free bufstruct, it puts the process on the VMM wait list before the start I/O
is issued and will wake it up once a bufstruct has become available. May be appropriate to increase if
striped logical volumes or disk arrays are being used.
Refer To:
File system buffer tuning
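Because numfsbufs is of type Mount, a change only applies to file systems mounted afterwards; a hypothetical
sequence for a /data file system (path given only as an example) would be:
ioo -o numfsbufs=512
umount /data
mount /data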
pd_npages
Purpose:
Specifies the number of pages that should be deleted in one chunk from RAM when a file is deleted.
Values:
*    Default: 65536
*    Range: 1 to largest filesize_in_pages
*    Type: Dynamic
Diagnosis:
Real-time applications that experience sluggish response time while files are being deleted.
Tuning:
Tuning this option is only useful for real-time applications. If real-time response is critical,
adjusting this option may improve response time by spreading the removal of file pages from RAM more
evenly over a workload.
Refer To:
File system buffer tuning
pgahd_scale_thresh
Purpose:
When a system is low on free frames, aggressive page-ahead can unnecessarily exhaust the free list and
start page replacement. This tunable specifies a number of pages on the free list below which page-ahead
should be scaled back.
Values:
*    Default: 0 (Do not scale back pagehead)
*    Range: 0 to 4/5 of memory
*    Type: Dynamic
Diagnosis:
The system is paging but the expected memory usage should fit within main store, and the workload makes
significant use of sequential access to files.
Tuning:
When the number of free pages in a mempool drops below this threshold, page-ahead will be linearly scaled
back, avoiding prepaging memory that would then need to be forced back out when the LRU daemon runs.
Useful to increase if the system is unable to meet the memory demands under heavy read workload.
pv_min_pbuf
Purpose:
Specifies the minimum number of pbufs per PV that the LVM uses. This is a global value that applies to
all VGs on the system.
Values:
*    Default: 256 on 32-bit kernel; 512 on 64-bit kernel.
*    Range: 512 to 2G-1
*    Type: Dynamic
Diagnosis:
Increase when the value of "pending disk I/Os blocked with no pbuf" (as displayed by vmstat -v) is
increasing rapidly. This indicates that the LVM had to block I/O requests waiting for pbufs to become
available.
Tuning:
Useful to increase if there is a substantial number of simultaneous I/Os and the value of "pending disk
I/Os blocked with no pbuf" (as displayed by vmstat -v), increases over time. The lvmo command can also
be used to set a different value for a particular VG. In this case, the larger of the two values is used
for this particular VG. Using a value close to 2G will pin a great deal of memory and might result in
overall poor system performance. This value should be increased incrementally, and overall system
performance should be monitored at each increase.
Refer To:
LVM performance tuning with the lvmo command
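An illustrative check-and-tune sequence based on the vmstat counter quoted above (value chosen as an example):
vmstat -v | grep "pending disk I/Os blocked with no pbuf"
ioo -p -o pv_min_pbuf=1024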
sync_release_ilock
Purpose:
If set, will cause a sync() to flush all I/O to a file without holding the i-node lock, and then use the
i-node lock to do the commit.
Values:
*    Default: 0 (off)
*    Range 0 or 1
*    Type: Dynamic
Diagnosis:
I/O to a file is blocked when the syncd daemon is running.
Tuning:
Default value of 0 means that the i-node lock is held while all dirty pages of a file are flushed.
Refer To:
File synchronization performance tuning
Examples
1    To list the current and reboot value, range, unit, type, and dependencies of all tunable parameters that are
managed by the ioo command, enter the following command:

ioo -L
2    To display help on j2_nPagesPerWriteBehindCluster, enter the following command:

ioo -h j2_nPagesPerWriteBehindCluster
3    To set maxrandwrt to 4 after the next reboot, enter the following command:

ioo -r -o maxrandwrt=4
4    To permanently reset all ioo tunable parameters to default, enter the following command:

ioo -p -D
5    To list the reboot value of all ioo parameters, enter the following command:

ioo -r -a
6    To list (spreadsheet format) the current and reboot value, range, unit, type and dependencies of all tunable
parameters managed by the ioo command, enter the following command:

ioo -x

Related Information

The nfso command, no command, raso command, schedo command, tuncheck command, tunchange command, tundefault command,
tunrestore command, tunsave command, and vmo command.

Kernel Tuning in AIX 5L Version 5.3 Performance Tools Guide and Reference.

Performance tuning enhancements for AIX 5.2 in Performance management.
