The 'nfso' command manages the system options related to the NFS layers.
The most frequently used flags are '-a', which displays the value of every tunable, and '-o tunable=value', which changes a value.
An example is shown below:
# nfso -a
portcheck = 0
udpchecksum = 1
nfs_socketsize = 600000
nfs_tcp_socketsize = 600000
nfs_setattr_error = 0
nfs_gather_threshold = 4096
nfs_repeat_messages = 0
nfs_udp_duplicate_cache_size = 5000
nfs_tcp_duplicate_cache_size = 5000
nfs_server_base_priority = 0
nfs_dynamic_retrans = 1
nfs_iopace_pages = 0
nfs_max_connections = 0
nfs_max_threads = 3891
nfs_use_reserved_ports = 0
nfs_device_specific_bufs = 1
nfs_server_clread = 1
nfs_rfc1323 = 0
nfs_max_write_size = 32768
nfs_max_read_size = 32768
nfs_allow_all_signals = 0
nfs_v2_pdts = 1
nfs_v3_pdts = 1
nfs_v2_vm_bufs = 10000
nfs_v3_vm_bufs = 10000
nfs_securenfs_authtimeout = 0
nfs_v3_server_readdirplus = 1
lockd_debug_level = 0
statd_debug_level = 0
statd_max_threads = 50
nfs_v4_fail_over_timeout = 0
utf8_validation = 1
nfs_v4_pdts = 1
nfs_v4_vm_bufs = 10000
server_delegation = 1
nfs_auto_rbr_trigger = 0
client_delegation = 1
vcadinet1:/# nfso -L
NAME CUR DEF BOOT MIN MAX UNIT TYPE
DEPENDENCIES
--------------------------------------------------------------------------------
portcheck 0 0 0 0 1 On/Off D
--------------------------------------------------------------------------------
udpchecksum 1 1 1 0 1 On/Off D
--------------------------------------------------------------------------------
nfs_socketsize 600000 600000 600000 40000 1M Bytes D
--------------------------------------------------------------------------------
nfs_tcp_socketsize 600000 600000 600000 40000 1M Bytes D
--------------------------------------------------------------------------------
nfs_setattr_error 0 0 0 0 1 On/Off D
--------------------------------------------------------------------------------
nfs_gather_threshold 4K 4K 4K 512 8K+1 Bytes D
--------------------------------------------------------------------------------
nfs_repeat_messages 0 0 0 0 1 On/Off D
--------------------------------------------------------------------------------
nfs_udp_duplicate_cache_size
5000 5000 5000 5000 100000 Req I
--------------------------------------------------------------------------------
nfs_tcp_duplicate_cache_size
5000 5000 5000 5000 100000 Req I
--------------------------------------------------------------------------------
nfs_server_base_priority 0 0 0 31 125 Pri D
--------------------------------------------------------------------------------
nfs_dynamic_retrans 1 1 1 0 1 On/Off D
--------------------------------------------------------------------------------
nfs_iopace_pages 0 0 0 0 64K-1 Pages D
--------------------------------------------------------------------------------
nfs_max_connections 0 0 0 0 10000 Number D
--------------------------------------------------------------------------------
nfs_max_threads 3891 3891 3891 5 3891 Threads D
--------------------------------------------------------------------------------
nfs_use_reserved_ports 0 0 0 0 1 On/Off D
--------------------------------------------------------------------------------
nfs_device_specific_bufs 1 1 1 0 1 On/Off D
--------------------------------------------------------------------------------
nfs_server_clread 1 1 1 0 1 On/Off D
--------------------------------------------------------------------------------
nfs_rfc1323 0 0 0 0 1 On/Off D
--------------------------------------------------------------------------------
nfs_max_write_size 32K 32K 32K 512 64K Bytes D
--------------------------------------------------------------------------------
nfs_max_read_size 32K 32K 32K 512 64K Bytes D
--------------------------------------------------------------------------------
nfs_allow_all_signals 0 0 0 0 1 On/Off D
--------------------------------------------------------------------------------
nfs_v2_pdts 1 1 1 1 8 PDTs M
--------------------------------------------------------------------------------
nfs_v3_pdts 1 1 1 1 8 PDTs M
--------------------------------------------------------------------------------
nfs_v2_vm_bufs 10000 10000 10000 512 50000 Bufs I
--------------------------------------------------------------------------------
nfs_v3_vm_bufs 10000 10000 10000 512 50000 Bufs I
--------------------------------------------------------------------------------
nfs_securenfs_authtimeout 0 0 0 0 60 Seconds D
--------------------------------------------------------------------------------
nfs_v3_server_readdirplus 1 1 1 0 1 On/Off D
--------------------------------------------------------------------------------
lockd_debug_level 0 0 0 0 10 Level D
--------------------------------------------------------------------------------
statd_debug_level 0 0 0 0 10 Level D
--------------------------------------------------------------------------------
statd_max_threads 50 50 50 1 1000 Threads D
--------------------------------------------------------------------------------
nfs_v4_fail_over_timeout 0 0 0 0 3600 Seconds D
--------------------------------------------------------------------------------
utf8_validation 1 1 1 0 1 On/Off D
--------------------------------------------------------------------------------
nfs_v4_pdts 1 1 1 1 8 PDTs M
--------------------------------------------------------------------------------
nfs_v4_vm_bufs 10000 10000 10000 512 50000 Bufs I
--------------------------------------------------------------------------------
server_delegation 1 1 1 0 1 On/Off D
--------------------------------------------------------------------------------
nfs_auto_rbr_trigger 0 0 0 -1 1M MB D
--------------------------------------------------------------------------------
client_delegation 1 1 1 0 1 On/Off D
--------------------------------------------------------------------------------
n/a means parameter not supported by the current platform or kernel
Parameter types:
S = Static: cannot be changed
D = Dynamic: can be freely changed
B = Bosboot: can only be changed using bosboot and reboot
R = Reboot: can only be changed during reboot
C = Connect: changes are only effective for future socket connections
M = Mount: changes are only effective for future mountings
I = Incremental: can only be incremented
Value conventions:
K = Kilo: 2^10 G = Giga: 2^30 P = Peta: 2^50
M = Mega: 2^20 T = Tera: 2^40 E = Exa: 2^60
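The value conventions above (K = 2^10, M = 2^20, ...) apply to the MIN/MAX columns of 'nfso -L'. As a sketch, a small helper can expand those suffixes to bytes for comparison; the sample values below are taken from the listing above:

```shell
#!/bin/sh
# Convert an 'nfso -L' value with an optional binary K/M/G suffix to bytes.
# Sketch only: assumes the suffix conventions printed by 'nfso -L'.
to_bytes() {
  case "$1" in
    *K) echo $(( ${1%K} * 1024 )) ;;
    *M) echo $(( ${1%M} * 1048576 )) ;;
    *G) echo $(( ${1%G} * 1073741824 )) ;;
    *)  echo "$1" ;;
  esac
}

to_bytes 1M       # upper bound of nfs_socketsize -> 1048576
to_bytes 32K      # default nfs_max_write_size   -> 32768
to_bytes 600000   # plain values pass through    -> 600000
```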
The official IBM documentation is reproduced below.
Commands Reference, Volume 4, n - r
nfso Command
Purpose
Manages Network File System (NFS) tuning parameters.
Syntax
nfso [ -p | -r ] [ -c ] { -o Tunable[ =newvalue ] }
nfso [ -p | -r ] { -d Tunable }
nfso [ -p | -r ] -D
nfso [ -p | -r ] -a [ -c ]
nfso -h [ Tunable ]
nfso -l [ hostname ]
nfso -L [ Tunable ]
nfso -x [ Tunable ]
Note: Multiple flags -o, -d, -x, and -L are allowed.
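The syntax forms above cover the common administrative tasks. Since nfso exists only on AIX, the following sketch prints the typical invocations rather than executing them; the tunable names are taken from the listing earlier in this document:

```shell
#!/bin/sh
# Common nfso invocations, printed rather than executed so the sketch is
# safe to run on a non-AIX host. Drop the 'run' wrapper on a real server.
run() { echo "$@"; }

run nfso -a                        # display current values of all tunables
run nfso -o nfs_max_threads        # display one tunable
run nfso -p -o nfs_rfc1323=1       # change now and persist across reboots
run nfso -d nfs_rfc1323            # reset one tunable to its default
run nfso -L nfs_tcp_socketsize     # list full characteristics of one tunable
```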
Description
Use the nfso command to configure Network File System tuning parameters. The nfso command sets or displays current or next boot values for Network
File System tuning parameters. This command can also make permanent changes or defer changes until the next reboot. Whether the command sets or
displays a parameter is determined by the accompanying flag. The -o flag performs both actions. It can either display the value of a parameter or
set a new value for a parameter.
Understanding the Effect of Changing Tunable Parameters
Extreme care should be taken when using this command. If used incorrectly, the nfso command can make your system inoperable.
Before modifying any tunable parameter, you should first carefully read about all its characteristics in the Tunable Parameters section below, and
follow any Refer To pointer, in order to fully understand its purpose.
You must then make sure that the Diagnosis and Tuning sections for this parameter truly apply to your situation and that changing the value of this
parameter could help improve the performance of your system.
If the Diagnosis and Tuning sections both contain only "N/A", you should probably never change this parameter unless specifically directed by AIX
development.
Flags
-a
Displays the current, reboot (when used in conjunction with -r) or permanent (when used in conjunction with -p) value for all tunable
parameters, one per line in pairs Tunable = Value. For the permanent options, a value is only displayed for a parameter if its reboot and
current values are equal. Otherwise NONE displays as the value.
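The `Tunable = Value` pairs printed by -a are easy to post-process. As a sketch, the following parses a captured listing (the sample lines are an excerpt of the output shown earlier) and looks up a single value with awk:

```shell
#!/bin/sh
# Parse "name = value" pairs captured from 'nfso -a' into lookups.
# Sketch only: /tmp/nfso_a.txt holds an excerpt of the listing above.
cat > /tmp/nfso_a.txt <<'EOF'
portcheck = 0
nfs_socketsize = 600000
nfs_max_threads = 3891
EOF

# Fields split on whitespace: $1 = name, $2 = "=", $3 = value.
get_tunable() {
  awk -v name="$1" '$1 == name { print $3 }' /tmp/nfso_a.txt
}

get_tunable nfs_max_threads   # -> 3891
```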
-c
Changes the output format of the nfso command to colon-delineated format.
-d Tunable
Sets the Tunable variable back to its default value. If a Tunable needs to be changed (that is, it is currently not set to its default value)
and is of type Bosboot or Reboot, or if it is of type Incremental and has been changed from its default value, and -r is not used in
combination, it will not be changed but a warning displays instead.
-D
Sets all Tunable variables back to their default value. If Tunables needing to be changed are of type Bosboot or Reboot, or are of type
Incremental and have been changed from their default value, and the -r flag is not used in combination, they will not be changed but warnings
display instead.
-h [Tunable]
Displays help about Tunable parameter if one is specified. Otherwise, displays the nfso command usage statement.
-L [Tunable]
Lists the characteristics of one or all Tunable, one per line, using the following format:
NAME CUR DEF BOOT MIN MAX UNIT TYPE
DEPENDENCIES
--------------------------------------------------------------------------------
portcheck 0 0 0 0 1 On/Off D
--------------------------------------------------------------------------------
udpchecksum 1 1 1 0 1 On/Off D
--------------------------------------------------------------------------------
nfs_socketsize 600000 600000 600000 40000 1M Bytes D
--------------------------------------------------------------------------------
nfs_tcp_socketsize 600000 600000 600000 40000 1M Bytes D
--------------------------------------------------------------------------------
...
where:
CUR = current value
DEF = default value
BOOT = reboot value
MIN = minimal value
MAX = maximum value
UNIT = tunable unit of measure
TYPE = parameter type: D (for Dynamic), S (for Static), R (for Reboot),
B (for Bosboot), M (for Mount), I (for Incremental), C (for Connect), and d (for Deprecated)
DEPENDENCIES = list of dependent tunable parameters, one per line
-l hostname
Allows a system administrator to release NFS file locks on an NFS server. The hostname variable specifies the host name of the NFS client that
has file locks held at the NFS server. The nfso -l command makes a remote procedure call to the NFS server's rpc.lockd network lock manager to
request the release of the file locks held by the hostname NFS client.
If there is an NFS client that has file locks held at the NFS server and this client has been disconnected from the network and cannot be
recovered, the nfso -l command can be used to release those locks so that other NFS clients can obtain similar file locks. Note: The nfso
command can be used to release locks on the local NFS server only.
-o Tunable[ =newvalue ]
Displays the value or sets Tunable to newvalue. If a tunable needs to be changed (the specified value is different than current value), and is
of type Bosboot or Reboot, or if it is of type Incremental and its current value is bigger than the specified value, and -r is not used in
combination, it will not be changed but a warning displays instead.
When -r is used in combination without a new value, the nextboot value for the Tunable displays. When -p is used in combination without a
newvalue, a value displays only if the current and next boot values for the Tunable are the same. Otherwise NONE displays as the value.
-p
Makes changes apply to both current and reboot values, when used in combination with -o, -d or -D, that is, it turns on the updating of the
/etc/tunables/nextboot file in addition to the updating of the current value. These combinations cannot be used on Reboot and Bosboot type
parameters because their current value cannot be changed.
When used with -a or -o without specifying a new value, values are displayed only if the current and next boot values for a parameter are the
same. Otherwise NONE displays as the value.
-r
Makes changes apply to reboot values when used in combination with -o, -d or -D, that is, it turns on the updating of the
/etc/tunables/nextboot file. If any parameter of type Bosboot is changed, the user is prompted to run bosboot.
When used with -a or -o without specifying a new value, next boot values for tunables display instead of current values.
-x [Tunable]
Lists characteristics of one or all tunables, one per line, using the following (spreadsheet) format:
tunable,current,default,reboot,min,max,unit,type,{dtunable }
where:
current = current value
default = default value
reboot = reboot value
min = minimal value
max = maximum value
unit = tunable unit of measure
type = parameter type: D (for Dynamic), S (for Static), R (for Reboot),
B (for Bosboot), M (for Mount), I (for Incremental),
C (for Connect), and d (for Deprecated)
dtunable = space separated list of dependent tunable parameters
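Because -x emits one comma-separated record per tunable, it is the easiest form to script against. As a sketch, the line below is hypothetical but follows the documented tunable,current,default,reboot,min,max,unit,type,{dtunable} layout:

```shell
#!/bin/sh
# Split an 'nfso -x' spreadsheet record into labelled fields.
# Sketch only: the sample line is made up, matching the documented layout.
line='nfs_max_threads,3891,3891,3891,5,3891,Threads,D,'

echo "$line" | awk -F, '{
  # $1 tunable, $2 current, $3 default, $4 reboot, $8 type
  printf "tunable=%s current=%s default=%s type=%s\n", $1, $2, $3, $8
}'
```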
Any change (with -o, -d, or -D) to a parameter of type Mount results in a message displaying to warn the user that the change is only effective for
future mountings.
Any change (with -o, -d or -D flags) to a parameter of type Connect will result in inetd being restarted, and a message displaying to warn the user
that the change is only effective for future socket connections.
Any attempt to change (with -o, -d, or -D) a parameter of type Bosboot or Reboot without -r, results in an error message.
Any attempt to change (with -o, -d, or -D but without -r) the current value of a parameter of type Incremental with a new value smaller than the
current value, results in an error message.
Tunable Parameters Type
All the tunable parameters manipulated by the tuning commands (no, nfso, vmo, ioo, schedo, and raso) have been classified into these categories:
Dynamic
If the parameter can be changed at any time
Static
If the parameter can never be changed
Reboot
If the parameter can only be changed during reboot
Bosboot
If the parameter can only be changed by running bosboot and rebooting the machine
Mount
If changes to the parameter are only effective for future file systems or directory mounts
Incremental
If the parameter can only be incremented, except at boot time
Connect
If changes to the parameter are only effective for future socket connections
Deprecated
If changing this parameter is no longer supported by the current release of AIX.
For parameters of type Bosboot, whenever a change is performed, the tuning commands automatically prompt the user to ask if they want to execute the
bosboot command. For parameters of type Connect, the tuning commands automatically restart the inetd daemon.
Note that the current set of parameters managed by the nfso command only includes Dynamic, Mount, and Incremental types.
Compatibility Mode
When running in pre 5.2 compatibility mode (controlled by the pre520tune attribute of sys0, see AIX 5.2 compatibility mode), reboot values for
parameters, except those of type Bosboot, are not really meaningful because in this mode they are not applied at boot time.
In pre 5.2 compatibility mode, setting reboot values to tuning parameters continues to be achieved by embedding calls to tuning commands in scripts
called during the boot sequence. Parameters of type Reboot can therefore be set without the -r flag, so that existing scripts continue to work.
This mode is automatically turned ON when a machine is MIGRATED to AIX 5L Version 5.2. For complete installations, it is turned OFF and the reboot
values for parameters are set by applying the content of the /etc/tunables/nextboot file during the reboot sequence. Only in that mode are the -r
and -p flags fully functional. See Kernel Tuning in the AIX 5L Version 5.3 Performance Tools Guide and Reference for details about the new 5.2 mode.
Tunable Parameters
client_delegation
Purpose:
Enables or disables NFS version 4 client delegation support.
Values:
* Default: 1
* Range: 0 or 1
* Type: Dynamic
Diagnosis:
N/A
Tuning:
A value of 1 enables client delegation support. A value of 0 disables client delegation support.
lockd_debug_level
Purpose:
Sets the level of debugging for rpc.lockd.
Values:
* Default: 0
* Useful Range: 0 to 9
* Type: Dynamic
Diagnosis:
N/A
Tuning:
N/A
nfs_allow_all_signals
Purpose:
Specifies that the NFS server adhere to signal handling requirements for blocked locks for the UNIX 95/98 test suites.
Values:
* Default: 0
* Range: 0 or 1
* Type: Dynamic
Diagnosis:
N/A
Tuning:
A value of 1 turns nfs_allow_all_signals on, and a value of 0 turns it off.
nfs_auto_rbr_trigger
Purpose:
Specifies a threshold offset (in megabytes) beyond which a sequential read of an NFS file will result in the pages being released from
memory after the read. This option is ignored when the rbr mount option is in effect.
Values:
* Default: 0 (indicates system determines the threshold)
* Range: -1 (indicates disabled), 0 to max NFS filesize (in MB)
* Type: Dynamic
Diagnosis:
Due to large sequentially read NFS files, vmstat shows a high paging rate and svmon shows a high client page count.
Tuning:
This value should be set to the number of megabytes that should be cached in memory when an NFS file is read sequentially. To prevent
exhaustion of memory with cached file pages, the remaining memory pages beyond this threshold will be released after the memory pages
are read.
nfs_device_specific_bufs (AIX 4.2.1 and later)
Purpose:
This option allows the NFS server to use memory allocations from network devices if the network device supports such a feature.
Values:
* Default: 1
* Range: 0 or 1
* Type: Dynamic
Diagnosis:
N/A
Tuning:
Use of these special memory allocations by the NFS server can positively affect the overall performance of the NFS server. The default
of 1 means the NFS server is allowed to use the special network device memory allocations. If the value of 0 is used, the NFS server
uses the traditional memory allocations for its processing of NFS client requests. These are buffers managed by a network interface that
result in improved performance (over regular mbufs) because no setup for DMA is required on these. Two adapters that support this
include the Micro Channel ATM adapter and the SP2 switch adapter.
nfs_dynamic_retrans
Purpose:
Specifies whether the NFS client should use a dynamic retransmission algorithm to decide when to resend NFS requests to the server.
Values:
* Default: 1
* Range: 0 or 1
* Type: Dynamic
Diagnosis:
N/A
Tuning:
If this function is turned on, the timeo parameter is only used in the first retransmission. With this parameter set to 1, the NFS
client attempts to adjust its timeout behavior based on past NFS server response. This allows for a floating timeout value along with
adjusting the transfer sizes used. All of this is done based on an accumulative history of the NFS server's response time. In most
cases, this parameter does not need to be adjusted. There are some instances where the straightforward timeout behavior is desired for
the NFS client. In these cases, the value should be set to 0 before mounting file systems.
Refer to:
Unnecessary retransmits
nfs_gather_threshold
Purpose:
Sets the minimum size of write requests for which write gathering is done.
Values:
* Default: 4096
* Useful Range: 512 to 8193
* Type: Dynamic
Diagnosis:
If either of the following two situations is observed, tuning nfs_gather_threshold might be appropriate:
* Delays are observed in responding to RPC requests, particularly those where the client is exclusively doing nonsequential writes
or the files being written are being written with file locks held on the client.
* Clients are writing with write sizes < 4096 and write-gather is not working.
Tuning:
If write-gather is to be disabled, change the nfs_gather_threshold to a value greater than the largest possible write. For AIX Version 4
running NFS Version 2, this value is 8192. Changing the value to 8193 disables write gather. Use this for the situation described above
in scenario (1). If write gather is being bypassed due to a small write size, say 1024, as in scenario (2), change the write gather
parameter to gather smaller writes; for example, set to 1024.
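Following the two scenarios above, a change should stay inside the documented 512 to 8193 range (8193 disables write gathering entirely). As a sketch, the helper below range-checks a proposed value and prints, rather than executes, the resulting command:

```shell
#!/bin/sh
# Range-check a proposed nfs_gather_threshold (512..8193 per the docs,
# where 8193 disables write gathering) before applying it.
# Sketch only: the nfso command is printed, not executed.
set_gather_threshold() {
  v="$1"
  if [ "$v" -lt 512 ] || [ "$v" -gt 8193 ]; then
    echo "out of range: $v" >&2
    return 1
  fi
  echo "nfso -o nfs_gather_threshold=$v"
}

set_gather_threshold 8193   # scenario 1: disable write gathering
set_gather_threshold 1024   # scenario 2: gather smaller writes
```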
nfs_iopace_pages (AIX 4.1)
Purpose:
Specifies the number of NFS file pages that are scheduled to be written back to the server through the VMM at one time. This I/O
scheduling control occurs on close of a file and when the system invokes the syncd daemon.
Values:
* Default: 0 (32 before AIX 4.2.1)
* Range: 0 to 65536
* Type: Dynamic
Diagnosis:
N/A
Tuning:
When an application writes a large file to an NFS mounted filesystem, the file data is written to the NFS server when the file is
closed. In some cases, the resources required to write the file to the server may prevent other NFS file I/O from occurring. This
parameter limits the number of 4 KB pages written to the server to the value of nfs_iopace_pages. The NFS client will schedule
nfs_iopace_pages for writing to the server and then waits for these pages to be written to the server before scheduling the next batch
of pages. The default value will usually be sufficient for most environments. Decrease the value if there is heavy contention
for NFS client resources. If there is low contention, the value can be increased. When this value is 0, the default number of
pages written is calculated using a heuristic intended to optimize performance and prevent exhaustion of resources that might prevent
other NFS file I/O from occurring.
nfs_max_connections
Purpose:
Specifies the maximum number of TCP connections allowed into the server.
Values:
* Default: 0 (indicates no limit)
* Range: 0 to 10000
* Type: Dynamic
Diagnosis:
N/A
Tuning:
Limits number of connections into the server in order to reduce load.
nfs_max_read_size
Purpose:
Sets the maximum and preferred read size.
Values:
* Default: 32768 bytes
* Useful Range: 512 to 65536 for NFS V3 over TCP
512 to 61440 for NFS V3 over UDP
512 to 8192 for NFS V2
* Type: Dynamic
Diagnosis:
Useful when all clients need to have changes in the read/write sizes, and it is impractical to change the clients. Default means to use
the values used by the client mount.
Tuning:
Tuning may be required to reduce the V3 read/write sizes when the mounts cannot be manipulated directly on the clients, in particular
during NIM installations on networks where the network is dropping packets with the default 32 KB read/write sizes. In that case, set
the maximum size to a smaller size that works on the network.
It can also be useful where network devices are dropping packets and a generic change is desired for communications with the server.
nfs_max_threads (AIX 4.2.1 and later)
Purpose:
Specifies the maximum number of NFS server threads that are created to service incoming NFS requests.
Values:
* Default: 3891
* Range: 1 to 3891
* Type: Dynamic
Diagnosis:
With AIX 4.2.1, the NFS server is multithreaded. The NFS server threads are created as demand increases for the NFS server. When the NFS
server threads become idle, they will exit. This allows the server to adapt to the needs of the NFS clients. The nfs_max_threads
parameter is the maximum number of threads that can be created.
Tuning:
In general, it does not detract from overall system performance to have the maximum set to something very large because the NFS server
creates threads as needed. However, this assumes that NFS-serving is the primary machine purpose. If the desire is to share the system
with other activities, then the maximum number of threads may need to be set low. The maximum number can also be specified as a
parameter to the nfsd daemon.
Refer to:
Number of necessary nfsd threads
nfs_max_write_size
Purpose:
Allows the system administrator to control the NFS RPC sizes at the server.
Values:
* Default: 32768 bytes
* Useful Range: 512 to 65536 for NFS V3 over TCP
512 to 61440 for NFS V3 over UDP
512 to 8192 for NFS V2
* Type: Dynamic
Diagnosis:
Useful when all clients need to have changes in the read/write sizes, and it is impractical to change the clients. Default means to use
the values used by the client mount.
Tuning:
Tuning may be required to reduce the V3 read/write sizes when the mounts cannot be manipulated directly on the clients, in particular,
during NIM installations on networks where the network is dropping packets with the default 32 KB read/write sizes. In that case, set
the maximum size to a smaller size that works on the network. It can also be useful where network devices are dropping packets and a
generic change is desired for communications with the server.
nfs_repeat_messages (AIX Version 4)
Purpose:
Checks for duplicate NFS messages. This option is used to avoid displaying duplicate NFS messages.
Values:
* Default: 0 (no)
* Range: 0 or 1
* Type: Dynamic
Diagnosis:
N/A
Tuning:
Tuning this parameter does not affect performance.
nfs_v4_fail_over_timeout (AIX 5.3 with 5300-03 and later)
Purpose:
Specifies how long the NFS client will wait (in seconds) before switching to another server when data is replicated and the current
associated server is not accessible. If the default value of 0 is set, the client dynamically determines the timeout as twice the RPC
call timeout that was established at mount time or with nfs4cl. The nfs_v4_fail_over_timeout option is client-wide; if set, the
nfs_v4_fail_over_timeout option overrides the default behavior on all replicated data. This option only applies to NFS version 4.
Value:
* Default: 0
* Range: 0-4294967295
* Type: Dynamic
Diagnosis:
N/A
Tuning:
A value of 0 allows the client to internally determine the timeout value. A positive value overrides the default and specifies the
replication fail-over timeout in seconds for all data accessed by the client.
nfs_rfc1323 (AIX 4.3)
Purpose:
Enables very large TCP window size negotiation (greater than 65535 bytes) to occur between systems.
Values:
* Default: 0
* Range: 0 or 1
* Type: Dynamic
Diagnosis:
N/A
Tuning:
If using the TCP transport between NFS client and server, and both systems support it, this allows the systems to negotiate a TCP window
size in a way that allows more data to be in-flight between the client and server. This increases the throughput potential between
client and server. Unlike the rfc1323 option of the no command, this only affects NFS and not other applications in the system. Value of
0 means this is disabled, and value of 1 means it is enabled. If the no command parameter rfc1323 is already set, this NFS option does
not need to be set.
nfs_server_base_priority
Purpose:
Sets the base priority of nfsd daemons.
Values:
* Default: 65
* Range: 31 to 125
* Type: Dynamic
Diagnosis:
N/A
Tuning:
By default, the nfsd daemons run with a floating process priority. Therefore, as they increase their cumulative CPU time, their priority
changes. This parameter can be used to set a static priority for the nfsd daemons. The value of 0 represents the floating priority
(default). Other values within the acceptable range are used to set the priority of the nfsd daemon when an NFS request is received at
the server. This option can be used if the NFS server is overloading the system (lowering or making the nfsd daemon less favored). It
can also be used if you want the nfsd daemons to be one of the most favored processes on the server. Use caution when setting the parameter
because it can render the system almost unusable by other processes. This situation can occur if the NFS server is very busy and
essentially locks out other processes from having run time on the server.
nfs_server_clread (AIX 4.2.1 and later)
Purpose:
This option allows the NFS server to be very aggressive about the reading of a file. The NFS server can only respond to the specific
NFS-read request from the NFS client. However, the NFS server can read data in the file which exists immediately after the current read
request. This is normally referred to as read-ahead. The NFS server does read-ahead by default.
Values:
* Default: 1
* Range: 0 or 1
* Type: Dynamic
Diagnosis:
In most NFS serving environments, the default value (enabled) for this parameter is appropriate. However, in some situations where the
amount of NFS server memory available for file caching is limited, or where the access pattern of reads over NFS is primarily random,
disabling this option may be appropriate.
Tuning:
With the nfs_server_clread option enabled, the NFS server becomes very aggressive about doing read-ahead for the NFS client. If value is
1, then aggressive read-ahead is done; If value is 0, normal system default read-ahead methods are used. Normal system read-ahead is
controlled by VMM. In AIX 4.2.1, the more aggressive top-half JFS read-ahead was introduced. This mechanism is less susceptible to
read-ahead breaking down due to out-of-order requests (which are typical in the NFS server case). When the mechanism is activated, it
will read an entire cluster (128 KB, the LVM logical track group size).
nfs_setattr_error (AIX 4.2.1 and later)
Purpose:
When enabled, the NFS server ignores setattr requests that are not valid.
Values:
* Default: 0 (disabled)
* Range: 0 or 1
* Type: Dynamic
Diagnosis:
N/A
Tuning:
This option is provided for certain PC applications. Tuning this parameter does not affect performance.
nfs_socketsize
Purpose:
Sets the queue size of the NFS server UDP socket.
Values:
* Default: 600000
* Practical Range: 60000 to sb_max
* Type: Dynamic
Diagnosis:
N/A
Tuning:
Increase the size of the nfs_socketsize variable when netstat reports packets dropped due to socket buffer overflows for UDP, and
increasing the number of nfsd daemons has not helped.
Refer to:
TCP/IP tuning guidelines for NFS performance section in NFS performance monitoring and tuning.
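The diagnosis above relies on spotting UDP socket buffer overflows in netstat output. As a sketch, the snippet below scans a captured extract for that counter; the sample figures are invented for illustration:

```shell
#!/bin/sh
# Check a captured 'netstat -s' extract for UDP socket buffer overflows,
# the symptom that suggests raising nfs_socketsize.
# Sketch only: the sample counters below are made up.
cat > /tmp/netstat_udp.txt <<'EOF'
udp:
        1520 datagrams received
        37 socket buffer overflows
EOF

overflows=$(awk '/socket buffer overflows/ { print $1 }' /tmp/netstat_udp.txt)
if [ "$overflows" -gt 0 ]; then
  echo "consider raising nfs_socketsize (overflows: $overflows)"
fi
```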
nfs_tcp_duplicate_cache_size (AIX 4.2.1 and later)
Purpose:
Specifies the number of entries to store in the NFS server's duplicate cache for the TCP network transport.
Values:
* Default: 5000
* Range: 1000 to 100000
* Type: Incremental
Diagnosis:
N/A
Tuning:
The duplicate cache size cannot be decreased. Increase the duplicate cache size for servers that have a high throughput capability. The
duplicate cache is used to allow the server to correctly respond to NFS client retransmissions. If the server flushes this cache before
the client is able to retransmit, then the server may respond incorrectly. Therefore, if the server can process 1000 operations before a
client retransmits, the duplicate cache size must be increased.
Calculate the number of NFS operations that are being received per second at the NFS server and multiply this by 4. The result is a
duplicate cache size that should be sufficient to allow correct response from the NFS server. The operations that are affected by the
duplicate cache are the following: setattr(), write(), create(), remove(), rename(), link(), symlink(), mkdir(), rmdir().
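The sizing rule above (operations per second times 4) is simple arithmetic. As a sketch, with a made-up sample rate (on a real server it would come from two spaced 'nfsstat -s' samples):

```shell
#!/bin/sh
# Duplicate cache sizing rule from the documentation: NFS operations
# received per second, multiplied by 4.
# Sketch only: ops_per_sec is an invented sample figure.
ops_per_sec=1800
suggested=$(( ops_per_sec * 4 ))
current=5000

echo "suggested nfs_tcp_duplicate_cache_size: $suggested"
# Incremental tunable: it can only grow, so only print a change if larger.
if [ "$suggested" -gt "$current" ]; then
  echo "nfso -o nfs_tcp_duplicate_cache_size=$suggested"
fi
```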
nfs_tcp_socketsize (AIX 4.2.1 and later)
Purpose:
Sets the queue size of the NFS TCP socket. The queue size is specified in number of bytes. The TCP socket is used for buffering NFS RPC
packets on send and receive. This option reserves, but does not allocate, memory for use by the send and receive socket buffers of the
socket.
Values:
* Default: 600000
* Practical Range: 60000 to sb_max
* Type: Dynamic
Diagnosis:
Poor sequential read or write performance between an NFS server and client when both of the following situations exist:
* A large (32 KB or greater) RPC size is being used.
* Communication between the server and client is over a network link using a large (9000-byte or greater) MTU size.
Tuning:
Do not set the nfs_tcp_socketsize value to less than 60000. The default value should be adequate for the vast majority of environments.
This value allows enough space for the following functions:
* Buffer incoming data without limiting the TCP window size.
* Buffer outgoing data without limiting the speed at which NFS can write data to the socket.
The value of the nfs_tcp_socketsize option must be less than the sb_max option, which can be manipulated by the no command.
Refer to:
NFS performance monitoring and tuning
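The constraint above can be sketched as a quick sanity check. The sample values below are hypothetical; on AIX the live settings would be read with `no -o sb_max` and `nfso -o nfs_tcp_socketsize`:

```shell
# Hypothetical sample values; on AIX, read the live settings with:
#   no -o sb_max
#   nfso -o nfs_tcp_socketsize
sb_max=1048576
nfs_tcp_socketsize=600000

# nfs_tcp_socketsize must stay below sb_max, or the socket buffers
# cannot grow to the requested size.
if [ "$nfs_tcp_socketsize" -lt "$sb_max" ]; then
    echo "ok: nfs_tcp_socketsize ($nfs_tcp_socketsize) is below sb_max ($sb_max)"
else
    echo "raise sb_max with the no command before increasing nfs_tcp_socketsize"
fi
```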
nfs_udp_duplicate_cache_size (AIX 4.2.1 and later)
Purpose:
Specifies the number of entries to store in the NFS server's duplicate cache for the UDP network transport.
Values:
* Default: 5000
* Range: 1000 to 100000
* Type: Incremental
Diagnosis:
N/A
Tuning:
The duplicate cache size cannot be decreased. Increase the duplicate cache size for servers that have a high throughput capability. The
duplicate cache is used to allow the server to respond correctly to NFS client retransmissions. If the server flushes this cache before
the client is able to retransmit, the server may respond incorrectly. Therefore, if the server can process more operations than the cache
holds before a client retransmits, the duplicate cache size must be increased.
Calculate the number of NFS operations that are being received per second at the NFS server and multiply this by 4. The result is a
duplicate cache size that should be sufficient to allow correct response from the NFS server. The operations that are affected by the
duplicate cache are the following: setattr(), write(), create(), remove(), rename(), link(), symlink(), mkdir(), rmdir().
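The sizing rule above can be expressed as a short calculation. The ops-per-second figure below is a hypothetical sample; on a real server it would be derived from successive snapshots of `nfsstat -s`:

```shell
# Hypothetical peak load; on AIX this figure would come from deltas
# between successive `nfsstat -s` snapshots.
ops_per_sec=2000
needed=$((ops_per_sec * 4))     # sizing rule: observed ops/sec multiplied by 4
default=5000                    # default nfs_udp_duplicate_cache_size

if [ "$needed" -gt "$default" ]; then
    echo "increase nfs_udp_duplicate_cache_size to $needed"
else
    echo "default of $default is sufficient"
fi
```

With this sample load the result is 8000, above the default, so the cache would be raised (for example with `nfso -o nfs_udp_duplicate_cache_size=8000`); remember the value cannot later be decreased without a reboot.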
nfs_use_reserved_ports (AIX 4.2.1 and later)
Purpose:
Specifies whether the NFS client uses a nonreserved IP port number when communicating with the NFS server.
Values:
* Default: 0
* Range: 0 or 1
* Type: Dynamic
Diagnosis:
N/A
Tuning:
A value of 0 causes the NFS client to use a nonreserved IP port number when it communicates with the NFS server.
nfs_v2_pdts
Purpose:
Sets the number of tables for memory pools used by the biods for NFS Version 2 mounts.
Values:
* Default: 1
* Range: 1 to 8
* Type: Mount
Diagnosis:
Run vmstat -v and look for non-zero values in the "client filesystem I/Os blocked with no fsbuf" field.
Tuning:
Increase the number until the blocked I/O count no longer increments during the workload. The number may need to be increased in
conjunction with nfs_v2_vm_bufs.
Note: The bufs option must be set prior to pdts.
nfs_v2_vm_bufs
Purpose:
Sets the number of initial free memory buffers used for each NFS version 2 Paging Device Table (pdt) created after the first table. The
very first pdt has a set value of 1000 or 10000, depending on memory size. This initial value is also the default value of each newly
created pdt. Note: Prior to AIX 5.2, running nfs_v2_vm_bufs would not affect any previously established pdt. In AIX 5.2 and any
subsequent releases, changing nfs_v2_vm_bufs will also affect the size of the old pdt if there are no current NFS version 2 mounts.
Values:
* Default: 1000
* Range: 1000 to 50000
* Type: Incremental
Diagnosis:
Run vmstat -v and look for non-zero values in the "client filesystem I/Os blocked with no fsbuf" field.
Tuning:
Increase the number until the blocked I/O count no longer increments during the workload. The number may need to be increased in
conjunction with nfs_v2_pdts.
Note: The bufs option must be set prior to pdts.
nfs_v3_pdts
Purpose:
Sets the number of tables for memory pools used by the biods for NFS Version 3 mounts.
Values:
* Default: 1
* Range: 1 to 8
* Type: Mount
Diagnosis:
Run vmstat -v and look for non-zero values in the "client filesystem I/Os blocked with no fsbuf" field.
Tuning:
Increase the number until the blocked I/O count no longer increments during the workload. The number may need to be increased in
conjunction with nfs_v3_vm_bufs.
Note: The bufs option must be set prior to pdts.
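The iterative procedure above can be modeled as a minimal sketch. The counter deltas are invented sample values; on a real system each one would come from rerunning vmstat -v under load after a change:

```shell
# Starting points are the documented defaults.
bufs=1000    # nfs_v3_vm_bufs
pdts=1       # nfs_v3_pdts

# Invented sample deltas of the blocked-I/O counter between vmstat -v runs.
for blocked in 250 120 0; do
    if [ "$blocked" -gt 0 ]; then
        bufs=$((bufs * 2))                       # raise bufs first (required order)
        [ "$pdts" -lt 8 ] && pdts=$((pdts + 1))  # then pdts, capped at the max of 8
    fi
done
echo "nfs_v3_vm_bufs=$bufs nfs_v3_pdts=$pdts"
```

On a live system each step would be applied with `nfso -o nfs_v3_vm_bufs=...` followed by `nfso -o nfs_v3_pdts=...`; because pdts is a Mount-type tunable, its new value takes effect only for mounts established after the change.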
nfs_v3_server_readdirplus (AIX 5.2 and later)
Purpose:
Enables or disables the use of the NFS V3 READDIRPLUS operation on the NFS server.
Values:
* Default: 1 (enabled)
* Range: 0 to 1
* Type: Dynamic
Diagnosis:
The READDIRPLUS operation adds overhead when reading very large directories in NFS-mounted filesystems using NFS V3 mounts, which can
cause excessive CPU consumption by the nfsd threads, and slow response times to commands such as ls by an NFS client.
Tuning:
Disabling the use of the READDIRPLUS operation will help reduce the overhead when reading very large directories over NFS V3. However,
note that this is NOT compliant with the NFS Version 3 standard. Most NFS V3 clients will automatically fall back to using the READDIR
operation, but if problems arise the default value of this option should be restored.
nfs_v3_vm_bufs
Purpose:
Sets the number of initial free memory buffers used for each NFS version 3 Paging Device Table (pdt) created after the first table. The
very first pdt has a set value of 1000 or 10000, depending on memory size. This initial value is also the default value of each newly
created pdt. Note: Prior to AIX 5.2, running nfs_v3_vm_bufs would not affect any previously established pdt. In AIX 5.2 and any
subsequent releases, changing nfs_v3_vm_bufs will also affect the size of the old pdt if there are no current NFS version 3 mounts.
Values:
* Default: 1000
* Range: 1000 to 50000
* Type: Incremental
Diagnosis:
Run vmstat -v and look for non-zero values in the "client filesystem I/Os blocked with no fsbuf" field.
Tuning:
Increase the number until the blocked I/O count no longer increments during the workload. The number may need to be increased in
conjunction with nfs_v3_pdts.
Note: The bufs option must be set prior to pdts.
nfs_v4_pdts
Purpose:
Sets the number of tables for memory pools used by the biods for NFS Version 4 mounts.
Values:
* Default: 1
* Range: 1 to 8
* Type: Mount
Diagnosis:
Run vmstat -v and look for non-zero values in the "client filesystem I/Os blocked with no fsbuf" field.
Tuning:
Increase the number until the blocked I/O count no longer increments during the workload. The number might need to be increased in
conjunction with nfs_v4_vm_bufs.
Note: The bufs option must be set prior to pdts.
nfs_v4_vm_bufs
Purpose:
Sets the number of initial free memory buffers used for each NFS version 4 Paging Device Table (pdt) created after the first table. The
very first pdt has a set value of 1000 or 10000, depending on memory size. This initial value is also the default value of each newly
created pdt. Note: Prior to AIX 5.2, running nfs_v4_vm_bufs would not affect any previously established pdt. In AIX 5.2 and any
subsequent releases, changing nfs_v4_vm_bufs will also affect the size of the old pdt if there are no current NFS version 4 mounts.
Values:
* Default: 1000
* Range: 1000 to 50000
* Type: Incremental
Diagnosis:
Run vmstat -v and look for non-zero values in the "client filesystem I/Os blocked with no fsbuf" field.
Tuning:
Increase the number until the blocked I/O count no longer increments during the workload. The number might need to be increased in
conjunction with nfs_v4_pdts.
Note: The bufs option must be set prior to pdts.
portcheck
Purpose:
Checks whether an NFS request originated from a privileged port.
Values:
* Default: 0
* Range: 0 or 1
* Type: Dynamic
Diagnosis:
N/A
Tuning:
A value of 0 disables the port checking that is done by the NFS server. A value of 1 directs the NFS server to perform port checking on
incoming NFS requests. This is a configuration decision with minimal performance consequences.
server_delegation
Purpose:
Enables or disables NFS version 4 server delegation support.
Values:
* Default: 1
* Range: 0 or 1
* Type: Dynamic
Diagnosis:
N/A
Tuning:
A value of 1 enables delegation support. A value of 0 disables delegation support. Server delegation can also be controlled by using the
/etc/exports file and exportfs.
statd_debug_level
Purpose:
Sets the level of debugging for rpc.statd.
Values:
* Default: 0
* Useful Range: 0 to 9
* Type: Dynamic
Diagnosis:
N/A
Tuning:
N/A
statd_max_threads
Purpose:
Sets the maximum number of threads used by rpc.statd.
Values:
* Default: 50
* Useful Range: 1 to 1000
* Type: Dynamic
Diagnosis:
The rpc.statd is multithreaded so that it can reestablish connections with remote machines in a concurrent manner. The rpc.statd threads
are created as demand increases, usually because rpc.statd is trying to reestablish a connection with a machine that it cannot contact.
When the rpc.statd threads become idle, they will exit. The statd_max_threads parameter is the maximum number of threads that can be
created.
Tuning:
N/A
udpchecksum
Purpose:
Turns on or off the generation of checksums on NFS UDP packets.
Values:
* Default: 1
* Range: 0 or 1
* Type: Dynamic
Diagnosis:
N/A
Tuning:
Make sure this value is set to 1 (on) in any network where packet corruption might occur. Slight performance gains can be realized by
turning it off, but at the expense of an increased chance of data corruption.
utf8 (AIX 5.3 and later)
Purpose:
This option allows NFS V4 to perform UTF-8 checking.
Values:
* Default: 1
* Range: 0 or 1
* Type: Dynamic
Diagnosis:
N/A
Tuning:
A value of 1 turns on UTF-8 checking of file names. A value of 0 turns off UTF-8 checking.
utf8_validation
Purpose:
Enables checking of file names for the NFS version 4 client and server to ensure they conform to the UTF-8 specification.
Values:
* Default: 1
* Range: 0 or 1
* Type: Dynamic
Diagnosis:
N/A
Tuning:
A value of 1 turns on UTF-8 checking of file names. A value of 0 turns it off.
Examples
1 To set the portcheck tunable parameter to a value of zero, type:
nfso -o portcheck=0
2 To set the udpchecksum tunable parameter to its default value of 1 at the next reboot, type:
nfso -r -d udpchecksum
3 To print, in colon-delimited format, a list of all tunable parameters and their current values, type:
nfso -a -c
4 To list the current and reboot values, ranges, units, types, and dependencies of all tunable parameters managed by the nfso command, type:
nfso -L
5 To display help information on nfs_tcp_duplicate_cache_size, type:
nfso -h nfs_tcp_duplicate_cache_size
6 To permanently turn off nfs_dynamic_retrans, type:
nfso -p -o nfs_dynamic_retrans=0
7 To list the reboot values for all Network File System tuning parameters, type:
nfso -r -a
8 To list, in spreadsheet format, the current and reboot values, ranges, units, types, and dependencies of all tunable parameters managed by the
nfso command, type:
nfso -x
Related Information
The netstat command, no command, vmo command, ioo command, raso command, schedo command, tunchange command, tunsave command, tunrestore command,
tuncheck command, and tundefault command.
Network File System.
Transmission Control Protocol/Internet Protocol.
NFS statistics and tuning parameters.
NFS commands.
Kernel Tuning in AIX 5L Version 5.3 Performance Tools Guide and Reference
AIX 5.2 compatibility mode.