Tru64 Managers
Thanks for the replies. Here is a summary:
- You can't limit "update" -- it runs entirely within the kernel.
- Try the "smoothsync" feature (see description below).
- It's possible (but not likely) that fragmentation is causing more
"update" activity.
Thanks again for the assistance.
Robert
NOTE: Below, you will find the reply messages that I received.
....................
"update" is ENTIRELY inside the kernel. You can't limit it. What it
is doing is critical to the correct functioning of the system. There
are some things you can do to fine tune how your system uses things
like the unified buffer cache (UBC), which is used to cache file data
in memory; if you've got LOTS of open files being modified constantly,
you might want to force the updates of file pages to happen a bit more
frequently, and this may well smooth out the activity and make things
SEEM a bit smoother. AdvFS defragging has no real impact on this, but
badly fragmented files MIGHT lead to lots of disk head movement during
updates/synchs, and that could make the system seem sluggish compared
to having a different file data page layout.
....................
The easy answer to whether update(8) can be limited is
no. The more complex answer is maybe.
The main job of update is to run the sync(2) system call
every so often. This goes through the buffer cache and
starts writing all the buffers that have dirty data. The
CPU usage probably happens then. To limit the amount of
time needed to do this, you'd simply have to have fewer
dirty buffers. Much easier said than done.
There is a feature called "smooth sync" that I believe
allows spreading out this load more evenly. It may be
mentioned in the tuning guide or some other system
management manual.
Individual applications may be able to reduce their impact
on the buffer cache by limiting how much dirty cache they
create. For example, doing their own fsync(2) every so
often.
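As a rough illustration of that idea (not from the original post -- the file name and record format are made up), a writer can flush its own data in small batches so the periodic sync finds fewer dirty buffers:

```shell
#!/bin/sh
# Hypothetical sketch: append records to our own file and flush
# every 10 writes, so the periodic sync(2) from update finds fewer
# dirty buffers. A C program would call fsync(2) on its own file
# descriptor; from the shell the closest stand-in is sync(1),
# which flushes everything.
: > app.log                 # start with an empty file
i=1
while [ "$i" -le 100 ]; do
    printf 'record %d\n' "$i" >> app.log
    if [ $((i % 10)) -eq 0 ]; then
        sync                # push dirty buffers out in small batches
    fi
    i=$((i + 1))
done
```

The trade-off is more (smaller) I/O operations in exchange for a smaller backlog at each sync interval.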
I can imagine circumstances where a badly fragmented file
system would cause update to work harder. If I/O aggregation
occurs at the cache, then fragmentation might create more
buffers where more contiguous data would allow fewer. If
the aggregation is done below that point, then update(8)
would still have as many buffers to handle. A defragmented
file system would simply allow for less downstream I/O.
......................
not not not not not a good idea - this keeps your disk copy of
"stuff" up to date - if you crashed before they sync you'd lose
data in a big way. there is no reason why it hitting 30-50% is
bad, as long as it finishes in a second or two - is this not what
you're seeing?
...............
I'm using an older system than you are, but we are
running a real-time application that involves critical
timelines. The update process was causing problems,
so when the RT software runs, a script first finds the
update process with ps -ef | grep 'update', pipes the
output to an awk command to isolate the process number,
and then does a kill -9 on that process. For the duration
of the software run, until update is started again, it
remains dead. If necessary I can find the exact
command we use.
command we use.
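A sketch of such a script (the original poster did not include the exact command, so this is a reconstruction, not their script; the '[u]pdate' bracket trick keeps grep from matching its own process-table entry):

```shell
#!/bin/sh
# Reconstruction of the approach described above.
# WARNING: while update is dead, dirty data is not being flushed
# every 30 seconds, so a crash during the run can lose data badly.
pid=$(ps -ef | grep '/sbin/[u]pdate' | awk '{ print $2 }')
if [ -n "$pid" ]; then
    kill -9 $pid
fi
# ... run the real-time software here ...
# then restart the flush daemon
if [ -x /sbin/update ]; then
    /sbin/update
fi
```

As the earlier reply warns, this is risky: nothing guarantees the machine stays up long enough for the restarted update to flush what accumulated during the run.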
................
The update daemon is not tunable.
Please note that both UFS and AdvFS support smoothsync. With smoothsync
enabled, the update daemon will bypass UFS or AdvFS filesystems.
Which filesystems are mounted on your system? Is smoothsync enabled on
your system? Perhaps this would help the problem you are seeing.
Smoothsync typically gets enabled from the following entry in /etc/inittab:
smsync:23:wait:/sbin/sysconfig -r vfs smoothsync_age=30 > /dev/null 2>&1
From the V5.1A manpage for sys_attrs_vfs:
* smoothsync_age
Sets the amount of time, in seconds, that modified (dirty) pages can
age before they must be written to disk.
Default value: 30 (seconds)
Minimum value: 0
Maximum value: 60
You can modify this attribute at run time.
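The inittab entry above sets the value at boot; assuming the same sysconfig syntax (the -q query form is standard on Tru64, but verify against your own release), you can inspect and change it on a running system:

```shell
# Query the current value of the vfs smoothsync_age attribute
/sbin/sysconfig -q vfs smoothsync_age

# Change it at run time (seconds a dirty page may age; 0 disables smoothsync)
/sbin/sysconfig -r vfs smoothsync_age=30
```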
Smoothsync'ing allows each dirty filesystem page to age for a specified
time period before getting pushed to disk. This increases the opportunity
for frequently modified pages to be found in the cache, thus decreasing
the net I/O load. As pages get enqueued to a device after having aged for
a specified period, as opposed to getting flushed en masse by the update
daemon, spikes in which large numbers of dirty pages get locked on a
device queue are minimized, thus decreasing latencies resulting from
requests for either a locked page or the device. (I/O throttling further
addresses this concern.)
I suspect that defragging would decrease the number of I/O operations, but
not the cpu usage of the update daemon; with or without defragging, the
same set of dirty buffers get processed. With defragging, additional
consolidation within that set of buffers is more likely.
..................
-----Original Message-----
Tru64 Managers:
A user of our Tru64 5.0 system wants to know: is there anything we can do to
control/limit the "update" process? When it runs every 30 seconds or so,
it's taking 25-50% of the processor with it.
It doesn't sound like a "tunable" process to me, but I thought I'd check
with this list.
Also - could the lack of running AdvFS defragging cause Update to run more
frequently/more intensively?
Thanks,
Robert
Alliance, Ohio
----------------------------
/sbin/update
DESCRIPTION
The update command flushes data from memory to disk every 30 seconds.
This ensures that the file system is up to date in the event of a system
crash. This command is provided as a statically-linked executable in
/sbin.
----------------------------
Received on Thu Oct 18 2001 - 13:45:01 NZDT