2nd try
Managers,
Stay a while and do not breathe... It seems all the ado (read: marketing)
DEC was making about the integrity AdvFS brings to OSF is only a matter
of luck (read: statistics).
My major concern for now is how the AdvFS METADATA is consistently
written to stable storage when the AdvFS log buffers are merely "flushed
to disk more frequently than normal buffers".
Please read the mails below carefully and send me your input.
I'll summarize.
Thanks
Lucien Hercaud
former DEC Unix support
former DEC supporter
+------------------------------------------------------------------------
Author : Stephen Carpenter <sac_at_uvo.dec.com> at internet Date :
21/11/1996 13:07
Keith,
I forwarded your concerns about turning data logging off to engineering,
and they have responded with some interesting information about what
chfile will and will not do:
"chfile -l on" DOES:
* Improve the probability that the data from a write() system call
is actually on disk when the write() system call returns to the caller.
Rather than being buffered in the normal way, data written to files which
have data logging enabled is written to the AdvFS log buffers. Since the
AdvFS log is flushed more frequently than the typical 30-second sync done
by the "update" process, the probability of the data from the write()
being on disk if the system crashes is somewhat increased. Note, though,
that this is only a probability, since the AdvFS log itself is not
flushed after every write to it!
"chfile -l on" DOES NOT:
* Guarantee that the data from a write() system call is actually
on disk when the write() system call returns to the caller. To get such a
guarantee, the file must be opened with the O_SYNC flag. Alternatively,
the application can call fsync() to make sure that all buffered data for
a particular file is on disk.
* Keep the files consistent from a filesystem perspective in the
event of a crash. AdvFS already does that through its metadata logging.
Using chfile adds no further safeguard.
* Keep the files consistent from an application (e.g. Sybase)
perspective. That is up to the application. Most databases do this either
by opening their files in synchronous mode (O_SYNC) or by periodically
calling fsync(). Also, most databases keep their own transaction log to
maintain inter-file consistency.
Therefore "data logging" provides neither improved filesystem robustness
nor additional data integrity guarantees.
This misunderstanding appears to be due to an inaccurate man page which
will be corrected.
Stephen Carpenter sac_at_uvo.dec.com Digital Equipment Corporation
+-----------------------------------------------------------------------
Author: Lucien HERCAUD at PB_PARIS
Date: 21/11/96 19:35
All,
This is 100% clear to me.
It now means we have to either:
1. move all Sybase devices to raw LSM volumes, or
2. add the O_SYNC flag when opening the corresponding AdvFS FILES
(who's going to modify the Sybase code?). In this case, will the write()
return when the data is on disk, or when the data is queued to the disk
driver (as for UFS buffer-cache synchronous writes)?
Otherwise Sybase data integrity is NOT GUARANTEED!
+-------------------------------------------------------------------------
Author : Keith MCCABE at zz.london.1489 Date : 22/11/1996 09:22
I completely agree with LH.
Steve says in his message that the confusion appears to be due to an
inaccurate man page.
To quote the chfile man page, for when data logging is on:
'... the file system guarantees transaction-like recovery of data written
to files with data logging engaged.'
And what we hear from DEC is that, in fact, it just flushes the cache a
bit more often!!! There is no confusion. DEC are lying!
They've got to be kidding here. I'm really amazed by the messages coming
from DEC.
Cheers
KSM
Received on Mon Nov 25 1996 - 17:38:58 NZDT