Managers,
Stay a while and do not breathe... It seems all the ado (read: marketing)
DEC was making about the integrity AdvFS brings to OSF is only a matter
of luck (read: statistics).
My major concern for now is how the AdvFS METADATA is consistently
written to stable storage when the AdvFS log buffers are merely
"flushed - to disk - more frequently - than normal buffers".
Please read the mail below carefully and reply with your input.
I'll summarize.
Thanks
Lucien Hercaud
former DEC Unix support
former DEC supporter
_________________________ Forwarding Separator ____________________________
Subject: Re[2]: Log #91585 - datalogging update
Author: Keith MCCABE at zz.london.1489
Date: 22/11/1996 09:22
I completely agree with LH.
Steve says in his message that the confusion appears to be due to an
inaccurate man page.
To quote the chfile man page, with data logging on:
'... the file system guarantees transaction-like recovery of data
written to files with data logging engaged.'
and what we hear from DEC is that, in fact, it just flushes the cache a bit
more often!!! There is no confusion: DEC are lying!
They've got to be kidding here. I'm really amazed by the messages
coming from DEC.
Cheers
KSM
______________________________ Reply Separator _________________________________
Subject: Re: Log #91585 - datalogging update
Author: Lucien HERCAUD at PB_PARIS
Date: 21/11/96 19:35
All,
This is 100% clear to me.
It now means we have to either:
1. move all Sybase devices to raw LSM volumes, or
2. add the "-o sync" flag when mounting the corresponding AdvFS fileset;
otherwise Sybase data integrity is NOT GUARANTEED!
Bravo DEC (sic!)
... and thanks for the man page and all the marketing around it.
____________________________ Reply Separator ________________________________
Subject: Log #91585 - datalogging update
Author: Stephen Carpenter <sac_at_uvo.dec.com> at internet
Date: 21/11/1996 13:07
Keith,
I forwarded your concerns about turning datalogging off to engineering and
they have responded with some interesting information about what chfile will and
will not do:
"chfile -l on" DOES:
* Improve the probability that the data from a write() system call
is actually on disk when the write() system call returns to the
caller. Rather than being buffered in the normal way, data written
to files which have data logging enabled is written to the AdvFS log
buffers. Since the AdvFS log is flushed more frequently than
the typical 30-second sync done by the "update" process, the
probability of the data from the write() being on disk if the
system crashes is somewhat increased. Note, though, that this is
only a probability since the AdvFS log itself is not flushed after
every write to it!
"chfile -l on" DOES NOT:
* Guarantee that the data from a write() system call is actually
on disk when the write() system call returns to the caller.
To get such a guarantee, the file must be opened with the
O_SYNC flag. Alternatively, the application could call
fsync() to make sure that all buffered data for a particular
file is on disk.
* Keep the files consistent from a filesystem perspective in the
event of a crash. AdvFS already does that through its metadata
logging. Using chfile adds no further safeguard.
* Keep the files consistent from an application (eg. Sybase)
perspective. That is up to the application. Most databases
do this by either opening the files in synchronous mode (O_SYNC)
or by periodically calling fsync(). Also, most databases
keep their own transaction log to maintain inter-file consistency.
Therefore "data logging" does not but either improved filesystem robustness nor
data integrity guarantees.
This misunderstanding appears to be due to an inaccurate man page
which will be corrected.
Stephen.
_____________________________________________________________________
// \\
// Stephen Carpenter "One inode short of a file system" \\
// \\
\\ UNIX Guru sac_at_uvo.dec.com //
\\ Digital Equipment Corporation //
\\_____________________________________________________________________//
Received on Fri Nov 22 1996 - 17:47:05 NZDT