
TransBase Relational Database System

Version 5.1.1

Disk Recovery in TransBase
Tutorial and Reference Guide




Copyright 1987 - 1999 by:

TransAction Software GmbH
Gustav-Heinemann-Ring 109
D-81739 Munich
Germany
Telephone: 0 89 / 6 27 09 0
Telefax: 0 89 / 6 27 09 11
Electronic Mail: tas@transaction.de

Introduction to TransBase Disk Recovery

Platform Availability

Relationship and Comparison to the Utilities tbarc and tbtar

Disk Recovery is a Database Property

Data in the Disk Recovery Scheme

Processes in the Disk Recovery Scheme

The Database Life Cycle

Producing Logfiles

Periodic Dumping

Principles of Database Reconstruction

Fuzzy Dumping

Disk Recovery and CD-ROM Databases

Commands for Disk Recovery

Switching Disk Recovery Logging On

Switching Disk Recovery Logging Off

Dumping the Database

Example for a Dump Script

A Note about the First Dump

The Command tbdump

Restoring the Database

Example and Further Explanations

Saving Logfiles to Tape

Old Dumps and Older Database Versions

What to do when Logfiles are Lost


Introduction to TransBase Disk Recovery

From Version 4.1 on, TransBase offers a mechanism to recover a database from disk failures. With this mechanism, a TransBase database can be reconstructed after physical damage (disk crashes), inadvertent deletion of files, etc. In general, disk recovery in this context means the reconstruction of a database whose data has been corrupted or lost.

The underlying principle is a dump and logging mechanism. A database dump is essentially a copy of all database files (a snapshot) which represents the database state at a certain point in time. Logging means that all database updates are not only applied to the database files but are additionally recorded in separate logfiles. If the database files are lost or corrupted, the last consistent database state can be reconstructed from the dump and the logfiles, provided that these files are still available.

It is important to distinguish Disk Recovery from Transaction Recovery (Crash Recovery). The latter is a mechanism that protects a database from corruption by machine crashes, system crashes or transaction failures. TransBase, in the current as well as in all former versions, automatically guarantees transaction recovery for every database. If the machine or system crashes, TransBase undoes the effects of all interrupted transactions such that the most recent consistent database state is installed. The effects of already committed transactions are preserved.

Transaction Recovery, however, cannot compensate for a physically corrupted disk or diskfile. The new TransBase Disk Recovery feature can be used to tolerate such failures.

Platform Availability

The disk recovery functionality is only available on Unix platforms.

Relationship and Comparison to the Utilities tbarc and tbtar

The application scope of TransBase Disk Recovery should be carefully distinguished from that of the TransBase utilities 'tbarc' and 'tbtar'. The utility tbarc is a tool which writes the contents of a database into files (a so-called tbarc 'archive'). A tbarc archive consists of text files and thus is machine independent. It can be used, for example, to port a TransBase database from one machine to another or to rebuild the database for optimal disk space allocation. However, if changes are made to the database, the tbarc archive becomes obsolete, i.e. there is no automatic way to take the tbarc archive and rerun the changes made to the database since the archive was created.

The same holds analogously for tbtar.

In contrast to tbarc and tbtar, the dump of the Disk Recovery utility is binary (machine dependent). Thus it cannot be used for a database migration. However, together with the logfiles, it can be used to reconstruct the current database state.

Disk Recovery is a Database Property

Disk Recovery incurs some overhead in time and space. Therefore it is not a property of the TransBase system itself but an optional feature of each TransBase database. When a database is created, disk recovery is switched off by default. If a database is considered important enough to pay the overhead of Disk Recovery, then Disk Recovery Logging can be switched on at an arbitrary point in time. Normally this is done after the database has been built up and before normal transaction processing starts.

Data in the Disk Recovery Scheme

Dump and Logging Data:

A dump consists of the following files:

- The file tbdesc, a copy of the equally named database description file which resides in the database directory.

- The file fentry.db, an excerpt of the file f.db which resides in the TRANSBASE directory.

- The database diskfile(s); their names are chosen by the database administrator at database creation time, the defaults being tbdsk001, tbdsk002, etc.

- A file named dumpdesc, containing additional information about the dump, such as the dump time.
This is a human-readable text file which can also serve to inform the database administrator about the dump date.

Logfiles are written by the tbkernel and are named axxxxxxx.log or bxxxxxxx.log, where xxxxxxx is a 7-digit number.
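For illustration, a complete dump directory and a drlog directory might look as follows (all names are examples; the diskfile names depend on the database creation parameters):

$ ls dumpdir
dumpdesc  fentry.db  tbdesc  tbdsk001  tbdsk002
$ ls drlogdir
a0000001.log  a0000002.log  a0000003.log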

Processes in the Disk Recovery Scheme

Three processes are involved in the disk recovery scheme:

The TransBase kernel 'tbkernel' (writes the logfiles),
The process 'tbdump' (generates a dump),
The disk recovery process (a collection of scripts and processes).

The processes are described in greater detail below.

The Database Life Cycle

The following describes a typical scenario for a database which needs protection against loss by disk failures. The sample database is called sampledb.

In the first step, the database is created (tbadmin -c sampledb .. ). At creation time, the Disk Recovery option is automatically switched off. However, at creation time a directory must be specified where the TransBase kernel will write the logfiles once the recovery option is switched on. In many cases, the database is then built up from external data (spool files). In this phase it is not useful to protect the database against disk failures with the Disk Recovery feature, because the database setup can be repeated if the database is lost.

Finally, our database enters the phase of 'normal' online transaction processing. This means that transactions are run against the database which are not easily repeatable, or not repeatable at all, if the database is lost. This is the point in time when the disk recovery mechanism should be turned on.

To switch on database recovery, two things must be done:

(i) The database logging must be switched on for sampledb:
$ tbadmin -af sampledb drl=on
or interactively with
$ tbadmin -a sampledb

(ii) A dump of the database sampledb must be produced by the 'tbdump' command,
see Chapter 2.

At first glance it might seem sufficient to switch logging on after the dump has been produced. However, the dump procedure can only work if logging is switched on - this has to do with fuzzy dumping, which will be explained later.
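Putting the two steps together in the required order - logging on first, then the dump - a minimal sketch might look as follows (the dump directory dumpdir1 is an example; the complete dump procedure is shown in the chapter 'Example for a Dump Script'):

$ # (i) switch disk recovery logging on
$ tbadmin -af sampledb drl=on
$ # (ii) produce the dump (directory name is an example)
$ mkdir dumpdir1
$ cp `tbadmin -inv sampledb tbdesc` dumpdir1/tbdesc
$ tbadmin -inv sampledb f.db > dumpdir1/fentry.db
$ tbdump db=sampledb tdir=dumpdir1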

Now, in our example, let us call the dump produced by (ii) the dump D1 - actually this is the first dump of the database sampledb.

The dump D1 must be kept in a safe place, preferably on tape. If D1 is kept on disk, it should of course not reside on the same disk as the database files. The same holds for the logfiles.

Producing Logfiles

The command (i) above causes the process tbkernel to start logging all updates to logfiles. tbkernel creates logfiles in the so-called drlog directory specified at database creation time (drlog stands for 'disk recovery log'). The drlog directory, too, should reside on a different disk than the database file(s).

Whenever a logfile reaches a certain size (the 'logfile size'), it is closed and the next logfile (with increased number) is created. The logfile size is also specified at database creation time (the default is 1 MB). It can be changed by the interactive 'tbadmin -a ..' command.

Once a logfile has reached its maximum size and is completed, the tbkernel process does not access it any more, so it can be transferred to tape to save disk space. From a semantic point of view, all logfiles generated in this way belong together, i.e. they form one sequence of log entries. The only reason for breaking the log into files of limited size is to be able to move them from disk to tape, thereby reducing the required disk space.

Functionally, the logfiles are coupled to a certain dump. Let us call the logfiles written after creation of dump D1 the logfile sequence L1. D1 is stable in size whereas L1 grows with the occurrence of update transactions on the database.

Periodic Dumping

After a certain time, which depends on the amount of updates, it is reasonable to make a new dump of the database. For the database sampledb this dump is now called D2. After the new dump D2 has been produced, all logfiles (L1) coupled to the previous dump (D1) can be removed, as can dump D1 itself. The logfiles written after the creation of D2 form the logfile set L2.

It is essential to understand that for the reconstruction of the newest database state only the newest dump together with its corresponding logfiles is needed. Furthermore it is clear that each logfile belongs to exactly one dump.

The time to reconstruct a corrupted database (see Chapter 2.4) is proportional to the length of the logfiles. Switching to a new dump generation therefore reduces the reconstruction time and the tape space occupied by the logfiles. On the other hand, writing a dump consumes machine resources. Note, however, that the normal transaction processing can continue during dumping (this is explained in Chapter 'Fuzzy Dumping'). So the dumping period that the database administrator chooses must be a compromise with respect to the tradeoff described above.

Logfile names start with a0000001.log, a0000002.log etc. When the next dump has been written, the new logfile sequence has the names b0000001.log, b0000002.log, etc., the next dump again switches the names to a0000001.log etc. Thus the prefix of the logfile names always alternates between 'a' and 'b' in different logfile sequences and the numbering inside one logfile sequence always starts with 1.

Principles of Database Reconstruction

When the database is damaged or lost, it can be reconstructed with the newest dump Dn and the corresponding logfile set Ln. The reconstruction is a semi-automatic procedure and is described in detail in the chapter 'Restoring the Database'. In principle, the first step uses the dump Dn to construct a database with name and shape identical to the corrupted one; then all updates are redone using the logfiles.

Fuzzy Dumping

An important property of the TransBase Disk Recovery scheme is that the database need not be shut down while a dump is being written. This means that normal transaction processing can continue while the dump process reads the diskfiles and constructs the dump. Of course, a certain performance degradation is inevitable due to the additional machine load caused by the dumper.

The dump produced during transaction processing is inconsistent, or 'fuzzy'. At reconstruction time, the reconstruction process starts with that fuzzy database copy and then selectively applies the logfile records such that the 'fuzziness' introduced at dump time is corrected.

Note that, due to the 'fuzziness' of the dump, a dump without its corresponding logfiles is in general of no use.

Normally the reconstruction procedure processes all logfiles and thus reconstructs the most recent consistent state. It is also possible to apply a subset of logfiles to reconstruct older versions of the database, especially the version at the time of dumping. This is explained in Chapter 'Old Dumps and Older Database Versions'.

Disk Recovery and CD-ROM Databases

TransBase supports Disk Recovery for standard databases as well as for CD Editorial Databases but not for CD Retrieval Databases.

For CD Editorial databases, disk recovery is supported up to the point in time where the database is FLUSHed to the romfiles by the 'tbadmin -F ..' command. For FLUSHing, disk recovery logging must be switched off; tbadmin reports this if it is still on.

Commands for Disk Recovery

Switching Disk Recovery Logging On

Syntax:

tbadmin -a[f] dbname drl=on

Effect:

Disk recovery logging is switched on for the specified database. A logfile named a0000001.log or b0000001.log is created in the drlog directory. This logfile is written up to the size limit, which can be specified at database creation time or interactively by 'tbadmin -a ..' (the default is 1 MB). Then a new logfile with number 0000002 is written, and so on.

With the start of a dump, the names of the new logfile generation are prefixed with 'b' if the former names were prefixed with 'a', and vice versa.

Whenever a dump has been completed, the names of all obsolete logfiles are changed from *.log to *.bak. Thus, in the most general case, the drlog directory contains logfiles ending in '.log' (valid logfiles) as well as files ending in '.bak' (obsolete logfiles). Obsolete logfiles can be deleted by hand or archived together with the corresponding (obsolete) dump if desired.
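For illustration, such a drlog directory might look as follows shortly after a dump has completed (all names are examples; here the old 'a' generation has just been renamed to *.bak):

$ ls drlogdir
a0000001.bak  a0000002.bak  b0000001.log  b0000002.log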

NOTE:

Whenever more than one valid logfile *.log exists in the drlog directory, all logfiles except the one with the highest number can be safely removed from the drlog directory and copied to tape. Do not remove the logfile with the highest number - otherwise the TransBase kernel would stop logging after the next reboot.
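A minimal sketch of this archiving step, assuming the drlog directory /usr/tas/sampledb@machine/drlog and the tape device /dev/tape (both are examples), and assuming no dump is currently running (so only one logfile sequence exists):

$ cd /usr/tas/sampledb@machine/drlog
$ # the highest-numbered logfile is the last one in sorted order
$ current=`ls *.log | sort | tail -1`
$ # copy all other valid logfiles to tape, then remove them from disk
$ ls *.log | grep -v "$current" | cpio -ovcB > /dev/tape
$ ls *.log | grep -v "$current" | xargs rm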

Switching Disk Recovery Logging Off

Syntax:

tbadmin -a[f] dbname drl=off

Effect:

Disk Recovery logging is switched off for the specified database. All existing logfiles in the drlog directory are renamed to *.bak.

NOTE:

Once logging has been switched off, it is useless to switch it on again without making a new dump afterwards; otherwise the logfiles would exhibit a 'hole' in the sequence of database updates. Note, however, that it would still be possible later to use the last dump and all its corresponding logfiles to reconstruct the database as it was at the moment of switching off.

Dumping the Database

As mentioned before, a dump consists of a collection of files. The following describes, for each file, what it contains and how it is produced. We assume that the database to be handled is called 'sampledb'.

FILENAME:

tbdesc

CONTENTS:

Copy of the file tbdesc which is in the database directory.

PRODUCED:

by the UNIX cp command; tbadmin can produce tbdesc's pathname:
$ cp `tbadmin -inv sampledb tbdesc` tbdesc

FILENAME:

fentry.db

CONTENTS:

Excerpt of the file f.db in the TRANSBASE directory;

PRODUCED:

by the tbadmin command:
$ tbadmin -inv sampledb f.db > fentry.db

FILENAME:

tbdsk001, tbdsk002, etc. or other names

CONTENTS:

These are fuzzy copies of the database diskfiles;

FILENAME:

dumpdesc

CONTENTS:

Some additional information about the dump, e.g. date and time;

PRODUCED:

the diskfiles as well as dumpdesc are produced by the program 'tbdump' in a target directory to be specified:
$ tbdump db=sampledb tdir=targetdir

The tbdump program is described in detail below.

Note that the names of the diskfiles are as specified at database creation time; the names tbdsk001 etc. above are only the default names, used as an example.

Example for a Dump Script

Example for a dump of sampledb:

$ # choose a target directory e.g. dumpdir:
$ mkdir dumpdir
$ cp `tbadmin -inv sampledb tbdesc` dumpdir/tbdesc
$ tbadmin -inv sampledb f.db > dumpdir/fentry.db
$ # now the (time consuming) dump command
$ tbdump db=sampledb tdir=dumpdir
$ # now the complete dump is in dumpdir; copy to tape if desired
$ ls dumpdir/* | cpio -ovcB > /dev/tape

After the new dump is complete (let's call it Dn+1), a file dumpdesc and one or several *.bak logfiles remain in the drlog directory of sampledb. The file dumpdesc in the drlog directory is identical to the one in dumpdir and can be ignored or manually deleted.

The *.bak logfiles in the drlog directory belong to the (now old) dump Dn. If they are never deleted from the drlog directory, the next dump (for Dn+2) will report an error. If it is desired to keep the old dump Dn, then these *.bak logfiles (which are the newest logfiles belonging to Dn!) must be added to all other logfiles of Dn by renaming them to *.log again, as sketched below. In any case, they must be removed from the drlog directory before the next dump can run.
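A minimal sketch for this renaming step (the drlog directory path and the archive directory /archive/dn are examples):

$ cd /usr/tas/sampledb@machine/drlog
$ # move the trailing logfiles of Dn to its archive, renamed back to *.log
$ for f in *.bak; do mv $f /archive/dn/`basename $f .bak`.log; done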

See also the special chapter about retaining old dumps.

A Note about the First Dump

As stated in the previous chapters, after dump completion one or several *.bak logfiles remain in the drlog directory which belong to the previous dump. This also holds for the first dump, but in this case the *.bak logfiles do not belong to any dump, because there is no previous dump. Here the *.bak logfiles can and should be deleted immediately.
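A one-line sketch, assuming the drlog directory is called drlogdir (an example):

$ rm drlogdir/*.bak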

The Command tbdump

Syntax:

tbdump db=dbname tdir=dumpdir [pw=passwd] [sh=[proc]]

Effect:

Creates a dump in dumpdir for the database dbname. If dumpdir does not exist it is created.

tbdump CONNECTs to the database as tbadmin, using passwd as the tbadmin password (the empty password is the default for pw).

tbdump copies all database diskfiles to dumpdir. Additionally, tbdump creates a file named 'dumpdesc' in the dumpdir (and additionally in the drlog directory of the database). Note that this file is an essential part of the dump. 

As long as diskfiles are being dumped, logfiles are written in twinmode, i.e. valid logfiles with names 'axxxxxxx.log' as well as 'bxxxxxxx.log' exist in parallel (with possibly different numbers). When dumping has successfully completed, tbdump renames the obsolete logfiles to *.bak. If dumping fails, the logfiles of the new dump generation are deleted instead.

If a shell procedure proc is specified via the option sh=proc, then tbdump calls proc whenever the dumping of one database diskfile has finished. tbdump calls proc as 'sh proc filename', where filename is of the form dumpdir/fname; dumpdir is taken from the tbdump command line and fname is the basename of the file just copied. The sh command is taken from the environment variable SHELL, or /bin/sh if SHELL is not set.

If the option sh= is specified without any procedure name, then tbdump calls a shell whenever the copying of a diskfile has finished. The shell command is determined as described above.

The sh= option is usually used to limit the occupied disk space by immediately moving each copied diskfile to cheap mass storage before the next diskfile is copied. In the present version of tbdump there is no way to copy diskfiles directly to tape. A sketch of such a procedure is shown below.
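A minimal sketch of such a procedure, here hypothetically named movefile; the mass storage directory /bigdisk/sampledb_dump is an example:

$ cat movefile
# called by tbdump as 'sh movefile dumpdir/fname' after each diskfile copy
# move the finished copy to cheap mass storage to free space in dumpdir
mv "$1" /bigdisk/sampledb_dump/
$ tbdump db=sampledb tdir=dumpdir sh=movefile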

Note:

The process tbdump can be executed during normal transaction processing. tbdump itself CONNECTs to the database, therefore the database must be booted. Before dumping the first diskfile, tbdump must wait until all transactions which were active at the time of tbdump's CONNECT have finished (tbdump need not wait for transactions which started after its CONNECT).

Errors:

An error is reported if there are *.bak logfiles (from the last tbdump call) left in the drlog directory when tbdump starts. In this case these files must be removed before tbdump is started again.

Exitcode:

tbdump exits with 0 on success, else with 1.
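In an automated dump script, the exit code can be checked as in the following sketch:

$ if tbdump db=sampledb tdir=dumpdir
> then echo "dump complete"
> else echo "dump failed - the old logfile generation remains valid"
> fi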

Restoring the Database

This chapter summarizes the steps to be performed to reconstruct a database from a dump and the corresponding logfiles. A detailed example and further explanations are given in the next chapter.

(Step 1)
Construct (empty) database with structure identical to the corrupted one:
$ tbadmin -DR <dbname> fentry.db=<f_file> tbdesc=<t_file>

This command creates a database <dbname> (similar to 'tbadmin -c ..') with the structure described by the files f_file and t_file.

If this step fails because the corrupted database still exists, drop the corrupted database - but first see the Important Note below!

(Step 2)
Replace all diskfiles of the newly created database by those of the dump
$ <several UNIX cp commands>

(Step 3)
Call the disk recovery program 'tbdr' :
$ tbdr <dbname> <logfile_dir>
or
$ tbdr -i <dbname> <logfile_dir>

tbdr expects a file 'dumpdesc' and one or several logfiles axxxxxxx.log or bxxxxxxx.log in the directory <logfile_dir>; it applies all logfiles to the database <dbname> and ends up with a consistent database. If called with the -i option, it interactively asks for further logfiles when all logfiles in logfile_dir have been processed.

Important Note:
It may be that (Step 1) fails because parts of the corrupted database still exist and block files or directories. In this case, the files and directories of the corrupted database must be moved or removed - but do not destroy logfiles which are still needed for the restoration process. Save all files in the drlog directory with the extension .log and provide them as input for Step 3 (together with all other logfiles already saved on tape or elsewhere).

Further notes:
If (Step 3) fails (e.g. due to insufficient disk space), it can only be repeated after (Step 2) has also been repeated.

Do not attempt to replace (Step 1) by a 'tbadmin -c ..' command with appropriate database parameters; the following steps would fail.

After these steps the database is operational. It is in single-user mode and can be brought into multi-user mode by the standard 'tbadmin -a ..' command.

Example and Further Explanations

Assume a dump for the database sampledb. As described earlier, the dump consists of the files tbdesc, fentry.db, dumpdesc and several diskfiles. We assume three diskfiles tbdsk001, tbdsk002, tbdsk003. All these files are in a directory called 'dumpdir'.

Furthermore, there are several logfiles; assume, for example, four logfiles a0000001.log to a0000004.log, located in the directory 'logfile_dir'.

For simplicity we assume that both directories are located in the current directory. Note that both directories could also be identical. This only depends on how all these files have been read from tape by the database administrator.

(Step 1 in Example)
Construct (empty) database sampledb with structure identical to the corrupted sampledb:
$ tbadmin -DR sampledb fentry.db=dumpdir/fentry.db tbdesc=dumpdir/tbdesc

Note that the name of the new database must match the one denoted in fentry.db. If another name is desired, it must be edited consistently in fentry.db, but this produces some warnings in the following steps.

By default, the database is constructed with the same file locations as the dumped (and corrupted) one. If other locations are desired, then all pathnames inside the input files fentry.db and tbdesc must be edited consistently before the above command is applied.

Note that it is not possible to replace the above command by a 'tbadmin -c ..' command with appropriate database parameters.

(Step 2 in Example)
Copying the diskfiles from the dump to the new database:
Assume that the pathnames of the 3 diskfiles as denoted in tbdesc are /usr/tas/sampledb@machine/disks/tbdsk001 etc.
$ cp dumpdir/tbdsk001 /usr/tas/sampledb@machine/disks/tbdsk001
$ cp dumpdir/tbdsk002 /usr/tas/sampledb@machine/disks/tbdsk002
$ cp dumpdir/tbdsk003 /usr/tas/sampledb@machine/disks/tbdsk003

(Step 3 in Example)
Apply the logfiles:
$ tbdr sampledb logfile_dir
or
$ tbdr -i sampledb logfile_dir

Step 3 itself is internally divided into four parts (S3 to S6). In S3, the logfiles are applied. This is by far the most time-consuming phase. When it starts, S3 prints a message describing how the user can obtain information about the progress of the logfile processing (i.e. which logfile is being worked on, and at which position). The logfiles are not read strictly sequentially, because the log records of transactions which were aborted during normal processing must be read backwards by the recovery process.

When tbdr is called with the -i option, S3 prompts the user to supply further logfiles in the specified directory once all logfiles there have been processed. The user can then delete the present logfiles and supply further ones, whose numbering must of course continue that of the logfiles already processed.

In the next step, S4, the so-called FreeSpaceManagement (FSM) is initialized (tbdr prints the message 'Resetting FSM on database ..'). The FSM contains the information about which blocks in the diskfiles are occupied. This information is not logged in the logfiles and thus must be rebuilt by the recovery process.

The actual rebuilding of the FSM is done in step S5 (tbdr prints the message 'Rebuilding FSM on database ..'). For this, all tables and blob containers are traversed to find all occupied blocks and to insert the occupation information into the FSM. This is not as time-consuming as might be expected, because the B-trees and blob indexes are not accessed at leaf level, and the number of non-leaf blocks is about two orders of magnitude smaller than the number of leaf blocks.

In the last step, S6, all secondary index information is handled. Note that the logfiles do not contain records concerning user-created indexes (CREATE INDEX ..). This saves time and space during normal processing but of course causes more work during database reconstruction. Step S6 does not reconstruct the secondary indexes; instead it inspects the (already reconstructed) system table sysindex. From that information a script called 'ixscript' is built which contains all statements (CREATE INDEX ..) needed to rebuild the indexes as they were at the time of the database corruption. Then all tuples of the sysindex table are deleted.

At this point the disk recovery process tbdr ends and prints a message about the script ixscript. Running ixscript may be time consuming, so it is up to the database administrator to run it now, later, in a modified form, or not at all. The database, however, is fully operational now. It remains to bring it into multi-user mode if required (tbadmin -a ..), because the disk recovery process runs in, and leaves the database in, single-user mode.

Saving Logfiles to Tape

The set of logfiles belonging to the most recent dump consists of files with a consistent prefix 'a' or 'b' and with numbering starting at 1. The logfile with the highest number is being worked on by the tbkernel while update transactions are active. All other logfiles are not accessed by the tbkernel.

If the database is corrupted, it is fully reconstructable if and only if the dump and all logfiles are intact. For example, if the drlog directory is also destroyed (at least the highest-numbered logfile resides there), then the current state is not reconstructable. The database administrator should, if possible, place the drlog directory and the database diskfile(s) on physical devices with independent failure modes. It is advisable to transfer all logfiles except the highest-numbered one to tape as soon as possible.

If the drlog directory is not safe, then the current logfile (which is still growing towards its maximum logfile size) should also be transferred to tape periodically, to minimize the loss of work if the database and the drlog directory should both be corrupted. This means that a copy of an incomplete current logfile which is still being worked on can also be used as input in the reconstruction process.
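A sketch of such a periodic copy, suitable for a cron job; the directory names are examples, and it is assumed that no dump is currently running (so only one logfile sequence exists):

$ cd /usr/tas/sampledb@machine/drlog
$ # copy the current (highest-numbered) logfile to an independent disk
$ cp `ls *.log | sort | tail -1` /safedisk/sampledb_logs/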

Old Dumps and Older Database Versions

As stated in the previous chapters, the newest dump together with all corresponding logfiles serves to reconstruct the current database state. Instead of applying all logfiles, it is possible to apply only a subset. If the logfile set belonging to the newest dump consists of n logfiles (n greater than or equal to 1), one can apply only the first m logfiles (m less than n). This produces the database state as it was at the time when the m-th logfile was completed.

As a special case, applying only the first logfile produces the database state as it was when the corresponding dump was written. This is so because TransBase starts the second logfile at the time the dump has been written. In this way the dump can be used as a snapshot. Note the following: if the database is without transaction traffic while the dump is being written, then the first logfile is (nearly) empty. It must nevertheless be applied if the dump-time database state is to be constructed, i.e. the program tbdr expects at least one logfile.
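As a sketch, reconstructing the state after the first two logfiles might look as follows, after Steps 1 and 2 of the restore procedure have been performed as usual (all directory names are examples; recall that tbdr also expects the dump's dumpdesc file in the logfile directory):

$ mkdir subsetdir
$ cp dumpdir/dumpdesc subsetdir
$ cp logfile_dir/a0000001.log logfile_dir/a0000002.log subsetdir
$ tbdr sampledb subsetdir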

Of course, older dumps can be used to construct (still) older database versions. If a dump Dn is not the newest dump, i.e. a dump Dn+1 also exists, then Dn together with its corresponding logfile set covers the set of database states from the time of writing Dn until the time of writing Dn+1. If it is desired to construct the database state as it was at the time of writing Dn+1, then from a functional point of view one could take Dn+1 with its first logfile, or the dump Dn with all its corresponding logfiles. Of course, with Dn+1 the job is much faster.

What to do when Logfiles are Lost

If one or several logfiles of the newest dump are lost, the remaining logfiles can often still be used to reconstruct a database version, but this version will not reflect the most recent state of the corrupted database. From the preceding chapter it follows that any logfile set starting with number 1 and without gaps in the numbering can be used as input for the reconstruction process and produces a consistent state. In other words: if the logfile with number n is lost, then all logfiles with numbers higher than n cannot be used any more.

A special case arises if the first logfile is lost - this includes the case that all logfiles are lost. The situation then is that Step 3 of the restoration process described in Chapter 2.4 cannot be executed. Two cases can be distinguished. If the dump was created while no database traffic was active (the dump is not 'fuzzy'), then Step 2 results in an operational database whose state is that of the dump time. If there was update activity during the dump, then it cannot be expected that the database will be readable after Step 2. It may happen, however, that the database or at least some tables are readable. In this case the readable parts can be saved into text files by SPOOL commands, or even by tbarc, to rebuild at least parts of the database.

Note that these considerations only apply as a last resort, for a situation which should not occur: the loss of the database together with the loss of the dump's logfiles.


