Quick Notes
Shadow Paging (Database Recovery)
- Requires fewer disk accesses than log-based methods.
- Maintains two page tables during the life cycle of a transaction. When the transaction starts, both page tables are identical.
- The shadow page table is never changed over the duration of the transaction.
- The current page table may be changed during write operations.
- All input and output operations use the current page table to locate database pages on disk.
- The shadow page table is stored in non-volatile storage.
Directory
1) Current page table
2) Shadow page table
When a transaction commits, the system writes the current page table to non-volatile storage. The current page table then becomes the new shadow page table.
Advantages of shadow paging over log-based techniques
1) Log-record overhead is removed.
2) Faster recovery (no undo/redo of transactions is required).
Drawbacks of shadow paging
1) Commit overhead: committing even a single transaction requires writing out
a) the actual data blocks,
b) the current page table, and
c) the disk address of the current page table.
2) Data fragmentation: the locality property of pages is lost, because shadow paging causes database pages to change location on disk.
3) Garbage collection: when a transaction commits, the database pages containing old versions of data changed by the transaction become inaccessible and must be reclaimed.
DBMS Checkpoints
The following problem occurs during the recovery procedure:
• Searching the entire log is time-consuming, as we are not aware of the consistency of the database after a restart. Thus, we might unnecessarily redo transactions which have already output their updates to the database.
Thus, we can streamline the recovery procedure by periodically performing checkpointing. Checkpointing involves:
• Output of all the log records currently residing in main memory onto stable storage.
• Output of all modified buffer blocks to the disk.
• Writing a log record <checkpoint> onto stable storage.
During recovery we need to consider only the most recent transaction Ti that started before the checkpoint but did not complete before it, together with all transactions that started after the checkpoint. Scan backwards from the end of the log to find the most recent <checkpoint> record, then continue scanning backwards until a record <Ti start> is found. Only the part of the log following this start record needs to be considered; the earlier part of the log may be ignored during recovery and can be erased whenever desired. For all transactions from Ti onwards with no <Ti commit> in the log, execute undo(Ti) (this is done only if the immediate modification scheme is used). Then, scanning forward in the log, for all transactions from Ti onwards with a <Ti commit>, execute redo(Ti).
Important Concepts
System Catalog: The system catalogue is a collection of tables and views that contain important information about a database. In relational DBMSs the catalogue itself is stored as relations.
The DBMS software is used for querying, updating, and maintaining the catalogue. The information stored in the catalogue can be accessed by DBMS routines as well as by authorised users with the help of a query language such as SQL.
The information stored in a catalogue of an RDBMS includes:
• the relation names,
• attribute names,
• attribute domains (data types),
• descriptions of constraints (primary keys, secondary keys, foreign keys, NULL/ NOT NULL, and other types of constraints),
• views, and
• storage structures and indexes (index name, attributes on which index is defined, type of index etc).
Security and authorisation information is also kept in the catalogue, which describes:
• authorised user names and passwords,
• each user’s privilege to access specific database relations and views,
• the creator and owner of each relation. Such privileges are granted using the GRANT command, as illustrated below.
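For example (a sketch; the relation names and the user name teacher1 are hypothetical), such privileges are recorded in the catalogue when they are granted or revoked:

GRANT SELECT ON STUDENT TO teacher1;
GRANT SELECT, UPDATE ON MARKS TO teacher1;
REVOKE UPDATE ON MARKS FROM teacher1;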
The system catalogue can also be used to store some statistical and descriptive information about relations. Some such information can be:
• number of tuples in each relation,
• the number of distinct values of each attribute,
• the storage and access methods used for each relation.
All such information finds its use in query processing.
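As a rough illustration, many relational DBMSs expose parts of the catalogue through queryable views. The standard information_schema views used below are one common form; the exact view names and the 'public' schema are assumptions that vary from product to product.

-- list the relations recorded in the catalogue
SELECT table_name, table_type
FROM information_schema.tables
WHERE table_schema = 'public';

-- list the attributes, data types and nullability of one relation
SELECT column_name, data_type, is_nullable
FROM information_schema.columns
WHERE table_name = 'student';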
Data Dictionary
The data dictionary stores useful metadata, such as field descriptions, in a format that is independent of the underlying database system. Some of the functions served by the Data Dictionary include:
• ensuring efficient data access, especially with regard to the utilisation of indexes,
• partitioning the database into both logical and physical regions,
• specifying validation criteria and referential constraints to be automatically enforced,
• supplying pre-defined record types for Rich Client features, such as security and administration facilities, attached objects, and distributed processing (i.e., grid and cluster supercomputing).
Catalog system vs Data Dictionary
A catalogue is closely coupled with the DBMS software; it provides the information stored in it to users and the DBA, but it is mainly accessed by the various software modules of the DBMS itself, such as DDL and DML compilers, the query optimiser, the transaction processor, report generators, and the constraint enforcer.
On the other hand, a Data Dictionary is a data structure that stores meta-data, i.e., data about data. The software package for a stand-alone data dictionary or data repository may interact with the software modules of the DBMS, but it is mainly used by the designers, users, and administrators of a computer system for information resource management. These systems are used to maintain information on system hardware and software configurations, documentation, applications, and users, as well as other information relevant to system administration.
Passive and Active Data Dictionary
If a data dictionary system is used only by designers, users, and administrators, and not by the DBMS software, it is called a passive data dictionary; otherwise, it is called an active data dictionary or data directory. An active data dictionary is automatically updated as changes occur in the database. A passive data dictionary must be manually updated.
The data dictionary consists of record types (tables) created in the database by system-generated command files, tailored for each supported back-end DBMS. Command files contain SQL statements for CREATE TABLE, CREATE UNIQUE
INDEX, ALTER TABLE (for referential integrity), etc., using the specific SQL statement required by that type of database.
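A sketch of the kind of statements such a command file might contain (the EMPLOYEE and DEPARTMENT tables, their columns and the constraint names are hypothetical):

CREATE UNIQUE INDEX emp_code_idx ON EMPLOYEE (emp_code);
-- referential integrity added via ALTER TABLE
ALTER TABLE EMPLOYEE ADD CONSTRAINT fk_emp_dept
FOREIGN KEY (dept_no) REFERENCES DEPARTMENT (dept_no);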
Data Dictionary Features
A comprehensive data dictionary product will include:
• support for standard entity types (elements, records, files, reports, programs, systems, screens, users, terminals, etc.), and their various characteristics (e.g., for elements, the dictionary might maintain Business name, Business definition, name, Data type, Size, Format, Range(s), Validation criteria, etc.)
• support for user-designed entity types (this is often called the “extensibility” feature); this facility is often exploited in support of data modelling, to record and cross-reference entities, relationships, data flows, data stores, processes, etc.
• the ability to distinguish between versions of entities (e.g., test and production)
• enforcement of in-house standards and conventions.
• comprehensive reporting facilities, including both “canned” reports and a reporting language for user-designed reports; typical reports include:
• detail reports of entities
• summary reports of entities
• component reports (e.g., record-element structures)
• cross-reference reports (e.g., element keyword indexes)
• where-used reports (e.g., element-record-program cross-references).
• a query facility, both for administrators and casual users, which includes the ability to perform generic searches on business definitions, user descriptions, synonyms, etc.
• language interfaces, to allow, for example, standard record layouts to be automatically incorporated into programs during the compile process.
• automated input facilities (e.g., to load record descriptions from a copy library).
• security features
• adequate performance tuning abilities
• support for DBMS administration, such as automatic generation of DDL (Data Definition Language).
Data Dictionary Benefits
The benefits of a fully utilised data dictionary are substantial. A data dictionary has the potential to:
• facilitate data sharing by
• enabling database classes to automatically handle multi-user coordination, buffer layouts, data validation, and performance optimisations,
• improving the ease of understanding of data definitions,
• ensuring that there is a single authoritative source of reference for all users
• facilitate application integration by identifying data redundancies,
• reduce development lead times by
• simplifying documentation
• automating programming activities.
• reduce maintenance effort by identifying the impact of change as it affects:
• users,
• data base administrators,
• programmers.
• improve the quality of application software by enforcing standards in the development process
• ensure application system longevity by maintaining documentation beyond project completions
• allow data dictionary information created under one database system to be used to generate the same database layout on other supported database systems (Oracle, MS SQL Server, Access, DB2, Sybase, SQL Anywhere, etc.).
These benefits are maximised by a fully utilised data dictionary; in practice, not all of them are immediately available in every environment.
Disadvantages of Data Dictionary
A DDS is a useful management tool, but it also has several disadvantages.
It needs careful planning: the exact requirements must be defined when designing its contents, and the dictionary must be tested, implemented and evaluated. The cost of a DDS includes not only the initial price of its installation and any hardware requirements, but also the cost of collecting the information and entering it into the DDS, keeping it up-to-date, and enforcing standards. The use of a DDS requires management commitment, which is not easy to achieve, particularly where the benefits are intangible and long term.
Catalogue in Distributed Database Systems
The data dictionary stores useful metadata on database relations. In a distributed database system, information on locations, fragmentation and replication is also added to the catalogue.
In addition to the data held by a system catalogue in a centralised DBMS, the distributed database catalogue entries must specify the site(s) at which data is stored; this extra information is needed because of data partitioning and replication. There are a number of approaches to implementing a distributed database catalogue:
• Centralised: Keep one master copy of the catalogue,
• Fully replicated: Keep one copy of the catalogue at each site,
• Partitioned: Partition and replicate the catalogue as usage patterns demand,
• Centralised/partitioned: Combination of the above.
Catalogue in Object Oriented Database Systems
An object-oriented database system brings together the features of object-oriented programming and the advantages of database systems under one persistent DBMS interface. Thus, such systems are very useful in applications with complex interrelationships, complex classes, inheritance, etc. As far as the data dictionary is concerned, it now additionally needs to describe classes, objects and their inter-relationships. A data dictionary for such a system may therefore be more complex from the implementation viewpoint; from the user's point of view, however, the concepts are almost the same. This data dictionary should now also store class definitions, including the member variables, member functions, inheritance and relationships of the various classes.
ROLE OF SYSTEM CATALOGUE IN DATABASE ADMINISTRATION
Assertion:
Assertions are constraints that are normally of a general nature. For example, the age of a student in a hypothetical University should not be more than 25 years, or the minimum age of a teacher of that University should be 30 years. Such general constraints can be implemented with the help of an assertion statement. The syntax for creating an assertion is:
Syntax:
CREATE ASSERTION <Name>
CHECK (<Condition>);
Thus, the assertion on age for the University as above can be implemented as:
CREATE ASSERTION age-constraint
CHECK (NOT EXISTS (
SELECT *
FROM STUDENT s
WHERE s.age > 25
OR s.age > (
SELECT MIN (f.age)
FROM FACULTY f)
));
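The second rule mentioned above (the minimum age of a teacher should be 30 years) could be written, as a sketch, as a separate assertion (assuming FACULTY has an age attribute, as in the example above):

CREATE ASSERTION faculty-age-constraint
CHECK (NOT EXISTS (
SELECT *
FROM FACULTY f
WHERE f.age < 30
));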
View: A view is a virtual table which does not actually store data. A view is essentially a stored query, and thus has a SELECT ... FROM ... WHERE ... clause that works on the physical tables which store the data. Thus, a view is a collection of relevant information for a specific entity.
Example: A student’s database may have the following tables:
STUDENT (name, enrolment-no, dateofbirth)
MARKS (enrolment-no, subjectcode, smarks)
For the database above a view can be created for a Teacher who is allowed to view only the performance of the student in his/her subject, let us say MCS-043.
CREATE VIEW SUBJECT-PERFORMANCE AS
(SELECT s.enrolment-no, name, subjectcode, smarks
FROM STUDENT s, MARKS m
WHERE s.enrolment-no = m.enrolment-no AND
subjectcode = 'MCS-043'
ORDER BY s.enrolment-no);
A view can be dropped using a DROP statement as:
DROP VIEW SUBJECT-PERFORMANCE;
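Once created, the view can be queried like a table, and access to it can be granted to the teacher (the user name teacher_mcs043 is hypothetical):

SELECT * FROM SUBJECT-PERFORMANCE WHERE smarks >= 50;
GRANT SELECT ON SUBJECT-PERFORMANCE TO teacher_mcs043;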
Stored Procedures
Stored procedures are collections of small programs that are stored in compiled form and have a specific purpose. For example, a company may have rules such as:
• A code (like enrolment number) with one of the digits as the check digit, which checks the validity of the code.
• Any date of change of value is to be recorded.
The use of procedures has the following advantages from the viewpoint of database application development:
• They help in removing SQL statements from the application program, thus making it more readable and maintainable.
• They run faster than equivalent SQL statements issued from the application, since they are stored in the database in compiled form.
Stored procedures can be created using CREATE PROCEDURE in some commercial DBMS.
Syntax:
CREATE [OR REPLACE] PROCEDURE [user.]procedure_name
[(argument datatype
[, argument datatype] ...)]
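A minimal sketch in Oracle-style PL/SQL (the procedure name, its parameters and the underscored column names are assumptions made for illustration against a MARKS table like the one above):

CREATE OR REPLACE PROCEDURE update_marks (
p_enrolment IN VARCHAR2,
p_subject IN VARCHAR2,
p_marks IN NUMBER)
AS
BEGIN
-- record the new marks for one student in one subject
UPDATE MARKS
SET smarks = p_marks
WHERE enrolment_no = p_enrolment
AND subjectcode = p_subject;
END;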
Triggers
A trigger is a procedure that is executed automatically by the DBMS when a specified event occurs. These events may be database update operations like INSERT, UPDATE, DELETE, etc. A trigger consists of three essential components:
• An event that causes its automatic activation.
• The condition that is evaluated when the event occurs and determines whether the action should be executed.
• The action that is to be performed.
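As a sketch (Oracle-style syntax; the trigger name, the MARKS_AUDIT table and the underscored column names are hypothetical), a trigger implementing the earlier rule that any change of value must be recorded along with its date might look like:

CREATE OR REPLACE TRIGGER record_marks_change
AFTER UPDATE OF smarks ON MARKS
FOR EACH ROW
BEGIN
-- event: an UPDATE of smarks; action: log old and new values with the date of change
INSERT INTO MARKS_AUDIT (enrolment_no, subjectcode, old_marks, new_marks, changed_on)
VALUES (:OLD.enrolment_no, :OLD.subjectcode, :OLD.smarks, :NEW.smarks, SYSDATE);
END;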
Database Security
Basically database security can be broken down into the following levels:
• Server Security
• Database Connections
• Table Access Control
• Restricting Database Access
Statistical Database Security...
DBMS Notes
Functional Dependency: It is an association between two attributes (or sets of attributes) of the same table (relation).
Let X and Y be attributes of the same table. Then the functional dependency X -> Y states that if two tuples agree on the value of attribute X, they must also agree on the value of attribute Y. For example, in STUDENT(enrolment-no, name, dateofbirth), the dependency enrolment-no -> name holds.
Keys in Database
A key is a set of one or more attributes which is used to uniquely identify records within a table.
Types of keys
1) Super Key: A super key is a set of one or more attributes that uniquely identifies each record within a table.
2) Candidate Key: A candidate key is a minimal super key, i.e. a super key from which no attribute can be removed without losing the property of unique identification. A table may have one or more choices for the primary key; collectively these are known as candidate keys.
3) Primary Key: The primary key is the candidate key chosen to uniquely identify each record within a table.
4) Foreign Key: A foreign key is an attribute (or set of attributes) in a table whose values match the primary key of another table.
a) A foreign key can contain duplicate values.
b) It can also contain NULL values.
Prime attribute: An attribute that is present in at least one candidate key.
Non-prime attribute: An attribute that is not present in any candidate key.
Secondary / Alternate keys: The candidate keys which are not selected as the primary key are called secondary or alternate keys.
Composite (or concatenated) key: Whenever we choose multiple attributes to form a primary key, that primary key is called a composite key.
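A sketch using the STUDENT and MARKS tables from the view example earlier (the data types are assumptions, and enrolment_no is written with an underscore to keep it a valid identifier): the primary key of MARKS is a composite key, and its enrolment_no column is a foreign key referencing STUDENT.

CREATE TABLE STUDENT (
enrolment_no VARCHAR(10) PRIMARY KEY, -- primary key
name VARCHAR(50),
dateofbirth DATE
);

CREATE TABLE MARKS (
enrolment_no VARCHAR(10),
subjectcode VARCHAR(10),
smarks DECIMAL(5,2),
PRIMARY KEY (enrolment_no, subjectcode), -- composite primary key
FOREIGN KEY (enrolment_no) REFERENCES STUDENT (enrolment_no) -- foreign key
);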
Normalization is the process of breaking a table (relation) into multiple tables in order to reduce redundancy and make the design more efficient and less error-prone.
First Normal form: A table is said to be in first normal form if and only if
1) The value of each attribute is atomic
2) No composite values
3) All entries in any column must be of the same kind
4) Each column must have a unique name
5) No two rows are identical
Second Normal Form: A relation is said to be in 2NF if it is in 1NF and every non-prime attribute is fully functionally dependent on every candidate key of R, i.e.:
1) the table should be in first normal form, and
2) it should not have any partial dependencies.
Third Normal Form: A relation is said to be in 3NF if:
1) it is in 2NF, and
2) it does not have any transitive dependencies.
BCNF: A relation is said to be in BCNF (Boyce-Codd Normal Form) if:
1) it is in 3NF, and
2) for every non-trivial dependency A -> B, A is a super key. (3NF still permits a dependency A -> B where A is not a super key, provided B is a prime attribute; BCNF removes this exception.)
Partial Dependency: a proper part A of a candidate (primary) key determines a non-prime attribute B, i.e. A -> B.
Transitive Dependency: a non-prime attribute A determines another non-prime attribute B, i.e. A -> B, so a key determines B only indirectly through A.
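As a sketch of removing a partial dependency (the RESULT and SUBJECT tables and the subjectname column are hypothetical): in RESULT(enrolment_no, subjectcode, subjectname, smarks) with key (enrolment_no, subjectcode), subjectname depends only on subjectcode, a partial dependency. For 2NF the table is decomposed as follows.

CREATE TABLE SUBJECT (
subjectcode VARCHAR(10) PRIMARY KEY,
subjectname VARCHAR(50)
);

CREATE TABLE RESULT (
enrolment_no VARCHAR(10),
subjectcode VARCHAR(10) REFERENCES SUBJECT (subjectcode),
smarks DECIMAL(5,2),
PRIMARY KEY (enrolment_no, subjectcode)
);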
4NF: A relation is said to be in 4NF if and only if:
1) it is in BCNF, and
2) it does not have any (non-trivial) multi-valued dependencies (MVDs).
5NF: A relation is said to be in 5NF if and only if:
1) it is in 4NF, and
2) if a join dependency exists, the table is decomposed further; the decomposition must be lossless, i.e. it should neither create spurious data nor lose data.
Important Concepts
- Data warehouse concepts
- Extract transform load
- Data warehousing and Data mining
- K mean clustering
- Multimedia Database
- Spatial database
- Explain Star Schema and Snow Flake Design
- Apriori algorithm