This tutorial covers the basics of Database Management Systems, Various types of Databases, The Relational Model, Benefits of RDBMS and Oracle Database along with Frequently Asked Questions.
Database Management Systems
A database can be regarded as having two major components: the data, and the software needed to manage the data. A variety of terms are used to describe the software, and the two most common are:
• Database Management System
• Information Management System
The terms are often used interchangeably.
There have been a variety of kinds of systems developed and a convenient but rough classification is
· Flat file
· Networked
· Hierarchical
· Relational
· Text
· Geographical or Spatial
When we consider each of these it is useful to have some criteria to work with, and the following prove useful:
· There must be a way of describing the data - this means the ability to handle a variety of data types, including numbers (integers and reals), text (often of varying length), currency, dates, times, logical or boolean values, and spatial data such as maps, census data, satellite image data, and vector and raster data;
· There must be a way of describing the relationship that exists between the data items, i.e. what structure the data has or what structure can be imposed on the data;
· There must be a way of getting the data into the database;
· There must be a way of examining the data;
· There must be a way of generating reports based on the data.
A typical database system will therefore have to provide ways of achieving the above tasks. Most commercial systems will also provide a variety of ways of making each of these tasks easier, e.g. menu-driven front ends that guide the user through each stage in the design and implementation of a complete database system.
Various types of Databases
Flat File Systems
Historically these kinds of systems were the first to be developed. A useful analogy is a filing cabinet full of files; this is the basis of a flat file system. There is often no significant structure, which generally means applications using systems like this are quick to develop. Indices are generally used to improve the performance of the overall system. These systems are widely available on most computer systems, from micros to mainframes. Most systems available on micros are in fact flat file systems. Many, however, call themselves relational - a popular buzzword, much over-used in advertisements for database packages. Micro-based flat file systems are relatively cheap in software terms and generally, but not always, perform poorly with large volumes of data. Programming language interfaces are available for many systems; commonly available languages include C, COBOL, Pascal, FORTRAN, assembler and PL/1.
Networked Systems
The term essentially describes the logical and physical view of the data as a group of records with links between records - and hence a network of data and links. A key concept in this kind of system is the navigation path required to get to the data item or record of interest, i.e. the way you move round the mesh to get at the data you want.
Hierarchical Systems
The term describes the view of the data - in this case as a hierarchy. Again we have the problem of only being able to access data by navigation - in this case through the hierarchy.
Relational Systems
The key concept with the relational model is the view of the data as a table. This has proved to be a very useful concept and has enabled relationships between data items and records to be expressed not through physical links, as with the networked and hierarchical approaches, but through properties of the data in one or more tables. Consider how we get at the information on a Senior Analyst in the Advisory Group. All we do is search the two domains of interest (which may or may not be indexed) for data values that satisfy our criteria, i.e. we search the Group domain for people in Advisory, and the Title domain for Senior Analyst. Adding new domains is relatively easy, and hence design decisions are rarely as problematic as with the network and hierarchical data models.
Database management systems have evolved from hierarchical to network to relational models. The most widely accepted database model is the relational model. Oracle extends the relational model to an object-relational model, which makes it possible to store complex business models in a relational database.
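A search of this kind maps directly onto a SQL query. The following is only a sketch; the STAFF table and its column names are assumptions for illustration, not part of this tutorial:

```sql
-- Hypothetical STAFF table; table and column names are illustrative.
-- Search the Group domain for 'Advisory' and the Title domain for
-- 'Senior Analyst' - no navigation path is needed.
SELECT name, group_name, title
  FROM staff
 WHERE group_name = 'Advisory'
   AND title      = 'Senior Analyst';
```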
Some DBMS Commercial Packages available in the market are:
· Hierarchical Model - IMS
· Network Model - IDMS
· Relational Model - Oracle, DB2
The Relational Model
In 1970 an IBM researcher, Dr. E. F. Codd, came up with a better way: the relational data model. In this model, the database management system (DBMS) itself keeps track of all table relationships independent of hardware or outside programming languages. In the relational model, the user only needs to understand the logical structure of data, not how it is physically stored. In this model data is represented in simple two-dimensional tables (relations), which consist of rows (tuples) and columns (attributes). A relational database is simply a collection of tables.
The relational model has three major aspects:
Structures are well-defined objects (such as tables, views, indexes, and so on) that store or access the data of a database. Structures and the data contained within them can be manipulated by operations.
Operations are clearly defined actions that allow users to manipulate the data and structures of a database. The operations on a database must adhere to a predefined set of integrity rules.
Integrity rules are the laws that govern which operations are allowed on the data and structures of a database. Integrity rules protect the data and the structures of a database.
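The ideas above can be sketched in SQL. The table and column names below are assumptions for illustration; the point is that the relationship between the two tables is expressed through data values, not physical links:

```sql
-- Two relations (tables); columns are attributes, rows are tuples.
-- Names are illustrative, not taken from this tutorial.
CREATE TABLE departments (
    dept_id    NUMBER PRIMARY KEY,
    dept_name  VARCHAR2(30)
);

CREATE TABLE employees (
    emp_id     NUMBER PRIMARY KEY,
    name       VARCHAR2(50),
    dept_id    NUMBER REFERENCES departments (dept_id)  -- integrity rule
);

-- The relationship is expressed through matching data values (dept_id):
SELECT e.name, d.dept_name
  FROM employees e
  JOIN departments d ON e.dept_id = d.dept_id;
```

The REFERENCES clause is one example of an integrity rule: the DBMS refuses any operation that would leave an employee pointing at a nonexistent department.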
Benefits of RDBMS
Relational database management systems offer benefits such as:
· Independence of physical data storage and logical database structure
· Variable and easy access to all data
· Complete flexibility in database design
· Reduced data storage and redundancy
· Data independence
· Data integrity
· Data sharing
· Data security
Oracle Database
Oracle extends the relational model to an object-relational model, which makes it possible to store complex business models in a relational database. The object-relational model allows users to define object types, specifying both the structure of the data and the methods of operating on the data, and to use these datatypes within the relational model.
Object types are abstractions of the real-world entities (for example, purchase orders) that application programs deal with. An object type has three kinds of components:
• A name, which serves to identify the object type uniquely.
• Attributes, which are built-in data types or other user-defined types. Attributes model the structure of the real world entity.
• Methods, which are functions or procedures written in PL/SQL and stored in the database, or written in a language like C and stored externally. Methods implement specific operations that an application can perform on the data. Every object type has a constructor method that makes a new object according to the data type’s specification.
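The three components can be seen in a minimal PL/SQL sketch. The type name, attributes, and method below are assumptions chosen to match the purchase-order example, not code from this tutorial:

```sql
-- Name: purchase_order_t; attributes model the entity's structure.
CREATE TYPE purchase_order_t AS OBJECT (
    po_number  NUMBER,        -- attribute: built-in datatype
    po_date    DATE,
    total      NUMBER,
    MEMBER FUNCTION days_open RETURN NUMBER  -- method
);
/
CREATE TYPE BODY purchase_order_t AS
    MEMBER FUNCTION days_open RETURN NUMBER IS
    BEGIN
        -- Operate on the object's own attributes.
        RETURN TRUNC(SYSDATE - po_date);
    END;
END;
/
```

The implicit constructor method then builds a new object from attribute values, e.g. `purchase_order_t(1001, SYSDATE, 250)`.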
An Oracle database is a collection of data treated as a unit. The purpose of a database is to store and retrieve related information. A database server is the key to solving the problems of information management. In general, a server reliably manages a large amount of data in a multi-user environment so that many users can concurrently access the same data. All this is accomplished while delivering high performance. A database server also prevents unauthorized access and provides efficient solutions for failure recovery.
Oracle Database is the first database designed for enterprise grid computing, the most flexible and cost-effective way to manage information and applications. Enterprise grid computing creates large pools of industry-standard, modular storage and servers. With this architecture, each new system can be rapidly provisioned from the pool of components. There is no need to provision for peak workloads, because capacity can be easily added or reallocated from the resource pools as needed.
The database has logical structures and physical structures. Because the physical and logical structures are separate, the physical storage of data can be managed without affecting the access to logical storage structures.
The Oracle grid architecture pools large numbers of servers, storage, and networks into a flexible, on-demand computing resource for enterprise computing needs. The grid computing infrastructure continually analyzes demand for resources and adjusts supply accordingly.
For example, you could run different applications on a grid of several linked database servers. When reports are due at the end of the month, the database administrator could automatically provision more servers to that application to handle the increased demand.
Grid computing uses sophisticated workload management that makes it possible for applications to share resources across many servers. Data processing capacity can be added or removed on demand, and resources within a location can be dynamically provisioned. Web services can quickly integrate applications to create new business processes.
Grid computing offers high performance and scalability, because all computing resources can be flexibly allocated to applications as needed.
Oracle Database enables enterprise grid computing in the following ways:
· Performance and scalability: low-cost hardware clusters, such as Itanium servers running Linux
· Reliability: Continuous availability of data and applications
· Security and privacy: security features that let you share enterprise grid resources with confidence that privacy is maintained
· Self-management: Oracle infrastructure automates many functions so that a single administrator can manage hundreds of servers.
· Distributed computing: Oracle has advanced integration features that allow applications and data to run anywhere in the network.
At the highest level, the idea of grid computing is computing as a utility. In other words, you should not care where your data resides, or what computer processes your request. You should be able to request information or computation and have it delivered - as much as you want, and whenever you want. This is analogous to the way electric utilities work: you don't know where the generator is, or how the electric grid is wired; you just ask for electricity, and you get it. The goal is to make computing a utility, a commodity, and ubiquitous. Hence the name, The Grid. This view of utility computing is, of course, a "client side" view.
Oracle provides substantial grid computing technology, available today, that can help you capitalize on the grid. Oracle is the best on commodity clusters, a hardware platform that many believe will come to dominate the grid. Oracle possesses the key technology differentiators (Oracle Real Application Clusters, Oracle Streams, and Oracle Transportable Tablespaces) for building the grid. Only Oracle delivers the operational characteristics (portability, RAS, security, and scalability) necessary for the grid.
Frequently Asked Questions
1. What is a difference between RDBMS and Object RDBMS?
2. What is the advantage of Object RDBMS?
3. What are the different models (old and new) for DBMS?
4. What is the difference between logical and Physical structures of Database?
5. Is schema a logical structure or physical structure in oracle database?
Oracle 10g Free Training: Oracle Concepts and Database Architecture: This tutorial provides a brief overview of the architecture of the Oracle database. It also introduces the concept of an instance. The document discusses the structures that are used to define the physical and logical representations of the Oracle database.
Oracle Database Architecture
Oracle is an RDBMS (Relational Database Management System). The Oracle database architecture can be described in terms of logical and physical structures. The advantage of separating the logical and physical structure is that the physical storage structure can be changed without affecting the logical structure.
Logical Structure
The logical structure of the Oracle RDBMS consists of the following elements:
· Tablespace
· Schema
Tablespace
The Oracle database consists of one or more logical portions called tablespaces. A tablespace is a logical grouping of related data. A database administrator can use tablespaces to do the following:
· Control disk space allocation for database data.
· Assign specific space quotas for database users.
· Perform partial database backup or recovery operations.
· Allocate data storage across devices to improve performance.
Each database has at least one tablespace, the SYSTEM tablespace. As part of the process of creating the database, Oracle automatically creates the SYSTEM tablespace. Although a small database can fit within the SYSTEM tablespace, it is recommended to create a separate tablespace for user data. Oracle uses the SYSTEM tablespace to store information such as the data dictionary. The data dictionary stores metadata (data about data), including information like table access permissions and keys.
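Creating a separate tablespace for user data can be sketched as below. The tablespace name, file path, and sizes are assumptions for illustration:

```sql
-- A minimal sketch; name, path, and sizes are placeholder values.
CREATE TABLESPACE users_data
    DATAFILE '/u01/oradata/orcl/users_data01.dbf' SIZE 100M
    AUTOEXTEND ON NEXT 10M MAXSIZE 1G;
```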
Data is stored in the database in the form of files called datafiles. Each tablespace is a collection of one or more datafiles. Space within the datafiles is organized into data blocks, extents, and segments.
Data Blocks
At the finest level of granularity, an Oracle database's data is stored in data blocks (also called logical blocks, Oracle blocks, or pages). An Oracle database uses and allocates free database space in Oracle data blocks.
Extents
The next level of logical database space is called an extent. An extent is a specific number of contiguous data blocks that are allocated for storing a specific type of information.
Segments
The level of logical database storage above an extent is called a segment. A segment is a set of extents that have been allocated for a specific type of data structure, and all are stored in the same tablespace. For example, each table's data is stored in its own data segment, while each index's data is stored in its own index segment. Oracle allocates space for segments in extents. Therefore, when the existing extents of a segment are full, Oracle allocates another extent for that segment. Because extents are allocated as needed, the extents of a segment may or may not be contiguous on disk, and may or may not span files.
An Oracle database can use four types of segments:
· Data segment--Stores user data within the database.
· Index segment--Stores indexes.
· Rollback segment--Stores rollback information. This information is used when data must be rolled back.
· Temporary segment--Created when a SQL statement needs a temporary work area; these segments are destroyed when the SQL statement is finished. These segments are used during various database operations, such as sorts.
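The segment types above can be observed through the data dictionary. A sketch, assuming DBA privileges (DBA_SEGMENTS is a standard Oracle dictionary view):

```sql
-- Summarize allocated space by segment type.
SELECT segment_type,
       COUNT(*)             AS segments,
       SUM(bytes)/1024/1024 AS mb_used
  FROM dba_segments
 GROUP BY segment_type
 ORDER BY mb_used DESC;
```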
Schema
The database schema is a collection of logical structures, known as schema objects, that define how you see the database's data. A schema also defines a level of access for users. All the logical objects in Oracle are grouped into schemas. A schema is a logical grouping of objects such as:
· Tables
· Clusters
· Indexes
· Views
· Stored procedures
· Triggers
· Sequences
Physical Structure
The physical layer of the database consists of three types of files:
1. One or more datafiles
2. Two or more redo log files
3. One or more control files
Datafiles (.dbf files):
Datafiles store the information contained in the database. One can have as few as one datafile or as many as hundreds of datafiles. The information for a single table can span many datafiles, or many tables can share a set of datafiles. Spreading tablespaces over many datafiles can have a significant positive effect on performance. The number of datafiles that can be configured is limited by the Oracle parameter MAXDATAFILES.
Redo Log Files (.rdo & .arc):
Oracle maintains logs of all transactions against the database. These transactions are recorded in files called online redo log files (redo logs). The main purpose of the redo log files is to hold the information needed for recovery in the event of a system failure; the redo log stores a log of all changes made to the database. The redo log files must perform well and be protected against hardware failures (through software or hardware fault tolerance). If redo log information is lost, the system cannot be recovered. When a transaction occurs in the database, it is entered in the redo log buffers, while the data blocks affected by the transaction are not immediately written to disk. An Oracle database requires at least two redo log files.
Oracle writes to the redo log files in cyclical order: after the first log file is filled, it writes to the second log file, and so on until all have been filled. It then returns to the first log file and begins overwriting its contents with new transaction data. Note that if the database is running in ARCHIVELOG mode, the database will make a copy of the online redo log files before overwriting them.
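Both the archiving mode and the state of the log groups can be inspected from SQL*Plus; V$DATABASE and V$LOG are standard Oracle dynamic performance views:

```sql
-- Is the database in ARCHIVELOG or NOARCHIVELOG mode?
SELECT log_mode FROM v$database;

-- Which redo log group is CURRENT, and which are INACTIVE?
SELECT group#, sequence#, status FROM v$log;
```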
Control Files (.ctl):
Control files record control information about all of the files within the database. These files maintain internal consistency and guide recovery operations. Control files contain information used to start an instance, such as the location of datafiles and redo log files; Oracle needs this information to start the database instance. Control files must be protected. Oracle provides a mechanism for storing multiple copies of control files; these copies are stored on separate disks to minimize the potential damage due to disk failure. The names of the database's control files are specified via the CONTROL_FILES initialization parameter.
Oracle Instance and Database
To access the data in the database, Oracle uses a set of background processes that are shared by every user. Along with these, there are memory structures that store the most recently queried data from the database.
A database instance (also known as a server) is a set of memory structures and background processes that access a set of database files. It is possible for a single database to be accessed by multiple instances, a configuration known as Real Application Clusters. The parameters that determine the size and composition of an Oracle instance are stored either in an initialization file called init.ora or in spfile.ora. The initialization parameter file is read during instance startup and may be modified by the DBA; any modification will not take effect until the next startup.
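The parameter file in use can be checked, and a text pfile converted to a server parameter file, from SQL*Plus (SHOW PARAMETER is a SQL*Plus command; CREATE SPFILE is standard Oracle DDL):

```sql
-- Empty VALUE for the spfile parameter means a text init.ora is in use.
SHOW PARAMETER spfile
SHOW PARAMETER control_files

-- Convert the default text pfile into a server parameter file.
CREATE SPFILE FROM PFILE;
```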
Figure-1 Instance and database
System Global Area (SGA)
Oracle uses an area of shared memory called the system global area (SGA) and a private memory area for each process called the program global area (PGA).
System global area:
The SGA is a shared memory region that contains data and control information for one Oracle instance. Oracle allocates the SGA when an instance starts and deallocates it when the instance shuts down. Each instance has its own SGA. The SGA should be as large as possible to increase system performance and reduce disk I/O.
The information stored in the SGA is divided into three memory structures:
1) Database buffers cache.
2) Redo log buffers.
3) Shared pool.
Database buffer cache: The database buffers store the most recently used blocks of data; the set of database buffers in an instance is the database buffer cache. The buffer cache contains modified as well as unmodified blocks. Because the most recently and most frequently used data is kept in memory, system performance improves through reduced disk I/O.
Redo log buffers: The redo log buffer stores redo entries, a log of changes made to the database. The redo entries stored in the redo log buffers are written to an online redo log, a set of two or more files that record all the changes made to Oracle datafiles and control files.
Shared pool: The shared pool caches various constructs that can be shared among users, for example the shared SQL area: SQL statements are cached so that they can be reused, and stored procedures can be cached for faster access. Note that in previous versions an "out of memory" error occurred if the shared pool was full; in Oracle 10g this does not happen, because it provides automatic shared memory tuning.
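The SGA and its components can be inspected from SQL*Plus; V$SGA and V$SGASTAT are standard Oracle dynamic performance views:

```sql
-- Overall SGA size (SQL*Plus command):
SHOW SGA

-- Component sizes, including the buffer cache and redo buffers:
SELECT name, value FROM v$sga;

-- Space usage within the shared pool and other pools:
SELECT pool, SUM(bytes) AS bytes
  FROM v$sgastat
 GROUP BY pool;
```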
Program global area
The PGA is a memory buffer that contains data and control information for a server process, i.e. a process that services a client's requests. The PGA is a non-shared area of memory created by Oracle when a server process is started, and its contents depend on the Oracle configuration.
The basic difference between SGA and PGA is that PGA cannot be shared between multiple processes in the sense that it is used only for requirements of a particular process whereas the SGA is used for the whole instance and it is shared.
Processes
The relationships between the database's physical and memory structures are maintained and enforced by background processes. These are the database's own processes, and their number may vary depending on your database configuration. Trace files are created only when a background process encounters a problem. Some of the background processes are:
SMON:
SMON stands for System Monitor. This database background process performs instance recovery at instance startup. In a multiple-instance system, the SMON of one instance can also perform instance recovery for other instances that have failed. SMON also cleans up temporary segments that are no longer in use and recovers dead transactions skipped during crash and instance recovery because of file-read or offline errors. It also coalesces, i.e. combines, contiguous free extents into larger free extents.
PMON:
PMON stands for Process Monitor. This database background process cleans up failed user processes. PMON is responsible for releasing locks, cleaning up the cache, and freeing resources that the failed process was using. Its effect can be seen when a process holding a lock is killed.
DBWR:
The DBWR (Database Writer) background process is responsible for managing the contents of the data block buffer cache and dictionary cache. DBWR performs batch writes of changed blocks. Since Oracle uses write-ahead logging, DBWR does not need to write blocks when a transaction commits. In the most common case, DBWR writes only when more data needs to be read into the system global area and too few database buffers are free. The least recently used data is written to the datafiles first.
Although there is only one SMON and one PMON process running per database instance, one can have multiple DBWR processes running at the same time. The number of DBWR processes is set via the DB_WRITER_PROCESSES initialization parameter.
LGWR:
The LGWR (Log Writer) background process manages the writing of the contents of the redo log buffer to the online redo log files. LGWR writes the log entries in batches. The redo log buffer entries always contain the most up-to-date status of the database. Note that LGWR is the only process that writes to the online redo log files, and the only one that directly reads the redo log buffer during normal database operation.
Archiver (ARCH):
The Archiver process reads the redo log files once Oracle has filled them and writes a copy of the used redo log files to the specified archive log destination(s). For most databases, ARCH has no effect on overall system performance; on some large database sites, however, archiving can have an impact. Up to ten ARCn processes can be specified for each database instance; LGWR will start additional archivers as needed, based on the load, up to the limit specified by the initialization parameter LOG_ARCHIVE_MAX_PROCESSES.
Checkpoint process (CKPT):
All modified information in the database buffers of the SGA is written to the datafiles by a database writer process (DBWR); this event is called a checkpoint. The checkpoint process is responsible for signaling DBWR at checkpoints and updating all of the datafiles and control files of the database.
Recover (RECO)
The recover process automatically cleans up failed or suspended distributed transactions.
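The background processes described above can be listed from the data dictionary; V$BGPROCESS is a standard Oracle dynamic performance view, and the PADDR filter is the usual idiom for showing only running processes:

```sql
-- List the background processes actually running in this instance.
SELECT name, description
  FROM v$bgprocess
 WHERE paddr <> '00';
```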
Frequently Asked Questions
1. What is a schema?
2. What is an instance?
3. What is a tablespace?
4. What are the differences between SGA and PGA?
5. What is the significance of background processes?
6. What is the difference between instance and database?
7. Can I have multiple instances of database?
8. What is the use of control files?
9. What is the use of redo log files?
10. What is checkpoint process?
Oracle 10g Free Training: Creating a Database Using the Database Configuration Assistant (DBCA): This tutorial explains how you can create a database using the Database Configuration Assistant (DBCA).
To create a database using the Database Configuration Assistant, go to the Start button and click Programs -> Oracle -> Configuration and Migration Tools -> Database Configuration Assistant. You will get a welcome screen as below.
Click the Next button to get the Database Templates page, and select the type of database template to be used in creating the database. You can click Show Details to see the configuration for each type of database. Choose the template suited to the type of workload your database will support; if you are not sure, select the default general-purpose template. Click Next.
Click the Next button to continue. Here you have to enter the GLOBAL DATABASE NAME and SID.
Click the Next button. By default, the Database with Enterprise Manager option is selected for database management, or you can choose Grid Control for database management if it suits your configuration.
Click the next button, enter the password.
Click the Next button. Select File System, Automatic Storage Management, or Raw Devices, whichever best suits your system.
Click the Next button and specify the location for the creation of the datafiles. Choose one of the following: Use Database File Locations from Template, Use Common Location for All Database Files, or Use Oracle-Managed Files.
Click the Next button. Select Flash Recovery Area and specify a directory location and size.
Click the Next button. Click the Sample Schemas tab and select Sample Schemas if you want to include the Sample Schemas (EXAMPLE) tablespace in your database.
Click the Next button. The Memory page appears; keep the defaults.
Click the Next button. The Database Storage page appears; accept the configuration or make changes as needed.
Click the Next button. Select Create Database to create your database. You can also select Save as a Database Template to save your configuration.
Click the Finish button; the confirmation screen will appear.
Click OK.
After the database is created, you can change or unlock your passwords or click exit.
Oracle 10g Training - Triggers And Triggering Events
In this tutorial you will learn about Triggers, Triggering Events, Types of Triggers - DML Triggers, System Triggers, Instead-of Triggers - and Syntax for Trigger Creation.
Triggers:
Triggers are named PL/SQL blocks that get executed implicitly when a triggering event occurs. Rather than being executed when called (as is the case with procedures and functions), triggers fire automatically when certain events occur in the system. The action of executing a trigger is called 'firing' the trigger. A trigger fires when a triggering event occurs.
Triggering Events:
Triggering events are events, caused by users' actions or by system events, that cause a trigger to be fired. Triggering events can be insertion, deletion, update, and so on. When any of these events occurs, the triggers written on that event are executed implicitly.
Types of Triggers:
Although triggers can be classified in many ways, there are basically three types of triggers:
DML Triggers:
DML triggers are fired by the execution of a DML statement. The DML triggers can be defined on insert, update or delete operations. Whenever a DML operation occurs on a table, the trigger will execute. Also, the triggers can be created in such a way that they get executed either before or after the DML operation occurs.
System Triggers:
System triggers fire when a system event such as a database startup or shutdown happens. System triggers can also be fired on DDL operations such as create table.
Instead-of Triggers:
Instead-of triggers can be defined only on operations performed on views. When you define an instead-of trigger on an operation on a view, the trigger code is executed instead of the operation that fired it. Triggers of this type can only be row-level.
Syntax for trigger creation:
The syntax for trigger creation is as below:
CREATE [OR REPLACE] TRIGGER trigger_name
{BEFORE/AFTER/INSTEAD OF} triggering_event
[WHEN trigger_condition]
[FOR EACH ROW]
trigger_body;
Below is an example of creating a trigger:
Figure -4 Trigger Creation
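As a sketch of the syntax above (the trigger name is illustrative, and the emp table is the one used elsewhere in this tutorial), a row-level DML trigger with a WHEN condition might look like this:

```sql
-- A BEFORE UPDATE row-level trigger on emp (illustrative names).
-- :OLD and :NEW hold the row values before and after the update;
-- inside the WHEN clause they are referenced without colons.
CREATE OR REPLACE TRIGGER emp_sal_audit
BEFORE UPDATE OF sal ON emp
FOR EACH ROW
WHEN (NEW.sal < OLD.sal)
BEGIN
  DBMS_OUTPUT.PUT_LINE('Salary of ' || :OLD.ename ||
                       ' reduced from ' || :OLD.sal ||
                       ' to ' || :NEW.sal);
END;
/
```

The trigger fires automatically on every UPDATE of the sal column that satisfies the WHEN condition; no explicit call is needed.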
Frequently Asked Questions
1. What are the various PL/SQL objects?
2. What is a package?
3. What is a procedure?
4. What is a function?
5. What is a trigger?
6. What are the differences between procedures and functions?
7. Where are the procedures and functions stored?
8. What are triggering events?
9. What are the various types of triggers?
10. What are instead-of triggers?
Oracle 10g Free Training - Creating Views, Altering, Dropping and Replacing Views
In this training session you will learn about Views. This section covers Creating Views - Join Views, Expansion of Defining Queries at View Creation Time and Creating Views with Errors - Altering Views, Dropping Views, and Replacing Views and the different ways in which you can replace views.
Views
About Views
A view is a logical representation of another table or combination of tables. A view derives its data from the tables on which it is based. These tables are called base tables. Base tables might in turn be actual tables or might be views themselves. All operations performed on a view actually affect the base table of the view. You can use views in almost the same way as tables. You can query, update, insert into, and delete from views, just as you can standard tables.
Views can provide a different representation (such as subsets or supersets) of the data that resides within other tables and views. Views are very powerful because they allow you to tailor the presentation of data to different types of users.
Creating Views
To create a view, you must meet the following requirements:
· To create a view in your schema, you must have the CREATE VIEW privilege. To create a view in another user's schema, you must have the CREATE ANY VIEW system privilege. You can acquire these privileges explicitly or through a role.
· The owner of the view (whether it is you or another user) must have been explicitly granted privileges to access all objects referenced in the view definition. The owner cannot have obtained these privileges through roles. Also, the functionality of the view is dependent on the privileges of the view owner. For example, if the owner of the view has only the INSERT privilege for Scott's emp table, the view can only be used to insert new rows into the emp table, not to SELECT, UPDATE, or DELETE rows.
· If the owner of the view intends to grant access to the view to other users, the owner must have received the object privileges to the base objects with the GRANT OPTION or the system privileges with the ADMIN OPTION.
You can create views using the CREATE VIEW statement. Each view is defined by a query that references tables, materialized views, or other views. As with all subqueries, the query that defines a view cannot contain the FOR UPDATE clause.
The following statement creates a view on a subset of data in the emp table:
CREATE VIEW sales_staff AS
SELECT empno, ename, deptno
FROM emp
WHERE deptno = 10
WITH CHECK OPTION CONSTRAINT sales_staff_cnst;
Figure 13. Creating Views with check option
The query that defines the sales_staff view references only rows in department 10. Furthermore, the CHECK OPTION creates the view with the constraint (named sales_staff_cnst) that INSERT and UPDATE statements issued against the view cannot result in rows that the view cannot select. For example, the following INSERT statement successfully inserts a row into the emp table by means of the sales_staff view, which contains all rows with department number 10:
INSERT INTO sales_staff VALUES (7584, 'OSTER', 10);
However, the following INSERT statement returns an error because it attempts to insert a row for department number 30, which cannot be selected using the sales_staff view:
INSERT INTO sales_staff VALUES (7591, 'WILLIAMS', 30);
Figure 14. Error because the specified row cannot be selected through the view
The view could optionally have been constructed specifying the WITH READ ONLY clause, which prevents any updates, inserts, or deletes from being done to the base table through the view. If no WITH clause is specified, the view, with some restrictions, is inherently updatable.
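The WITH READ ONLY variant mentioned above can be sketched as follows (the view name is illustrative, reusing the emp table from the earlier example):

```sql
-- A read-only view; any INSERT, UPDATE, or DELETE issued through
-- this view returns an error.
CREATE VIEW sales_staff_ro AS
  SELECT empno, ename, deptno
  FROM emp
  WHERE deptno = 10
  WITH READ ONLY;
```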
Join Views
You can also create views that specify more than one base table or view in the FROM clause. These are called join views. The following statement creates the division1_staff view that joins data from the emp and dept tables:
CREATE VIEW division1_staff AS
SELECT ename, empno, job, dname
FROM emp, dept
WHERE emp.deptno IN (10, 30)
AND emp.deptno = dept.deptno;
Figure 15. Creating a join view
An updatable join view is a join view where UPDATE, INSERT, and DELETE operations are allowed.
Expansion of Defining Queries at View Creation Time
When a view is created, Oracle Database expands any wildcard (*) in a top-level view query into a column list. The resulting query is stored in the data dictionary; any subqueries are left intact. The column names in an expanded column list are enclosed in quote marks to account for the possibility that the columns of the base object were originally entered with quotes and require them for the query to be syntactically correct.
As an example, assume that the dept view is created as follows:
CREATE VIEW dept AS SELECT * FROM scott.dept;
The database stores the defining query of the dept view as:
SELECT "DEPTNO", "DNAME", "LOC" FROM scott.dept;
Views created with errors do not have wildcards expanded. However, if the view is eventually compiled without errors, wildcards in the defining query are expanded.
Creating Views with Errors
If there are no syntax errors in a CREATE VIEW statement, the database can create the view even if the defining query of the view cannot be executed. In this case, the view is considered "created with errors." For example, when a view is created that refers to a nonexistent table or an invalid column of an existing table, or when the view owner does not have the required privileges, the view can be created anyway and entered into the data dictionary. However, the view is not yet usable.
To create a view with errors, you must include the FORCE clause of the CREATE VIEW statement.
CREATE FORCE VIEW AS ...;
By default, views with errors are created as INVALID. When you try to create such a view, the database returns a message indicating the view was created with errors. If conditions later change so that the query of an invalid view can be executed, the view can be recompiled and be made valid (usable).
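As a sketch (the view and table names here are hypothetical), the following creates a view whose base table does not yet exist; the statement succeeds only because of the FORCE clause, and the view stays INVALID until the table appears and the view is recompiled:

```sql
-- missing_tab does not exist yet, so without FORCE this statement
-- would fail; with FORCE the view is created but marked INVALID.
CREATE FORCE VIEW pending_view AS
  SELECT * FROM missing_tab;
```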
Altering Views
You use the ALTER VIEW statement only to explicitly recompile a view that is invalid. The ALTER VIEW statement lets you locate recompilation errors before run time. To ensure that an alteration does not affect the view or other objects that depend on it, you can explicitly recompile a view after altering one of its base tables.
To use the ALTER VIEW statement, the view must be in your schema, or you must have the ALTER ANY TABLE system privilege.
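For example, reusing the sales_staff view created earlier, an explicit recompilation looks like this:

```sql
-- Recompile the view after altering one of its base tables; any
-- recompilation errors surface now rather than at run time.
ALTER VIEW sales_staff COMPILE;
```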
Dropping Views
You can drop any view contained in your schema. To drop a view in another user's schema, you must have the DROP ANY VIEW system privilege. Drop a view using the DROP VIEW statement. For example, the following statement drops the emp_dept view:
DROP VIEW emp_dept;
Replacing Views
To replace a view, you must have all the privileges required to drop and create a view. If the definition of a view must change, the view must be replaced; you cannot use an ALTER VIEW statement to change the definition of a view. You can replace views in the following ways:
· You can drop and re-create the view.
Caution: When a view is dropped, all grants of corresponding object privileges are revoked from roles and users. After the view is re-created, privileges must be regranted.
· You can redefine the view with a CREATE VIEW statement that contains the OR REPLACE clause. The OR REPLACE clause replaces the current definition of a view and preserves the current security authorizations. For example, assume that you created the sales_staff view as shown earlier, and, in addition, you granted several object privileges to roles and other users. However, now you need to redefine the sales_staff view to change the department number specified in the WHERE clause. You can replace the current version of the sales_staff view with the following statement:
CREATE OR REPLACE VIEW sales_staff AS SELECT empno, ename, deptno
FROM emp
WHERE deptno = 30
WITH CHECK OPTION CONSTRAINT sales_staff_cnst;
Figure 16. Creating views with force option and replacing views
Before replacing a view, consider the following effects:
· Replacing a view replaces the view definition in the data dictionary. None of the underlying objects referenced by the view are affected.
· If a constraint in the CHECK OPTION was previously defined but not included in the new view definition, the constraint is dropped.
· All views and PL/SQL program units dependent on a replaced view become invalid (not usable).
Oracle 10g Tutorials - Identifying PL/SQL Objects
In this tutorial you will learn about identifying PL/SQL Objects like Packages, Procedures and Functions using SQL Plus based examples and illustrations.
Document Summary
This document is a simple guide to understanding PL/SQL objects such as stored procedures, functions, triggers etc. This guide provides insight into creating the above-mentioned objects using SQL Plus.
Identifying PL/SQL Objects
PL/SQL objects are named blocks of PL/SQL with declarative, executable and exception handling sections. The PL/SQL objects are stored in the data dictionary with a name and can be reused. PL/SQL objects include Packages, Stored Procedures, Functions and Triggers.
Packages:
A Package is a PL/SQL construct that allows related objects to be stored together. Apart from grouping related objects, packages also provide many more advantages, such as improved performance and the ability to reference the PL/SQL objects contained within a package from other PL/SQL blocks.
A Package consists of two portions:
a) Package Specification
Package Specification consists of information about the contents of the package. Essentially, package specification contains ‘forward declarations’ of the procedures and functions in that package but does not contain the codes for any of these subprograms.
The syntax for creating package specification is as below:
CREATE [OR REPLACE] PACKAGE package_name {IS/AS}
type_definition/
procedure_specification/
function_specification/
variable_declaration/
exception_declaration/
cursor_declaration/
pragma_declaration
END [package_name];
b) Package Body
Package Body contains the actual code for the subprograms in the package. Package body is a separate data dictionary object from the package header. The package body cannot be compiled without the package specification being compiled successfully. The package body should compulsorily contain the definition for all sub program definitions contained in the package specification.
The syntax for creating the package body is as below:
CREATE [OR REPLACE] PACKAGE BODY package_name {IS/AS}
procedure_definitions
function_definitions
END [package_name];
Below is an example of creating a package:
Figure -1 Package Creation
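As a sketch of the two portions working together (the package name and subprograms here are illustrative, built over the emp table used elsewhere in this tutorial):

```sql
-- Specification: declares what the package exposes.
CREATE OR REPLACE PACKAGE emp_pkg IS
  FUNCTION  get_sal   (p_empno IN NUMBER) RETURN NUMBER;
  PROCEDURE raise_sal (p_empno IN NUMBER, p_amount IN NUMBER);
END emp_pkg;
/

-- Body: supplies the code for every subprogram declared above.
CREATE OR REPLACE PACKAGE BODY emp_pkg IS
  FUNCTION get_sal (p_empno IN NUMBER) RETURN NUMBER IS
    v_sal emp.sal%TYPE;
  BEGIN
    SELECT sal INTO v_sal FROM emp WHERE empno = p_empno;
    RETURN v_sal;
  END get_sal;

  PROCEDURE raise_sal (p_empno IN NUMBER, p_amount IN NUMBER) IS
  BEGIN
    UPDATE emp SET sal = sal + p_amount WHERE empno = p_empno;
  END raise_sal;
END emp_pkg;
/
```

Other PL/SQL blocks can then call the subprograms by qualifying them with the package name, for example emp_pkg.get_sal(7584).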
Procedures:
A Procedure is a named PL/SQL block with a declarative section, an executable section and an exception handling section. Procedures are stored in the data dictionary and are executed whenever they are invoked from another PL/SQL block. Procedures can be called from any PL/SQL block and arguments can be passed while calling. A procedure call is a PL/SQL statement and cannot be a part of an expression.
The syntax for creating a procedure is as below:
CREATE [OR REPLACE] procedure procedure_name
[(argument_1[{IN/OUT/IN OUT}] data_type,
…
…
…
…
argument_n[{IN/OUT/IN OUT}] data_type)] {IS/AS}
/*declarative section*/
BEGIN
/*executable section*/
EXCEPTION
/*exception section*/
END [procedure_name];
Below is an example of creating a procedure:
Figure -2 Procedure Creation
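As a sketch of the syntax above (the procedure name is illustrative; emp is the table used elsewhere in this tutorial):

```sql
CREATE OR REPLACE PROCEDURE raise_salary (
  p_empno  IN NUMBER,
  p_amount IN NUMBER
) IS
BEGIN
  UPDATE emp
     SET sal = sal + p_amount
   WHERE empno = p_empno;
  -- SQL%ROWCOUNT tells us whether any row matched the key
  IF SQL%ROWCOUNT = 0 THEN
    DBMS_OUTPUT.PUT_LINE('No such employee: ' || p_empno);
  END IF;
END raise_salary;
/

-- A procedure call is a PL/SQL statement by itself:
BEGIN
  raise_salary(7584, 500);
END;
/
```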
Functions:
A Function, just like a procedure, is a named PL/SQL block that has an executable section and an exception handling section. Functions, too, are stored in the data dictionary and can be called from other PL/SQL blocks when required. Functions can also have parameters, and arguments can be passed while calling them.
There are some differences between procedures and functions, though. A function can only be called as part of an expression (as opposed to a procedure call, which is a PL/SQL statement by itself). A function body contains a 'return' statement that returns the result of the function execution to the calling expression, so a function has a return type that must be declared in the function header.
The syntax for creating a function is as below:
CREATE [OR REPLACE] FUNCTION function_name
[( argument_1 [{IN/OUT/IN OUT}] data_type,
…
…
…
…
argument_n [{IN/OUT/IN OUT}] data_type)]
RETURN return_type {IS/AS}
/*declarative section*/
BEGIN
/*executable section*/
RETURN expression;
EXCEPTION
/*exception section*/
END [function_name];
Below is an example of creating a function:
Figure -3 Function Creation
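As a sketch (the function name is illustrative; emp is the table used elsewhere in this tutorial), note the RETURN type in the header and the RETURN statement in the body:

```sql
CREATE OR REPLACE FUNCTION annual_sal (p_empno IN NUMBER)
  RETURN NUMBER IS
  v_sal emp.sal%TYPE;
BEGIN
  SELECT sal INTO v_sal FROM emp WHERE empno = p_empno;
  RETURN v_sal * 12;   -- result handed back to the calling expression
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    RETURN NULL;
END annual_sal;
/

-- Unlike a procedure, a function is called as part of an expression:
SELECT ename, annual_sal(empno) FROM emp;
```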
Oracle 10g Free Training - SQL*Loader Discarded and Rejected Records
In this tutorial you will learn about Discarded and Rejected Records - The Bad File, SQL*Loader Rejects, Oracle Database Rejects, The Discard File, and Log File and Logging Information.
Discarded and Rejected Records
Records read from the input file might not be inserted into the database. Such records are placed in either a bad file or a discard file.
The Bad File
The bad file contains records that were rejected, either by SQL*Loader or by the Oracle database. Some of the possible reasons for rejection are discussed in the next sections.
SQL*Loader Rejects
Datafile records are rejected by SQL*Loader when the input format is invalid. For example, if the second enclosure delimiter is missing, or if a delimited field exceeds its maximum length, SQL*Loader rejects the record. Rejected records are placed in the bad file.
Oracle Database Rejects
After a datafile record is accepted for processing by SQL*Loader, it is sent to the Oracle database for insertion into a table as a row. If the Oracle database determines that the row is valid, then the row is inserted into the table. If the row is determined to be invalid, then the record is rejected and SQL*Loader puts it in the bad file. The row may be invalid, for example, because a key is not unique, because a required field is null, or because the field contains invalid data for the Oracle datatype.
The Discard File
As SQL*Loader executes, it may create a file called the discard file. This file is created only when it is needed, and only if you have specified that a discard file should be enabled. The discard file contains records that were filtered out of the load because they did not match any record-selection criteria specified in the control file.
The discard file therefore contains records that were not inserted into any table in the database. You can specify the maximum number of such records that the discard file can accept. Data written to any database table is not written to the discard file.
Log File and Logging Information
When SQL*Loader begins execution, it creates a log file. If it cannot create a log file, execution terminates. The log file contains a detailed summary of the load, including a description of any errors that occurred during the load.
The other loading parameters, such as the log file, the bad file, the discard file, and the maximum number of errors, are specified in the screen below.
The screen below is the final screen used to initiate the data load process using SQL*Loader. The data load can be scheduled to run immediately or at a specific later time.
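The bad file, discard file, and record-selection criteria described above are all declared in the SQL*Loader control file. A minimal sketch (the filenames, table, and field names are illustrative):

```sql
-- SQL*Loader control file (illustrative). Records failing the WHEN
-- test are written to the discard file; malformed or database-
-- rejected records are written to the bad file.
LOAD DATA
  INFILE      'emp.dat'
  BADFILE     'emp.bad'
  DISCARDFILE 'emp.dsc'
  INTO TABLE emp
  WHEN (deptno = '10')
  FIELDS TERMINATED BY ','
  (empno, ename, deptno)
```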
Oracle 10g Free Training - Creating Index-Organized Tables
Let us learn about Creating Index-Organized Tables: first Creating an Index-Organized Table, then Creating Index-Organized Tables that Contain Object Types. You will also learn how to view information about tables.
Creating Index-Organized Tables
You use the CREATE TABLE statement to create index-organized tables, but you must provide additional information:
· An ORGANIZATION INDEX qualifier, which indicates that this is an index-organized table.
· A primary key, specified through a column constraint clause (for a single-column primary key) or a table constraint clause (for a multiple-column primary key).
Optionally, you can specify the following:
· An OVERFLOW clause, which preserves dense clustering of the B-tree index by storing the row column values exceeding a specified threshold in a separate overflow data segment.
· A PCTTHRESHOLD value, which defines the percentage of space reserved in the index block for an index-organized table. Any portion of the row that exceeds the specified threshold is stored in the overflow segment. In other words, the row is broken at a column boundary into two pieces, a head piece and tail piece. The head piece fits in the specified threshold and is stored along with the key in the index leaf block. The tail piece is stored in the overflow area as one or more row pieces. Thus, the index entry contains the key value, the nonkey column values that fit the specified threshold, and a pointer to the rest of the row.
· An INCLUDING clause, which can be used to specify nonkey columns that are to be stored in the overflow data segment.
Creating an Index-Organized Table
The following statement creates an index-organized table:
CREATE TABLE admin_docindex(
token char(20),
doc_id NUMBER,
token_frequency NUMBER,
token_offsets VARCHAR2(512),
CONSTRAINT pk_admin_docindex PRIMARY KEY (token, doc_id))
ORGANIZATION INDEX
TABLESPACE admin_tbs
PCTTHRESHOLD 20
OVERFLOW TABLESPACE admin_tbs2;
Figure 12. Creating Index organized table
Specifying ORGANIZATION INDEX causes the creation of an index-organized table, admin_docindex, where the key columns and nonkey columns reside in an index defined on columns that designate the primary key or keys for the table. In this case, the primary keys are token and doc_id. An overflow segment is specified and is discussed in "Using the Overflow Clause ".
Creating Index-Organized Tables that Contain Object Types
Index-organized tables can store object types. The following example creates object type admin_typ, then creates an index-organized table containing a column of object type admin_typ:
CREATE OR REPLACE TYPE admin_typ AS OBJECT
(col1 NUMBER, col2 VARCHAR2(6));
CREATE TABLE admin_iot (c1 NUMBER primary key, c2 admin_typ)
ORGANIZATION INDEX;
You can also create an index-organized table of object types. For example:
CREATE TABLE admin_iot2 OF admin_typ (col1 PRIMARY KEY)
ORGANIZATION INDEX;
The following example shows that index-organized tables store nested tables efficiently. For a nested table column, the database internally creates a storage table to hold all the nested table rows.
CREATE TYPE project_t AS OBJECT(pno NUMBER, pname VARCHAR2(80));
/
CREATE TYPE project_set AS TABLE OF project_t;
/
CREATE TABLE proj_tab (eno NUMBER, projects PROJECT_SET)
NESTED TABLE projects STORE AS emp_project_tab
((PRIMARY KEY(nested_table_id, pno))
ORGANIZATION INDEX)
RETURN AS LOCATOR;
The rows belonging to a single nested table instance are identified by a nested_table_id column. If an ordinary table is used to store nested table columns, the nested table rows typically get de-clustered. But when you use an index-organized table, the nested table rows can be clustered based on the nested_table_id column.
Viewing Information About Tables
The following views allow you to access information about tables:
View | Description |
DBA_TABLES ALL_TABLES USER_TABLES | DBA view describes all relational tables in the database. ALL view describes all tables accessible to the user. USER view is restricted to tables owned by the user. Some columns in these views contain statistics that are generated by the DBMS_STATS package or ANALYZE statement. |
DBA_TAB_COLUMNS ALL_TAB_COLUMNS USER_TAB_COLUMNS | These views describe the columns of tables, views, and clusters in the database. Some columns in these views contain statistics that are generated by the DBMS_STATS package or ANALYZE statement. |
DBA_ALL_TABLES ALL_ALL_TABLES USER_ALL_TABLES | These views describe all relational and object tables in the database. Object tables are not specifically discussed in this book. |
DBA_TAB_COMMENTS ALL_TAB_COMMENTS USER_TAB_COMMENTS | These views display comments for tables and views. Comments are entered using the COMMENT statement. |
DBA_COL_COMMENTS ALL_COL_COMMENTS USER_COL_COMMENTS | These views display comments for table and view columns. Comments are entered using the COMMENT statement. |
DBA_EXTERNAL_TABLES ALL_EXTERNAL_TABLES USER_EXTERNAL_TABLES | These views list the specific attributes of external tables in the database. |
DBA_EXTERNAL_LOCATIONS ALL_EXTERNAL_LOCATIONS USER_EXTERNAL_LOCATIONS | These views list the data sources for external tables. |
DBA_TAB_HISTOGRAMS ALL_TAB_HISTOGRAMS USER_TAB_HISTOGRAMS | These views describe histograms on tables and views. |
DBA_TAB_COL_STATISTICS ALL_TAB_COL_STATISTICS USER_TAB_COL_STATISTICS | These views provide column statistics and histogram information extracted from the related TAB_COLUMNS views. |
DBA_TAB_MODIFICATIONS ALL_TAB_MODIFICATIONS USER_TAB_MODIFICATIONS | These views describe tables that have been modified since the last time table statistics were gathered on them. They are not populated immediately, but after a time lapse (usually 3 hours). |
DBA_UNUSED_COL_TABS ALL_UNUSED_COL_TABS USER_UNUSED_COL_TABS | These views list tables with unused columns, as marked by the ALTER TABLE ... SET UNUSED statement. |
DBA_PARTIAL_DROP_TABS ALL_PARTIAL_DROP_TABS USER_PARTIAL_DROP_TABS | These views list tables that have partially completed DROP COLUMN operations. These operations could be incomplete because the operation was interrupted by the user or a system failure. |
Column information, such as name, datatype, length, precision, scale, and default data values, can be listed using one of the views ending with the _COLUMNS suffix. For example, the following query lists column details for all tables owned by a given user:
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, DATA_LENGTH, LAST_ANALYZED FROM DBA_TAB_COLUMNS
WHERE OWNER = 'SYSTEM'
ORDER BY TABLE_NAME;
The following is the output from the query:
TABLE_NAME | COLUMN_NAME | DATA_TYPE | DATA_LENGTH | LAST_ANALYZED |
COUNTRIES | COUNTRY_ID | CHAR | 2 | 05-FEB-03 |
COUNTRIES | COUNTRY_NAME | VARCHAR2 | 40 | 05-FEB-03 |
COUNTRIES | REGION_ID | NUMBER | 22 | 05-FEB-03 |
DEPARTMENTS | DEPARTMENT_ID | NUMBER | 22 | 05-FEB-03 |
DEPARTMENTS | DEPARTMENT_NAME | VARCHAR2 | 30 | 05-FEB-03 |
DEPARTMENTS | MANAGER_ID | NUMBER | 22 | 05-FEB-03 |
DEPARTMENTS | LOCATION_ID | NUMBER | 22 | 05-FEB-03 |
EMPLOYEES | EMPLOYEE_ID | NUMBER | 22 | 05-FEB-03 |
EMPLOYEES | FIRST_NAME | VARCHAR2 | 20 | 05-FEB-03 |
EMPLOYEES | LAST_NAME | VARCHAR2 | 25 | 05-FEB-03 |
EMPLOYEES | EMAIL | VARCHAR2 | 25 | 05-FEB-03 |
. | ||||
. | ||||
. | ||||
LOCATIONS | COUNTRY_ID | CHAR | 2 | 05-FEB-03 |
REGIONS | REGION_ID | NUMBER | 22 | 05-FEB-03 |
REGIONS | REGION_NAME | VARCHAR2 | 25 | 05-FEB-03 |
Oracle 10g Free Training : Creating Tables
In this training you will learn about Creating Tables and Parallelizing Table Creation.
Creating Tables
To create a new table in your schema, you must have the CREATE TABLE system privilege. To create a table in another user's schema, you must have the CREATE ANY TABLE system privilege. Additionally, the owner of the table must have a quota for the tablespace that contains the table, or the UNLIMITED TABLESPACE system privilege. Create tables using the SQL statement CREATE TABLE.
Creating a Table
When you issue the following statement, you create a table named employee in your default schema and default tablespace. The code below can be executed through either SQL*Plus or iSQL*Plus.
CREATE TABLE employee (
    empno    NUMBER(5) PRIMARY KEY,
    ename    VARCHAR2(15) NOT NULL,
    job      VARCHAR2(10),
    mgr      NUMBER(5),
    hiredate DATE DEFAULT (sysdate),
    sal      NUMBER(7,2),
    comm     NUMBER(7,2),
    deptno   NUMBER(3) NOT NULL
);
Figure 1. Table creation through SQL*PLUS
Parallelizing Table Creation
When you specify the AS SELECT clause to create a table and populate it with data from another table, you can utilize parallel execution. The CREATE TABLE ... AS SELECT statement contains two parts: a CREATE part (DDL) and a SELECT part (query). Oracle Database can parallelize both parts of the statement. The CREATE part is parallelized if one of the following is true:
· A PARALLEL clause is included in the CREATE TABLE ... AS SELECT statement.
· An ALTER SESSION FORCE PARALLEL DDL statement is specified.
If you parallelize the creation of a table, that table then has a parallel declaration (the PARALLEL clause) associated with it. Any subsequent DML or queries on the table, for which parallelization is possible, will attempt to use parallel execution.
The following simple statement parallelizes the creation of a table and stores the result in a compressed format, using table compression:
CREATE TABLE admin_emp_dept
    PARALLEL COMPRESS
    AS SELECT * FROM employee
    WHERE deptno = 10;
Figure 2. Parallelizing Table creation through SQL*PLUS
Oracle 10g Free Training - Dropping Tables
In this training you will learn about Dropping Tables, Consequences of Dropping a Table, the CASCADE Clause and the PURGE Clause.
Dropping Tables
To drop a table that you no longer need, use the DROP TABLE statement. The table must be contained in your schema, or you must have the DROP ANY TABLE system privilege.
Caution: Before dropping a table, familiarize yourself with the consequences of doing so:
· Dropping a table removes the table definition from the data dictionary. All rows of the table are no longer accessible.
· All indexes and triggers associated with a table are dropped.
· All views and PL/SQL program units dependent on a dropped table remain, yet become invalid (not usable).
· All synonyms for a dropped table remain, but return an error when used.
· All extents allocated for a table that is dropped are returned to the free space of the tablespace and can be used by any other object requiring new extents or new objects. All rows corresponding to a clustered table are deleted from the blocks of the cluster.
The following statement drops the admin_emp_dept table:
DROP TABLE admin_emp_dept;
Figure 3. Drop Table
If the table to be dropped contains any primary or unique keys referenced by foreign keys of other tables and you intend to drop the FOREIGN KEY constraints of the child tables, then include the CASCADE clause in the DROP TABLE statement, as shown below:
DROP TABLE admin_emp_dept CASCADE CONSTRAINTS;
Figure 4. Drop Table Cascade Constraints
When you drop a table, normally the database does not immediately release the space associated with the table. Rather, the database renames the table and places it in a recycle bin, where it can later be recovered with the FLASHBACK TABLE statement if you find that you dropped the table in error. If you want to immediately release the space associated with the table at the time you issue the DROP TABLE statement, include the PURGE clause as shown in the following statement:
DROP TABLE admin_emp_dept PURGE;
Figure 5. Drop Table Purge
Perhaps instead of dropping a table, you want to truncate it. The TRUNCATE statement provides a fast, efficient method for deleting all rows from a table, but it does not affect any structures associated with the table being truncated (column definitions, constraints, triggers, and so forth) or authorizations.
Oracle 10g Tutorials : Oracle Net Services
In this tutorial you will learn about Oracle Net Services - An Overview, Using Oracle Net Manager, Creating Listeners, Choosing General Parameters, and enabling tracing and logging for database transactions. We will also learn how to use TNSPING to test Oracle Net connectivity.
Document Summary
This document gives an overview of Oracle Net Services. It also explains the uses of Oracle Net Manager.
Oracle Net Services Overview
Oracle Net Services provides enterprise-wide connectivity solutions in distributed, heterogeneous computing environments. Oracle Net Services eases the complexities of network configuration and management, maximizes performance, and improves network diagnostic capabilities. Oracle Net, a component of Oracle Net Services, enables a network session from a client application to an Oracle database server. Once a network session is established, Oracle Net acts as a data courier for the client application and the database server. It is responsible for establishing and maintaining the connection between the client application and database server, as well as exchanging messages between them. Oracle Net is able to perform these jobs because it is located on each computer in the network.
Network sessions are established with the help of a listener. The listener is a separate process that resides on the database server. The listener receives incoming client connection requests and manages the traffic of these requests to the server. The listener brokers the client request, handing off the request to the server. Every time a client requests a network session with a server, a listener receives the actual request. If the client's information matches the listener's information, the listener grants a connection to the server.
Use Oracle Net Manager to create and configure listeners
Navigation – Start->Programs->Oracle-Oracle Client10g Home->Configuration and Migration Tools->Net Manager
Once you click on the Net Manager link on the above Navigation Path, following screen will appear in front of you.
Click on the Local link in the above screen and following screen will appear in front of you.
Click on the Listeners tab to see the list of existing listeners.
Please enter the name of the listener you want to create in the following pop up screen. You can choose any user friendly name.
You will see the LISTENER1 created in the following screen. Click on Add Address button to add the Listening location’s address.
Choose General Parameters from the drop down and you will see following screen.
Leave the default values in this screen.
Click the ‘Logging & Tracing’ tab and you will see the following screen.
You can enable tracing and logging for database transactions; you will be required to provide the files in which to store the logs or traces.
Click the ‘Authentication’ tab and you will see the following screen. You can set the password for listener-related operations on this screen.
Choose Database Services from the drop-down and you will see the following screen.
Enter the Global Database Name, SID, and Oracle Home directory to configure the database service for the listener. Save the information and your listener is created and configured. You can change any of the listener's parameters through Oracle Net Manager.
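Behind the scenes, Net Manager saves this configuration in the listener.ora file under the Oracle Home's network/admin directory. A listener set up as above would typically produce entries similar to the following sketch (listener name taken from the steps above; host, port, database names, and paths are hypothetical):

```
LISTENER1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = myhost)(PORT = 1521))
  )

SID_LIST_LISTENER1 =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = orcl.example.com)
      (SID_NAME = orcl)
      (ORACLE_HOME = C:\oracle\product\10.2.0\db_1)
    )
  )
```

The ADDRESS entry corresponds to the listening location added earlier, and the SID_DESC entry corresponds to the database service details entered on this screen.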
Use TNSPING to test Oracle Net connectivity
You can also use the tnsping utility at the command prompt to check Oracle Net connectivity. Follow these steps to check the connection to a particular SID.
1. Open a command prompt.
2. Run tnsping < SID for the database connectivity you want to check >
3. If you get the message “Used TNSNAMES adapter to resolve the alias”, the tnsping utility was able to resolve the service name.
4. If you get the message “TNS-03505: Failed to resolve name”, the tnsping utility was not able to resolve the service name and you need to check the SQLNET.ora and TNSNAMES.ora files on your machine.
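A successful check might look like the following session sketch (service name, host, port, and timing are illustrative, not taken from this tutorial):

```
C:\> tnsping orcl

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)
  (HOST = myhost)(PORT = 1521)) (CONNECT_DATA = (SERVICE_NAME = orcl)))
OK (10 msec)
```

The “Attempting to contact” line echoes the connect descriptor that the alias resolved to, which is useful for spotting a wrong host or port in TNSNAMES.ora.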
Frequently Asked Questions
1. What tasks can be performed using Oracle Net Manager?
2. What is the use of tnsping utility?
3. How do you enable tracing or logging for a particular listener?
4. What is the significance of the “TNS-03505: Failed to resolve name” message from the tnsping utility?
Oracle 10g Free Training - SQL*Loader - Input Data and Datafiles
In this tutorial you will learn about SQL*Loader - Input Data and Datafiles, Fixed Record Format, Variable Record Format and Stream Record Format.
SQL*Loader - Input Data and Datafiles
SQL*Loader reads data from one or more files (or operating system equivalents of files) specified in the control file. From SQL*Loader's perspective, the data in the datafile is organized as records. A particular datafile can be in fixed record format, variable record format, or stream record format. The record format can be specified in the control file with the INFILE parameter. If no record format is specified, the default is stream record format.
Fixed Record Format
A file is in fixed record format when all records in a datafile are the same byte length. Although this format is the least flexible, it results in better performance than variable or stream format. Fixed format is also simple to specify. For example:
INFILE datafile_name "fix n"
Here n is the length in bytes of each record.
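To make this concrete, here is a minimal control file sketch using fixed record format; the table name, column layout, and file name are hypothetical, not from this tutorial:

```
LOAD DATA
INFILE 'emp.dat' "fix 20"
INTO TABLE emp
(empno  POSITION(1:4)   INTEGER EXTERNAL,
 ename  POSITION(5:20)  CHAR)
```

With "fix 20", SQL*Loader treats every 20 bytes of emp.dat as one record, so no terminator scanning is needed; this is what gives fixed format its performance edge.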
Variable Record Format
A file is in variable record format when the length of each record is given in a character field at the beginning of each record in the datafile. This format provides some added flexibility over the fixed record format and a performance advantage over the stream record format. For example, you can specify a datafile that is to be interpreted as being in variable record format as follows:
INFILE "datafile_name" "var n"
Here n specifies the number of bytes in the length field; if n is omitted, SQL*Loader assumes a length of 5 bytes.
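As an illustration, with a 3-byte length field the INFILE clause would look like this (file name hypothetical):

```
INFILE 'emp.dat' "var 3"
```

A record in emp.dat might then read 009hello,cd, where the leading 009 tells SQL*Loader that the record body which follows ("hello,cd,") is 9 bytes long.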
Stream Record Format
A file is in stream record format when the records are not specified by size; instead SQL*Loader forms records by scanning for the record terminator. Stream record format is the most flexible format, but there can be a negative effect on performance. The specification of a datafile to be interpreted as being in stream record format looks similar to the following:
INFILE datafile_name ["str terminator_string"]
The terminator_string is specified as either 'char_string' or X'hex_string' where:
· 'char_string' is a string of characters enclosed in single or double quotation marks
· X'hex_string' is a byte string in hexadecimal format
When the terminator_string contains special (nonprintable) characters, it should be specified as a X'hex_string'. However, some nonprintable characters can be specified as ('char_string') by using a backslash. For example:
· \n indicates a line feed
· \t indicates a horizontal tab
· \f indicates a form feed
· \v indicates a vertical tab
· \r indicates a carriage return
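Putting this together, a datafile whose records are terminated by a pipe character followed by a line feed could be declared like this (file name hypothetical):

```
INFILE 'example.dat' "str '|\n'"
```

For an ASCII-based character set, the same terminator could equivalently be written in hexadecimal as X'7C0A', since the pipe character is byte 0x7C and the line feed is byte 0x0A.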
If the character set specified with the NLS_LANG parameter for your session is different from the character set of the datafile, character strings are converted to the character set of the datafile. This is done before SQL*Loader checks for the default record terminator. Hexadecimal strings are assumed to be in the character set of the datafile, so no conversion is performed.
On UNIX-based platforms, if no terminator_string is specified, SQL*Loader defaults to the line feed character, \n.
On Windows NT, if no terminator_string is specified, then SQL*Loader uses either \n or \r\n as the record terminator, depending on which one it finds first in the datafile. This means that if you know that one or more records in your datafile have \n embedded in a field, but you want \r\n to be used as the record terminator, you must specify it.
The screen below asks for the datafile details if they are not already specified in the control file.
The screen below asks for the load method options when loading data using SQL*Loader.
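Besides the graphical screens, SQL*Loader can also be invoked directly from the command line with the sqlldr utility. A typical invocation looks like the following sketch (the username, password, and file names are hypothetical):

```
sqlldr userid=scott/tiger control=emp.ctl log=emp.log
```

Here the control parameter names the control file (which in turn names the datafile via INFILE, or has the data appended inline), and the log parameter names the file where SQL*Loader writes a summary of the load.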