Use a Single Connection in Oracle- Developing Successful Oracle Applications-2

If you use bind variables, then everyone who submits the same exact query that references the same object will use the compiled plan from the pool. You will compile your subroutine once and use it over and over again. This is very efficient and is the way the database intends you to work. Not only will you use fewer resources (a soft parse is much less resource-intensive), but also you will hold latches for less time and need them less frequently. This increases your performance and greatly increases your scalability.

Just to give you a tiny idea of how huge a difference this can make performance-wise, you only need to run a very small test. In this test, we’ll just be inserting some rows into a table; the simple table we will use is

SQL> drop table t purge;

SQL> create table t ( x int );

Table created.

Now we’ll create two very simple stored procedures. They both will insert the numbers 1 through 10,000 into this table; however, the first procedure uses a single SQL statement with a bind variable:
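A minimal sketch of such a procedure, assuming dynamic SQL with a single bind variable (the exact body of the original test may differ slightly), looks like this:

SQL> create or replace procedure proc1
     as
     begin
         for i in 1 .. 10000
         loop
             execute immediate
                 'insert into t ( x ) values ( :x )' using i;
         end loop;
     end;
     /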

Procedure created.

The second procedure constructs a unique SQL statement for each row to be inserted:

SQL> create or replace procedure proc2 as
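         -- Sketch only: the body is assumed to mirror proc1, except that the
         -- value is concatenated into the SQL text instead of bound, so every
         -- statement is unique and must be hard parsed.
     begin
         for i in 1 .. 10000
         loop
             execute immediate
                 'insert into t ( x ) values ( ' || i || ' )';
         end loop;
     end;
     /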

Now, the only difference between the two is that the first uses a bind variable and the second does not. Both use dynamic SQL, and the logic is otherwise identical.

Note For details on runstats and other utilities, see the “Setting Up Your Environment” section at the beginning of this book. You may not observe exactly the same values for CPU or any metric. Differences are caused by different Oracle versions, different operating systems, and different hardware platforms. The idea will be the same, but the exact numbers will undoubtedly be marginally different.

We are ready to evaluate the two approaches, and we’ll use runstats, a simple tool I’ve developed, to compare the two in detail:
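A typical way to drive the comparison, assuming the runstats package described in the “Setting Up Your Environment” section exposes rs_start, rs_middle, and rs_stop entry points (the threshold passed to rs_stop is the reporting cutoff discussed below):

SQL> exec runstats_pkg.rs_start;
SQL> exec proc1;
SQL> exec runstats_pkg.rs_middle;
SQL> exec proc2;
SQL> exec runstats_pkg.rs_stop(9500);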

Now, the preceding result clearly shows that based on CPU time (measured in hundredths of seconds), it took significantly longer and significantly more resources to insert 10,000 rows without bind variables than it did with them. In fact, it took more than an order of magnitude more CPU time to insert the rows without bind variables. For every insert without bind variables, the vast majority of the time spent executing the statement went to simply parsing it! But it gets worse. When we look at other information, we can see a significant difference in the resources utilized by each approach:

The runstats utility produces a report that shows differences in latch utilization as well as differences in statistics. Here, I asked runstats to print out anything with a difference greater than 9500. You can see that we hard parsed 3 times in the first approach, using bind variables, and that we hard parsed 10,000 times without bind variables (once for each of the inserts). But that difference in hard parsing is just the tip of the iceberg. You can see here that we used an order of magnitude more “latches” in the nonbind variable approach than we did with bind variables. That difference raises the question “What is a latch?”

Let’s answer that question. A latch is a type of lock that is used to serialize access to shared data structures used by Oracle. The shared pool is an example; it’s a big, shared data structure found in the System Global Area (SGA), and this is where Oracle stores parsed, compiled SQL. When you modify anything in this shared structure, you must take care to allow only one process in at a time. (It is very bad if two processes or threads attempt to update the same in-memory data structure simultaneously— corruption would abound.) So, Oracle employs a latching mechanism, a lightweight locking method to serialize access. Don’t be fooled by the word lightweight. Latches are serialization devices, allowing access (to a memory structure) one process at a time. The latches used by the hard parsing implementation are some of the most used latches out there. These include the latches for the shared pool and for the library cache. Those are “big time” latches that people compete for frequently. What all this means is that as we increase the number of users attempting to hard parse statements simultaneously, our performance gets progressively worse over time. The more people parsing, the more people waiting in line to latch the shared pool, the longer the queues, the longer the wait.
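If you want to see which latches your own workload is competing for, you can query the V$LATCH view directly. This is just a quick sketch; the view and columns are standard, and the ordering simply puts the most contended latches first:

SQL> select name, gets, misses, sleeps
       from v$latch
      where misses > 0
      order by misses desc;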

The Black Box Approach- Developing Successful Oracle Applications-4

When we created the index, we had to choose between the following approaches:
• Just create an index on the processed-flag column.
• Create an index only on the processed-flag column when the processed flag is N, that is, only index the values of interest. We typically don’t want to use an index when the processed flag is Y since the vast majority of the records in the table have the value Y. Notice that I did not say “We never want to use….” You might want to very frequently count the number of processed records for some reason, and then an index on the processed records might well come in very handy.

In Chapter 11 on indexing, we’ll go into more detail on both types. In the end, we created a very small index on just the records where the processed flag was N. Access to those records was extremely fast, and the vast majority of Y records did not contribute to this index at all. We used a function-based index on a function decode( processed_flag, 'N', 'N' ) to return either N or NULL—since an entirely NULL key is not placed into a conventional B*Tree index, we ended up only indexing the N records.
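As a sketch, creating such an index would look something like the following (the table and index names here are illustrative, not the client’s actual objects):

SQL> create index processed_flag_idx
     on big_table( decode( processed_flag, 'N', 'N' ) );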

Note There is more information on NULLs and indexing in Chapter 11.

Was that the end of the story? No, not at all. My client still had a less than optimal solution on its hands. They still had to serialize on the “dequeue” of an unprocessed record. We could easily find the first unprocessed record—quickly—using select * from queue_table where decode( processed_flag, 'N', 'N' ) = 'N' FOR UPDATE, but only one session at a time could perform that operation. The project was using Oracle 10g and therefore could not yet make use of the relatively new SKIP LOCKED feature added in Oracle 11g. SKIP LOCKED would permit many sessions to concurrently find the first unlocked, unprocessed record, lock that record, and process it. Instead, we had to implement code to find the first unlocked record and lock it manually. Such code would generally look like the following in Oracle 10g and before. We begin by creating a table with the requisite index described earlier and populate it with some data, as follows:

SQL> drop table t purge;
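-- Sketch only: a minimal table, the DECODE-based index described earlier,
-- and a few unprocessed rows (column names are illustrative). No explicit
-- COMMIT is issued here; the CREATE OR REPLACE that follows will commit.
SQL> create table t
     ( id             number primary key,
       processed_flag varchar2(1),
       payload        varchar2(20)
     );
SQL> create index t_idx on t( decode( processed_flag, 'N', 'N' ) );
SQL> insert into t ( id, processed_flag, payload )
     select level, 'N', 'payload ' || level
       from dual
     connect by level <= 5;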

Then we basically need to find any and all unprocessed records. One by one we ask the database “Is this row locked already? If not, then lock it and give it to me.” That code would look like this:

SQL> create or replace
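     function get_first_unlocked_row return t%rowtype
     as
         -- Sketch only: one way to write this in 10g and before, using FOR
         -- UPDATE NOWAIT and the ORA-00054 "resource busy" error to skip
         -- over rows that another session has already locked.
         resource_busy exception;
         pragma exception_init( resource_busy, -54 );
         l_rec t%rowtype;
     begin
         for x in ( select rowid rid
                      from t
                     where decode( processed_flag, 'N', 'N' ) = 'N' )
         loop
             begin
                 select * into l_rec
                   from t
                  where rowid = x.rid
                    and processed_flag = 'N'
                    for update nowait;
                 return l_rec;
             exception
                 when resource_busy then null;   -- row is locked; try the next one
                 when no_data_found then null;   -- row was processed in the meantime
             end;
         end loop;
         return null;   -- nothing available to dequeue right now
     end;
     /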

Note In the preceding code, I ran some DDL—the CREATE OR REPLACE FUNCTION. Right before DDL runs, it automatically commits, so there was an implicit COMMIT in there. The rows we’ve inserted are committed in the database—and that fact is necessary for the following examples to work correctly. In general, I’ll use that fact in the remainder of the book. If you run these examples without performing the CREATE OR REPLACE, make sure to COMMIT first!

REST and ORDS- Data Management

Oracle REST Data Services (ORDS) provides a way to manage APIs for the data in the database without direct access to the database. The credentials and privileges are all managed in the configuration when enabling the API. This is not just a cloud service; it is available for on-premises databases and is a fantastic tool for providing the needed data to applications.

To configure ORDS, you install ORDS on the database system and enable it. Then you manage the views or tables where you enable ORDS and the REST APIs. ORDS is available for download on the same site where you can get SQL Developer Web and other useful tools for REST services and database management APIs (oracle.com/ords).
You can also use yum to install it:
$ sudo yum install ords
$ ords --config /etc/ords/config install

REST endpoints start with http://localhost:8080/ords normally followed by the schema name and objects. Tables and views will need to be enabled:

SQL> begin
       ords.enable_object(
         p_enabled      => TRUE,
         p_schema       => 'MMALCHER',
         p_object       => 'ITEMS',
         p_object_type  => 'TABLE',
         p_object_alias => 'items');
       commit;
     end;
     /

In the various tools, you can also use options to enable REST on a database object. Figure 18-6 shows how to right-click an object and use the menu to REST-enable a table.

Figure 18-6. Enabling REST on a table

Figure 18-7 provides the URL and the option to configure authentication for the API, along with the roles that are configured.

Figure 18-7. Configuring REST

To access the data, credentials will be needed if authentication is configured, and then you can test the REST endpoint with the URL http://localhost:8080/ords/mmalcher/items/.
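For a quick test from the command line, assuming no authentication has been configured on the endpoint (the host, port, and schema are from the example above), something like this returns the table data as JSON:

$ curl http://localhost:8080/ords/mmalcher/items/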

It is simple enough to set up and is a powerful data management tool for delivering data to applications. We are just highlighting the ways to get started here with your databases; of course, there are more ways to configure the services on-premises and in the cloud.

Since we have been looking at Autonomous in this chapter as a tool for data management, let’s go back to Database Actions and look at the REST tool provided. Figure 18-8 shows the REST overview from Database Actions.

Figure 18-8. REST overview

After enabling an object, you will be able to access the API with the REST endpoint URL. You can also pull up the menu on the table or view, which will allow you to edit, get a curl command, and open the data, as shown in Figure 18-9.

Figure 18-9. AutoREST, edit

One more option here to look at with REST tools is security. As shown in Figure 18-10, you can manage the privileges of the REST APIs and OAuth clients and roles. This allows for the separation of administrators and managers of the REST services to grant the needed roles and access.

Figure 18-10. Security for REST

ORDS is an easy way to provide a data service for applications, integrations, and data management needs. There are tools that allow you to configure ORDS in Autonomous and in SQL Developer for on-premises databases. Database Actions, again, is a tool set in the cloud to manage data services. The DBMS_CLOUD package is also available to perform these steps and configurations for data loading, privileges, and data services without using the interface.
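As a rough sketch of the DBMS_CLOUD route (the credential name, object storage URL, and table name are illustrative), loading a CSV file from object storage might look like this:

SQL> begin
       dbms_cloud.copy_data(
         table_name      => 'ITEMS',
         credential_name => 'OBJ_STORE_CRED',
         file_uri_list   => 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/mynamespace/b/mybucket/o/items.csv',
         format          => json_object('type' value 'csv', 'skipheaders' value '1') );
     end;
     /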

Tip Oracle offers a great way to try these tools and experiment. It is called Oracle LiveLabs, and there are workshops that you can do for free for each of these areas. Be sure to check it out at https://developer.oracle.com/livelabs.

My Approach- Developing Successful Oracle Applications

Before we begin, I feel it is only fair that you understand my approach to development. I tend to take a database-centric approach to problems. If I can do it in the database, I will. There are a couple of reasons for this—the first and foremost being that I know that if I build functionality in the database, I can deploy it anywhere. I am not aware of a popular, commercially viable server operating system on which Oracle is not available— from Windows to dozens of UNIX/Linux systems—the same exact Oracle software and options are available. I frequently build and test solutions on my laptop, running Oracle 21c, 19c, and 12c under UNIX/Linux or Windows on a virtual machine. I can then deploy them on a variety of servers running the same database software but different operating systems. When I have to implement a feature outside of the database, I find it extremely hard to deploy that feature anywhere I want. One of the main features that makes the Java language appealing to many people—the fact that their programs are always compiled in the same virtual environment, the Java virtual machine (JVM), and so are highly portable—is the exact same feature that makes the database appealing to me. The database is my virtual machine. It is my virtual operating system.

So I try to do everything I can in the database. If my requirements go beyond what the database environment can offer, I do it in Java outside of the database. In this way, almost every operating system intricacy will be hidden from me. I still have to understand how my “virtual machines” work (Oracle, and occasionally a JVM)—you need to know the tools you are using—but they, in turn, worry about how best to do things on a given OS for me.

Thus, simply knowing the intricacies of this one “virtual OS” allows you to build applications that will perform and scale well on many operating systems. I don’t mean to imply that you can be totally ignorant of your underlying OS, just that as a software developer building database applications you can be fairly well insulated from it, and you will not have to deal with many of its nuances. Your DBA, responsible for running the Oracle software, will be infinitely more in tune with the OS (if they are not, please get a new DBA!). If you develop client-server software and the bulk of your code is outside of the database and outside of a VM (Java virtual machines being perhaps the most popular VM), of course you’ll have to be concerned about your OS once again.

I have a pretty simple mantra when it comes to developing database software, one that has been consistent for many years:

• You should do it in a single SQL statement if at all possible. And believe it or not, it is almost always possible. This statement is even truer as time goes on. SQL is an extremely powerful language.

• If you can’t do it in a single SQL statement, do it in PL/SQL—as little PL/SQL as possible! Follow the saying that goes “more code = more bugs, less code = less bugs.”

• If you can’t do it in PL/SQL, do it in a C external procedure. This is most frequently the approach when raw speed or using a third-party API written in C is needed.

• If you can’t do it in a C external routine, you might want to seriously think about why it is you need to do it.

Throughout this book, you will see the preceding philosophy implemented. We’ll use PL/SQL—and object types in PL/SQL—to do things that SQL itself can’t do or can’t do efficiently. PL/SQL has been around for a very long time—over 34 years of tuning (as of 2022) has gone into it; in fact, way back in Oracle 10g, the PL/SQL compiler itself was rewritten to be an optimizing compiler for the first time. You’ll find no other language so tightly coupled with SQL, nor any as optimized to interact with SQL. Working with SQL in PL/SQL is a very natural thing—whereas in virtually every other language from Visual Basic to Java, using SQL can feel cumbersome. It never quite feels “natural”— it’s not an extension of the language itself. When PL/SQL runs out of steam—which is exceedingly rare today with current database releases—we’ll use Java. Occasionally, we’ll do something in C, but typically only when C is the only choice, or when the raw speed offered by C is required. Often, this last reason goes away with native compilation of Java—the ability to convert your Java bytecode into operating system–specific object code on your platform. This lets Java run just as fast as C in many cases.

FPP Steps- Migration to Multitenant and Fleet Management

The FPP Server is a repository for four types of data:
• Gold images
• Working copies
• Client information
• Metadata related to users, roles, and permissions

An FPP server can store the gold images either in ASM or on a file system. Let’s walk through the steps to set up an FPP server in a grid home, which assumes the grid infrastructure is installed with ASM.
First, create a disk group in ASM for the images using the ASM configuration assistant (asmca), or log into the ASM instance and use SQL to create the disk group:

$ $GRID_HOME/bin/asmca

Or do the following:
$ . oraenv +ASM
SQL> create diskgroup fppimage disk '/dev/oracleasm/disk/disk_fpp01';

A mount point is needed on the nodes of the cluster:
$ mkdir -p /u03/fppstorage/images

Check to see whether the Grid Infrastructure Management Repository is configured:
$ srvctl status mgmtdb

If it is not already configured and running, it can be created in the grid home with the following:
$ mgmtca createGIMRContainer -storageDiskLocation fppimage

Create the FPP Server resource:
$ srvctl add rhpserver -storage /u03/fppstorage/images -diskgroup fppimage
$ srvctl start rhpserver

The overview of the steps for fleet patching and provisioning is as follows:

  1. Create reference environments with the required set of patches.
  2. Create gold images and set the version.
  3. Subscribe database and grid homes to a gold image.
  4. Deploy the image to a new Oracle home.
  5. Switch targets from the old to the new Oracle home.

To create a gold image, you can either import or add an image with rhpctl commands. Here is an example:
$ rhpctl import image -image DB23_1

The parameters include the type of image, such as whether it is Oracle Database or grid software, and you can create your own image types along with the built-in image types. By default, the image type is Oracle Database software, but other types need to be specified using the -imagetype parameter, as the example following the list of built-in types shows.
The built-in base image types are as follows:
• ORACLEDBSOFTWARE
• ORACLEGISOFTWARE
• ORACLEGGSOFTWARE
• EXAPATCHSOFTWARE
• SOFTWARE
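For example, importing a Grid Infrastructure image might look like the following (the image name and zip file location are illustrative; check the rhpctl documentation for the exact options in your release):

$ rhpctl import image -image GI23_1 -zip /tmp/grid_home.zip -imagetype ORACLEGISOFTWARE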

Provisioning the gold image is done with the same rhpctl command but using the workingcopy parameter. This is an example of how to do this on a local client:
$ rhpctl add workingcopy -workingcopy DB_HOME_231 -image DB23_1 -storagetype LOCAL

Or for a client:
$ rhpctl add workingcopy -image DB23_1 -path /u01/app/oracle/product/23.1.00/db23c -workingcopy DB_HOME_23c -client client_042 -oraclebase /u01/app/oracle

Note Enterprise Manager software is another way to implement fleet patching and provisioning. It can assist in these steps and configuration. It provides information for the reference environment with the available patches.

By using the emcli command, you can create the gold image. There are requirements to set up the FPP server and Enterprise Manager, but you can see with this example some of the commands that help create and manage images. This creates a new version:
$ emcli db_software_maintenance -createSoftwareImage -input_file="data:/home/user/input_file"

Here is an example to get a list of images in production:
$ emcli db_software_maintenance -getImages

In creating images, you will have to apply the patches to a source home, create another image from the patched version, and verify the image.
Once the images have been added to the FPP Server, they can be queried:
$ rhpctl query image

Then get the specific details from one of the images:
$ rhpctl query image -image DB23_1

Machine Learning- Data Management-2

Before we look at a couple of tools available for machine learning, let’s look at Database Actions for Oracle Database in the cloud, as shown in Figure 18-1. There are several different tools here. Under Development, there is a SQL Worksheet, Data Modeler, JSON Collections, REST APIs, APEX, Liquibase, and other tools. As you can see, there is quite a bit to explore.

Many of these tools are available for on-premises databases as well but need to be installed or enabled. For on-premises work, SQL Developer is a good starting point to download and use with all of your Oracle databases, on-prem and in the cloud. Another tool is SQLcl, which is a command-line interface to the database. SQLcl comes with SQL Developer and provides a command-line tool without having to install the Oracle client to use SQL*Plus.

Figure 18-1. Database Actions

From Database Actions, you can launch the Oracle Machine Learning user interface. Figure 18-2 shows how to get started with Oracle Machine Learning with Autonomous Database.

Figure 18-2. Machine learning user interface

You can get to this user interface from Database Actions, and as you can see in Figure 18-2, you can create notebooks for data discovery and analytics. There are also plenty of examples to browse through. Again, you can leverage AutoML and the machine learning UI in OCI to familiarize yourself with the provided algorithms. The same algorithms are available in Oracle Database on-premises, and the provided examples show the SQL used to create the models. The platform that you use might have different tools but still provides the same standard SQL and capabilities in the database.

Selecting examples will give you several examples to explore and see the built-in algorithms at work. Figure 18-3 demonstrates the “OML4SQL Anomaly Detection SVM” template, which includes a model using the 1-Class SVM algorithm to detect anomalies in the data.

Figure 18-3. Machine learning example

After reviewing the examples, new notebooks can be created with the scripts, SQL, or other languages such as R or Python to create the machine learning models.
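Outside the notebooks, the same models can be built directly in SQL with the DBMS_DATA_MINING package. As a rough sketch (the settings table, training table, and column names are illustrative), a one-class SVM anomaly detection model similar to the template above could be created by leaving the target column null:

SQL> create table svm_settings
     ( setting_name  varchar2(30),
       setting_value varchar2(4000) );
SQL> insert into svm_settings
     values ( 'ALGO_NAME', 'ALGO_SUPPORT_VECTOR_MACHINES' );
SQL> begin
       dbms_data_mining.create_model(
         model_name          => 'anomaly_detect_svm',
         mining_function     => dbms_data_mining.classification,
         data_table_name     => 'customer_data',
         case_id_column_name => 'cust_id',
         target_column_name  => null,   -- a null target makes this a one-class (anomaly detection) model
         settings_table_name => 'svm_settings' );
     end;
     /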

As a database administrator, you are providing ways to work with the data in the Oracle Database and gaining insight into the workloads that might be coming your way. There might be additional resources needed, but it goes back to monitoring the databases. It depends on how much data and which algorithms are going to be used, and you can capture workloads on the system and tune accordingly. Besides just application and transaction code running on the database, there are going to be analytic workloads, including machine learning, that use the different data types and leverage all kinds of data.

Data Studio- Data Management

Data Studio is one of the tools under Database Actions in OCI for Autonomous Database, as shown in Figure 18-1. We have already discussed Data Pump, external tables, and SQL*Loader, and it is also good to know about Data Studio. Even with Autonomous, where you do not have to take care of the patching and some of the administration, there are opportunities with the data.

The data management that has been discussed so far can all be done in Autonomous or on-premises. The opportunities with data, various workloads, and different analytics provide answers, possibilities, and other insights derived from the data.

With Oracle Autonomous Database from Database Actions, if you select the Data Studio overview, you will see four options for loading, analyzing, gaining insights, and cataloging data.

Figure 18-4 shows these main areas and how to get started using Data Studio.

Figure 18-4. Data Studio overview

Figure 18-5 walks you through what you want to do, such as loading, linking, or feeding data, as well as where the source of the data is, which can be another database, local file, or cloud storage including other clouds (AWS, Azure, OCI).

You simply select Load Data and provide details about the data, CSV file or database connection, or connection to cloud storage. Then it is a drag and drop of the data to load. These can be set up as jobs or a one-time process.

This is a simple way to load data into an Autonomous Database. You can load from CSV, Excel, Parquet, JSON, and other files by just dragging and dropping the file and watching the data loading jobs that have been configured.

Figure 18-5. Data loader options

Depending on how you want to work with JSON files, there are ways to create JSON collections under the Development section of Database Actions. This is a UI for managing JSON data. There are several different ways of working with JSON through the tools and SQL in the database.

Network- Migration to Multitenant and Fleet Management-2

In Figure 17-2, you see the compartment is the first choice and should be prepopulated if already working in that compartment. Provide a name for the infrastructure, and refer to your company standards and guidelines for naming conventions.

There is a choice of availability domain that depends on the tenancy you are working in, and another choice on the system model for the Exadata. The default is the latest and greatest model available.

Figure 17-2. Exadata infrastructure form

Maintenance can be configured differently and customized, including the schedule shown in Figure 17-3. Patching can be rolling or nonrolling.

Figure 17-3. Advanced options for maintenance

Once the Exadata infrastructure is created, the VM clusters can be created. Figure 17-4 shows this as the next choice, working the way up from the Exadata infrastructure.

Figure 17-4. Creating VM clusters

As shown in Figure 17-5, the previously created Exadata infrastructure is included in order to create the VMs on the right infrastructure. The display name again should follow your company standards or guidelines set for this database environment.

Figure 17-5. Information for autonomous VM cluster

ECPUs are the elastic cores for Autonomous Database. Figure 17-6 shows not just the ECPUs to adjust but other resources on the VM to put limits on containers. Setting the details on the number of Autonomous Container Databases doesn’t limit the number of pluggables or Autonomous Databases in the container, but the memory and storage limits might do that.

Figure 17-6. Setting VM resources for containers

The last component we will look at here is the Autonomous Container Database, as shown in Figure 17-7.

Figure 17-7. Creating the Autonomous Container Database

Creating the container requires the compartment, the Exadata infrastructure, and the VM cluster, as shown in Figure 17-8. Also, here you will see the version of the Autonomous Container Database that is available for the base image.

Figure 17-8. Providing information for the Autonomous Container Database

The last step in creating the container is editing the automatic maintenance. Figure 17-9 shows the choices for the schedule and patching.

Figure 17-9. Configuring maintenance

No-Copy or Copy Options – Migration to Multitenant and Fleet Management

The no-copy option to migrate to multitenant reuses the data files. The fallback option is then to use Data Pump to get back to a non-CDB, single-instance database. That recovery is not necessarily fast, but the migration itself is simple and faster.

If the recovery plan needed to be faster, then it might make more sense to copy the data files instead of reusing them. This will include copying and renaming the files with FILE_NAME_CONVERT.

It will take longer to migrate because there is a copy of the files before plugging in the database and migrating, but it will be a quicker rollback plan.

You create the manifest file and shut down the database. Copy the data files to a new location or to a new name.
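As a sketch, generating the manifest on the source non-CDB might look like the following (the XML path matches the example below; the database is opened read-only so DBMS_PDB can describe it):

SQL> shutdown immediate
SQL> startup mount
SQL> alter database open read only;
SQL> exec dbms_pdb.describe( pdb_descr_file => '/tmp/salesdb.xml' );
SQL> shutdown immediate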

In the target CDB, the next step is to create a PDB from the manifest file just created:

SQL> create pluggable database salesdb using '/tmp/salesdb.xml' copy file_name_convert=('SALES19C','SALESDB');

One more migration to mention for plugging in a database is to clone the database over a database link. This consists of cloning a non-CDB single-instance database to a pluggable database in a CDB.

The autoupgrade command and the config file will make it easier to set up the options, and the database link makes it possible.

Here is the config file:

$ cat db19_to_23.cfg
upg1.source_home=/u01/app/oracle/product/19
upg1.target_home=/u01/app/oracle/product/db23c
upg1.sid=salesdb
upg1.target_cdb=cdb23c
upg1.target_pdb_name=salesdb
upg1.target_pdb_copy_option=file_name_convert=('SALES19C','SALESDB')

Run this command:

$ java -jar autoupgrade.jar -config db19_to_23.cfg -mode deploy

Other migration options are some of the typical options for upgrades and migrations, not just for going from a non-CDB to a container:

•     Data Pump

•     GoldenGate

•     Transportable tablespaces

These options keep the original database for a rollback plan, just like copying the data files. GoldenGate does allow for zero downtime, while the other migration options will require some downtime to copy, move, and upgrade.

Tip There is a great resource for upgrades and migrations with Mike Dietrich’s blog, Upgrade your Database – NOW! He gives more details on these options and helps with planning for the migration.

Multitenant migration in 19c is a good first step to getting to 23c. You can upgrade and migrate at the same time as long as you are going from 19c. The upgrade path to 23c is from 19c or 21c. If you are using a non-CDB, single-instance database in 19c, then it makes sense to migrate the database to a pluggable database.

Troubleshooting Undo Tablespace Issues – Automation and Troubleshooting

Problems with the undo tablespace are usually of the following nature:

ORA-01555: snapshot too old

ORA-30036: unable to extend segment by ... in undo tablespace 'UNDOTBS1'

The prior errors can be caused by many different issues, such as incorrect sizing of the undo tablespace or poorly written SQL or PL/SQL code.

A snapshot too old error can also occur during exports because of updates to very large tables, and it is normally seen when either the undo retention or the undo tablespace size is not properly set.

For an export, the easy fix is to rerun it at a quieter time; sometimes it is just a matter of rerunning the job or transaction on the database. If these errors happen too often, the cause might be the undo tablespace size.

Or, if the PDBs are sharing the undo tablespace in the CDB, you can create an undo tablespace in each of the PDBs.
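As a sketch, assuming the CDB is running in local undo mode, a dedicated undo tablespace can be created inside a PDB like this (the container name, file name, and sizes are illustrative):

SQL> alter session set container = salespdb;
SQL> create undo tablespace undotbs2
       datafile '/u01/dbfile/salespdb/undotbs02.dbf'
       size 1g autoextend on maxsize 10g;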

Determining Whether the Undo Is Correctly Sized

Suppose you have a long-running SQL statement that is throwing an ORA-01555: snapshot too old error, and you want to determine whether adding space to the undo tablespace might help alleviate the issue.

Run this next query to identify potential issues with your undo tablespace. Make sure you are connected to the PDB whose undo tablespace you are checking. The query checks for issues that have occurred within the last day:

SQL> select to_char(begin_time,'MM-DD-YYYY HH24:MI') begin_time
     ,ssolderrcnt    ORA_01555_cnt
     ,nospaceerrcnt  no_space_cnt
     ,txncount       max_num_txns
     ,maxquerylen    max_query_len
     ,expiredblks    blck_in_expired
     from v$undostat
     where begin_time > sysdate - 1
     order by begin_time;

The ORA_01555_CNT column indicates the number of times your database has encountered the ORA-01555: snapshot too old error. If this column reports a nonzero value, you need to do one or more of the following tasks:

•     Ensure that code does not contain COMMIT statements within cursor loops

•     Tune the SQL statement throwing the error so that it runs faster

•     Ensure that you have good statistics (so that your SQL runs efficiently)

•     Increase the UNDO_RETENTION initialization parameter
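For the last item, a minimal sketch (the retention value is illustrative and should be based on your longest-running queries):

SQL> alter system set undo_retention=3600 scope=both;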

The NO_SPACE_CNT column displays the number of times space was requested in the undo tablespace. In this example, there were no such requests. If the NO_SPACE_CNT is reporting a nonzero value, however, you may need to add more space to your undo tablespace.

A maximum of 4 days’ worth of information is stored in the V$UNDOSTAT view. The statistics are gathered every 10 minutes, for a maximum of 576 rows in the view. If you have stopped and started your database within the last 4 days, this view will contain information only from the time you last started your database.

Another way to get advice on the undo tablespace sizing is to use the Oracle Undo Advisor, which you can invoke by querying the PL/SQL DBMS_UNDO_ADV package from a SELECT statement. The following query displays the current undo size and the recommended size for an undo retention setting of 900 seconds:

SQL> select sum(bytes)/1024/1024 cur_mb_size
     ,dbms_undo_adv.required_undo_size(900) req_mb_size
     from dba_data_files
     where tablespace_name =
       (select value from v$parameter where name = 'undo_tablespace');

Here is some sample output:

CUR_MB_SIZE REQ_MB_SIZE
----------- -----------
      36864       20897

The output shows that the undo tablespace currently has 36GB allocated to it. In the prior query, you used 900 seconds as the amount of time to retain information in the undo tablespace. To retain undo information for 900 seconds, the Oracle Undo Advisor estimates that the undo tablespace should be 20.4GB. In this example, the undo tablespace is sized adequately. If it were not sized adequately, you would have to either add space to an existing data file or add a data file to the undo tablespace.
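A sketch of either fix, with illustrative file names and sizes:

SQL> alter database datafile '/u01/dbfile/o23c/undotbs01.dbf' resize 25g;
SQL> alter tablespace undotbs1 add datafile '/u01/dbfile/o23c/undotbs02.dbf' size 5g;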