Use a Single Connection in Oracle- Developing Successful Oracle Applications-2

If you use bind variables, then everyone who submits the same exact query that references the same object will use the compiled plan from the pool. You will compile your subroutine once and use it over and over again. This is very efficient and is the way the database intends you to work. Not only will you use fewer resources (a soft parse is much less resource-intensive), but also you will hold latches for less time and need them less frequently. This increases your performance and greatly increases your scalability.

Just to give you a tiny idea of how huge a difference this can make performance-wise, you only need to run a very small test. In this test, we’ll just be inserting some rows into a table; the simple table we will use is

SQL> drop table t purge;

SQL> create table t ( x int );

Table created.

Now we’ll create two very simple stored procedures. They both will insert the numbers 1 through 10,000 into this table; however, the first procedure uses a single SQL statement with a bind variable:
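A procedure along these lines will do the job (the body shown here is a sketch using dynamic SQL and a single bind variable; the original listing may differ slightly):

SQL> create or replace procedure proc1
as
begin
    for i in 1 .. 10000
    loop
        execute immediate
            'insert into t values ( :x )' using i;
    end loop;
end;
/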

Procedure created.

The second procedure constructs a unique SQL statement for each row to be inserted:

SQL> create or replace procedure proc2 as
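-- A body along these lines completes the procedure (a sketch using dynamic SQL
-- that builds a unique statement per row, with no bind variable; the original
-- listing may differ slightly):
begin
    for i in 1 .. 10000
    loop
        execute immediate
            'insert into t values ( '||i||' )';
    end loop;
end;
/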

Now, the only difference between the two procedures is that the first uses a bind variable and the second does not. Both use dynamic SQL, and the logic is otherwise identical.

Note For details on runstats and other utilities, see the “Setting Up Your Environment” section at the beginning of this book. You may not observe exactly the same values for CPU or any metric. Differences are caused by different Oracle versions, different operating systems, and different hardware platforms. The idea will be the same, but the exact numbers will undoubtedly be marginally different.

We are ready to evaluate the two approaches, and we’ll use runstats, a simple tool I’ve developed, to compare the two in detail:
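A typical run, assuming the runstats package from the setup section exposes rs_start, rs_middle, and rs_stop procedures (the names here are assumptions if your copy of the utility differs), looks like this; the 9500 threshold matches the report discussed below:

SQL> exec runstats_pkg.rs_start;
SQL> exec proc1;
SQL> exec runstats_pkg.rs_middle;
SQL> exec proc2;
SQL> exec runstats_pkg.rs_stop(9500);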

Now, the preceding result clearly shows that, based on CPU time (measured in hundredths of seconds), it took significantly longer and significantly more resources to insert 10,000 rows without bind variables than it did with them. In fact, it took more than an order of magnitude more CPU time to insert the rows without bind variables. For every insert without bind variables, the vast majority of the time spent executing the statement went to simply parsing it! But it gets worse. When we look at other information, we can see a significant difference in the resources utilized by each approach:

The runstats utility produces a report that shows differences in latch utilization as well as differences in statistics. Here, I asked runstats to print out anything with a difference greater than 9500. You can see that we hard parsed 3 times in the first approach using bind variables and that we hard parsed 10,000 times without bind variables (once for each of the inserts). But that difference in hard parsing is just the tip of the iceberg. You can see here that we used an order of magnitude more “latches” in the nonbind variable approach than we did with bind variables. That difference naturally raises the question “What is a latch?”

Let’s answer that question. A latch is a type of lock that is used to serialize access to shared data structures used by Oracle. The shared pool is an example; it’s a big, shared data structure found in the System Global Area (SGA), and this is where Oracle stores parsed, compiled SQL. When you modify anything in this shared structure, you must take care to allow only one process in at a time. (It is very bad if two processes or threads attempt to update the same in-memory data structure simultaneously— corruption would abound.) So, Oracle employs a latching mechanism, a lightweight locking method to serialize access. Don’t be fooled by the word lightweight. Latches are serialization devices, allowing access (to a memory structure) one process at a time. The latches used by the hard parsing implementation are some of the most used latches out there. These include the latches for the shared pool and for the library cache. Those are “big time” latches that people compete for frequently. What all this means is that as we increase the number of users attempting to hard parse statements simultaneously, our performance gets progressively worse over time. The more people parsing, the more people waiting in line to latch the shared pool, the longer the queues, the longer the wait.
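If you want to see this contention for yourself, one simple (purely illustrative) check is to watch the shared pool latch statistics in V$LATCH while the two procedures run:

SQL> select name, gets, misses, sleeps
       from v$latch
      where lower(name) like '%shared pool%'
      order by gets desc;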

The Black Box Approach- Developing Successful Oracle Applications-3

Here, I will use an autonomous transaction in the database to have two concurrent transactions in a single session. An autonomous transaction starts a “subtransaction” separate and distinct from any already established transaction in the session. The autonomous transaction behaves as if it were in an entirely different session—for all intents and purposes, the parent transaction is suspended. The autonomous transaction can be blocked by the parent transaction (as we’ll see), and, further, the autonomous transaction can’t see uncommitted modifications made by the parent transaction. For example, connecting to my pluggable database PDB1:

$ sqlplus eoda/foo@PDB1
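The test case follows the pattern described in the rest of this section: a table with a bitmap index on a processed flag, one insert in the parent transaction, and a second insert from an autonomous transaction. A sketch, with illustrative table and column names, looks like this:

SQL> create table t ( processed_flag varchar2(1) );
SQL> create bitmap index t_idx on t(processed_flag);
SQL> insert into t values ( 'N' );
SQL> declare
        pragma autonomous_transaction;
     begin
        insert into t values ( 'N' );
        commit;
     end;
     /
-- The autonomous insert waits on the bitmap index entry locked by the parent
-- insert; since the parent cannot commit until this block returns, Oracle
-- detects the deadlock and raises ORA-00060.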

Note I will use autonomous transactions throughout this book to demonstrate locking, blocking, and concurrency issues. It is my firm belief that autonomous transactions are a feature that Oracle should not have exposed to developers—for the simple reason that most developers do not know when and how to use them properly. The improper use of an autonomous transaction can and will lead to logical data integrity corruption issues. Beyond using them as a demonstration tool, autonomous transactions have exactly one other use—as an error-logging mechanism. If you wish to log an error in an exception block, you need to log that error into a table and commit it—without committing anything else. That would be a valid use of an autonomous transaction. If you find yourself using an autonomous transaction outside the scope of logging an error or demonstrating a concept, you are almost surely doing something very wrong.

Since I used an autonomous transaction and created a subtransaction, I received a deadlock—meaning my second insert was blocked by my first insert. Had I used two separate sessions, no deadlock would have occurred. Instead, the second insert would have been blocked and waited for the first transaction to commit or roll back. This symptom is exactly what the project in question was facing—the blocking, serialization issue.

So we had an issue whereby not understanding the database feature (bitmap indexes) and how it worked doomed the database to poor scalability from the start. To further compound the problem, there was no reason for the queuing code to ever have been written. The Oracle database has built-in queuing capabilities. This built-in queuing feature gives you the ability to have many producers (the sessions that insert the unprocessed records, the ones flagged N) concurrently put messages into an inbound queue and have many consumers (the sessions that look for N records to process) concurrently receive these messages. That is, no special code should have been written in order to implement a queue in the database. The developers should have used the built-in feature, and they might have, except they were completely unaware of it.
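For reference, the built-in feature is Oracle Advanced Queuing (AQ); a minimal setup, with hypothetical queue names and an illustrative payload type, is just a few calls to DBMS_AQADM, after which producers use DBMS_AQ.ENQUEUE and consumers use DBMS_AQ.DEQUEUE:

SQL> begin
       dbms_aqadm.create_queue_table(
         queue_table        => 'work_qt',
         queue_payload_type => 'RAW');      -- payload type is illustrative
       dbms_aqadm.create_queue(
         queue_name  => 'work_queue',
         queue_table => 'work_qt');
       dbms_aqadm.start_queue(queue_name => 'work_queue');
     end;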

Fortunately, once this issue was discovered, correcting the problem was easy. We did need an index on the processed-flag column, just not a bitmap index. We needed a conventional B*Tree index. It took a bit of convincing to get one created. No one wanted to believe that conventionally indexing a column with two distinct values was a good idea. But after setting up a simulation (I am very much into simulations, testing, and experimenting), we were able to prove it was not only the correct approach but also that it would work very nicely.
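The fix itself was a one-line statement; with hypothetical table and column names, something like:

SQL> create index t_processed_idx on t ( processed_flag );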

Note We create indexes, indexes of any type, typically to find a small number of rows in a large set of data. In this case, the number of rows we wanted to find via an index was one. We needed to find one unprocessed record. One is a very small number of rows; therefore, an index is appropriate. An index of any type would be appropriate. The B*Tree index was very useful in finding a single record out of a large set of records.

Detecting and Resolving Locking Issues – Automation and Troubleshooting

Sometimes, a developer or application user will report that a process that normally takes seconds to run is now taking several minutes and does not appear to be doing anything. In these situations, the problem is usually one of the following:

• A space-related issue (e.g., the archive redo destination is full and has suspended all transactions)
• A process holding a lock on a table row and not committing or rolling back, thus preventing another session from modifying the same row

Oracle 23c has automated the process of aborting a low-priority transaction that holds a row lock and is blocking higher-priority transactions. The priority of a user transaction is set to high, medium, or low. Users can also configure the maximum time a higher-priority transaction will wait before the blocking transaction is rolled back.

First check the alert log to see if there are any obvious issues that have occurred recently, such as a wait for a tablespace to extend. If there is nothing obvious in the alert log file, run a SQL query to look for locking issues.

SQL> set lines 80
SQL> col blkg_user form a10
SQL> col blkg_machine form a10
SQL> col blkg_sid form 99999999
SQL> col wait_user form a10
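A query that matches those column settings, using the BLOCKING_SESSION column of V$SESSION to report blockers and waiters, might look like this:

SQL> select s1.username   blkg_user,
            s1.machine    blkg_machine,
            s1.sid        blkg_sid,
            s2.username   wait_user,
            s2.sid        wait_sid
       from v$session s1, v$session s2
      where s1.sid = s2.blocking_session;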

This situation is typical when applications do not explicitly issue a commit or rollback at appropriate times in the code. Oracle 23c provides a way to prioritize transactions and to specify how long a higher-priority transaction will wait for a lock to be released before the lower-priority blocking transaction is aborted.
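As a rough sketch of how that is configured in 23c (the parameter names here are my assumptions based on the priority transactions feature and should be verified against the documentation for your release):

SQL> alter system set priority_txns_high_wait_target = 60 scope=both;  -- seconds a high-priority transaction waits
SQL> alter session set txn_priority = HIGH;                            -- HIGH, MEDIUM, or LOW for this session's transactions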

You can also manually kill one of the sessions. Keep in mind that terminating a session may have unforeseen effects; using the new priority features instead allows the blocking transaction to be rolled back in a controlled way.
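If you do decide to terminate the blocking session, the classic syntax takes the SID and serial number reported by the locking query (the values here are placeholders):

SQL> alter system kill session '1234,56789' immediate;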

Resolving Open-Cursor Issues

The OPEN_CURSORS initialization parameter determines the maximum number of cursors a session can have open. This setting is per session. The default value of 50 is usually too low for any application. When an application exceeds the number of open cursors allowed, the following error is thrown:

ORA-01000: maximum open cursors exceeded

Usually, the prior error is encountered when

•     OPEN_CURSORS initialization parameter is set too low

•     Developers write code that does not close cursors properly

To investigate this issue, first determine the current setting of the parameter:

SQL> show parameter open_cursors;

If the value is less than 300, consider setting it higher. It is typical to set this value to 1,000 for busy OLTP systems. You can dynamically modify the value while your database is open, as shown here:

SQL> alter system set open_cursors=1000;

If you are using an spfile, consider making the change both in memory and in the spfile, at the same time:

SQL> alter system set open_cursors=1000 scope=both;

After setting OPEN_CURSORS to a higher value, if the application continues to exceed the maximum value, you probably have an issue with code that is not properly closing cursors.

If you work in an environment that has thousands of connections to the database, you may want to view only the top cursor-consuming sessions. The following query uses an inline view and the pseudocolumn ROWNUM to display the top 20 values:

SQL> select * from (
       select a.value, c.username, c.machine, c.sid, c.serial#
         from v$sesstat a, v$statname b, v$session c
        where a.statistic# = b.statistic#
          and c.sid = a.sid
          and b.name = 'opened cursors current'
          and a.value != 0
          and c.username is not null
        order by 1 desc, 2)
     where rownum < 21;

If a single session has more than 1,000 open cursors, then the code is probably written such that the cursors are not closing. When the limit is reached, somebody should inspect the application code to determine whether a cursor is not being closed.

Tip It is recommended that you query V$SESSION instead of V$OPEN_CURSOR to determine the number of open cursors. V$SESSION provides a more accurate count of the cursors currently open.

Data Sharing- Data Management

Data sharing is needed because we always need the data in different places. Systems share data across databases and leverage information from one place to provide important details and facts in another system. We spend plenty of time moving data around and managing it in services so that access to the data is available when and where needed. The database administrator makes sure all of these database systems can provide the data as needed and that the systems are highly available and secure, just for the data to be used in ways that we might not even think about. Database administrators can assist in integrations; however, new business cases always come up that want additional data from different sources, such as files, to be combined and used with the data from the customer database. There are various reasons to relate reference data or other source data from other systems; for example, the data in the inventory systems gets additional information from logs, and so on.

The idea around the Data Sharing tool is to improve business access to data and open up these new innovations and uses of data. We have done this before in reports when sending files, keeping spreadsheets on our laptops, etc. However, many of these ways to share data require extra work, require extra processing, and make copies and redundant data extractions, and of course we need to handle the security and make sure sensitive data stays that way.

Through the Data Sharing tool, the Delta Sharing protocol allows you to share data without copying it to another system. A consumer requests access to the data by providing a valid token and can then access the shared data that has been made available.

The Data Share providers can make the data available to users through the Data Share tool. Figure 18-11 shows the provider and consumer data shares of Data Sharing in Database Actions for Autonomous Database.

Figure 18-11. Data Share tool

When you create users for Autonomous Database, you can give them permissions to use Data Share, along with the other tools, or you can enable a schema for sharing in a SQL session.

SQL> begin
       dbms_share.enable_schema(
         schema_name => 'MMALCHER',
         enabled     => TRUE);
     end;

As a data provider, you would start by providing a share, and you can share using object storage. Consumers of the share will see the changes to the data only when new versions are published. The tables or objects are added to the data share, and you can define who is going to receive the share or just create the share and add the consumers or recipients later.
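Behind the user interface, this is driven by the DBMS_SHARE package. The following is only a rough sketch of creating a share, adding a table, and publishing it; the procedure names and parameters are assumptions to be verified against the DBMS_SHARE documentation for your release:

SQL> begin
       dbms_share.create_share(
         share_name => 'SALES_SHARE',
         share_type => 'VERSIONED');
       dbms_share.add_to_share(
         share_name       => 'SALES_SHARE',
         table_name       => 'CUSTOMER_SALES',
         share_table_name => 'CUSTOMER_SALES');
       dbms_share.publish_share(share_name => 'SALES_SHARE');
     end;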

Recipients will need to subscribe to and access the data shares, as shown in Figure 18-12. This also provides them with a personal authorization profile, which allows for tracking how the shares are being used and by which consumers. You can create external tables on top of the data shares to use with SQL, and you can see the data objects that are available to you.

Figure 18-12. Subscribing to a share

The data share becomes another tool in building out data lakes, and it can be made available with authentication and authorizations for consumers inside and outside of your company. Cloud Links provide an approach for sharing data from Autonomous Database and between Autonomous Databases. Now, instead of squirreling away data files, spreadsheets, reports, and so on, a data asset can be shared in a secure way to allow the business to gain access to the needed data and leverage it for even more valuable insights.

Why are all these data management tools important? Well, as the database administrator, you know where most of the company’s data can be found. These tools leverage the same skills in securing the data and enabling the right service for the job. The administrators taking care of the database systems can really dive into ways to make data available in all kinds of formats that are easy to consume.

You can dive into supporting the Oracle Database with system tasks, installations, provisioning, patching, and migrations. This also includes making sure the database system is highly available and secure, and provides business continuity with disaster recovery and restoring of databases. In large environments, there is plenty to do with tuning and architecting the databases, but data management is expanding to allow businesses to innovate and leverage their data. The professional database administrator will be able to support and manage the environment to allow for data services and growth while making sure the database system is reliable and secure. This is an excellent opportunity to add value and manage an extremely important company asset, data!

FPP Steps- Migration to Multitenant and Fleet Management

The FPP Server is a repository for four types of data:
• Gold images
• Working copies
• Client information
• Metadata related to users, roles, and permissions

An FPP server can store gold images either in ASM or on a file system. Let’s walk through the steps to set up an FPP server in a grid home, which assumes the Grid Infrastructure is installed with ASM.
First, create a disk group in ASM for the images, using the configuration assistant or by logging into the ASM instance and using SQL to create the disk group:

$ $GRID_HOME/bin/asmca

Or do the following:
$ . oraenv +ASM
SQL> create diskgroup fppimage disk '/dev/oracleasm/disk/disk_fpp01';

A mount point is needed on the nodes of the cluster:
$ mkdir -p /u03/fppstorage/images

Check to see whether the Grid Infrastructure Management Repository is configured:
$ srvctl status mgmtdb

If it is not configured and running, it can be created from the grid home with the following:
$ mgmtca createGIMRContainer -storageDiskLocation fppimage

Create the FPP Server resource and start it:
$ srvctl add rhpserver -storage /u03/fppstorage/images -diskgroup fppimage
$ srvctl start rhpserver

The overall steps for fleet patching and provisioning are as follows:

  1. Create reference environments with the required set of patches.
  2. Create gold images and set the version.
  3. Subscribe database and grid homes to a gold image.
  4. Deploy the image to a new Oracle home.
  5. Switch targets from the old to the new Oracle home.

To create a gold image, you can either import or add an image with rhpctl commands. Here is an example:
$ rhpctl import image -image DB23_1

The parameters include the type of image, for example, whether it is Oracle Database or Grid Infrastructure software, and you can create your own image types along with the built-in image types. By default, the image type is Oracle Database software; other types need to be specified using the -imagetype parameter.
The built-in base image types are as follows:
• ORACLEDBSOFTWARE
• ORACLEGISOFTWARE
• ORACLEGGSOFTWARE
• EXAPATCHSOFTWARE
• SOFTWARE

Provisioning the gold image is done with the same rhpctl command but using the workingcopy parameter. This is an example of how to do this on a local client:
$ rhpctl add workingcopy -workingcopy DB_HOME_231 -image DB23_1 -storagetype LOCAL

Or, for a specific client:
$ rhpctl add workingcopy -image DB23_1 -path /u01/app/oracle/product/23.1.00/db23c -workingcopy DB_HOME_23c -client client_042 -oraclebase /u01/app/oracle

Note Enterprise Manager software is another way to implement fleet patching and provisioning. It can assist in these steps and configuration. It provides information for the reference environment with the available patches.

By using the emcli command, you can create the gold image. There are requirements to set up the FPP server and Enterprise Manager, but you can see with this example some of the commands that help create and manage images. This creates a new version:
$ emcli db_software_maintenance -createSoftwareImage -input_file="data:/home/user/input_file"

Here is an example to get a list of images in production:
$ emcli db_software_maintenance -getImages

In creating images, you apply the patches to a source home, create another image from the patched version, and verify the image.
Once the images have been added to the FPP Server, they can be queried:
$ rhpctl query image

Then get the specific details from one of the images:
$ rhpctl query image -image DB23_1

User Groups- Migration to Multitenant and Fleet Management

We discussed the different groups that are available to manage the Oracle software and processes. This is important when you are looking at upgrading and patching. The same user groups are needed to own the software on the servers and deploy them.
Groups are managed as part of the provisioning. They are not inherited in the images. The groups can be set using the -groups parameter on the rhpctl command. FPP will use the group of the user that is running the command to copy the image to the working copy.

Methods of Patching

Being able to roll through patches without too much downtime is important for most environments. Databases need to be up and available, so planning a strategy to keep it consistent, minimize downtime, and automate is important.
By default, rolling patching of the grid infrastructure is the method for FPP. When not using FPP, you can have two installed homes that basically use the same method, but this requires more manual intervention. The different methods for the grid infrastructure are as follows:
• Rolling: Moves the grid infrastructure home sequentially through each node in the clusters
• Nonrolling: Done in parallel to patch all of the nodes at the same time
• Batch: Patches the Oracle grid and database homes at the same time.
• Zero downtime: Does not bring down the databases that are part of the cluster while patching; this method automates the database upgrades without interrupting the service

The idea here with fleet patching and provisioning is to provide another solution for large environments and a managed patching system to roll out patched versions of the Oracle home consistently and efficiently. So, where we might previously have spent time patching without even being able to cover the whole environment, management effort can now focus on working with the gold images, testing the releases, and implementing the most effective solution for the environment.

Fleet Management with Autonomous

Shifting gears slightly, we are going to look at another type of fleet management. This is not completely different from patching and provisioning, and it does include some of those tasks. It is less a tool for managing large environments than a description of the responsibilities of an administrator of databases in the cloud on dedicated Exadata Infrastructure and Exadata Cloud@Customer.

As previously discussed with the installation of the Oracle Database, we looked at Autonomous Databases in the cloud. Autonomous Database – Serverless (ADB-S) allows quick creation of a database through the Oracle Cloud Infrastructure console. It doesn’t appear much is needed from a database administrator; however, there are administrative tasks and responsibilities for Autonomous Database Dedicated. These include managing and monitoring the container databases for the environment.

The fleet administrators create and manage the Autonomous Container Database resources. They need the appropriate permissions in the Oracle Cloud to administer the Autonomous Exadata VM Clusters along with the container database. The fleet administrators will also need to set up the networking resources that are needed for the database environment and connections.

At the beginning of this book, we discussed the different roles that DBAs perform, as well as the different default roles that are part of the configuration to provide separation of duties. This includes managing the grid environment versus the databases and backups. The same applies here. There is the infrastructure side of Autonomous Dedicated that is very similar to managing the container databases and servers in multitenant. Both of these are for the administration of the fleet of databases.

Autonomous Database – Serverless is doing these same tasks to provide the self-service databases managed by Oracle. The fleet administration is responsible for these components in the dedicated architecture: the Exadata components, container databases, and security.

DBMS_CLOUD- Data Management

DBMS_CLOUD is a package that manages several Autonomous Database processes and routines, especially when working with all of the cloud resources that are part of this database system.

The cloud console provides an interface, but behind the scenes many of these activities use the DBMS_CLOUD package. Instead of using the user interface, there is a package that you can use to do several of these tasks, from loading data to setting up credentials.

DBMS_CLOUD offers subprograms for the following areas:
• Access management
• Object and file storage
• Bulk file management
• REST APIs

Credentials are set up and managed within the package and allow for least-privilege access for loading data and for querying external data in cloud resources, including clouds other than OCI.

Permissions on the package are needed. When you create a user in Database Actions, there are options to allow for setting the permissions, but through SQL you can also just grant the following:

SQL> grant execute on dbms_cloud to mmalcher;

Credentials are stored in the DBA/ALL/USER_CREDENTIALS views. A credential gives the database access, as an OCI user, to resources that are external to the Autonomous Database, allowing data to be exported, loaded, or processed from various sources.
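For example, a quick way to check which credentials exist in your schema and whether they are enabled (column names as exposed by the view in recent releases):

SQL> select credential_name, username, enabled from user_credentials;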

To run these procedures, you can be connected through SQL in Database Actions or SQL Developer connections to the Autonomous Database.

SQL> begin
       dbms_cloud.create_credential(
         credential_name => 'OCI_ADB_DATAMGMT',
         username        => '[email protected]',
         password        => 'Cr4zyPa$$w0rd!');
     end;

Objects and files can live in object storage in OCI or in other clouds; instead of having a local file, you access these files through external tables to work with the data in data lakes and in other formats.

Here is an example of creating an external table in Autonomous using object storage files (assuming we have the customer name, dates, and totals in the file):

SQL> begin
       dbms_cloud.create_external_table(
         table_name      => 'CUSTOMER_SALES_JULY',
         credential_name => 'OCI_ADB_DATAMGMT',
         file_uri_list   => 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/namespace1/b/customer_sales/o/cust_sales_0723.csv',
         format          => json_object('type' value 'csv', 'skipheaders' value '1'),
         field_list      => 'CUSTOMER_ID, CUSTOMER_NAME, CUSTOMER_TOTAL, SALE_DATE DATE ''mm/dd/yyyy''',
         column_list     => 'CUSTOMER_ID NUMBER, CUSTOMER_NAME VARCHAR2(100), CUSTOMER_TOTAL NUMBER, SALE_DATE DATE');
     end;
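Once created, the external table is queried like any other table; for example:

SQL> select customer_name, customer_total
       from customer_sales_july
      where sale_date >= date '2023-07-01';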

The format for the external tables can be CSV, JSON, ORC, Avro, or Parquet and is not just limited to CSV files. Indexes can also be created on the external files with dbms_cloud.create_external_text_index to be able to search through the files and find values.

As one more example for objects and files, here is an example to copy a file:

SQL> begin
       dbms_cloud.copy_object(
         source_credential_name => 'OCI_ADB_DATAMGMT',
         source_object_uri      => 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/namespace1/b/customer_sales/o/cust_sales_june.csv',
         target_object_uri      => 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/namespace1/b/ext_tables_bucket/o/cust_sales_june.csv');
     end;

You can also use the dbms_cloud.list_files function to get a list of the files in directories. As we saw for Data Pump, a directory is needed, and in Autonomous there isn’t a traditional file system; however, object storage can serve as these directories.
You can query using this function to get the details of the files in the DATA_PUMP_DIR directory.

SQL> select * from dbms_cloud.list_files('DATA_PUMP_DIR');

And if you want to see the objects in one of the locations, you can list the objects in a bucket:

SQL> select * from dbms_cloud.list_objects('OCI_ADB_DATAMGMT', 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/namespace1/b/customer_sales/o/');

Just like there are individual files, there are also procedures to handle bulk moves, uploads, and copies.
Another area for DBMS_CLOUD is the Cloud REST API procedures and functions. These procedures can get details about the Cloud REST APIs and can be used in PL/SQL and application code to send API requests and work with the results. Since these wrap the various cloud APIs, you can do things such as creating buckets in object storage to store files and manage these types of resources.
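As a sketch (assuming DBMS_CLOUD.SEND_REQUEST and its companion response functions in your version; the URI here is illustrative), a GET request against an OCI REST endpoint could look like this:

SQL> declare
       resp dbms_cloud_types.resp;
     begin
       resp := dbms_cloud.send_request(
         credential_name => 'OCI_ADB_DATAMGMT',
         uri             => 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/namespace1/b/',
         method          => dbms_cloud.method_get);
       dbms_output.put_line(dbms_cloud.get_response_status_code(resp));
       dbms_output.put_line(dbms_cloud.get_response_text(resp));
     end;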

Tip There are several procedures and functions available in DBMS_CLOUD, and reviewing the documentation is very useful here. We barely touched the surface of this package; the most important point is that the capabilities are there for managing data and files in the cloud for autonomous environments.

Machine Learning- Data Management-2

Before we look at a couple of tools available for machine learning, let’s look at Database Actions for Oracle Database in the cloud, as shown in Figure 18-1. There are several different tools here. Under Development, there is a SQL Worksheet, Data Modeler, JSON Collections, REST APIs, APEX, Liquibase, and other tools. As you can see, there is quite a bit to explore.

Many of these tools are available for on-premises databases as well but need to be installed or enabled. For on-premises tools, SQL Developer is a good starting point to download and use with all of the Oracle databases on-prem and in the cloud. Another tool is SQLcl, which is a command-line interface to the databases. SQLcl comes with SQL Developer and provides a command-line tool without having to install the Oracle client to use SQL*Plus.

Figure 18-1. Database actions

From Database Actions, you can launch the Oracle Machine Learning user interface. Figure 18-2 shows how to get started with Oracle Machine Learning with Autonomous Database.

Figure 18-2. Machine learning user interface

You can get to this user interface from Database Actions, and as you can see in Figure 18-2, you can create notebooks for data discovery and analytics. There are also plenty of examples to browse through. Again, you can leverage AutoML and the machine learning UI in OCI to familiarize yourself with the provided algorithms. The same algorithms are available in the on-premises Oracle Database and can be used through SQL, as the provided examples show when creating a model. The platform that you use might have different tools but still provides the same standard SQL and capabilities in the database.

Selecting examples will give you several examples to explore and see the built-in algorithms at work. Figure 18-3 demonstrates the “OML4SQL Anomaly Detection SVM” template, which includes a model using the 1-Class SVM algorithm to detect anomalies in the data.
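As a rough illustration of the same idea in SQL (the table and model names are hypothetical, and this is a sketch of a DBMS_DATA_MINING.CREATE_MODEL2 call rather than the template's exact code), a one-class SVM anomaly model is built by leaving the target column NULL:

SQL> declare
       v_settings dbms_data_mining.setting_list;
     begin
       v_settings('ALGO_NAME') := 'ALGO_SUPPORT_VECTOR_MACHINES';
       v_settings('PREP_AUTO') := 'ON';
       dbms_data_mining.create_model2(
         model_name          => 'ANOMALY_DEMO',
         mining_function     => 'CLASSIFICATION',
         data_query          => 'select * from customer_sales_july',
         set_list            => v_settings,
         case_id_column_name => 'CUSTOMER_ID',
         target_column_name  => NULL);  -- NULL target => one-class (anomaly) model
     end;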

Figure 18-3. Machine learning example

After reviewing the examples, new notebooks can be created with the scripts, SQL, or other languages such as R or Python to create the machine learning models.

As a database administrator, you are providing ways to work with the data in the Oracle Database and gaining insight into the workloads that may be coming your way. There might be additional resources needed, but it goes back to monitoring the databases. It depends on how much data and which algorithms are going to be used, and you can capture workloads on the system and tune accordingly. Besides the application and transaction code running on the database, there are going to be analytic workloads, including machine learning, that use the different data types and leverage all kinds of data.

Provisioning- Migration to Multitenant and Fleet Management

The ADB Dedicated architecture consists of the following:
• Exadata infrastructure
• Autonomous VM cluster
• Autonomous Container Database
• Autonomous Database

The Exadata infrastructure can be in an Oracle Cloud Infrastructure (OCI) region or can be Exadata Cloud@Customer (so in your data center). This will include the compute nodes, storage, and networking.

The VM cluster is the set of virtual machines set up for the Autonomous Container Databases to run on, and it provides high availability with all of the nodes. The VMs will be allocated all of the resources of the Exadata infrastructure.

The Autonomous Container Database is the CDB that will be set up to manage the pluggable databases. Autonomous Databases are pluggable databases that can be configured to be transactional or configured for data warehouse workloads.

Provisioning will include all of these components. Fleet administrators will need the right policies and permissions at the cloud tenancy level and then need system DBA permissions to create CDBs and PDBs. The architecture should use a compartment in the tenancy to properly allocate resources and policies at the right level. In the OCI tenancy, create a compartment for the users, VMs, databases, and other resources.

Policies

After creating a compartment in the tenancy (let’s call it fleetdatabases in our examples), a group should be created for the fleet administrators, fleetDBA. Users should be added to the group and policies assigned to it.
Here is a list of policies that can be manually edited in the OCI console under Policies for the compartment:
Allow group fleetDBA to manage cloud-exadata-infrastructures in compartment fleetdatabases
Allow group fleetDBA to manage autonomous-database-family in compartment fleetdatabases
Allow group fleetDBA to use virtual-network-family in compartment fleetdatabases
Allow group fleetDBA to use tag-namespaces in compartment fleetdatabases
Allow group fleetDBA to use tag-defaults in compartment fleetdatabases

Note Compartments, groups, and policies are all created through the OCI console. If you want to automate this process and use either the command line or scripts, there are ways to do this by creating Terraform scripts for consistent provisioning in the environments.

Fleet administrators can either have permission to give database users the permissions for Autonomous Databases or have the policies created for the database user groups. These would be users managing and using the Autonomous Databases in their own compartment in the tenancy.

Policies can depend on the environment and what the users are allowed to do. There might be additional policies that would be allowed, or different users might be allowed only certain policies. Here are some additional examples:
Allow group ADBusers to manage autonomous-databases in compartment ADBuserscompartment
Allow group ADBusers to manage autonomous-backups in compartment ADBuserscompartment
Allow group ADBusers to use virtual-network-family in compartment ADBuserscompartment
Allow group ADBusers to manage instance-family in compartment ADBuserscompartment
Allow group ADBusers to manage buckets in compartment ADBuserscompartment

Users can be added to the groups with the policies for permissions. The fleet administrators would be in the fleetDBA group, and those just using ADBs would be in the ADBusers group.

Data Studio- Data Management

Data Studio is one of the tools under Database Actions in OCI for Autonomous Database, as shown in Figure 18-1. We have already discussed Data Pump, external tables, and SQL*Loader, and it is also good to know about Data Studio. Even with Autonomous, where you do not have to take care of the patching and some of the administration, there are opportunities with data.

The data management that has been discussed so far can all be done in Autonomous or on-premises. The opportunities with data, various workloads, and different analytics provide answers, possibilities, and other insights derived from the data.

With Oracle Autonomous Database from Database Actions, if you select the Data Studio overview, you will see four options for loading, analyzing, gaining insights, and cataloging data.

Figure 18-4 shows these main areas and how to get started using Data Studio.

Figure 18-4. Data Studio overview

Figure 18-5 walks you through what you want to do, such as loading, linking, or feeding data, as well as where the source of the data is, which can be another database, local file, or cloud storage including other clouds (AWS, Azure, OCI).

You simply select Load Data and provide details about the data, CSV file or database connection, or connection to cloud storage. Then it is a drag and drop of the data to load. These can be set up as jobs or a one-time process.

This is a simple way to load data into an Autonomous Database. You can load from CSV, Excel, Parquet, JSON, and other files by just dragging and dropping the file and watching the data loading jobs that have been configured.

Figure 18-5. Data loader options

Depending on how you want to work with JSON files, there are ways to create JSON collections under the Development section of Database Actions with JSON. This is a UI that works well for managing JSON data. There are several different ways of working with JSON through the tools and SQL in the database.