Provisioning – Migration to Multitenant and Fleet Management

The ADB Dedicated architecture consists of the following:
• Exadata infrastructure
• Autonomous VM cluster
• Autonomous Container Database
• Autonomous Database

The Exadata infrastructure can be in an Oracle Cloud Infrastructure (OCI) region or can be Exadata Cloud@Customer (so in your data center). This will include the compute nodes, storage, and networking.

The VM cluster is the set of virtual machines that the Autonomous Container Databases run on, and it provides high availability across its nodes. The VMs are allocated the resources of the Exadata infrastructure.

The Autonomous Container Database is the CDB that will be set up to manage the pluggable databases. Autonomous Databases are pluggable databases that can be configured to be transactional or configured for data warehouse workloads.

Provisioning will include all of these components. Fleet administrators will need the right policies and permissions at the cloud tenancy level and then need system DBA permissions to create CDBs and PDBs. The architecture should use a compartment in the tenancy to properly allocate resources and policies at the right level. In the OCI tenancy, create a compartment for the users, VMs, databases, and other resources.

Policies

After creating a compartment in the tenancy (let's call it fleetdatabases in our examples), create a group, fleetDBA, for the fleet administrators. Users are added to the group, and policies are granted to the group.
Here is a list of policies that can be manually edited in the OCI console under Policies for the compartment:
Allow group fleetDBA to manage cloud-exadata-infrastructures in compartment fleetdatabases
Allow group fleetDBA to manage autonomous-database-family in compartment fleetdatabases
Allow group fleetDBA to use virtual-network-family in compartment fleetdatabases
Allow group fleetDBA to use tag-namespaces in compartment fleetdatabases
Allow group fleetDBA to use tag-defaults in compartment fleetdatabases

Note Compartments, groups, and policies are all created through the OCI console. If you want to automate this process from the command line or with scripts, you can create Terraform scripts for consistent provisioning across environments.

Fleet administrators can either be given permission to grant database users access to Autonomous Databases or have the policies created for the database user groups. These are the users managing and using the Autonomous Databases in their own compartment in the tenancy.

Policies depend on the environment and what the users are allowed to do. There might be additional policies, or different user groups might be granted only some of them. Here are some additional examples:
Allow group ADBusers to manage autonomous-databases in compartment ADBuserscompartment
Allow group ADBusers to manage autonomous-backups in compartment ADBuserscompartment
Allow group ADBusers to use virtual-network-family in compartment ADBuserscompartment
Allow group ADBusers to manage instance-family in compartment ADBuserscompartment
Allow group ADBusers to manage buckets in compartment ADBuserscompartment

Users can be added to the groups with the policies for permissions. The fleet administrators would be in the fleetDBA group, and those just using ADBs would be in the ADBusers group.

Models – Data Management

Data modeling is an important part of application development so you understand how to structure the data. Decisions are made based on the application requirements, what information is needed, and how it is needed.

We could probably discuss for another couple chapters the value of normalization versus denormalization of the data model and whether to use surrogate or natural keys on tables, but those topics are not necessarily the point of the data management and model discussion here.

During application development, there are decisions being made to use data in a particular way based on the requirements. This might even influence the database that is being used.

Therefore, it is important to understand the capabilities of the database and tools to support these types of decisions and requirements.

You need to understand whether something limits the use of the data or creates a need for complex data integrations when the data will be leveraged in other places.

It is good to recognize that the data requirements for one application might not be the same for another. As a database administrator, you have opportunities here to help develop a data architecture or plan that reuses data, leverages the information in other analyses and calculations where it makes sense, or even uses the data in other formats, making it easy to create specific applications that might differ from relational models.

So, if you need to use different models of the data, such as hierarchical, graph, and object, are there ways you can do that in the Oracle Database? Of course! The point of this discussion is to look at different ways of using the same data sets: leveraging relational data for graph models and hierarchies, feeding machine learning, and serving several different uses and applications from the same relational model.

Workloads also play a part in how you model your data. Is this transactional data? Warehouse data? Normally, I find there are hybrid approaches as well.

All of these factors come into play when designing systems and making the applications perform well. There are parameters and configurations to tune the database based on the workloads and types of data being used, but the database doesn't limit the workloads or data types.

The database architect and administrator have a responsibility to create a robust system and data model for these applications and tune them along the way.

After the applications are running and you’ve collected all kinds of data and information, shouldn’t this data be harnessed to be used for business decisions and other applications? Absolutely. Even after all the model decisions are made and applications have been running, there are reasons to use data in different ways because of the importance of the information that has been collected.

Oracle 23c has several new features that are focused on easier data management, using different data types and capabilities with all different kinds of workloads. This is to help leverage the data that is already there and gather new information, while ideally simplifying development.

Network – Migration to Multitenant and Fleet Management

Fleet administrators are not necessarily responsible for the network configuration but can provide input and guidelines. The network will be configured with policies similar to those used for on-prem databases.

For example, databases should be in a private subnet without Internet access, subnets will have their own security lists, and ingress ports should be opened only as needed.

It is important here to include network administrators to configure the network configurations and make sure the database servers are in compliance with policies.

With databases, we need to confirm that TCP traffic is allowed on port 1521. If using APEX, SQL Developer, or Database Actions, port 443 should also be allowed. Additional ports are 2484 (encrypted TCPS traffic) and 6200 (application continuity).

If access is needed to the Internet, an Internet gateway can be created in the virtual cloud network (VCN). Bastion hosts can also be configured for SSH access and developer client machines.

Exadata Infrastructure

From the OCI console, under Autonomous Database, there are two choices: Autonomous Database and Dedicated Infrastructure. Autonomous Database is the serverless option and is typically used for just creating ADBs in the OCI environment. Dedicated Infrastructure is where you would be setting up the components of the dedicated environment.

It is important to make sure you are creating the Exadata infrastructure in the right compartment, as this will have the policies granted and keep the database components in the same area.

Let’s walk through a couple of screenshots from the OCI console to show the steps for the components and also give some detail about the responsibilities of the fleet administrator.

Figure 17-1 shows the starting point for creating the Exadata infrastructure.

Figure 17-1. Creating Exadata infrastructure

Machine Learning – Data Management

Just like with graph algorithms, Oracle 23c has more than 30 algorithms ready to use for machine learning models.
Did anyone ask for AI? Machine learning is AI, and of course, there is more to AI beyond just machine learning, but these are the interesting pieces we can perform in the Oracle Database.

We even discussed that the Oracle Database statistic collection is now leveraging machine learning algorithms to make real-time statistics more efficient and predict information about data changes and growth.
Machine learning is there to answer additional questions and has the ability to analyze large volumes of data.
What questions are being asked of your data? Have you been able to answer the business needs with reports and queries? If not, it is time to investigate implementing graph and machine learning and using the data for business value.

Machine learning can automatically discover patterns and create actionable information. Using the PL/SQL package DBMS_DATA_MINING, machine learning models can be built quickly.
You will need to gain some understanding of the types of algorithms such as classification, which can be used for predictions about customers.

In Autonomous Database, Oracle AutoML is available and can help you select the right types of algorithms. AutoML helps nonexpert users leverage machine learning in the Oracle Database.

Here is the process for AutoML:
• Auto algorithm selection: Finds the best algorithm from in-database algorithms
• Adaptive sampling: Identifies and adjusts the right sample size
• Auto feature selection: Improves accuracy and performance
• Auto model tuning: Improves models with automated tuning

AutoML provides a quick way to get started with machine learning: you can test the built-in algorithms and build models without completely understanding all of the science. For those with the data science skill sets, the underlying packages are also available to use directly.
For example, let’s take our customer list, and based on their begin date as a customer, figure out if they are at risk of not purchasing additional items or leaving as a customer:

SQL> create or replace view vw_customer_longterm as
  select c.customer_id, c.customer_name,
         decode(to_char(c.customer_begin_date,'YYYY'),'2023',1,0) cust_date_value,
         decode(nvl(o.order_id,0),0,0,1) order_value
  from customer c
  left outer join orders o on o.customer_id = c.customer_id;

View created.


SQL> declare
  v_setlst dbms_data_mining.setting_list;
begin
  v_setlst(dbms_data_mining.algo_name) := dbms_data_mining.algo_support_vector_machines;
  v_setlst(dbms_data_mining.prep_auto) := dbms_data_mining.prep_auto_on;
  dbms_data_mining.create_model2(
    model_name          => 'FIND_LONGTERM',
    mining_function     => 'CLASSIFICATION',
    data_query          => 'select * from mmtest.vw_customer_longterm',
    set_list            => v_setlst,
    case_id_column_name => 'CUSTOMER_ID',
    target_column_name  => 'CUST_DATE_VALUE');
end;
/

SQL> create table customer_longterm as
  select customer_id,
         prediction(FIND_LONGTERM using *) cust_likely_to_leave,
         prediction_details(FIND_LONGTERM using *) pd
  from vw_customer_longterm;

Machine learning can confirm observations and find new patterns. Discovery and predictive relationships are ways to start harnessing the data by pulling information out of the data.

Handling Temporary Tablespace Issues – Automation and Troubleshooting

Issues with temporary tablespaces are somewhat easy to spot.

For example, when the temporary tablespace runs out of space, the following error will be thrown:

ORA-01652: unable to extend temp segment by 128 in tablespace TEMP

When you see this error, you need to determine if there is not enough space in the temporary tablespace or if a rare runaway SQL query has temporarily consumed an inordinate amount of temporary space. Both of these issues are discussed in the following sections. Multiple temporary tablespaces can be created in a PDB. You can allocate an isolated temporary tablespace to a user to avoid causing issues with other users or applications.
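As a sketch of that isolation approach (the user, tablespace, file destination, and sizes here are hypothetical, not from the source), a dedicated temporary tablespace can be created and assigned to a specific user:

```
-- Create a separate temporary tablespace for an isolated workload
-- (file path and sizes are illustrative only)
SQL> create temporary tablespace temp_batch
       tempfile '/u02/oradata/temp_batch01.dbf' size 10g
       autoextend on next 1g maxsize 50g;

-- Assign it to the user so its sorts no longer compete with TEMP
SQL> alter user batch_user temporary tablespace temp_batch;
```

With this in place, a runaway sort by batch_user exhausts temp_batch rather than the shared TEMP tablespace used by other applications.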

Determining Whether Temporary Tablespace Is Sized Correctly

The temporary tablespace is used as a sorting area on disk when a process has consumed the available memory and needs more space. Operations that require a sorting area include the following:

•     Index creation

•     SQL sorting and hashing operations

•     Temporary tables and indexes

•     Temporary LOBs

•     Temporary B-trees

There is no exact formula for determining whether your temporary tablespace is sized correctly. It depends on the number and types of queries, index build operations, and parallel operations, as well as on the size of your memory sort space (PGA).

You will have to monitor your temporary tablespace while there is a load on your database to establish its usage patterns. Since TEMP tablespaces use tempfiles, they are handled differently than data files, and their details are found in a different view.
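As a quick spot check (not a sizing formula), you can see how often sorts are spilling to disk at all; these V$SYSSTAT counters are cumulative since instance startup:

```
SQL> select name, value
     from v$sysstat
     where name in ('sorts (memory)', 'sorts (disk)');
```

A steadily climbing sorts (disk) value under normal load suggests the PGA or temporary tablespace sizing deserves a closer look.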

Run the following query to show both the allocated and free space within the temporary tablespace:

SQL> select tablespace_name,
       tablespace_size/1024/1024 mb_size,
       allocated_space/1024/1024 mb_alloc,
       free_space/1024/1024 mb_free
     from dba_temp_free_space;
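To spot a runaway session that is consuming temp space right now, a query along these lines against V$TEMPSEG_USAGE can help (the 8K block size is an assumption; adjust for your database):

```
-- Sessions currently holding temp segments, largest first
SQL> select s.sid, s.serial#, s.username, u.tablespace,
       u.blocks * 8192 / 1024 / 1024 mb_used, u.sql_id
     from v$tempseg_usage u
     join v$session s on s.saddr = u.session_addr
     order by u.blocks desc;
```

The sql_id column can then be used to pull the offending statement text from V$SQL before deciding whether to kill the session or add temp space.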

The topics covered in this chapter went from automating jobs to troubleshooting, and this was definitely on purpose. The automated scripts and tuning make it easier to determine the cause of issues and problems. If you are using the tools available, you will know where to look when the pressure is on to fix an issue and troubleshoot a problem.

Diagnosing issues sometimes requires some system and network administrator skills. An effective DBA must know how to leverage the Oracle data dictionary to identify problems and know what jobs are being handled by the database and where to look to determine the actual issues.

Whole books are devoted to just troubleshooting and performance tuning, so not every topic can be covered here, but this chapter has hit some of the most important ones for supporting your environment.

Migration to Multitenant – Migration to Multitenant and Fleet Management

When creating new databases in 23c, you get multitenant CDBs and PDBs. However, there are databases that need to be upgraded to 23c, and many of them may be noncontainer, single-instance databases. This, of course, is the chance to consolidate and leverage all of the advantages we just discussed, but there are a few steps and choices when migrating.

The approach always depends on database size, availability requirements, and the backup strategies that are needed. We discussed Data Pump jobs, which can migrate data to a newly built 23c database. Exporting and importing will move the data, but there will be an outage during the import, and the size of the database determines how long that outage lasts.

Just as we discussed with the backup and recovery processes, there are strategies for upgrading and migrating and plenty of options. The main things to consider are the following:

•     Repeatable process

•     Easy way to back out or downgrade

•     Database size

•     Downtime window

Simplicity is another piece to consider, especially when maintaining a large environment. It is important to have steps to validate the process, execute it, and verify that the migration was successful.

Plug-In Method

Here is an example to walk through the steps to migrate to multitenant.

First, check the compatibility of the database. This will be performed on the source:

SQL> exec dbms_pdb.describe('/tmp/salesdb.xml');

This creates a manifest XML file, with information about services and data files of the database to be migrated.

Next, check the compatibility of the CDB, which is the target:

SQL> set serveroutput on
SQL> begin
  if dbms_pdb.check_plug_compatibility('/tmp/salesdb.xml') then
    dbms_output.put_line('PDB compatible? ==> Yes');
  else
    dbms_output.put_line('PDB compatible? ==> No');
  end if;
end;
/

This will check whether any issues will appear when plugging the database into the CDB. There is valuable information in the PDB_PLUG_IN_VIOLATIONS view after running the check_plug_compatibility function. There might be error and warning messages that can be resolved before plugging the database in.

For example, the database versions might not be the same, so you can upgrade the source first or let the upgrade occur after plugging the database in. Also, there are possible warnings that are informational, such as requiring noncdb_to_pdb.sql to run after the database is plugged into the CDB.

SQL> select type, message from pdb_plug_in_violations
     where name='SALESDB' and status <> 'RESOLVED';

Next, you need to create the manifest file of the source single-instance database. The database will need to be in read-only mode.

Restart the source database in read-only mode:

SQL> shutdown immediate

SQL> startup mount

SQL> alter database open read only;

The manifest file needs to be generated:

SQL> exec dbms_pdb.describe('/tmp/salesdb.xml');

SQL> shutdown immediate;

This is the same command we ran earlier for the compatibility check, but this time the database was in read-only mode, so this manifest file can be used for the actual plug-in.

In the target CDB, the next step is to create the PDB from the manifest file just created:

SQL> create pluggable database salesdb using '/tmp/salesdb.xml' nocopy tempfile reuse;

There are steps that are still needed to complete the migration to a container PDB. Convert to a PDB by running noncdb_to_pdb.sql:

SQL> alter pluggable database salesdb open;

SQL> alter session set container = salesdb;

SQL> @?/rdbms/admin/noncdb_to_pdb.sql

SQL> alter pluggable database salesdb close;

SQL> alter pluggable database salesdb open;

Once again, check any plug-in violations from the target CDB:

SQL> select type, message from pdb_plug_in_violations
     where name='SALESDB' and status <> 'RESOLVED';

Verify that the PDB is open, and then save the state of the PDB so it auto-starts:

SQL> select open_mode, restricted from v$pdbs;

SQL> alter pluggable database salesdb save state;

Once you plug the database in, it does not have the ability to revert to a noncontainer, single-instance database.

There are other recoverability options, but you should be prepared to troubleshoot issues in the container, which is another reason to make sure you check the error and warning messages before migrating.
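As one hedged example of such a fallback (the restore point name is hypothetical; this assumes the source runs in ARCHIVELOG mode with flashback logging and sufficient fast recovery area space, and since the nocopy option reuses the data files in the CDB, this should be tested carefully before relying on it), a guaranteed restore point can be taken on the source before it is opened read-only:

```
-- On the source database, before starting the migration
SQL> create restore point before_pdb_migration guarantee flashback database;

-- To back out before the data files have been consumed by the CDB
SQL> shutdown immediate
SQL> startup mount
SQL> flashback database to restore point before_pdb_migration;
SQL> alter database open resetlogs;

-- Once the migration is verified, drop the restore point to release FRA space
SQL> drop restore point before_pdb_migration;
```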

When you plug in a database, you can use AutoUpgrade to upgrade the database with the new CDB. The target CDB is part of the config file, so the database will be upgraded.

Here is the config file:

$ cat db19to23.cfg
upg1.source_home=/u01/app/oracle/product/19
upg1.target_home=/u01/app/oracle/product/23
upg1.sid=salesdb
upg1.target_cdb=cdb23c

Run the autoupgrade command with the config file:

$ java -jar autoupgrade.jar -config db19to23.cfg -mode deploy