Network- Migration to Multitenant and Fleet Management-2

In Figure 17-2, you can see that the compartment is the first choice; it should be prepopulated if you are already working in that compartment. Provide a name for the infrastructure, and refer to your company standards and guidelines for naming conventions.

There is a choice of availability domain that depends on the tenancy you are working in, and another choice for the Exadata system model. The default is the latest and greatest model available.

Figure 17-2. Exadata infrastructure form

Maintenance can be customized, including the schedule, as shown in Figure 17-3. Patching can be rolling or nonrolling.

Figure 17-3. Advanced options for maintenance

Once the Exadata infrastructure is created, the VM clusters can be created. Figure 17-4 shows this as the next choice, working your way up from the Exadata infrastructure.

Figure 17-4. Creating VM clusters

As shown in Figure 17-5, the previously created Exadata infrastructure is included in order to create the VMs on the right infrastructure. The display name again should follow your company standards or guidelines set for this database environment.

Figure 17-5. Information for autonomous VM cluster

ECPUs are the elastic cores for Autonomous Database. Figure 17-6 shows not just the ECPUs to adjust but other resources on the VM to set limits on containers. Setting the number of Autonomous Container Databases doesn’t limit the number of pluggables or Autonomous Databases in the container, but the memory and storage limits might do that.

Figure 17-6. Setting VM resources for containers

The last component we will look at here is the Autonomous Container Database, as shown in Figure 17-7.

Figure 17-7. Creating the Autonomous Container Database

The create information requires the compartment, the Exadata infrastructure, and the VM cluster, as shown in Figure 17-8. Also, here you will see the version of the Autonomous Container Database that is available for the base image.

Figure 17-8. Providing information for the Autonomous Container Database

The last step in creating the container is editing the automatic maintenance. Figure 17-9 shows the choices for the schedule and patching.

Figure 17-9. Configuring maintenance

Models- Data Management

Data modeling is an important part of application development so you understand how to structure the data. Decisions are made based on the application requirements, what information is needed, and how it is needed.

We could probably discuss for another couple chapters the value of normalization versus denormalization of the data model and whether to use surrogate or natural keys on tables, but those topics are not necessarily the point of the data management and model discussion here.

During application development, there are decisions being made to use data in a particular way based on the requirements. This might even influence the database that is being used.

Therefore, it is important to understand the capabilities of the database and tools to support these types of decisions and requirements.

You need to understand whether anything limits the use of the data or creates a need for complex data integrations if the data is going to be leveraged in other places.

It is good to recognize that the data requirements for one application might not be the same for another. As a database administrator, you have opportunities here to help develop a data architecture or plan to reuse data, leverage the information in other analysis and calculations where it makes sense, or even use the data in other formats, making it easy to create specific applications that might differ from relational models.

So, if you need to use different models of the data, such as hierarchical, graph, and object, are there ways you can do that in the Oracle Database? Of course! The point of this discussion is to look at different ways of using the same data sets: leveraging relational data for graph models, hierarchical data for machine learning, and the same relational model for several different uses and applications.

Workloads also play a part in how you model your data. Is this transactional data? Warehouse data? Normally, I find there are different hybrid approaches to this as well.

All of these factors come into play when designing systems and wanting the applications to perform well. There are parameters and configurations to tune the database based on workloads and the types of data being used, but the database doesn’t limit the workloads or types.

The database architect and administrator have a responsibility to create a robust system and data model for these applications and tune them along the way.

After the applications are running and you’ve collected all kinds of data and information, shouldn’t this data be harnessed to be used for business decisions and other applications? Absolutely. Even after all the model decisions are made and applications have been running, there are reasons to use data in different ways because of the importance of the information that has been collected.

Oracle 23c has several new features that are focused on easier data management, using different data types and capabilities with all different kinds of workloads. This is to help leverage the data that is already there and gather new information, while ideally simplifying development.

No-Copy or Copy Options – Migration to Multitenant and Fleet Management

The no-copy option for migrating to multitenant reuses the data files. The fallback option is then to use Data Pump to get back to a non-CDB, single-instance database. That recovery option is not necessarily fast, but the migration itself is a simple and faster method.

If the recovery plan needs to be faster, it might make more sense to copy the data files instead of reusing them. This includes copying and renaming the files with FILE_NAME_CONVERT.

The migration will take longer because the files are copied before plugging in the database, but it provides a quicker rollback plan.

You create the manifest file and shut down the database. Copy the data files to a new location or to a new name.

In the target CDB, the next step is to create a PDB from the manifest file just created:

SQL> create pluggable database salesdb using '/tmp/salesdb.xml' copy
     file_name_convert=('SALES19C','SALESDB');

One more migration to mention for plugging in a database is to clone the database over a database link. This consists of cloning a non-CDB single-instance database to a pluggable database in a CDB.

The autoupgrade command and the config file will make it easier to set up the options, and the database link makes it possible.

Here is the config file:

$ cat db19_to_23.cfg
upg1.source_home=/u01/app/oracle/product/19
upg1.target_home=/u01/app/oracle/product/db23c
upg1.sid=salesdb
upg1.target_cdb=cdb23c
upg1.target_pdb_name=salesdb
upg1.target_pdb_copy_option=file_name_convert=('SALES19C','SALESDB')

Run this command:

$ java -jar autoupgrade.jar -config db19_to_23.cfg -mode deploy

Other migration options include some of the typical approaches for upgrades and migrations, not just for converting a non-CDB to a container:

•     Data Pump

•     GoldenGate

•     Transportable tablespaces

These options keep the original database for a rollback plan, just like copying the data files. GoldenGate does allow for zero downtime, while the other migration options will require some downtime to copy, move, and upgrade.
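As a point of reference, here is a minimal sketch of what the Data Pump path could look like, using a full export and import. The directory object dp_dir, the dump file names, and the salespdb connect alias for the target PDB are placeholders, not values from this environment:

$ expdp system directory=dp_dir full=y dumpfile=salesdb_full.dmp logfile=salesdb_exp.log

$ impdp system@salespdb directory=dp_dir full=y dumpfile=salesdb_full.dmp logfile=salesdb_imp.log

The export runs against the source database, and the import is run while connected to the new PDB in the target CDB; the outage window is roughly the time of the import plus validation.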

Tip: There is a great resource for upgrades and migrations in Mike Dietrich’s blog, Upgrade your Database – NOW! He gives more details on these options and helps with planning for the migration.

Multitenant migration in 19c is a good first step to getting to 23c. You can upgrade and migrate at the same time as long as you are going from 19c. The upgrade path to 23c is from 19c or 21c. If you are using a non-CDB, single-instance database in 19c, then it makes sense to migrate the database to a pluggable database.

Network- Migration to Multitenant and Fleet Management-1

Fleet administrators are not necessarily responsible for the network configuration but can provide some input and guidelines. The network will be configured with policies similar to those for on-prem databases.

For example, databases should be in a private subnet without Internet access, subnets will have their own security lists, and ingress ports should be opened only as needed.

It is important here to include network administrators to handle the network configuration and make sure the database servers are in compliance with policies.

With databases, we need to confirm that TCP traffic is allowed on port 1521. If using APEX, SQL Developer, and Database Actions, port 443 should also be allowed. Additional ports are 2484 and 6200 for encrypted traffic and application continuity.

If access is needed to the Internet, an Internet gateway can be created in the virtual cloud network (VCN). Bastion hosts can also be configured for SSH access and developer client machines.

Exadata Infrastructure

From the OCI console, under Autonomous Database, there are two choices: Autonomous Database and Dedicated Infrastructure. Autonomous Database is the serverless option and is typically used for just creating ADBs in the OCI environment. Dedicated Infrastructure is where you would be setting up the components of the dedicated environment.

It is important to make sure you are creating the Exadata infrastructure in the right compartment, as this will have the policies granted and keep the database components in the same area.

Let’s walk through a couple of screenshots from the OCI console to show the steps for the components and also give some detail about the responsibilities of the fleet administrator.

Figure 17-1 shows the starting point for creating the Exadata infrastructure.

Figure 17-1. Creating Exadata infrastructure

Troubleshooting Undo Tablespace Issues – Automation and Troubleshooting

Problems with the undo tablespace are usually of the following nature:

ORA-01555: snapshot too old

ORA-30036: unable to extend segment by … in undo tablespace ‘UNDOTBS1’

The prior errors can be caused by many different issues, such as incorrect sizing of the undo tablespace or poorly written SQL or PL/SQL code.

A snapshot being too old can also occur during exports because of updates to very large tables and is normally seen when either the undo retention or the size is not properly set.

For an export, it is an easy fix to rerun at a quieter time. So, sometimes it is just rerunning the job or transaction on the database. If these happen too often, it might be because of the undo tablespace size.

Or, if the system is sharing the undo tablespace at the CDB level, you can create an undo tablespace in each of the PDBs.
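As a sketch of that option, the following statements show how a dedicated undo tablespace could be created inside a PDB, assuming local undo is enabled for the CDB. The PDB name salespdb, the tablespace name, and the sizes are only examples:

SQL> select property_value from database_properties
     where property_name = 'LOCAL_UNDO_ENABLED';

SQL> alter session set container = salespdb;

SQL> create undo tablespace undotbs_sales datafile size 4g autoextend on maxsize 32g;

SQL> alter system set undo_tablespace = undotbs_sales;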

Determining Whether the Undo Is Correctly Sized

Suppose you have a long-running SQL statement that is throwing an ORA-01555: snapshot too old error, and you want to determine whether adding space to the undo tablespace might help alleviate the issue.

Run this next query to identify potential issues with your undo tablespace. Make sure you are in the PDB whose undo tablespace you are checking. The query checks for issues that have occurred within the last day:

SQL> select to_char(begin_time,'MM-DD-YYYY HH24:MI') begin_time,
            ssolderrcnt   ORA_01555_cnt,
            nospaceerrcnt no_space_cnt,
            txncount      max_num_txns,
            maxquerylen   max_query_len,
            expiredblks   blck_in_expired
     from v$undostat
     where begin_time > sysdate - 1
     order by begin_time;

The ORA_01555_CNT column indicates the number of times your database has encountered the ORA-01555: snapshot too old error. If this column reports a nonzero value, you need to do one or more of the following tasks:

•     Ensure that code does not contain COMMIT statements within cursor loops

•     Tune the SQL statement throwing the error so that it runs faster

•     Ensure that you have good statistics (so that your SQL runs efficiently)

•     Increase the UNDO_RETENTION initialization parameter

The NO_SPACE_CNT column displays the number of times space was requested in the undo tablespace. In this example, there were no such requests. If the NO_SPACE_CNT is reporting a nonzero value, however, you may need to add more space to your undo tablespace.
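If the fix is on the UNDO_RETENTION side, a change such as the following could be applied. The 3600-second value is only an example and should be based on your longest-running queries:

SQL> show parameter undo_retention

SQL> alter system set undo_retention = 3600 scope=both;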

A maximum of 4 days’ worth of information is stored in the V$UNDOSTAT view. The statistics are gathered every 10 minutes, for a maximum of 576 rows in the view. If you have stopped and started your database within the last 4 days, this view will contain information only from the time you last started your database.

Another way to get advice on the undo tablespace sizing is to use the Oracle Undo Advisor, which you can invoke by querying the PL/SQL DBMS_UNDO_ADV package from a SELECT statement. The following query displays the current undo size and the recommended size for an undo retention setting of 900 seconds:

SQL> select sum(bytes)/1024/1024 cur_mb_size,
            dbms_undo_adv.required_undo_size(900) req_mb_size
     from dba_data_files
     where tablespace_name = (select value from v$parameter where name = 'undo_tablespace');

Here is some sample output:

CUR_MB_SIZE REQ_MB_SIZE
----------- -----------
      36864       20897

The output shows that the undo tablespace currently has 36GB allocated to it. In the prior query, you used 900 seconds as the amount of time to retain information in the undo tablespace. To retain undo information for 900 seconds, the Oracle Undo Advisor estimates that the undo tablespace should be 20.4GB. In this example, the undo tablespace is sized adequately. If it were not sized adequately, you would have to either add space to an existing data file or add a data file to the undo tablespace.
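If it were undersized, either of the following statements would be one way to add the space. The data file path, tablespace name, and sizes shown here are placeholders only:

SQL> alter database datafile '/u01/oradata/CDB23C/undotbs01.dbf' resize 30g;

SQL> alter tablespace undotbs1 add datafile size 10g autoextend on maxsize 32g;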

JSON- Data Management

Let’s look again at the JSON data type. Using JSON documents for applications makes it easier to view and pull in custom information, such as properties and attributes, as a JSON document. JSON is an easy-to-parse data format and doesn’t need the complete schema defined.

JSON is a text format, and the components are identified within the document. It is also easy to send JSON through an API and validate the format. JSON documents are popular with developers for temporarily storing data.

Oracle 23c introduced JSON relational duality views and the JSON data type, along with the functions to verify and format JSON documents using SQL.

Talking about JSON in this way, instead of a data type or different type of view in the database, highlights more of the way applications will be using the data or development teams will be accessing the data.

You can provide them with data in a format that uses the existing relational tables in the database, and they can access the JSON documents.
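As a small sketch of the native JSON data type and a couple of those SQL functions, the following uses a made-up table and document; it is only meant to show a JSON column, a simple existence check, and pretty-printed output:

SQL> create table customer_profile (customer_id number primary key, profile json);

SQL> insert into customer_profile values (100, '{"loyaltyTier":"gold","optIn":true}');

SQL> select json_serialize(profile pretty) from customer_profile
     where json_exists(profile, '$.loyaltyTier');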

Let’s say we have customer, items, and orders tables for example purposes, as shown here:

SQL> create table customer (customer_id number primary key,
     customer_name varchar2(80), address1 varchar2(80),
     city varchar2(80), state varchar2(20), zipcode number,
     geo_location sdo_geometry, customer_begin_date date);

Table created.

SQL> create table items (item_id number primary key, item_name varchar2(80),
     item_description varchar2(4000), item_price number);

Table created.

SQL> create table orders (order_id number primary key, customer_id number,
     item_id number, order_date date);

Table created.

SQL> alter table orders add constraint fk_customer foreign key (customer_id) references customer(customer_id);

SQL> alter table orders add constraint fk_items foreign key (item_id)
     references items(item_id);  -- need to have foreign key references for creating duality views

These tables were created for taking orders, and then the data needs to be passed off to print invoices, ship products, or other processes for these orders.

The different systems can take the data in JSON documents, so all of the information is provided at once to give the needed details. If the transaction data was not stored as relational tables, and only as JSON, I might have duplicate data that would need to be kept regarding items and customers. Changes to items would have to update different documents for orders, which might cause data consistency issues.

The relational tables handle this. The JSON relational duality view on top of these tables provides the JSON document with real-time consistent data. Updates, inserts, and deletes can also be handled through these views.

Here is a sample JSON relational duality view, and you can create several other views depending on the needed attributes and uses. Since it does not store the data, just the metadata, it makes it easy to create a view that is designed for the use case:

SQL> create json relational duality view cust_orders_dv as
     select JSON {'customer_id'   is c.customer_id,
                  'customer_name' is c.customer_name,
                  'order' is [select JSON {'order_id'   is o.order_id,
                                           'order_date' is o.order_date,
                                           'item_id'    is o.item_id}
                              from orders o
                              where c.customer_id = o.customer_id]}
     from customer c;
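To show how the view could then be used, here is a small sketch of reading and writing through it. This assumes the duality view is updatable as defined and exposes its documents in a single JSON column named DATA, and the customer values are made up:

SQL> select json_serialize(data pretty) from cust_orders_dv;

SQL> insert into cust_orders_dv values ('{"customer_id":200,"customer_name":"New Customer","order":[]}');

The insert through the view writes a new row to the underlying customer table, which is the point of duality views: the document shape for developers, the relational tables for consistency.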

Maybe there is a shipped column that allows changes through the view to record that the order has been shipped. We discussed this topic before with views, but it is important to mention it again when discussing data management.

This is just to start getting the ideas flowing and to recognize that when development teams are looking for the JSON format, there are ways to do just that.

Besides the relational tables and JSON views, we can do analytics on the data and include JSON in machine learning and graph algorithms.

Machine Learning- Data Management-1

Just like with graph algorithms, Oracle 23c has more than 30 algorithms ready to use for machine learning models.
Did anyone ask for AI? Machine learning is AI, and of course, there is more AI beyond just machine learning, but these are the interesting pieces we can run in the Oracle Database.

We even discussed that the Oracle Database statistic collection is now leveraging machine learning algorithms to make real-time statistics more efficient and predict information about data changes and growth.
Machine learning is there to answer additional questions and has the ability to analyze large volumes of data.
What questions are being asked of your data? Have you been able to answer the business needs with reports and queries? If not, it is time to investigate implementing graph and machine learning and using the data for business value.

Machine learning can automatically discover patterns and create actionable information. Using the PL/SQL package DBMS_DATA_MINING, machine learning models can be built quickly.
You will need to gain some understanding of the types of algorithms such as classification, which can be used for predictions about customers.

In Autonomous Database, Oracle AutoML is available and can help you select the right types of algorithms. AutoML helps nonexpert users leverage machine learning in the Oracle Database.

Here is the process for AutoML:
• Auto algorithm selection: Finds the best algorithm from in-database algorithms
• Adaptive sampling: Identifies and adjusts the right sample size
• Auto feature selection: Improves accuracy and performance
• Auto model tuning: Improves models with automated tuning

AutoML provides a quick way to get started with machine learning, test the built-in algorithms, and build models without completely understanding all of the science. However, for those with deeper skill sets, the packages are available to work with directly.
For example, let’s take our customer list, and based on their begin date as a customer, figure out if they are at risk of not purchasing additional items or leaving as a customer:

SQL> create or replace view vw_customer_longterm as
     select c.customer_id, c.customer_name,
            decode(to_char(customer_begin_date,'YYYY'),'2023',1,0) cust_date_value,
            decode(nvl(o.order_id,0),0,0,1) order_value
     from customer c
     left outer join orders o on o.customer_id = c.customer_id;

View created.


SQL> declare
       v_setlst dbms_data_mining.setting_list;
     begin
       v_setlst(dbms_data_mining.algo_name) := dbms_data_mining.algo_support_vector_machines;
       v_setlst(dbms_data_mining.prep_auto) := dbms_data_mining.prep_auto_on;
       dbms_data_mining.create_model2(
         model_name          => 'FIND_LONGTERM',
         mining_function     => 'CLASSIFICATION',
         data_query          => 'select * from mmtest.vw_customer_longterm',
         set_list            => v_setlst,
         case_id_column_name => 'CUSTOMER_ID',
         target_column_name  => 'CUST_DATE_VALUE');
     end;
     /

SQL> create table customer_longterm as
     select customer_id,
            prediction(FIND_LONGTERM using *)         cust_likely_to_leave,
            prediction_details(FIND_LONGTERM using *) pd
     from vw_customer_longterm;
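As a quick usage example, a summary query like the following could show how the scored predictions break down; it simply groups the table created above:

SQL> select cust_likely_to_leave, count(*)
     from customer_longterm
     group by cust_likely_to_leave;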

Machine learning can confirm observations and find new patterns. Discovery and predictive relationships are ways to start harnessing the data by pulling information out of the data.

Graph- Data Management

Relational data seems like enough to make connections in data; however, there are connections that can require recursive queries and multiple joins that might be difficult to do with straight SQL.

Even if we look at the previous example of tables and compare customers that are connected by location, when they became customers, and the items they ordered, traversing through a graph structure might be easier.
Oracle 23c has native support for property graph data structures, and you can use a GRAPH_TABLE function and MATCH clause to write the queries.

There are more than 60 built-in algorithms for graph analytics.

Here are a couple of areas for these analytics:
• Detecting components and communities
• Evaluating structures
• Predicting links
• Ranking and walking
• Path-finding
• Machine learning

Now, you can start thinking of the questions you can ask of the data, such as making product recommendations, identifying dependencies and relationships, and detecting fraud.

You can even think of the operational data where this can help with workflows and dependencies of assets and systems.


Let’s look at a quick example using the same tables of customers and orders. We can have the customer table be a vertex, and items are also a vertex. Orders are the edges because this will show the connections. The graph will show items the customers have ordered and allow us to see additional related orders.

SQL> create property graph cust_orders
       vertex tables (
         customer key (customer_id) properties (customer_name, city, state),
         items    key (item_id)     properties (item_name, item_price)
       )
       edge tables (
         orders
           source      key (item_id)     references items (item_id)
           destination key (customer_id) references customer (customer_id)
           properties (item_id, customer_id, order_id, order_date)
       );

After we create the graph, we can query it for matches and count the items ordered:

SQL> select item_name, count(1)
     from graph_table (cust_orders
            match (src) - [is orders] -> (dst)
            columns (src.item_name))
     group by item_name;

The () are for the vertices customer or items, and [] are for the edges that are the orders.
The match clause specifies the pattern from the graph_table, which is the path pattern for the graph. As you can see, we are using the same data without moving it to another graph database or using a separate system; this allows us to perform the analytics or use the graph algorithms against the data.
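To build on that, here is a sketch of another query against the same graph that lists customers with the items they ordered. It follows the edge direction defined above (items as the source, customer as the destination), and the vertex aliases are only illustrative:

SQL> select customer_name, item_name
     from graph_table (cust_orders
            match (i is items) - [is orders] -> (c is customer)
            columns (c.customer_name, i.item_name))
     order by customer_name;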

These simple examples might not give you the big picture of the data possibilities, but knowing you have different ways of viewing, connecting, and analyzing the data with queries on existing data is going to be useful. This allows you to leverage the data assets and use a single source of truth with all of the other security, performance, and reliability of the database that has been configured.

Handling Temporary Tablespace Issues – Automation and Troubleshooting

Issues with temporary tablespaces are somewhat easy to spot.

For example, when the temporary tablespace runs out of space, the following error will be thrown:

ORA-01652: unable to extend temp segment by 128 in tablespace TEMP

When you see this error, you need to determine if there is not enough space in the temporary tablespace or if a rare runaway SQL query has temporarily consumed an inordinate amount of temporary space. Both of these issues are discussed in the following sections. Multiple temporary tablespaces can be created in a PDB. You can allocate an isolated temporary tablespace to a user to avoid causing issues with other users or applications.
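As a sketch of that isolation, the following creates a separate temporary tablespace and assigns it to one user; the tablespace name, user name, and sizes are placeholders:

SQL> create temporary tablespace temp_batch tempfile size 5g autoextend on maxsize 20g;

SQL> alter user batch_user temporary tablespace temp_batch;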

Determining Whether Temporary Tablespace Is Sized Correctly

The temporary tablespace is used as a sorting area on disk when a process has consumed the available memory and needs more space. Operations that require a sorting area include the following:

•     Index creation

•     SQL sorting and hashing operations

•     Temporary tables and indexes

•     Temporary LOBs

•     Temporary B-trees

There is no exact formula for determining whether your temporary tablespace is sized correctly. It depends on the number and types of queries, index build operations, and parallel operations, as well as on the size of your memory sort space (PGA).

You will have to monitor your temporary tablespace while there is a load on your database to establish its usage patterns. Since TEMP tablespaces use temporary files, they are handled differently than data files, and the details are found in a different view.

Run the following query to show both the allocated and free space within the temporary tablespace:

SQL> select tablespace_name,
            tablespace_size/1024/1024 mb_size,
            allocated_space/1024/1024 mb_alloc,
            free_space/1024/1024      mb_free
     from dba_temp_free_space;
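When a runaway query is suspected, a common follow-up is to see which sessions are currently holding temporary segments. The following sketch joins V$TEMPSEG_USAGE to V$SESSION and converts blocks to megabytes:

SQL> select s.sid, s.username, u.tablespace,
            round(u.blocks * t.block_size/1024/1024) mb_used
     from v$tempseg_usage u
     join v$session s on s.saddr = u.session_addr
     join dba_tablespaces t on t.tablespace_name = u.tablespace;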

The topics covered in this chapter went from automating jobs to troubleshooting, and this was definitely on purpose. The automated scripts and tuning make it easier to determine the cause of issues and problems. If you are using the tools available, you will know where to look when the pressure is on to fix an issue and troubleshoot a problem.

Diagnosing issues sometimes requires some system and network administrator skills. An effective DBA must know how to leverage the Oracle data dictionary to identify problems and know what jobs are being handled by the database and where to look to determine the actual issues.

Whole books are devoted to just troubleshooting and performance tuning, and it is difficult to cover every topic, but this chapter has hit some of the top topics you will face as you support your environment.

Migration to Multitenant – Migration to Multitenant and Fleet Management

When creating new databases in 23c, you get multitenant CDBs and PDBs. However, there are databases that need to be upgraded to 23c, and many of them can be noncontainer, single-instance databases. This of course is the chance to consolidate and leverage all of the advantages we just discussed, but there are a few steps and choices you have when migrating.

The choices always depend on database size, availability requirements, and backup strategies. We discussed Data Pump jobs, which can migrate data to a newly built 23c database. Exporting and importing will move the data, but there will be an outage during the import, and the size of the database matters.

Just as we discussed with the backup and recovery processes, there are strategies for upgrading and migrating and plenty of options. The main things to consider are the following:

•     Repeatable process

•     Easy way to back out or downgrade

•     Database size

•     Downtime window

Simplicity is another piece to consider, especially when maintaining a large environment. It is important to have steps to validate the process, execute it, and verify that the migration was successful.

Plug-In Method

Here is an example to walk through the steps to migrate to multitenant.

First, check the compatibility of the database. This will be performed on the source:

SQL> exec dbms_pdb.describe('/tmp/salesdb.xml');

This creates a manifest XML file, with information about services and data files of the database to be migrated.

Next, check the compatibility of the CDB, which is the target:

SQL> set serveroutput on
     begin
       if dbms_pdb.check_plug_compatibility('/tmp/salesdb.xml') then
         dbms_output.put_line('PDB compatible? ==> Yes');
       else
         dbms_output.put_line('PDB compatible? ==> No');
       end if;
     end;
     /

This will check if there are any issues that will appear when plugging the database instance into the CDB. There is valuable information in the PDB_PLUG_IN_VIOLATIONS view after running the check_plug_compatibility function. There might be error and warning messages that can be resolved before plugging the database in.

For example, the version might not be the same, so you can upgrade first or let the upgrade occur after the database is plugged in. Also, there are possible warnings that are informational, such as requiring noncdb_to_pdb.sql to run after it is plugged into the CDB.

SQL> select type, message from pdb_plug_in_violations where name='SALESDB' and status <> 'RESOLVED';

Next, you need to create the manifest file of the source single-instance database. The database will need to be in read-only mode.

Restart the source database in read-only mode:

SQL> shutdown immediate

SQL> startup mount

SQL> alter database open read only;

The manifest file needs to be generated:

SQL> exec dbms_pdb.describe('/tmp/salesdb.xml');

SQL> shutdown immediate;

This is the same command we ran earlier on the source to validate the manifest file, but now the database is in read-only mode.

In the target CDB, the next step is to create the PDB from the manifest file just created:

SQL> create pluggable database salesdb using '/tmp/salesdb.xml' nocopy tempfile reuse;

There are steps that are still needed to complete the migration to a container PDB. Convert to a PDB by running noncdb_to_pdb.sql:

SQL> alter pluggable database salesdb open;

SQL> alter session set container = salesdb;

SQL> @?/rdbms/admin/noncdb_to_pdb.sql

SQL> alter pluggable database salesdb close;

SQL> alter pluggable database salesdb open;

Once again, check any plug-in violations from the target CDB:

SQL> select type, message from pdb_plug_in_violations
     where name='SALESDB' and status <> 'RESOLVED';

Verify that the PDB is open so you can save the state of the PDB to have it set to auto-start:

SQL> select open_mode, restricted from v$pdbs;

SQL> alter pluggable database salesdb save state;

Once you plug the database in, it does not have the ability to revert to a noncontainer, single-instance database.

There are other recoverability options, but you should be prepared to troubleshoot issues in the container, which is another reason to make sure you check those error and warning messages before migrating.

When you plug in a database, you can use AutoUpgrade to upgrade the database with the new CDB. The target CDB is part of the config file, so the database will be upgraded.

Here is the config file:

$ cat db19to23.cfg
upg1.source_home=/u01/app/oracle/product/19
upg1.target_home=/u01/app/oracle/product/23
upg1.sid=salesdb
upg1.target_cdb=cdb23c

Run the autoupgrade command with the config file:

$ java -jar autoupgrade.jar -config db19to23.cfg -mode deploy