JSON Data Management

Let’s look again at the JSON data type. Storing application data as JSON documents makes it easy to view and pull in custom information, such as an object’s properties and attributes, in a single document. JSON is an easy-to-parse data format and does not require a complete schema to be defined up front.

JSON is a text format, and the components are identified within the document itself. It is also easy to send JSON through an API and validate the format. JSON documents are popular with developers for exchanging and temporarily storing data.

Oracle 23c introduced JSON relational duality views, adding to the native JSON data type and the SQL functions that verify and format JSON documents.
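As a quick refresher, here is a minimal sketch (the order_payloads table and its values are made up for illustration) that stores a document in a native JSON column and formats it with JSON_SERIALIZE; the IS JSON condition can be used the same way to validate JSON held in text columns:

SQL> create table order_payloads (payload_id number primary key, payload json);

Table created.

SQL> insert into order_payloads values (1, '{"order_id": 1, "status": "NEW"}');

SQL> select json_serialize(payload pretty) from order_payloads;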

Talking about JSON this way, rather than as just another data type or kind of view in the database, highlights how applications will use the data and how development teams will access it.

You can provide developers with data in a format that is backed by the existing relational tables in the database, and they can access it as JSON documents.

Let’s say we have customer, items, and orders tables for example purposes, as shown here:

SQL> create table customer (customer_id number primary key, customer_name varchar2(80), address1 varchar2(80), city varchar2(80), state varchar2(20), zipcode number, geo_location sdo_geometry, customer_begin_date date);

Table created.

SQL> create table items (item_id number primary key, item_name varchar2(80), item_description varchar2(4000), item_price number);

Table created.

SQL> create table orders (order_id number primary key, customer_id number, item_id number, order_date date);

Table created.

SQL> alter table orders add constraint fk_customer foreign key (customer_id) references customer(customer_id);

SQL> alter table orders add constraint fk_items foreign key (item_id) references items(item_id);

The foreign key references are required for creating the duality views.

These tables were created for taking orders, and the data then needs to be passed off to other processes to print invoices, ship products, and handle the rest of the order workflow.

The different systems can take the data as JSON documents, so all of the needed details are provided at once. If the transaction data were stored only as JSON rather than in relational tables, I might have duplicate data about items and customers that would need to be maintained. Changes to items would have to be applied to every order document that references them, which might cause data consistency issues.

The relational tables handle this. The JSON relational duality view on top of these tables provides the JSON document with real-time consistent data. Updates, inserts, and deletes can also be handled through these views.

Here is a sample JSON relational duality view; you can create several other views depending on the attributes and uses that are needed. Since a duality view stores only metadata, not the data itself, it is easy to create a view designed for each use case:

SQL> create json relational duality view cust_orders_dv as
     select JSON {'customer_id'   is c.customer_id,
                  'customer_name' is c.customer_name,
                  'order' is
                    [ select JSON {'order_id'   is o.order_id,
                                   'order_date' is o.order_date,
                                   'item_id'    is o.item_id}
                      from orders o
                      where c.customer_id = o.customer_id ]}
     from customer c;
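Once the view exists, applications can read and write whole JSON documents through it. Here is a minimal, hypothetical sketch (the customer values are made up, and the documents come back through the view's data column); note that, depending on the updatability annotations in the view definition, writes may require the WITH INSERT UPDATE DELETE options on the underlying tables:

SQL> insert into cust_orders_dv values ('{"customer_id": 100, "customer_name": "Acme Supply", "order": []}');

SQL> select json_serialize(data pretty) from cust_orders_dv;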

Maybe there is also a shipped column that allows changes through the view to record that an order has been shipped. We discussed this topic before with views, but it is important to mention it again when discussing data management.

This is just to start getting the ideas flowing and to recognize that when development teams are looking for the JSON format, there are ways to provide exactly that.

Besides the relational tables and JSON views, we can do analytics on the data and include JSON in machine learning and graph algorithms.

Graph Data Management

Relational data might seem like enough to make connections in the data; however, some connections require recursive queries and multiple joins that can be difficult to express in straight SQL.

Even if we look at the previous example of tables and compare customers that are connected by location, by when they became customers, and by the items they ordered, traversing a graph structure might be easier.
Oracle 23c has native support for property graph data structures, and you can use the GRAPH_TABLE function and the MATCH clause to write the queries.

There are more than 60 built-in algorithms for graph analytics.

Here are a couple of areas for these analytics:
• Detecting components and communities
• Evaluating structures
• Predicting links
• Ranking and walking
• Path-finding
• Machine learning

Now, you can start thinking of the questions you can ask of the data, such as making product recommendations, identifying dependencies and relationships, and detecting fraud.

You can even apply this to operational data, where it can help with workflows and the dependencies of assets and systems.


Let’s look at a quick example using the same tables of customers and orders. We can make the customer table one vertex and the items table another vertex. Orders are the edges because they show the connections. The graph will show items the customers have ordered and allow us to see additional related orders.

SQL> create property graph cust_orders
     vertex tables (
       customer key (customer_id) properties (customer_name, city, state),
       items key (item_id) properties (item_name, item_price)
     )
     edge tables (
       orders
         source key (customer_id) references customer (customer_id)
         destination key (item_id) references items (item_id)
         properties (item_id, customer_id, order_id, order_date)
     );

After we create the graph, we can query it for matches and count the items ordered:

SQL> select item_name, count(1)
     from graph_table (cust_orders
            match (src) -[is orders]-> (dst)
            columns (dst.item_name))
     group by item_name;

The parentheses () are for the vertices, customer or items, and the square brackets [] are for the edges, which are the orders.
The match clause specifies the path pattern to search for in the graph_table. As you can see, we are using the same data without moving it to another graph database or a separate system; this allows us to run analytics or graph algorithms directly against the data.
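As another hedged example (the item name 'Laptop' is just a placeholder), the same pattern can be matched with labeled vertices to list the customers who ordered a particular item:

SQL> select customer_name
     from graph_table (cust_orders
            match (c is customer) -[is orders]-> (i is items)
            where i.item_name = 'Laptop'
            columns (c.customer_name));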

These simple examples might not give you the big picture of the data possibilities, but knowing you have different ways of viewing, connecting, and analyzing existing data with queries is going to be useful. This allows you to leverage your data assets and use a single source of truth, with all of the security, performance, and reliability already configured in the database.

Migration to Multitenant and Fleet Management

When creating new databases in 23c, you get multitenant CDBs and PDBs. However, there are databases that need to be upgraded to 23c, and many of them may be noncontainer, single-instance databases. This, of course, is the chance to consolidate and leverage all of the advantages we just discussed, but there are a few steps and choices you have when migrating.

The approach always depends on the database size, availability requirements, and backup strategies that are needed. We discussed Data Pump jobs, which can migrate data to a newly built 23c database. Exporting and importing will move the data, but there will be an outage during the import, and the size of the database determines how long that takes.
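As a rough, hypothetical sketch (the directory object, file names, and connect identifiers are placeholders), a full Data Pump export from the source and import into a PDB in the new CDB might look like this:

$ expdp system directory=DATA_PUMP_DIR dumpfile=salesdb.dmp logfile=salesdb_exp.log full=y

$ impdp system@cdb23c_salespdb directory=DATA_PUMP_DIR dumpfile=salesdb.dmp logfile=salesdb_imp.log full=y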

Just as we discussed with the backup and recovery processes, there are strategies for upgrading and migrating and plenty of options. The main things to consider are the following:

•     Repeatable process

•     Easy way to back out or downgrade

•     Database size

•     Downtime window

Simplicity is another piece to consider, especially when maintaining a large environment. It is important to have steps to validate the process, execute it, and verify that the migration was successful.

Plug-In Method

Here is an example to walk through the steps to migrate to multitenant.

First, describe the database so that its compatibility can be checked. This will be performed on the source:

SQL> exec dbms_pdb.describe('/tmp/salesdb.xml');

This creates a manifest XML file, with information about services and data files of the database to be migrated.

Next, check the compatibility of the CDB, which is the target:

SQL> set serveroutput on
SQL> begin
       if dbms_pdb.check_plug_compatibility('/tmp/salesdb.xml') then
         dbms_output.put_line('PDB compatible? ==> Yes');
       else
         dbms_output.put_line('PDB compatible? ==> No');
       end if;
     end;
     /

This will check if there are any issues that will appear when plugging the database instance into the CDB. There is valuable information in the PDB_PLUG_IN_VIOLATIONS view after running the check_plug_compatibility function. There might be error and warning messages that can be resolved before plugging the database in.

For example, the source database version might not match the CDB version, so you can either upgrade first or let the upgrade occur after plugging the database in. Also, there are warnings that are purely informational, such as the requirement to run noncdb_to_pdb.sql after the database is plugged into the CDB.

SQL> select type, message from pdb_plug_in_violations where name='SALESDB' and status <> 'RESOLVED';

Next, you need to create the manifest file of the source single-instance database. The database will need to be in read-only mode.

Restart the source database in read-only mode:

SQL> shutdown immediate

SQL> startup mount

SQL> alter database open read only;

The manifest file needs to be generated:

SQL> exec dbms_pdb.describe('/tmp/salesdb.xml');

SQL> shutdown immediate;

This is the same command we ran earlier on the source to validate the manifest file, but this time the database was in read-only mode.

In the target CDB, the next step is to create the PDB from the manifest file just created:

SQL> create pluggable database salesdb using '/tmp/salesdb.xml' nocopy tempfile reuse;

There are steps that are still needed to complete the migration to a container PDB. Convert to a PDB by running noncdb_to_pdb.sql:

SQL> alter pluggable database salesdb open;

SQL> alter session set container = salesdb;

SQL> @?/rdbms/admin/noncdb_to_pdb.sql

SQL> alter pluggable database salesdb close;

SQL> alter pluggable database salesdb open;

Once again, check any plug-in violations from the target CDB:

SQL> select type, message from pdb_plug_in_violations where name='SALESDB' and status <> 'RESOLVED';

Verify that the PDB is open so you can save the state of the PDB to have it set to auto-start:

SQL> select open_mode, restricted from v$pdbs;

SQL> alter pluggable database salesdb save state;

Once you plug the database in, there is no way to revert it to a noncontainer, single-instance database.

There are other recoverability options, but you should be prepared to troubleshoot any issues inside the container, which gives you another reason to make sure you check those error and warning messages before migrating.

When you plug in a database, you can use AutoUpgrade to upgrade it in the new CDB. The target CDB is part of the config file, so the database will be upgraded as part of the job.

Here is the config file:

$ cat db19to23.cfg
upg1.source_home=/u01/app/oracle/product/19
upg1.target_home=/u01/app/oracle/product/23
upg1.sid=salesdb
upg1.target_cdb=cdb23c

Run the autoupgrade command with the config file:

$ java -jar autoupgrade.jar -config db19to23.cfg -mode deploy
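Before deploying, it is worth running AutoUpgrade in analyze mode against the same config file; this is a read-only check that reports any issues without making changes:

$ java -jar autoupgrade.jar -config db19to23.cfg -mode analyze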