Use a Single Connection in Oracle- Developing Successful Oracle Applications-2

If you use bind variables, then everyone who submits the same exact query that references the same object will use the compiled plan from the pool. You will compile your subroutine once and use it over and over again. This is very efficient and is the way the database intends you to work. Not only will you use fewer resources (a soft parse is much less resource-intensive), but also you will hold latches for less time and need them less frequently. This increases your performance and greatly increases your scalability.

Just to give you a tiny idea of how huge a difference this can make performance-wise, you only need to run a very small test. In this test, we’ll just be inserting some rows into a table; the simple table we will use is

SQL> drop table t purge;

SQL> create table t ( x int );

Table created.

Now we’ll create two very simple stored procedures. They both will insert the numbers 1 through 10,000 into this table; however, the first procedure uses a single SQL statement with a bind variable:
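A minimal sketch of what the first procedure might look like, assuming the table T with its single numeric column X as created earlier (an illustrative reconstruction based on the description, not necessarily the exact original listing):

SQL> create or replace procedure proc1
as
begin
    for i in 1 .. 10000
    loop
        -- one SQL statement, reused for every row, with the value passed as a bind
        execute immediate
            'insert into t ( x ) values ( :x )' using i;
    end loop;
end;
/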

Procedure created.

The second procedure constructs a unique SQL statement for each row to be inserted:

SQL> create or replace procedure proc2 as
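-- a sketch of the remainder (the body is assumed from the description): the
-- value is concatenated into the statement text, so every INSERT is a
-- brand-new SQL statement that must be hard parsed
begin
    for i in 1 .. 10000
    loop
        execute immediate
            'insert into t ( x ) values ( ' || i || ' )';
    end loop;
end;
/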

Now, the only difference between the two is that the first uses a bind variable and the second does not. Both use dynamic SQL, and the logic is otherwise identical.

Note For details on runstats and other utilities, see the “Setting Up Your Environment” section at the beginning of this book. You may not observe exactly the same values for CPU or any metric. Differences are caused by different Oracle versions, different operating systems, and different hardware platforms. The idea will be the same, but the exact numbers will undoubtedly be marginally different.

We are ready to evaluate the two approaches, and we’ll use runstats, a simple tool I’ve developed, to compare the two in detail:
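Assuming the runstats package from that setup section exposes its usual rs_start, rs_middle, and rs_stop calls (treat the exact names as an assumption if your copy differs), the comparison might be driven like this, with the report printed by the final call:

SQL> exec runstats_pkg.rs_start;
SQL> exec proc1;
SQL> exec runstats_pkg.rs_middle;
SQL> exec proc2;
SQL> exec runstats_pkg.rs_stop(9500);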

Now, the preceding result clearly shows that, based on CPU time (measured in hundredths of seconds), it took significantly longer and significantly more resources to insert 10,000 rows without bind variables than it did with them. In fact, it took more than an order of magnitude more CPU time to insert the rows without bind variables. For every insert without bind variables, the vast majority of the time spent executing the statement was spent simply parsing it! But it gets worse. When we look at other information, we can see a significant difference in the resources utilized by each approach:

The runstats utility produces a report that shows differences in latch utilization as well as differences in statistics. Here, I asked runstats to print out anything with a difference greater than 9500. You can see that we hard parsed three times in the first approach using bind variables and that we hard parsed 10,000 times without bind variables (once for each of the inserts). But that difference in hard parsing is just the tip of the iceberg. You can see here that we used an order of magnitude more latches in the nonbind variable approach than we did with bind variables. That difference raises the question “What is a latch?”

Let’s answer that question. A latch is a type of lock that is used to serialize access to shared data structures used by Oracle. The shared pool is an example; it’s a big, shared data structure found in the System Global Area (SGA), and this is where Oracle stores parsed, compiled SQL. When you modify anything in this shared structure, you must take care to allow only one process in at a time. (It is very bad if two processes or threads attempt to update the same in-memory data structure simultaneously— corruption would abound.) So, Oracle employs a latching mechanism, a lightweight locking method to serialize access. Don’t be fooled by the word lightweight. Latches are serialization devices, allowing access (to a memory structure) one process at a time. The latches used by the hard parsing implementation are some of the most used latches out there. These include the latches for the shared pool and for the library cache. Those are “big time” latches that people compete for frequently. What all this means is that as we increase the number of users attempting to hard parse statements simultaneously, our performance gets progressively worse over time. The more people parsing, the more people waiting in line to latch the shared pool, the longer the queues, the longer the wait.

The Black Box Approach- Developing Successful Oracle Applications-5

Now, if we use two different transactions, we can see that both get different records. We also see that both get different records concurrently (using autonomous transactions once again to demonstrate the concurrency issues):
SQL> set serverout on

Now, in Oracle 11g and above, we can achieve the preceding logic using the SKIP LOCKED clause. In the following example, we’ll do two concurrent transactions again, observing that they each find and lock separate records concurrently:
SQL> declare
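    -- sketch only, assuming a queue-like table; the names MSG, ID, and
    -- PROCESSED_FLAG are hypothetical stand-ins for the client's table
    l_rec msg%rowtype;
    cursor c is
        select *
          from msg
         where processed_flag = 'N'
           for update skip locked;
begin
    open c;
    fetch c into l_rec;   -- each session locks, and sees, a different row
    if ( c%found )
    then
        dbms_output.put_line( 'processing message ' || l_rec.id );
    end if;
    close c;
end;
/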

Both of the preceding “solutions” would help to solve the second serialization problem my client was having when processing messages. But how much easier would the solution have been if my client had just used Advanced Queuing and invoked DBMS_AQ.DEQUEUE? To fix the serialization issue for the message producer, we had to implement a function-based index. To fix the serialization issue for the consumer, we had to use that function-based index to retrieve the records and write code. So we fixed their major problem, one caused by not fully understanding the tools they were using, and one found only after a lot of looking and study, since the system was not nicely instrumented.

What we hadn’t fixed yet were the following issues:
•\ The application was built without a single consideration for scaling at the database level.
•\ The application was performing functionality (the queue table) that the database already supplied in a highly concurrent and scalable fashion. I’m referring to the Advanced Queuing (AQ) software that is burned into the database, functionality they were trying to reinvent.
•\ Experience shows that 80 to 90 percent (or more!) of all tuning should be done at the application level (typically the interface code reading and writing to the database), not at the database level.
•\ The developers had no idea what the beans did in the database or where to look for potential problems.


This was hardly the end of the problems on this project. We also had to figure out the following:
•\ How to tune SQL without changing the SQL. In general, that is very hard to do. We can accomplish this magic feat to some degree with SQL Profiles (this option requires a license for the Oracle Tuning Pack), with extended statistics (a small example follows this list), and with adaptive query optimization. But inefficient SQL will remain inefficient SQL.
•\ How to measure performance.
•\ How to see where the bottlenecks were.
•\ How and what to index. And so on.
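As one small illustration of tuning without touching the SQL text, extended statistics can be created on a column group so the optimizer understands correlated columns; the table and column names here are hypothetical:

SQL> select dbms_stats.create_extended_stats( user, 'ORDERS', '(CUSTOMER_ID, ITEM_ID)' )
       from dual;

SQL> exec dbms_stats.gather_table_stats( user, 'ORDERS' );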

At the end of the week, the developers, who had been insulated from the database, were amazed at what the database could actually provide for them and how easy it was to get that information. Most importantly, they saw how big of a difference taking advantage of database features could make to the performance of their application. In the end, they were successful—just behind schedule by a couple of weeks.

My point about the power of database features is not a criticism of tools or technologies like Hibernate, EJBs, and container-managed persistence. It is a criticism of purposely remaining ignorant of the database and how it works and how to use it. The technologies used in this case worked well—after the developers got some insight into the database itself.

The bottom line is that the database is typically the cornerstone of your application. If it does not work well, nothing else really matters. If you have a black box and it does not work, what are you going to do about it? About the only thing you can do is look at it and wonder why it is not working very well. You can’t fix it; you can’t tune it. Quite simply, you do not understand how it works—and you made the decision to be in this position. The alternative is the approach that I advocate: understand your database, know how it works, know what it can do for you, and use it to its fullest potential.

The Black Box Approach- Developing Successful Oracle Applications-1

I have an idea, borne out by first-hand personal experience (meaning I made the mistake myself), as to why database-backed software development efforts so frequently fail.

Let me be clear that I’m including here those projects that may not be documented as failures, but nevertheless take much longer to roll out and deploy than originally planned because of the need to perform a major rewrite, re-architecture, or tuning effort. Personally, I call such delayed projects failures: more often than not they could have been completed on schedule (or even faster).

The single most common reason for failure is a lack of practical knowledge of the database—a basic lack of understanding of the fundamental tool that is being used. The black box approach involves a conscious decision to protect the developers from the database. They are actually encouraged not to learn anything about it! In many cases, they are prevented from exploiting it. The reasons for this approach appear to be FUD related (Fear, Uncertainty, and Doubt). Developers have heard that databases are “hard,” that SQL, transactions, and data integrity are “hard.” The solution: don’t make anyone do anything hard. They treat the database as a black box and have some software tool generate all of the code. They try to insulate themselves with many layers of protection so that they don’t have to touch this “hard” database.

This is an approach to database development that I’ve never been able to understand, in part because, for me, learning Java and C was a lot harder than learning the concepts behind the database. I’m now pretty good at Java and C, but it took a lot more hands-on experience for me to become competent using them than it did to become competent using the database. With the database, you need to be aware of how it works, but you don’t have to know everything inside and out. When programming in C or Java/J2EE, you do need to know everything inside and out—and these are huge languages.

If you are building a database application, the most important piece of software is the database. A successful development team will appreciate this and will want its people to know about it, to concentrate on it. Many times I’ve walked into a project where almost the opposite was true. A typical scenario would be as follows:

•\ The developers were fully trained in the GUI tool or the language they were using to build the front end (such as Java). In many cases, they had had weeks if not months of training in it.

•\ The team had zero hours of Oracle training and zero hours of Oracle experience. Most had no database experience whatsoever. They would also have a mandate to be “database independent”—a mandate (edict from management or learned through theoretical academic instruction) they couldn’t hope to follow for many reasons. The most obvious one is they didn’t know enough about what databases are or what they do to even find the lowest common denominator among them.

•\ The developers encountered massive performance problems, data integrity problems, hanging issues, and the like (but very pretty screens).

As a result of the inevitable performance problems, I now get called in to help solve the difficulties (in the past, as a learning developer I was sometimes the cause of such issues).

Note Even today, I often find that the developers of database applications have spent no time reading the documentation. On my website, asktom.oracle.com, I frequently get questions along the lines of “what is the syntax for…” coupled with “we don’t have the documentation so please just tell us.” I refuse to directly answer many of those questions, but rather point them to the online documentation freely available to anyone, anywhere in the world. In the last 15 years, excuses like “We don’t have documentation” or “We don’t have access to resources” have disappeared. Sites like www.oracle.com/technical-resources (Oracle Technical Resources, formerly known as the Oracle Technology Network) make it inexcusable to not have a full set of documentation at your fingertips! Today, everyone has access to all of the documentation; they just have to read it or—even easier—Google it.

Fleet Management – Migration to Multitenant and Fleet Management

As a DBA, you probably take care of several databases. Your job involves managing the environment, servers, software, storage, and more. Patching large environments is a difficult task, and you also want to be able to provision new databases quickly and easily. At that scale, there needs to be a way to manage multiple databases as a group, also called a fleet. This includes upgrading and patching with repeatable processes, making it easier to roll through changes while still minimizing downtime.

Oracle’s Fleet Patching and Provisioning (FPP) tool helps maintain the life cycle of a large environment. This is one way to think of fleet management, which will be discussed in the next couple of sections. After that, we are going to look at fleet administrators.

Fleet administrators not only handle the patching and provisioning tasks; the role has also developed into a new one for Autonomous Databases on dedicated systems. It’s a slightly different way of looking at fleet management and administration, but it’s an important responsibility for DBAs transitioning to multitenant and cloud environments.

Oracle Fleet Patching and Provisioning

FPP provides a standard method to patch, upgrade, and provision databases, and it is a service in the grid infrastructure. It applies to both grid and database homes across all environments. The software is installed once and stored on the FPP server, which maintains gold images of the software and patches to be used for patching and upgrades. Commands can be run against hundreds of targets at the same time. This makes quarterly patching practical, and it implements the much-needed automation for these large environments.

New database patches and updates are delivered as images, and each patched version of the software becomes a new image version. This includes quarterly security patches and release updates. The images cover not just the database home but also the grid home.

FPP can be configured as a central server that deploys gold images to any number of nodes and database and grid homes across the environment. FPP clients are configured to retrieve the gold images from the FPP server and, based on policies, apply them to the servers where the clients are configured. Since FPP is part of the grid infrastructure, it can also be run locally without any central server; with this option, the local Oracle homes can be patched and additional nodes provisioned locally.

Types of Patching

Patching in place requires a longer downtime, as you apply the patch in the current environment. This requires stopping the instances, patching, starting the database again, and running the additional patching or upgrade steps. With this approach, you have to install or run the patch each time, in each environment.

FPP uses out-of-place patching: it deploys a new working copy, and then the databases are moved to the new Oracle home. This might be something you already do manually: install the newly patched binaries in a new Oracle grid or database home. After that, the process consists of a stop, a move, and a start. The outage is only the time needed to restart in the new home with the newly installed binaries.
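As a rough sketch of that flow with FPP (the image and working-copy names here are made up, and exact rhpctl options vary by release, so verify them against the documentation for your version):

$ rhpctl import image -image db19_ru21 -zip /stage/db19_ru21.zip
$ rhpctl add workingcopy -workingcopy wc_db19_ru21 -image db19_ru21 -path /u01/app/oracle/product/19ru21
$ rhpctl move database -sourcewc wc_db19_ru19 -patchedwc wc_db19_ru21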

With multitenant, you can also patch a CDB and the PDBs together. PDBs can also be patched separately by moving them to a new CDB running in the patched or upgraded Oracle home.
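A hedged sketch of that second approach, moving a single PDB (hypothetically named SALESPDB) into a CDB that is already running out of the patched home; datapatch is then run to apply the SQL portion of the patch:

SQL> alter pluggable database salespdb close immediate;
SQL> alter pluggable database salespdb unplug into '/tmp/salespdb.xml';

-- connected to the CDB running in the patched Oracle home
SQL> create pluggable database salespdb using '/tmp/salespdb.xml' nocopy;
SQL> alter pluggable database salespdb open;

$ $ORACLE_HOME/OPatch/datapatch -verbose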

User Groups- Migration to Multitenant and Fleet Management

We discussed the different groups that are available to manage the Oracle software and processes. This is important when you are looking at upgrading and patching. The same user groups are needed to own the software on the servers and to deploy it.
Groups are managed as part of the provisioning; they are not inherited from the images. The groups can be set using the -groups parameter on the rhpctl command, as shown in the example that follows. FPP will use the group of the user that is running the command to copy the image to the working copy.
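For example (the group names are illustrative, and the exact option format should be checked for your release):

$ rhpctl add workingcopy -workingcopy wc_db19_ru21 -image db19_ru21 -groups "OSDBA=dba,OSOPER=oper,OSBACKUP=backupdba"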

Methods of Patching

Being able to roll through patches without too much downtime is important for most environments. Databases need to be up and available, so it is important to plan a strategy that keeps patching consistent, minimizes downtime, and automates the process.
By default, rolling patching of the grid infrastructure is the method FPP uses. When not using FPP, you can have two installed homes and follow basically the same method, but this requires more manual intervention. The different methods for the grid infrastructure are as follows:
• Rolling: Moves the grid infrastructure home sequentially through each node in the clusters
• Nonrolling: Done in parallel to patch all of the nodes at the same time
• Batch: Patches the Oracle grid and database homes at the same time.
• Zero downtime: Does not bring down the databases that are part of the cluster while patching; this method automates the database upgrades without interrupting the service

The idea with fleet patching and provisioning is to provide a solution for large environments: a managed patching system that rolls out patched versions of the Oracle home consistently and efficiently. So, where we might previously have spent all our time patching without even being able to patch the whole environment, management tasks can now focus on working with the gold images, testing the releases, and implementing the most effective solution for the environment.

Fleet Management with Autonomous

Shifting gears slightly, we are going to look at another type of fleet management. This is not completely separate from patching and provisioning, and it does include some of those tasks. It is not a tool to help you manage large environments, though; rather, it defines the responsibilities of an administrator of databases in the cloud on Dedicated Exadata Infrastructure and Exadata Cloud@Customer.

As previously discussed with the installation of the Oracle Database, we looked at Autonomous Databases in the cloud. Autonomous Database – Serverless (ADB-S) allows quick creation of a database through the Oracle Cloud Infrastructure console. It may not appear that much is needed from a database administrator; however, there are administrative tasks and responsibilities for Autonomous Database Dedicated. These include managing and monitoring the container databases for the environment.

The fleet administrators create and manage the Autonomous Container Database resources. They need the appropriate permissions in the Oracle Cloud to administer the Autonomous Exadata VM Clusters along with the container database. The fleet administrators will also need to set up the networking resources that are needed for the database environment and connections.

At the beginning of this book, we discussed the different roles that DBAs perform, as well as the different default roles that are part of the configuration to provide separation of duties. This includes managing the grid environment versus the databases and backups. The same applies here. The infrastructure side of Autonomous Dedicated is very similar to managing the container databases and servers in multitenant. Both are part of administering the fleet of databases.

With Autonomous Database – Serverless, Oracle performs these same tasks to provide self-service databases. In the dedicated architecture, the fleet administrator is responsible for these components: the Exadata infrastructure, the container databases, and security.

Monitoring- Migration to Multitenant and Fleet Management

Ideally, you are seeing that there are opportunities for database administrators with fleet management. Whether we are talking about patching or Autonomous Dedicated, the architecture and infrastructure need to be configured and managed, even if the Autonomous Database is then provided on demand.

The health of the environment also needs to be monitored.

The dedicated environment offers choices similar to those for database creation, but notice that the software installation and patching are automated.

Even with automation, there are different areas where management and policies can be applied to make sure you are meeting your company’s requirements.

It is also important for the fleet administrator to monitor the ADBs and tune them.

There might be different configurations needed for the VMs or for the Autonomous Container Database. Just as with the migration of pluggable databases, the fleet administrator can relocate databases to different containers to help with performance or to redistribute resources as needed.

This is part of the monitoring and migration tasks that are needed for the dedicated system.

Restoring and availability come into play here too. Most of the details of the previous chapters can be leveraged to manage large and cloud databases.

A dedicated environment also lets you set the different options for the maintenance schedule and the type of patching. With ADB-S, there are no choices, but with dedicated, these settings can be matched to your company’s maintenance windows and backup strategies.

This also includes deciding on the VM clusters and how many Autonomous Container Databases are created and the resources allocated.

When migrating to multitenant and cloud, the database administrator’s job is changing. There are options for fleet administration and FPP Server management. The administration tasks here are infrastructure system administration responsibilities.

Next, we are going to look at other ways the administration role is changing with the management of the data.

Network- Migration to Multitenant and Fleet Management-2

In Figure 17-2, you see the compartment is the first choice and should be prepopulated if already working in that compartment. Provide a name for the infrastructure, and refer to your company standards and guidelines for naming conventions.

There is a choice of availability domain that depends on the tenancy you are working in, and another choice on the system model for the Exadata. The default is the latest and greatest model available.

Figure 17-2. Exadata infrastructure form

Maintenance can be configured differently and customized, including on the schedule shown in Figure 17-3. Patching can be rolling or nonrolling.

Figure 17-3. Advanced options maintenance

Once the Exadata infrastructure is created, the VM clusters can be created. Figure 17-4 shows this as the next choice, working the way up from the Exadata infrastructure.

Figure 17-4. Creating VM clusters

As shown in Figure 17-5, the previously created Exadata infrastructure is included in order to create the VMs on the right infrastructure. The display name again should follow your company standards or guidelines set for this database environment.

Figure 17-5. Information for autonomous VM cluster

ECPUs are the elastic cores for Autonomous Database. Figure 17-6 shows not just the ECPUs to adjust but other resources on the VM to put limits on containers. Setting the details on the number of Autonomous Container Databases doesn’t limit the number of pluggables or Autonomous Databases in the container, but the memory and storage limits might do that.

Figure 17-6. Setting VM resources for containers

The last component we will look at here is the Autonomous Container Database, as shown in Figure 17-7.

Figure 17-7. Creating the Autonomous Container Database

The create information requires the compartment, the Exadata infrastructure, and the VM cluster, as shown in Figure 17-8. Also, here you will see the version of the Autonomous Container Database that is available for the base image.

Figure 17-8. Providing information for the Autonomous Container Database

The last step in creating the container is editing the automatic maintenance. Figure 17-9 shows the choices for the schedule and patching.

Figure 17-9. Configuring maintenance

No-Copy or Copy Options – Migration to Multitenant and Fleet Management

The no-copy option for migrating to multitenant reuses the data files in place. The failback option is then to use Data Pump to get back to a non-CDB, single-instance database. That recovery path is not necessarily fast, but the migration itself is a simple and faster method.

If the recovery plan needed to be faster, then it might make more sense to copy the data files instead of reusing them. This will include copying and renaming the files with FILE_NAME_CONVERT.

It will take longer to migrate because the files are copied before plugging in the database and migrating, but it provides a quicker rollback plan.

You create the manifest file and shut down the database. Copy the data files to a new location or to a new name.
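Generating the manifest on the source non-CDB might look like the following sketch; DBMS_PDB.DESCRIBE requires the database to be open read only, and the file path here simply matches the example used later:

SQL> shutdown immediate
SQL> startup open read only
SQL> exec dbms_pdb.describe( pdb_descr_file => '/tmp/salesdb.xml' )
SQL> shutdown immediate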

In the target CDB, the next step is to create a PDB from the manifest file just created:

SQL> create pluggable database salesdb using '/tmp/salesdb.xml' copy
     file_name_convert=('SALES19C','SALESDB');

One more migration option to mention for plugging in a database is to clone the database over a database link. This consists of cloning a non-CDB, single-instance database to a pluggable database in a CDB.

The autoupgrade command and the config file will make it easier to set up the options, and the database link makes it possible.

Here is the config file:

$ cat db19_to_23.cfg
upg1.source_home=/u01/app/oracle/product/19
upg1.target_home=/u01/app/oracle/product/db23c
upg1.sid=salesdb
upg1.target_cdb=cdb23c
upg1.target_pdb_name=salesdb
upg1.target_pdb_copy_option=file_name_convert=('SALES19C','SALESDB')

Run this command:

$ java -jar autoupgrade.jar -config db19_to_23.cfg -mode deploy

Other migration options are some of the typical options for upgrades and migrations, not just for converting a non-CDB to a container:

•     Data Pump

•     GoldenGate

•     Transportable tablespaces

These options keep the original database for a rollback plan, just like copying the data files. GoldenGate does allow for zero downtime, while the other migration options will require some downtime to copy, move, and upgrade.

Tip There is a great resource for upgrades and migrations in Mike Dietrich’s blog, Upgrade your Database – NOW! He gives more details on these options and helps with planning for the migration.

Multitenant migration in 19c is a good first step to getting to 23c. You can upgrade and migrate at the same time as long as you are going from 19c. The upgrade path to 23c is from 19c or 21c. If you are using a non-CDB, single-instance database in 19c, then it makes sense to migrate the database to a pluggable database.

Network- Migration to Multitenant and Fleet Management-1

Fleet administrators are not necessarily responsible for the network configuration but can provide some input and guidelines. The network will be configured with policies similar to those used for on-prem databases.

For example, databases should be in a private subnet without Internet access, subnets will have their own security lists, and ingress ports should be opened only as needed.

It is important here to involve network administrators to set up the network configuration and make sure the database servers are in compliance with policies.

With databases, we need to confirm that TCP traffic is allowed on port 1521. If using APEX, SQL Developer, or Database Actions, port 443 should also be allowed. Additional ports would be 2484 and 6200 for encrypted traffic and application continuity.

If access is needed to the Internet, an Internet gateway can be created in the virtual cloud network (VCN). Bastion hosts can also be configured for SSH access and developer client machines.

Exadata Infrastructure

From the OCI console, under Autonomous Database, there are two choices: Autonomous Database and Dedicated Infrastructure. Autonomous Database is the serverless option and is typically used for just creating ADBs in the OCI environment. Dedicated Infrastructure is where you would be setting up the components of the dedicated environment.

It is important to make sure you are creating the Exadata infrastructure in the right compartment, as this will have the policies granted and keep the database components in the same area.

Let’s walk through a couple of screenshots from the OCI console to show the steps for the components and also give some detail about the responsibilities of the fleet administrator.

Figure 17-1 shows the starting point for creating the Exadata infrastructure.

Figure 17-1. Creating Exadata infrastructure

JSON- Data Management

Let’s look again at the JSON data type. Using JSON documents makes it easier for applications to view and pull in custom information, such as properties and attributes, as a single document. JSON is an easy-to-parse data format and doesn’t need the complete schema defined.

JSON is a text format, and the components are identified within the document. It is also easy to send JSON through an API and validate the format. JSON documents are popular with developers for temporarily storing data.

Oracle 23c introduced JSON relational duality views and the JSON data type, along with the functions to verify and format JSON documents using SQL.
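For instance, a column can be declared with the native JSON type, populated, and pretty-printed with SQL functions; this is just a small illustrative sketch with a made-up table name:

SQL> create table api_payload ( id number primary key, doc json );

SQL> insert into api_payload values ( 1, json('{"status":"new","qty":2}') );

SQL> select json_serialize( doc pretty ) from api_payload;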

Talking about JSON in this way, rather than just as a data type or a different type of view in the database, highlights the way applications will be using the data and the way development teams will be accessing it.

You can provide them with data in a format backed by the existing relational tables in the database, and they can access it as JSON documents.

Let’s say we have customer, items, and orders tables for example purposes, as shown here:

SQL> create table customer (
        customer_id          number primary key,
        customer_name        varchar2(80),
        address1             varchar2(80),
        city                 varchar2(80),
        state                varchar2(20),
        zipcode              number,
        geo_location         sdo_geometry,
        customer_begin_date  date );

Table created.

SQL> create table items (
        item_id           number primary key,
        item_name         varchar2(80),
        item_description  varchar2(4000),
        item_price        number );

Table created.

SQL> create table orders (
        order_id     number primary key,
        customer_id  number,
        item_id      number,
        order_date   date );

Table created.

SQL> alter table orders add constraint fk_customer foreign key (customer_id) references customer(customer_id);

SQL> alter table orders add constraint fk_items foreign key (item_id) references items(item_id);
-- foreign key references are needed for creating duality views

These tables were created for taking orders, and then the data needs to be passed off to print invoices, ship products, or other processes for these orders.

The different systems can take the data as JSON documents, so all of the information is provided at once with the needed details. If the transaction data were not stored in relational tables, and only as JSON, I might have duplicate data about items and customers that would need to be maintained. Changes to items would have to be made in the various order documents, which might cause data consistency issues.

The relational tables handle this. The JSON relational duality view on top of these tables provides the JSON document with real-time consistent data. Updates, inserts, and deletes can also be handled through these views.

Here is a sample JSON relational duality view; you can create several other views depending on the needed attributes and uses. Since the view does not store the data, just the metadata, it is easy to create a view that is designed for each use case:

SQL> create json relational duality view cust_orders_dv as
     select JSON { 'customer_id'   is c.customer_id,
                   'customer_name' is c.customer_name,
                   'order' is
                     [ select JSON { 'order_id'   is o.order_id,
                                     'order_date' is o.order_date,
                                     'item_id'    is o.item_id }
                         from orders o
                        where c.customer_id = o.customer_id ] }
       from customer c;
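Once the view exists, developers can read (and, with the appropriate annotations on the view, write) whole JSON documents while the data stays in the relational tables underneath. A small hedged sketch; the document column exposed by a 23c duality view is named DATA:

SQL> select json_serialize( data pretty )
       from cust_orders_dv
      where json_value( data, '$.customer_id' returning number ) = 1;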

Maybe there is a shipped column that allows changes through the view to record that the order has been shipped. We did discuss this topic before with views, but it is important to mention it again when discussing data management.

This is just to start getting the ideas flowing and to recognize that when the development teams are looking for the JSON format, there are ways to provide exactly that.

Besides the relational tables and JSON views, we can do analytics on the data and include JSON in machine learning and graph algorithms.