Quick note on Financial Risk Management

In this article we will look at a few of the most commonly used formulas in financial risk management.

Financial risk management involves a number of quantitative techniques and tools that help businesses and organizations identify and mitigate various types of financial risk.
The formulas below are among the most commonly used in financial risk management:

Value-at-Risk (VaR):

VaR is a statistical measure used to estimate the maximum potential loss that an investment or portfolio may experience over a given time period with a certain level of confidence.
 The formula for VaR is:
            VaR = Expected return – (Z-score x Standard deviation)
Where:

  • Expected return is the expected rate of return on the investment or portfolio
  • Z-score is the number of standard deviations corresponding to the desired level of confidence
  • Standard deviation is the measure of volatility of the investment or portfolio
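As a quick illustration, here is a minimal Python sketch of the calculation, using hypothetical numbers for the expected return, volatility, and confidence level:

from scipy.stats import norm

expected_return = 0.05   # hypothetical 5% expected return
std_dev = 0.12           # hypothetical 12% volatility (standard deviation)
confidence = 0.95        # desired confidence level

z_score = norm.ppf(confidence)             # about 1.645 for 95% confidence
var = expected_return - z_score * std_dev  # a negative value indicates a potential loss
print(f"VaR at {confidence:.0%} confidence: {var:.2%}")   # about -14.74%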

Capital Asset Pricing Model (CAPM):

CAPM is a model that helps investors calculate the expected return on an investment given its risk level. The formula for CAPM is:
         Expected return = Risk-free rate + Beta x (Market return – Risk-free rate)
Where:

  • Risk-free rate is the rate of return on a risk-free investment such as a government bond
  • Beta is a measure of the investment’s volatility relative to the overall market
  • Market return is the expected return on the overall market
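For example, with a hypothetical 3% risk-free rate, a beta of 1.2, and an 8% expected market return, the calculation looks like this in Python:

risk_free_rate = 0.03   # hypothetical government bond yield
beta = 1.2              # hypothetical beta of the investment
market_return = 0.08    # hypothetical expected market return

# CAPM: risk-free rate plus beta times the market risk premium
expected_return = risk_free_rate + beta * (market_return - risk_free_rate)
print(f"Expected return: {expected_return:.2%}")   # 9.00%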

Black-Scholes Model:

The Black-Scholes model is used to calculate the theoretical price of a European call or put option on a stock or other asset. The formula for a European call option is:
           Option price = S x N(d1) – X x e^(-rt) x N(d2)
Where:

  • S is the current price of the underlying asset
  • N() is the cumulative standard normal distribution function
  • d1 = [ln(S/X) + (r + σ^2/2) x t] / (σ x √t)
  • d2 = d1 – σ x √t
  • X is the strike price of the option
  • r is the risk-free interest rate
  • σ is the volatility of the underlying asset
  • t is the time to expiration of the option
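Here is a minimal Python sketch of the call-option formula above, using scipy's standard normal CDF and hypothetical inputs:

import math
from scipy.stats import norm

def black_scholes_call(S, X, r, sigma, t):
    """Theoretical Black-Scholes price of a European call option."""
    d1 = (math.log(S / X) + (r + sigma ** 2 / 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return S * norm.cdf(d1) - X * math.exp(-r * t) * norm.cdf(d2)

# hypothetical inputs: spot 100, strike 105, 5% rate, 20% volatility, 1 year
print(round(black_scholes_call(100, 105, 0.05, 0.20, 1.0), 2))   # about 8.02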

Debt to Equity Ratio:

Debt to equity is also known as the “debt-equity ratio”, “risk ratio”, or “D/E”. The ratio indicates how much of the company’s financing comes from debt: the higher the ratio, the higher the company’s risk. It is calculated by dividing total financial liabilities (total long-term debt and short-term debt, including any fixed payments) by total shareholders’ equity.

Debt-to-equity ratio (D/E) = Total debt (Long-term debt + Short-term debt + Other fixed payments) / Total equity
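For example, with hypothetical figures for each component of debt and for equity, the calculation is straightforward:

long_term_debt = 400_000       # hypothetical figures
short_term_debt = 100_000
other_fixed_payments = 50_000
total_equity = 275_000

de_ratio = (long_term_debt + short_term_debt + other_fixed_payments) / total_equity
print(round(de_ratio, 2))   # 2.0 -- the company carries twice as much debt as equity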

Beta:

Beta is a measure of a stock’s volatility relative to the overall market. The formula for beta is:

Beta = Covariance (stock, market) / Variance(market)

Where:

  • Covariance (stock, market) is the covariance between the stock and the overall market
  • Variance(market) is the variance of the overall market
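Here is a small Python sketch using NumPy, with hypothetical daily returns for a stock and for the market:

import numpy as np

stock = np.array([0.011, -0.004, 0.007, 0.002, -0.009])    # hypothetical stock returns
market = np.array([0.008, -0.002, 0.005, 0.001, -0.006])   # hypothetical market returns

# np.cov returns the covariance matrix; element [0, 1] is Cov(stock, market)
beta = np.cov(stock, market)[0, 1] / np.var(market, ddof=1)
print(round(beta, 2))   # about 1.45 -- the stock is more volatile than the market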

These are a few of the many formulas and models used in financial risk management. The appropriate formulas and models depend on the specific type of risk being managed and on the needs of each business. Most importantly, work with a qualified financial professional to design a comprehensive risk management strategy that suits your business or organization.

Thank you for reading my blog; this post is based on one of my articles published on Experts-Exchange.

Please feel free to leave me some feedback or to suggest any future topics.

Looking forward to hearing from you – Swadhin Ray (Sloba) -( LinkedIn ) ( Twitter )

Setting Up Oracle 23c Using Vagrant-Projects

Introduction

Oracle Database 23c is the latest release of Oracle’s flagship database management system. It includes several new features and enhancements, including:

  • JSON Relational Duality: This feature allows developers to store and query JSON data in a relational database. This makes it possible to use the same tools and techniques for both JSON and relational data, which can simplify development and maintenance.
  • Operational Graphs: This feature allows developers to store and query graph data in a relational database. This makes it possible to use the same tools and techniques for both graph and relational data, which can simplify development and maintenance.
  • Microservices support: This feature allows developers to build and deploy microservices on top of Oracle Database. This can simplify development and maintenance, and can also improve performance and scalability.
  • Real-time machine learning: This feature allows developers to train and deploy machine learning models on top of Oracle Database. This can improve the performance and responsiveness of applications and reduce the need for data movement.
  • Support for new data types: This allows developers to store and query data in a variety of data types such as JSON, XML, and spatial data. This makes storing and querying data more efficient and improves application performance.

Now let us see how we can set up an environment to explore the features mentioned above locally.

To install Oracle 23c using vagrant-projects, you will need to have the following:

  • Oracle VM VirtualBox
  • Vagrant

After we install the above two tools, we need to clone the repository as shown below:

git clone https://github.com/oracle/vagrant-projects

Once the repository is cloned, navigate to “..\vagrant-projects\OracleDatabase\23.2.0-Free” and type “vagrant up” to initiate the installation process.

Wait until all the setup steps are complete.

Once you see the line below on your command prompt, you are ready to use Oracle 23c:

oracle23c-free-vagrant: INSTALLER: Installation complete, database ready to use!

To connect to the database, let us SSH to the server and log in using the steps below:

C:\Users\Owner\Desktop\Study\oracle-vm\vagrant-projects\OracleDatabase\23.2.0-Free>vagrant ssh
==> vagrant: Getting Proxy Configuration from Host...

Welcome to Oracle Linux Server release 8.7 (GNU/Linux 5.15.0-5.76.5.1.el8uek.x86_64)

The Oracle Linux End-User License Agreement can be viewed here:

  * /usr/share/eula/eula.en_US

For additional packages, updates, documentation and community help, see:

  * https://yum.oracle.com/

[vagrant@localhost ~]$ sudo su - oracle

The “sudo su – oracle” command above switches to the oracle user. The passwords are auto-generated, but we can change them using the commands below.

[oracle@localhost ~]$ cd /home/oracle
[oracle@localhost ~]$ ls
setPassword.sh
[oracle@localhost ~]$ ./setPassword.sh <<Your New password>>

SQL*Plus: Release 23.0.0.0.0 - Developer-Release on Mon Apr 10 14:41:01 2023
Version 23.2.0.0.0

Copyright (c) 1982, 2023, Oracle.  All rights reserved.

Connected to:
Oracle Database 23c Free, Release 23.0.0.0.0 - Developer-Release
Version 23.2.0.0.0

SQL>
User altered.

SQL>
User altered.

SQL>
Session altered.

SQL>
User altered.

SQL> Disconnected from Oracle Database 23c Free, Release 23.0.0.0.0 - Developer-Release
Version 23.2.0.0.0

Now let us connect to the database using the new password:

[oracle@localhost ~]$ sqlplus sys as sysdba

SQL*Plus: Release 23.0.0.0.0 - Developer-Release on Mon Apr 10 14:41:23 2023
Version 23.2.0.0.0

Copyright (c) 1982, 2023, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 23c Free, Release 23.0.0.0.0 - Developer-Release
Version 23.2.0.0.0

SQL>

So now we are all set to use the new Developer Release of Oracle 23c and try out all the new features provided by Oracle.

Thank you for reading my blog, please feel free to leave me some feedback or to suggest any future topics.

Looking forward to hearing from you – Swadhin Ray (Sloba) -( LinkedIn ) ( Twitter )

Identity and Access Management

In this blog we will see how Identity and Access Management plays a key role in any enterprise-level database management system.

Identity and Access Management (IAM)

Identity and Access Management (IAM) is one of the critical components of any enterprise-level database management system: it is needed to define and manage user identities and to apply access control policies. With IAM in place, user access to our systems becomes more efficient and better controlled. Managing user access, permissions, and privileges is essential to ensure data security and compliance with regulatory requirements. In this article, we will see how to design IAM for Oracle Database, one of the most popular enterprise database management systems available. There are many products on the market, but building your own design gives better visibility and helps achieve the organization’s goals.

Step 1: Define User Roles and Responsibilities 

The first step in creating an IAM design for Oracle Database is to define user roles and responsibilities. This includes identifying which users need access to which data, what level of access they require, and what their responsibilities are within the organization. Once user roles and responsibilities are defined, you can create security policies and procedures that align with them.

Step 2: Create User Groups 

Once user roles and responsibilities are defined in the earlier step, the next action is to create user groups. User groups are collections of users with similar roles and responsibilities. By grouping users together, you can assign permissions and access controls to the entire group, rather than having to manage individual users’ permissions separately.

Step 3: Define Access Controls

Access controls are used to manage user access to data and resources within the Oracle Database. They can be defined at the user, group, or object level. In other words, they define what level or type of access to specific data a user or group will have and what actions they can perform.

Step 4: Authentication using SSO (Single Sign-On)

SSO is a security method that allows users to log in once and access multiple systems without having to enter their credentials repeatedly. SSO can be implemented using Oracle Access Manager, which provides a centralized authentication and authorization service that can be integrated with Oracle Database.

Step 5: Implement Multi-Factor Authentication (MFA) – Conditional or optional

MFA is an additional layer of security that requires users to present more than one form of authentication when accessing the Oracle database. It is optional rather than mandatory and should be implemented based on business criticality and only where it adds value. MFA can be implemented using Oracle Identity Cloud Service, which provides a range of authentication methods, including SMS, email, and biometric verification.

Step 6: Monitor and Audit User Activity

Monitoring and auditing user activity is essential and critical for detecting and preventing security breaches. Oracle Database has built-in auditing capabilities that track a user’s footprint from login, through data access, to logout. You can also use third-party tools such as Oracle Audit Vault and Database Firewall to enhance auditing and monitoring capabilities, or define your own custom audit process; however, the better option is to use the built-in capabilities or a third-party tool to avoid extra maintenance.

In conclusion, creating a design for IAM on Oracle Database is a crucial step in ensuring data security and compliance. By defining user roles and responsibilities, creating user groups, implementing access controls, SSO, MFA, and monitoring and auditing user activity, you can establish a robust and secure IAM framework for your Oracle Database.

Thank you for reading my blog, please feel free to leave me some feedback or to suggest any future topics.

Looking forward to hearing from you – Swadhin Ray (Sloba) -( LinkedIn ) ( Twitter )

Integrated Surveillance on Financial Sector

In this blog we will see how integrated surveillance models play a key role in identifying trade issues in the financial sector.

The financial sector is a crucial pillar of any economy.
Banks, as the most prominent financial institutions, play a vital role in ensuring the smooth functioning of the economy. Banks are involved in a range of activities, including lending, borrowing, investing, and providing other financial services. In recent years, there has been an increase in the complexity and interconnectedness of the financial system, leading to various trade issues. These issues can significantly impact the economy, and therefore, it is crucial to identify and mitigate them as early as possible. Integrated surveillance models are an effective tool that can be used to identify these trade issues.

Integrated surveillance models are systems that combine various sources of data to provide a holistic view of the financial system. These models typically use advanced data analytics tools and techniques to analyze large volumes of data in real-time. They can identify patterns, trends, and anomalies that may indicate potential trade issues.

The following are some of the integrated surveillance models that can be used to identify trade issues in banks:

  1. Risk-Based Surveillance (RBS) Model: This model focuses on the risk exposure of banks. It uses various risk metrics, such as credit risk, liquidity risk, and market risk, to monitor the health of banks. The RBS model can identify potential trade issues by detecting changes in these risk metrics.
  2. Early Warning System (EWS) Model: This model is designed to detect early signs of financial distress. It uses a combination of financial and non-financial indicators to identify banks that may be facing trade issues. The EWS model can provide early warning signals that enable policymakers to take appropriate action before the issue escalates.
  3. Macroprudential Surveillance Model: This model focuses on the overall stability of the financial system. It analyzes various macroeconomic indicators, such as GDP, inflation, and interest rates, to identify potential trade issues. The macroprudential surveillance model can detect systemic risks that may impact the entire financial system.
  4. Network Analytics Model: This model analyzes the interconnectedness of banks. It looks at the relationships between banks and identifies potential contagion effects that may result from the failure of one bank. The network analytics model can detect trade issues that may impact multiple banks and the entire financial system.

Integrated surveillance models can be used to identify a range of trade issues in banks. These issues may include credit risk, liquidity risk, market risk, operational risk, and systemic risk. By identifying these issues early, policymakers can take appropriate action to mitigate the impact on the economy.

In conclusion, integrated surveillance models are a critical tool for identifying trade issues in banks. They provide a holistic view of the financial system and can detect potential issues before they escalate. By using these models, policymakers can take proactive measures to ensure the stability and resilience of the financial system.

Risk-Based Surveillance (RBS) models are effective tools that can be used to monitor these risks and identify potential issues before they escalate. In this blog, we will explore the RBS model and demonstrate how to implement it using Python.

What is Risk-Based Surveillance (RBS) Model?
The RBS model is a surveillance system that monitors the risk exposure of banks. It uses various risk metrics, such as credit risk, liquidity risk, and market risk, to assess the health of banks. The RBS model can detect potential trade issues by identifying changes in these risk metrics.

Implementing RBS Model Using Python
Python is a popular programming language used for data analysis, and it has several libraries that are suitable for implementing the RBS model. In this section, we will demonstrate how to implement the RBS model using Python. We will use the pandas and matplotlib libraries for data analysis and visualization.

Step 1: Import Required Libraries
The first step is to import the required libraries. We will use the pandas library to load and manipulate data and the matplotlib library to visualize it; you can continue reading the full walkthrough in my original article published on Experts-Exchange.com.
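To give a flavour of what that looks like, here is a minimal sketch with hypothetical risk metrics and thresholds; the full implementation is in the original article:

import pandas as pd
import matplotlib.pyplot as plt

# hypothetical risk metrics for a few banks
df = pd.DataFrame({
    "bank": ["Bank A", "Bank B", "Bank C"],
    "credit_risk": [0.04, 0.09, 0.03],      # e.g. non-performing loan ratio
    "liquidity_risk": [0.15, 0.35, 0.10],   # e.g. short-term funding gap
})

# flag banks whose metrics breach the hypothetical thresholds
flagged = df[(df["credit_risk"] > 0.05) | (df["liquidity_risk"] > 0.30)]
print(flagged)   # Bank B breaches both thresholds

# visualize the metrics for a quick side-by-side comparison
df.plot(x="bank", y=["credit_risk", "liquidity_risk"], kind="bar")
plt.title("Risk metrics by bank")
plt.tight_layout()
plt.show()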

Thank you for reading my blog; this post is based on one of my articles published on Experts-Exchange.

Please feel free to leave me some feedback or to suggest any future topics.

Looking forward to hearing from you – Swadhin Ray (Sloba) -( LinkedIn ) ( Twitter )

How to get all the Oracle SQL statements that took more than 10 minutes to execute

In this blog we will use a simple program to identify all the Oracle SQL statements that took more than 10 minutes to execute. The PL/SQL block below, which has helped me a lot, does exactly that:

DECLARE
  v_elapsed_time NUMBER;
BEGIN
  FOR c IN (SELECT ss.sql_id,
                   SUM(ss.elapsed_time_delta) / 1000000 AS elapsed_secs,
                   MAX(DBMS_LOB.SUBSTR(st.sql_text, 2000, 1)) AS sql_text
            FROM dba_hist_sqlstat ss
                 JOIN dba_hist_snapshot sn
                   ON  sn.snap_id = ss.snap_id
                   AND sn.dbid = ss.dbid
                   AND sn.instance_number = ss.instance_number
                 JOIN dba_hist_sqltext st
                   ON  st.sql_id = ss.sql_id
                   AND st.dbid = ss.dbid
            WHERE ss.parsing_schema_name NOT IN ('SYS', 'SYSTEM')
              AND sn.begin_interval_time > SYSDATE - 2
            GROUP BY ss.sql_id)
  LOOP
    -- Total elapsed time (in seconds) for this SQL ID over the period
    v_elapsed_time := c.elapsed_secs;

    IF v_elapsed_time > 600 AND v_elapsed_time < 86400 THEN
      DBMS_OUTPUT.PUT_LINE('SQL ID: ' || c.sql_id
                           || ', Elapsed Time: ' || ROUND(v_elapsed_time) || ' seconds');
      DBMS_OUTPUT.PUT_LINE('SQL Text: ' || c.sql_text);
    END IF;
  END LOOP;
END;
/

This PL/SQL block joins the dba_hist_sqlstat, dba_hist_snapshot, and dba_hist_sqltext AWR views to retrieve the SQL statements that were executed in the last two days together with their execution statistics, and sums the elapsed time of each SQL ID over that period. If the total elapsed time is more than 10 minutes (600 seconds) and less than 24 hours (86,400 seconds), it outputs the SQL ID, elapsed time, and SQL text using the DBMS_OUTPUT package.

Note that you may need to adjust the WHERE clause of the SQL query to exclude certain schemas or add additional filters based on your specific requirements. Additionally, you may need to enable the DBMS_OUTPUT package output by executing SET SERVEROUTPUT ON before running the PL/SQL block.

Thank you for reading my blog, please feel free to leave me some feedback or to suggest any future topics.

Looking forward to hearing from you – Swadhin Ray (Sloba) -( LinkedIn ) ( Twitter )

JSON In Oracle Database

In this article, we’ll take a closer look at JSON in Oracle SQL and how you can use it in your applications.

JSON (JavaScript Object Notation) is a popular data interchange format that has gained widespread adoption in recent years. It is a lightweight and flexible format that is easy to read and write, making it ideal for use in web applications, mobile apps, and other scenarios where data needs to be transmitted between different systems.

Oracle SQL has supported JSON since version 12.1, which means you can store JSON data in Oracle tables and query it using SQL.

JSON Data Types in Oracle SQL

Oracle SQL stores JSON data using two standard data types: VARCHAR2 and CLOB. These data types store JSON as strings, with VARCHAR2 used for shorter documents and CLOB for longer ones.

To create a table that stores JSON data, you can use the following syntax:

CREATE TABLE my_table (id NUMBER, json_data CLOB);



This creates a table called “my_table” with two columns: “id” and “json_data”. The “json_data” column is defined as a CLOB, which means it can store large JSON strings. You can optionally add an IS JSON check constraint on the column to ensure that only well-formed JSON is stored.


Working with JSON Data in Oracle SQL
Once you have a table that stores JSON data, you can use SQL to query and manipulate the data. Oracle SQL provides a number of functions for working with JSON data, including:

  • JSON_VALUE: This function extracts a scalar value from a JSON string.
  • JSON_QUERY: This function extracts a JSON object or array from a JSON string.
  • JSON_TABLE: This function converts a JSON array into a table.

Here’s an example of using the JSON_VALUE function to extract a value from a JSON string:

SELECT JSON_VALUE(json_data, '$.name') AS name FROM my_table;

This query extracts the value of the “name” field from the JSON data stored in the “json_data” column of the “my_table” table.

Working with Nested JSON Data

JSON data can be nested, with objects and arrays containing other objects and arrays. Oracle SQL provides functions for working with nested JSON data, including:

  • JSON_OBJECT: This function creates a JSON object from key-value pairs.
  • JSON_ARRAY: This function creates a JSON array from a list of values.
  • JSON_TABLE: This function can handle nested JSON data and convert it into a table.

Here’s an example of using the JSON_TABLE function to query a nested JSON object:

SELECT j.*
FROM my_table t, JSON_TABLE(t.json_data, '$.items[*]' 
    COLUMNS (
        id NUMBER PATH '$.id',
        name VARCHAR2(50) PATH '$.name',
        price NUMBER PATH '$.price'
    )
) j
WHERE t.id = 1;

This query extracts data from a nested JSON object that contains an array of items. The JSON_TABLE function converts the array into a table, which can then be queried using SQL.

Conclusion

JSON is a powerful and flexible data format that is widely used in modern web and mobile applications. With support for JSON data types and functions, Oracle SQL makes it easy to work with JSON data in your applications. Whether you’re querying simple JSON objects or working with complex nested data structures, Oracle SQL has the tools you need to get the job done.

Thank you for reading my blog; this post is based on one of my articles published on Experts-Exchange.

Please feel free to leave me some feedback or to suggest any future topics.

Looking forward to hearing from you – Swadhin Ray (Sloba) -( LinkedIn ) ( Twitter )

Ways To Load Large Tables In Oracle

When it comes to loading large tables in Oracle, there are several options available:

  1. SQL*Loader: SQL*Loader is a command-line tool that can be used to load data from external files into Oracle database tables. It is highly efficient and can handle large data volumes.
  2. External Tables: External Tables allow you to access data in external files as if it were in a regular database table. This feature is similar to SQL*Loader but can be used within SQL statements.
  3. Oracle Data Pump: Oracle Data Pump is a utility for importing and exporting data between Oracle databases. It can be used to load large tables by creating a data pump export file and then importing the data into the target table.
  4. Direct Path Load: The Direct Path Load option is a feature of SQL*Loader and External Tables. It bypasses the SQL engine and writes data directly to the data files. This method can be faster than conventional loading but requires more temporary space.
  5. Partitioning: Partitioning is a way to divide large tables into smaller, more manageable pieces. This can improve query performance and simplify data maintenance tasks such as backup and recovery.

The best option for loading large tables in Oracle depends on factors such as the size and complexity of the data, the available resources, and the specific requirements of the project. SQL*Loader and External Tables are generally the most efficient methods for loading large amounts of data, but other options may be more appropriate depending on the situation.

Direct path load and DBMS Parallel Execute are both high-performance methods for loading data into Oracle databases, but their speed will depend on various factors such as the size and complexity of the data, the number of processors and the amount of memory available on the system.

In general, direct path load is considered to be faster than DBMS Parallel Execute for loading large amounts of data, especially when the data is already in a flat file format. Direct path load bypasses the SQL engine and writes data directly to the data files, which can result in faster load times.

On the other hand, DBMS Parallel Execute can be faster than direct path load for complex operations that require complex transformation or data manipulation, such as merging data from multiple sources or performing calculations on large datasets. DBMS Parallel Execute allows for parallel processing, which can significantly reduce the time required for such operations.

It’s worth noting that both methods have their own advantages and limitations, and the best option will depend on the specific requirements of the project. It’s always a good idea to test both methods and measure their performance to determine which method is the most suitable for your particular use case.

When it comes to loading large amounts of data into an Oracle database, SQL*Loader is often the tool of choice. SQL*Loader is a powerful and flexible utility that can load data from a variety of sources, including flat files, external tables, and even other databases.

Let us take a closer look at how to use SQL*Loader to load huge tables in Oracle databases.

1. Preparing the Data

Before you can load data into an Oracle database using SQL*Loader, you need to prepare the data in a format that SQL*Loader can recognize. This usually involves creating a flat file that contains the data, with each row of data on a separate line and each column separated by a delimiter, such as a comma or a tab.

For example, here’s a sample file that contains sales data:

101,2019-01-01,Widget A,100,10.99
102,2019-01-02,Widget B,200,19.99
103,2019-01-03,Widget C,300,29.99


The file contains five columns of data: an ID column, a date column, a product name column, a quantity column, and a price column.

2. Creating a Control File

Once you have prepared the data file, you need to create a control file that tells SQL*Loader how to load the data into the database. The control file contains instructions on how to interpret the data file, such as the delimiter to use, the format of the data, and how to map the data file columns to the columns in the database.

Here’s an example of a control file that can be used to load the sales data from the sample file above:

LOAD DATA
INFILE 'sales_data.csv'
APPEND
INTO TABLE sales
FIELDS TERMINATED BY ','
(
  id,
  sale_date DATE "YYYY-MM-DD",
  product_name CHAR(20),
  quantity INTEGER,
  price FLOAT EXTERNAL
)


This control file specifies that the data is stored in a file called “sales_data.csv”, that it should be appended to an existing table called “sales”, and that the fields in the data file are separated by commas. It also maps each column in the data file to a column in the database, specifying the data type for each column.

3. Running SQL*Loader

Once you have prepared the data file and the control file, you can run SQL*Loader to load the data into the database. To do this, simply run the following command:

sqlldr username/password control=control_file.ctl log=log_file.log


This command tells SQL*Loader to use the specified control file to load data into the database, and to log any errors or warnings to a log file.

4. Conclusion

SQL*Loader is a powerful tool that can help you load large amounts of data into Oracle databases quickly and efficiently. By preparing your data in a format that SQL*Loader can recognize, creating a control file that tells SQL*Loader how to interpret the data, and running SQL*Loader to load the data into the database, you can easily and reliably load huge tables in Oracle.

When it comes to loading large amounts of data into an Oracle database, direct path load can be a powerful option. Direct path load bypasses much of the Oracle buffer cache and performs the load operation directly on the database files, making it faster and more efficient than other load methods.

Let us understand how direct path load in Oracle is used to load data into a table.

  • What is Direct Path Load?

Direct path load is a method of loading data into an Oracle database that bypasses the database buffer cache and writes data directly to the database files. This makes direct path load faster and more efficient than other load methods, as it reduces the amount of I/O required and minimizes the impact on the buffer cache.

Direct path load is ideal for loading large amounts of data into the database, such as when migrating data from a legacy system or loading data for data warehousing and business intelligence purposes.

  • How to Use Direct Path Load

To use direct path load in Oracle, you can use the SQL*Loader utility with the DIRECT parameter. Here’s an example of how to load data into a table using direct path load:

Assume that you have a CSV file containing data that needs to be loaded into a table called “sales_data”. The file contains five columns of data: an ID column, a date column, a product name column, a quantity column, and a price column.

Create a control file that tells SQL*Loader how to load the data into the database. Here’s an example of a control file that can be used to load the sales data from the sample file:

LOAD DATA
INFILE 'sales_data.csv'
APPEND
INTO TABLE sales_data
FIELDS TERMINATED BY ','
(
  id,
  sale_date DATE "YYYY-MM-DD",
  product_name CHAR(20),
  quantity INTEGER,
  price FLOAT EXTERNAL
)


This control file specifies that data is stored in a file called “sales_data.csv”, that it should be appended to an existing table called “sales_data”, and that the fields in the data file are separated by commas. It also maps each column in the data file to a column in the database, specifying the data type for each column.

Run SQL*Loader with the DIRECT parameter to perform a direct path load. Here’s an example of the command you would use:

sqlldr username/password control=control_file.ctl log=log_file.log direct=true


This command tells SQL*Loader to use the specified control file to load data into the database, and to log any errors or warnings to a log file. The “direct=true” parameter specifies that a direct path load should be used.

Direct path load can be a powerful option for loading large amounts of data into an Oracle database quickly and effectively. By bypassing the buffer cache and writing data directly to the database files, direct path load can reduce the amount of I/O required and minimize the impact on the buffer cache. With SQLLoader, performing a direct path load is easy and can be done with just a few simple steps.

DBMS Parallel Execute (the DBMS_PARALLEL_EXECUTE package) is a powerful feature of Oracle Database that lets you split a large DML workload into chunks and run those chunks in parallel. Note that it is designed for chunked DML; for plain queries, Oracle’s parallel query feature is the right tool.

Here are a couple of examples of how to use DBMS Parallel Execute in Oracle:

  • Update a large table in parallel, in rowid chunks:

DECLARE
  l_sql VARCHAR2(1000);
BEGIN
  -- Create the task
  DBMS_PARALLEL_EXECUTE.CREATE_TASK('my_task');

  -- Split the table into rowid ranges of roughly 10,000 rows each
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
      task_name   => 'my_task',
      table_owner => USER,
      table_name  => 'MY_TABLE',
      by_row      => TRUE,
      chunk_size  => 10000);

  -- The DML must bind :start_id and :end_id, which delimit each chunk
  l_sql := 'UPDATE my_table SET column1 = column1 * 2
            WHERE rowid BETWEEN :start_id AND :end_id';

  -- Run the chunks using 10 parallel jobs
  DBMS_PARALLEL_EXECUTE.RUN_TASK(
      task_name      => 'my_task',
      sql_stmt       => l_sql,
      language_flag  => DBMS_SQL.NATIVE,
      parallel_level => 10);

  DBMS_PARALLEL_EXECUTE.DROP_TASK('my_task');
END;
/

This code updates the “column1” field in the “my_table” table in parallel, using 10 jobs, each processing one chunk of rowids at a time.

  • Divide the work into chunks by a numeric column instead of by rowid:

BEGIN
  DBMS_PARALLEL_EXECUTE.CREATE_TASK('my_task2');

  -- Chunk on the numeric ID column, in ranges of 1,000 values
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_NUMBER_COL(
      task_name    => 'my_task2',
      table_owner  => USER,
      table_name   => 'MY_TABLE',
      table_column => 'ID',
      chunk_size   => 1000);

  DBMS_PARALLEL_EXECUTE.RUN_TASK(
      task_name      => 'my_task2',
      sql_stmt       => 'UPDATE my_table SET column1 = 0
                         WHERE id BETWEEN :start_id AND :end_id',
      language_flag  => DBMS_SQL.NATIVE,
      parallel_level => 10);

  DBMS_PARALLEL_EXECUTE.DROP_TASK('my_task2');
END;
/

This code divides the large update into smaller chunks based on the ID column, then executes each chunk in parallel across multiple jobs.

Overall, DBMS Parallel Execute is a powerful tool for improving the performance of large DML operations in Oracle Database. By leveraging parallel processing, you can apply updates faster and more efficiently, especially when dealing with large datasets.

Thank you for reading my blog; this post is based on one of my articles published on Experts-Exchange.

Please feel free to leave me some feedback or to suggest any future topics.

Looking forward to hearing from you – Swadhin Ray (Sloba) -( LinkedIn ) ( Twitter )

ORA-65140: invalid common profile name

How to solve the “ORA-65140” error when creating a profile on Oracle Database 19c Enterprise Edition Release 19.0.0.0.0.

Let us see how we can resolve this error.

CREATE PROFILE "APP_PROFILE" LIMIT
CPU_PER_SESSION DEFAULT
CPU_PER_CALL DEFAULT
CONNECT_TIME DEFAULT
IDLE_TIME 180
SESSIONS_PER_USER DEFAULT
LOGICAL_READS_PER_SESSION DEFAULT
LOGICAL_READS_PER_CALL DEFAULT
PRIVATE_SGA DEFAULT
COMPOSITE_LIMIT DEFAULT
PASSWORD_LIFE_TIME DEFAULT
PASSWORD_GRACE_TIME DEFAULT
PASSWORD_REUSE_MAX DEFAULT
PASSWORD_REUSE_TIME DEFAULT
PASSWORD_VERIFY_FUNCTION DEFAULT
FAILED_LOGIN_ATTEMPTS DEFAULT
PASSWORD_LOCK_TIME DEFAULT;

Error:

Error report -
ORA-65140: invalid common profile name
65140. 00000 -  "invalid common profile name"
*Cause:    An attempt was made to create a common profile with a name
           that is not valid for common profiles.  Common profile names
           must start with C## or c## and consist only of ASCII characters.
*Action:   Specify a valid common profile name.

Now alter the session and set the hidden parameter “_oracle_script” to true.

alter session set "_oracle_script"=true;
Session altered.

Now let us try to create the profile again.

CREATE PROFILE "APP_PROFILE" LIMIT
CPU_PER_SESSION DEFAULT
CPU_PER_CALL DEFAULT
CONNECT_TIME DEFAULT
IDLE_TIME 180
SESSIONS_PER_USER DEFAULT
LOGICAL_READS_PER_SESSION DEFAULT
LOGICAL_READS_PER_CALL DEFAULT
PRIVATE_SGA DEFAULT
COMPOSITE_LIMIT DEFAULT
PASSWORD_LIFE_TIME DEFAULT
PASSWORD_GRACE_TIME DEFAULT
PASSWORD_REUSE_MAX DEFAULT
PASSWORD_REUSE_TIME DEFAULT
PASSWORD_VERIFY_FUNCTION DEFAULT
FAILED_LOGIN_ATTEMPTS DEFAULT
PASSWORD_LOCK_TIME DEFAULT;
--------------------------------------------------
Profile "APP_PROFILE" created.

We can also avoid this error without altering the session by giving the profile a name that starts with C## or c##, as required for common profiles in a container database.

Thank you for reading my blog, please feel free to leave me some feedback or to suggest any future topics.

Swadhin Ray (Sloba) -( LinkedIn ) ( Twitter )

ORA-01950: no privileges on tablespace

Many times we might encounter the ORA-01950 error in our application log or in the database alert log.

It means that a user/schema tried to perform an operation that needed to allocate an extent in the tablespace tied to the user or its objects, and the error occurs because the user does not have the privilege (quota) to do so.

First we need to check whether the user has any limit defined on allocating space in the specific tablespace; to verify, we can query the “DBA_TS_QUOTAS” view. If a limit is set, we can increase it using the command below.

ALTER USER <USERNAME> QUOTA <SIZE> ON <TABLESPACE NAME>;

Example:

ALTER USER sloba QUOTA 500M ON sloba_data;

If we see that no quota is defined for the user, we can grant an unlimited quota.

Syntax to grant an unlimited quota:

GRANT UNLIMITED TABLESPACE TO <USERNAME>;
         or
ALTER USER <USERNAME> DEFAULT TABLESPACE <TABLESPACE NAME> QUOTA UNLIMITED ON <TABLESPACE NAME>;

Thank you for reading my blog, please feel free to leave me some feedback or to suggest any future topics.

Looking forward to hearing from you – Swadhin Ray (Sloba) -( LinkedIn ) ( Twitter )

ORA-04068: existing state of packages has been discarded

In this blog, we will see how to solve ORA-04068 with an example while using global variables.

Create a sample table:

CREATE TABLE MY_SAMPLE_TABLE(
COL1 NUMBER ,
COL2 VARCHAR2(50) )
/
Table MY_SAMPLE_TABLE created.

Now let us create a sample package that declares a global variable:


CREATE OR REPLACE PACKAGE pkg_test1 AS
 X NUMBER;
END;
/
Package PKG_TEST1 compiled


Create a simple trigger that inserts a static value into column 1 when it is NULL; in my case there was additional logic, but the issue was the same.

create or replace trigger MY_SAMPLE_TABLE_TRG
before insert on MY_SAMPLE_TABLE
for each row
begin
   pkg_test1.X:=1;
     IF :NEW.COL1  IS NULL THEN
    :new.COL1 := pkg_test1.X;
  END IF;
end;
/
Trigger MY_SAMPLE_TABLE_TRG compiled

Now we see that the package variable can be used globally in any other code, including code associated with a trigger.

For example, we can use the same global variable in any PL/SQL block, as shown below:

DECLARE
    l_n   NUMBER;
BEGIN
    pkg_test1.x   := 2;
    l_n           := pkg_test1.x;
    dbms_output.put_line(l_n);
END;

PL/SQL procedure successfully completed.
2

We see that it works fine, and the status of all the objects above is VALID.

Let us also try to insert a row to the table to verify if it works:

insert into MY_SAMPLE_TABLE values (null, 'Test 1');
/
1 row inserted.

Commit complete.

Now let us log in with another session and try to modify the package by adding a new variable.

Let us modify the package in the new session:

CREATE OR REPLACE PACKAGE pkg_test1 AS
    x NUMBER;
    y VARCHAR2(10); -- new variable
END;
/
Package PKG_TEST1 compiled

Now let us go back to our old session, try to run the PL/SQL block, and also try to insert a row and check the results:

select sys_context('userenv','sid') from dual ;
/
SYS_CONTEXT('USERENV','SID')                                                                                                                                                                                                                                    
----------------------------------------------------------------------
464
/
DECLARE
    l_n   NUMBER;
BEGIN
    pkg_test1.x   := 2;
    l_n           := pkg_test1.x;
    dbms_output.put_line(l_n);
END;
/

Error report -
ORA-04068: existing state of packages has been discarded
ORA-04061: existing state of package "SLOBA.PKG_TEST1" has been invalidated
ORA-04065: not executed, altered or dropped package "SLOBA.PKG_TEST1"
ORA-06508: PL/SQL: could not find program unit being called: "SLOBA.PKG_TEST1"
ORA-06512: at line 4
04068. 00000 -  "existing state of packages%s%s%s has been discarded"
*Cause:    One of errors 4060 - 4067 when attempt to execute a stored
           procedure.
*Action:   Try again after proper re-initialization of any application's
           state.

We see that referencing the package variable in the PL/SQL block produces the errors above; now let us try to insert a row.

insert into MY_SAMPLE_TABLE values (null, 'Test 2');
/
1 row inserted.

We see that the PL/SQL block is not working, but the insert works fine.

Now let us go back to our second session, add the SERIALLY_REUSABLE pragma to the package, and then try again.

CREATE OR REPLACE PACKAGE pkg_test1 AS
 PRAGMA SERIALLY_REUSABLE ;
     X NUMBER;
     y varchar2(10); -- new variable
END;
/
Package PKG_TEST1 compiled

Now let us go back to our first session and try to run the PLSQL block and the insert too.

DECLARE
    l_n   NUMBER;
BEGIN
    pkg_test1.x   := 4;
    l_n           := pkg_test1.x;
    dbms_output.put_line(l_n);
END;
/
PL/SQL procedure successfully completed.
 

Now let us try the insert statement on our first session.

insert into MY_SAMPLE_TABLE values (null, 'Test 4 from session 1');
/
Error report -
ORA-06534: Cannot access Serially Reusable package "SLOBA.PKG_TEST1"
ORA-06512: at "SLOBA.MY_SAMPLE_TABLE_TRG", line 2
ORA-04088: error during execution of trigger 'SLOBA.MY_SAMPLE_TABLE_TRG'

We see that we now encounter another issue: a serially reusable package cannot be accessed from a trigger.

To solve this, we can add “PRAGMA AUTONOMOUS_TRANSACTION” to the trigger and execute the insert and the PL/SQL block again.

CREATE OR REPLACE TRIGGER MY_SAMPLE_TABLE_trg BEFORE
    INSERT ON MY_SAMPLE_TABLE
    FOR EACH ROW
DECLARE
    PRAGMA autonomous_transaction;
BEGIN
    pkg_test1.x   := 1;
    IF
        :new.col1 IS NULL
    THEN
        :new.col1   := pkg_test1.x;
    END IF;

END;
/
Trigger MY_SAMPLE_TABLE_TRG compiled

Now let us try again:

insert into MY_SAMPLE_TABLE values (null, 'Test 4 from session 1');
 /
1 row inserted.

DECLARE
    l_n   NUMBER;
BEGIN
    pkg_test1.x   := 5;
    l_n           := pkg_test1.x;
    dbms_output.put_line(l_n);
END;
/
PL/SQL procedure successfully completed.

The only part we need to keep in mind is that compound triggers are not supported with this approach.

Thank you for reading my blog, please feel free to leave me some feedback or to suggest any future topics.

Looking forward to hearing from you – Swadhin Ray (Sloba) -( LinkedIn ) ( Twitter )