Feed aggregator

Oracle Powers Full BIM Model Coordination for Design and Construction Teams

Oracle Press Releases - 0 sec ago
Press Release
Oracle Powers Full BIM Model Coordination for Design and Construction Teams
Aconex cloud solution connects all project participants in model management process to eliminate complexity and speed project success

Future of Projects, Washington D.C.—Apr 24, 2019

Building information modeling (BIM) is an increasingly important component of construction project delivery, but is currently limited by a lack of collaboration, reliance on multiple applications, and missing integrations. The Oracle Aconex Model Coordination Cloud Service eliminates these challenges by enabling construction design and project professionals to collaboratively manage BIM models across the entire project team in a true common data environment (CDE). As such, organizations can reduce the risk of errors and accelerate project success by ensuring each team member has access to accurate, up-to-date models. 

The BIM methodology uses 3D, 4D and 5D modeling, in coordination with a number of tools and technologies, to provide digital representations of the physical and functional characteristics of places.

“Issues with model management mean projects go over budget, run over schedule, and end up with a higher total cost of ownership for the client. As part of the early access program for Oracle Aconex Model Coordination, it was great to experience how Oracle has solved these challenges,” said Davide Gatti, digital manager, Multiplex.

Single Source of Truth for Project Data

With Oracle Aconex Model Coordination, organizations can eliminate the need for various point solutions in favor of project-wide BIM participation that drives productivity with faster processes and cycle times, enables a single source of truth for project information, and delivers a fully connected data set at handover for asset operation.

The Model Coordination solution enhances Oracle Aconex’s existing CDE capabilities, which are built around Open BIM standards (e.g., IFC 4 and BCF 2.1) and leverage a cloud-based, full model server to enable efficient, secure, and comprehensive model management at all stages of the project lifecycle.

The Oracle Aconex CDE, which is based on ISO 19650 and DIN SPEC 91391 definitions, provides industry-leading neutrality, security, and data interoperability. By enabling model management in this environment, Oracle Aconex unlocks new levels of visibility, coordination, and productivity across people and processes, including enabling comprehensive model-based issue and clash management.    

Key features of the new solution include: 

  • Seamless clash and design issue management and resolution
  • Dashboard overview and reporting
  • Creation of viewpoints – e.g. personal “bookmarks” within models and the linking of documents to objects
  • Integrated measurements
  • Process support and a full audit trail across the supply chain

“With Oracle Aconex Model Coordination, we’re making the whole model management process as seamless and easy as possible. By integrating authoring and validation applications to the cloud, users don’t need to upload and download their issues and clashes anymore,” said Frank Weiss, director of new products, BIM and innovation at Oracle Construction and Engineering.

“There’s so much noise and confusion around BIM and CDEs, much of it driven by misinformation in the market about what each term means. We believe everybody on a BIM project should work with the best available tool for their discipline. Therefore, open formats are critical for interoperability, and the use of a true CDE is key to efficient and effective model management.”

For more information on the Model Coordination solution, please visit https://www.oracle.com/industries/construction-engineering/aconex-products.html.

Contact Info
Judi Palmer
Oracle
+1.650.784.7901
judi.palmer@oracle.com
Brent Curry
H+K Strategies
+1.312.255.3086
brent.curry@hkstrategies.com
About Oracle Construction and Engineering

Asset owners and project leaders rely on Oracle Construction and Engineering solutions for the visibility and control, connected supply chain, and data security needed to drive performance and mitigate risk across their processes, projects, and organization. Our scalable cloud solutions enable digital transformation for teams that plan, build, and operate critical assets, improving efficiency, collaboration, and change control across the project lifecycle. www.oracle.com/construction-and-engineering.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle, please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Judi Palmer

  • +1.650.784.7901

Brent Curry

  • +1.312.255.3086

Subtract time from a constant time

Tom Kyte - 7 hours 26 sec ago
Hello there, I want to create a trigger that will insert a Time difference value into a table Example: I have attendance table Sign_in date; Sign_out date; Late_in number; Early_out number; Now I want to create a trigger that will insert la...
Categories: DBA Blogs
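
The usual approach - sketched here with hypothetical names and a fixed 09:00-17:00 shift, since the full question is truncated above - relies on Oracle DATE arithmetic returning a difference in days, which multiplied by 24 * 60 gives minutes:

-- Hypothetical schema: attendance(sign_in date, sign_out date, late_in number, early_out number)
create or replace trigger trg_attendance_times
before insert on attendance
for each row
begin
  -- DATE subtraction yields days; * 24 * 60 converts that to minutes
  :new.late_in   := greatest(0, round((:new.sign_in - (trunc(:new.sign_in) + 9/24)) * 24 * 60));
  :new.early_out := greatest(0, round(((trunc(:new.sign_out) + 17/24) - :new.sign_out) * 24 * 60));
end;
/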

Clob column in RDBMS(oracle) table with key value pairs

Tom Kyte - 7 hours 26 sec ago
In our product in recent changes, oracle tables are added with clob column having key value pairs in xml/json format with new columns. Example of employee (please ignore usage of parenthesis): 100,Adam,{{"key": "dept", "value": "Marketi...
Categories: DBA Blogs
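
Since the question is truncated above, only a hedged sketch is possible: from 12c onwards, JSON key/value pairs stored in a CLOB can be queried relationally with JSON_TABLE (table and column names below are invented for illustration):

-- Hypothetical table: employee(id number, name varchar2(50), attrs clob),
-- where attrs holds an array like '[{"key":"dept","value":"Marketing"}, ...]'
select e.id, e.name, jt.key_name, jt.key_value
from   employee e
     , json_table(e.attrs, '$[*]'
         columns ( key_name  varchar2(30)  path '$.key'
                 , key_value varchar2(100) path '$.value' )) jt;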

current value of sequence

Tom Kyte - 7 hours 26 sec ago
Hi. Simple question :-) Is it possible to check current value of sequence? I thought it is stored in SEQ$ but that is not true (at least in 11g). So is it now possible at all? Regards
Categories: DBA Blogs
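
For reference, the usual answers (a sketch, not Tom's reply): CURRVAL works only in a session that has already called NEXTVAL, while the dictionary shows the highwater mark, which runs ahead of the true current value by up to the cache size:

-- Only valid after this session has called my_seq.nextval at least once
select my_seq.currval from dual;

-- last_number is the value persisted to disk, i.e. the current value
-- rounded up by the sequence cache (default 20)
select sequence_name, last_number, cache_size
from   user_sequences
where  sequence_name = 'MY_SEQ';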

Oracle Linux certified under Common Criteria and FIPS 140-2

Oracle Security Team - 8 hours 26 min ago

Oracle Linux 7 has just received both a Common Criteria (CC) certification, performed against the National Information Assurance Partnership (NIAP) General Purpose Operating System Protection Profile (OSPP) v4.1, and a FIPS 140-2 validation of its cryptographic modules. Oracle Linux is currently one of only two operating systems – and the only Linux distribution – on the NIAP Product Compliant List.

U.S. Federal procurement policy requires IT products sold to the Department of Defense (DoD) to be on this list; therefore, Federal cloud customers who select Oracle Cloud Infrastructure can now opt for a NIAP CC-certified operating system that also includes FIPS 140-2 validated cryptographic modules, by making Oracle Linux 7 the platform for their cloud services solution.

Common Criteria Certification for Oracle Linux 7

The National Information Assurance Partnership (NIAP) is “responsible for U.S. implementation of the Common Criteria, including management of the NIAP Common Criteria Evaluation and Validation Scheme (CCEVS) validation body.” (See About NIAP at https://www.niap-ccevs.org/)

The Operating System Protection Profile (OSPP) series is the only NIAP-approved Protection Profile series for operating systems. “A Protection Profile is an implementation-independent set of security requirements and test activities for a particular technology that enables achievable, repeatable, and testable (CC) evaluations.” They are intended to “accurately describe the security functionality of the systems being certified in terms of [CC] and to define functional and assurance requirements for such products.” In other words, the OSPP enables organizations to make an accurate comparison of operating system security functions. (For both quotations, see NIAP Frequently Asked Questions (FAQ) at https://www.niap-ccevs.org/Ref/FAQ.cfm)

In addition, products that certify against these Protection Profiles can also help you meet certain US government procurement rules. As set forth in the Committee on National Security Systems Policy (CNSSP) #11, National Policy Governing the Acquisition of Information Assurance (IA) and IA-Enabled Information Technology Products (published in June 2013), “All [commercial off-the-shelf] COTS IA and IA-enabled IT products acquired for use to protect information on NSS shall comply with the requirements of the NIAP program in accordance with NSA-approved processes.”

Oracle Linux is now the only Linux distribution on the NIAP Product Compliant List.  It is one of only two operating systems on the list.

You may recall that Linux distributions (including Oracle Linux) have previously completed Common Criteria evaluations, mostly against a German standard protection profile. Those evaluations are of limited use today because they are only officially recognized in Germany and within the European SOG-IS agreement. Furthermore, the revised Common Criteria Recognition Arrangement (CCRA) announcement posted on the CCRA News Page on September 8th 2014 states that “After September 8th 2017, mutually recognized certificates will either require protection profile-based evaluations or claim conformance to evaluation assurance levels 1 through 2 in accordance with the new CCRA.” That means evaluations conducted within the CCRA acceptance rules, such as the Oracle Linux 7.3 evaluation, are recognized in the 30 countries that have signed the CCRA. As a result, Oracle Linux 7.3 is the only Linux distribution that meets current US procurement rules.

The certification status of operating systems under the NIAP OSPP has significant implications for the use of cloud services by U.S. government agencies. The Federal Risk and Authorization Management Program (FedRAMP) website states that it is a “government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services.” For both FedRAMP Moderate and High, the SA-4 Guidance states “The use of Common Criteria (ISO/IEC 15408) evaluated products is strongly preferred.”

FIPS 140-2 Level 1 Validation for Oracle Linux 6 and 7

In addition to the Common Criteria Certification, Oracle Linux cryptographic modules are also now FIPS 140-2 validated. FIPS 140-2 is a prerequisite for NIAP Common Criteria evaluations. “All cryptography in the TOE for which NIST provides validation testing of FIPS-approved and NIST-recommended cryptographic algorithms and their individual components must be NIST validated (CAVP and/or CMVP). At a minimum an appropriate NIST CAVP certificate is required before a NIAP CC Certificate will be awarded.” (See NIAP Policy Letter #5, June 25, 2018 at https://www.niap-ccevs.org/Documents_and_Guidance/ccevs/policy-ltr-5-update3.pdf )

FIPS is also a mandatory standard for all cryptographic modules used by the US government. “This standard is applicable to all Federal agencies that use cryptographic-based security systems to protect sensitive information in computer and telecommunication systems (including voice systems) as defined in Section 5131 of the Information Technology Management Reform Act of 1996, Public Law 104-106.” (See Cryptographic Module Validation Program; What Is The Applicability Of CMVP To The US Government? at https://csrc.nist.gov/projects/cryptographic-module-validation-program ).

Finally, FIPS is required for any cryptography that is part of a FedRAMP certified cloud service. “For data flows crossing the authorization boundary or anywhere else encryption is required, FIPS 140 compliant/validated cryptography must be employed. FIPS 140 compliant/validated products will have certificate numbers. These certificate numbers will be required to be identified in the SSP as a demonstration of this capability. JAB TRs will not authorize a cloud service that does not have this capability.” (See FedRAMP Tips & Cues Compilation, January 2018, at https://www.fedramp.gov/assets/resources/documents/FedRAMP_Tips_and_Cues.pdf)

Oracle includes FIPS 140-2 Level 1 validated cryptography in Oracle Linux 6 and Oracle Linux 7 on x86-64 systems with the Unbreakable Enterprise Kernel and the Red Hat Compatible Kernel. The platforms used for FIPS 140 validation testing include Oracle Server X6-2 and Oracle Server X7-2, running Oracle Linux 6.9 and 7.3. Oracle “vendor affirms” that the FIPS validation is maintained on other x86-64 equivalent hardware that has been qualified in its Oracle Linux Hardware Certification List (HCL), on the corresponding Oracle Linux releases.

Oracle Linux cryptographic modules enable FIPS 140-compliant operations for key use cases such as data protection and integrity, remote administration (SSH, HTTPS TLS, SNMP, and IPSEC), cryptographic key generation, and key/certificate management.

Federal cloud customers who select Oracle Cloud Infrastructure can now opt for a NIAP CC-certified operating system (that also includes FIPS 140-2 validated cryptographic modules) by making Oracle Linux 7 the bedrock of their cloud services solution.

Oracle Linux is engineered for open cloud infrastructure. It delivers leading performance, scalability, reliability, and security for enterprise SaaS and PaaS workloads as well as traditional enterprise applications. Oracle Linux Support offers access to award-winning Oracle support resources and Linux support specialists, zero-downtime updates using Ksplice, additional management tools such as Oracle Enterprise Manager, and lifetime support, all at a low cost.

For a matrix of Oracle security evaluations currently in progress as well as those completed, please refer to the Oracle Security Evaluations.

Visit Oracle Linux Security to learn how Oracle Linux can help keep your systems secure and improve the speed and stability of your operations.

 

University of California San Diego to Streamline Finance with Oracle ERP Cloud

Oracle Press Releases - 10 hours 6 min ago
Press Release
University of California San Diego to Streamline Finance with Oracle ERP Cloud
Cutting-edge research institution boosts efficiency, insights and decision making with modern cloud platform

Redwood Shores, Calif.—Apr 24, 2019

Photo Credit: UC San Diego Publications

The University of California San Diego, one of the country’s top research institutions, is replacing its legacy financial management system with Oracle Enterprise Resource Planning (ERP) Cloud. Oracle ERP Cloud will enable UC San Diego to increase overall productivity and bolster decision making with real-time business insights and easily incorporate emerging technologies going forward.

Established in 1960, UC San Diego is one of the world’s most reputable and innovative research universities with more than 36,000 students and nearly $5 billion in annual revenue. To increase efficiencies and improve decision-making, UC San Diego needed to replace its existing financial management system—which had become complex and expensive to update—with a secure, scalable and configurable business platform that could reduce redundant business processes and enable different campus systems to share financial data securely. After a nine-month competitive bid process, which included participation from more than 100 stakeholders and subject matter experts, the university selected Oracle ERP Cloud.

“Making sense of the data in our heavily-customized legacy ERP system was creating headaches for our finance team and required significant levels of technical support. It wasn’t sustainable long term and was holding us back,” said William McCarroll, Senior Director, Business & Financial Services General Accounting at UC San Diego. “We anticipate that Oracle ERP Cloud will give us better access to innovative new technology, without painful software upgrades, and improve our finance team’s overall efficiency. Ultimately, we are implementing these system changes to keep UC San Diego moving forward as we nurture the next generation of changemakers.”

Oracle ERP Cloud is designed to allow organizations like UC San Diego to increase productivity, lower costs and improve controls. One benefit is that it enables the university to move from overnight (and weekend) batch data processing to real-time business insights that significantly speed up month-end and year-end closing for the finance team. In addition, Oracle ERP Cloud will provide UC San Diego with tools to embrace finance best practices and more easily access and deploy emerging technologies to support changing organizational demands.

“We are seeing many higher education institutions leveraging our modern business applications to create efficiencies while reducing cost and dramatically impacting productivity,” said Rondy Ng, senior vice president of applications development at Oracle. “With Oracle ERP Cloud, UC San Diego will be able to empower its finance team to play a strategic role in the university’s success. We look forward to partnering with UC San Diego as it embraces new innovations.”

Contact Info
Bill Rundle
Oracle
+1.650.506.1891
bill.rundle@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

About UC San Diego

At the University of California San Diego, we constantly push boundaries and challenge expectations. Established in 1960, UC San Diego has been shaped by exceptional scholars who aren’t afraid to take risks and redefine conventional wisdom. Today, as one of the top 15 research universities in the world, we are driving innovation and change to advance society, propel economic growth and make our world a better place. Learn more at www.ucsd.edu.

Additional Information

For additional information on Oracle ERP Cloud applications, visit Oracle Enterprise Resource Planning (ERP) Cloud’s Facebook and Twitter or the Modern Finance Leader blog.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Bill Rundle

  • +1.650.506.1891

Bloom Filter Efficiency And Cardinality Estimates

Randolf Geist - Tue, 2019-04-23 18:45
I recently came across an interesting observation I've not seen documented yet, so I'm publishing a simple example here to demonstrate the issue.

In principle it looks like the efficiency of Bloom Filter operations is dependent on the cardinality estimates. This means that cardinality under-estimates by the optimizer in particular can make a dramatic difference in how efficient a corresponding Bloom Filter operation based on such an estimate will be at runtime. Since Bloom Filters are crucial for efficient processing, in particular when using Exadata or the In-Memory column store, this can have a significant impact on the performance of affected operations.
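
As background - my addition, not part of the original post - the textbook false-positive approximation for a Bloom filter makes the dependency on the estimated row count explicit. For a bit vector of $m$ bits, $k$ hash functions and $n$ values inserted on the build side:

$$p \approx \left(1 - e^{-kn/m}\right)^{k}$$

If $m$ and $k$ are sized from an optimizer estimate of $n$ that is far too low, the actual $n$ at runtime saturates the bit vector, $p$ approaches 1, and the filter passes almost every probed row - which is exactly the behaviour demonstrated below.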

While other operations based on SQL workareas, like hash joins for example, can also be affected by such cardinality mis-estimates, these seem to be capable of adapting at runtime - at least to a certain degree. However, I haven't seen such adaptive behaviour from Bloom Filter operations at runtime (not even when executing the same statement multiple times with statistics feedback not kicking in).

To demonstrate the issue I'll create two simple tables that get joined and one of them gets a filter applied:

create table t1 parallel 4 nologging compress
as
with
generator1 as
(
  select /*+
             cardinality(1e3)
             materialize
         */
         rownum as id
       , rpad('x', 100) as filler
  from
         dual
  connect by
         level <= 1e3
),
generator2 as
(
  select /*+
             cardinality(1e4)
             materialize
         */
         rownum as id
       , rpad('x', 100) as filler
  from
         dual
  connect by
         level <= 1e4
)
select
       id
     , id as id2
     , rpad('x', 100) as filler
from (
       select /*+ leading(b a) */
              (a.id - 1) * 1e4 + b.id as id
       from
              generator1 a
            , generator2 b
     )
;

alter table t1 noparallel;

create table t2 parallel 4 nologging compress as select * from t1;

alter table t2 noparallel;

All I did here is create two tables with 10 million rows each, and I'll look at the runtime statistics of the following query:

select /*+ no_merge(x) */ * from (
  select /*+
             leading(t1)
             use_hash(t2)
             px_join_filter(t2)
             opt_estimate(table t1 rows=1)
             --opt_estimate(table t1 rows=250000)
             monitor
         */
         t1.id
       , t2.id2
  from
         t1
       , t2
  where
         mod(t1.id2, 40) = 0
         -- t1.id2 between 1 and 250000
  and    t1.id = t2.id
) x
where rownum > 1;

Note: If you try to reproduce this, make sure you actually get a Bloom Filter operation - in an unpatched version 12.1.0.2 I had to add a PARALLEL(2) hint to get one.

The query filters on T1 so that 250K rows will be returned, and then joins to T2. The first interesting observation regarding the efficiency of the Bloom Filter is that the actual data pattern makes a significant difference: when using the commented filter "T1.ID2 BETWEEN 1 AND 250000" the resulting cardinality will be the same as with "MOD(T1.ID2, 40) = 0", but the former results in perfect filtering by the Bloom Filter regardless of the OPT_ESTIMATE hint used, whereas with the latter the efficiency varies dramatically.
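
As a quick sanity check (my addition, assuming the tables built above), both predicates do indeed return exactly 250K rows:

select count(*) from t1 where mod(id2, 40) = 0;          -- 250000
select count(*) from t1 where id2 between 1 and 250000;  -- 250000

The difference is the distribution of the join keys: the BETWEEN predicate selects a dense, contiguous range of values, while the MOD predicate selects every 40th value scattered across the whole 10M range.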

This is what I get when using version 18.3 (12.1.0.2 showed very similar results) and forcing the under-estimate using the OPT_ESTIMATE ROWS=1 hint - the output is from my XPLAN_ASH script, edited for brevity:

---------------------------------------------------------------------------------------
| Id  | Operation                | Name    | Rows | Bytes | Execs | A-Rows | PGA    |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |         |      |       |     1 |      0 |        |
|   1 |  COUNT                   |         |      |       |     1 |      0 |        |
|*  2 |   FILTER                 |         |      |       |     1 |      0 |        |
|   3 |    VIEW                  |         |    1 |    26 |     1 |   250K |        |
|*  4 |     HASH JOIN            |         |    1 |    24 |     1 |   250K | 12556K |
|   5 |      JOIN FILTER CREATE  | :BF0000 |    1 |    12 |     1 |   250K |        |
|*  6 |       TABLE ACCESS FULL  | T1      |    1 |    12 |     1 |   250K |        |
|   7 |      JOIN FILTER USE     | :BF0000 |  10M |  114M |     1 | 10000K |        |
|*  8 |       TABLE ACCESS FULL  | T2      |  10M |  114M |     1 | 10000K |        |
---------------------------------------------------------------------------------------

The Bloom Filter didn't help much: only a few rows were actually filtered (otherwise my XPLAN_ASH script would have shown "10M" as the actual cardinality instead of "10000K", which is something slightly less than 10M rounded up).

Repeat the same but this time using the OPT_ESTIMATE ROWS=250000 hint:

-------------------------------------------------------------------------------------------------
| Id  | Operation                | Name    | Rows | Bytes | TempSpc | Execs | A-Rows | PGA    |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |         |      |       |         |     1 |      0 |        |
|   1 |  COUNT                   |         |      |       |         |     1 |      0 |        |
|*  2 |   FILTER                 |         |      |       |         |     1 |      0 |        |
|   3 |    VIEW                  |         | 252K | 6402K |         |     1 |   250K |        |
|*  4 |     HASH JOIN            |         | 252K | 5909K |   5864K |     1 |   250K | 12877K |
|   5 |      JOIN FILTER CREATE  | :BF0000 | 250K | 2929K |         |     1 |   250K |        |
|*  6 |       TABLE ACCESS FULL  | T1      | 250K | 2929K |         |     1 |   250K |        |
|   7 |      JOIN FILTER USE     | :BF0000 |  10M |  114M |         |     1 |   815K |        |
|*  8 |       TABLE ACCESS FULL  | T2      |  10M |  114M |         |     1 |   815K |        |
-------------------------------------------------------------------------------------------------

So we end up with exactly the same execution plan, but the efficiency of the Bloom Filter at runtime has changed dramatically due to the different cardinality estimate it is based on.

I haven't spent much time yet with the corresponding undocumented parameters that might influence the Bloom Filter behaviour, but I repeated the test using the following setting in the session (while ensuring an adequate PGA_AGGREGATE_TARGET setting, because otherwise the hash join might start spilling to disk; the Bloom Filter size is considered when calculating SQL workarea sizes):

alter session set "_bloom_filter_size" = 1000000;

I got the following result:

---------------------------------------------------------------------------------------
| Id  | Operation                | Name    | Rows | Bytes | Execs | A-Rows | PGA    |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |         |      |       |     1 |      0 |        |
|   1 |  COUNT                   |         |      |       |     1 |      0 |        |
|*  2 |   FILTER                 |         |      |       |     1 |      0 |        |
|   3 |    VIEW                  |         |    1 |    26 |     1 |   250K |        |
|*  4 |     HASH JOIN            |         |    1 |    24 |     1 |   250K | 12568K |
|   5 |      JOIN FILTER CREATE  | :BF0000 |    1 |    12 |     1 |   250K |        |
|*  6 |       TABLE ACCESS FULL  | T1      |    1 |    12 |     1 |   250K |        |
|   7 |      JOIN FILTER USE     | :BF0000 |  10M |  114M |     1 |   815K |        |
|*  8 |       TABLE ACCESS FULL  | T2      |  10M |  114M |     1 |   815K |        |
---------------------------------------------------------------------------------------

which shows a slightly increased PGA usage compared to the first output, but the same efficiency as with the better cardinality estimate in place.

However, increasing the size further didn't convince Oracle to make the Bloom Filter even more efficient, not even when the better cardinality estimate was in place.
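
A side note, again my addition rather than the original post's: while the statement is executing you can also watch the join filter's effectiveness directly via V$SQL_JOIN_FILTER, which exposes the bit vector length and the probed/filtered row counts:

-- Run from a second session while the test query is active
select qc_session_id
     , length                -- size of the Bloom filter bit vector
     , probed                -- rows tested against the filter
     , filtered              -- rows removed by the filter
     , round(100 * filtered / nullif(probed, 0), 1) as pct_filtered
from   v$sql_join_filter
where  active = 1;

This gives the same picture as the A-Rows comparison above without needing SQL Monitoring or XPLAN_ASH.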

Summary

Obviously the efficiency / internal sizing of the Bloom Filter vector at runtime depends on the optimizer's cardinality estimates. Depending on the actual data pattern this can make a significant difference in efficiency. It is yet another reason why good cardinality estimates matter - and why they are sometimes so hard to achieve, in particular for join cardinalities.

Footnote

On MyOracleSupport I've found the following note regarding Bloom Filter efficiency:

Bug 8932139 - Bloom filtering efficiency is inversely proportional to DOP (Doc ID 8932139.8)

Another interesting detail: the bug is fixed in version 19.1, but the fix is also included in the January 2019 (and later) RU(R)s of 18c and 12.2.

Chinar Aliyev's Blog

Randolf Geist - Tue, 2019-04-23 17:04
Chinar Aliyev has recently started to pick up on several of my blog posts regarding Parallel Execution and the corresponding new features introduced in Oracle 12c.

It is good to see that Oracle has since improved some of these features and added new ones as well.

Here are some links to the corresponding posts:

New automatic Parallel Outer Join Null Handling in 18c

Improvements regarding automatic parallel distribution skew handling in 18c

Chinar has also put some more thoughts on the HASH JOIN BUFFERED operation:

New thoughts about the HASH JOIN BUFFERED operation

There are also a number of posts on his blog regarding histograms, in particular how to properly calculate the join cardinality in the presence of additional filters and the resulting skew - a very interesting topic that has yet to be handled properly by the optimizer, even in the latest versions.

Parse Calls

Jonathan Lewis - Tue, 2019-04-23 12:31

When dealing with the library cache / shared pool it’s always worth checking from time to time to see if a new version of Oracle has changed any of the statistics you rely on as indicators of potential problems. Today is also (coincidentally) a day when comments about “parses” and “parse calls” entered my field of vision from two different directions. I’ve tweeted out references to a couple of quirky little posts I did some years ago about counting parse calls and what a parse call may entail, but I thought I’d finish the day off with a little demo of what the session cursor cache does for you when your client code issues parse calls.

There are two bits of information I want to highlight – activity in the library cache and a number that shows up in the session statistics. Here’s the code to get things going:

rem
rem     Script:         12c_session_cursor_cache.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Apr 2019
rem
rem     Note:
rem     start_1.sql contains the one line
rem          select * from t1 where n1 = 0;
rem

create table t1 
as
select 99 n1 from dual
;

execute dbms_stats.gather_table_stats(user,'t1')

spool 12c_session_cursor_cache

prompt  =======================
prompt  No session cursor cache
prompt  =======================

alter session set session_cached_cursors = 0;

set serveroutput off
set feedback off

execute snap_libcache.start_snap
execute snap_my_stats.start_snap

execute snap_libcache.start_snap
execute snap_my_stats.start_snap

@start_1000

set feedback on
set serveroutput on

execute snap_my_stats.end_snap
execute snap_libcache.end_snap


prompt  ============================
prompt  Session cursor cache enabled
prompt  ============================


alter session set session_cached_cursors = 50;

set serveroutput off
set feedback off

execute snap_libcache.start_snap
execute snap_my_stats.start_snap

execute snap_libcache.start_snap
execute snap_my_stats.start_snap

@start_1000

set feedback on
set serveroutput on

execute snap_my_stats.end_snap
execute snap_libcache.end_snap

spool off

I’ve made use of a couple of little utilities I wrote years ago to take snapshots of my session statistics and the library cache (v$librarycache) stats. I’ve also used my “repetition” framework to execute a basic query 1,000 times. The statement is a simple “select from t1 where n1 = 0”, chosen to return no rows.

The purpose of the whole script is to show you the effect of running exactly the same SQL statement many times – first with the session cursor cache disabled (session_cached_cursors = 0) then with the cache enabled at its default size.
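
If you don't have my snapshot packages, a rough stand-in (a sketch, not the scripts used in this post) is to query the relevant session statistics before and after the 1,000 executions and diff the values by hand:

select sn.name, ms.value
from   v$statname sn, v$mystat ms
where  sn.statistic# = ms.statistic#
and    sn.name in (
         'parse count (total)', 'parse count (hard)',
         'session cursor cache hits', 'session cursor cache count',
         'execute count'
       );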

Here are some results from an instance of 12.2.0.1 – which I’ve edited down by eliminating most of the single-digit numbers.

=======================
No session cursor cache
=======================
---------------------------------
Session stats - 23-Apr 17:41:06
Interval:-  4 seconds
---------------------------------
Name                                                                         Value
----                                                                         -----
Requests to/from client                                                      1,002
opened cursors cumulative                                                    1,034
user calls                                                                   2,005
session logical reads                                                        9,115
non-idle wait count                                                          1,014
session uga memory                                                          65,488
db block gets                                                                2,007
db block gets from cache                                                     2,007
db block gets from cache (fastpath)                                          2,006
consistent gets                                                              7,108
consistent gets from cache                                                   7,108
consistent gets pin                                                          7,061
consistent gets pin (fastpath)                                               7,061
logical read bytes from cache                                           74,670,080
calls to kcmgcs                                                              5,005
calls to get snapshot scn: kcmgss                                            1,039
no work - consistent read gets                                               1,060
table scans (short tables)                                                   1,000
table scan rows gotten                                                       1,000
table scan disk non-IMC rows gotten                                          1,000
table scan blocks gotten                                                     1,000
buffer is pinned count                                                       2,000
buffer is not pinned count                                                   2,091
parse count (total)                                                          1,035
parse count (hard)                                                               8
execute count                                                                1,033
bytes sent via SQL*Net to client                                           338,878
bytes received via SQL*Net from client                                     380,923
SQL*Net roundtrips to/from client                                            1,003

PL/SQL procedure successfully completed.

---------------------------------
Library Cache - 23-Apr 17:41:06
Interval:-      4 seconds
---------------------------------
Type      Cache                           Gets        Hits Ratio        Pins        Hits Ratio   Invalid    Reload
----      -----                           ----        ---- -----        ----        ---- -----   -------    ------
NAMESPACE SQL AREA                       1,040       1,032   1.0       1,089       1,073   1.0         0         1
NAMESPACE TABLE/PROCEDURE                   17          16    .9         101          97   1.0         0         0
NAMESPACE BODY                               9           9   1.0          26          26   1.0         0         0
NAMESPACE SCHEDULER GLOBAL ATTRIBU          40          40   1.0          40          40   1.0         0         0

PL/SQL procedure successfully completed.

The thing to notice, of course, is the large number of statistics that are (close to) multiples of 1,000 – i.e. the number of executions of the SQL statement. In particular you can see the ~1,000 “parse count (total)” which is not reflected in the “parse count (hard)” because the statement only needed to be loaded into the library cache and optimized once.

The other notable statistics come from the library cache where we do 1,000 gets and pins on the “SQL AREA” – the “get” creates a “KGL Lock” (the “breakable parse lock”) that is made visible as an entry in v$open_cursor (x$kgllk), and the “pin” creates a “KGL Pin” that makes it impossible for anything to flush the child cursor from memory while we’re executing it.

So what changes when we enable the session cursor cache?


============================
Session cursor cache enabled
============================

Session altered.

---------------------------------
Session stats - 23-Apr 17:41:09
Interval:-  3 seconds
---------------------------------
Name                                                                         Value
----                                                                         -----
Requests to/from client                                                      1,002
opened cursors cumulative                                                    1,004
user calls                                                                   2,005
session logical reads                                                        9,003
non-idle wait count                                                          1,013
db block gets                                                                2,000
db block gets from cache                                                     2,000
db block gets from cache (fastpath)                                          2,000
consistent gets                                                              7,003
consistent gets from cache                                                   7,003
consistent gets pin                                                          7,000
consistent gets pin (fastpath)                                               7,000
logical read bytes from cache                                           73,752,576
calls to kcmgcs                                                              5,002
calls to get snapshot scn: kcmgss                                            1,002
no work - consistent read gets                                               1,000
table scans (short tables)                                                   1,000
table scan rows gotten                                                       1,000
table scan disk non-IMC rows gotten                                          1,000
table scan blocks gotten                                                     1,000
session cursor cache hits                                                    1,000
session cursor cache count                                                       3
buffer is pinned count                                                       2,000
buffer is not pinned count                                                   2,002
parse count (total)                                                          1,002
execute count                                                                1,003
bytes sent via SQL*Net to client                                           338,878
bytes received via SQL*Net from client                                     380,923
SQL*Net roundtrips to/from client                                            1,003

PL/SQL procedure successfully completed.

---------------------------------
Library Cache - 23-Apr 17:41:09
Interval:-      3 seconds
---------------------------------
Type      Cache                           Gets        Hits Ratio        Pins        Hits Ratio   Invalid    Reload
----      -----                           ----        ---- -----        ----        ---- -----   -------    ------
NAMESPACE SQL AREA                           5           5   1.0       1,014       1,014   1.0         0         0
NAMESPACE TABLE/PROCEDURE                    7           7   1.0          31          31   1.0         0         0
NAMESPACE BODY                               6           6   1.0          19          19   1.0         0         0

PL/SQL procedure successfully completed.

The first thing to note is that “parse count (total)” still shows up as 1,000 parse calls. However we also see the statistic “session cursor cache hits” at 1,000. Allowing for a little noise around the edges, virtually every parse call has turned into a short-cut that takes us through the session cursor cache directly to the correct cursor.

This difference shows up in the library cache activity where we still see 1,000 pins – we have to pin the cursor to execute it – but we no longer see 1,000 “gets”. In the absence of the session cursor cache the session has to keep searching for the statement, then creating and holding a KGL Lock while we execute the statement. When the cache is enabled the session will very rapidly recognise that the statement is one we are likely to re-use, so it will continue to hold the KGL lock after we have finished executing the statement, and we can record the location of the KGL lock in a session state object. After the first couple of executions of the statement we no longer have to search for the statement and attach a spare lock to it; we can simply navigate from our session state object to the cursor.

As before, the KGL Lock will show up in v$open_cursor – though this time it will not disappear between executions of the statement. Over the history of Oracle versions the contents of v$open_cursor have become increasingly helpful, so I’ll just show you what the view held for my session by the end of the test:


SQL> select cursor_type, sql_text from V$open_cursor where sid = 250 order by cursor_type, sql_text;

CURSOR_TYPE                                                      SQL_TEXT
---------------------------------------------------------------- ------------------------------------------------------------
DICTIONARY LOOKUP CURSOR CACHED                                  BEGIN DBMS_OUTPUT.DISABLE; END;
DICTIONARY LOOKUP CURSOR CACHED                                  BEGIN snap_libcache.end_snap; END;
DICTIONARY LOOKUP CURSOR CACHED                                  BEGIN snap_my_stats.end_snap; END;
DICTIONARY LOOKUP CURSOR CACHED                                  SELECT DECODE('A','A','1','2') FROM SYS.DUAL
OPEN                                                             begin         dbms_application_info.set_module(
OPEN                                                             table_1_ff_2eb_0_0_0
OPEN-RECURSIVE                                                    SELECT VALUE$ FROM SYS.PROPS$ WHERE NAME = 'OGG_TRIGGER_OPT
OPEN-RECURSIVE                                                   select STAGING_LOG_OBJ# from sys.syncref$_table_info where t
OPEN-RECURSIVE                                                   update user$ set spare6=DECODE(to_char(:2, 'YYYY-MM-DD'), '0
PL/SQL CURSOR CACHED                                             SELECT INDX, KGLSTTYP LTYPE, KGLSTDSC NAME, KGLSTGET GETS, K
PL/SQL CURSOR CACHED                                             SELECT STATISTIC#, NAME, VALUE FROM V$MY_STATS WHERE VALUE !
SESSION CURSOR CACHED                                            BEGIN DBMS_OUTPUT.ENABLE(1000000); END;
SESSION CURSOR CACHED                                            BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;
SESSION CURSOR CACHED                                            BEGIN snap_libcache.start_snap; END;
SESSION CURSOR CACHED                                            BEGIN snap_my_stats.start_snap; END;
SESSION CURSOR CACHED                                            select * from t1 where n1 = 0
SESSION CURSOR CACHED                                            select /*+ no_parallel */ spare4 from sys.optstat_hist_contr

17 rows selected.

The only one of specific interest is the penultimate one in the output – its type is “SESSION CURSOR CACHED” and we can recognise our “select from t1” statement.

Deploying A Micronaut Microservice To The Cloud

OTN TechBlog - Tue, 2019-04-23 10:17

So you've finally done it. You've created a shiny new microservice, written tests that pass, run it locally, and everything works great. Now it's time to deploy, and you're ready to jump to the cloud. That may seem intimidating, but honestly there's no need to worry. Deploying your Micronaut application to the Oracle Cloud is really quite easy and there are several options to choose from. In this post I'll show you a few of those options, and by the time you're done reading you'll be ready to get your app up and running.

If you haven't yet created an application, feel free to check out my last post and use that code to create a simple app that uses GORM to interact with an Oracle ATP instance. Once you've created your Micronaut application you'll need to create a runnable JAR file; for this post I'll assume you followed that post, and any assets I refer to will reflect that assumption. With Micronaut, creating a runnable JAR is as easy as running ./gradlew assemble or ./mvnw package (depending on which build automation tool your project uses). Creating the artifact will take a bit longer than you're probably used to if you haven't used Micronaut before. That's because Micronaut precompiles all necessary metadata for Dependency Injection so that it can minimize runtime reflection to obtain that metadata. Once the task completes you will have a runnable JAR file in the build/libs directory of your project. You can launch your application locally by running java -jar /path/to/your.jar. So to launch the JAR created from the previous blog post, I set some environment variables and run:

Which results in the application running locally:

So far, pretty easy. But we want to do more than launch a JAR file locally. We want to run it in the cloud, so let's see what that takes. The first method I want to look at is more of a "traditional" approach: launching a simple compute instance and deploying the JAR file.

Creating A Virtual Network

If this is your first time creating a compute instance you'll need to set up virtual networking.  If you have a network ready to go, skip down to "Creating An Instance" below. 

Your instance needs to be associated with a virtual network in the Oracle Cloud. Virtual cloud networks (hereafter referred to as VCNs) can be pretty complicated, but as a developer you need to know enough about them to make sure that your app is secure and accessible from the internet. To get started creating a VCN, either click "Create a virtual cloud network" from the dashboard:

Or select "Networking" -> "Virtual Cloud Networks" from the sidebar menu and then click "Create Virtual Cloud Network" on the VCN overview page:

In the "Create Virtual Cloud Network" dialog, populate a name and choose the option "Create Virtual Cloud Network Plus Related Resources" and click "Create Virtual Cloud Network" at the bottom of the dialog:

The "related resources" here refers to the necessary Internet Gateways, Route Table, Subnets and related Security Lists for the network. The security list by default will allow SSH, but not much else, so we'll edit that once the VCN is created.  When everything is complete, you'll receive confirmation:

Close the dialog and back on the VCN overview page, click on the name of the new VCN to view details:

On the details page for the VCN, choose a subnet and click on the Security List to view it:

On the Security List details page, click on "Edit All Rules":

And add a new rule that will expose port 8080 (the port that our Micronaut application will run on) to the internet:

Make sure to save the rules and close out. This VCN is now ready to be associated with an instance running our Micronaut application.

Creating An Instance

To get started with an Oracle Cloud compute instance log in to the cloud dashboard and either select "Create a VM instance":

Or choose "Compute" -> "Instances" from the sidebar and click "Create Instance" on the Instance overview page:

In the "Create Instance" dialog you'll need to populate a few values and make some selections. It seems like a long form, but there aren't many changes necessary from the default values for our simple use case. The first part of the form requires us to name the instance, select an Availability Domain, OS and instance type:

 

The next section asks for the instance shape and boot volume configuration, both of which I leave as the default. At this point I select a public key that I can use later on to SSH in to the machine:

Finally, select the a VCN that is internet accessible with port 8080 open:

Click "Create" and you'll be taken to the instance details page where you'll notice the instance in a "Provisioning" state.  Once the instance has been provisioned, take note of the public IP address:

Deploying Your Application To The New Instance

Using the instance public IP address, SSH in via the private key associated with the public key used to create the instance:

We're almost ready to deploy our application; we just need a few things. First, we need a JDK. I like to use SDKMAN for that, so I first install SDKMAN, then use it to install the JDK with sdk install java 8.0.212-zulu and confirm the installation:

We'll also need to open port 8080 on the instance firewall so that our instance will allow the traffic:

We can now upload our application to the instance with SCP:

I've copied the JAR file, my Oracle ATP wallet and 2 simple scripts to help me out. The first script sets some environment variables:

The second script is what we'll use to launch the application:

Next, move the wallet directory from the user home directory to the root with sudo mv wallet/ /wallet and source the environment variables with . ./env.sh. Now run the application with ./run.sh:

And hit the public IP in your browser to confirm the app is running and returning data as expected!

You've just deployed your Micronaut application to the Oracle Cloud! Of course, a manual VM install is just one method for deployment and isn't very maintainable long term for many applications, so in future posts we'll look at some other options for deploying that fit in the modern application development cycle.


Latest Blog Posts from Oracle ACEs: April 14-20, 2019

OTN TechBlog - Tue, 2019-04-23 10:06

In writing the blog posts listed below, the endgame for the Oracle ACE program members is simple: sharing their experience and expertise with the community. That doesn't make them superheroes, but you have to marvel at their willingness to devote time and energy to helping others.

Here's what they used their powers to produce for the week of April 14-20, 2019.

 

Oracle ACE Director Francisco Munoz Alvarez
CEO, CloudDB
Sydney, Australia

 

Oracle ACE Director Ludovico Caldara
Computing Engineer, CERN
Nyon, Switzerland

 

Oracle ACE Director Martin D'Souza
Director of Innovation, Insum Solutions
Alberta, Canada

 

Oracle ACE Director Opal Alapat
Vision Team Practice Lead, interRel Consulting
Arlington, Texas

 

Oracle ACE Director Syed Jaffar Hussain
CTO, eProseed
Riyadh, Saudi Arabia

 

Oracle ACE Alfredo Krieg
Senior Principal Consultant, Viscosity North America
Dallas, Texas

 

Oracle ACE Marco Mischke
Team Lead, Database Projects, Robotron Datenbank-Software GmbH
Dresden, Germany

 

Oracle ACE Noriyoshi Shinoda
Database Consultant, Hewlett Packard Enterprise Japan
Tokyo, Japan

 

 

Oracle ACE Patrick Jolliffe
Manager, Li & Fung Limited
Hong Kong

 

Oracle ACE Phil Wilkins
Senior Consultant, Capgemini
Reading, United Kingdom

 

Oracle ACE Zaheer Syed
Oracle Application Specialist, Tabadul
Riyadh, Saudi Arabia

 

Oracle ACE Associate Batmunkh Moltov
Chief Technology Officer, Global Data Engineering Co.
Ulaanbaatar, Mongolia

 

Oracle ACE Associate Flora Barriele
Oracle Database Administrator, Etat de Vaud
Lausanne, Switzerland

 

 


62 Percent of Restaurants Feel Unprepared for a Mobile Future

Oracle Press Releases - Tue, 2019-04-23 07:00
Press Release
62 Percent of Restaurants Feel Unprepared for a Mobile Future
Restaurateurs engaging customers with mobile offerings today, but are not confident in keeping pace with the mobile innovations of tomorrow

Redwood Shores, Calif.—Apr 23, 2019

A recent survey of food and beverage leaders highlights that while a large percentage feel confident in their restaurant’s current use of mobile technology, only 48 percent feel prepared to capitalize on future innovations. Sixty-two percent of respondents expressed doubts over their ability to keep up with the speed of mobile technology changes. And more than half (59 percent) agreed that their company faces the threat of disruption from their more mobile-enabled competitors.

“The rise of mobile ordering and on-demand food delivery services are completely changing the restaurant and guest experience,” said Simon de Montfort Walker, senior vice president and general manager for Oracle Food and Beverage. “In order to remain relevant to a rapidly evolving audience, restaurants must act quickly to modernize their mobile strategy and offerings. Today, the experience a customer has ordering online or from a kiosk can be just as essential as if they were ordering in the store.”

The study findings point to a clear and urgent need for restaurants to embrace the right mobile and back-end technology to drive higher ticket value, turn tables faster and enable more cross and upsell. In addition, the findings highlight the need to embrace mobile technology to avoid being outpaced by the competition, help cut labor costs and improve the guest experience—all critical components to revenue growth.

Improving Loyalty and the Dining Experience

Today’s foodies want choices. In addition to great food, what drives their loyalty is easy ordering and delivery, fast, seamless payments, and a personalized experience.

  • 86 percent of operators say branded mobile apps increase their speed of service and therefore revenue
  • 93 percent believe their guest-facing apps enhance the guest experience, promote loyalty and drive repeat business

Cutting Costs, Saving Time Equals Increased Revenues

Restaurants are investing in mobile technology to cut costs and save time in areas such as hiring fewer serving staff but more runners, keeping a close eye on stock levels to avoid over-ordering and waste, and quickly changing the menu to offer specials when there is an over-stock of inventory.

  • 84 percent of food and beverage executives believe the adoption of guest-facing apps drives down labor costs
  • 96 percent agree, with 40 percent strongly agreeing, that expanded mobile inventory management will drive time and money savings

Perceived Future Benefits of Mobile Technology

Restaurants are already using mobile devices for table reservations, taking orders, and processing payments, but what value do restaurateurs believe will come from future mobile innovations?

  • 82 percent believe partnerships with third-party delivery services like Uber Eats and GrubHub will help grow their business
  • 89 percent believe check averages will increase thanks to in-app recommendations
  • 95 percent believe the guest experience and customer loyalty will continue to improve

The Road Ahead

While most organizations rated themselves as highly able to meet new consumer demands, an undercurrent of anxiety about the future was also apparent, with only 48 percent of respondents reporting that they have the tools they need to meet the mobile demands of tomorrow. The mobility study findings show a clear path for restaurateurs: applying mobile innovation to broader areas such as inventory efficiency, getting new customers in the door, serving them more efficiently, and keeping them coming back.

Methodology

For this survey, conducted during the summer of 2018, Oracle queried 279 leaders in the food and beverage industry who use mobile technology in their organizations. Forty-five percent of those surveyed were from full-service restaurants, 24 percent from fast casual and 23 percent from quick service. Seventy-one percent of respondents are director level or higher, with 45 percent hailing from companies that generate more than $500M in annual revenue.

Contact Info
Valerie Beaudett
Oracle
+1 650 400 7833
valerie.beaudett@oracle.com
About Oracle Food and Beverage

Oracle Food and Beverage, formerly MICROS, brings 40 years of experience in providing software and hardware solutions to restaurants, bars, pubs, clubs, coffee shops, cafes, stadiums, and theme parks. Thousands of operators, both large and small, around the world are using Oracle technology to deliver exceptional guest experiences, maximize sales, and reduce running costs.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Valerie Beaudett

  • +1 650 400 7833

Mobile Is Key to Boosting Guest Experiences Say Hoteliers

Oracle Press Releases - Tue, 2019-04-23 07:00
Press Release
Mobile Is Key to Boosting Guest Experiences Say Hoteliers
But many are not prepared to deliver forward-thinking mobile innovations, new survey shows

Redwood Shores, Calif.—Apr 23, 2019

A whopping 91 percent of hotel executives surveyed said mobile technologies are critical to improving guest experience and cultivating loyalty. But only 69 percent were confident in their organization’s ability to adopt and deliver those mobile experiences.

“It’s clear that hotels need to provide mobile innovations to meet the requirements of today’s savvy consumers, yet some haven’t started their mobile journey. Customers want to be able to engage with brands wherever they are—booking a room from their child’s soccer game or ordering drinks while sitting poolside at the hotel. The properties that can’t deliver these kinds of mobile experiences will quickly lose to those that can make the engagement simple and seamless for their customers,” said Greg Webb, senior vice president and general manager of Oracle Hospitality.

The 2019 Hospitality Benchmark - Mobile Maturity Analysis study, which was conducted by Oracle, focused on three key areas of mobility:

  • The ability to offer Wi-Fi to guests throughout the property;
  • Guest-facing apps to enhance the customer experience; and
  • Staff-facing mobile to improve the hotel team’s daily operational workflow

Despite high self-ratings for mobile utilization prowess, 50 percent of respondents expressed fear that their organization would be disrupted by more mobile-friendly competitors. So it was not surprising that 90 percent of the hotel executives surveyed agreed that mobile was critical to maintaining a competitive advantage. Ninety percent also added that guest experience could be improved by the ability to use smartphones to manage basic services such as booking a room and managing the check-in and check-out processes. And 91 percent said their guest-facing mobile app is the preferred way they’d like guests to request service from hotel staff. 

In addition to enhancing guest experience, 66 percent of respondents said reducing operational costs was another major driver for embracing mobility.

Even with the high ratings for hotel mobile adoption, there is room for improvement in elevating the guest experience and providing personalized services via mobile—starting with awareness. Twenty-three percent of respondents agreed that they struggle to promote their guest-facing mobile app technology. The survey underscores the importance of offering guests incentives—such as free perks, drinks or discounted room service—to download and use hotel apps. In the absence of such mobile initiatives, it is essential for hoteliers to provide guests with other communication channels, such as texting, to quickly respond to their needs.

The majority of hotel executives believe that mobile technologies are critical to guest experiences, and Oracle believes there are three areas they can focus on to improve those experiences: empowering guests to take advantage of self-service tools, allowing guests to communicate with the hotel through their preferred channel, and continuing to invest in mobile technologies to reduce friction.

Methodology

For this survey, 199 executive leaders in the hospitality industry were queried regarding the current use of mobile technology within their organizations. Seventy-seven percent of respondents were director level or higher, with 53 percent from companies whose annual revenue is greater than $500M.

Contact Info
Valerie Beaudett
Oracle
+1 650 400 7833
valerie.beaudett@oracle.com
About Oracle Hospitality

Oracle Hospitality brings over 40 years of experience in providing technology solutions to independent hoteliers, global and regional chains, gaming, and cruise lines. We provide hardware, software, and services that allow our customers to act on rich data insights that deliver personalized guest experiences, maximize profitability and encourage long-term loyalty. Our solutions include platforms for property management, point-of-sale, distribution, reporting and analytics all delivered from the cloud to lower IT cost and maximize business agility. Oracle Hospitality’s OPERA is recognized globally as the leading property management platform and continues to serve as a foundation for industry innovation. 

For more information about Oracle Hospitality, please visit www.oracle.com/Hospitality

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Valerie Beaudett

  • +1 650 400 7833

Bitmap Index On Column With 212552698 Distinct Values, What Gives? (I’d Rather Be High)

Richard Foote - Mon, 2019-04-22 21:45
In my previous post on Indexing The Autonomous Warehouse, I highlighted how it might be necessary to create indexes to improve the performance and scalability of highly selective queries, as it might be on any Data Warehouse running on an Exadata platform. In the post, I created a Bitmap Index and showed how it improved SQL performance […]
Categories: DBA Blogs

Utilities Testing Accelerator 6.0.0.1.0 Now Available

Anthony Shorten - Mon, 2019-04-22 14:19

Oracle Utilities is pleased to announce the general availability of Oracle Utilities Testing Accelerator Version 6.0.0.1.0 via the Oracle Software Delivery Cloud, with exciting new features that improve test asset building and execution capabilities. This release also lays the foundation for future releases.

Last year, the first release of the Oracle Utilities Testing Accelerator replaced the Oracle Functional Testing Advanced Pack for Oracle Utilities to optimize the functional testing of Oracle Utilities products. The new version extends the existing feature set and adds new capabilities for testing Oracle Utilities products.

The key changes and new capabilities in this release include the following:

  • Accessible. This release is now accessible, making the product available to a wider user audience.
  • Extensions to Test Accelerator Repository. The Oracle Utilities Testing Accelerator ships with a database repository, the Test Accelerator Repository, to store test assets. This repository has been extended to accommodate new objects introduced in this release, including a newly redesigned Test Results API that provides comprehensive test execution information.
  • New! Server Execution Engine. In past releases, the only way to execute tests was using the provided Oracle Utilities Testing Accelerator Eclipse Plugin. Whilst that plugin is still available and will continue to be provided, an embedded scalable server execution engine has been implemented directly in the Oracle Utilities Testing Accelerator Workbench. This allows testers to build and execute test assets without leaving the browser. This engine will be the premier method of executing tests in this release and in future releases of the Oracle Utilities Testing Accelerator.
  • New! Test Data Management. One of the identified bottlenecks in automation is the provision and re-usability of test data for testing activities. The Oracle Utilities Testing Accelerator has added an additional capability to extend the original test data capabilities by allowing test users to extract data from non-production sources for reuse in test data. The principle is based upon the notion that it is quicker to update data than create it. The tester can specify a secure connection to a non-production source to pull the data from and allow manipulation at the data level for testing complex scenarios. This test data can be stored at the component level to create reusable test data banks or at the flow level to save a particular set of data for reuse. With this capability testers can quickly get sets of data to be reused within and across flows. The capability includes the ability to save and name test data within the extended Test Accelerator repository.
  • New! Flow Groups are now supported. The Oracle Utilities Testing Accelerator supports the concept of Flow Groups: groups of flows that can be executed as a set, in parallel or serially, to reduce test execution time. This capability is used by the Server Execution Engine to execute groups of flows efficiently and is also the foundation of future functionality.
  • New! Groovy Support for Validation. In this release, it is possible to use Groovy to express rules for validation in addition to the component validation language already supported. This capability allows partners and testers to add complex rule logic at the component and flow level. As with the Groovy support within the Oracle Utilities Application Framework, the language is whitelisted and does not support external Groovy frameworks.
  • Annotation Support. In the component API, it is possible to annotate each step in the process to make it more visible. This information, if populated, is now displayed on the flow tree for greater visibility. For backward compatibility, assets built with earlier releases will show blank annotations on the tree until they are populated.
  • New! Test Dashboard Zones. An additional set of test dashboard zones have been added to cover the majority of the queries needed for test execution and results.
  • New! Security Enhancements. For the Oracle Utilities SaaS Cloud releases of the product, the Oracle Utilities Testing Accelerator has been integrated with Oracle Identity Cloud Service to manage identity in the product as part of the related Oracle Utilities SaaS Cloud Services.

Note: This upgrade is backward compatible with test assets built with the previous Oracle Utilities Testing Accelerator releases so no rework is anticipated on existing assets as part of the upgrade process.

For more details of this release and the capabilities of the Oracle Utilities Testing Accelerator product, refer to Oracle Utilities Testing Accelerator (Doc Id: 2014163.1), available from My Oracle Support.

Automating DevSecOps for Java Apps with Oracle Developer Cloud

OTN TechBlog - Mon, 2019-04-22 11:32

Looking to improve your application's security? Automating vulnerability reporting helps you prevent attacks that leverage known security problems in code that you use. In this blog we'll show you how to achieve this with Oracle's Developer Cloud.

Most developers rely on third party libraries when developing applications. This helps them reduce the overall development timelines by providing working code for specific needs. But are you sure that the libraries you are using are secure? Are you keeping up to date with the latest reports about security vulnerabilities that were found in those libraries? What about apps that you developed a while back and are still running but might be using older versions of libraries that don't contain the latest security fixes?

DevSecOps aims to integrate security aspects into the DevOps cycle, ideally automating security checks as part of the dev-to-release lifecycle. The latest release of Oracle Developer Cloud Service - Oracle's cloud-based DevOps and Agile team platform - includes a new capability to integrate security checks into your DevOps pipelines.

Relying on the public National Vulnerability Database, the new dependency vulnerability analyzer scans the libraries used in your application against the database of known issues and flags any security risks your app might have based on this data. The current version of DevCS supports this for any Maven-based Java project, leveraging the pom files as the source of truth for the list of libraries used in your code.

Vulnerability Analyzer Step

When running the check, you can specify your level of tolerance for issues - for example, defining that you are OK with low-risk issues, but not with medium- to high-risk vulnerabilities. When a check finds issues, you can fail the build pipeline, send notifications, and add an issue to the issue tracking system provided for free with Developer Cloud, as sketched below.
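For readers who want to experiment with the same idea outside DevCS, a conceptually similar scan can be run locally with the open-source OWASP Dependency-Check Maven plugin, which also checks dependencies against the National Vulnerability Database. This is only an illustrative analogue, not the DevCS analyzer itself, and the CVSS threshold of 7 is an arbitrary example.

# Illustrative local analogue (not the DevCS analyzer): scan a Maven
# project's dependencies against the NVD and fail the build when any
# dependency carries a CVSS score of 7 (high) or above.
mvn org.owasp:dependency-check-maven:check -DfailBuildOnCVSS=7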

Check out this demo video to see the process in action.

Having this type of vulnerability scan applied to your platform can save you from situations where hackers leverage publicly known issues and out-of-date libraries to break into your systems. These checks can be part of your regular build cycle, and they can also be scheduled to run on a regular basis against systems that have already been deployed - to verify that they stay up to date with the latest security checks.

I gave up my cell phone & laptop for the weekend: This is what I learned

Look Smarter Than You Are - Mon, 2019-04-22 10:10
It was time for a technology detox. When I left work on Good Friday, I left my laptop at the office. I got home at 3PM and put my mobile phone on a charger that I wouldn't see until Monday at 9AM. And my life free of external, involuntary, technological distraction began... along with the stress of being out of touch for the next 3 days. Here's what I learned.

Biggest Lessons
  1. It's really stressful at first, but you get over it.
  2. All those people you told "if it's an emergency, contact my significant other" will not have any emergencies suitable for contacting your significant other.
  3. It will leave you wanting more.
I learned far more about myself and we'll get to that in a second.

Why in the name of God?
Thanks to the cruel "Screen Time" tracking feature of my Apple iPhone, I found that on an average day, I pick up my phone more than 30 times before 11AM, and it gets worse from there. In general, I am using my phone 6+ hours per day, and many days are a lot worse. I pay more attention to my phone than to the people around me: it's always within arm's reach and I use it for everything. As a CEO, my outward reason for my phone addiction is that I have to be connected: emails and text messages must be dealt with immediately, and without my calendar, I might miss a Very Important Meeting. In reality, I am completely addicted to my cell phone, and the whole "I have to stay connected" thing is largely rationalization.

But about a week ago, I looked around at the people in my life and realized that we're all addicted: for some of us, it's about communication. Others live in their games. Some people are on Instagram looking at puppies and kittens. Whatever your thing, you're getting it through either your phone or your laptop.

So why take a break? Mostly to find out 1) if I could make it for 42 hours; and 2) what I could learn from the experience. I settled on Easter weekend (April 19-22).

Things I thought I couldn't live without
Texting. According to the aforementioned Evil Screen Time, I knew that I spent 1.5 hours a day on text messaging. To be clear, I'm not a tween: my company uses text messaging more than any other communication vehicle, it's how I stay in contact with friends (who has time for phone calls?), and it's about the only way my kids will talk to me.

Email. While texting is great for short communications and quick back-and-forths, I get around 200 non-spam emails on the average day and about 50 on the average weekend. When you have something longer to say or it's not urgent, email is the way to go.

Navigation. I have long since forgotten how to drive without the little blue dot directing me. There are about four places I felt I could find on my own (work, home, airport, grocery store), but I was sure that I would be lost without Google Maps or Waze.

Games. I am level 40 on Pokemon Go (humble brag) and I have played it every day since July 2016. It's literally the only game on my phone, but I have to keep my daily streak going lest... I don't know, actually, but the stress of missing out on my 7-day rewards was seriously getting to me.

Turns out, I didn't miss Pokemon Go, I'm actually a decent driver without a phone (it's like falling off a bike: you never forget how), and if you're off email, you never know what you're missing. I did miss texting, but not in the way I thought I would. So what did I actually miss?

Things I actually missed
Bitmoji. I genuinely missed sending cute pictures around to my friends of me as the Easter Bunny and receiving their pictures dressed up inside Easter eggs. I kept wanting to sneak peeks at my wife's phone to see if she was getting anything cute, though I did manage to resist.

Information. I had forgotten the days when questions didn't have answers. What's the address of Academy Sports? I didn't know, so I just had to drive in the general area where I thought it was. What time does Salata open? No idea, so I drove there and got to wander outside for a bit until they opened for the day (fun fact: stores still post actual opening/closing hours on their front doors!). What time is the movie Little playing at the AMC Grapevine Mills 30? Who won the Texas Rangers game (when in doubt, assume it's the team they're playing against)? Who is the actor that plays that one character in that movie, oh, come on, you know who I'm talking about, that guy, let me just look it up for you, oh, damn, I can't until Monday, FML?

Calendar. I worried all weekend about my schedule for the upcoming week: when was my first appointment on Monday, what did I have scheduled for after work, was there anything I should be preparing for, when was I leaving town next, where was I supposed to be for Memorial Day weekend? It went on-and-on, and it turns out that none of it matters.

Photos. I didn't realize how many photos I take of the world around me, until I couldn't take any photos at all. I had to use a long-forgotten mental trick called "memory." It made me pay a lot more attention to the world around me, and I genuinely remember more of how I experienced the weekend than if I had been trying to catalog everything through pictures. I'm sure photos would have made this blog more appealing, but I'm doing all this from memory, so all we have are words.

Connection. I wanted to know what my friends and family were doing and to let them know I was thinking of them. Without technology, this is almost impossible nowadays. I had to resort to seeing them in-person: I met a couple of them at a restaurant and we got together with another friend for cycling, a movie, and Game of Thrones. But it turns out that those friends - the ones I spent time with in-person - I felt more deeply connected to than before the weekend started. Texting is about surface-level connecting, but facetime (note that this is different than FaceTime) is about bonding.

What changed over the weekend?
For one, I spent a lot more time outside. I played frisbee, went on a fourteen-mile bike ride, worked out at the gym, walked around some, went to the mall, saw a movie, and in general, I actually experienced more of the world than I normally do. I also didn't trip over a curb once, because unlike normal, I was looking up the whole time.

I read more instead of looking at my phone each night to fall asleep. I made it 100 pages into a book that I've been meaning to read for a year now. And in the morning I didn't reach for my phone on my bedside table either. I tend to forget how immersed you can get in a book when you don't have notifications popping up constantly telling you what you should be doing instead of reading in peace.

I spent a lot of time with my wife this weekend to the point that she was probably sick of me by Sunday night, but we spent real time with each other without any technological distractions. I finally gave her an Edward Break last night by heading off to take a long bath while reading more of my book (Stealing Snow, if you're curious). She fell asleep and I stayed up reading until midnight.

Any lasting effects?
I thought I would be longing for my phone and my laptop (particularly texts and emails) at exactly 9AM this morning. I waited until 9AM and opened up my laptop to see what appointment I had at 9AM. It turns out no one needs me - or loves me? - until 10:30AM, so I opened up a browser window to write my first blog entry in many, many months. My cell phone is still face down, and as of 10AM, I still have no idea who texted or emailed me all weekend. I'm blissfully writing away, and I have to admit, I'm not looking forward to going back to my constantly-connected world.

Will giving up your technology addiction for a weekend give you some sort of mystical clarity, a purity of soul that lets you know how the Dalai Lama must feel when he's between text messages? No, but it will help you find out just how addicted you are, and how strong your willpower is. It'll help you understand what you're missing when you're disconnected, and if you're like me, you'll find that in some ways, you actually like it.

Now will I ever do this again? I'll let you know after I log into my email, read all my texts, and see just how bad the world got over the weekend. Until then, I'm blissfully unaware.
Categories: BI & Warehousing

Final Conclusion for 18c Cluster upgrade state is [NORMAL]

Michael Dinh - Sun, 2019-04-21 22:46

Finally, I have reached a point that I can live with for the Grid 18c upgrade, because the process runs to completion without any error or intervention.

Note that the ACFS volume is created in the CRS disk group, which may not be ideal for production.

The Rapid Home Provisioning Server is configured but not running.

The outcome is different depending on whether the upgrade is performed via GUI or silent mode, as demonstrated in 18c Upgrade Getting to Results – The cluster upgrade state is [NORMAL].

Rene Antunez also demonstrates another method in UPGRADE ORACLE GI FROM 12.1 TO 18.5 FAILS AND LEAVES CRS WITH STATUS OF UPGRADE FINAL.

While we both encountered the same error, “Upgrading RHP Repository failed”, we accomplished the same results via different courses of action.

The unexplained and unanswered question is, “Why is the RHP Repository being upgraded at all?”

Ultimately, it is cluvfy that changes the cluster upgrade state, as shown in gridSetupActions2019-04-21_02-10-47AM.log:

INFO: [Apr 21, 2019 2:45:37 AM] Starting 'Upgrading RHP Repository'
INFO: [Apr 21, 2019 2:45:37 AM] Starting 'Upgrading RHP Repository'
INFO: [Apr 21, 2019 2:45:37 AM] Executing RHPUPGRADE

INFO: [Apr 21, 2019 2:46:31 AM] Completed 'Upgrading RHP Repository'
INFO: [Apr 21, 2019 2:46:31 AM] Completed 'Upgrading RHP Repository'

INFO: [Apr 21, 2019 2:46:34 AM] Starting 'Oracle Cluster Verification Utility'
INFO: [Apr 21, 2019 2:46:34 AM] Starting 'Oracle Cluster Verification Utility'
INFO: [Apr 21, 2019 2:46:34 AM] Executing CLUVFY
INFO: [Apr 21, 2019 2:46:34 AM] Command /u01/18.3.0.0/grid/bin/cluvfy stage -post crsinst -collect cluster -gi_upgrade -n all

INFO: [Apr 21, 2019 2:51:37 AM] Completed Plugin named: cvu
INFO: [Apr 21, 2019 2:51:38 AM] ConfigClient.saveSession method called
INFO: [Apr 21, 2019 2:51:38 AM] Completed 'Oracle Cluster Verification Utility'
INFO: [Apr 21, 2019 2:51:38 AM] Completed 'Oracle Cluster Verification Utility'

INFO: [Apr 21, 2019 2:51:38 AM] Successfully executed the flow in SILENT mode
INFO: [Apr 21, 2019 2:51:39 AM] inventory location is/u01/app/oraInventory
INFO: [Apr 21, 2019 2:51:39 AM] Exit Status is 0
INFO: [Apr 21, 2019 2:51:39 AM] Shutdown Oracle Grid Infrastructure 18c Installer

I would suggest running the last step via the GUI, if feasible, rather than in silent mode, to see what is happening:

/u01/18.3.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp
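Alternatively, if only the verification step was skipped, my reading of the log above is that re-running cluvfy by itself should flip the upgrade state, since that is the tool that changes it. A hedged sketch, using the exact command the installer recorded:

# Post-upgrade verification exactly as the installer invoked it
# (from gridSetupActions2019-04-21_02-10-47AM.log):
/u01/18.3.0.0/grid/bin/cluvfy stage -post crsinst -collect cluster -gi_upgrade -n all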

So how did I get myself into this predicament? I followed blindly. I trusted but did not verify.

18.1.0.0 Grid Infrastructure and Database Upgrade steps for Exadata Database Machine running 11.2.0.4 and later on Oracle Linux (Doc ID 2369422.1)

Step 2.1 - Understand how MGMTDB is handled during upgrade

****************************************************************************************************
Upgrading GI 18.1 does not require upgrading MGMTDB nor does it require installing a MGMTDB if it currently does not exist. 
It's the user's discretion to maintain and upgrade the MGMTDB for their application needs.
****************************************************************************************************

Note: MGMTDB is required when using Rapid Home Provisioning. 
The Cluster Health Monitor functionality will not work without MGMTDB configured.
If you decide to install a MGMTDB later, it is configured to use 1G of SGA and 500 MB of PGA. 
MGMTDB SGA will not be allocated in hugepages (this is because its init.ora setting 'use_large_pages' is set to false).
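Before deciding whether to carry MGMTDB through an upgrade, it is worth checking its current state first. A minimal check, using the same srvctl utility shown later in this post:

# Check whether MGMTDB is configured and running on this cluster.
srvctl status mgmtdb
srvctl config mgmtdb -all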

The following parameters from Doc ID 2369422.1 were the root cause of all the issues in my test cases.

Because MGMTDB is not required, it made sense to set the following parameters, but doing so resulted in chaos.

-J-Doracle.install.mgmtDB=false -J-Doracle.install.mgmtDB.CDB=false -J-Doracle.install.crs.enableRemoteGIMR=false
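For context, here is a hedged sketch of how those flags would be attached to a silent upgrade invocation, following the same command pattern used later in this post. The paths and response file mirror my environment; treat the exact combination as illustrative only.

# Hedged illustration: passing the Doc ID 2369422.1 flags to gridSetup.sh
# to skip MGMTDB handling during the upgrade - the combination that
# caused the chaos described above.
/u01/18.3.0.0/grid/gridSetup.sh -silent -skipPrereqs \
-applyRU /media/patch/Jan2019/28828717 \
-J-Doracle.install.mgmtDB=false \
-J-Doracle.install.mgmtDB.CDB=false \
-J-Doracle.install.crs.enableRemoteGIMR=false \
-responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp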

How To Setup a Rapid Home Provisioning (RHP) Server and Client (Doc ID 2097026.1)

Starting with Oracle Grid Infrastructure 18.1.0.0.0, when you install Oracle Grid Infrastructure, the Rapid Home Provisioning Server is configured, by default, in the local mode to support the local switch home capability. 

Here is what worked from end to end without any failure or intervention.
The response file was ***not*** modified for each of the test cases.

/u01/18.3.0.0/grid/gridSetup.sh -silent -skipPrereqs \
-applyRU /media/patch/Jan2019/28828717 \
-responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp

Here is what the environment looks like after the 18c GI upgrade.

Notice ACFS is configured for RHP.

[oracle@racnode-dc1-1 ~]$ /media/patch/crs_Query.sh
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM1"
+ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]
+ crsctl query crs softwareversion
Oracle Clusterware version on node [racnode-dc1-1] is [18.0.0.0.0]
+ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode-dc1-1 is [2532936542].
+ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2532936542] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28256701 28435192 28547619 28822489 28864593 28864607 ] have been applied on the local node. The release patch string is [18.5.0.0.0].
+ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2532936542].
+ exit

[oracle@racnode-dc1-1 ~]$ /media/patch/lspatches.sh
+ . /media/patch/gi.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM1"
+ /u01/18.3.0.0/grid/OPatch/opatch lspatches
28864607;ACFS RELEASE UPDATE 18.5.0.0.0 (28864607)
28864593;OCW RELEASE UPDATE 18.5.0.0.0 (28864593)
28822489;Database Release Update : 18.5.0.0.190115 (28822489)
28547619;TOMCAT RELEASE UPDATE 18.0.0.0.0 (28547619)
28435192;DBWLM RELEASE UPDATE 18.0.0.0.0 (28435192)
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk1"
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch lspatches
28731800;Database Bundle Patch : 12.1.0.2.190115 (28731800)
28729213;OCW PATCH SET UPDATE 12.1.0.2.190115 (28729213)

OPatch succeeded.
+ exit

[oracle@racnode-dc1-1 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM1"

[oracle@racnode-dc1-1 ~]$ crsctl check cluster -all
**************************************************************
racnode-dc1-1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode-dc1-2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

[oracle@racnode-dc1-1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.CRS.GHCHKPT.advm
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
ora.CRS.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.DATA.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.FRA.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.chad
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.crs.ghchkpt.acfs
               OFFLINE OFFLINE      racnode-dc1-1            volume /opt/oracle/r
                                                             hp_images/chkbase is
                                                             unmounted,STABLE
               OFFLINE OFFLINE      racnode-dc1-2            volume /opt/oracle/r
                                                             hp_images/chkbase is
                                                             unmounted,STABLE
ora.helper
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
ora.net1.network
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.ons
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.proxy_advm
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       racnode-dc1-1            169.254.7.214 172.16
                                                             .9.10,STABLE
ora.asm
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
      2        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.cvu
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.hawk.db
      1        ONLINE  ONLINE       racnode-dc1-1            Open,HOME=/u01/app/o
                                                             racle/12.1.0.1/db1,S
                                                             TABLE
      2        ONLINE  ONLINE       racnode-dc1-2            Open,HOME=/u01/app/o
                                                             racle/12.1.0.1/db1,S
                                                             TABLE
ora.mgmtdb
      1        ONLINE  ONLINE       racnode-dc1-1            Open,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.racnode-dc1-1.vip
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
ora.racnode-dc1-2.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.rhpserver
      1        OFFLINE OFFLINE                               STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-1 ~]$ srvctl config mgmtdb -all
Database unique name: _mgmtdb
Database name:
Oracle home: <CRS home>
  /u01/18.3.0.0/grid on node racnode-dc1-1
Oracle user: oracle
Spfile: +CRS/_MGMTDB/PARAMETERFILE/spfile.271.1006137461
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Type: Management
PDB name: GIMR_DSCREP_10
PDB service: GIMR_DSCREP_10
Cluster name: vbox-rac-dc1
Management database is enabled.
Management database is individually enabled on nodes:
Management database is individually disabled on nodes:
Database instance: -MGMTDB

[oracle@racnode-dc1-1 ~]$ crsctl stat res ora.crs.ghchkpt.acfs -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.crs.ghchkpt.acfs
               OFFLINE OFFLINE      racnode-dc1-1            volume /opt/oracle/r
                                                             hp_images/chkbase is
                                                             unmounted,STABLE
               OFFLINE OFFLINE      racnode-dc1-2            volume /opt/oracle/r
                                                             hp_images/chkbase is
                                                             unmounted,STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-1 ~]$ crsctl stat res -w 'TYPE = ora.acfs.type' -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.crs.ghchkpt.acfs
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-1 ~]$ crsctl stat res -w "TYPE = ora.acfs.type" -p | grep VOLUME
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/ghchkpt-61
VOLUME_DEVICE=/dev/asm/ghchkpt-61
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/ghchkpt-61
VOLUME_DEVICE=/dev/asm/ghchkpt-61

[oracle@racnode-dc1-1 ~]$ crsctl stat res ora.drivers.acfs -init
NAME=ora.drivers.acfs
TYPE=ora.drivers.acfs.type
TARGET=ONLINE
STATE=ONLINE on racnode-dc1-1

[oracle@racnode-dc1-1 ~]$ mount|egrep -i 'asm|ghchkpt'
oracleasmfs on /dev/oracleasm type oracleasmfs (rw,relatime)

[oracle@racnode-dc1-1 ~]$ acfsutil version
acfsutil version: 18.0.0.0.0

[oracle@racnode-dc1-1 ~]$ acfsutil registry
Mount Object:
  Device: /dev/asm/ghchkpt-61
  Mount Point: /opt/oracle/rhp_images/chkbase
  Disk Group: CRS
  Volume: GHCHKPT
  Options: none
  Nodes: all
  Accelerator Volumes:

[oracle@racnode-dc1-1 ~]$ acfsutil info fs
acfsutil info fs: ACFS-03036: no mounted ACFS file systems

[oracle@racnode-dc1-1 ~]$ acfsutil info storage
Diskgroup      Consumer      Space     Size With Mirroring  Usable Free  %Free   Path
CRS                          59.99              59.99          34.95       58%
DATA                         99.99              99.99          94.76       94%
FRA                          59.99              59.99          59.43       99%
----
unit of measurement: GB

[root@racnode-dc1-1 ~]# srvctl start filesystem -device /dev/asm/ghchkpt-61
PRCA-1138 : failed to start one or more file system resources:
CRS-2501: Resource 'ora.crs.ghchkpt.acfs' is disabled
[root@racnode-dc1-1 ~]#
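Since the whole point here is that RHP and its ACFS volume are not needed, I left the resource disabled. For completeness, a hedged sketch of how one would bring the file system up if it were actually wanted (standard srvctl syntax; not executed in my environment):

# Enable the disabled ACFS file system resource, then start it.
srvctl enable filesystem -device /dev/asm/ghchkpt-61
srvctl start filesystem -device /dev/asm/ghchkpt-61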

[oracle@racnode-dc1-1 ~]$ asmcmd -V
asmcmd version 18.0.0.0.0

[oracle@racnode-dc1-1 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304     61436    35784                0           35784                        Y  CRS/
MOUNTED  EXTERN  N         512             512   4096  4194304    102396    97036                0           97036                        N  DATA/
MOUNTED  EXTERN  N         512             512   4096  4194304     61436    60856                0           60856                        N  FRA/

[oracle@racnode-dc1-1 ~]$ srvctl status rhpserver
Rapid Home Provisioning Server is enabled
Rapid Home Provisioning Server is not running
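As with the ACFS volume, I left the RHP server alone. If you actually wanted to use Rapid Home Provisioning, starting it is a single srvctl call (a sketch only; not run here, since the local-mode default was left as configured):

# Start the Rapid Home Provisioning Server (recall from the note above
# that MGMTDB is required when using RHP).
srvctl start rhpserver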

[oracle@racnode-dc1-1 ~]$ ps -ef|grep [p]mon
oracle    3571     1  0 02:40 ?        00:00:03 mdb_pmon_-MGMTDB
oracle   17109     1  0 Apr20 ?        00:00:04 asm_pmon_+ASM1
oracle   17531     1  0 Apr20 ?        00:00:06 ora_pmon_hawk1
[oracle@racnode-dc1-1 ~]$

Let me show you how convoluted this is.
In my case, it’s easy because there were only 2 actions performed.
Can you tell which GridSetupAction was performed based on the directory name alone?

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs
$ ls -ld G*
drwxrwx--- 3 oracle oinstall 4096 Apr 21 18:59 GridSetupActions2019-04-20_06-51-48PM
drwxrwx--- 2 oracle oinstall 4096 Apr 21 18:56 GridSetupActions2019-04-21_02-10-47AM

This is how you can find out.

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs
$ ls -ld G*
drwxrwx--- 3 oracle oinstall 4096 Apr 21 19:20 GridSetupActions2019-04-20_06-51-48PM
drwxrwx--- 2 oracle oinstall 4096 Apr 21 19:22 GridSetupActions2019-04-21_02-10-47AM

================================================================================
### gridSetup.sh -silent -skipPrereqs -applyRU
================================================================================
oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-20_06-51-48PM
$ ll
total 13012
-rw-r----- 1 oracle oinstall   20562 Apr 20 19:09 AttachHome2019-04-20_06-51-48PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall       0 Apr 20 18:59 gridSetupActions2019-04-20_06-51-48PM.err
-rw-r----- 1 oracle oinstall 7306374 Apr 20 19:09 gridSetupActions2019-04-20_06-51-48PM.log
-rw-r----- 1 oracle oinstall 2374182 Apr 20 19:09 gridSetupActions2019-04-20_06-51-48PM.out
-rw-r----- 1 oracle oinstall 3582408 Apr 20 18:59 installerPatchActions_2019-04-20_06-51-48PM.log
-rw-r----- 1 oracle oinstall       0 Apr 20 19:02 oraInstall2019-04-20_06-51-48PM.err
-rw-r----- 1 oracle oinstall       0 Apr 20 19:09 oraInstall2019-04-20_06-51-48PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall     157 Apr 20 19:02 oraInstall2019-04-20_06-51-48PM.out
-rw-r----- 1 oracle oinstall      29 Apr 20 19:09 oraInstall2019-04-20_06-51-48PM.out.racnode-dc1-2
drwxrwx--- 2 oracle oinstall    4096 Apr 20 19:01 temp_ob
-rw-r----- 1 oracle oinstall   12467 Apr 20 19:09 time2019-04-20_06-51-48PM.log

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-20_06-51-48PM
$ grep ROOTSH_LOCATION gridSetupActions2019-04-20_06-51-48PM.log
INFO: Setting variable 'ROOTSH_LOCATION' to '/u01/18.3.0.0/grid/root.sh'. Received the value from a code block.
INFO: Setting variable 'ROOTSH_LOCATION' to '/u01/18.3.0.0/grid/root.sh'. Received the value from a code block.
INFO: Setting variable 'ROOTSH_LOCATION' to '/u01/18.3.0.0/grid/rootupgrade.sh'. Received the value from a code block.
INFO: Setting variable 'ROOTSH_LOCATION' to '/u01/18.3.0.0/grid/root.sh'. Received the value from a code block.

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-20_06-51-48PM
$ grep "Execute Root Scripts successful" time2019-04-20_06-51-48PM.log
 # Execute Root Scripts successful. # 3228 # 1555780156914
 # Execute Root Scripts successful. # 3228 # 1555780156914
 # Execute Root Scripts successful. # 3228 # 1555780156914

================================================================================
### gridSetup.sh -executeConfigTools -silent
================================================================================
oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-21_02-10-47AM
$ ll
total 1116
-rw-r----- 1 oracle oinstall       0 Apr 21 02:10 gridSetupActions2019-04-21_02-10-47AM.err
-rw-r----- 1 oracle oinstall  122568 Apr 21 02:51 gridSetupActions2019-04-21_02-10-47AM.log
-rw-r----- 1 oracle oinstall 1004378 Apr 21 02:51 gridSetupActions2019-04-21_02-10-47AM.out
-rw-r----- 1 oracle oinstall     129 Apr 21 02:10 installerPatchActions_2019-04-21_02-10-47AM.log
-rw-r----- 1 oracle oinstall    3155 Apr 21 02:51 time2019-04-21_02-10-47AM.log

oracle@racnode-dc1-1:hawk1:/u01/app/oraInventory/logs/GridSetupActions2019-04-21_02-10-47AM
$ grep rhprepos *
gridSetupActions2019-04-21_02-10-47AM.log:INFO:  [Apr 21, 2019 2:45:37 AM] Command /u01/18.3.0.0/grid/bin/rhprepos upgradeSchema -fromversion 12.1.0.2.0

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-21_02-10-47AM
$ grep executeSelectedTools gridSetupActions2019-04-21_02-10-47AM.log
INFO:  [Apr 21, 2019 2:11:37 AM] Entering ConfigClient.executeSelectedToolsInAggregate method
INFO:  [Apr 21, 2019 2:11:37 AM] ConfigClient.executeSelectedToolsInAggregate oAggregate=oracle.crs:oracle.crs:18.0.0.0.0:common
INFO:  [Apr 21, 2019 2:11:37 AM] ConfigClient.executeSelectedToolsInAggregate action assigned
INFO:  [Apr 21, 2019 2:51:38 AM] ConfigClient.executeSelectedToolsInAggregate action performed
INFO:  [Apr 21, 2019 2:51:38 AM] Exiting ConfigClient.executeSelectedToolsInAggregate method

It might be better to use the GUI if available, but be careful.

For OUI installations or execution of critical scripts, it is recommended to use VNC to avoid problems in case the connection with the server is lost.

I was using X and the connection was lost during the upgrade. It was a kiss of death, with this being the last screen capture.

Rene’s quote:

After looking for information in MOS, there wasn’t much that could lead me on how to solve the issue, just a lot of bugs related to the RHP repository.

I was lucky enough to get on a call with a good friend (@_rickgonzalez ) who is the PM of the RHP and we were able to work through it. So below is what I was able to do to solve the issue.

Also it was confirmed by them , that this is a bug in the upgrade process of 18.X, so hopefully they will be fixing it soon.

I concur and conclude that the process for the GI 18c upgrade is overly complicated, convoluted, contradictory, and not clearly documented, all having to do with MGMTDB and the Rapid Home Provisioning (RHP) repository.

Unless you’re lucky or know someone, good luck with your upgrade.

Lastly, it would be greatly appreciated if you would share your upgrade experiences and/or results.

Did you use GUI or silent?

Oracle Ksplice introduces Known Exploit Detection functionality

Wim Coekaerts - Sat, 2019-04-20 12:04

The Oracle Ksplice team has added some really cool new functionality to Oracle Ksplice. Rather than more or less rewriting and copying the blog here, just go directly to the source:

It's unique, it's awesome, it's part of the Oracle Linux Premier subscription, and it's included in Oracle Cloud instances at no extra cost for all customers using Oracle Linux. 

https://blogs.oracle.com/linux/using-ksplice-to-detect-exploit-attempts
