Mary Ann Davidson

Oracle Blogs

When Screen Scraping became API calling – Gathering Oracle OpenWorld Session Catalog with ...

Sun, 2018-05-20 03:16

A dataset with all sessions of the upcoming Oracle OpenWorld 2017 conference is nice to have – for experiments and demonstrations with many technologies. The session catalog is exposed on a public website here.

With searching, filtering and scrolling, all available sessions can be inspected. If data is available in a browser, it can be retrieved programmatically and persisted locally, for example as a JSON document. A typical approach for this is web scraping: having a server side program act like a browser, retrieve the HTML from the web site and query the data from the response. This process is described, for example, in this article – https://codeburst.io/an-introduction-to-web-scraping-with-node-js-1045b55c63f7 – for Node and the Cheerio library.

However, server side screen scraping of HTML will only be successful when the HTML is static. Dynamic HTML is constructed in the browser by executing JavaScript code that manipulates the browser DOM. If that is the mechanism behind a web site, server side scraping is at the very least considerably more complex (as it requires the server to emulate a modern web browser to a large degree). Selenium has been used in such cases – to provide a server side, programmatically accessible browser engine. Alternatively, screen scraping can also be performed inside the browser itself – as is supported for example by the Getsy library.

As you will find in this article – when server side scraping fails, client side scraping may be a much too complex solution. It is quite possible that the rich client web application uses a REST API that provides the data as a JSON document – an API that our server side program can also easily leverage. That turned out to be the case for the OOW 2017 website – so instead of complex HTML parsing and server side or even client side scraping, the challenge at hand resolves to nothing more than a little bit of REST calling. Read the complete article here.
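As a rough sketch of that approach (the endpoint, query parameters and output file below are placeholders for illustration, not the actual catalog API described in the article), the whole exercise boils down to a REST call whose JSON response is saved locally:

#!/bin/bash
# Hypothetical example - the URL, parameters and file names are placeholders.
CATALOG_API="https://example.com/oow2017/api/sessions"   # placeholder endpoint
curl -s "$CATALOG_API?size=50&from=0" \
  -H "Accept: application/json" \
  -o sessions-page-0.json
# Quick look at what came back (requires jq)
jq '.' sessions-page-0.json | head -n 40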

PaaS Partner Community

For regular information on business process management and integration, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

Blog Twitter LinkedIn Facebook Wiki

Technorati Tags: SOA Community,Oracle SOA,Oracle BPM,OPN,Jürgen Kress

Solve digital transformation challenges using Oracle Cloud

Sun, 2018-05-20 03:15

 


Digital transformation is an omnipresent topic today, presenting many challenges as well as opportunities. As a result, customers are asking how to deal with those challenges and how to take advantage of the opportunities. Frequently asked questions in this area are:

  • How can we modernize existing applications?
  • What are the key elements of a future-proof IT system architecture strategy?
  • How can the flexibility and agility of the IT system landscape be ensured?

From our experience there is no single answer to these questions, since every customer has individual requirements and business needs. It is, however, necessary to find pragmatic solutions that build on existing best practices – there is no need to completely re-invent the wheel.

With our new poster „Four Pillars of Digitalization based on Oracle Cloud“ (download it here), we deliver a set of harmonized reference models that we have evolved from our practical experience while conceiving modern, future-oriented solutions in the areas of application design, integrative architectures, infrastructure solutions and analytical architectures. The guiding principle behind our architectural thinking is: Design for Change. If you want to learn more, refer to our corresponding ebook (find it here; only available in German at the moment).

The technological base for modern application architectures today is usually built on cloud services, and the offerings of the different vendors are constantly growing. It is therefore important to know which cloud services are the right ones to implement a specific use case. Our poster „Four Pillars of Digitalization based on Oracle Cloud“ shows the respective cloud services of our strategic partner Oracle that can be used to address specific challenges in the area of digitalization. Get the poster here.

 

Developer Partner Community

For regular information, become a member of the Developer Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

Blog Twitter LinkedIn Forum Wiki

Technorati Tags: PaaS,Cloud,Middleware Update,WebLogic, WebLogic Community,Oracle,OPN,Jürgen Kress

Oracle API Platform Cloud Service Overview by Rolando Carrasco

Sat, 2018-05-19 03:25


  • Oracle API Platform Cloud Services - API Design: the first video of a series showcasing the usage of Oracle API Platform Cloud Services.
  • Oracle API Platform Cloud Services - API Management, part 1 of 2: the second video of the series, showcasing the usage of the brand-new Oracle API Platform CS; this is part one of API Management.
  • Oracle API Platform Cloud Services - API Management, part 2: the third video of the series, covering the second part of the API Management functionality, focused on Documentation.
  • Oracle API Platform CS - How to create an app: the fourth video of the series, showing how to create an application.
  • Oracle API Platform Cloud Services - API Usage: the fifth video of the series, showcasing how to interact with the APIs deployed in APIPCS.

 

PaaS Partner Community

For regular information on business process management and integration, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

Blog Twitter LinkedIn Facebook Wiki

Technorati Tags: SOA Community,Oracle SOA,Oracle BPM,OPN,Jürgen Kress

Why are Universal Cloud Credit and Bring Your Own License a great opportunity for Oracle Partners?

Sat, 2018-05-19 03:24

Oracle has simplified buying and consuming PaaS and IaaS Cloud. Customers can now purchase Universal Cloud Credits. These universal cloud credits can be spent on any IaaS or PaaS service. Partners can start a PoC or project, e.g. with Application Container Cloud Service, and add additional services when required, e.g. Chatbot Cloud Service. The customer can use the universal cloud credits for any available or even upcoming IaaS and PaaS services.

Thousands of customers use Oracle Fusion Middleware and Databases today. With Bring Your Own License they can easily move workloads to the cloud. As they already own the license, the customer only needs to pay a small uplift for the service portion of PaaS. This is a major opportunity for Oracle partners to offer services to these customers.

To learn more about Universal Cloud Credits and Bring Your Own License, attend the free on-demand training here.

 

Developer Partner Community

For regular information become a member in the Developer Partner Community please visit: http://www.oracle.com/partners/goto/wls-emea ( OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

Blog Twitter LinkedIn Forum Wiki

Technorati Tags: PaaS,Cloud,Middleware Update,WebLogic, WebLogic Community,Oracle,OPN,Jürgen Kress

Event Hub Cloud Service. Hello world

Sat, 2018-05-19 00:46

A while ago, I wrote a blog about Oracle Reference Architecture and the concepts of Schema on Read and Schema on Write. Schema on Read is well suited to a Data Lake, which may ingest any data as it is, without any transformation, and preserve it for a long period of time.

At the same time, you have two types of data - streaming data and batch data. Batch data could be log files or RDBMS archives. Streaming data could be IoT or sensor data, or GoldenGate replication logs.

Apache Kafka is a very popular engine for acquiring streaming data. It has multiple advantages, like scalability, fault tolerance and high throughput. Unfortunately, Kafka is hard to manage. Fortunately, the cloud simplifies many routine operations. Oracle has three options for deploying Kafka in the cloud:

1) Use Big Data Cloud Service, where you get a full Cloudera cluster and can deploy Apache Kafka as part of CDH.

2) Event Hub Cloud Service Dedicated. Here you have to specify server shapes and some other parameters, but the rest is done by the cloud automagically.

3) Event Hub Cloud Service. This service is fully managed by Oracle; you don't even need to specify any compute shapes. The only things to do are to specify how long you need to store data in the topic and how many partitions you need (partitions = performance).

Today, I'm going to tell you about the last option, the fully managed cloud service.

It's really easy to provision: just log in to your cloud account and choose the "Event Hub" cloud service.

After this, open the service console:

Next, click on "Create service":

Enter some parameters - the two key ones are Retention Period and Number of Partitions. The first defines how long messages will be stored, the second defines the performance of read and write operations.

Then click Next:

Confirm and wait a while (usually not more than a few minutes):

After a short while, you will be able to see the provisioned service:

 

 

Hello world flow.

Today I want to show a "Hello world" flow: how to produce (write) and consume (read) a message with Event Hub Cloud Service.

The flow is (step by step):

1) Obtain OAuth token

2) Produce message to a topic

3) Create consumer group

4) Subscribe to topic

5) Consume message

Now I'm going to show it in some detail.

OAuth and Authentication token (Step 1)

To work with Event Hub Cloud Service you have to be familiar with the concepts of OAuth and OpenID. If you are not, you can watch this short video or go through this step-by-step tutorial.

In a couple of words, OAuth is a token-based authorization method (it tells what I can access) used to restrict access to resources.

One of the main ideas is to decouple the User (a real human - the Resource Owner) from the Application (Client). The human knows the login and password, but the Client (Application) will not use them every time it needs to reach the Resource Server (which holds the info or content). Instead, the Application obtains an Authorization token once and uses it to work with the Resource Server. This is the brief version; here you may find a more detailed explanation of what OAuth is.

Obtain Token for Event Hub Cloud Service client.

As you can understand, to get access to the Resource Server (read: Event Hub messages) you need to obtain an authorization token from the Authorization Server (read: IDCS). Here, I'd like to show the step-by-step flow for obtaining this token. I will start from the end and show the command (REST call) which you have to run to get the token:

#!/bin/bash
curl -k -X POST -u "$CLIENT_ID:$CLIENT_SECRET" \
  -d "grant_type=password&username=$THEUSERNAME&password=$THEPASSWORD&scope=$THESCOPE" \
  "$IDCS_URL/oauth2/v1/token" \
  -o access_token.json

As you can see, many parameters are required to obtain an OAuth token.

Let's take a look at where you can get them. Go to the service and click on the topic you want to work with; there you will find the IDCS Application - click on it:

After clicking on it, you will be redirected to the IDCS Application page. You can find most of the credentials here. Click on Configuration:

On this page, you will right away find the Client ID and Client Secret (think of them like a login and password):

 

Scroll down and find the section called Resources:

Click on it

and you will find another two variables which you need for the OAuth token - Scope and Primary Audience.

One more required parameter - IDCS_URL - you can find in your browser:

You now have almost everything you need, except the login and password. This means your Oracle Cloud login and password (what you use when logging in to http://myservices.us.oraclecloud.com):

Now you have all the required credentials and are ready to write a script which will automate all this:

#!/bin/bash
export CLIENT_ID=7EA06D3A99D944A5ADCE6C64CCF5C2AC_APPID
export CLIENT_SECRET=0380f967-98d4-45e9-8f9a-45100f4638b2
export THEUSERNAME=john.dunbar
export THEPASSWORD=MyPassword
export SCOPE=/idcs-1d6cc7dae45b40a1b9ef42c7608b9afe-oehtest
export PRIMARY_AUDIENCE=https://7EA06D3A99D944A5ADCE6C64CCF5C2AC.uscom-central-1.oraclecloud.com:443
export THESCOPE=$PRIMARY_AUDIENCE$SCOPE
export IDCS_URL=https://idcs-1d6cc7dae45b40a1b9ef42c7608b9afe.identity.oraclecloud.com
curl -k -X POST -u "$CLIENT_ID:$CLIENT_SECRET" \
  -d "grant_type=password&username=$THEUSERNAME&password=$THEPASSWORD&scope=$THESCOPE" \
  "$IDCS_URL/oauth2/v1/token" \
  -o access_token.json

After running this script, you will have a new file called access_token.json. The access_token field is what you need:

$ cat access_token.json {"access_token":"eyJ4NXQjUzI1NiI6InVUMy1YczRNZVZUZFhGbXFQX19GMFJsYmtoQjdCbXJBc3FtV2V4U2NQM3MiLCJ4NXQiOiJhQ25HQUpFSFdZdU9tQWhUMWR1dmFBVmpmd0UiLCJraWQiOiJTSUdOSU5HX0tFWSIsImFsZyI6IlJTMjU2In0.eyJ1c2VyX3R6IjoiQW1lcmljYVwvQ2hpY2FnbyIsInN1YiI6ImpvaG4uZHVuYmFyIiwidXNlcl9sb2NhbGUiOiJlbiIsInVzZXJfZGlzcGxheW5hbWUiOiJKb2huIER1bmJhciIsInVzZXIudGVuYW50Lm5hbWUiOiJpZGNzLTFkNmNjN2RhZTQ1YjQwYTFiOWVmNDJjNzYwOGI5YWZlIiwic3ViX21hcHBpbmdhdHRyIjoidXNlck5hbWUiLCJpc3MiOiJodHRwczpcL1wvaWRlbnRpdHkub3JhY2xlY2xvdWQuY29tXC8iLCJ0b2tfdHlwZSI6IkFUIiwidXNlcl90ZW5hbnRuYW1lIjoiaWRjcy0xZDZjYzdkYWU0NWI0MGExYjllZjQyYzc2MDhiOWFmZSIsImNsaWVudF9pZCI6IjdFQTA2RDNBOTlEOTQ0QTVBRENFNkM2NENDRjVDMkFDX0FQUElEIiwiYXVkIjpbInVybjpvcGM6bGJhYXM6bG9naWNhbGd1aWQ9N0VBMDZEM0E5OUQ5NDRBNUFEQ0U2QzY0Q0NGNUMyQUMiLCJodHRwczpcL1wvN0VBMDZEM0E5OUQ5NDRBNUFEQ0U2QzY0Q0NGNUMyQUMudXNjb20tY2VudHJhbC0xLm9yYWNsZWNsb3VkLmNvbTo0NDMiXSwidXNlcl9pZCI6IjM1Yzk2YWUyNTZjOTRhNTQ5ZWU0NWUyMDJjZThlY2IxIiwic3ViX3R5cGUiOiJ1c2VyIiwic2NvcGUiOiJcL2lkY3MtMWQ2Y2M3ZGFlNDViNDBhMWI5ZWY0MmM3NjA4YjlhZmUtb2VodGVzdCIsImNsaWVudF90ZW5hbnRuYW1lIjoiaWRjcy0xZDZjYzdkYWU0NWI0MGExYjllZjQyYzc2MDhiOWFmZSIsInVzZXJfbGFuZyI6ImVuIiwiZXhwIjoxNTI3Mjk5NjUyLCJpYXQiOjE1MjY2OTQ4NTIsImNsaWVudF9ndWlkIjoiZGVjN2E4ZGRhM2I4NDA1MDgzMjE4NWQ1MzZkNDdjYTAiLCJjbGllbnRfbmFtZSI6Ik9FSENTX29laHRlc3QiLCJ0ZW5hbnQiOiJpZGNzLTFkNmNjN2RhZTQ1YjQwYTFiOWVmNDJjNzYwOGI5YWZlIiwianRpIjoiMDkwYWI4ZGYtNjA0NC00OWRlLWFjMTEtOGE5ODIzYTEyNjI5In0.aNDRIM5Gv_fx8EZ54u4AXVNG9B_F8MuyXjQR-vdyHDyRFxTefwlR3gRsnpf0GwHPSJfZb56wEwOVLraRXz1vPHc7Gzk97tdYZ-Mrv7NjoLoxqQj-uGxwAvU3m8_T3ilHthvQ4t9tXPB5o7xPII-BoWa-CF4QC8480ThrBwbl1emTDtEpR9-4z4mm1Ps-rJ9L3BItGXWzNZ6PiNdVbuxCQaboWMQXJM9bSgTmWbAYURwqoyeD9gMw2JkwgNMSmljRnJ_yGRv5KAsaRguqyV-x-lyE9PyW9SiG4rM47t-lY-okMxzchDm8nco84J5XlpKp98kMcg65Ql5Y3TVYGNhTEg","token_type":"Bearer","expires_in":604800}

Create a Linux variable for it:

#!/bin/bash
export TOKEN=`cat access_token.json | jq .access_token | sed 's/\"//g'`

Well, now we have the authorization token and may work with our Resource Server (Event Hub Cloud Service).

Note: you may also check the documentation on how to obtain an OAuth token.

Produce Messages (Write data) to Kafka (Step 2)

The first thing that we may want to do is produce messages (write data to a Kafka cluster). To make scripting easier, it's better to use some environment variables for common resources. For this example, I'd recommend parametrizing the topic's endpoint, the topic name, the type of content to be accepted and the content type. The content type is completely up to the developer, but you have to consume (read) the same format as you produce (write). The key parameter to define is the REST endpoint. Go to PSM, click on the topic name and copy everything up to "restproxy":

You will also need the topic name, which you can take from the same window:

Now we can write a simple script to produce one message to Kafka:

#!/bin/bash
export OEHCS_ENDPOINT=https://oehtest-gse00014957.uscom-central-1.oraclecloud.com:443/restproxy
export TOPIC_NAME=idcs-1d6cc7dae45b40a1b9ef42c7608b9afe-oehtest
export CONTENT_TYPE=application/vnd.kafka.json.v2+json
curl -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: $CONTENT_TYPE" \
  --data '{"records":[{"value":{"foo":"bar"}}]}' \
  $OEHCS_ENDPOINT/topics/$TOPIC_NAME

If everything is fine, the Linux console will return something like:

{"offsets":[{"partition":1,"offset":8,"error_code":null,"error":null}],"key_schema_id":null,"value_schema_id":null}

Create Consumer Group (Step 3)

The first step in reading data from OEHCS is to create a consumer group. We will reuse the environment variables from the previous step, but just in case I'll include them in this script:

#!/bin/bash
export OEHCS_ENDPOINT=https://oehtest-gse00014957.uscom-central-1.oraclecloud.com:443/restproxy
export CONTENT_TYPE=application/vnd.kafka.json.v2+json
export TOPIC_NAME=idcs-1d6cc7dae45b40a1b9ef42c7608b9afe-oehtest
curl -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: $CONTENT_TYPE" \
  --data '{"format": "json", "auto.offset.reset": "earliest"}' \
  $OEHCS_ENDPOINT/consumers/oehcs-consumer-group \
  -o consumer_group.json

This script will generate an output file containing the variables that we will need to consume messages.

Subscribe to a topic (Step 4)

Now you are ready to subscribe to this topic (export the environment variables if you didn't do so before):

#!/bin/bash
export BASE_URI=`cat consumer_group.json | jq .base_uri | sed 's/\"//g'`
export TOPIC_NAME=idcs-1d6cc7dae45b40a1b9ef42c7608b9afe-oehtest
curl -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: $CONTENT_TYPE" \
  -d "{\"topics\": [\"$TOPIC_NAME\"]}" \
  $BASE_URI/subscription

If everything is fine, this request will not return anything.

Consume (Read) messages (Step 5)

Finally, we come to the last step - consuming messages.

And again, it's a quite simple curl request:

#!/bin/bash
export BASE_URI=`cat consumer_group.json | jq .base_uri | sed 's/\"//g'`
export H_ACCEPT=application/vnd.kafka.json.v2+json
curl -X GET \
  -H "Authorization: Bearer $TOKEN" \
  -H "Accept: $H_ACCEPT" \
  $BASE_URI/records

If everything works as it is supposed to, you will have output like:

[{"topic":"idcs-1d6cc7dae45b40a1b9ef42c7608b9afe-oehtest","key":null,"value":{"foo":"bar"},"partition":1,"offset":17}]

Conclusion

Today we saw how easy it is to create a fully managed Kafka topic in Event Hub Cloud Service, and we also took our first steps with it - writing and reading a message. Kafka is a really popular message bus engine, but it's hard to manage. The cloud simplifies this and allows customers to concentrate on the development of their applications.

Here I also want to give some useful links:

1) If you are not familiar with REST APIs, I'd recommend going through this blog

2) There is an online tool which helps to validate your curl requests

3) Here you can find some useful examples of producing and consuming messages

4) If you are not familiar with OAuth, here is a nice tutorial which shows an end-to-end example

Why Now Is the Time for ERP in the Cloud

Fri, 2018-05-18 20:20

“The movement to cloud is an inevitable destination; this is how computing will evolve over the next several years.” So said Oracle CEO Mark Hurd at Oracle OpenWorld 2017. Based on the results of new research, that inevitability is here, now.

In our first ERP Trends Report, we surveyed more than 400 finance and IT leaders. We found that 76% of respondents said they either have plans for ERP in the cloud or have made the move already. They are recognizing that waiting puts them at a disadvantage; the time to make the move is now.

The majority of respondents cited economic factors as the reason they made the leap, and it’s easy to see why: Nucleus Research recently published a report finding that cloud delivers 3.2x the return on investment (ROI) of on-premises systems, while the total cost of ownership (TCO) is 52% lower.

But even more surprising were the benefits realized once our survey respondents got to the cloud. An astonishing 81% cited “Staying current on technology” as the main benefit of moving to cloud ERP. With a regular cadence of innovation delivered by the cloud, it is easier for companies to quickly incorporate game-changing technologies into everyday business processes—technologies like artificial intelligence, machine learning, the Internet of Things (IoT), blockchain and more. In the cloud, the risk of running their businesses on obsolete technology drops to zero. It’s the last upgrade they will ever need.

“One of the key value propositions in engaging with Oracle and implementing the cloud solutions has been the value of keeping current with technology and technological developments,” said Mick Murray, CFO of Blue Shield of California. “In addition to robotics, we’re looking at machine learning and artificial intelligence, and how do we apply that across the enterprise.”

As new capabilities are rolled out, cloud subscribers like Blue Shield can take advantage of them immediately. This gives them the agility to be both responsive and predictive. Uncertainty is the new normal in business and managing amid uncertainty is a must. It’s no longer enough to be quick-to-change; competitive companies must also have reliable insight into how potential future scenarios could impact performance.

So, what does that mean in terms of daily operations? Basically, it means people using knowledge to make good decisions in a fast, productive, and highly automated manner at all levels of the business. Cloud systems provide the data integration and ongoing technology refresh to incorporate best practices and technology advances.

The cloud also makes it easier to integrate external sources of valuable, contextual knowledge that helps improve the accuracy of data models. This is important considering the scope of threats to sustainable operations for businesses with large, global footprints. Political, environmental, and economic factors across multiple regions could impact business, such as limited travel capabilities slowing down delivery of key supplies.

Business uncertainty is everywhere, and organizations must be able to say, “What is our plan if X happens? What is our plan if X, Y, and Z happen, but W doesn’t?” And this insight must come quickly. Business moves too fast for reports to take days to compile.

ERP Replacement Effort Is Not What It Used to Be

One final stone on the scale in favor of ERP cloud is that migrating does not have to be painful. Don’t let memories of past onsite replacements haunt you. With the right products and the right expertise behind them, cloud migrations happen quickly, cause minimal business disruption, and don’t require intense user training.

For example, Blue Shield of California had set aside $600,000 on change management for the adoption of cloud; in the end, they barely spent anything. Change adoption, they reported, happened quickly and seamlessly.

Considering the benefits for cost savings, elimination of technology obsolescence, and ease of adopting emerging technologies, it is becoming harder to justify a wait on migration to cloud ERP. Disruption is not an issue, and long-term cost savings are substantial. Most importantly, modernizing ERP is an opportunity to modernize the business and embed an ever-refreshing technology infrastructure that enables higher performance on multiple levels.

 

7 Machine Learning Best Practices

Fri, 2018-05-18 20:11

Netflix’s famous algorithm challenge awarded a million dollars to the best algorithm for predicting user ratings for films. But did you know that the winning algorithm was never implemented into a functional model?

Netflix reported that the results of the algorithm just didn’t seem to justify the engineering effort needed to bring them to a production environment. That’s one of the big problems with machine learning.

At your company, you can create the most elegant machine learning model anyone has ever seen. It just won’t matter if you never deploy and operationalize it. That's no easy feat, which is why we're presenting you with seven machine learning best practices.

Download your free ebook, "Demystifying Machine Learning"

At the most recent Data and Analytics Summit, we caught up with Charlie Berger, Senior Director of Product Management for Data Mining and Advanced Analytics, to find out more. This article is based on what he had to say.

Putting your model into practice might take longer than you think. A TDWI report found that 28% of respondents took three to five months to put their model into operational use, and almost 15% needed longer than nine months.

Graph on Machine Learning Operational Use

So what can you do to start deploying your machine learning faster?

We’ve laid out our tips here:

1. Don’t Forget to Actually Get Started

In the following points, we’re going to give you a list of different ways to ensure your machine learning models are used in the best way. But we’re starting out with the most important point of all.

The truth is that at this point in machine learning, many people never get started at all. This happens for many reasons. The technology is complicated, the buy-in perhaps isn’t there, or people are just trying too hard to get everything e-x-a-c-t-l-y right. So here’s Charlie’s recommendation:

Get started, even if you know that you’ll have to rebuild the model once a month. The learning you gain from this will be invaluable.

2. Start with a Business Problem Statement and Establish the Right Success Metrics

Starting with a business problem is a common machine learning best practice. But it’s common precisely because it’s so essential and yet many people de-prioritize it.

Think about this quote, “If I had an hour to solve a problem, I’d spend 55 minutes thinking about the problem and 5 minutes thinking about solutions.”

Now be sure that you’re applying it to your machine learning scenarios. Below, we have a list of poorly defined problem statements and examples of ways to define them in a more specific way.

Machine Learning Problem Statements

Think about what your definition of profitability is. For example, we recently talked to a nation-wide chain of fast-casual restaurants that wanted to look at increasing their soft drinks sales. In that case, we had to consider carefully the implications of defining the basket. Is the transaction a single meal, or six meals for a family? This matters because it affects how you will display the results. You’ll have to think about how to approach the problem and ultimately operationalize it.

Beyond establishing success metrics, you need to establish the right ones. Metrics will help you track progress, but does improving the metric actually improve the end user experience? For example, your traditional accuracy measures might encompass precision and squared error. But if you’re trying to create a model for airline price optimization, those measures don’t matter if your cost per purchase and overall purchases aren’t going up.

3. Don’t Move Your Data – Move the Algorithms

The Achilles heel in predictive modeling is that it’s a 2-step process. First you build the model, generally on sample data that can run in numbers ranging from the hundreds to the millions. And then, once the predictive model is built, data scientists have to apply it. However, much of that data resides in a database somewhere.

Let’s say you want data on all of the people in the US. There are 360 million people in the US—where does that data reside? Probably in a database somewhere.

Where does your predictive model reside?

What usually happens is that people take all of their data out of the database so they can run their equations with their model. Then they have to import the results back into the database to make those predictions. And that process takes hours and hours, days and days, reducing the efficacy of the models you’ve built.

However, running your equations inside the database has significant advantages. Running the equations through the kernel of the database takes a few seconds, versus the hours it would take to export your data. The database can also do all of your math and build the model inside the database. This means one world for the data scientist and the database administrator.

By keeping your data within your database and Hadoop or object storage, you can build and score models within the database and use R packages with data-parallel invocations. This allows you to eliminate data duplication and separate analytical servers (by not moving data) and allows you to score models, embed data prep, build models, and prepare data in just hours.

4. Assemble the Right Data

As James Taylor with Neil Raden wrote in Smart Enough Systems, cataloging everything you have and deciding what data is important is the wrong way to go about things. The right way is to work backward from the solution, define the problem explicitly, and map out the data needed to populate the investigation and models.

And then, it’s time for some collaboration with other teams.

Machine Learning Collaboration Teams

Here’s where you can potentially start to get bogged down. So we will refer to point number 1, which says, “Don’t forget to actually get started.” At the same time, assembling the right data is very important to your success.

For you to figure out the right data to use to populate your investigation and models, you will want to talk to people in the three major areas of business domain, information technology, and data analysts.

Business domain—these are the people who know the business.

  • Marketing and sales
  • Customer service
  • Operations

Information technology—the people who have access to data.

  • Database administrators

Data Analysts—the people who know the data.

  • Statisticians
  • Data miners
  • Data scientists

You need their active participation. Without it, you’ll get comments like:

  • These leads are no good
  • That data is old
  • This model isn’t accurate enough
  • Why didn’t you use this data?

You’ve heard it all before.

5. Create New Derived Variables

You may think, I have all this data already at my fingertips. What more do I need?

But creating new derived variables can help you gain much more insightful information. For example, you might be trying to predict the amount of newspapers and magazines sold the next day. Here’s the information you already have:

  • Brick-and-mortar store or kiosk
  • Sell lottery tickets?
  • Amount of the current lottery prize

Sure, you can make a guess based off that information. But if you’re able to first compare the amount of the current lottery prize versus the typical prize amounts, and then compare that derived variable against the variables you already have, you’ll have a much more accurate answer.
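As a tiny illustration (the input file, its columns, and the typical prize value below are made up for this example, not taken from the article), such a derived variable can be computed up front and appended to the data before modeling:

#!/bin/bash
# Illustration only: sales.csv and its columns are hypothetical.
# Input columns: outlet_type,sells_lottery,current_prize
# Output adds a derived column: current prize relative to an assumed typical prize.
TYPICAL_PRIZE=10000000
awk -F',' -v typical="$TYPICAL_PRIZE" '
  NR == 1 { print $0 ",prize_ratio"; next }   # extend the header row
  { printf "%s,%.2f\n", $0, $3 / typical }    # append the derived ratio column
' sales.csv > sales_with_derived.csv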

6. Consider the Issues and Test Before Launch

Ideally, you should be able to A/B test with two or more models when you start out. Not only will you see which model performs better, but you’ll also be able to feel more confident knowing that you’re doing it right.

But going further than thorough testing, you should also have a plan in place for when things go wrong. For example, your metrics start dropping. There are several things that will go into this. You’ll need an alert of some sort to ensure that this can be looked into ASAP. And when a VP comes into your office wanting to know what happened, you’re going to have to explain what happened to someone who likely doesn’t have an engineering background.

Then of course, there are the issues you need to plan for before launch. Complying with regulations is one of them. For example, let’s say you’re applying for an auto loan and are denied credit. Under the new regulations of GDPR, you have the right to know why. Of course, one of the problems with machine learning is that it can seem like a black box, and even the engineers/data scientists can’t say why certain decisions have been made. However, certain companies will help you by ensuring that your algorithms provide prediction details.

7. Deploy and Automate Enterprise-Wide

Once you deploy, it’s best to go beyond the data analyst or data scientist.

What we mean by that is, always, always think about how you can distribute predictions and actionable insights throughout the enterprise. It’s where the data is and when it’s available that makes it valuable; not the fact that it exists. You don’t want to be the one sitting in the ivory tower, occasionally sprinkling insights. You want to be everywhere, with everyone asking for more insights—in short, you want to make sure you’re indispensable and extremely valuable.

Given that we all only have so much time, it’s easiest if you can automate this. Create dashboards. Incorporate these insights into enterprise applications. See if you can become a part of customer touch points, like an ATM recognizing that a customer regularly withdraws $100 every Friday night and likes $500 after every payday.

Conclusion

Here are the core ingredients of good machine learning. You need good data, or you’re nowhere. You need to put it somewhere like a database or object storage. You need deep knowledge of the data and what to do with it, whether that’s creating new derived variables or choosing the right algorithms to make use of them. Then you need to actually put them to work, get great insights, and spread those insights across the organization.

The hardest part of this is launching your machine learning project. We hope that by creating this article, we’ve helped you out with the steps to success. If you have any other questions or you’d like to see our machine learning software, feel free to contact us.

You can also refer back to some of the articles we’ve created on machine learning best practices and challenges concerning that. Or, download your free ebook, "Demystifying Machine Learning."

 

Announcing PeopleSoft Cloud Manager Support for Oracle Cloud Infrastructure

Fri, 2018-05-18 19:45

Oracle released PeopleSoft Cloud Manager in 2017 featuring in-depth automation to help accelerate adoption of Oracle Cloud (Classic) as an efficient deployment platform for PeopleSoft customers. With the excitement generated around Oracle Cloud Infrastructure (OCI)--a cloud designed for the enterprise customer--several customers and partners have been looking forward to taking advantage of the enhanced OCI with PeopleSoft Cloud Manager.  Oracle is pleased to announce Cloud Manager’s support for OCI beginning with today’s release of PeopleSoft Cloud Manager Version 6.

So, what is new and exciting in PeopleSoft Cloud Manager Version 6?  For the first time, there are two images provided: one for OCI, and the other for OCI Classic.  The Cloud Manager Image 6 for OCI supports a number of OCI features, including Regions, Virtual Cloud Networks, Subnets, Compute and DB System platforms.  With this image, instances will be provisioned on VM shapes.  Customers can lift and shift PeopleSoft environments from on-premises to OCI using the same approach they used for OCI Classic.

For PeopleSoft Cloud Manager on OCI Classic, we have enabled support for the lift and shift of on-premises databases encrypted with Oracle Transparent Data Encryption (TDE).  TDE offers another level of data security that customers are looking for as data is migrated to the cloud.  A ‘Clone to template’ option is also available for encrypted databases. 

The lift utility requires a few parameters for TDE so that the encrypted database may be packaged and lifted to the cloud.

During the shift process, the same parameters are required to deploy the lifted database.

Customers have also requested an enhancement to support non-Unicode databases for PeopleSoft environments.  PeopleSoft Cloud Manager Version 6 supports lift and shift of environments that use non-Unicode Databases.  Unlike image 5, a conversion of the on-premises database to Unicode is no longer required prior to using Cloud Manager’s Lift and Shift automation.

To get your hands on the new Cloud Manager images, go to the Oracle Marketplace and look for either the OCI-Classic image or the OCI image…or try both!   Be sure to review the documentation and additional important information mentioned in the Marketplace listings.

We are excited to combine the automation of provisioning and maintenance that PeopleSoft Cloud Manager provides with the robust benefits of Oracle Cloud Infrastructure.  Combining support for OCI with the additional features of non-Unicode databases and TDE encrypted databases, we expect all customers to benefit from the latest Cloud Manager image, using whichever Oracle Cloud is right for you. 

Stay tuned for additional information and more Cloud Manager features.  Now, off to the next image!

 

Emerging Tech Helps Progressive Companies Deliver Exceptional CX

Fri, 2018-05-18 19:18

It’s no secret that the art of delivering exceptional service to customers—whether they’re consumers or business buyers—is undergoing dramatic change. Customers routinely expect highly personalized experiences across all touchpoints, from marketing and sales to service and support. I call each of these engagements a moment of truth—because leaving customers feeling satisfied and valued at each touchpoint will have a direct bearing on their loyalty and future spending decisions.

This is why customer experience (CX) has become a strategic business imperative for modern companies. Organizations that provide effective, well-integrated CX across the entire customer journey achieved compound annual growth rates of 17%, versus the 3% growth rates logged by their peers who provided less-effective customer experiences, according to Forrester’s 2017 “Customer Experience Index.”

Fortunately, it’s becoming easier to enter the CX winner’s circle. AI, machine learning, IoT, behavioral analytics, and other innovations are helping progressive companies capitalize on internal and third-party data to deliver highly personalized communications, promotional offers, and service engagements.

How can companies fully leverage today’s tools to support exceptional CX? If they haven’t already done so, companies should start evolving away from cloud 1.0 infrastructures, where an amalgam of best-of-breed services runs various business units. These standalone cloud platforms might have initially provided quick on-ramps to modern capabilities, but now, many companies are paying a price for that expediency. Siloed data and workflows hinder the smooth sharing of customer information among departments. This hurts CX when a consumer who just purchased a high-end digital camera at a retail outlet, for example, webchats with that same company’s service department about a problem, and the service team has no idea this is a premium customer.

In contrast, cloud 2.0 is focused on achieving a holistic view of customers—thanks to simplified, well-integrated services that support each phase of the customer journey. Eliminating information silos benefits companies by giving employees all the information they need to provide a tailored experience for every customer.

Achieving modern CX requires the right vendor partnerships. That starts with evaluating cloud services according to how complete, integrated, and extensible the CX platform is for supporting the entire customer journey. One option is the Oracle Customer Experience Cloud (Oracle CX Cloud) suite, an integrated set of applications for the entire customer lifecycle. It’s complemented by native AI capabilities and Oracle Data Cloud, the world’s largest third-party data marketplace of consumer and business information, which manages anonymized information from more than a billion business and 5 billion consumer identifiers. This means that business leaders, besides understanding customers based on their direct interactions, can use Oracle Data Cloud for insights into social, web surfing, and buying habits at third-party sites and retailers and then apply AI to find profitable synergies.

As new disruptive technologies come to the market—whether that’s the mainstreaming of IoT or drones for business—companies will be under constant pressure to integrate these new capabilities to improve their CX strategies. Modern, integrated cloud services designed for CX don’t support just today’s innovations. With the right cloud choices, companies can continually evolve to meet tomorrow’s CX challenges.

(Photo of Des Cahill by Bob Adler, The Verbatim Agency)

5 Subjects Every Computer Science Student Should Learn

Fri, 2018-05-18 18:55

I was fortunate this year to attend the Association for Computer Machinery’s SIGCSE (Special Interest Group on Computer Science Education) conference, where there was a good deal of conversation about what a modern computer science curriculum should include.

Technology changes quickly and it can be difficult for academic programs to keep pace. Still, if computer science students are to contribute meaningfully to the field in either industry or research jobs, it’s critical that they learn modern computing skills. Here are five subjects I think every higher education institution should teach their undergraduate computer science majors:

1. Parallel Programming

The single, standalone server with one CPU has gone the way of the dodo bird, displaced by the cloud, server farms and multithreaded parallel processors. Yet colleges and universities are still mainly teaching their undergraduates sequential programming—programs that execute instructions one after the other—as they have for decades.

Modern computing environments and massive data sets demand not just that we process multiple instructions simultaneously across multiple servers (distributed computing), but also that programs be written to process multiple instructions simultaneously on multicore chips within multiple servers and devices.

Too often, parallel programming is relegated to a single chapter in a textbook, easily skipped when time in the semester runs short. To prepare students for high-performance computing, big data, machine learning, blockchain and more, we must teach them to both think and program in parallel.

2. Green Programming

With the ubiquity of battery-driven computers, energy efficiency is more important than ever. The more we ask our smart devices to do, the more energy they need to do it and the more quickly they exhaust their batteries. The same is true for massive server clusters, where fires related to energy-consumption are not uncommon as we demand faster and faster processing of more and more data.

How you architect a software program directly affects how much energy is needed to execute the program, yet few undergraduate programs teach students about this relationship. In a fast-warming world, one in which we dream big dreams about all the ways artificial intelligence and high-performance computing will make our lives better, it is imperative that we write energy-optimized software. Students will not be able to do that if we don’t teach them how.

3. Collaborative Development

Academia persists in trying to measure what individual students know. In most programming classes, students start from a blank screen and write clean code independently or, less often, with a partner.

But this isn’t how software is engineered in the real world. Professional software engineers almost always start with someone else’s code and work collaboratively in large groups to modify, improve and correct that code, which is then integrated with code written by other engineers in other groups.

It’s common for software development groups to include people from different countries, in different time zones. Working effectively requires team members to communicate well in different languages and across different cultures. It also means that someone else needs to be able to look at your code and know what it does, so following formatting standards and providing clear commenting are critical.

However, in our desire to ensure that each student understands every programming concept and rule of syntax, we overlook opportunities to teach collaborative software development and help students develop critical professional skills.

4. Hardware Architecture

In the minds of most college students, IBM, Intel, and AMD—the inventors and developers of the multicore processor—are old news…old companies founded by old guys. Mobile applications are where the action is.

But mobile apps are driven by data, usually by a lot of data, and they won’t be of much use without the processors, databases and networks that power them.

Computing works and advances based on the entire system, from the power source to the user interface, and students will be more successful if they know how to open the box and “kick the tires.” They can then optimize for energy efficiency and write parallel code that makes use of new hardware architectures. They can manage caching, memory architecture and resource allocation issues. They can explain and explore quantum computing.

Computer science doesn’t stop at software or coding. Students need foundations in hardware architecture, too, including electrical engineering and physics. We need computer scientists who can test and push the boundaries of hardware just as much as they push what can be achieved with software.

5. Computer History and Ethics

Something I heard at the Turing 50th Anniversary celebration last summer has stuck with me: Computing is not neutral. It can be used for good or evil. It can be used to help people and it can be used to manipulate and harm them.

For several decades now, we have been making computing advances for the sake of computing, because what we can make computers do is cool, because the challenge of the next thing is too alluring to pass up, because there is money to be made if we can do “X.”

Just because we can do something with computing, however, doesn’t mean we should. Computing power is so great that we need policies to regulate and manage it, in order to protect and benefit people.

It’s important for students of computing to understand its history and to take courses grounded in ethics so they can make responsible decisions and guide others. They should know computing’s historical villains and heroes, its inventors and detractors, and how it has been used to benefit and hurt people. The old saw applies here: If we do not learn our history, we are doomed to repeat it.

Even in a crowded curriculum, we must ensure students are gaining the skills and knowledge they need to become technology innovators, business leaders and positive contributors to society in the coming decades. This list is only a starting point.

Alison Derbenwick Miller is vice president of Oracle Academy.

How Blockchain Will Disrupt the Insurance Industry

Fri, 2018-05-18 18:49

The insurance industry relies heavily on the notion of trust among transacting parties. For example, when you go to buy car insurance you get asked for things like your zip code, name, age, daily mileage, and the make & model of your car. Other than, maybe, the make & model of your car, you can pretty much falsify the other information about yourself to get a better insurance quote. Underwriters trust that you are providing the correct information, which is one of the many risks in the underwriting business.

Enterprise blockchain platforms such as the one from Oracle essentially enable trust-as-a-service in such interactions. Participants (insurer and insured) need to come together to do business, but they do not necessarily trust each other. Blockchain provides a scalable mechanism to securely and easily enable trust in such scenarios. There are 4 key properties of blockchain that enable trust-as-a-service:

  1. Transparency of digital events and transactions it manages,
  2. Immutability of records stored on the blockchain, through append-only, time-stamped and hashed records,
  3. Security and assurance that records stored on blockchain aren't compromised through built-in consensus and encryption mechanisms,
  4. Privacy through cryptography

Blockchain can be a good solution for a number of insurance use cases such as:

  • Reducing frauds in underwriting and claims by validating data from customers and suppliers in the value chain
  • Reducing claims by offering tokenized incentives to promote safer driving behavior by capturing data from insured entities like motor vehicles
  • Enabling pay-per-mile billing for insurance by keeping verifiable records of miles traveled
  • And, in the not-so-distant future, determining liability in the case of an accident between two autonomous vehicles by using blockchain to manage timestamped, immutable records of the decisions made by the deep-learning models of both vehicles right before the accident.

Besides these use cases, blockchain has the potential to eliminate intermediaries, improve the transparency of records, and eliminate manual paperwork and error-prone processes, which together can deliver orders-of-magnitude improvements in operational efficiency for businesses. Of course, there are other types of insurance, such as healthcare, reinsurance, catastrophic events insurance, and property and casualty insurance, which would have their own flavors of use cases, but they would similarly benefit from blockchain to reduce risk and improve business efficiency.

There is no question that blockchain can, potentially, be a disruptive force in the insurance industry. It would have to overcome legal and regulatory barriers before we see mass adoption of blockchain among the industry participants. 

If you are working on an interesting project related to the use of blockchain in the insurance industry, feel free to get in touch by leaving a comment or contacting us through social media or your Oracle sales rep. We’d be glad to help you connect with our subject matter experts and with industry peers who may be working on similar use cases with Oracle. For more information on Oracle Blockchain, please visit the Oracle Blockchain home pages here and here.

 

 

Oracle's No-Cost Platinum-Level Support Is the New Baseline in the Cloud Market

Fri, 2018-05-18 18:36

Companies may give up their servers, storage, and entire data centers when they move to the cloud, but their need for support services doesn’t go away, it changes. Recognizing a growing need for enterprise-class support in the cloud, Oracle is making its Platinum-level support services available at no additional cost to all customers of Oracle Fusion software-as-a-service applications.

“Our objective is to put out a service capability that is simply the best—bar none,” said Oracle CEO Mark Hurd, in announcing that a range of support services would be available for Oracle Fusion enterprise resource planning, enterprise performance management, human capital management, supply chain, manufacturing, and sales and service cloud applications.

The SaaS support services include 24/7 rapid-response technical support, proactive technical monitoring, success planning, end-user adoption guidance, and education resources.

“Most of our customers are going to cloud,” Hurd said in a briefing with journalists at Oracle’s headquarters in Redwood Shores, California. As that happens, he said, “it’s important for someone in the industry, particularly an industry leader in these mission-critical applications, to take a position” on what level of service that transition demands.

“SaaS application support offerings need to become more agile and responsive,” Hurd added. “We need to provide our SaaS customers with everything they need for rapid, low-cost implementations and a successful rollout to their users.”

Catherine Blackmore, Oracle group vice president of North America Customer Success, said Oracle will also offer new advanced services, including dedicated support and certified expertise, for customers that need a higher level of support. “We have a shared interest in our customers’ success, so we’re going above and beyond to ensure our customers have everything they need to succeed,” she said.

Cloud Levels the Playing Field

Oracle also announced the names of first-time cloud customers and others that are expanding their use of Oracle Cloud services. They include Alsea, Broadcom, Exelon, Gonzaga University, Heineken Urban Polo, Providence St. Joseph Health, Sinclair Broadcast Group, and T-Mobile US.

In a Q&A with the journalists, Hurd was asked about his outlook for SaaS adoption outside of the United States and, in particular, in the Latin America region. He said modern cloud applications can be “game changing” for businesses in places where outdated software applications are still the norm.

“You don’t need armies of experts and system integrators,” Hurd said.

Oracle develops thousands of features that are made available regularly to its SaaS application customers. “That’s a feature stream you don’t have to manage from a data center that you don’t have to operate,” Hurd said.

Self-Driving Technology

Hurd pointed to Oracle’s development of autonomous technologies, including the recently introduced Oracle Autonomous Data Warehouse Cloud Service, as another big area of focus at the company. “It gets upgraded, optimized, secured, patched, and tuned, all automatically without any human intervention,” he said.

As the next step in the delivery of autonomous cloud services, Oracle announced the availability of three new Oracle Cloud Platform services with built-in artificial intelligence and machine learning algorithms: Oracle Autonomous Analytics Cloud, Oracle Autonomous Integration Cloud, and Oracle Autonomous Visual Builder Cloud.

Modern Customer Experience 2018 was Legendary

Fri, 2018-05-18 17:53

During his keynote at Modern Customer Experience 2018, Des Cahill, Head CX Evangelist, stated that CX should stand for Continuous Experimentation. He encouraged 4,500 enthusiastic marketing, customer service, sales, and commerce professionals to try new strategies, take risks, strive to be remarkable, and triumph through sheer determination.

Casey Neistat echoed Des, challenging us to “do what you can’t,” while best-selling author Cheryl Strayed inspired us to look past our fears and be brave. “Courage isn’t success,” she reminded us, “it’s doing what’s hard regardless of the outcome.”

CX professionals today face numerous challenges: the relentless rise of customer expectations, the accelerating pace of innovation, evolving regulations like GDPR, pressure to increase ROI, and the constant push to raise the bar. Modern Customer Experience not only inspired attendees to become the heroes of their organization, it also armed each of them with the tools to do so.

If you missed Carolyne-Matseshe Crawford, VP of Fan Experience at Fanatics, talking about how her company’s culture pervades the entire customer experience, or Magen Hanrahan, VP of Product Marketing at Kraft Heinz, on her obsession with data-driven marketing tactics, give them a watch. And don’t miss Comcast’s Executive VP and Chief Customer Experience Officer, Charlie Herrin, who wants to use artificial intelligence to build proactive customer experience and dialogue into Comcast’s products themselves.

The Modern Customer Experience X Room showcased CX innovation, like augmented and virtual reality, artificial intelligence, and the Internet of Things. But it wasn’t all just mock-ups and demos: a Mack Truck, a Yamaha motorcycle, and an Elgin Street Sweeper were on display, showcasing how Oracle customers put innovation to use to create legendary customer experiences.

Attendees were able to let off some steam during morning yoga and group runs. They relived the 90s with Weezer during CX Fest, and our Canine Heroes from xxxxx were a highlight of everyone’s day.

But don’t just take it from us. Here’s what a few of our attendees had to say about the event.

“Modern Customer Experience gives me the ability to learn about new products on the horizon, discuss challenges, connect with other MCX participants, learn best practices and understand we’re not alone in our journey.” – Matt Adams, Sales Cloud Manager, ArcBest

 “Modern Customer Experience really allows me to do my job more effectively. Without it, I don’t know where I would be! It’s the best conference of the year.” – Joshua Parker, Digital Marketing and Automation Manager, Rosetta Stone

We’re still soaking it all in. You can watch all the highlights from Modern Customer Experience keynotes on YouTube, and peruse the event’s photo slideshow. Don’t forget to share your images on social media, with #ModernCX and sign up for alerts when registration for Modern Customer Experience 2019 opens!

The Most Important Stop on Your Java Journey

Fri, 2018-05-18 14:52

Howdy, Pardner. Have you moseyed over to JavaRanch lately? Pull up a stool at the OCJA or OCJP Wall of Fame and tell your tale or peruse the tales of others. 

Ok - I'm not so great at the cowboy talk, but if you're serious about a Java career and haven't visited JavaRanch, you are missing out! 

JavaRanch, a self-proclaimed "friendly place for Java greenhorns [beginners]," was created in 1997 by Kathy Sierra, co-author of at least 5 Java guides for Oracle Press. The ranch was taken over in subsequent years by Paul Wheaton, who continues to run this space today.

In addition to a robust collection of discussion forums about all things Java, JavaRanch provides resources to learn and practice Java, book recommendations, and resources to create your first Java program and test your Java skills.

One of our favorite features of JavaRanch remains the Walls of Fame! This is where you can read the personal experiences of other candidates certified on Java. Learn from their processes and their mistakes. Be inspired by their accomplishments. Share your own experience. 

Visit the Oracle Certified Java Associate Wall of Fame

Visit the Oracle Certified Java Professional Wall of Fame

Get the latest Java Certification from Oracle

Oracle Certified Associate, Java SE 8 Programmer

Oracle Certified Professional, Java SE 8 Programmer

Oracle Certified Professional, Java SE 8 Programmer (upgrade from Java SE 7)

Oracle Certified Professional, Java SE 8 Programmer (upgrade from Java SE 6 and all prior versions)

Related Content

Test Your Java Knowledge With FREE Sample Questions

Program Your Future With Java

What's New with Oracle Certification - May

Fri, 2018-05-18 14:49
Stay up to date with the Oracle Certification Program. Keep informed about new exams released into production, get information on current promotions, and learn about new program announcements.

New Exams and Certifications

Oracle Mobile Cloud Enterprise 2018 Associate Developer | 1Z0-927: This certification covers implementation topics of related Oracle PaaS services such as: Visual Builder Cloud Service, Java Cloud Service, Developer Cloud Service, Application Container Cloud Service, and Container Native Apps. This certification validates understanding of the Application Development portfolio and capacity to configure the services.

Oracle Management Cloud 2018 Associate | 1Z0-930: Passing this exam demonstrates the skills and knowledge to architect and implement Oracle Management Cloud. This individual can configure Application Performance Monitoring, Oracle Infrastructure Monitoring, Oracle Log Analytics, Oracle IT Analytics, Oracle Orchestration, Oracle Security Monitoring and Analytics, and Oracle Configuration and Compliance.

Oracle Cloud Security 2018 Associate | 1Z0-933: Passing this exam validates understanding of Oracle Cloud Security portfolio and capacity to configure the services. This certification covers topics such as: Identity Security Operations Center Framework, Identity Cloud Service, CASB Cloud Service, Security Monitoring and Analytics Cloud Service, Configuration and Compliance Service, and services Architecture and Deployment.

Oracle Data Integration Platform Cloud 2018 Associate | 1Z0-935: Passing this exam validates understanding of Oracle Application Integration to implement the service. This certification covers topics such as: Oracle Cloud Application Integration basics, Application Integration: Oracle Integration Cloud (OIC), Service-Oriented Architecture Cloud Service (SOACS), Integration API Platform Cloud Service, Internet of Things - Cloud Service (IOTCS), and Oracle's Process Cloud Service.

Oracle Analytics Cloud 2018 Associate | 1Z0-936: Passing this exam demonstrates the knowledge required to perform provisioning, build dimensional models, and create data visualizations. The certified professional can use Advanced Analytics capabilities, create a machine learning model, and configure Oracle Analytics Cloud Essbase.

Explore All Certifications

 

How Does the DBA Keep Their Role Relevant? 

By having the skills to meet the new demands for business optimization along with a reputation of continuous learning and improvement. Check out how training + certification keeps a DBA relevant. Read full article.

 

Benefits of Upgrading Your OCA certification to Database 12c Release 2

Building upon the competencies in the Oracle Database 12c OCA certification, the Oracle Certified Professional (OCP) for Oracle Database 12c certification includes the advanced knowledge and skills required of top-performing database administrators, including the development and deployment of backup, recovery, and cloud computing strategies. Find out how to upgrade with this exam!

Is Your Shellshocked Poodle Freaked Over Heartbleed?

Wed, 2015-03-25 17:43



Security weenies will understand that the above title is not as nonsensical as it appears. Would that it were mere nonsense. Instead, I suspect more than a few will read the title and their heads will throb, either because the readers hit themselves in the head, accompanied by the multicultural equivalents of “oy vey” (I’d go with “aloha ‘ino”), or because the above expression makes them reach for the most potent over-the-counter painkiller available.


For those who missed it, there was a sea change in security vulnerabilities reporting last year involving a number of mass panics around “named” vulnerabilities in commonly-used – and widely-used – embedded libraries. For example, the POODLE vulnerability (an acronym for Padding Oracle On Downgraded Legacy Encryption) affects SSL version 3.0, and many products and services using SSL version 3.0 use third party library implementations. The Shellshock vulnerabilities affect GNU bash, a program that multiple Unix-based systems use to execute command lines and command scripts. These vulnerabilities (and others) were widely publicized (the cutesie names helped) and resulted in a lot of scrambling to find, fix, and patch the vulnerabilities. The cumulative result of a number of named vulnerabilities last year in widely-used and deployed libraries I refer to as the Great Shellshocked Poodle With Heartbleed Security Awakening (GSPWHSA). It was a collective IT community eye opener as to:


The degree to which common third party components are embedded in many products and services
The degree to which vendors (and customers) did not know where-all these components actually were used, or what versions of them were used
And, to some degree (more on which below) the actual severity of these issues



A slight digression on how we got to a Shellshocked Poodle with Heartbleed. Way back in the olden days (when I started working at Oracle), the Internet hadn’t taken off yet, and there weren’t as many standard ways of doing things. The growth of the Internet led to the growth of standards (e.g., SSL, now superseded by TLS) so Stuff Would Work Together. The requirement for standards-based interoperability fostered the growth of common libraries (many of them open source), because everyone realized it was dumb to, say, build your own pipes when you could license someone else’s ready-made pipe libraries. Open source/third party libraries helped people build things faster that worked together, because everyone wasn’t building everything from scratch. None of these – standards, common libraries, open source – are bad things. They are (mostly) very good things that have fostered the innovation we now take for granted.



Historically, development organizations didn’t always keep careful track of where all the third party libraries were used, and didn’t necessarily upgrade them regularly. To some degree, the “not upgrade” was understandable – unless there is a compelling reason to move from Old Reliable to New and Improved (as in, they actually are improved and there is a benefit to using the new stuff), you might as well stick with Old and Reliable. Or so it seemed.


When security researchers began focusing on finding vulnerabilities in widely-used libraries, everyone got a rude awakening that their library of libraries (that is, listing of what components were used where) needed to be a whole lot better, because customers wanted to know very quickly the answer to “is the product or cloud service I am using vulnerable?” Moreover, many vendors and service providers realized that, like it or not, they needed to aggressively move to incorporate reasonably current (patched) versions of libraries because, if the third party component you embed is not supported for the life of the product or service you are embedding it in, you can’t get a security patch when you need one: in short, “you are screwed,” as we security experts say. I’ve remarked a lot recently, with some grumbling, that people don’t do themselves any favors by continuing to incorporate libraries older than the tablets of Moses (at least God is still supporting those).
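
As an aside on what a better “library of libraries” can look like in practice, here is a minimal, hypothetical Python sketch (the product names, library versions, and support dates are invented for illustration, not drawn from the post) that records which third-party components each product embeds and whether a patched version can still be obtained:

```python
# Minimal sketch of a "library of libraries" check; all product names,
# versions, and end-of-support dates below are hypothetical.
from datetime import date

# Which third-party libraries each product or service embeds, and which versions.
component_inventory = {
    "ExampleApp 12.1": [("openssl", "1.0.1f"), ("zlib", "1.2.8")],
    "ExampleCloudService": [("openssl", "1.0.2k"), ("bash", "4.3")],
}

# Hypothetical end-of-support dates for the embedded versions.
support_ends = {
    ("openssl", "1.0.1f"): date(2016, 12, 31),
    ("openssl", "1.0.2k"): date(2019, 12, 31),
    ("zlib", "1.2.8"): date(2020, 6, 30),
    ("bash", "4.3"): date(2018, 1, 1),
}

def affected_products(library, today=None):
    """Answer 'which products embed this library, and can we still get a patch?'"""
    today = today or date.today()
    for product, components in component_inventory.items():
        for name, version in components:
            if name == library:
                end = support_ends.get((name, version))
                supported = end is not None and end >= today
                yield product, version, supported

if __name__ == "__main__":
    # e.g., when the next OpenSSL advisory lands:
    for product, version, supported in affected_products("openssl"):
        status = "patchable" if supported else "OUT OF SUPPORT - no patch possible"
        print(f"{product}: openssl {version} ({status})")
```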


Like all religious revivals, the GSPWHSA has thus resulted in a lot of people repenting of their sins: “Forgive me, release manager, for I have sinned, I have incorporated an out-of-support library in my code.” “Three Hail Marys and four version upgrades, my son…” Our code is collectively more holy now, we all hope, instead of continuing to be hole-y. (Yes, that was a vile pun.) This is a good thing.


The second aspect of the GSPWHSA is more disturbing, and that is, for lack of a better phrase, the “marketing of security vulnerabilities.” Anybody who knows anything about business knows how marketing can – and often intends to – amplify reality. Really, I am sure I can lose 20 pounds and find true love and happiness if I only use the right perfume: that’s why I bought the perfume! Just to get the disclaimer out of the way, no, this is not another instance of the Big Bad Vendor complaining about someone outing security vulnerabilities. What’s disturbing to me is the outright intent to create branding around security vulnerabilities and willful attempt to create a mass panic – dare we say “trending?” – around them regardless of the objective threat posed by the issue. A good example is the FREAK vulnerability (CVE-2015-0204). The fix for FREAK was distributed by OpenSSL on January 8th. It was largely ignored until early March when it was given the name FREAK.  Now, there are a lot of people FREAKing out about this relatively low risk vulnerability while largely ignoring unauthenticated, network, remote code execution vulnerabilities.

Here’s how it works. A researcher first finds vulnerability in a widely-used library: the more widely-used, the better, since nobody cares about a vulnerability in Digital Buggy Whip version 1.0 that is, like, so two decades ago and hardly anybody uses. OpenSSL has been a popular target, because it is very widely used so you get researcher bragging rights and lots of free PR for finding another problem in it. Next, the researcher comes up with a catchy name. You get extra points for it being an acronym for the nature of the vulnerability, such as SUCKS – Security Undermining of Critical Key Systems. Then, you put up a website (more points for cute animated creature dancing around and singing the SUCKS song). Add links so visitors can Order the T-shirt, Download the App, and Get a Free Bumper Sticker! Get a hash tag. Develop a Facebook page and ask your friends to Like your vulnerability. (I might be exaggerating, but not by much.) Now, sit back and wait for the uninformed public to regurgitate the headlines about “New Vulnerability SUCKS!” If you are a security researcher who dreamed up all the above, start planning your speaking engagements on how the world as we know it will end, because (wait for it), “Everything SUCKS.”

Now is where the astute reader is thinking, “but wait a minute, isn’t it really a good thing to publicize the need to fix a widely-embedded library that is vulnerable?” Generally speaking, yes. Unfortunately, most of the publicity around some of these security vulnerabilities is out of proportion to the actual criticality and exploitability of the issues, which leads to customer panic. Customer panic is a good thing – sorta – if the vulnerability is the equivalent of the RMS Titanic’s “vulnerability” as exploited by a malicious iceberg. It’s not a good thing if we are talking about a rowboat with a bad case of chipped paint. The panic leads to suboptimal resource allocation as code providers (vendors and open source communities) are – to a point – forced to respond to these issues based on the amount of press they are generating instead of how serious they really are. It also means there is other more valuable work that goes undone. (Wouldn’t most customers actually prefer that vendors fix security issues in severity order instead of based on “what’s trending?”). Lastly, it creates a shellshock effect with customers, who cannot effectively deal with a continuous string of exaggerated vulnerabilities that cause their management to apply patches as soon as possible or document that their environment is free of the bug.


The relevant metric around how fast you fix things should be objective threat. If something has a Common Vulnerability Scoring System (CVSS) Base Score of 10, then I am all for widely publicizing the issue (with, of course, the Common Vulnerabilities and Exposures (CVE) number, so people can read an actual description, rather than “run for your lives, Godzilla is stomping your code!”) If something is CVSS 2, I really don’t care that it has a cuter critter than Bambi as a mascot and generally customers shouldn’t, either. To summarize my concerns, the willful marketing of security vulnerabilities is worrisome for security professionals because:

It creates excessive focus on issues that are not necessarily truly critical

It creates grounds for confusion (as opposed to using CVEs)

It creates a significant support burden for organizations,* where resources would be better spent elsewhere

I would therefore, in the interests of constructive suggestions, recommend that customers assess the following criteria before calling all hands on deck over the next “branded” security vulnerability being marketed as the End of Life On Earth As We Know It:


1. Consider the source of the vulnerability information. There are some very good sites (arstechnica comes to mind) that have well-explained, readily understandable analyses of security issues. Obviously, the National Vulnerability Database (NVD) is also a great source of information.


2. Consider the actual severity of the bug (CVSS Base Score) and the exploitation scenario to determine “how bad is bad.”


3. Consider where the vulnerability exists, its implications, and whether mitigation controls exist in the environment: e.g., Heartbleed was CVSS 5.0, but the affected component (SSL), the nature of the information leakage (possible compromise of keys), and the lack of mitigation controls made it critical.


* e.g., businesses patching based on the level of hysteria rather than the level of threat


Organizations should look beyond cutesie vulnerability names so as to focus their attention where it matters most. Inquiring about the most recent medium-severity bugs will do less in terms of helping an organization secure its environment than, say, applying existing patches for higher-severity issues. Furthermore, it fosters a culture of “security by documentation” where organizations seek to collect information about a given bug from their cloud and software providers, while failing to apply existing patches in their environment. Nobody is perfect, but if you are going to worry, worry about vulnerabilities based on How Bad Is Bad, and not based on which ones have catchy acronyms, mascots or have generated a lot of press coverage.
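
To make the three criteria above concrete, here is a minimal, hypothetical Python sketch of a triage rule that weighs the CVSS Base Score, the exploitation scenario, and available mitigations rather than press coverage. The thresholds and score adjustments are illustrative assumptions, not a standard, and the example CVSS values are the v2 base scores commonly cited for these CVEs.

```python
# Hypothetical triage sketch: prioritize by objective threat, not by branding.
# Thresholds and adjustments are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Advisory:
    cve_id: str              # e.g., "CVE-2015-0204"
    cvss_base: float         # CVSS Base Score, 0.0 - 10.0
    exploit_in_wild: bool    # is an exploit actively circulating?
    leaks_keys: bool         # does it expose keys/credentials (Heartbleed-style)?
    mitigations_exist: bool  # do compensating controls exist in this environment?
    branded_name: str = ""   # cute name / mascot -- deliberately ignored below

def triage(adv: Advisory) -> str:
    """Bucket by How Bad Is Bad; the branded name plays no part in the decision."""
    score = adv.cvss_base
    if adv.exploit_in_wild:
        score += 2.0   # active exploitation raises urgency
    if adv.leaks_keys:
        score += 2.0   # key/credential compromise raises urgency
    if adv.mitigations_exist:
        score -= 2.0   # compensating controls lower urgency
    if score >= 9.0:
        return "all hands on deck: patch now"
    if score >= 6.0:
        return "patch in the next scheduled cycle"
    return "routine patching; no fire drill"

if __name__ == "__main__":
    freak = Advisory("CVE-2015-0204", cvss_base=4.3, exploit_in_wild=False,
                     leaks_keys=False, mitigations_exist=True, branded_name="FREAK")
    heartbleed = Advisory("CVE-2014-0160", cvss_base=5.0, exploit_in_wild=True,
                          leaks_keys=True, mitigations_exist=False, branded_name="Heartbleed")
    for adv in (freak, heartbleed):
        print(f"{adv.cve_id}: {triage(adv)}")
```

Under these assumed adjustments, Heartbleed lands in the “patch now” bucket despite its modest base score, while FREAK falls to routine patching, which mirrors the reasoning in the criteria above.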






The Four Ps of Standards/Procurement Requirements/”Whatevahs”

Mon, 2015-03-23 19:43



I am a veteran – not merely a military veteran, but an information security veteran. I don’t get medals for the latter, but I do have battle scars. Many of the scars are relatively recent: a result of tearing my hair out from many, many, many mind-numbing reviews of publications, draft standards and other kinds of documents which are ostensibly meant to make security better, cybersecurity being “hot” and all. Alas, many of these documents have linguistic and operational difficulties that often make it highly unlikely that they will achieve their stated “better security” objectives.


After reviewing so many documents and running into common patterns, I decided to take a cue from my MBA days and categorize my concerns in a catchy way. Though not a marketing major, I vaguely recall the “four Ps” of marketing (product, price, place and promotion) and decided to adapt them to the world of standards/procurement requirements/whatevahs (which I will now refer to as SPW). They are:


Problem Statement
Precise Language and Scope
Pragmatic Solutions
Prescriptive Minimization


I offer the "four Ps of SPW" for those who are attempting to improve cybersecurity by fiat, or in other ways intended to compel the market, in hopes that we may collectively get to better security without sinking into the swamp of despair, dallying in the desert of dashed hopes, trekking through the tundra of too-obscure requirements (nice use of alliteration, no?) … you get the point. While I think my advice is generally applicable in the SPW (say “spew”) realm, the context for my discussion is assurance slash supply chain risk mitigation since that’s what I seem to review most often.


Problem Statement


I cannot tell you how many SPW documents I have read in which Someone Was Attempting to Make Someone Else Do Something More Securely, only it wasn’t clear what, exactly, or more importantly, why (or even that the requirements would result in “better security”). Anything that seeks to impose Something Security-Oriented On Someone needs a clear problem statement. Without this, a proposed SPW becomes an expensive wish list with no associated benefits to it. Ultimately, the seller has no idea what the buyer really wants or needs. If a government agency cannot explain what they are really worried about, in language the “comply-ee” can understand, they shouldn’t be surprised if they get a chocolate-covered cockroach (eew) when they ask for something sweet, crunchy and locally sourced. (I’d add “sustainable,” as there seems to be no shortage of cockroaches.)


With regard to security, “supply chain” has become the mantra for attempting to regulate almost 100% of what businesses do. Poor quality, “backdoor boogiemen,” assurance, “supply chain shutdown” are all very (very!) different problems. Worse, the ambiguity around proposing a standard for “supply chain security” may encompass 100% of business operations. Example: my employer does not make their own paper clips or wood stirrers for coffee cups. Do we really need to worry about a shortage of either? No? Then don’t describe “supply chain requirements” that ask technology suppliers to track the wood sourced for our coffee stirrers. Buying a poor quality product, for example, is a business risk. It’s not, per se, a supply chain risk. Furthermore, while poor quality may lead to poor security, not all security problems are a result of quality issues. Some are a result of buyers not understanding that commercial off-the-shelf (COTS) software, while general purpose and often very good, is not “all purpose” and not designed for all threat environments.


The second aspect of a problem statement is the provision of use cases. A use case is a fancy way of saying, “for example.” Use cases are very important to help turn a problem statement into an “aha” moment for the reader. Moreover, use cases are important to limit scope and ensure that the SPW requirements are appropriate to serve its stated objectives. Absent a use case, you never really know what’s being asked for (and where it applies and where it does not apply). Use cases absolutely need to be contained within a requirements document.


For example, consider the US National Institute of Standards and Technology (NIST) Special Publication 800-152 A Profile for U.S. Federal Cryptographic Key Management Systems Draft 3 (December 2014). This special pub describes a combination of technical standards and policies around cryptographic key management systems. The problem is, nowhere in reading the document is it evident what, exactly, this applies to. Is this just “special, super secret key management systems for classified US government systems?” Or, does it apply to key management for things like Transport Layer Security (TLS) (or other cryptographic protocols that are well-established standards)? Why it matters: because if there are not use cases that define applicability, someone will assume it applies to everything. And, applying these requirements may conflict with (if not break) other standards.


90% of life isn’t showing up, it’s solving the right problem. You can’t solve the right problem if you don’t know (or cannot articulate) what it is, with some “for instances.”


Precise Language and Scope


It is astonishing to me how many SPW documents do not define core terminology used therein. Without a precise set of definitions, nobody really knows what is meant, and if something is vague, it’s going to be misinterpreted. (Worse, an undefined term may end up meaning whatever a “certifier” or other compliance overlord thinks it means: nobody ever really knows if they are compliant if compliant depends on what the certifier thinks it means.) Core terminology must be precisely and narrowly defined within the document. As the famous line goes from Let’s Call The Whole Thing Off,


“You like potato and I like potahto
You like tomato and I like tomahto
Potato, potahto, tomato, tomahto
Let’s call the whole thing off.” (Lyrics by Ira Gershwin, melody by George Gershwin)


The problem is, if a SPW is enshrined and applied, you can’t call it off. At least until the next revision. Figure out what to call a spud and make it clear, please!


For example, in the context of software, what is a vulnerability? A configuration error (leading to a security weakness)? A defect in software (that leads to a security weakness)? Any defect in software (regardless of the impact)? What if the design was intentional? Is a policy violation a vulnerability? A vulnerability cannot, surely, be all the above! And in fact, it isn’t, but just saying “vulnerability” and conflating all the above means that nobody will be able to come up with a remedy that works for all cases. (Note: for configurable software, if you configure it so my grandmother can hack into it, it’s not a “vulnerability,” it’s “user error.” There is only so much you can do to prevent a user shooting self in the foot when we are talking about firearms that allow you to point them at your feet.) Another example, what is a “module?” The answer may be very different depending on whether you are a hardware person or a software person.
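
As a small illustration of why the definition matters, here is a hypothetical sketch (the class names are invented for this example, not taken from any standard) showing that the remedy differs completely depending on which of the conflated categories an issue falls into:

```python
# Hypothetical sketch: "vulnerability" conflates very different issue classes,
# and who owns the remedy depends entirely on which class you mean.
from enum import Enum, auto

class IssueClass(Enum):
    SECURITY_DEFECT = auto()      # defect in the software leading to a security weakness
    NON_SECURITY_DEFECT = auto()  # defect with no security impact
    CONFIGURATION_ERROR = auto()  # insecure deployment of otherwise sound software
    POLICY_VIOLATION = auto()     # a rule was broken; the software behaved as designed
    INTENTIONAL_DESIGN = auto()   # works as documented; a limitation, not a bug

def remedy(issue: IssueClass) -> str:
    """The appropriate response differs by class, so a standard must say which it means."""
    return {
        IssueClass.SECURITY_DEFECT: "vendor security patch",
        IssueClass.NON_SECURITY_DEFECT: "vendor fix at normal priority",
        IssueClass.CONFIGURATION_ERROR: "operator hardening (i.e., user error)",
        IssueClass.POLICY_VIOLATION: "process and governance change",
        IssueClass.INTENTIONAL_DESIGN: "requirements change or compensating control",
    }[issue]

if __name__ == "__main__":
    print(remedy(IssueClass.CONFIGURATION_ERROR))  # -> operator hardening (i.e., user error)
```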


If ‘it’ is not clear, ‘it’ is going to be misinterpreted.


Pragmatic Solutions


One of my biggest concerns with a lot of SPW documents is that they almost never take into account the value of pragmatism over perfection. Perfection is not achievable (much less at an acceptable cost) while “better” usually is achievable. (Surely “better” that everyone can do is better than “perfect” that is unachievable?) To those who insist, “evil slug vendors are profit driven and always want to do the minimum,” my response is that economics rules the world and doesn’t necessarily argue for the minimum. Generally speaking, it’s more profitable to find security vulnerabilities and fix them earlier in a product release cycle than waiting until you ship six affected versions of product and now have to produce 120 patches for a single issue (or patch 120 cloud instances). Most vendors know this (or find out the hard way). Customers certainly know this and complain if they have to apply too many patches (or if their cloud service uptime is negatively impacted by a lot of patch-related downtime).


More to the point, unless you can print money, invent a time machine or perfect cloning, time, money and people are always constrained resources so using them well is a must. Doing more X means – often – doing less of Y, because you can’t add more resource you don’t have or can’t find. Worse, doing more of X required for compliance may mean doing less of the Y that actually improves security, since they are mutually exclusive as long as resources are constrained and regulations are written by (or interpreted by) the Knights Who Say Ni.


In particular, I see little evidence that people proposing SPW have done much or any economic analysis of the cost of compliance. I know the government knows how to do this kind of analysis because – for example – the US Department of Defense does resource planning that among other things looks at “how many conflicts are we prepared to fight simultaneously?” rather than, “in a perfect world with unlimited resources and cyborg soldiers, we could take on Frabistatians, the Foobarians, and open a third front combating the Little Green Men from Marsians.” How I wish that other entities – any other entity – would analyze (e.g., do a reality check) on what the impact of X is before it becomes part of a SPW.


Any SPW should include an economic analysis of impact – and look at options. Included in that analysis should be the bane of (quasi-)regulatory ambition, “unintended consequences.” There are almost always unintended consequences of SPW, even those created with good motives. One of the big ones is, if you make it too expensive for suppliers to deal with you, there will be fewer suppliers. And that means choice will decrease and cost will increase. Any SPW should explicitly ask the question, “What would matter the most, be broadly implementable and cost the least (or be the most cost effective for all parties)?”


To provide an example, the NIST Interagency Report 7622 Notional Supply Chain Risk Management Practices for Federal Information Systems (the draft requirement has, I believe, since been excised) at one time wanted the “supplier” (e.g., a vendor) to notify the acquirer (e.g., a government agency) of “all personnel changes involving maintenance.” I suspect that the intent was something to the effect that, if the acquirer (let’s say, DoD) outsources a service, and that service involves a fundamental change of venue – e.g., the maintenance for the US Department of Defense manpower system is outsourced to Hostile Foreign Country, DoD wants to be notified. However, that is not what the requirement stated. One interpretation would be that any time someone touched code who didn’t write the original code (“a personnel change involving maintenance”) that a vendor would have to notify the government. Ok, Oracle has almost 5000 products (and lots and lots of clouds), billions of lines of code, and every day there are a lot of code checkouts where someone is changing something he or she did not write. Are we supposed to tweet all that stuff? What is that going to do for the acquirer? “Kaitlyn checked out and changed code that, like, Ashley wrote, LOL, OMG!”


Figure out what you really want, and what it is worth to you to get it.


Prescriptive Minimization


With rare exceptions, non-technical* process or management standards should not tell industry how exactly to do something, if for no other reason than there is no such thing as “best practice.” There are certainly better or worse practices, but arguably no single practice that everyone does, exactly the same way, that will work equally well for everyone subject to the requirements, for any length of time. Worse, SPW diktats often stifle innovation, drive up costs (without commensurate benefit) and fall prey to the buggy whip effect (where you are specifying how to use buggy whips long after people have moved from horse-and-buggy to Model Ts - or better). Add to all these reasons the economic impact referenced above.


To provide one example, consider (draft) NIST Special Publication 800-160 Systems Security Engineering, containing a requirement that, in the event of a discovered security bug, the engineering team should conduct root cause analysis. This sounds like a Mom and Apple Pie requirement on the face of it, so what could possibly be wrong with that? A clear Best Practice, right? Well, no, not really, on grounds of pragmatism and context.


Consider a security bug that is not only high impact but for which there is an exploit circulating in the wild. For commercial software vendors, job 1 will be getting a patch into customers’ hands (or at least the hands of their customers’ system administrators) and/or patching their cloud instances, as the case may be. Protection of customers under these circumstances is initially way more important than determining causation.


Second, it doesn’t necessarily make sense to do a root cause analysis on every single security bug of every severity. What does make sense is to deep dive on the more severe bugs (e.g., high Common Vulnerability Scoring System (CVSS) Base Score bugs), because those are the ones you really want to ensure you fixed completely (and avoid in the future). You might want to ask the following as part of your analysis:


“How/when did this get into the code base?”
“What is the resulting vulnerability (how can it be exploited)?”
“Have we looked elsewhere for similar problems?”
“Have we added test cases to regression tests and other test suites (like static analysis tools) to ensure that we can automate finding other instances?”
“Have we fixed it everywhere (or everywhere that is relevant)?” and
“Have we attempted to enshrine/transfer knowledge of the severity and impact of this bug across the development organization (so everyone knows why it’s a big deal and how to avoid it in future)?”


Given scarce resources, I’d argue that root cause analysis on a CVSS 0 bug is not as important as thoroughly addressing – and in future avoiding – a CVSS 9.0 or 10.0 bug, along the lines of the above analysis. If a standard enshrines the former, it leads to suboptimal resource allocation (like spreading peanut butter over too many slices of bread). Worse, any company doing the “better” thing will get dinged as being non-standards compliant if there is a Best Practice enshrined in SPW that calls for root cause analysis of everything, regardless of severity. Perfection works against actual security improvement.
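
A minimal sketch, assuming a hypothetical bug record and an illustrative 7.0 cut-off, of the severity-gated analysis argued for above: protect customers first, and reserve the full question list for the severe bugs rather than spreading analyst time over every CVSS 0 report.

```python
# Hypothetical sketch of severity-gated root cause analysis (RCA).
# The 7.0 threshold and the step wording are illustrative assumptions.
RCA_THRESHOLD = 7.0

RCA_QUESTIONS = [
    "How/when did this get into the code base?",
    "What is the resulting vulnerability (how can it be exploited)?",
    "Have we looked elsewhere for similar problems?",
    "Have we added regression tests / static-analysis checks to catch recurrences?",
    "Have we fixed it everywhere that is relevant?",
    "Have we shared the severity and impact across the development organization?",
]

def plan_response(cvss_base, exploit_in_wild):
    """Patch first when customers are exposed; deep-dive only on severe bugs."""
    steps = []
    if exploit_in_wild:
        steps.append("Ship the patch / patch cloud instances before anything else")
    if cvss_base >= RCA_THRESHOLD:
        steps.extend(RCA_QUESTIONS)
    else:
        steps.append("Fix in the normal release cycle; no full RCA required")
    return steps

if __name__ == "__main__":
    for step in plan_response(cvss_base=9.8, exploit_in_wild=True):
        print("-", step)
```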


Another “best practice” I see shilled relentlessly is third party static analysis. I’ve opined on why that is not a best practice in previous blogs, but I have new reasons to avoid it like the plague it is, along with a real-world example of its high cost and low utility. Recently, we were made aware that a customer of Oracle (without asking our permission, which we would not have given if asked) submitted our software to a third party that does static analysis on binaries. Where to start with how extremely bad this is? Numero uno: the customer violated their license agreement with Oracle, which alone made their actions completely unacceptable. Add to that, the report we were furnished included alleged vulnerabilities not merely in Oracle code but in another product Not Made By Oracle. (Needless to say, we could neither analyze those issues nor fix them in the event they turned out to be actual vulnerabilities and really, we did not want to see alleged vulnerabilities in Someone Else’s Code. That information is extremely sensitive and should not have been given to us.) Last but far from least was the fact that – drum roll – not one of the alleged security issues the third party reported was, in fact, an actual security vulnerability. 0% accuracy: zilch, zip, nada, bubkes, a’ohe mea. Further, one of our best security leads (I’d bill him out at least $2,000 an hour) wasted his very valuable time determining that there was “no there, there.”


Running a tool (if and only if you have permission to do it) is nothing; the ability to analyze the results is everything. Third parties cannot do that since they have no actual code knowledge of what they are running the tool on, especially not on a code base as big as Oracle’s is. Third party static analysis is thus only a best practice if you want to waste time and money. But it’s the vendor’s time that is being wasted (maybe that third party should reimburse us the $2K an hour our kahuna spent analyzing their errata?), and the customer’s money. And last, but really first, violating licensing terms is unacceptable business conduct.


Summary


Nobody is perfect, but with all the attention being focused on cybersecurity, it would be really helpful if attempted problem solvers writing SPW could sharpen their – I was going to say, knives, but I am not sure I mean that! – focus. Yes, a sharpened focus is what is needed. Cybersecurity is an important area. Better security is achievable, but only if we know what we are worried about, we speak the same language, we can look at relative costs and benefits, and we allow for latitude in how we get to better. We can’t do everything, but everybody can do something. Let’s do some of the things that matter – and that won’t make us spend resources checking boxes instead of making sure nobody can break into the boxes.


* I note that one reason for technical standards is, of course, interoperability. In which case, people do need to implement, say, the Secure Whateverworks Protocol (SWP) a particular way, or it won’t work with another vendor’s implementation of SWP.


For More Information


Ruthlessly self-serving announcement follows: my sister and I, writing as Maddi Davidson, are pleased to announce that we have completed our third book in the Miss-Information Technology Mystery Series, With Murder You Get Sushi. (Also, our short story “Heartfelt” will appear in Mystery Times Ten this month, published by Buddhapuss Ink.)


Apropos of nothing having to do with security, I have discovered and become totally addicted to The Palliser Novels by Anthony Trollope. Like high class soap opera, only you get classics points for reading them. (Best of all, nobody in the book is named “Kardashian.”)




Problem Statement
Precise Language and Scope
Pragmatic Solutions
Prescriptive Minimization

I offer the "four Ps of SPW" for those who are attempting to improve cybersecurity by fiat, or in other ways intended to compel the market, in hopes that we may collectively get to better security without sinking into the swamp of despair, dallying in the desert of dashed hopes, trekking through the tundra of too-obscure requirements (nice use of alliteration, no?) … you get the point. While I think my advice is generally applicable in the SPW (say “spew”) realm, the context for my discussion is assurance slash supply chain risk mitigation since that’s what I seem to review most often.

Problem Statement

I cannot tell you how many SPW documents I have read in which Someone Was Attempting to Make Someone Else Do Something More Securely, only it wasn’t clear what, exactly, or more importantly, why (or even that the requirements would result in “better security”). Anything that seeks to impose Something Security-Oriented On Someone needs a clear problem statement. Without one, a proposed SPW becomes an expensive wish list with no associated benefits. Ultimately, the seller has no idea what the buyer really wants or needs. If a government agency cannot explain what they are really worried about, in language the “comply-ee” can understand, they shouldn’t be surprised if they get a chocolate-covered cockroach (eew) when they ask for something sweet, crunchy and locally sourced. (I’d add “sustainable,” as there seems to be no shortage of cockroaches.)

With regard to security, “supply chain” has become the mantra for attempting to regulate almost 100% of what businesses do. Poor quality, “backdoor boogiemen,” assurance and “supply chain shutdown” are all very (very!) different problems. Worse, an ambiguous “supply chain security” standard may end up encompassing 100% of business operations. Example: my employer does not make its own paper clips or wood stirrers for coffee cups. Do we really need to worry about a shortage of either? No? Then don’t describe “supply chain requirements” that ask technology suppliers to track the wood sourced for our coffee stirrers. Buying a poor quality product, for example, is a business risk. It’s not, per se, a supply chain risk. Furthermore, while poor quality may lead to poor security, not all security problems are a result of quality issues. Some are a result of buyers not understanding that commercial off-the-shelf (COTS) software, while general purpose and often very good, is not “all purpose” and not designed for all threat environments.

The second aspect of a problem statement is the provision of use cases. A use case is a fancy way of saying, “for example.” Use cases are very important to help turn a problem statement into an “aha” moment for the reader. Moreover, use cases are important to limit scope and ensure that the SPW’s requirements are appropriate to serve its stated objectives. Absent a use case, you never really know what’s being asked for (and where it applies and where it does not apply). Use cases absolutely need to be contained within a requirements document.

For example, consider the US National Institute of Standards and Technology (NIST) Special Publication 800-152 A Profile for U.S. Federal Cryptographic Key Management Systems Draft 3 (December 2014). This special pub describes a combination of technical standards and policies around cryptographic key management systems. The problem is, nowhere in reading the document is it evident what, exactly, this applies to. Is this just “special, super secret key management systems for classified US government systems?” Or, does it apply to key management for things like Transport Layer Security (TLS) (or other cryptographic protocols that are well-established standards)? Why it matters: because if there are not use cases that define applicability, someone will assume it applies to everything. And, applying these requirements may conflict with (if not break) other standards.

90% of life isn’t showing up, it’s solving the right problem. You can’t solve the right problem if you don’t know (or cannot articulate) what it is, with some “for instances.”

Precise Language and Scope

It is astonishing to me how many SPW documents do not define core terminology used therein. Without a precise set of definitions, nobody really knows what is meant, and if something is vague, it’s going to be misinterpreted. (Worse, an undefined term may end up meaning whatever a “certifier” or other compliance overlord thinks it means: nobody ever really knows whether they are compliant when compliance depends on the certifier’s interpretation.) Core terminology must be precisely and narrowly defined within the document. As the famous line goes in Let’s Call The Whole Thing Off,

“You like potato and I like potahto
You like tomato and I like tomahto
Potato, potahto, tomato, tomahto
Let’s call the whole thing off.” (Lyrics by Ira Gershwin, melody by George Gershwin)

The problem is, if a SPW is enshrined and applied, you can’t call it off. At least until the next revision. Figure out what to call a spud and make it clear, please!

For example, in the context of software, what is a vulnerability? A configuration error (leading to a security weakness)? A defect in software (that leads to a security weakness)? Any defect in software (regardless of the impact)? What if the design was intentional? Is a policy violation a vulnerability? A vulnerability cannot, surely, be all the above! And in fact, it isn’t, but just saying “vulnerability” and conflating all the above means that nobody will be able to come up with a remedy that works for all cases. (Note: for configurable software, if you configure it so my grandmother can hack into it, it’s not a “vulnerability,” it’s “user error.” There is only so much you can do to prevent users from shooting themselves in the foot when we are talking about firearms that allow you to point them at your feet.) Another example: what is a “module?” The answer may be very different depending on whether you are a hardware person or a software person.

If ‘it’ is not clear, ‘it’ is going to be misinterpreted.

Pragmatic Solutions

One of my biggest concerns with a lot of SPW documents is that they almost never take into account the value of pragmatism over perfection. Perfection is not achievable (much less at an acceptable cost) while “better” usually is achievable. (Surely “better” that everyone can do is better than “perfect” that is unachievable?) To those who insist, “evil slug vendors are profit driven and always want to do the minimum,” my response is that economics rules the world and doesn’t necessarily argue for the minimum. Generally speaking, it’s more profitable to find security vulnerabilities and fix them earlier in a product release cycle than waiting until you ship six affected versions of product and now have to produce 120 patches for a single issue (or patch 120 cloud instances). Most vendors know this (or find out the hard way). Customers certainly know this and complain if they have to apply too many patches (or if their cloud service uptime is negatively impacted by a lot of patch-related downtime).
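To make that fan-out arithmetic concrete, here is a minimal Python sketch; the “20 platform bundles per version” figure and the one-change development baseline are my illustrative assumptions, not Oracle numbers:

# Hypothetical illustration of why fixing early is cheaper: a fix made before
# release is a single code change, while the same fix made after shipping
# fans out into a patch per affected version per supported platform bundle.

def patches_required(affected_versions: int, platform_bundles: int) -> int:
    """Number of separate patch deliverables for one security issue."""
    return affected_versions * platform_bundles

fixed_in_development = 1  # one code change on the main code line (assumed baseline)
fixed_after_shipping = patches_required(affected_versions=6, platform_bundles=20)

print(f"Caught in development: {fixed_in_development} change")
print(f"Caught after shipping six versions: {fixed_after_shipping} patches to build, test and release")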

More to the point, unless you can print money, invent a time machine or perfect cloning, time, money and people are always constrained resources so using them well is a must. Doing more X means – often – doing less of Y, because you can’t add more resource you don’t have or can’t find. Worse, doing more of X required for compliance may mean doing less of the Y that actually improves security, since they are mutually exclusive as long as resources are constrained and regulations are written by (or interpreted by) the Knights Who Say Ni.

In particular, I see little evidence that people proposing SPW have done much or any economic analysis of the cost of compliance. I know the government knows how to do this kind of analysis because – for example – the US Department of Defense does resource planning that among other things looks at “how many conflicts are we prepared to fight simultaneously?” rather than, “in a perfect world with unlimited resources and cyborg soldiers, we could take on the Frabistatians, the Foobarians, and open a third front combating the Little Green Men from Mars.” How I wish that other entities – any other entity – would analyze (that is, reality-check) what the impact of X would be before it becomes part of a SPW.

Any SPW should include an economic analysis of impact – and look at options. Included in that analysis should be the bane of (quasi-)regulatory ambition, “unintended consequences.” There are almost always unintended consequences of SPW, even those created with good motives. One of the big ones is, if you make it too expensive for suppliers to deal with you, there will be fewer suppliers. And that means choice will decrease and cost will increase. Any SPW should explicitly ask the question, “What would matter the most, be broadly implementable and cost the least (or be the most cost effective for all parties)?”

To provide an example, the NIST Interagency Report 7622 Notional Supply Chain Risk Management Practices for Federal Information Systems (the draft requirement has, I believe, since been excised) at one time wanted the “supplier” (e.g., a vendor) to notify the acquirer (e.g., a government agency) of “all personnel changes involving maintenance.” I suspect that the intent was something to the effect that, if the acquirer (let’s say, DoD) outsources a service, and that service involves a fundamental change of venue – e.g., the maintenance for the US Department of Defense manpower system is outsourced to Hostile Foreign Country, DoD wants to be notified. However, that is not what the requirement stated. One interpretation would be that any time someone touched code who didn’t write the original code (“a personnel change involving maintenance”) that a vendor would have to notify the government. Ok, Oracle has almost 5000 products (and lots and lots of clouds), billions of lines of code, and every day there are a lot of code checkouts where someone is changing something he or she did not write. Are we supposed to tweet all that stuff? What is that going to do for the acquirer? “Kaitlyn checked out and changed code that, like, Ashley wrote, LOL, OMG!”

Figure out what you really want, and what it is worth to you to get it.

Prescriptive Minimization

With rare exceptions, non-technical* process or management standards should not tell industry how exactly to do something, if for no other reason than there is no such thing as “best practice.” There are certainly better or worse practices, but arguably no single practice that everyone does, exactly the same way, that will work equally well for everyone subject to the requirements, for any length of time. Worse, SPW diktats often stifle innovation, drive up costs (without commensurate benefit) and fall prey to the buggy whip effect (where you are specifying how to use buggy whips long after people have moved from horse-and-buggy to Model Ts - or better). Add to all these reasons the economic impact referenced above.

To provide one example, consider (draft) NIST Special Publication 800-160 Systems Security Engineering, containing a requirement that, in the event of a discovered security bug, the engineering team should conduct root cause analysis. This sounds like a Mom and Apple Pie requirement on the face of it, so what could possibly be wrong with that? A clear Best Practice, right? Well, no, not really, on grounds of pragmatism and context.

Consider a security bug that is not only high impact but for which there is an exploit circulating in the wild. For commercial software vendors, job 1 will be getting a patch into customers’ hands (or at least the hands of their customers’ system administrators) and/or patching their cloud instances, as the case may be. Protection of customers under these circumstances is initially way more important than determining causation.

Second, it doesn’t necessarily make sense to do a root cause analysis on every single security bug of every severity. What does make sense is to deep dive on the more severe bugs (e.g., high Common Vulnerability Scoring System (CVSS) Base Score bugs), because those are the ones you really want to ensure you fixed completely (and avoid in the future). You might want to ask the following as part of your analysis:

“How/when did this get into the code base?”
“What is the resulting vulnerability (how can it be exploited)?”
“Have we looked elsewhere for similar problems?”
“Have we added test cases to regression tests and other test suites (like static analysis tools) to ensure that we can automate finding other instances?”
“Have we fixed it everywhere (or everywhere that is relevant)?” and
“Have we attempted to enshrine/transfer knowledge of the severity and impact of this bug across the development organization (so everyone knows why it’s a big deal and how to avoid it in future)?”

Given scarce resources, I’d argue that root cause analysis on a CVSS 0 bug is not as important as thoroughly addressing – and in future avoiding – a CVSS 9.0 or 10.0 bug, along the lines of the above analysis. If a standard enshrines the former, it leads to suboptimal resource allocation (like spreading peanut butter over too many slices of bread). Worse, any company doing the “better” thing will get dinged as being non-standards compliant if there is a Best Practice enshrined in SPW that calls for root cause analysis of everything, regardless of severity. Perfection works against actual security improvement.
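As a thought experiment, a severity-gated triage policy along those lines might look like the sketch below; the bug record, the CVSS 7.0 threshold and the function names are illustrative assumptions, not a description of Oracle’s actual process:

# Hypothetical severity-gated triage: every bug gets fixed, but only the more
# severe bugs get the full root cause analysis checklist, so scarce analyst
# time is not spread like peanut butter over CVSS 0 issues.

ROOT_CAUSE_QUESTIONS = [
    "How/when did this get into the code base?",
    "What is the resulting vulnerability (how can it be exploited)?",
    "Have we looked elsewhere for similar problems?",
    "Have we added regression tests and static analysis checks to catch other instances?",
    "Have we fixed it everywhere that is relevant?",
    "Have we shared the severity and impact across the development organization?",
]

def triage(bug_id: str, cvss_base: float, rca_threshold: float = 7.0) -> list:
    """Return the work items for one security bug."""
    work = ["Fix and patch: " + bug_id]
    if cvss_base >= rca_threshold:  # deep dive only on the severe ones
        work.extend("Root cause analysis - " + q for q in ROOT_CAUSE_QUESTIONS)
    return work

for item in triage("BUG-1234", cvss_base=9.8):
    print(item)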

Another “best practice” I see shilled relentlessly is third party static analysis. I’ve opined on why that is not a best practice in previous blogs, but I now have a real-world example of its high cost and low utility – and new reasons to avoid it like the plague it is. Recently, we were made aware that a customer of Oracle (without asking our permission, which we would not have given if asked) submitted our software to a third party that does static analysis on binaries. Where to start with how extremely bad this is? Numero uno: the customer violated their license agreement with Oracle, which alone made their actions completely unacceptable. Add to that, the report we were furnished included alleged vulnerabilities not merely in Oracle but in another product Not Made By Oracle. (Needless to say, we could neither analyze those issues nor fix them in the event they turned out to be actual vulnerabilities – and really, we did not want to see alleged vulnerabilities in Someone Else’s Code. That information is extremely sensitive and should not have been given to us.) Last but far from least was the fact that – drum roll – not one of the alleged security issues the third party reported was, in fact, an actual security vulnerability. 0% accuracy: zilch, zip, nada, bubkes, a’ohe mea. Further, one of our best security leads (I’d bill him out at $2,000 an hour, at least) wasted his very valuable time determining that there was “no there, there.”

Running a tool (if and only if you have permission to do it) is nothing; the ability to analyze the results is everything. Third parties cannot do that since they have no actual code knowledge of what they are running the tool on, especially not on a code base as big as Oracle’s is. Third party static analysis is thus only a best practice if you want to waste time and money. But it’s the vendor’s time that is being wasted (maybe that third party should reimburse us the $2K an hour our kahuna spent analyzing their errata?), and the customer’s money. And last, but really first, violating licensing terms is unacceptable business conduct.

Summary

Nobody is perfect, but with all the attention being focused on cybersecurity, it would be really helpful if attempted problem solvers writing SPW could sharpen their – I was going to say, knives, but I am not sure I mean that! – focus. Yes, a sharpened focus is what is needed. Cybersecurity is an important area. Better security is achievable, but only if we know what we are worried about, we speak the same language, we can look at relative costs and benefits, and we allow for latitude in how we get to better. We can’t do everything, but everybody can do something. Let’s do some of the things that matter – and that won’t make us spend resources checking boxes instead of making sure nobody can break into the boxes.

* I note that one reason for technical standards is, of course, interoperability. In which case, people do need to implement, say, the Secure Whateverworks Protocol (SWP) a particular way, or it won’t work with another vendor’s implementation of SWP.

For More Information

Ruthlessly self-serving announcement follows: my sister and I, writing as Maddi Davidson, are pleased to announce that we have completed our third book in the Miss-Information Technology Mystery Series, With Murder You Get Sushi. (Also, our short story “Heartfelt” will appear in Mystery Times Ten this month, published by Buddhapuss Ink.)

Apropos of nothing having to do with security, I have discovered and become totally addicted to The Palliser Novels by Anthony Trollope. Like high class soap opera, only you get classics points for reading them. (Best of all, nobody in the book is named “Kardashian.”)

Mandated Third Party Static Analysis: Bad Public Policy, Bad Security

Tue, 2014-03-11 16:08

Many commercial off-the-shelf (COTS) vendors have recently seen an uptick of interest by their customers in third party static analysis or static analysis of binaries (compiled code). Customers who are insisting upon this in some cases have referenced the (then-)SANS Top Twenty Critical Controls (http://www.sans.org/critical-security-controls/) to support their position, specifically, Critical Control 6, Application Software Security:

"Configuration/Hygiene: Test in-house developed and third-party-procured web and other application software for coding errors and malware insertion, including backdoors, prior to deployment using automated static code analysis software. If source code is not available, these organizations should test compiled code using static binary analysis tools (emphases added). In particular, input validation and output encoding routines of application software should be carefully reviewed and tested."


Recently, the "ownership" of the 20 Critical Controls has passed to the Council on CyberSecurity and the particular provision on third party static analysis has been excised. Oracle provided feedback on the provision (more on which below) and appreciates the responsiveness of the editors of the 20 Critical Controls to our comments.


The argument for third party code analysis is that customers would like to know that they are getting “reasonably defect-free” code in a product. All things being equal, knowing you are getting better quality code than not is a good thing, while noting that there is no defect-free or even security defect-free software – no tool finds all problems and they generally don’t find design defects. Also, a product that is “testably free” of obvious defects may still have significant security flaws – like not having any authentication, access control, or auditing. Nobody is arguing that static analysis isn’t a good thing – that’s why a lot of vendors already do it and don’t need a third party to “attest” to their code (assuming there is a basis for trusting the third party other than their saying “trust us”).


Oracle believes third party static analysis is at best infeasible for organizations with mature security assurance practices and – well, a bad idea, not to put too fine a point on it. The reasons why it is a bad idea are expanded upon in detail below, and include: 1) worse, not better, security; 2) increased security risk to customers; 3) an increased risk of intellectual property theft; and 4) increased costs for commercial software providers without a commensurate increase in security. Note that this discussion does not address the use of other tools – such as so-called web vulnerability analysis tools – that operate against “as installed” object code. These tools also have challenges (i.e., a high rate of false positives) but do not in general pose the same security threats, risks and high costs that static analysis as conducted by third parties does.


Discussion: Static analysis tools are one of many means by which vendors of software, including commercial off-the-shelf (COTS) software, can find coding defects that may lead to exploitable security vulnerabilities. Many vendors – especially large COTS providers - do static analysis of their own code as part of a robust, secure software development process. In fact, there are many different types of testing that can be done to improve security and reliability of code, to include regression testing (ensuring that changes to code do not break something else, and that code operates correctly after it has been modified), “fuzzing” tools, web application vulnerability tools and more. No one tool finds all issues or is necessarily even suitable for all technologies or all programming languages. Most companies use a multiplicity of tools that they select based on factors such as cost, ease-of-use, what the tools find, how well and how accurately, programming languages the tool understands, and other factors. Note of course that these tools must be used in a greater security assurance context (security training, ethical hacking, threat modeling, etc.), echoing the popular nostrum that security has to be “baked in, not bolted on.” Static analysis and other tools can’t “bake in” security – just find coding errors that may lead to security weaknesses. More to the point, static analysis tools should correctly be categorized as “code analysis tools” rather than “code testing tools,” because they do not automatically produce accurate or actionable results when run and cannot be used, typically, by a junior developer or quality assurance (QA) person.


These tools must in general be “tuned” or “trained” or in some cases “programmed” to work against a particular code base, and thus the people using them need to be skilled developers, security analysts or QA experts. Oracle has spent many person-years evaluating the tools we use, and has made a significant commitment to a particular static analysis tool that works best against much – but not all – of our code base. We have found that results are not typically repeatable from code base to code base even within a company. That is, just because the tool works well on one code base does not mean it will work equally well on another product – another reason to work with a strong vendor who will consider improving the tool to address weaknesses. In short, static analysis tools are not a magic bullet for all security ills, and the odds of a third party being able to do meaningful, accurate and cost-effective static code analysis are slim to none.


1. Third party static analysis is not industry-standard practice.
Despite the marketing claims of the third parties that do this, “third party code review” is not “industry best practice.” As it happens, it is certainly not industry-standard practice for multiple reasons, not the least of which is the lack of validation of the entities and tools used to do such “validation” and the lack of standards to measure efficacy, such as what does the tool find, how well, and how cost effectively? As Juvenal so famously remarked, “Quis custodiet ipsos custodes?” (Who watches the watchmen?) Any third party can claim, and many do, that “we have zero false positives” but there is no way to validate such puffery – and it is puffery. (Sarcasm on: I think any company that does static analysis as a service should agree to have their code analyzed by a competitor. After all, we only have Company X’s say-so that they can find all defects known to mankind with zero false positives, whiten your teeth and get rid of ring-around-the-collar, all with a morning-fresh scent!)


The current International Organization for Standardization (ISO) standard for assurance (which encompasses the validation of secure code development), the international Common Criteria (ISO-15408), is, in fact, retreating from the need for source code access currently required at higher assurance levels (e.g., Evaluation Assurance Level (EAL) 4). While limited vulnerability analysis has been part of higher assurance evaluations currently being deprecated by the U.S. National Information Assurance Partnership (NIAP), static analysis has not been a requirement at commercial assurance levels. Hence, “the current ISO assurance standard” does not include third party static code analysis and thus, “third party static analysis” is not standard industry practice. Lastly, “third party code analysis” is clearly not “industry best practice” if for no other reason than all the major COTS vendors are opposed to it and will not agree to it. We are already analyzing our own code, thanks very much.


(It should be noted that third party systematic manual code review is equally impractical for the code bases of most commercial software. The Oracle database, for example, has multiple millions of lines of code. Manual code review for the scale of code most COTS vendors produce would accomplish little except pad the bank accounts of the consultants doing it without commensurate value provided (or risk reduction) for either the vendor or the customers of the product. Lastly, the nature of commercial development is that the code is continuously in development: the code base literally changes daily. Third party manual code review in these circumstances would accomplish absolutely nothing. It would be like painting a house while it is under construction.)


2. Many vendors already use third party tools to find coding errors that may lead to exploitable security vulnerabilities.
As noted, many large COTS vendors have well-established assurance programs that include the use of a multiplicity of tools to attempt to find not merely defects in their code, but defects that lead to exploitable security vulnerabilities. Since only a vendor can actually fix a product defect in their proprietary code, and generally most vulnerabilities need a “code fix” to eliminate the vulnerability, it makes sense for vendors to run these tools themselves. Many do.


Oracle, for example, has a site license for a COTS static analysis tool and Oracle also produces a static analysis tool in-house (Parfait, which was originally developed by Sun Labs). With Parfait, Oracle has the luxury of enhancing the tool quickly to meet Oracle-specific needs. Oracle has also licensed a web application vulnerability testing tool, and has produced a number of in-house tools that focus on Oracle’s own (proprietary) technologies. It is unlikely that any third party tool can fuzz Oracle PL/SQL as well as Oracle’s own tools, or analyze Oracle’s proprietary SQL networking protocol as well as Oracle’s in-house tools do. The Oracle Ethical Hacking Team (EHT) also develops tools that they use to “hack” Oracle products, some of which are “productized” for use by other development and QA teams. As Oracle runs Oracle Corporation on Oracle products, Oracle has a built-in incentive to write and deliver secure code. (In fact, this is not unusual: many COTS vendors run their own businesses on their own products and are thus highly motivated to build secure products. Third party code testers typically do not build anything that they run their own enterprises on.)


The above tool usage within Oracle is in addition to extensive regression testing of functionality to high levels of code coverage, including regression testing of security functionality. Oracle also uses other third party security tools (many of which are open source) that are vetted and recommended by the Oracle Software Security Assurance (OSSA) team. Additionally, Oracle measures compliance with “use of automated tools” as part of the OSSA program. Compliance against OSSA is reported quarterly to development line-of-business owners as well as executive management (the company president and the CEO). Many vendors have similarly robust assurance programs that include static analysis as one of many means to improve product security.


Several large software vendors have acquired static analysis (or other) code analysis tools. HP, for example, acquired both Fortify and WebInspect, and IBM acquired Ounce Labs. This is indicative both of these vendors’ commitment to “the secure code marketplace” and, one assumes, to secure development within their own organizations. Note that while both vendors have service offerings for the tools, neither is pushing “third party code testing,” which says a lot. Everything, actually.


Note that most vendors will not provide static analysis results to customers for valid business reasons, including ensuring the security of all customers. For example, a vendor who finds a vulnerability may often fix the issue in the version of product that is under development (i.e., the “next product train leaving the station”). Newer versions are more secure (and less costly to maintain since the issue is already fixed and no patch is required). However, most vendors do not - or cannot - fix an issue in all shipping versions of product and certainly not in versions that have been deprecated. Telling customers the specifics of a vulnerability (i.e., by showing them scan results) would put all customers on older, unfixed or deprecated versions at risk.


3. Testing COTS for coding errors and malware insertion, including backdoors, prior to deployment using automated static code analysis software increases costs without a commensurate return on investment (ROI).
The use of static code analysis software is a highly technical endeavor requiring skilled development personnel. There are skill requirements and a necessity for detailed operational knowledge of how the software is built to help eliminate false positives, factors that raise the cost of this form of “testing.” Additionally, static code analysis tools are not the tool of choice for detecting malware or backdoors. (It is, in fact, trivial to come up with a “backdoor” that, if inserted into code, would not be detected by even the best static analysis tools. There was an experiment at Sandia Labs in which a backdoor was inserted into code and code reviewers were told where in the code to look for it. They could not find it – even knowing where to look.)


If the real concern of a customer insisting on a third party code scan is malware and backdoor detection: it won’t work and thus represents an extremely expensive – and useless – distraction.


4. Third party code analysis will diminish overall product security.
It is precisely leading vendors’ experience with static analysis tools that contributes to their unwillingness to have third parties attempt to analyze code – emphasis on “attempt.” None of these tools are “plug and play”: in some cases, it has taken years, not months, to be able to achieve actionable results, even from the best available static analysis tools. These are in fact code analysis tools and must be “tuned” – and in some cases actually “programmed” – to understand code, and must typically be run by an experienced developer (that is, a developer who understands the particular code base being analyzed) for results to be useful and actionable. There are many reasons why static analysis tools either raise many false positives, or skip entire bodies of code. For example, because of the way Oracle implements particular functionality (memory management) in the database, static analysis tools that look for buffer overflows either do not work, or raise false positives (Oracle writes its own checks to look for those issues).


The rate of false positives from use of a “random” tool run by inexperienced operators – especially on a code base as large as that of most commercial products – would put a vendor in the position of responding to unsubstantiated fear, uncertainty, and doubt (FUD). In a code base of 10,000,000 lines of code, even a false positive rate of one per 1000 lines of code would yield 10,000 “false positives” to chase down. The cost of doing this is prohibitive. (One such tool run against a large Oracle code base generated a false positive for every 3.4 lines of code, or about 160,000 false positives in toto due to the size of the code base.)
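To put that arithmetic in one place, here is a back-of-the-envelope sketch; the fifteen minutes of triage per finding and the 2,000 working hours per year are my illustrative assumptions:

# Hypothetical estimate of the cost of dismissing false positives from an
# untuned tool run against a large code base.

def triage_burden(lines_of_code: int, fp_per_kloc: float, minutes_per_fp: float = 15.0):
    """Return (false positives, person-years) needed just to dismiss them."""
    false_positives = int(lines_of_code / 1000 * fp_per_kloc)
    hours = false_positives * minutes_per_fp / 60
    person_years = hours / 2000  # roughly 2,000 working hours per person per year
    return false_positives, person_years

# The example from the text: 10,000,000 lines at one false positive per 1,000 lines.
fps, years = triage_burden(10_000_000, fp_per_kloc=1.0)
print(f"{fps:,} false positives, roughly {years:.1f} person-years just to triage them")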


This is why most people using these tools must “tune” them to drown out “noise.” Many vendors have already had this false positive issue with customers running web application vulnerability tools and delivering in some cases hundreds of pages of “alarms” in which there were, perhaps, a half page of actionable issues. The rate of false positives is the single biggest determinant whether these tools are worth using or an expensive distraction (aka “rathole”).


No third party firm has to prove that their tool is accurate – especially not if the vendor is forced to use a third party to validate their code – and thus there is little to no incentive to improve their tool. Consultants get paid more the longer they are on site and working. A legislative or “standards” requirement for “third party code analysis” is therefore a license for the third party doing it to print money. Putting it differently, if the use of third party static analysis was accurate and cost effective, why wouldn’t vendors already be doing it? Instead, many vendors use static analysis tools in-house, because they own the code, and are willing to assume the cost of going up the learning curve for a long term benefit to them of reduced defects (and reduced cost of fixing these defects as more vulnerabilities are found earlier in the development cycle).


In short, the use of a third party is the most expensive, non-useful, high-cost attempt at “better code” most vendors could possibly use, and would result in worse security, not better security as in-house “security boots on the ground” are diverted to working with the third party. It is unreasonable to expect any vendor to in effect tune a third party tool and train the third party on their code – and then have to pay the third party for the privilege of doing it. Third party static analysis represents an unacceptably high opportunity cost caused by the “crowding out effect” of taking scarce security resources and using them on activity of low value to the vendor and to their customers. The only “winner” here is the third party. Ka-chink. Ka-chink.


5. Third party code analysis puts customers at increased risk.
As noted, there is no standard for what third party static analysis tools find, let alone how well and how economically they find it. More problematically, there are no standards for protection of any actual vulnerabilities these tools find. In effect, third party code analysis allows the third party to amass a database of unfixed vulnerabilities in products without any requirements for data protection or any recourse should that information be sold, incorporated into a hacking tool or breached. The mere fact of a third party amassing such sensitive information makes the third party a hacker target. Why attempt to hack products one by one if you can break into a third party’s network and get a listing of defects across multiple products – in the handy “economy size?” The problem is magnified if the “decompiled” source code is stored at the third party: such source code would be an even larger hacker target than the list of vulnerabilities the third party found.


Most vendors have very strict controls not merely on their source code, but on the information about product vulnerabilities that they know about and are triaging and fixing. Oracle Corporation, for example, has stringent security vulnerability handling policies that are promulgated and “scored” as part of Oracle’s software and hardware assurance program. Oracle uses its own secure database technology (row level access control) to enforce “need to know” on security vulnerabilities, information that is considered among the most sensitive information the company has. Security bugs are not published (meaning, they are not generally searchable and readable across the company or accessible by customers). Also, security bug access is stringently limited to those working on a bug fix (and selected others, such as security analysts and the security point of contact (SPOC) for the development area).


One of the reasons Oracle is stringent about limiting access to security vulnerability information is that this information often does leak when “managed” by third parties, even third parties with presumed expertise in secret-keeping. In the past, MI5 circulated information about a non-public Oracle database vulnerability among UK defense and intelligence entities (it should be noted that nobody reported this to Oracle, despite the fact that only Oracle could issue a patch for the issue). Oracle was only notified about the bug by a US commercial company to whom the information had leaked. As the saying goes, two people can’t keep a secret.


There is another risk that has not generally been considered in third party static analysis, and that is the increased interest in cyber-offense. There is evidence that the market for so-called zero-day vulnerabilities is being fueled in part by governments seeking to develop cyber-offense tools. (Stuxnet, for example, allegedly made use of at least four “zero-day” vulnerabilities: that is, vulnerabilities not previously reported to a vendor.) Coupled with the increased interest of military suppliers/system integrators in getting into the “cyber security business,” it is not a stretch to think that at least some third parties getting into the “code analysis” business can and would use that as an opportunity to “sell to both sides” – use legislative fiat or customer pressure to force vendors to consent to static analysis, and then surreptitiously sell the vulnerabilities they found to the highest bidder as zero-days. Who would know?


Governments in particular cannot reasonably simultaneously fuel the market in zero days, complain at how irresponsible their COTS vendors are for not building better code and/or insist on third party static analysis. This is like stoking the fire and then complaining that the room is too hot.


6. Equality of access to vulnerability information protects all customers.
Most vendors do not provide advance information on security vulnerabilities to some customers but not others, or more information about security vulnerabilities to some customers but not others. As noted above, one reason for this is the heightened risk that such information will leak, and put the customers “not in the know” at increased risk. Not to mention, all customers believe their secrets are as worthy of protection as any other customer: nobody wants to be on the “Last Notified” list.


Thus, third party static analysis is problematic because it may result in violating basic fairness and equality in terms of vulnerability disclosure in the rare instances where these scans actually find exploitable vulnerabilities. The business model for some firms offering static analysis as a service is to convince a software vendor’s customers that the vendor is an evil slug and cannot be trusted, and thus that the customer should insist on the third party analyzing the vendor’s code base.


There is an implicit assumption that the vendor will fix vulnerabilities that the third party static analysis finds immediately, or at least, before the customer buys/installs the product. However, the reality is more subtle than that. For one thing, it is almost never the case that a vulnerability exists in one and only one version of product: it may also exist on older versions. Complicating the matter: some issues cannot be “fixed” via a patch to the software but require the vendor to rearchitect functionality. This typically can only be done in so-called major product releases, which may only occur every two to three years. Furthermore, such issues often cannot be fixed on older versions because the scope of change is so drastic it will break dependent applications. Thus, a customer (as well as the third party) has information about a “not-easily-fixed” vulnerability which puts other customers at a disadvantage and at risk to the extent that information may leak.


Therefore, allowing some customers access to the results of a third party code scan in advance of a product release would violate most vendors’ disclosure policies as well as actually increasing risk to many, many customers, and potentially that increased risk could exist for a long period of time.


7. Third party code analysis sets an unacceptable precedent that risks vendors’ core intellectual property (IP).
COTS vendors maintain very tight control over their proprietary source code because it is core, high-value IP. As such, most COTS vendors will not allow third parties to conduct static analysis against source code (and for purposes of this discussion, this includes static analysis against binaries, which typically violates standard license agreements).


Virtually all companies are aware of the tremendous cost of intellectual property theft: billions of dollars per year, according to published reports. Many nation states, including those that condone if not encourage wholesale intellectual property theft, are now asking for source code access as a condition of selling COTS products into their markets. Most COTS vendors have refused these requests. One can easily imagine that for some nation states, the primary reason to request source code access (or, alternatively, “third party analysis of code”) is for intellectual property theft or economic espionage. Once a government-sanctioned third party has access to source code, so may the government. (Why steal source code if you can get a vendor to gift wrap it and hand it to you under the rubric of “third party code analysis?”)


Another likely reason some governments may insist on source code access (or third party code analysis) is to analyze the code for weaknesses they then exploit for their own national security purposes (e.g., more intellectual property theft). All things being equal, it is easier to find defects in source code than in object code. Refusing to accede to these requests – in addition to, of course, a vendor doing its own code analysis and defect remediation – thus protects all customers. In short, agreeing to any third party code analysis involving source code – either static analysis or static analysis of binaries - would make it very difficult if not impossible for a vendor to refuse any other similar requests for source code access, which would put their core intellectual property at risk. Third party code analysis is a very bad idea because there is no way to “undo” a precedent once it is set.


Summary
Software should have a wide variety of tests performed before it is shipped and additional security tests (such as penetration tests) should be used against “as-deployed” software. However, the level of testing should be commensurate with the risk, which is both basic risk management and appropriate (scarce) resource management. A typical firm has many software elements, most probably COTS, and to suggest that they all be tested with static analysis tools begs a sanity check. The scope of COTS alone argues against this requirement: COTS products run the gamut from operating systems to databases to middleware, business intelligence and other analytic tools, business applications (accounting, supply chain management, manufacturing) as well as specialized vertical market applications (e.g., clinical trial software), representing a number of programming languages and billions – no, hundreds of billions – of lines of code.


The use of static analysis tools in development to help find and remediate security vulnerabilities is a good assurance practice, albeit a difficult one because of the complexity of software and the difficulty of using these tools. These tools deliver value only when used by the producer of the software in a cost-effective way geared towards sustained vulnerability reduction over time. The mandated use of third party static analysis to “validate” or “test” code is unsupportable, for reasons of cost (especially opportunity cost), precedent, increased risk to vendors’ IP and increased security risk to customers. The third party static code analysis market is little more than a subterfuge for enabling the zero-day vulnerability market: bad security, at a high cost, and very bad public policy.


Book of the Month
It’s been so long since I blogged, it’s hard to pick out just a few books to recommend. Here are three, and a "freebie":


Hawaiki Rising: Hōkūle’a, Nainoa Thompson and the Hawaiian Renaissance by Sam Low
Among the most amazing tidbits of history are the vast voyages that the Polynesians made to settle (and travel among) Tahiti, Hawai’i and Aotearoa (New Zealand) using navigational methods largely lost to history. (Magellan – meh – he had a compass and sextant.) This book describes the re-creation of Polynesian wayfinding in Hawai’i in the 1970s via the building of a double-hulled Polynesian voyaging canoe, the Hōkūle’a, and how one amazing Hawaiian (Nainoa Thompson) – under the tutelage of one of the last practitioners of wayfinding (Mau Piailug) – made an amazing voyage from Hawai’i to Tahiti using only his knowledge of the stars, the winds, and the currents. (Aside: one of my favorite songs is “Hōkūle’a Hula,” which describes this voyage, and is so nicely performed by Erik Lee.) Note: the Hōkūle’a is currently on a voyage around the world.


The Korean War by Max Hastings
Max Hastings is one of the few historians who I think is truly balanced: he looks at the moral issues of history, weighs them, and presents a fair analysis – not “shove-it-down-your-throat revisionism.” He also makes use of a lot of first-person accounts, which makes history come alive. The Korean War is in many ways a forgotten war, not least the fact that it literally is a war that never ended. It’s a good lesson of history, as it makes clear that the US drew down its military so rapidly and drastically after World War II that we were largely (I am trying not to say “completely”) unprepared for Korea. (Moral: there is always another war.)


Code Talker by Chester Nez
Many people now know of the crucial role that members of the Navajo Nation played in the Pacific War: the code they created that provided a crucial advantage (and was never broken). This book is a first-person account of the experiences of one Navajo code talker, from his experiences growing up on the reservation to his training as a Marine, and his experiences in the Pacific Theater. Fascinating.


Securing Oracle Database 12c: A Technical Primer
If you are a DBA or security professional looking for more information on Oracle database security, then you will be interested in this book. Written by members of Oracle's engineering team and the President of the International Oracle User Group (IOUG), Michelle Malcher, the book provides a primer on capabilities such as data redaction, privilege analysis and conditional auditing. If you have Oracle databases in your environment, you will want to add this book to your collection of professional information. Register now for the complimentary eBook and learn from the experts.



